CN109189970A - Picture similarity comparison method and device - Google Patents
- Publication number
- CN109189970A (publication number); CN201811096782.8A (application number)
- Authority
- CN
- China
- Prior art keywords
- picture
- picture under test
- target picture
- key information
- similarity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The present disclosure provides a picture similarity comparison method and device, relating to the field of image recognition. The method comprises: extracting the global information of a picture under test and of a target picture, and the key information of the picture under test and of the target picture; comparing the global information of the picture under test with the global information of the target picture to determine a first similarity; if the first similarity is greater than a first threshold, comparing the key information of the picture under test with the key information of the target picture to determine a second similarity; and if the second similarity is greater than a second threshold, determining that the picture under test is similar to the target picture. By comparing key information in addition to overall similarity, the disclosure can judge whether pictures are similar while balancing the precision and recall of picture similarity comparison, thereby improving the accuracy of picture recognition.
Description
Technical field
The present disclosure relates to the field of image recognition, and in particular to a picture similarity comparison method and device.
Background art
Existing picture similarity comparison usually makes an appropriate trade-off between precision and recall. For example, the field of picture search values recall more, while the field of exact picture matching values precision more. In the field of picture similarity comparison, however, there are high requirements for both precision and recall.
Consider, for example, checking whether the cover photo of a book is identical to a book cover photo on the network, where a change of edition may alter the text, redesign the cover, and so on. On the one hand, very fine differences must be recognized, such as changes in the wording; on the other hand, many kinds of noise must be filtered out, such as lighting, reflections, stains, non-standard size design, pasted bands and anti-counterfeiting labels. Popular picture comparison methods can filter out the influence of such noise and judge two copies of the same book as "identical", but they cannot recognize fine textual differences. Deep-learning methods such as Siamese networks, which take the difference of the two pictures as the network input, can accurately recognize textual differences, but find it hard to ignore excessive noise.
Summary of the invention
The technical problem to be solved by the disclosure is to provide a picture similarity comparison method and device that can balance the precision and recall of picture similarity comparison and thereby improve the accuracy of picture recognition.
According to one aspect of the disclosure, a picture similarity comparison method is proposed, comprising: extracting the global information of a picture under test and of a target picture, and the key information of the picture under test and of the target picture; comparing the global information of the picture under test with the global information of the target picture to determine a first similarity; if the first similarity is greater than a first threshold, comparing the key information of the picture under test with the key information of the target picture to determine a second similarity; and if the second similarity is greater than a second threshold, determining that the picture under test is similar to the target picture.
Optionally, extracting the key information of the picture under test and the target picture comprises: determining the key sub-images in the picture under test and the key sub-images in the target picture; and extracting the feature vectors of the key sub-images of the picture under test and the feature vectors of the key sub-images of the target picture. Comparing the key information of the picture under test with the key information of the target picture to determine the second similarity comprises: comparing the feature vector of each key sub-image of the picture under test with the feature vector of the key sub-image of the corresponding region of the target picture to determine the second similarity.
Optionally, determining the key sub-images comprises: determining the regions where key content is located in the picture under test and/or the target picture; and taking the images corresponding to those regions as the key sub-images.
Optionally, the key content is content that can indicate the distinctiveness of the picture.
Optionally, the regions where the key content is located are determined based on an optical character recognition algorithm and/or an object recognition algorithm.
Optionally, extracting the global information of the picture under test and the target picture comprises: extracting the feature vector of the picture under test and the feature vector of the target picture. Comparing the global information of the picture under test with the global information of the target picture to determine the first similarity comprises: comparing the feature vector of the picture under test with the feature vector of the target picture to determine the first similarity.
Optionally, the feature vectors of the key sub-images are extracted based on a first feature-vector extraction model.
Optionally, the method further comprises: annotating the feature vectors of the key sub-images in sample pictures to generate a first annotation file; and training the first feature-vector extraction model based on the key sub-images of the sample pictures and the first annotation file.
Optionally, the feature vectors of the picture under test and the target picture are extracted based on a second feature-vector extraction model.
Optionally, the feature vectors of sample pictures are annotated to generate a second annotation file; and the second feature-vector extraction model is trained based on the sample pictures and the second annotation file.
According to another aspect of the disclosure, a picture similarity comparison device is also proposed, comprising: a global information extraction unit for extracting the global information of a picture under test and of a target picture; a key information extraction unit for extracting the key information of the picture under test and of the target picture; a global information comparison unit for comparing the global information of the picture under test with the global information of the target picture to determine a first similarity; a key information comparison unit for comparing, if the first similarity is greater than a first threshold, the key information of the picture under test with the key information of the target picture to determine a second similarity; and a similarity determination unit for determining, if the second similarity is greater than a second threshold, that the picture under test is similar to the target picture.
Optionally, the key information extraction unit is configured to determine the key sub-images in the picture under test and in the target picture, and to extract the feature vectors of the key sub-images of the picture under test and of the target picture; the key information comparison unit is configured to compare the feature vector of each key sub-image of the picture under test with the feature vector of the key sub-image of the corresponding region of the target picture to determine the second similarity.
Optionally, the key information extraction unit is further configured to determine the regions where key content is located in the picture under test and/or the target picture, and to take the images corresponding to those regions as the key sub-images.
Optionally, the key content is content that can indicate the distinctiveness of the picture.
Optionally, the key information extraction unit is configured to determine the regions where the key content is located based on an optical character recognition algorithm and/or an object recognition algorithm.
Optionally, the global information extraction unit is configured to extract the feature vector of the picture under test and the feature vector of the target picture; the global information comparison unit is configured to compare the feature vector of the picture under test with the feature vector of the target picture to determine the first similarity.
Optionally, the key information extraction unit is further configured to extract the feature vectors of the key sub-images based on a first feature-vector extraction model.
Optionally, the device further comprises: a first feature-vector extraction model training unit for annotating the feature vectors of the key sub-images in sample pictures to generate a first annotation file, and training the first feature-vector extraction model based on the key sub-images of the sample pictures and the first annotation file.
Optionally, the global information extraction unit is further configured to extract the feature vectors of the picture under test and the target picture based on a second feature-vector extraction model.
Optionally, the device further comprises: a second feature-vector extraction model training unit for annotating the feature vectors of sample pictures to generate a second annotation file, and training the second feature-vector extraction model based on the sample pictures and the second annotation file.
According to another aspect of the disclosure, a picture similarity comparison device is also proposed, comprising: a memory; and a processor coupled to the memory, the processor being configured to execute, based on instructions stored in the memory, the picture similarity comparison method described above.
According to another aspect of the disclosure, a computer-readable storage medium is also proposed, on which computer program instructions are stored, the instructions implementing the steps of the picture similarity comparison method described above when executed by a processor.
Compared with the prior art, embodiments of the disclosure can judge whether pictures are similar by comparing key information in addition to overall similarity, balancing the precision and recall of picture similarity comparison and thereby improving the accuracy of picture recognition.
Other features of the disclosure and their advantages will become apparent from the following detailed description of exemplary embodiments of the disclosure with reference to the accompanying drawings.
Brief description of the drawings
The accompanying drawings, which constitute part of the specification, illustrate embodiments of the disclosure and, together with the description, serve to explain the principles of the disclosure.
The disclosure can be understood more clearly from the following detailed description with reference to the accompanying drawings, in which:
Fig. 1 is a schematic flow diagram of one embodiment of the picture similarity comparison method of the disclosure.
Fig. 2 is a schematic flow diagram of another embodiment of the picture similarity comparison method of the disclosure.
Fig. 3 is a schematic flow diagram of a further embodiment of the picture similarity comparison method of the disclosure.
Fig. 4 is a schematic structural diagram of one embodiment of the picture similarity comparison device of the disclosure.
Fig. 5 is a schematic structural diagram of another embodiment of the picture similarity comparison device of the disclosure.
Fig. 6 is a schematic structural diagram of a further embodiment of the picture similarity comparison device of the disclosure.
Fig. 7 is a schematic structural diagram of yet another embodiment of the picture similarity comparison device of the disclosure.
Detailed description of the embodiments
Various exemplary embodiments of the disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that, unless otherwise specified, the relative arrangement of components and steps, the numerical expressions and the numerical values set forth in these embodiments do not limit the scope of the disclosure.
At the same time, it should be understood that, for ease of description, the sizes of the various parts shown in the drawings are not drawn according to actual proportional relationships.
The following description of at least one exemplary embodiment is merely illustrative and in no way serves as any limitation on the disclosure or its application or uses.
Techniques, methods and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail, but, where appropriate, such techniques, methods and apparatus should be regarded as part of the specification.
In all examples shown and discussed here, any specific value should be interpreted as merely illustrative rather than limiting. Other examples of the exemplary embodiments may therefore have different values.
It should also be noted that similar reference numbers and letters denote similar items in the following drawings; once an item is defined in one drawing, it therefore need not be discussed further in subsequent drawings.
To make the purposes, technical solutions and advantages of the disclosure clearer, the disclosure is further described below in conjunction with specific embodiments and with reference to the accompanying drawings.
Fig. 1 is a schematic flow diagram of one embodiment of the picture similarity comparison method of the disclosure.
In step 110, the global information of a picture under test and of a target picture, and the key information of the picture under test and of the target picture, are extracted. The global information of a picture is, for example, coarse, whole-picture information such as its color and size. The key information of a picture is, for example, detailed information such as local text versions or trademarks.
In step 120, the global information of the picture under test is compared with the global information of the target picture to determine a first similarity. This step is a coarse-grained comparison: the distance threshold of the comparison is relaxed, small differences are ignored, and only large-area differences between the pictures are considered. When the whole picture is compared at coarse granularity, the focus is mainly on recall.
In step 130, if the first similarity is greater than a first threshold, the key information of the picture under test is compared with the key information of the target picture to determine a second similarity. If the first similarity is less than or equal to the first threshold, there is a large difference between the pictures, i.e. the picture under test and the target picture are dissimilar. If the first similarity is greater than the first threshold, the key information that indicates the distinctiveness of the pictures needs to be compared, i.e. a fine-grained comparison of the pictures is performed. The fine-grained comparison focuses mainly on precision.
In step 140, if the second similarity is greater than a second threshold, it is determined that the picture under test is similar to the target picture. That is, if the similarity of the pictures is still greater than the threshold even under the fine-grained comparison, the two pictures are judged to be identical.
In this embodiment, whether pictures are similar can be judged by comparing key information in addition to overall similarity, balancing the precision and recall of picture similarity comparison and thereby improving the accuracy of picture recognition.
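The two-threshold cascade of steps 110-140 can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: cosine similarity stands in for whatever distance the trained models would induce, feature vectors are plain lists of floats, and the threshold values are arbitrary.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def compare_pictures(global_a, global_b, key_a, key_b,
                     first_threshold=0.8, second_threshold=0.9):
    """Two-stage cascade: coarse global comparison first, then a
    fine-grained comparison of key-information vectors only if the
    first stage passes. Returns True if the pictures are judged similar."""
    first_similarity = cosine_similarity(global_a, global_b)
    if first_similarity <= first_threshold:
        return False  # large global difference: dissimilar
    second_similarity = cosine_similarity(key_a, key_b)
    return second_similarity > second_threshold
```

Note that the second stage never runs when the first fails, which is what lets the coarse stage keep recall high while the fine stage protects precision.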
Fig. 2 is a schematic flow diagram of another embodiment of the picture similarity comparison method of the disclosure.
In step 210, the feature vector of the picture under test and the feature vector of the target picture are extracted.
In one embodiment, the feature vectors of the picture under test and the target picture can be extracted based on a second feature-vector extraction model. The feature vectors of sample pictures can first be annotated to generate a second annotation file, and the second feature-vector extraction model can then be trained based on the sample pictures and the second annotation file. After the second feature-vector extraction model has been trained, the picture under test and the target picture are input into the model separately to obtain the feature vector of the picture under test and the feature vector of the target picture.
In one embodiment, the second feature-vector extraction model uses deep-learning CNN (Convolutional Neural Network) techniques.
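In practice the feature vector would come from a trained CNN, as described above. Purely to illustrate the shape of the interface (picture in, fixed-size feature vector out), the following stand-in pools a grayscale image, represented as a nested list of pixel intensities, into a block-mean vector; the grid size is an arbitrary choice and nothing about this pooling is part of the patented method.

```python
def extract_feature_vector(image, grid=2):
    """Toy global-feature extractor: average-pool a 2-D grayscale image
    into a grid x grid vector of block means. A real system would use a
    trained CNN here; only the interface shape is illustrative."""
    h, w = len(image), len(image[0])
    bh, bw = h // grid, w // grid
    features = []
    for gy in range(grid):
        for gx in range(grid):
            block = [image[y][x]
                     for y in range(gy * bh, (gy + 1) * bh)
                     for x in range(gx * bw, (gx + 1) * bw)]
            features.append(sum(block) / len(block))
    return features
```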
In step 220, the key sub-images in the picture under test and the key sub-images in the target picture are determined. For example, the regions where key content is located in the picture under test and the target picture are first determined, and the images corresponding to those regions are taken as key sub-images. The key content is content that can indicate the distinctiveness of a picture. For example, for books, if the patterns of two book pictures are basically the same but the text reads "first edition" on one and "second edition" on the other, the two books are not the same book, and "first edition" and "second edition" are the key content.
In one embodiment, the regions where the key content is located can be determined based on an optical character recognition algorithm or an object recognition algorithm. For example, text regions can be detected with an optical character recognition algorithm, and trademarks in pictures can be detected with an object recognition algorithm. The key content is what inherently distinguishes similar pictures.
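Assuming an OCR or object detector has already returned bounding boxes for the key content, cutting out the key sub-images might look like the sketch below. The (x, y, w, h) box format is an assumption for illustration; the patent does not prescribe one.

```python
def crop_key_subimages(image, boxes):
    """Cut out one key sub-image per detected key-content bounding box.
    `image` is a 2-D list of pixels; each box is (x, y, w, h) as an OCR
    or object-recognition step might report it (format assumed here)."""
    subimages = []
    for x, y, w, h in boxes:
        subimages.append([row[x:x + w] for row in image[y:y + h]])
    return subimages
```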
In step 230, the feature vectors of the key sub-images of the picture under test and the feature vectors of the key sub-images of the target picture are extracted.
In one embodiment, the feature vectors of the key sub-images can be extracted based on a first feature-vector extraction model. The feature vectors of the key sub-images in sample pictures are first annotated to generate a first annotation file, and the first feature-vector extraction model is trained based on the key sub-images of the sample pictures and the first annotation file. After the first feature-vector extraction model has been trained, the key sub-images in the picture under test and in the target picture are input into it separately to obtain the feature vectors of the key sub-images of the picture under test and of the target picture.
In one embodiment, the first feature-vector extraction model also uses deep-learning CNN techniques. The first and second feature-vector extraction models can use networks of the same structure and obtain different recognition capabilities by being trained on the corresponding pictures. Comparing the feature vectors output by the second feature-vector extraction model can reveal large image differences while ignoring small differences of detail, whereas comparing the feature vectors output by the first feature-vector extraction model can reveal subtle image differences.
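The idea that the same architecture, trained on different data, yields different comparison behavior can be illustrated with a deliberately trivial stand-in (not the CNNs of the embodiment): two instances of one model class whose only "learned" parameter is a mean vector, so identical inputs produce different features depending on what each instance was trained on.

```python
class MeanFeatureModel:
    """Toy stand-in for a feature-vector extraction model: identical
    'architecture' (subtract a learned mean), different behavior
    depending on the training pictures it saw. The real models would
    be same-structure CNNs trained on different annotation files."""

    def __init__(self):
        self.mean = None

    def train(self, samples):
        # Learn the per-dimension mean of the training vectors.
        n, dim = len(samples), len(samples[0])
        self.mean = [sum(s[i] for s in samples) / n for i in range(dim)]

    def extract(self, vector):
        # Feature = input centered by the learned mean.
        return [v - m for v, m in zip(vector, self.mean)]
```

Two instances trained on different sample sets then map the same input to different feature vectors, mirroring how the first and second models specialize to fine and coarse differences respectively.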
In step 240, the feature vector of the picture under test is compared with the feature vector of the target picture to determine the first similarity.
In step 250, it is judged whether the first similarity is greater than the first threshold; if so, step 260 is executed, otherwise step 290 is executed.
In step 260, the feature vector of each key sub-image of the picture under test is compared with the feature vector of the key sub-image of the corresponding region of the target picture to determine the second similarity.
There can be multiple key sub-images; when comparing, the key sub-images of the same region of the two pictures should be compared. For example, if the pictures each contain a key sub-image in the upper-left corner and a key sub-image in the lower-right corner, the feature vector of the upper-left key sub-image of the picture under test can first be compared with the feature vector of the upper-left key sub-image of the target picture, and then the feature vector of the lower-right key sub-image of the picture under test can be compared with the feature vector of the lower-right key sub-image of the target picture.
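The region-wise comparison of step 260 can be sketched as follows. Two details here are illustrative assumptions, not specified by the source: sub-images are keyed by a region label, and the per-region similarities are aggregated by taking the minimum, so that any single differing region (such as one changed word) is enough to fail the comparison.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def second_similarity(key_vectors_a, key_vectors_b):
    """Compare key sub-image feature vectors region by region.
    Inputs are dicts mapping a region label (e.g. 'upper-left') to a
    feature vector; only regions present in both pictures are compared.
    Min-aggregation (an assumption) makes one differing region decisive."""
    shared = key_vectors_a.keys() & key_vectors_b.keys()
    if not shared:
        return 0.0
    return min(cosine_similarity(key_vectors_a[r], key_vectors_b[r])
               for r in shared)
```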
In step 270, it is judged whether the second similarity is greater than the second threshold; if so, step 280 is executed, otherwise step 290 is executed. The second threshold may be the same as or different from the first threshold.
In step 280, it is determined that the picture under test is identical to the target picture.
In step 290, it is determined that the picture under test is not identical to the target picture.
In this embodiment, pictures are compared in both dimensions, global information and key information, taking into account both coarse-grained and fine-grained differences, so as to improve the precision and recall of picture recognition and thereby its accuracy.
Fig. 3 is a schematic flow diagram of a further embodiment of the picture similarity comparison method of the disclosure. This embodiment is described taking book covers as an example.
In step 310, book cover 1 and book cover 2 are pre-processed.
In step 320, the feature vectors of book cover 1 and book cover 2 are extracted using the second feature-vector extraction model.
In step 330, the text regions that can indicate the distinctiveness of book cover 1 and book cover 2 are obtained. In this embodiment, text information is taken as the most important and essential basis for judging whether two books are the same book: for example, "volume one" and "volume two" indicate that the two are not the same book, and "first edition" and "second edition" likewise indicate that the two are not the same book, even if the patterns of the pictures are almost identical. The text regions can, for example, be delineated with an optical character recognition algorithm and cut out into individual sub-images.
In one embodiment, trademarks can also be used as key information; for example, the trademark on a book cover can be recognized with an object recognition algorithm.
In step 340, the text regions are aligned. For example, the pixels of the text regions in book cover 1 and book cover 2 are translated so that the corresponding text regions in the two covers are aligned. For instance, to compare "first edition" with "second edition", it is judged whether their shared characters (e.g. "edition") are aligned; if not, "edition" in book cover 1 is aligned with "edition" in book cover 2, at which point the region of "first" in book cover 1 and the region of "second" in book cover 2 are also aligned.
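The alignment of step 340 can be sketched as a pure translation: given the bounding box of a shared anchor character in each cover, shift all of the second cover's text-region boxes by the offset between the two anchors. This is only an illustration of the translation idea; the (x, y, w, h) box format and anchoring on a single shared character are assumptions.

```python
def align_boxes(anchor_a, anchor_b, boxes_b):
    """Translate the text-region boxes of cover 2 so that its anchor
    character (e.g. a shared word like 'edition') lines up with the
    same anchor in cover 1. Boxes are (x, y, w, h); the offset is the
    difference between the two anchors' top-left corners."""
    dx = anchor_a[0] - anchor_b[0]
    dy = anchor_a[1] - anchor_b[1]
    return [(x + dx, y + dy, w, h) for x, y, w, h in boxes_b]
```

After this shift, corresponding regions (such as "first" versus "second") occupy the same coordinates in both covers and can be compared directly.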
In step 350, the feature vectors of the text regions in book cover 1 and book cover 2 are extracted using the first feature-vector extraction model. Step 350 can be executed after step 320.
In step 360, it is judged whether the feature vectors of book cover 1 and book cover 2 are similar; if so, step 370 is executed, otherwise step 390 is executed.
In step 370, it is judged whether the feature vectors of the text regions in book cover 1 and book cover 2 are similar; if so, step 380 is executed, otherwise step 390 is executed.
In step 380, book cover 1 and book cover 2 are determined to be identical.
In step 390, book cover 1 and book cover 2 are determined to be not identical.
In this embodiment, whether pictures are similar can be judged from the information of core, key regions in addition to overall similarity. Differences caused by various kinds of noise can be excluded intelligently, while textual changes, even a change of a single word, are recognized accurately.
Fig. 4 is a schematic structural diagram of one embodiment of the picture similarity comparison device of the disclosure. The device includes a global information extraction unit 410, a key information extraction unit 420, a global information comparison unit 430, a key information comparison unit 440 and a similarity determination unit 450.
The global information extraction unit 410 is used to extract the global information of the picture under test and the target picture.
The key information extraction unit 420 is used to extract the key information of the picture under test and the target picture.
The global information comparison unit 430 is used to compare the global information of the picture under test with the global information of the target picture to determine a first similarity.
The key information comparison unit 440 is used to compare, if the first similarity is greater than a first threshold, the key information of the picture under test with the key information of the target picture to determine a second similarity.
The similarity determination unit 450 is used to determine, if the second similarity is greater than a second threshold, that the picture under test is similar to the target picture.
In this embodiment, whether pictures are similar can be judged by comparing key information in addition to overall similarity, balancing the precision and recall of picture similarity comparison and thereby improving the accuracy of picture recognition.
In another embodiment of the disclosure, the global information extraction unit 410 is used to extract the feature vector of the picture under test and the feature vector of the target picture; for example, the feature vectors of the picture under test and the target picture can be extracted based on the second feature-vector extraction model.
The key information extraction unit 420 is used to determine the key sub-images in the picture under test and in the target picture, and to extract the feature vectors of the key sub-images of the picture under test and of the target picture. For example, the regions where key content is located in the picture under test and the target picture are determined, the images corresponding to those regions are taken as key sub-images, and the feature vectors of the key sub-images are extracted based on the first feature-vector extraction model. In one embodiment, the regions where the key content is located can be determined based on an optical character recognition algorithm or an object recognition algorithm.
The global information comparison unit 430 is used to compare the feature vector of the picture under test with the feature vector of the target picture to determine the first similarity.
The key information comparison unit 440 is used to compare, if the first similarity is greater than the first threshold, the feature vector of each key sub-image of the picture under test with the feature vector of the key sub-image of the corresponding region of the target picture to determine the second similarity.
The similarity determination unit 450 is used to determine, if the second similarity is greater than the second threshold, that the picture under test is similar to the target picture. If the second similarity is less than or equal to the second threshold, or the first similarity is less than or equal to the first threshold, the picture under test and the target picture are determined to be dissimilar.
In this embodiment, on the one hand, a coarse-grained comparison of the whole picture identifies the larger differences between the pictures and focuses mainly on improving recall; on the other hand, for practical applications, the key content that indicates the distinctiveness of the pictures is found, its position is detected, and a fine-grained comparison is performed, focusing mainly on precision. Comparison in these two dimensions improves the accuracy of picture recognition.
Fig. 5 is a schematic structural diagram of another embodiment of the picture similarity comparison device of the disclosure. The device further includes a first feature-vector extraction model training unit 510 and a second feature-vector extraction model training unit 520.
The first feature-vector extraction model training unit 510 is used to annotate the feature vectors of the key sub-images in sample pictures to generate a first annotation file, and to train the first feature-vector extraction model based on the key sub-images of the sample pictures and the first annotation file. After the first feature-vector extraction model has been trained, the key information extraction unit 420 inputs the key sub-images in the picture under test and in the target picture into it separately to obtain the feature vectors of the key sub-images of the picture under test and of the target picture.
In one embodiment, the first feature-vector extraction model uses deep-learning CNN techniques.
The second feature-vector extraction model training unit 520 is used to annotate the feature vectors of sample pictures to generate a second annotation file, and to train the second feature-vector extraction model based on the sample pictures and the second annotation file. After the second feature-vector extraction model has been trained, the global information extraction unit 410 inputs the picture under test and the target picture into it separately to obtain the feature vector of the picture under test and the feature vector of the target picture.
In one embodiment, the second feature-vector extraction model uses deep-learning CNN techniques.
The first and second feature-vector extraction models can use networks of the same structure and obtain different recognition capabilities by being trained on the corresponding pictures. Comparing the feature vectors output by the second feature-vector extraction model can reveal large image differences while ignoring small differences of detail, whereas comparing the feature vectors output by the first feature-vector extraction model can reveal subtle image differences.
Fig. 6 is a schematic structural diagram of a further embodiment of the picture similarity comparison device of the disclosure. The device includes a memory 610 and a processor 620. The memory 610 can be a disk, flash memory or any other non-volatile storage medium, and is used to store the instructions of the embodiments corresponding to Figs. 1-3. The processor 620 is coupled to the memory 610 and can be implemented as one or more integrated circuits, for example a microprocessor or microcontroller; the processor 620 is used to execute the instructions stored in the memory.
In one embodiment, as shown in Fig. 7, the device 700 includes a memory 710 and a processor 720. The processor 720 is coupled to the memory 710 via a bus 730. The device 700 can also be connected to an external storage device 750 via a storage interface 740 in order to call external data, and can also be connected to a network or another computer system (not shown) via a network interface 760; this is not described in detail here.
In this embodiment, data instructions are stored by the memory and then processed by the processor, so that, while taking overall similarity into account, whether pictures are similar pictures can be judged by comparing the keynote message, thereby balancing the accuracy rate and recall rate of picture similarity comparison and improving the accuracy of picture recognition.
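The cascaded decision this embodiment describes can be sketched as below. The extractor callables, the default thresholds and the stub values are hypothetical placeholders standing in for the two trained models, not part of the disclosed implementation.

```python
from typing import Callable, Any

def is_similar(pic_test: Any, pic_target: Any,
               global_sim: Callable[[Any, Any], float],
               keynote_sim: Callable[[Any, Any], float],
               first_threshold: float = 0.8,
               second_threshold: float = 0.9) -> bool:
    """Two-stage comparison: a coarse global-information check first,
    then the stricter keynote (emphasis-region) check only if it passes."""
    if global_sim(pic_test, pic_target) <= first_threshold:
        return False  # globally dissimilar: reject without the fine check
    return keynote_sim(pic_test, pic_target) > second_threshold

# Usage with stub similarity functions standing in for the two models.
same_global = lambda a, b: 0.95
same_keynote = lambda a, b: 0.97
print(is_similar(None, None, same_global, same_keynote))  # -> True
```

The coarse check filters out clearly different pictures cheaply, which is what lets the method keep recall (global pass) while the keynote check restores precision.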
In another embodiment, a computer readable storage medium has computer program instructions stored thereon, and the instructions, when executed by a processor, implement the steps of the methods in the embodiments corresponding to Figs. 1-3. It should be understood by those skilled in the art that embodiments of the disclosure can be provided as a method, a device, or a computer program product. Therefore, the disclosure can take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the disclosure can take the form of a computer program product implemented on one or more computer-usable non-transient storage media (including but not limited to disk storage, CD-ROM, optical memory, etc.) containing computer-usable program code.
The disclosure is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the disclosure. It should be understood that each process and/or box in the flowcharts and/or block diagrams, and combinations of processes and/or boxes in the flowcharts and/or block diagrams, can be realized by computer program instructions. These computer program instructions can be provided to a processor of a general purpose computer, a special purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more boxes of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of guiding a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce a manufacture including an instruction device, and the instruction device realizes the functions specified in one or more flows of the flowcharts and/or one or more boxes of the block diagrams.
These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more boxes of the block diagrams.
So far, the disclosure has been described in detail. In order to avoid obscuring the concept of the disclosure, some details known in the art are not described. From the above description, those skilled in the art can fully appreciate how to implement the technical solutions disclosed herein.
Although some specific embodiments of the disclosure have been described in detail by way of example, those skilled in the art should understand that the above examples are for illustration only and are not intended to limit the scope of the disclosure. Those skilled in the art should understand that the above embodiments can be modified without departing from the scope and spirit of the disclosure. The scope of the disclosure is defined by the following claims.
Claims (22)
1. A picture similarity comparison method, comprising:
extracting global information of a picture to be measured and a Target Photo, and keynote message of the picture to be measured and the Target Photo;
comparing the global information of the picture to be measured with the global information of the Target Photo to determine a first similarity;
if the first similarity is greater than a first threshold, comparing the keynote message of the picture to be measured with the keynote message of the Target Photo to determine a second similarity;
if the second similarity is greater than a second threshold, determining that the picture to be measured is similar to the Target Photo.
2. The picture similarity comparison method according to claim 1, wherein
extracting the keynote message of the picture to be measured and the Target Photo comprises:
determining an emphasis subgraph in the picture to be measured and an emphasis subgraph in the Target Photo;
extracting a feature vector of the emphasis subgraph of the picture to be measured and a feature vector of the emphasis subgraph of the Target Photo;
comparing the keynote message of the picture to be measured with the keynote message of the Target Photo to determine the second similarity comprises:
comparing the feature vector of the emphasis subgraph of the picture to be measured with the feature vector of the emphasis subgraph of the corresponding region of the Target Photo to determine the second similarity.
3. The picture similarity comparison method according to claim 2, wherein determining the emphasis subgraph comprises:
determining a region where key content is located in the picture to be measured and/or the Target Photo;
taking the image corresponding to the key content region as the emphasis subgraph.
4. The picture similarity comparison method according to claim 3, wherein
the key content is content that can characterize the distinctiveness of the picture.
5. The picture similarity comparison method according to claim 3, wherein
the key content region is determined based on an optical character recognition algorithm and/or an object recognition algorithm.
6. The picture similarity comparison method according to claim 2, wherein
extracting the global information of the picture to be measured and the Target Photo comprises:
extracting a feature vector of the picture to be measured and a feature vector of the Target Photo;
comparing the global information of the picture to be measured with the global information of the Target Photo to determine the first similarity comprises:
comparing the feature vector of the picture to be measured with the feature vector of the Target Photo to determine the first similarity.
7. The picture similarity comparison method according to any one of claims 2-5, wherein
the feature vector of the emphasis subgraph is extracted based on a first feature vector extraction model.
8. The picture similarity comparison method according to claim 7, further comprising:
annotating the feature vectors of the emphasis subgraphs in sample pictures to generate a first annotation file;
training the first feature vector extraction model based on the emphasis subgraphs of the sample pictures and the first annotation file.
9. The picture similarity comparison method according to claim 6, wherein
the feature vectors of the picture to be measured and the Target Photo are extracted based on a second feature vector extraction model.
10. The picture similarity comparison method according to claim 9, further comprising:
annotating the feature vectors of sample pictures to generate a second annotation file;
training the second feature vector extraction model based on the sample pictures and the second annotation file.
11. A picture similarity comparison device, comprising:
a global information extraction unit for extracting global information of a picture to be measured and a Target Photo;
a keynote message extraction unit for extracting keynote message of the picture to be measured and the Target Photo;
a global information comparing unit for comparing the global information of the picture to be measured with the global information of the Target Photo to determine a first similarity;
a keynote message comparing unit for, if the first similarity is greater than a first threshold, comparing the keynote message of the picture to be measured with the keynote message of the Target Photo to determine a second similarity;
a similarity determining unit for, if the second similarity is greater than a second threshold, determining that the picture to be measured is similar to the Target Photo.
12. The picture similarity comparison device according to claim 11, wherein
the keynote message extraction unit is configured to determine the emphasis subgraph in the picture to be measured and the emphasis subgraph in the Target Photo, and to extract the feature vector of the emphasis subgraph of the picture to be measured and the feature vector of the emphasis subgraph of the Target Photo;
the keynote message comparing unit is configured to compare the feature vector of the emphasis subgraph of the picture to be measured with the feature vector of the emphasis subgraph of the corresponding region of the Target Photo to determine the second similarity.
13. The picture similarity comparison device according to claim 12, wherein
the keynote message extraction unit is further configured to determine a region where key content is located in the picture to be measured and/or the Target Photo, and to take the image corresponding to the key content region as the emphasis subgraph.
14. The picture similarity comparison device according to claim 13, wherein
the key content is content that can characterize the distinctiveness of the picture.
15. The picture similarity comparison device according to claim 13, wherein
the keynote message extraction unit is configured to determine the key content region based on an optical character recognition algorithm and/or an object recognition algorithm.
16. The picture similarity comparison device according to claim 12, wherein
the global information extraction unit is configured to extract the feature vector of the picture to be measured and the feature vector of the Target Photo;
the global information comparing unit is configured to compare the feature vector of the picture to be measured with the feature vector of the Target Photo to determine the first similarity.
17. The picture similarity comparison device according to any one of claims 12-15, wherein
the keynote message extraction unit is further configured to extract the feature vector of the emphasis subgraph based on a first feature vector extraction model.
18. The picture similarity comparison device according to claim 17, further comprising:
a first feature vector extraction model training unit, configured to annotate the feature vectors of the emphasis subgraphs in sample pictures to generate a first annotation file, and to train the first feature vector extraction model based on the emphasis subgraphs of the sample pictures and the first annotation file.
19. The picture similarity comparison device according to claim 16, wherein
the global information extraction unit is further configured to extract the feature vectors of the picture to be measured and the Target Photo based on a second feature vector extraction model.
20. The picture similarity comparison device according to claim 19, further comprising:
a second feature vector extraction model training unit, configured to annotate the feature vectors of sample pictures to generate a second annotation file, and to train the second feature vector extraction model based on the sample pictures and the second annotation file.
21. A picture similarity comparison device, comprising:
a memory; and
a processor coupled to the memory, the processor configured to execute, based on instructions stored in the memory, the picture similarity comparison method according to any one of claims 1 to 10.
22. A computer readable storage medium having computer program instructions stored thereon, wherein the instructions, when executed by a processor, implement the steps of the picture similarity comparison method according to any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811096782.8A CN109189970A (en) | 2018-09-20 | 2018-09-20 | Picture similarity comparison method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109189970A true CN109189970A (en) | 2019-01-11 |
Family
ID=64908802
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811096782.8A Pending CN109189970A (en) | 2018-09-20 | 2018-09-20 | Picture similarity comparison method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109189970A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100191722A1 (en) * | 2006-08-07 | 2010-07-29 | Oren Boiman | Data similarity and importance using local and global evidence scores |
CN106933867A (en) * | 2015-12-30 | 2017-07-07 | 杭州华为企业通信技术有限公司 | An image query method and device |
CN107239565A (en) * | 2017-06-14 | 2017-10-10 | 电子科技大学 | An image retrieval method based on salient regions |
CN107330359A (en) * | 2017-05-23 | 2017-11-07 | 深圳市深网视界科技有限公司 | A face comparison method and apparatus |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110033023A (en) * | 2019-03-11 | 2019-07-19 | 北京光年无限科技有限公司 | An image processing method and system based on picture book recognition |
CN110033023B (en) * | 2019-03-11 | 2021-06-15 | 北京光年无限科技有限公司 | Image data processing method and system based on picture book recognition |
CN110533057A (en) * | 2019-04-29 | 2019-12-03 | 浙江科技学院 | A Chinese character verification code recognition method under single-sample and few-sample scenarios |
CN110533057B (en) * | 2019-04-29 | 2022-08-12 | 浙江科技学院 | Chinese character verification code identification method under single-sample and few-sample scene |
CN110598019A (en) * | 2019-09-11 | 2019-12-20 | 腾讯科技(深圳)有限公司 | Repeated image identification method and device |
CN110781917A (en) * | 2019-09-18 | 2020-02-11 | 北京三快在线科技有限公司 | Method and device for detecting repeated image, electronic equipment and readable storage medium |
CN110781917B (en) * | 2019-09-18 | 2021-03-02 | 北京三快在线科技有限公司 | Method and device for detecting repeated image, electronic equipment and readable storage medium |
CN110737417B (en) * | 2019-09-30 | 2024-01-23 | 深圳市格上视点科技有限公司 | Demonstration equipment and display control method and device of marking line of demonstration equipment |
CN110737417A (en) * | 2019-09-30 | 2020-01-31 | 深圳市格上视点科技有限公司 | demonstration equipment and display control method and device of marking line thereof |
CN112733578A (en) * | 2019-10-28 | 2021-04-30 | 普天信息技术有限公司 | Vehicle weight identification method and system |
CN112733578B (en) * | 2019-10-28 | 2024-05-24 | 普天信息技术有限公司 | Vehicle re-identification method and system |
CN112434727A (en) * | 2020-05-02 | 2021-03-02 | 支付宝实验室(新加坡)有限公司 | Identity document authentication method and system |
WO2021244138A1 (en) * | 2020-06-04 | 2021-12-09 | Oppo广东移动通信有限公司 | Dial generation method and apparatus, electronic device and computer-readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109189970A (en) | Picture similarity comparison method and device | |
US11610394B2 (en) | Neural network model training method and apparatus, living body detecting method and apparatus, device and storage medium | |
CN106874909B (en) | An image character recognition method and device | |
KR102128533B1 (en) | Keypoint managing apparatus for enhancing product recognition using machine learning | |
US9910847B2 (en) | Language identification | |
CN105956059A (en) | Emotion recognition-based information recommendation method and apparatus | |
KR20140010164A (en) | System and method for recognizing text information in object | |
CN110136198A (en) | Image processing method and its device, equipment and storage medium | |
CN113469067B (en) | Document analysis method, device, computer equipment and storage medium | |
CN111242124A (en) | Certificate classification method, device and equipment | |
CN112200031A (en) | Network model training method and equipment for generating image corresponding word description | |
Silanon | Thai Finger‐Spelling Recognition Using a Cascaded Classifier Based on Histogram of Orientation Gradient Features | |
CN110363190A (en) | A character recognition method, device and equipment | |
CN110059542A (en) | A face liveness detection method based on an improved ResNet, and related device | |
JP7141518B2 (en) | Finger vein matching method, device, computer equipment, and storage medium | |
KR20190115509A (en) | Automatic Sign Language Recognition Method and System | |
Singh et al. | Face recognition using open source computer vision library (OpenCV) with Python | |
CN105740879B (en) | A zero-shot image classification method based on multi-modal discriminant analysis | |
Sanalohit et al. | Thai finger spelling recognition: Investigating MediaPipe Hands potentials | |
CN114282258A (en) | Screen capture data desensitization method and device, computer equipment and storage medium | |
Pasumarthy et al. | An Indian currency recognition model for assisting visually impaired individuals | |
CN103984415B (en) | A kind of information processing method and electronic equipment | |
Vidhyalakshmi et al. | Text detection in natural images with hybrid stroke feature transform and high performance deep Convnet computing | |
Zhang et al. | Face occlusion detection using cascaded convolutional neural network | |
Vishwanath et al. | Deep reader: Information extraction from document images via relation extraction and natural language |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||