CN106897722A - A kind of trademark image retrieval method based on region shape feature - Google Patents

A kind of trademark image retrieval method based on region shape feature

Info

Publication number
CN106897722A
CN106897722A (application number CN201510961229.6A)
Authority
CN
China
Prior art keywords
image
density
rotation
feature
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510961229.6A
Other languages
Chinese (zh)
Inventor
王斌
曾范清
郑雪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Finance and Economics
Original Assignee
Nanjing University of Finance and Economics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Finance and Economics filed Critical Nanjing University of Finance and Economics
Priority to CN201510961229.6A priority Critical patent/CN106897722A/en
Publication of CN106897722A publication Critical patent/CN106897722A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/48Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a trademark image retrieval method based on region shape features, termed Rotation Hierarchical Density (RHD), and belongs to the field of digital image processing. When expressing and describing the region shape features of a binary image, the retrieval method of the invention partitions the region shape hierarchically with a rotation-based layering scheme, replacing the feature extraction method used in the prior art with a rotation-layering feature extraction method. It extracts both the pixel features and the area features of the image region shape and combines them with weight coefficients chosen according to the contribution each feature makes to the image description, yielding a combined region feature description algorithm. The algorithm needs only simple multiplications and additions, so the feature extraction process is computationally simple, occupies little storage space, and is easy to implement in software. The invention effectively extracts the rotational features of the image region and captures how the image pixels are distributed toward the border of the region, matching the visual characteristics of human observation, and the resulting feature is invariant to rotation, scaling and translation.

Description

A kind of trademark image retrieval method based on region shape feature
Technical field
The present invention relates to a trademark image retrieval method based on region shape features, and belongs to the field of digital image processing.
Background technology
Because the present invention extracts the shape features of binary digital images, the RGB color images collected in practice must all be converted into binary digital images by image preprocessing before image features are extracted (i.e. black-and-white images in which shape-region pixels have value 1 and background pixels have value 0).
The recognition and retrieval of images generally includes the following steps (a minimal code sketch of this generic pipeline is given after the list):
Feature extraction process:
1. Input the training images;
2. Preprocess each input training image and convert it into a binary image;
3. Extract the region shape feature of each binary image one by one with the feature extraction algorithm;
4. Apply any necessary processing to the extracted region shape features and store them.
Image retrieval and recognition process:
1. Input the query image, preprocess it, and convert it into a binary image;
2. Extract the region shape feature of the binarized query image and process it accordingly;
3. Compute the distance between the feature vector of the query image and those of the training images according to the similarity measurement criterion;
4. Sort all of the computed distances and return the retrieval result for the query image.
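The pipeline above is generic; purely as an illustration, the Python sketch below wires the two phases together. The helper extract_region_feature is a placeholder for whichever descriptor is plugged in (AHDH, RHD, ...), and the threshold-based binarization is an assumption made for the example, not part of the patent.

```python
import numpy as np

def to_binary(image, threshold=128):
    """Convert a grayscale/RGB image into a binary shape mask (shape = 1, background = 0)."""
    if image.ndim == 3:                                # collapse RGB to grayscale
        image = image.mean(axis=2)
    return (image < threshold).astype(np.uint8)        # assumes a dark shape on a light background

def build_database(train_images, extract_region_feature):
    """Feature-extraction phase: binarize every training image and store its descriptor."""
    return [extract_region_feature(to_binary(img)) for img in train_images]

def retrieve(query_image, database, extract_region_feature, top_k=10):
    """Retrieval phase: rank training images by Euclidean distance to the query descriptor."""
    q = extract_region_feature(to_binary(query_image))
    dists = [np.linalg.norm(q - f) for f in database]
    return np.argsort(dists)[:top_k]                   # indices of the most similar images
```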
The most commonly used image region shape description methods currently fall into two broad classes: boundary-based methods and region-based methods. Boundary-based shape description uses the boundary characteristics of the image, such as contour-line features; it usually requires the contour of the measured region to be closed and the region to be connected, and it extracts local features of the region shape. Region-based shape description uses the characteristics of every pixel in the shape region; it considers not only the boundary pixels of the region shape but also its internal structure, and extracts global features of the region shape.
When measuring region shape features, these methods mainly extract the features of certain points of interest and construct a feature vector from the extracted points of interest for image retrieval. Depending on whether the image is subdivided into smaller regions, region-based description methods can be divided into global feature description methods and structural feature description methods. Table 1 lists some commonly used region shape description methods and their classification.
Table 1. Some classical region shape description methods and their classification
Some of the common classical region shape description methods listed above are analyzed in detail below:
(1) Geometric moment invariants (Hu moments). Based on the gray-level distribution function of the image, the zeroth-order moment, higher-order moments and central moments are defined, and seven moment invariants with translation, rotation and scale invariance are derived from the second- and third-order central moments. Feature vectors composed of Hu moments can recognize images quickly, but the discriminative power of Hu moment features is low. The high-order Hu moments are sensitive to noise, so in general only the low-order moments are used, and the low-order Hu moments cannot describe image detail accurately, which makes the description of the image incomplete. Hu moment invariants therefore cannot describe complicated image texture; they are generally used to describe the overall shape of an image and give comparatively good recognition results for large objects in an image (for example fruit shapes or the simple characters on a license plate).
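For reference, the seven Hu invariants of a binary shape can be computed with OpenCV as in the sketch below; this is standard library usage for the background method described above and is not the RHD descriptor of the invention.

```python
import cv2
import numpy as np

def hu_moment_descriptor(binary_image):
    """Seven Hu moment invariants of a binary shape image.

    binary_image: 2-D uint8 array, shape pixels nonzero, background 0.
    A log transform is applied because the raw invariants span many orders of magnitude.
    """
    m = cv2.moments(binary_image, binaryImage=True)    # spatial, central and normalized moments
    hu = cv2.HuMoments(m).flatten()                    # the seven invariants
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# Two shapes can then be compared with, e.g.,
# np.linalg.norm(hu_moment_descriptor(a) - hu_moment_descriptor(b)).
```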
(2) Zernike moment invariants
The Zernike moment of order n with repetition m of a two-dimensional function f(x, y) is defined over the unit disk as
Z_nm = ((n + 1)/π) ∬_{x²+y²≤1} f(x, y) · R_nm(ρ) · e^(−i·m·θ) dx dy,
where n is a non-negative integer, n − |m| is even and n ≥ |m|; R_nm(ρ) is the radial polynomial evaluated at the point (x, y); ρ is the length of the vector from the origin to the point (x, y), i.e. ρ = √(x² + y²); and θ is the counterclockwise angle between that vector and the x-axis, i.e. θ = arctan(y/x) (−1 < x, y < 1).
For a digital image of size N × N, the real part C_nm and the imaginary part S_nm of its Zernike moment are obtained by replacing the integral with a discrete sum of f(x, y)·R_nm(ρ)·cos(mθ) and f(x, y)·R_nm(ρ)·sin(mθ) over the image pixels, respectively.
Zernike moment invariants can use moments of arbitrarily high order to represent an image, describe image detail well, and an image can be reconstructed from a small number of moments. Compared with Hu moment invariants, Zernike moment invariants recognize images better, are concise in expression and contain little redundancy, so they are widely used in target recognition research. However, the high-order Zernike moments are easily disturbed by noise and vary greatly, and the individual moment components are only loosely related to the visual characteristics of the shape, so it is difficult to observe intuitively how a given moment component describes the region shape.
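As a small aid to the definition above, the Zernike radial polynomial R_nm(ρ) can be evaluated directly from its standard factorial expansion; the sketch below is textbook material, not part of the patent.

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial polynomial R_nm(rho) of the Zernike basis (requires n >= |m|, n - |m| even)."""
    m = abs(m)
    if n < m or (n - m) % 2 != 0:
        raise ValueError("require n >= |m| and n - |m| even")
    value = 0.0
    for s in range((n - m) // 2 + 1):
        coef = (-1) ** s * factorial(n - s)
        coef /= factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s)
        value += coef * rho ** (n - 2 * s)
    return value
```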
(3) Convex-hull algorithms segment an arbitrary binary image into multiple convex hulls according to the distribution of interest points, extract features from each convex hull, and finally combine the features of the convex hulls for image retrieval and recognition. The computational complexity of these algorithms is relatively low, but pixels are easily lost or duplicated when the convex hulls are partitioned.
(4) The AHDH algorithm recursively computes the geometric centroid and divides the image into several subregions using a hierarchical decomposition of the image; it simultaneously computes the absolute density and relative density features of each region and builds a density histogram of the region shape features as the feature descriptor of the binary image.
In this algorithm, the binary image is regarded as a two-dimensional plane; the set of pixels of the region shape in this plane is denoted B, and the number of pixels belonging to the shape is denoted N. Before the partitioning algorithm starts, the binary image is preprocessed by translating its center to the centroid position, so that the representation is translation invariant; the coordinates of all pixels are then kept fixed. The i-th rectangular region of the binary image on layer l is denoted R_i^l and contains a set of black pixels. The binary image is split into several regions by constructing orthogonal grid lines through the geometric centroid that cross the whole binary image; the same split is then applied on the next layer, and the binary image is split hierarchically layer by layer down to layer l (each layer is also called a "level"). Each rectangular region obtained by the split has two principal characteristics: its black pixel count and its area.
On each splitting layer l, the geometric centroid of each region R_i^l is computed, and the region is split through that centroid into four subregions R_{i,j}^l, j ∈ {1, 2, 3, 4}. The initial region (l = 1) is the whole binary image region. For l > 1, every region of layer l is a subregion of a region of layer (l − 1). This iteration is performed until the terminating layer is reached.
The density feature and the relative density feature of the region shape are computed (for 1 ≤ i ≤ 4^(l−1), 1 ≤ j ≤ 4) as the fraction of the shape's black pixels that fall in the subregion, and as that fraction normalized by the subregion's share of its parent region's area, respectively.
On each iteration layer l, a 4^(l−1) × 4 feature matrix FA_l is constructed from the new subregions.
The density feature vector FV_l and the relative density feature vector of layer l are then defined from this matrix.
The relative density vector is quantized. For every subregion R_{i,j}^l, if its relative density is not less than 1 (the value expected when the black pixels of R_i^l are uniformly distributed over its four subregions), the region is marked "full"; otherwise the region is marked "empty". Whether a subregion is labeled "full" or "empty" therefore depends on its internal black pixels. A distribution word w_i is defined and a distribution dictionary L is constructed from these words; L represents the 16 possible "full"/"empty" relationships between the current region and its four subregions. With E denoting "empty" and F denoting "full", the dictionary L has the form:
L = [EEEE, EEEF, EEFE, …, FFFF] = [w_0, w_1, w_2, …, w_15]
Based on this dictionary, a quantized feature is obtained for each layer, expressed through the distribution words of that layer's regions.
The density features, the relative density features and the quantized features obtained with the above partitioning are assembled into a new feature descriptor vector FV.
This adaptive geometric partitioning combined with feature quantization improves retrieval precision and reduces the miss rate. However, the image features extracted by the AHDH algorithm suffer from two problems when describing an image: (a) they are not rotation invariant. If an image in the image library is rotated, its pixel distribution changes; when the image is partitioned through the centroid computed from the region pixel features, the partition of every subregion also changes, so the constructed feature descriptor no longer satisfies rotation invariance and cannot describe the region features of the image accurately, the retrieval efficiency drops, and false matches become more likely. (b) On every iteration layer l, the image is split into 4^(l−1) regions, and the features of each of these 4^(l−1) regions must be computed separately, so the computation is complex and the feature data occupy a large amount of storage. When the iterative splitting is deep, i.e. l is large, the number of subregions becomes so large that the feature computation and data storage easily overflow, the feature description is interrupted, and the image retrieval cannot be completed.
In summary, the region shape features of an image can express and describe the characteristics of the target image well, and feature matching, pattern recognition and target retrieval can be carried out on the basis of the extracted image region shape features.
The content of the invention
The technical problem to be solved by the present invention is that the existing feature extraction method based on region shape (AHDH) describes images inaccurately, because the extracted region shape features change when the image is rotated, and its feature computation and data storage overflow when the iteration layer is deep. The invention provides a feature extraction method that can express and describe the target image efficiently and accurately, while guaranteeing that the extracted region shape features have translation invariance, scale invariance and rotation invariance.
The present invention adopts the following technical scheme:
A digital image understanding method, including a region shape feature description method for a target image, in which the feature description method uses the Rotation Hierarchical Density (RHD) feature of the target image region shape.
The Rotation Hierarchical Density (RHD) feature description method of the target image region shape is realized with a "rotation-layering" partitioning algorithm, whose detailed process comprises the following steps:
Step A: convert the input target image into a binary image; its general form can be expressed as f(x, y) = 1 if (x, y) ∈ B and f(x, y) = 0 otherwise (formula (1)).
Here x and y are the horizontal and vertical coordinates of each pixel in the target image shape, and B is the region over which the image shape is distributed. Step B: for the binary image B, denote its number of black pixels by N and its area by E. Compute the centroid (x_c, y_c) of the image region shape and set up a rectangular coordinate system with the centroid as origin: the line y = y_c is the horizontal axis, the line x = x_c is the vertical axis, x > x_c is the positive x direction and y > y_c is the positive y direction. The centroid is computed as x_c = (1/N) Σ_n x_n, y_c = (1/N) Σ_n y_n (formula (2)).
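A minimal numpy sketch of Steps A–B, assuming the image is already binarized with shape pixels equal to 1; taking the total pixel count as the area E is an implementation assumption for this example.

```python
import numpy as np

def centroid_and_counts(binary):
    """Centroid (xc, yc), black-pixel count N and area E of a binary shape image."""
    ys, xs = np.nonzero(binary)        # coordinates of all shape (value 1) pixels
    N = xs.size                        # number of black pixels
    E = binary.size                    # area of the image region (total pixel count, assumed)
    xc, yc = xs.mean(), ys.mean()      # geometric centroid of the shape
    return xc, yc, N, E
```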
Step C: using the horizontal and vertical coordinates (x_n, y_n) of the pixel whose value is 1 at position n in the image region shape, construct the complex number c_n:
c_n = x_n + i·y_n (3)
Transform this complex number into the rectangular coordinate system set up in Step B and compute the argument of the pixel in the new coordinate system:
z = (x_n − x_c) + i·(y_c − y_n) = r(cos β_n + i·sin β_n) (4)
where x_n − x_c ∈ R and y_c − y_n ∈ R; r is the modulus of the complex number, r = √((x_n − x_c)² + (y_c − y_n)²); and β_n is the argument of the complex number, β_n = arctan((y_c − y_n)/(x_n − x_c)). If the computed argument β_n is negative, it is converted into 0°–360° with the formula β_n = β_n + 360°, which makes the selection and judgement of the split regions convenient. The arguments β_n (n = 1, 2, …, N) of all black pixels in the target image region shape are then computed.
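A sketch of Step C in numpy; np.arctan2 is used in place of the arctan-plus-360° rule and yields the same 0°–360° argument directly (an implementation convenience, not the patent's literal formula).

```python
import numpy as np

def pixel_arguments(binary):
    """Argument (in degrees, 0-360) of every shape pixel about the shape centroid."""
    ys, xs = np.nonzero(binary)
    xc, yc = xs.mean(), ys.mean()
    # Image rows grow downward, so (yc - ys) flips the axis to the usual mathematical orientation.
    beta = np.degrees(np.arctan2(yc - ys, xs - xc))
    return np.mod(beta, 360.0)          # map negative angles into [0, 360)
```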
Step D: assume the target image is rotated, with rotation angle θ_i, and choose M values of θ_i uniformly in 0°–360°. Whether the n-th black pixel is selected as a pixel of the corresponding subregion is determined from the computed argument β_n; the selection criterion is given by formula (5).
All black pixels of the target image region are subjected to this selection, giving the target subregion B_i obtained when the target image is rotated by θ_i; this subregion is then split hierarchically through the centroid of the subregion, choosing the region in the assigned direction.
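The selection rule of formula (5) is not reproduced in this text; purely for illustration, the sketch below assumes the subregion for rotation angle θ_i is the set of shape pixels whose argument falls in a fixed 90° sector starting at θ_i. That sector width, and the helper names, are assumptions.

```python
import numpy as np

SECTOR_WIDTH = 90.0   # assumed angular width of the selected subregion (not specified in this text)

def rotated_subregion(binary, theta_deg):
    """Select the shape pixels assumed to lie in the sector [theta, theta + 90 degrees).

    The exact selection rule of formula (5) is not reproduced in the text, so this
    sector-based rule is an assumption made purely for illustration.
    """
    ys, xs = np.nonzero(binary)
    xc, yc = xs.mean(), ys.mean()
    beta = np.mod(np.degrees(np.arctan2(yc - ys, xs - xc)), 360.0)
    offset = np.mod(beta - theta_deg, 360.0)
    keep = offset < SECTOR_WIDTH
    return xs[keep], ys[keep]            # coordinates of the selected subregion's pixels

# M = 90 rotation angles sampled uniformly over 0-360 degrees, i.e. theta_i = 0, 4, 8, ...
thetas = np.arange(90) * (360.0 / 90)
```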
" rotation-layering " partitioning algorithm, uniform sampling θiValue, sample frequency M=90.There is θ in object regioni The target subregion chosen during the rotation of angleThere are 90, calculate each target subregionBlack picture element pointsWith face ProductAnd to each target subregionCarry out layering segmentation;To each target subregionLayering segmentation, construction rotation layering Density feature vector, comprises the following steps that:
Step E: compute the centroid of each target subregion B_i and split each target subregion B_i. On layer l (l ≥ 2), the subregion chosen from each target subregion is recorded together with its black pixel count and its area;
Step F: compute the absolute density and the relative density. On the rotation splitting layer (l = 1), the absolute density is the ratio of the pixel count of the shape subregion selected when the target image is rotated by θ_i to the pixel count N of the original target image B, and the relative density is the ratio of the absolute density to the ratio of the shape subregion's area to E. On the hierarchical splitting layers (l ≥ 2), the absolute density is the proportion of the parent subregion's pixels that are distributed into the target subregion, and the relative density is the ratio of the absolute density to the corresponding area ratio. When the image is rotated, the rotation hierarchical density features are computed layer by layer with these two formulas (formula (6)).
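Under the same illustrative assumptions, both densities of a selected subregion can be computed as below; on layer l = 1 the parent quantities are the N and E of the whole image, and on deeper layers they are those of the parent subregion.

```python
def densities(n_sub, e_sub, n_parent, e_parent):
    """Absolute and relative density of a subregion with respect to its parent region.

    n_sub, e_sub      : black-pixel count and area of the selected subregion
    n_parent, e_parent: black-pixel count and area of the parent region
                        (the whole image on layer l = 1)
    """
    absolute = n_sub / n_parent                      # share of the parent's shape pixels
    relative = absolute / (e_sub / e_parent)         # density normalized by the area share
    return absolute, relative
```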
Step G: construct the feature vectors FV_l. Each layer's rotation hierarchical density feature vector (l ≥ 1) is composed of the absolute densities and relative densities extracted after the target image is rotated by θ_i (i = 1, 2, …, M) (formula (7)).
For the rotation hierarchical density features, the relative density features are first normalized so that the two kinds of features are on the same order of magnitude; the two kinds of features are then made shift invariant, so that the translational dislocation of the feature vector elements caused by rotation of the target image itself does not cause retrieval errors; finally, because the two kinds of features contribute differently to retrieval, a suitable weight coefficient w is selected to combine the two classes of extracted features.
The combined feature weights the two classes of features with the coefficient w (formula (8)).
According to the combined features, the combined rotation hierarchical density feature vector FC_l can be constructed; this feature vector is composed of the combined density features of the absolute density and the relative density (formula (9)).
A discrete Fourier transform is applied to each column of the combined density feature vector and its magnitude is taken; the computation is given by formulas (10) and (11).
The combined feature descriptor FFC is then constructed from the per-layer combined feature vectors FFC_l obtained by the above processing (formula (12)).
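Purely as a sketch (the exact combination formula of the patent is not reproduced in this text), the snippet below shows one plausible way to weight, Fourier-transform and stack the per-layer features into a descriptor; the convex weighting by w and the rescaling step are assumptions.

```python
import numpy as np

def combined_layer_feature(absolute, relative, w=0.9):
    """Weighted combination of one layer's absolute and relative density vectors.

    The exact combination formula is not reproduced in the source text; a convex
    weighting with coefficient w (w = 0.9 in the experiments) is assumed here.
    """
    absolute = np.asarray(absolute, dtype=float)
    relative = np.asarray(relative, dtype=float)
    # Rescale the relative densities so both feature types share the same order of magnitude.
    relative = relative * (np.abs(absolute).max() / (np.abs(relative).max() + 1e-12))
    return w * absolute + (1.0 - w) * relative

def shift_invariant(feature_vector):
    """DFT-magnitude step: removes sensitivity to cyclic shifts of the vector elements,
    which is how rotation of the image itself manifests in the rotation-sampled features."""
    return np.abs(np.fft.fft(feature_vector))

def rhd_descriptor(per_layer_absolute, per_layer_relative, w=0.9):
    """Stack the shift-invariant combined features of all layers into the final descriptor FFC."""
    rows = [shift_invariant(combined_layer_feature(a, r, w))
            for a, r in zip(per_layer_absolute, per_layer_relative)]
    return np.vstack(rows)
```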
The Rotation Hierarchical Density (RHD) feature of the target image region shape requires every pixel of the target image region to be processed accordingly; it takes into account both the contour pixel features and the internal structural features of the image region, overcomes the computational complexity and data storage overflow of the AHDH feature extraction, and maintains the translation, scale and rotation invariance of the image region shape feature.
The rotation hierarchical density method iteratively splits the image through the pixel centroid of the image and extracts the density features of the rotated image. Assuming the image is rotated by an angle θ, the image subregion for that rotation is selected according to the rotation-splitting criterion and split iteratively, and the density features of the image at rotation θ are extracted; M different rotation angles θ are sampled uniformly over the interval 0°–360°, the density features generated by the image at each sampled angle θ are extracted, and together they constitute the rotation hierarchical density feature. This feature can express and describe the feature information of the target image efficiently and accurately, and images described with it can be retrieved and recognized with high efficiency.
Brief description of the drawings
Fig. 1 is a block diagram of a typical recognition and retrieval system based on image region shape features;
Fig. 2 is a block diagram of the RHD algorithm;
Fig. 3 is a schematic diagram of the selection of the split region when the image is rotated;
Fig. 4 is a schematic diagram of the selection of the split region when the image rotation angle is θ = 0°;
Fig. 5 is a schematic diagram of the selection of the split region when the image rotation angle is θ = 60°;
Fig. 6 shows the mpeg7 image set used to compare the performance of the RHD and AHDH algorithms;
Fig. 7 shows part of the image set of the trademark image library TradeMark70.
Specific embodiment
The technical scheme of the invention is described in detail below with reference to the accompanying drawings:
Fig. 1 is a block diagram of a typical recognition and retrieval system based on image region shape features, in which the dashed box indicates the feature extraction method used in the prior art and the solid box indicates the feature extraction method of the present invention. When the above system is used to extract image region shape features, the following steps are carried out:
Step 1: input the test set images:
Step 101: judge whether the input test image is a binary image; if so, go to Step 2; if not, go to Step 102;
Step 102: preprocess the test image and convert it into a black-and-white binary image (shape-region pixel value 1, background pixel value 0);
Step 2: the binary image can be expressed in the form of formula (1). Record the abscissa x and the ordinate y of every pixel of the query target image region whose value is 1 into the two-dimensional arrays A and B respectively, count the length N of array A, and compute the area E of the binary image region;
Step 3: compute the centroid (x_c, y_c) of the shape region according to formula (2), and set up a rectangular coordinate system with the centroid (x_c, y_c) as origin: the line y = y_c is the horizontal axis, the line x = x_c is the vertical axis, x > x_c is the positive x direction and y > y_c is the positive y direction;
Step 4: compute, for each of the N pixels with value 1 in the target image region, its argument in the new coordinate system:
Step 401: taking the pixel with value 1 at position n in the image region shape, with horizontal and vertical coordinates (x_n, y_n), construct the complex number c_n from the pixel coordinates according to formula (3);
Step 402: transform this complex number into the coordinate system set up in Step 3 according to formula (4) and compute the argument β_n of the pixel in the new coordinate system; judge whether the argument β_n is negative; if so, go to Step 403, if not, go to Step 404;
Step 403: convert the argument, in the new coordinate system, of the pixel with value 1 at position n in the image region shape into 0°–360° with the conversion formula β_n = β_n + 360°;
Step 404: count the number N_0 of pixels whose argument has been computed and judge whether N_0 equals N; if so, go to Step 5, if not, go to Step 405;
Step 405: choose the next pixel with value 1 in the image region shape and go to Step 401;
Step 5: assume the target image region is rotated, with rotation angle θ. Sample M rotation angle values uniformly in 0°–360° and denote them θ_i, with 0 ≤ θ_i < 360° and 1 ≤ i ≤ M. First set θ_i = 0° and go to Step 501;
Step 501: assuming the rotation angle of the image is θ = θ_i, judge according to formula (5) which pixels are distributed to the corresponding subregion when the image is rotated by the angle θ_i (the selection of the subregion when the image is rotated is illustrated in Fig. 3), count the number of those pixels, and compute the area of the subregion;
Step 502: accumulate the iteration depth l and compute the absolute density and the relative density of the rotation layering according to formula (6); judge whether the iteration depth l is smaller than L; if so, go to Step 503, if not, go to Step 504;
Step 503: substitute the subregion for the target image and go to Step 2 (the selection of the hierarchically split subregions of the rotated image is illustrated in Figs. 4 and 5);
Step 504: update the angle θ_i of Step 5 as θ_i = θ_i + 360°/M and judge whether θ_i is smaller than 360°; if so, go to Step 501, if not, go to Step 6;
Step 6: construct the rotation hierarchical density feature descriptor, as follows:
Step 601: from the absolute densities and the relative densities obtained on each iteration layer l in the rotation-layering splitting of Step 5, build the density feature vector of each iteration layer l according to formula (7);
Step 602: numerically normalize the relative density feature vector of each iteration layer l so that its values are on the same order of magnitude as those of the absolute density features;
Step 603: use the discrete Fourier transform to avoid the retrieval errors caused by the shift of the image feature vector elements that occurs when the image itself is rotated; the one-dimensional discrete Fourier transform is C(u) = Σ_{i=0}^{n−1} c(i)·e^(−j·2π·u·i/n), u = 0, 1, …, n − 1,
where c(i) is the input discrete signal sequence, n is the length of the discrete signal sequence, and C(u) is the discrete Fourier transform of c(i).
Step 604: apply the discrete Fourier transform to the feature vector extracted on each iteration layer l and replace each column with the modulus of its Fourier coefficients;
Step 605: since the absolute features and the relative features contribute differently to retrieval, select a suitable weight coefficient w and combine the two classes of extracted features according to formula (8) to form the combined density feature;
Step 606: build, according to formula (9), the combined density feature vector FC_l composed of the combined density features of each iteration layer l;
Step 607: apply the discrete Fourier transform to the combined density feature vector of each layer according to formula (10), overcoming the element-shift problem of the combined density feature vector caused by rotation of the image itself, and replace each original column of the combined density feature vector of each layer with the magnitude of its discrete Fourier transform according to formula (11);
Step 608: build the combined rotation hierarchical density feature descriptor matrix of the form shown in formula (12).
Step 7: for every target image in the test set, extract the feature vector matrix of each test image according to Steps 1 to 6; the matrix has size L × M and is stored as a packed element of the one-dimensional array Feature. Assuming the test set contains C classes of different images and each class contains a subset of k images, the test set contains C × k target test images.
This completes the feature extraction and feature storage of the target images. The feature description and image retrieval/recognition process of the invention are described next; the specific implementation is as follows:
Step 7: choose one image from the test image set as the query image and extract its image region shape feature matrix FV_a;
Step 8: select the i-th (i = 1, 2, …, C × k) packed element of the array Feature of the test image set, denote this feature matrix FV_b, and compute the distance between FV_a and FV_b as the Euclidean distance between the two feature matrices (see the formula and the sketch below);
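A one-line numpy rendering of this matrix distance, assuming the two descriptors share the same L × M shape; the function name is illustrative.

```python
import numpy as np

def descriptor_distance(fv_a, fv_b):
    """Euclidean distance between two L x M RHD feature matrices (smaller = more similar)."""
    return float(np.sqrt(np.sum((np.asarray(fv_a) - np.asarray(fv_b)) ** 2)))
```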
Step 9: sort the C × k similarity distances computed in Step 8 from small to large, choose the m = 2 × k most similar images, judge whether each of these m images belongs to the same class as the query image, and count the number c of returned images that are related to (of the same class as) the query image;
Step 10: compute the retrieval precision Precision with which the target query image is retrieved from the test set, the feature extraction time FETime, and the retrieval time RRTime;
Step 11: count the number count of query images retrieved so far and judge whether count is smaller than C × k; if so, go to Step 7, if not, go to Step 12;
Step 12: compute, over all queries, the averages of the retrieval precision Precision, the feature extraction time FETime and the retrieval time RRTime obtained in Step 10;
Step 13: evaluate the quality of the image retrieval and recognition system from the computed average retrieval precision, average feature extraction time and average retrieval time. Experiments show that the feature extraction time and the feature retrieval/recognition time of the method of the invention are short, and that its retrieval accuracy is high.
To verify the effect of the method of the invention, the following experiments were carried out:
1. Experimental conditions:
The experimental equipment was a computer running Microsoft Windows XP with an Intel(R) Core(TM)2 Quad Q8200 2.33 GHz CPU and 2.00 GB of memory. The programming language was Matlab (version 7.1).
2. Experimental method:
The experiments use the basic framework of the image retrieval and recognition system (shown in Fig. 1), replacing the part shown in the dashed box with the part shown in the solid box. The solid box is the method for expressing and describing the image region features; the invention extracts the region shape feature of the image with RHD, whose flow chart is shown in Fig. 2.
The invention was simulated on two image databases. The mpeg7 image data set (shown in Fig. 6) is used to verify the validity of the RHD algorithm and to compare the retrieval performance of the RHD and AHDH algorithms, highlighting the theoretical value of the invention; both image recognition methods are also applied to retrieval on the TradeMark70 trademark image library collected at random from the network (shown in Fig. 7), demonstrating the application value of the invention.
The images in the image database are preprocessed in turn and converted into the required binary images; following the specific embodiment, the density features and area features of each image are extracted with the RHD algorithm, the rotation hierarchical density feature matrix is constructed, and each feature matrix is stored in real time. Every image in the image library is treated in turn as a target query image, and the aim is to retrieve and recognize the target image within the library.
When a target image in the image library is retrieved, the library contains an image subset of k images similar to that target image. The RHD feature matrix of the query image is extracted, and the similarity distance between the RHD feature matrix of this image and the RHD feature matrices of all images in the image library is computed.
The Euclidean distance d between any two images is expressed as d = √( Σ_l Σ_m ( FV_a(l, m) − FV_b(l, m) )² ), where FV_a and FV_b are the two L × M feature matrices.
The smaller the value d of the distance expression between two images, the more similar the two images. Using this property of the distance measure, the compared images can be sorted by Euclidean distance from small to large (i.e. from high to low similarity), and the first N images can be displayed to judge the retrieval result intuitively.
3. Evaluation index of the experimental results:
The evaluation criterion of the invention is the precision (Precision): after one retrieval, the number of same-class related images returned by the system (C) divided by the number of all returned images (R), i.e. Precision = C/R; the more correct same-class images are returned (the larger C), the higher the precision.
The invention uses the bull's-eye score as the evaluation criterion of the retrieval precision, i.e. the average retrieval precision over the first m retrieval-result images returned in descending order of similarity. Each binary image is taken in turn as the retrieval target, retrieval is performed according to the similarity measurement criterion, and the m = 2 × k most similar images are returned. If the number of returned images related to this binary image (of the same class) is c, then the precision of this single retrieval is p = c/k. The retrieval precision, feature extraction time and retrieval time are recorded for the target retrieval of each of the C × k images, and the average retrieval precision of the system is then the average of the retrieval precisions of the C × k binary images.
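A compact sketch of this evaluation loop, assuming descriptors[i] holds the RHD matrix of image i and labels[i] its class; both names, and the use of plain Euclidean distance, are illustrative assumptions.

```python
import numpy as np

def average_precision(descriptors, labels, k):
    """For each query, return the top m = 2k matches and score the fraction of the
    k same-class images that appear among them, then average over all queries."""
    descriptors = [np.asarray(d) for d in descriptors]
    labels = np.asarray(labels)
    precisions = []
    for q, query in enumerate(descriptors):
        dists = np.array([np.linalg.norm(query - d) for d in descriptors])
        top = np.argsort(dists)[: 2 * k]                 # indices of the 2k most similar images
        c = int(np.sum(labels[top] == labels[q]))        # same-class images among them
        precisions.append(c / k)
    return float(np.mean(precisions))
```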
In addition, the feature extraction time (FETime, Feature Extraction Time) of the image region rotation features and the retrieval time (RRTime, Retrieving Recognition Time) are also important for assessing the performance of an image retrieval algorithm. The feature extraction time FETime is the time T_1 needed to process a single binary image, extract the rotation features of its region shape and perform any necessary post-processing. If the time needed to match the features of the query image against those of one training image is T_2, then the retrieval time RRTime is the time T_1 for extracting the region shape feature of the query image plus the accumulated time for matching the query image against the feature database of all training images, i.e. RRTime = T_1 + C × k × T_2, where C is the number of image classes in the image database and k is the number of images in each class.
On the premise that the retrieval precision (Precision) is not reduced, the lower the feature extraction time (FETime) and the retrieval time (RRTime), the lower the time complexity of the image feature extraction process and the image retrieval/recognition process, and the better the performance of the image retrieval algorithm.
4. Comparison experiments with the prior art:
The adaptive hierarchical density histogram (AHDH) algorithm and the hierarchical density algorithm based on multiple rotations and multi-layer splitting (RHD) are used to retrieve the MPEG-7 binary image library, and the average retrieval precision (Precision), feature extraction time (FETime) and recognition/retrieval time (RRTime) are computed with the bull's-eye score. The results are shown in Table 2:
Method Precision% RRTime(s) FETime(s)
AHDH 63.95 2.145 2.059
RHD 72.82 0.529 0.043
Table 2. Performance comparison of AHDH and RHD on the binary image library MPEG-7
With reference to Table 2, the retrieval performance of the AHDH and RHD algorithms is compared and analyzed as follows:
(1) Comparison of the average retrieval precision (Precision): the retrieval precision of the AHDH algorithm is not high at shallow iterative splitting layers and only improves as the splitting deepens and local features are better mined. In our simulation, because of the memory limitation of the simulation platform, the iterative splitting could only reach L = 6, where the retrieval precision is not high, so the simulation result published in the original paper is quoted directly here: at L = 10, Precision = 63.95%. The RHD algorithm, using the region rotation combined feature descriptor at a low iteration layer to describe the image and carry out retrieval, already achieves a higher retrieval precision; the experiments show that with iteration layer L = 3 and weighting coefficient w = 0.9 its retrieval precision is Precision = 72.82%.
(2) Comparison of the feature extraction time (FETime): with an iterative splitting depth of L = 6, the feature extraction time of AHDH for a single image is FETime = 2.059 s; the feature extraction time of RHD for a single image is FETime = 0.043 s, which greatly reduces the time complexity of extracting image features.
(3) Comparison of the recognition/retrieval time (RRTime): for both AHDH and RHD, the time RRTime needed for matching and retrieval with the extracted image features is the feature extraction time FETime plus the accumulated shape similarity matching times T_2; T_2 depends mainly on the dimension of the feature vector and on the similarity measurement criterion used, and for comparability both methods use the Euclidean distance similarity criterion. The AHDH algorithm needs a long time to extract the region shape features of each target image, so its retrieval time is longer. As shown in Table 2, the recognition/retrieval time of AHDH is RRTime = 2.145 s, and that of RHD is RRTime = 0.529 s.
The 1400 binary trademarks of the TradeMark70 trademark image library are retrieved with the image description method based on the adaptive hierarchical density histogram (AHDH) and with the rotation-splitting hierarchical density description method (RHD), and the average retrieval precision (Precision), feature extraction time (FETime) and recognition/retrieval time (RRTime) are computed with the bull's-eye score. The results are shown in Table 3:
Method Precision% FETime(s) RRTime(s)
AHDH 25.27 1.851 1.699
RHD 80.50 0.636 0.755
Table 3. Performance comparison of AHDH and RHD on the trademark image library TradeMark70
As shown in Table 3, when the AHDH algorithm iterates over 6 layers to extract the adaptive hierarchical density features of the trademark image region shapes, its retrieval precision on trademark images is low, Precision = 25.27%; the time to extract the region features of a single image is FETime = 1.851 s, and the time to retrieve and recognize a query image in the image database is RRTime = 1.699 s. The RHD algorithm proposed here, describing and recognizing images with rotation hierarchical density features at a relatively low iteration layer and a feature combination weight coefficient w = 0.9, reaches a higher retrieval precision, Precision = 80.50%, with a single-image region feature extraction time of FETime = 0.636 s and a query retrieval/recognition time of RRTime = 0.755 s.
Unlike the mpeg7 image database, the trademark images in the TradeMark70 trademark image library include shape changes such as rotation, scaling, translation and mirroring. Because the image features extracted by the AHDH algorithm do not satisfy rotation invariance, its retrieval precision on this image database is lower than its retrieval precision on mpeg7; the RHD algorithm, which has good rotation, scale and translation invariance, achieves a higher retrieval precision.
It can be seen that the new RHD algorithm needs comparatively little time for feature extraction in trademark image retrieval, overcomes the influence of image rotation on the feature description, and achieves good retrieval results; it can be widely applied to the retrieval of images, such as trademarks, whose contour features are simple and whose regions are disconnected.

Claims (9)

1. A digital image understanding method, including a region shape feature description method for a target image, characterized in that the feature description method uses the Rotation Hierarchical Density (RHD) feature of the target image region shape.
2. The Rotation Hierarchical Density (RHD) feature of the target image region shape according to claim 1, characterized in that the rotation hierarchical density is realized with a "rotation-layering" partitioning algorithm whose detailed process comprises the following steps:
Step A: convert the input target image into a binary image, whose general form can be expressed as f(x, y) = 1 for (x, y) ∈ B and f(x, y) = 0 otherwise;
here x and y are the horizontal and vertical coordinates of each pixel in the target image shape, and B is the region over which the shape of the target image is distributed;
Step B: for the binary image B, denote its black pixel count by N and its area by E; compute the centroid (x_c, y_c) of the image region shape and set up a rectangular coordinate system with the centroid as origin: the line y = y_c is the horizontal axis, the line x = x_c is the vertical axis, x > x_c is the positive x direction and y > y_c is the positive y direction; the centroid is computed as x_c = (1/N) Σ_n x_n, y_c = (1/N) Σ_n y_n;
Step C: with the horizontal and vertical coordinates (x_n, y_n) of the n-th black pixel in the image region shape, construct the complex number c_n = x_n + i·y_n and compute the argument of the pixel in the coordinate system set up in Step B:
z = (x_n − x_c) + i·(y_c − y_n) = r(cos β_n + i·sin β_n),
where x_n − x_c ∈ R and y_c − y_n ∈ R, r is the modulus of the complex number, r = √((x_n − x_c)² + (y_c − y_n)²), and β_n is its argument;
if the computed argument β_n is negative, it is converted into 0°–360° with β_n = β_n + 360°, which makes the selection and judgement of the split regions convenient; the arguments β_n (n = 1, 2, …, N) of all black pixels in the target image region shape are then computed;
Step D: assume the target image is rotated, with rotation angle θ_i, and choose M values of θ_i uniformly in 0°–360°; whether the n-th black pixel is selected as a pixel of the corresponding subregion is determined from the computed argument β_n according to the selection criterion;
all black pixels of the target image region are subjected to this selection, giving the target subregion obtained when the target image is rotated by θ_i; the subregion is then split hierarchically through the centroid of the subregion, choosing the region in the assigned direction.
3. The "rotation-layering" partitioning algorithm according to claim 2, characterized in that the θ_i values are sampled uniformly with sampling frequency M = 90.
4. The sampling frequency M = 90 according to claim 3, characterized in that the target subregions selected when the target image region is rotated by the angles θ_i number 90; the black pixel count and the area of each target subregion are computed, and each target subregion is split hierarchically.
5. The hierarchical splitting of each target subregion according to claim 4, characterized in that the rotation hierarchical density feature vector is constructed as follows:
Step E: compute the centroid of each target subregion and split each target subregion; on layer l (l ≥ 2), the subregion chosen from each target subregion is recorded together with its black pixel count and its area;
Step F: compute the absolute density and the relative density; on the rotation splitting layer (l = 1), the absolute density is the ratio of the pixel count of the shape subregion selected when the target image is rotated by θ_i to the pixel count N of the original target image B, and the relative density is the ratio of the absolute density to the ratio of the shape subregion's area to E; on the hierarchical splitting layers (l ≥ 2), the absolute density is the proportion of the parent subregion's pixels distributed into the target subregion, and the relative density is the ratio of the absolute density to the corresponding area ratio; when the target image is rotated, the rotation hierarchical density features are computed layer by layer with these two formulas;
Step G: construct the feature vectors FV_l; each layer's rotation hierarchical density feature vector (l ≥ 1) is composed of the absolute densities and relative densities extracted after the target image is rotated by θ_i (i = 1, 2, …, M).
6. The rotation hierarchical density features according to claim 5, characterized in that the relative density features are first normalized so that the two kinds of features are on the same order of magnitude; the two kinds of features are then made shift invariant, so that the translational dislocation of the feature vector elements caused by rotation of the target image itself does not cause retrieval errors; and, since the two kinds of features contribute differently to retrieval, a suitable weight coefficient w is selected to combine the two classes of extracted features.
7. The combination of the two classes of extracted features with the weight coefficient w according to claim 6, characterized in that the combined rotation hierarchical density feature vector FC_l can be constructed according to the combined features.
8. The combined rotation hierarchical density feature vector FC_l according to claim 7, characterized in that this feature vector is composed of the combined density features of the absolute density and the relative density, from which the combined rotation hierarchical density feature descriptor FFC can be constructed.
9. The combined rotation hierarchical density feature descriptor FFC according to claim 8, characterized in that the descriptor is composed of the combined feature vectors of each iterative splitting layer.
CN201510961229.6A 2015-12-18 2015-12-18 A kind of trademark image retrieval method based on region shape feature Pending CN106897722A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510961229.6A CN106897722A (en) 2015-12-18 2015-12-18 A kind of trademark image retrieval method based on region shape feature

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510961229.6A CN106897722A (en) 2015-12-18 2015-12-18 A kind of trademark image retrieval method based on region shape feature

Publications (1)

Publication Number Publication Date
CN106897722A true CN106897722A (en) 2017-06-27

Family

ID=59191359

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510961229.6A Pending CN106897722A (en) 2015-12-18 2015-12-18 A kind of trademark image retrieval method based on region shape feature

Country Status (1)

Country Link
CN (1) CN106897722A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108763380A (en) * 2018-05-18 2018-11-06 徐庆 Brand recognition search method, device, computer equipment and storage medium
CN108845999A (en) * 2018-04-03 2018-11-20 南昌奇眸科技有限公司 A kind of trademark image retrieval method compared based on multiple dimensioned provincial characteristics
CN109344313A (en) * 2018-07-31 2019-02-15 中山大学 A kind of Automatic identification method based on trademark image
CN109376748A (en) * 2018-10-25 2019-02-22 惠州学院 A kind of image shape Feature Extraction System
WO2019128735A1 (en) * 2017-12-28 2019-07-04 华为技术有限公司 Imaging processing method and device
CN110895572A (en) * 2018-08-23 2020-03-20 北京京东尚科信息技术有限公司 Image searching method and device
CN112115905A (en) * 2020-09-25 2020-12-22 广东电网有限责任公司 Electrical experiment report identification method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1715328A1 (en) * 2005-04-18 2006-10-25 Khs Ag Inspection apparatus for inspecting sealed containers
CN103258037A (en) * 2013-05-16 2013-08-21 西安工业大学 Trademark identification searching method for multiple combined contents
CN104021229A (en) * 2014-06-25 2014-09-03 厦门大学 Shape representing and matching method for trademark image retrieval
CN104199931A (en) * 2014-09-04 2014-12-10 厦门大学 Trademark image consistent semantic extraction method and trademark retrieval method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1715328A1 (en) * 2005-04-18 2006-10-25 Khs Ag Inspection apparatus for inspecting sealed containers
CN103258037A (en) * 2013-05-16 2013-08-21 西安工业大学 Trademark identification searching method for multiple combined contents
CN104021229A (en) * 2014-06-25 2014-09-03 厦门大学 Shape representing and matching method for trademark image retrieval
CN104199931A (en) * 2014-09-04 2014-12-10 厦门大学 Trademark image consistent semantic extraction method and trademark retrieval method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ALIREZA KHOTANZAD et al.: "ROTATION INVARIANT IMAGE RECOGNITION USING FEATURES SELECTED VIA A SYSTEMATIC METHOD", Pattern Recognition *
P. AYYALASOMAYAJULA et al.: "Rotation, Scale and Translation invariant image", IEEE *
PANAGIOTIS SIDIROPOULOS et al.: "Content-based binary image retrieval using the adaptive hierarchical density histogram", Pattern Recognition *
胡芝兰 (Hu Zhilan) et al.: "Document image retrieval based on hierarchical density features", Journal of Tsinghua University (Science and Technology) *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019128735A1 (en) * 2017-12-28 2019-07-04 华为技术有限公司 Imaging processing method and device
CN109982088A (en) * 2017-12-28 2019-07-05 华为技术有限公司 Image processing method and device
CN109982088B (en) * 2017-12-28 2021-07-16 华为技术有限公司 Image processing method and device
CN108845999A (en) * 2018-04-03 2018-11-20 南昌奇眸科技有限公司 A kind of trademark image retrieval method compared based on multiple dimensioned provincial characteristics
CN108845999B (en) * 2018-04-03 2021-08-06 南昌奇眸科技有限公司 Trademark image retrieval method based on multi-scale regional feature comparison
CN108763380A (en) * 2018-05-18 2018-11-06 徐庆 Brand recognition search method, device, computer equipment and storage medium
CN108763380B (en) * 2018-05-18 2022-03-08 徐庆 Trademark identification retrieval method and device, computer equipment and storage medium
CN109344313A (en) * 2018-07-31 2019-02-15 中山大学 A kind of Automatic identification method based on trademark image
CN110895572A (en) * 2018-08-23 2020-03-20 北京京东尚科信息技术有限公司 Image searching method and device
CN109376748A (en) * 2018-10-25 2019-02-22 惠州学院 A kind of image shape Feature Extraction System
CN112115905A (en) * 2020-09-25 2020-12-22 广东电网有限责任公司 Electrical experiment report identification method and device

Similar Documents

Publication Publication Date Title
CN106897722A (en) A kind of trademark image retrieval method based on region shape feature
US8015125B2 (en) Multi-scale segmentation and partial matching 3D models
CN108595636A (en) The image search method of cartographical sketching based on depth cross-module state correlation study
Kim et al. Color–texture segmentation using unsupervised graph cuts
CN103678504B (en) Similarity-based breast image matching image searching method and system
CN101877007A (en) Remote sensing image retrieval method with integration of spatial direction relation semanteme
CN107392215A (en) A kind of multigraph detection method based on SIFT algorithms
CN105740378B (en) Digital pathology full-section image retrieval method
CN113223173B (en) Three-dimensional model reconstruction migration method and system based on graph model
CN105205135B (en) A kind of 3D model retrieval methods and its retrieval device based on topic model
Jain et al. M-ary Random Forest-A new multidimensional partitioning approach to Random Forest
CN106951873B (en) Remote sensing image target identification method
Memon et al. 3D shape retrieval using bag of word approaches
CN105844299A (en) Image classification method based on bag of words
Mercioni et al. A study on Hierarchical Clustering and the Distance metrics for Identifying Architectural Styles
CN111339332B (en) Three-dimensional volume data retrieval method based on tree structure topological graph
Ahmad et al. A fusion of labeled-grid shape descriptors with weighted ranking algorithm for shapes recognition
Kumar et al. Automatic feature weight determination using indexing and pseudo-relevance feedback for multi-feature content-based image retrieval
Yao et al. Combining intrinsic dimension and local tangent space for manifold spectral clustering image segmentation
Wu et al. Similar image retrieval in large-scale trademark databases based on regional and boundary fusion feature
CN106570124A (en) Remote sensing image semantic retrieval method and remote sensing image semantic retrieval system based on object level association rule
Zhao et al. Image retrieval based on color features and information entropy
CN108959650A (en) Image search method based on symbiosis SURF feature
Jiang et al. Partial shape matching of 3D models based on the Laplace-Beltrami operator eigenfunction
Kasim et al. Fuzzy C means for image batik clustering based on spatial features

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170627

WD01 Invention patent application deemed withdrawn after publication