CN103177097A - Image sample library feature representing method based on grayscale distribution statistical information - Google Patents


Publication number
CN103177097A
CN103177097A
Authority
CN
China
Prior art date
Legal status
Granted
Application number
CN2013100875545A
Other languages
Chinese (zh)
Other versions
CN103177097B (en)
Inventor
彭浩宇
王勋
Current Assignee
Fujian Chaodaquanqiushi Trading Co Ltd
Original Assignee
Zhejiang Gongshang University
Priority date
Filing date
Publication date
Application filed by Zhejiang Gongshang University
Priority to CN201310087554.5A
Publication of CN103177097A
Application granted
Publication of CN103177097B
Legal status: Active


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an image sample library feature representation method based on grayscale distribution statistical information. The method first selects a set of position point pairs according to the image size and the characteristics of a given sample class; then, for every sample of the class, it computes the gray-level mean in a neighborhood of each position point and determines the mutual relation of each point pair from the two means; next, it determines the reliability of each point-pair relation across samples of the same class and its correlation with the other classes; finally, it selects from the initial set those point pairs whose relations have high reliability and low correlation, and uses these point pairs together with their relations as the feature representation of the class. The method is particularly suited to feature extraction and representation for image sample libraries with low resolution but distinct structural features, such as car logos and road signs.

Description

Image sample library feature representation method based on grayscale distribution statistical information
Technical field
The present invention relates to the field of image processing, and in particular to an image sample library feature representation method based on grayscale distribution statistical information.
Background technology
With the development of machine vision theory and technology, recognizing and understanding image content has become a research focus with a broad application market. In the field of intelligent transportation in particular, the demand for recognizing images such as car logos, road signs and signal posts grows daily. At present, image recognition is generally based on supervised learning, which requires building an image sample library for training according to the application. The key to building a sample library lies in describing, extracting and representing sample features. Feature description expresses image characteristics with image elements such as points, edges, colors and textures; feature extraction applies image processing methods to extract these descriptive elements from the image; feature representation organizes and defines the extracted elements in a formalized way that a computer can process. The feature representation is the formalized result of feature description and extraction, and an image can be recognized through it.
Image features are commonly represented in the following ways:
1) Statistical features: color (gray-level) histograms and image moments are the most common statistical image features. Histograms are simple to use but cannot express deeper image information. Image moments, including Hu moments, Zernike moments and wavelet moments, can describe both global and local features, but are computationally expensive and require clear images.
2) Points and edges: feature points are points with stable properties in an image, such as Harris corners, SIFT feature points and SURF feature points. Edges consist of pixels where the image gradient and gray level change abruptly, and can be extracted with classic algorithms such as Canny. However, point and edge features are unsuitable for blurry, low-resolution images.
3) Texture: texture is another important visual feature of an image; its structure reflects the spatial variation of image brightness and exhibits local and global self-similarity. Many texture analysis methods exist, such as the spatial autocorrelation method, the co-occurrence matrix method and the Tamura method, and they are very effective for texture-rich images.
4) Transform-domain features: the image is subjected to various mathematical transforms, and the transform-domain coefficients are used as image features, e.g. the wavelet, curvelet, Fourier and Hough transforms. These transforms generally require the image to reach a certain resolution.
5) Algebraic features: an image can be represented as a matrix, and algebraic feature methods apply matrix theory to extract features from the image matrix, e.g. PCA, LDA, ICA and SVD. For low-resolution images, however, such representations suffer large errors.
For specific images such as car logos, road signs and signal posts, acquisition conditions and the environment cause problems such as low resolution, staining and strong illumination effects; their sample images essentially retain only a blurred overall shape structure, from which it is difficult to extract stable point, edge and texture features. None of the above methods is therefore well suited.
Summary of the invention
The object of the invention is to address the deficiencies of the prior art by proposing an image sample library feature representation method based on grayscale distribution statistical information. By building the feature representation needed for a sample library of specific images, it solves feature extraction and representation for sample images with low resolution, staining and strong illumination effects. Gray-level relation information is extracted at a large number of relative positions in the sample images and, through comparisons within each class and between classes, point-pair relation sets that reflect the gray-level distribution characteristics of the class in a statistical sense are selected as the feature representation.
The technical solution adopted by the invention is as follows:
Step 1: For each class of sample image, select N0 position point pairs. According to the size and structural features of the class's sample images, choose a position point pair set Pair_c within the image range:
Pair_c = { Pair_c,i<P1, P2>, i = 1, 2, ..., N0 }
where the subscript c = 1, 2, ..., C, with C the total number of sample image classes in the library; i indexes the i-th position point pair of class c; each pair comprises two position points P1 = (x1, y1) and P2 = (x2, y2); x and y are relative values after normalizing the image plane coordinates; N0 is a natural number.
Position point pairs may be chosen manually or automatically; both modes follow these principles:
1) P1 and P2 must not be adjacent;
2) for any two point pairs Pair_c,i<P1, P2> and Pair_c,j<P1', P2'>, Pk and Pk' (k = 1, 2) must not both be adjacent;
3) for at least 20% of the point pairs, one of P1 and P2 lies on a prominent structure reflecting the image characteristics and the other falls on the background; this can be ensured by manual selection;
4) the position points are distributed uniformly over the image plane.
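The random part of the point-pair selection in step 1 can be sketched in Python. This is a minimal sketch in normalized coordinates: the function name, the `min_dist` approximation of "not adjacent", and the rejection-sampling loop are all assumptions, and the manual-selection and uniform-coverage principles are not enforced here.

```python
import random

def sample_point_pairs(n_pairs, min_dist=0.1, seed=0):
    """Randomly sample position point pairs in normalized [0, 1] coordinates.

    Approximates principle 1 ("P1 and P2 must not be adjacent") by requiring
    a minimum Chebyshev distance between the two points of each pair.
    """
    rng = random.Random(seed)
    pairs = []
    while len(pairs) < n_pairs:
        p1 = (rng.random(), rng.random())
        p2 = (rng.random(), rng.random())
        # Reject pairs whose points fall within min_dist of each other.
        if max(abs(p1[0] - p2[0]), abs(p1[1] - p2[1])) >= min_dist:
            pairs.append((p1, p2))
    return pairs
```

In the embodiment below, 350 of the 400 initial pairs are chosen at random, which this sketch would cover; the 50 manual pairs would be supplied separately.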
Step 2: For all sample images of class c, compute for each sample image s the gray-level mean I in a neighborhood of each position point P.
Step 3: For each sample image s of class c, determine the mutual relation of each position point pair, called the "point-pair relation" for short. From the gray-level means I1 and I2 of the i-th point pair Pair_c,i<P1, P2> in sample image s, compute the point-pair relation R_c,s,i:
R_c,s,i = "P1 > P2", if I1 − I2 > Th
R_c,s,i = "P1 = P2", if |I1 − I2| ≤ Th
R_c,s,i = "P1 < P2", if I2 − I1 > Th
where Th is a threshold greater than zero.
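Steps 2 and 3 can be sketched directly in Python with plain nested lists as images. The function names and the example threshold Th = 10 are assumptions, not taken from the patent:

```python
def neighborhood_mean(img, x, y, r=1):
    """Mean gray level in the (2r+1)x(2r+1) neighborhood of pixel (x, y);
    img is a 2-D list of gray values with rows of equal length (step 2)."""
    h, w = len(img), len(img[0])
    vals = [img[yy][xx]
            for yy in range(max(0, y - r), min(h, y + r + 1))
            for xx in range(max(0, x - r), min(w, x + r + 1))]
    return sum(vals) / len(vals)

def pair_relation(i1, i2, th=10.0):
    """Point-pair relation R from two neighborhood gray means, Th > 0 (step 3)."""
    if i1 - i2 > th:
        return "P1>P2"
    if i2 - i1 > th:
        return "P1<P2"
    return "P1=P2"
```

With the assumed Th = 10, the three gray-mean pairs of the Fig. 1 example (250 vs 40, 42 vs 245, 250 vs 240) yield the three relation kinds "P1>P2", "P1<P2" and "P1=P2".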
Step 4: Determine the point-pair relation set R_c of the class-c sample images and compute the confidences. From the different sample images of class c, determine for each point pair Pair_c,i its class-level relation R_c,i, and compute the confidence Rel_c,i of R_c,i across the class's sample images. The confidence reflects how reliable a point-pair relation is within the class, and is computed as follows:
Step 4-1: Set three counters p1, p2, p3 to zero.
Step 4-2: Choose a point pair Pair_c,i from the set Pair_c.
Step 4-3: For each sample image s = 1, 2, ..., S in the class's library, with S the total number of sample images, compute the gray-level means I1 and I2 in the neighborhoods of the two position points P1 and P2 of Pair_c,i<P1, P2>, and compute the point-pair relation R_c,s,i in sample image s. If R_c,s,i = "P1 > P2", increment p1; if R_c,s,i = "P1 = P2", increment p2; otherwise increment p3.
Step 4-4: After all sample images in the library have been processed, set the class-level relation R_c,i of Pair_c,i to the one of the three relations corresponding to the largest of p1, p2, p3.
Step 4-5: Assign the confidence Rel_c,i of R_c,i the value max(p1, p2, p3)/S.
Step 4-6: If unprocessed point pairs remain in Pair_c, return to step 4-2; otherwise go to step 4-7.
Step 4-7: Once the confidences Rel_c,i of all point pairs Pair_c,i have been computed, select from Pair_c the N1 point pairs whose confidence Rel_c,i exceeds the threshold Th to form a new set Pair_c':
Pair_c' = { Pair_c,i<P1, P2> | Rel_c,i > Th, i = 1, 2, ..., N1 }
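Steps 4-1 to 4-7 amount to a majority vote over the per-sample relations followed by a confidence filter. A hedged sketch, where the function names and the default 0.8 threshold are assumptions (the patent leaves the confidence threshold symbolic until the embodiment):

```python
from collections import Counter

def class_relation_and_confidence(relations):
    """Majority relation of one point pair across a class's samples and its
    confidence max(p1, p2, p3)/S (steps 4-4 and 4-5)."""
    counts = Counter(relations)          # tallies p1, p2, p3 in one pass
    relation, votes = counts.most_common(1)[0]
    return relation, votes / len(relations)

def filter_reliable_pairs(pair_ids, confidences, th=0.8):
    """Step 4-7: keep the point pairs whose confidence exceeds the threshold."""
    return [pid for pid, conf in zip(pair_ids, confidences) if conf > th]
```

For example, a pair that reads "P1>P2" in 8 of 10 samples gets that relation with confidence 0.8.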
Step 5: Compute the correlation between the point-pair relations of the class-c set Pair_c' and the other classes of sample images. The correlation reflects how well the point-pair relations of the set discriminate between classes: the smaller the correlation, the more the class's relations differ from those computed on the other classes, and the easier accurate classification based on this relation-set representation becomes. The point-pair set Pair_c'' with minimal correlation to the other classes is selected as follows:
Step 5-1: For each point pair Pair_c,i<P1, P2> in the class-c set Pair_c', compute the point-pair relation R_c',i of its two position points P1, P2 in the sample images of every other class c' ≠ c, following steps 2 and 3.
Step 5-2: Obtain the correlation CoRel_c,c' between the class-c point-pair set Pair_c' and the class-c' sample images, i.e. the fraction of point pairs whose class-c' relation coincides with their class-c relation:
CoRel_c,c' = (1/N1) Σ_{i=1,...,N1} [ R_c',i = R_c,i ]
where [·] is 1 when the two relations coincide and 0 otherwise.
Step 5-3: According to the correlations between class c and the other classes, select from Pair_c' the N2 point pairs whose relations coincide with the other classes' relations least often, forming a new set Pair_c'' whose correlation with the other classes' point-pair relations is minimal.
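Step 5 can be read as ranking point pairs by how often their class-c relation recurs in the other classes and keeping the least-recurring ones. This reading of the correlation measure is an assumption (the original formula images are not recoverable), and the function and parameter names are illustrative:

```python
def select_least_correlated(class_rel, other_rels, n_keep):
    """Keep the n_keep point pairs whose class-c relation matches the other
    classes' relations least often (lowest correlation, step 5-3).

    class_rel:  {pair_id: relation in class c}
    other_rels: list of {pair_id: relation} dicts, one per other class
    """
    def matches(pid):
        # Number of other classes sharing this pair's class-c relation.
        return sum(rels.get(pid) == class_rel[pid] for rels in other_rels)
    return sorted(class_rel, key=matches)[:n_keep]
```

In the embodiment below this step reduces 200 reliable pairs to the 100 most class-discriminative ones.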
Step 6: Build the feature representation of each sample class in the image sample library. For each class, the feature representation consists of the position point pair set Pair_c'' together with the corresponding point-pair relation set R_c and confidence set Rel_c.
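The per-class representation of step 6 is just the retained pairs bundled with their relations and confidences; a minimal sketch with assumed names and an assumed dict layout:

```python
def build_class_representation(pairs, relations, confidences):
    """Feature representation of one class: the retained point pairs together
    with their class-level relations and confidences (step 6)."""
    return [
        {"pair": pair, "relation": rel, "confidence": conf}
        for pair, rel, conf in zip(pairs, relations, confidences)
    ]
```

A classifier can then score a new image by checking how many of these stored relations it satisfies.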
The beneficial effects of the invention are as follows:
Through the mutual relations of manually and randomly chosen position point pairs, the image's gray-level distribution is described in a statistical sense. For images of low resolution but definite structural features, the method describes and represents the image characteristics well, resists staining, blur and illumination effects, and the feature extraction is efficient and low-dimensional, which benefits subsequent learning and classification algorithms.
Description of drawings
Fig. 1 shows three position point pairs and the three kinds of mutual relations in a sample image of the "Volkswagen" car logo class.
Embodiment
The invention is further described below in conjunction with the accompanying drawings.
Steps 1 to 3 are carried out as described in the summary above.
As shown in Fig. 1, three position point pairs in a sample image s of the "Volkswagen" car logo class c, and their three kinds of mutual relations, are as follows:
Pair_c,1<P1, P2>: I1 = 250, I2 = 40, R_c,s,1 = "P1 > P2"
Pair_c,2<P3, P4>: I3 = 42, I4 = 245, R_c,s,2 = "P3 < P4"
Pair_c,3<P5, P6>: I5 = 250, I6 = 240, R_c,s,3 = "P5 = P6"
Steps 4 to 6 are carried out as described in the summary above.
Embodiment
In this embodiment, frontal vehicle images captured at a city's traffic checkpoints are collected, car logo samples are cropped from them, and a car logo sample library is built with the method of the invention. A total of 2126 checkpoint images were collected, covering 65 common car logo classes, each with at least 10 samples; the resolution is around 50*50 pixels, under both daytime and nighttime illumination.
The implementation steps are as follows:
Step 1: Normalize the car logo samples of each class to a uniform resolution and select 400 position point pairs, of which 50 are chosen manually and 350 at random.
Step 2: For all samples of class c, compute for each sample s the gray-level mean I in the 3×3 neighborhood of each position point P.
Step 3: For each sample s of class c, determine the point-pair relations R_c,s,i from the gray-level means I of the position points in that sample.
Step 4: Compute the point-pair relation set R_c of the class-c samples and the confidences. From the different samples of class c, determine the class-level relation R_c,i of each point pair Pair_c,i and compute the confidence Rel_c,i of R_c,i across the class's samples. Select from Pair_c the 200 point pairs whose confidence Rel_c,i exceeds the threshold 0.8 to form a new set Pair_c'. If fewer than 200 pairs qualify, randomly select additional point pairs and repeat steps 2 to 4.
Step 5: Compute the correlation between the point-pair relations of the class-c set Pair_c' and the other classes of samples, and select the 100 point pairs with minimal correlation to the other classes to form the set Pair_c''.
Step 6: Build the feature representation of each sample class in the car logo sample library: for each class, the position point pair set Pair_c'' together with the corresponding point-pair relation set R_c and confidence set Rel_c.
An Adaboost classifier was trained on the car logo sample library with this feature representation and used to classify 1000 newly collected car logo images: of 500 daytime images, 487 were correctly identified; of 500 nighttime images, 465 were correctly identified.
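The embodiment's parameters and reported results can be summarized in code; all constants below are taken from the text above, while the dict layout and function name are illustrative:

```python
# Parameters of this embodiment, as stated in the text above.
EMBODIMENT = {
    "initial_pairs": 400,         # 50 manual + 350 random (step 1)
    "neighborhood": (3, 3),       # gray-mean window (step 2)
    "confidence_threshold": 0.8,  # step 4 filter
    "reliable_pairs": 200,        # kept after step 4
    "final_pairs": 100,           # kept after step 5
    "classes": 65,
    "min_samples_per_class": 10,
}

def recognition_rate(correct, total):
    """Fraction of correctly identified images."""
    return correct / total
```

The reported results correspond to recognition rates of 487/500 = 97.4% by day and 465/500 = 93.0% by night.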

Claims (1)

1. An image sample library feature representation method based on grayscale distribution statistical information, characterized by comprising the following steps:
Step 1: For each class of sample image, select N0 position point pairs; according to the size and structural features of the class's sample images, choose a position point pair set Pair_c within the image range:
Pair_c = { Pair_c,i<P1, P2>, i = 1, 2, ..., N0 }
where the subscript c = 1, 2, ..., C, with C the total number of sample image classes in the library; i indexes the i-th position point pair of class c; each pair comprises two position points P1 = (x1, y1) and P2 = (x2, y2); x and y are relative values after normalizing the image plane coordinates; N0 is a natural number;
position point pairs may be chosen manually or automatically, and both modes follow these principles:
1) P1 and P2 must not be adjacent;
2) for any two point pairs Pair_c,i<P1, P2> and Pair_c,j<P1', P2'>, Pk and Pk' (k = 1, 2) must not both be adjacent;
3) for at least 20% of the point pairs, one of P1 and P2 lies on a prominent structure reflecting the image characteristics and the other falls on the background, which can be ensured by manual selection;
4) the position points are distributed uniformly over the image plane;
Step 2: for all sample images of class c, compute for each sample image s the gray-level mean I in a neighborhood of each position point P;
Step 3: for each sample image s of class c, determine the mutual relation of each position point pair, called the "point-pair relation" for short; from the gray-level means I1 and I2 of the i-th point pair Pair_c,i<P1, P2> in sample image s, compute the point-pair relation R_c,s,i:
R_c,s,i = "P1 > P2", if I1 − I2 > Th
R_c,s,i = "P1 = P2", if |I1 − I2| ≤ Th
R_c,s,i = "P1 < P2", if I2 − I1 > Th
where Th is a threshold greater than zero;
Step 4: determine the point-pair relation set R_c of the class-c sample images and compute the confidences; from the different sample images of class c, determine for each point pair Pair_c,i its class-level relation R_c,i and compute the confidence Rel_c,i of R_c,i across the class's sample images; the confidence reflects how reliable a point-pair relation is within the class, and is computed as follows:
Step 4-1: set three counters p1, p2, p3 to zero;
Step 4-2: choose a point pair Pair_c,i from the set Pair_c;
Step 4-3: for each sample image s = 1, 2, ..., S in the class's library, with S the total number of sample images, compute the gray-level means I1 and I2 in the neighborhoods of the two position points P1 and P2 of Pair_c,i<P1, P2>, and compute the point-pair relation R_c,s,i in sample image s; if R_c,s,i = "P1 > P2", increment p1; if R_c,s,i = "P1 = P2", increment p2; otherwise increment p3;
Step 4-4: after all sample images in the library have been processed, set the class-level relation R_c,i of Pair_c,i to the one of the three relations corresponding to the largest of p1, p2, p3;
Step 4-5: assign the confidence Rel_c,i of R_c,i the value max(p1, p2, p3)/S;
Step 4-6: if unprocessed point pairs remain in Pair_c, return to step 4-2; otherwise go to step 4-7;
Step 4-7: once the confidences Rel_c,i of all point pairs Pair_c,i have been computed, select from Pair_c the N1 point pairs whose confidence Rel_c,i exceeds the threshold Th to form a new set Pair_c':
Pair_c' = { Pair_c,i<P1, P2> | Rel_c,i > Th, i = 1, 2, ..., N1 }
Step 5: compute the correlation between the point-pair relations of the class-c set Pair_c' and the other classes of sample images; the correlation reflects how well the point-pair relations of the set discriminate between classes: the smaller the correlation, the more the class's relations differ from those computed on the other classes, and the easier accurate classification based on this relation-set representation becomes; the point-pair set Pair_c'' with minimal correlation to the other classes is selected as follows:
Step 5-1: for each point pair Pair_c,i<P1, P2> in the class-c set Pair_c', compute the point-pair relation R_c',i of its two position points P1, P2 in the sample images of every other class c' ≠ c, following steps 2 and 3;
Step 5-2: obtain the correlation CoRel_c,c' between the class-c point-pair set Pair_c' and the class-c' sample images, i.e. the fraction of point pairs whose class-c' relation coincides with their class-c relation:
CoRel_c,c' = (1/N1) Σ_{i=1,...,N1} [ R_c',i = R_c,i ]
where [·] is 1 when the two relations coincide and 0 otherwise;
Step 5-3: according to the correlations between class c and the other classes, select from Pair_c' the N2 point pairs whose relations coincide with the other classes' relations least often, forming a new set Pair_c'' whose correlation with the other classes' point-pair relations is minimal;
Step 6: build the feature representation of each sample class in the image sample library; for each class, the feature representation consists of the position point pair set Pair_c'' together with the corresponding point-pair relation set R_c and confidence set Rel_c.
CN201310087554.5A 2013-03-19 Image sample library feature representation method based on grayscale distribution statistical information — Active, granted as CN103177097B

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310087554.5A CN103177097B (en) 2013-03-19 2013-03-19 Image sample library feature representation method based on grayscale distribution statistical information


Publications (2)

Publication Number Publication Date
CN103177097A true CN103177097A (en) 2013-06-26
CN103177097B CN103177097B (en) 2015-09-16


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105205444A (en) * 2015-08-14 2015-12-30 合肥工业大学 Vehicle logo identification method based on dot pair characteristics
CN107239754A (en) * 2017-05-23 2017-10-10 淮阴工学院 Automobile logo identification method based on sparse sampling intensity profile and gradient distribution

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101196980A (en) * 2006-12-25 2008-06-11 四川川大智胜软件股份有限公司 Method for accurately recognizing high speed mobile vehicle mark based on video
CN101520849A (en) * 2009-03-24 2009-09-02 上海水晶石信息技术有限公司 Reality augmenting method and reality augmenting system based on image characteristic point extraction and random tree classification
US8014035B2 (en) * 2008-09-10 2011-09-06 Xerox Corporation Decoding message data embedded in an image print via halftone dot orientation

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101196980A (en) * 2006-12-25 2008-06-11 四川川大智胜软件股份有限公司 Method for accurately recognizing high speed mobile vehicle mark based on video
US8014035B2 (en) * 2008-09-10 2011-09-06 Xerox Corporation Decoding message data embedded in an image print via halftone dot orientation
CN101520849A (en) * 2009-03-24 2009-09-02 上海水晶石信息技术有限公司 Reality augmenting method and reality augmenting system based on image characteristic point extraction and random tree classification

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BILAL BATAINEH et al.: "A novel statistical feature extraction method for textual images: Optical font recognition", EXPERT SYSTEMS WITH APPLICATIONS *
GE ERZHUANG et al.: "License plate location method based on grayscale distribution and character compactness features", MICROELECTRONICS & COMPUTER *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105205444A (en) * 2015-08-14 2015-12-30 合肥工业大学 Vehicle logo identification method based on dot pair characteristics
CN107239754A (en) * 2017-05-23 2017-10-10 淮阴工学院 Automobile logo identification method based on sparse sampling intensity profile and gradient distribution
CN107239754B (en) * 2017-05-23 2019-10-29 淮阴工学院 Automobile logo identification method based on sparse sampling intensity profile and gradient distribution

Also Published As

Publication number Publication date
CN103177097B (en) 2015-09-16

Similar Documents

Publication Publication Date Title
CN103049763B (en) Context-constraint-based target identification method
CN105389550B (en) It is a kind of based on sparse guide and the remote sensing target detection method that significantly drives
CN103761531B (en) The sparse coding license plate character recognition method of Shape-based interpolation contour feature
CN104036239B (en) Fast high-resolution SAR (synthetic aperture radar) image ship detection method based on feature fusion and clustering
CN103034863B (en) The remote sensing image road acquisition methods of a kind of syncaryon Fisher and multiple dimensioned extraction
CN101551863B (en) Method for extracting roads from remote sensing image based on non-sub-sampled contourlet transform
CN110210362A (en) A kind of method for traffic sign detection based on convolutional neural networks
CN101814144B (en) Water-free bridge target identification method in remote sensing image
CN102831427B (en) Texture feature extraction method fused with visual significance and gray level co-occurrence matrix (GLCM)
CN105528595A (en) Method for identifying and positioning power transmission line insulators in unmanned aerial vehicle aerial images
CN106920243A (en) The ceramic material part method for sequence image segmentation of improved full convolutional neural networks
CN105354568A (en) Convolutional neural network based vehicle logo identification method
CN110222767B (en) Three-dimensional point cloud classification method based on nested neural network and grid map
CN102622607A (en) Remote sensing image classification method based on multi-feature fusion
CN111191628B (en) Remote sensing image earthquake damage building identification method based on decision tree and feature optimization
CN102867195B (en) Method for detecting and identifying a plurality of types of objects in remote sensing image
CN104598885A (en) Method for detecting and locating text sign in street view image
CN103927511A (en) Image identification method based on difference feature description
CN105447492B (en) A kind of Image Description Methods based on two-dimentional local binary patterns
CN103440488A (en) Method for identifying pest
CN109635733B (en) Parking lot and vehicle target detection method based on visual saliency and queue correction
CN109635726A (en) A kind of landslide identification method based on the symmetrical multiple dimensioned pond of depth network integration
CN103679719A (en) Image segmentation method
CN106503748A (en) A kind of based on S SIFT features and the vehicle targets of SVM training aids
CN104680169A (en) Semi-supervised diagnostic characteristic selecting method aiming at thematic information extraction of high-spatial resolution remote sensing image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200825

Address after: Shop No. C003, Area 11, First Floor, Tianze Outlets City Plaza, No. 88 Fucong Road, Nantong Town, Minhou County, Fuzhou City, Fujian Province

Patentee after: Fujian Chaodaquanqiushi Trading Co., Ltd.

Address before: No. 18 Street, Xiasha Higher Education Park, Hangzhou City, Zhejiang Province 310018

Patentee before: ZHEJIANG GONGSHANG University