CN105825203B - Ground arrow marking detection and recognition method based on point-pair matching and geometric matching - Google Patents

Ground arrow marking detection and recognition method based on point-pair matching and geometric matching

Info

Publication number
CN105825203B
CN105825203B (application CN201610200615.8A / CN201610200615A)
Authority
CN
China
Prior art keywords
image
candidate region
point
matching
template
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201610200615.8A
Other languages
Chinese (zh)
Other versions
CN105825203A (en)
Inventor
Li Jianhua (李建华)
Wei Jinyu (魏瑾瑜)
Lu Huchuan (卢湖川)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN201610200615.8A
Publication of CN105825203A
Application granted
Publication of CN105825203B


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/50 - Context or environment of the image
    • G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/582 - Recognition of traffic signs
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/23 - Clustering techniques
    • G06F 18/232 - Non-hierarchical techniques
    • G06F 18/2321 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 - Non-hierarchical techniques with fixed number of clusters, e.g. K-means clustering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/50 - Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HOG]; by summing image-intensity values; Projection analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/757 - Matching configurations of points or features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 - Target detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/09 - Recognition of logos

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the field of computer vision, relates to image processing, and in particular to a shape matching method. It is characterized by extracting and identifying ground arrow markings from a video under test. First, inverse perspective mapping is applied to obtain a top view of each frame. Second, the image is segmented in HSV space by K-means clustering to isolate connected regions whose brightness and color meet the requirements, and these connected regions are screened by geometric size. Next, edges are extracted from each candidate region and a local multi-scale HOG feature is computed at every edge point. Finally, point-pair matching between template and candidate region is performed with this feature, and geometric matching is then applied to the matching result to identify the class of the region. The advantage of the invention is that it overcomes the occlusion, wear, deformation, rotation and interference from other markings that arise in ground arrow marking detection; even under these adverse conditions the invention still achieves a good recognition rate.

Description

Ground arrow marking detection and recognition method based on point-pair matching and geometric matching
Technical field
The invention belongs to the field of computer vision and relates to knowledge in image processing, in particular to a ground arrow marking detection and recognition method based on point-pair matching and geometric matching.
Background art
Over the past two decades, ground traffic marking recognition, as an important component of autonomous driving and intelligent transportation, has attracted many researchers in the field of computer vision, and many efficient and practical techniques and methods have emerged. Among these markings, ground arrow markings carry important traffic guidance information, so the detection and recognition of this kind of marking are particularly important. Representative articles published since 2004 are described below.
Rebut, J. et al., in "Image segmentation and pattern recognition for road marking analysis", International Symposium on Industrial Electronics, 2004, describe the extracted candidate regions with Fourier descriptors and identify them with a KNN classifier. Fourier descriptors, however, demand very complete candidate-region contours, so the method is unsuited to occlusion and to badly damaged candidate regions. Suchitra, S. et al., in "A practical system for road marking detection and recognition", TIP, 2009, first decompose a candidate region into several pieces, segmenting the edges by the magnitude and sign of the gradients in the x and y directions. A Hough transform is then applied to each edge segment, and a peak analysis of the resulting Hough space yields the angle of each edge, deciding whether it tilts to the left or to the right. The templates are then divided into blocks and the tilt directions of the left and right edges of each block are summarized; candidate regions are tested against the summary, the image blocks satisfying the conditions are found and combined, and the region is judged to be an arrow marking or not and classified. This method, however, depends on the completeness of the arrow edges, so its recognition of worn arrow markings is poor.
Yuhang He et al., in "Using edit distance and junction feature to detect and recognize arrow road marking", Intelligent Transportation Systems (ITSC), 2014, propose a junction feature: each candidate region is expressed as a string of nodes encoded by the position and angle of each node, and the similarity between candidate region and template image is computed from the encoding. This method combines the local structure of the arrow marking with its overall structure and is fairly robust to wear and occlusion, but its recognition rate for arrow markings with overall deformation is low.
Summary of the invention
In view of the deficiencies of the prior art, the present invention provides a ground arrow marking detection and recognition method based on point-pair matching and geometric matching. In video captured by an on-board camera, the method can correctly detect ground arrow markings and classify them accurately and quickly, even when an arrow is partly occluded by another vehicle, when an arrow is so far from the vehicle that deformation occurs, when a change of driving direction tilts the extracted arrow, or when the video contains other ground traffic markings (lane lines, zebra crossings, etc.).
In order to achieve the above object, the technical scheme of the invention is as follows:
A ground arrow marking detection and recognition method based on point-pair matching and geometric matching. The method applies inverse perspective mapping to each frame of the vehicle video to obtain a top view of the road scene. Exploiting the brightness difference between ground traffic markings and the road surface, K-means clustering in the HSV space of the top view extracts connected regions whose brightness and saturation meet the requirements. These connected regions are then screened by the standard sizes of arrow markings to obtain arrow candidate regions, which completes arrow detection. For the recognition stage, the invention proposes a method combining point-pair matching with geometric matching, making full use of both the local and the overall shape information of the arrow marking. On real roads, the straight-left-right marking and the straight-left marking rarely occur, and the left-turn and right-turn markings are mirror-symmetric, so only one of each pair needs to be detected and recognized; the present invention therefore detects and recognizes only the straight marking (S), the left-turn marking (L) and the straight-right marking (SR). Fig. 1 is the system block diagram of the invention. The implementation steps are as follows:
First step: inverse perspective mapping
Because of the viewing angle of the on-board camera, the captured ground traffic markings suffer severe perspective distortion, which degrades arrow recognition. To eliminate this effect, the invention first applies inverse perspective mapping to each frame of the road image to obtain a top view of the road scene, so that ground traffic markings no longer undergo severe deformation. We realize the inverse perspective mapping with a three-line method, first establishing the vehicle body coordinate system and the camera coordinate system. In the body coordinate system, Xv points forward along the vehicle's longitudinal axis, Yv points to the right, perpendicular to the longitudinal axis, and Zv points upward, perpendicular to the longitudinal axis. Assume the ground is flat; the origin of the camera coordinate system is the camera's optical center, its rotation angles about the Xv, Yv and Zv axes are ψ, φ and θ in turn, and the coordinates of the optical center in the body coordinate system are t = (l, d, h). If a point in the body coordinate system is pv(xv, yv, zv) and its coordinates in the camera coordinate system are pc(xc, yc, zc), the two are related by the rigid transformation

pc = R(pv − t)    (1)

where R is the rotation matrix determined by ψ, φ and θ. It can be seen that realizing the inverse perspective mapping requires computing the six extrinsic parameters ψ, φ, θ and t = (l, d, h).
Take any line L on the flat ground that is parallel to the Xv axis at a distance a from it, with parametric equation xv = s, yv = a, zv = 0, where s is any real number. By the pinhole imaging model, combined with formula (1), line L has a parametric equation in the image plane coordinate system, formula (2), where dx and dy are the horizontal and vertical scale factors (camera intrinsics), u and v are the coordinates of the image plane coordinate system, i and j are the coordinates of the pixel coordinate system, and fi and fj are the focal lengths in the i and j directions (camera intrinsics). In image coordinates the line has a vanishing point (uh, vh), given by formula (3).
If there are three lines on the ground parallel to L, the three lines share the same vanishing point. Using this equality, with the camera intrinsics known, the extrinsic parameters ψ, φ, θ and t = (l, d, h) can be obtained. Substituting them into formula (3), the corresponding point on the body coordinate plane is found for every point in image coordinates, realizing the conversion from the image coordinate plane to the body coordinate plane; this completes the inverse perspective mapping and yields the top view of the road scene.
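The coordinate change of formula (1) can be sketched in a few lines of numpy. This is an illustrative sketch only: the patent names the three angles ψ, φ, θ about Xv, Yv, Zv but does not state their composition order, so the order Rz·Ry·Rx below is an assumption.

```python
import numpy as np

def rotation(psi, phi, theta):
    # Rotations about Xv, Yv, Zv in turn; the composition order Rz @ Ry @ Rx
    # is an assumption -- the patent only names the three angles.
    cx, sx = np.cos(psi), np.sin(psi)
    cy, sy = np.cos(phi), np.sin(phi)
    cz, sz = np.cos(theta), np.sin(theta)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def body_to_camera(p_v, t, psi, phi, theta):
    # Formula (1): p_c = R (p_v - t), expressing a body-frame point in the
    # frame of the camera whose optical center sits at t = (l, d, h).
    R = rotation(psi, phi, theta)
    return R @ (np.asarray(p_v, float) - np.asarray(t, float))
```

With all angles zero the camera frame is a pure translation of the body frame, and a point placed at the optical center maps to the camera-frame origin regardless of the angles.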
Second step: image segmentation
Ideally, a ground arrow marking is a white connected region with an obvious brightness difference from the road surface. Because of occlusion and interference from other regions, however, a binarization with a fixed threshold can hardly extract these regions. To avoid losing the brightness information of ground arrow markings, the invention segments the image by K-means clustering in a particular color space. Clustering is a method of grouping targets. K-means assumes that every target has its own position in the space and partitions the targets by the principle that each target should be as close as possible to the positions of the targets in its own cluster and as far as possible from those in other clusters. K-means requires specifying the number of clusters in advance, together with a distance measure between two target positions.
Since the HSV color space is an approximately uniform color space, it matches the human visual system better than RGB: the Euclidean distance between two points in HSV space is approximately proportional to the perceived difference. In HSV space the saturation component S and the value (brightness) component V describe the color and shape characteristics of the image respectively, the two components are independent, and V is unrelated to the color information of the image. Therefore the invention converts the RGB color image into HSV space, recombines the saturation component S and the value component V, and applies K-means clustering with the Euclidean distance measure to segment the recombined image into three layers. The layer whose pixels meet the requirements, that is, the layer containing all connected regions whose color saturation and brightness satisfy the conditions, is the final segmentation result, and the connected regions it contains are taken as candidate regions.
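A minimal numpy sketch of this segmentation step follows, assuming 8-bit S and V channels. The deterministic brightness-quantile seeding of the three cluster centers is an assumption for reproducibility; the patent does not specify how the centers are initialized.

```python
import numpy as np

def kmeans(X, k, iters=20):
    # Plain K-means with Euclidean distance; centers seeded at brightness
    # quantiles of X (an assumption -- the patent leaves seeding unspecified).
    idx = np.argsort(X[:, 1])
    centers = X[idx[np.linspace(0, len(X) - 1, k).astype(int)]].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

def segment_sv(S, V, k=3):
    # Recombine S and V into a two-channel image, cluster every pixel on its
    # (saturation, value) pair, and keep the cluster with the highest mean
    # brightness as the marking layer.
    X = np.stack([S.ravel(), V.ravel()], axis=1).astype(float)
    labels, centers = kmeans(X, k)
    return (labels == centers[:, 1].argmax()).reshape(V.shape)
```

The connected components of the returned mask are the candidate regions passed on to the screening step.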
Third step: candidate region screening
In China, the sizes of ground traffic markings must meet the unified national standard. In the invention, we screen the candidate regions by geometric size parameters. Since arrow length is distorted by distance, we select only arrow width, aspect ratio and area to exclude non-arrow regions. In real road conditions, however, an arrow marking may be incomplete because of occlusion, so we do not screen strictly by the national standard sizes; instead we choose intervals around the standard sizes as the screening criteria.
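The screening predicate can be sketched as below. The interval bounds are hypothetical placeholders: the patent screens width, aspect ratio and area against intervals around the national-standard arrow size (its Table 1), whose actual values are not reproduced here.

```python
import numpy as np

def region_props(mask):
    # Bounding-box width/height and pixel area of one binary region.
    ys, xs = np.nonzero(mask)
    return xs.max() - xs.min() + 1, ys.max() - ys.min() + 1, int(mask.sum())

def is_arrow_candidate(mask, w_range=(20, 60), ar_range=(2.0, 8.0),
                       area_range=(200, 4000)):
    # All three intervals are illustrative assumptions, not Table 1 values.
    w, h, area = region_props(mask)
    ar = h / w  # arrows are taller than wide in the top view
    return bool(w_range[0] <= w <= w_range[1] and
                ar_range[0] <= ar <= ar_range[1] and
                area_range[0] <= area <= area_range[1])
```

In practice each connected component of the segmentation mask would be tested with this predicate and non-arrow regions discarded.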
Fourth step: edge detection
The edge of a connected region carries most of its geometric information, so we represent the candidate connected regions screened in the previous step by their edges. Since the edges of connected regions exhibit burrs, and the noise inevitable in images would affect the subsequent matching results, we first apply a dilation operation to the candidate region to reduce edge burrs, and then run the Canny edge detection algorithm, chosen for its effective noise suppression and edge localization accuracy, on the candidate region to obtain relatively smooth edges. The Canny algorithm optimizes the product of signal-to-noise ratio and localization, giving a near-optimal approximation; this edge detection method therefore solves well the problem of burrs on the extracted candidate-region edges.
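The dilate-then-extract-edges idea can be sketched in pure numpy. A 4-neighbourhood morphology and a crude interior-boundary edge stand in here for the Canny detector, which in practice would be assumed to come from an image-processing library; this sketch only illustrates the morphological cleanup.

```python
import numpy as np

def dilate(mask):
    # One 4-neighbourhood binary dilation (closes thin gaps and burrs).
    p = np.pad(mask.astype(bool), 1)
    return (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
            | p[1:-1, :-2] | p[1:-1, 2:])

def erode(mask):
    p = np.pad(mask.astype(bool), 1)
    return (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
            & p[1:-1, :-2] & p[1:-1, 2:])

def region_edge(mask):
    # Pixels of the dilated region that touch the background: a crude
    # stand-in for Canny on a clean binary candidate region.
    m = dilate(mask)
    return m & ~erode(m)
```

On a binary candidate mask the result is a one-pixel-wide ring of edge pixels around the dilated region.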
Fifth step: feature extraction and feature set construction
The local shape information around an edge point can be described by a feature. We use a local multi-scale HOG feature to describe the connected-region edges. Division of a rectangular HOG block: an image block (Block) consists of several units (Cells), and a unit consists of several pixels. Gradient-orientation statistics are computed independently in each cell, the resulting histogram taking gradient orientation as its horizontal axis. The orientation range may be 0 to 180 degrees or 0 to 360 degrees; for the ground arrow marking detection of this invention, choosing 0 to 180 degrees gives better results. This gradient range is further divided into several orientation bins, each corresponding to one histogram column. In the invention, we use 9 bins.
The local multi-scale HOG feature takes the edge point extracted above as center, selects rectangular blocks at several scales, computes gradient-orientation statistics to obtain HOG feature vectors at different local scales, and combines these vectors; the combined feature vector carries rich local features at the edge point. The specific steps of feature extraction and feature set construction are:
5.1) Local HOG extraction at an edge point: compute the gradient orientation at all edge points; choose any edge point A; crop an image block of size a × a centered on A and divide it into 4 units; divide the gradient-orientation range into k bins, giving a 4 × k = 4k-dimensional local HOG feature vector;
5.2) Take an image block of size 2a × 2a centered on A and divide it evenly into 4 sub-blocks of size a × a; compute a 4k-dimensional feature vector for each sub-block by the method of 5.1, and concatenate the four 4k-dimensional vectors into a 4k × 4 = 16k-dimensional local HOG feature vector;
5.3) Crop an image block of size 4a × 4a centered on A and divide it evenly into 4 sub-blocks of size 2a × 2a (that is, 16 blocks of size a × a); compute a 16k-dimensional feature vector for each 2a × 2a sub-block by the method of 5.2, and concatenate the four 16k-dimensional vectors into a 16k × 4 = 64k-dimensional local HOG feature vector;
5.4) Concatenate the local HOG feature vectors at the three scales above to form the 4k + 16k + 64k = 84k-dimensional feature vector of edge point A; the connected region is represented by these 84k-dimensional feature vectors;
5.5) Construct the template library, containing the straight marking S, the left-turn marking L and the straight-right marking SR. For the arrow images in the template library and the candidate regions in the test image, obtain the edge points and the 84k-dimensional feature vector of each edge point by the steps above. The feature vectors of all edge points of a candidate region constitute the feature set of that candidate region; the feature vectors of all edge points of a template image constitute the feature set of that template.
To realize arrow marking matching, we construct a template library. This template library covers the six kinds of ground arrow markings: left turn, right turn, straight, straight-left, straight-right and straight-left-right. Every arrow marking in the template library undergoes the two steps of edge detection and feature extraction described above, and the extracted feature vectors form the feature set used in the subsequent matching process.
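Steps 5.1 to 5.4 can be sketched as follows, assuming precomputed per-pixel gradient magnitude and orientation arrays (orientation in degrees) and hard binning with no block normalization, details the patent does not specify.

```python
import numpy as np

def block_hog(mag, ang, k=9):
    # Step 5.1: split one square block into 4 cells; per-cell unsigned
    # orientation histogram over [0, 180), magnitude-weighted -> 4k dims.
    h2, w2 = mag.shape[0] // 2, mag.shape[1] // 2
    feats = []
    for r in (0, h2):
        for c in (0, w2):
            m = mag[r:r + h2, c:c + w2].ravel()
            b = (ang[r:r + h2, c:c + w2].ravel() % 180 // (180 / k)).astype(int)
            hist = np.zeros(k)
            np.add.at(hist, b, m)
            feats.append(hist)
    return np.concatenate(feats)

def multiscale_hog(mag, ang, y, x, a=16, k=9):
    # Steps 5.1-5.4: blocks of size a, 2a, 4a centered on edge point (y, x),
    # each tiled by a-sized sub-blocks -> 4k + 16k + 64k = 84k dims.
    feats = []
    for scale in (a, 2 * a, 4 * a):
        half = scale // 2
        sub = [block_hog(mag[r:r + a, c:c + a], ang[r:r + a, c:c + a], k)
               for r in range(y - half, y + half, a)
               for c in range(x - half, x + half, a)]
        feats.append(np.concatenate(sub))
    return np.concatenate(feats)
```

With the embodiment's a = 16 and k = 9 the result is the 36 + 144 + 576 = 756-dimensional vector described later.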
Sixth step: point-pair matching
For every test image, the steps above are carried out to obtain the candidate regions and their edges, and feature extraction at each candidate-region edge yields the corresponding feature vectors. Every candidate region in a test image must be matched with all templates in the template library. In the invention, we first perform point-pair matching with the edge-point feature vectors, selecting the edge points with identical local structure and excluding outliers, which improves the efficiency of the further matching.
First, suppose M and N edge points have been extracted from the candidate region and from a given template respectively, each edge point corresponding to one feature vector. We build an M × N matrix D storing the Euclidean distances between the two groups of feature vectors; the Euclidean distance represents the difference between two feature vectors. Element dij of D holds the Euclidean distance between the feature vector of the i-th edge point of the candidate region and the feature vector of the j-th edge point of the template image. Next, point-pair matching is performed on the distance matrix D. Let Di be the i-th row of D; sorting the elements of Di in ascending order gives the vector D'i, and the ratios of neighboring elements of D'i form the vector R = [r1, ..., rj, ..., rN−1], with rj = d'j+1/d'j. If rk is the first value in R greater than a preset threshold α, the template image edge points corresponding to the first k values of D'i are the points matched to the i-th candidate-region edge point. The same operation is applied to every row of D, finding the points matched to each candidate-region edge point and forming the matching pairs in the direction from candidate region to template image. This matching is bidirectional, so each column of D is treated in the same way, giving the matching pairs in the direction from template image to candidate region; the intersection of the two groups of pairs forms the matched-pair set, which is the final result of the point-pair matching. This matched-pair set will be used for the geometric matching.
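The row-wise ratio test and the bidirectional intersection can be sketched as below. The reading rj = d'j+1/d'j (so that the first ratio exceeding α marks the gap after the accepted matches) is an assumption, since the printed formula for rj is not reproduced in the text.

```python
import numpy as np

def matches_for_row(drow, alpha=1.5):
    # Sort one row of D ascending; the first neighbouring ratio
    # d'[j+1]/d'[j] exceeding alpha marks the cut, and the points before
    # the jump are accepted.
    order = np.argsort(drow)
    d = drow[order]
    r = d[1:] / np.maximum(d[:-1], 1e-12)
    jump = np.nonzero(r > alpha)[0]
    k = jump[0] + 1 if jump.size else d.size
    return set(order[:k].tolist())

def point_pair_match(D, alpha=1.5):
    # Bidirectional matching: intersect candidate->template pairs (rows)
    # with template->candidate pairs (columns).
    fwd = {(i, j) for i in range(D.shape[0])
           for j in matches_for_row(D[i], alpha)}
    bwd = {(i, j) for j in range(D.shape[1])
           for i in matches_for_row(D[:, j], alpha)}
    return sorted(fwd & bwd)
```

The threshold α = 1.5 is a placeholder; the patent sets α by experiment.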
Seventh step: geometric matching
The point-pair matching of the previous step improves the validity of the further matching, but a candidate region may share identical local structure with several templates in the library, so point-pair matching alone cannot give an accurate recognition result. In the invention, we therefore perform geometric matching on the basis of the point-pair matching, analyzing the overall geometric structure formed by the matched points to identify the class of the candidate region accurately.
The matched-pair set obtained in the previous step contains the matching pairs between the candidate region and all template images; half of the points of the pairs lie in the candidate region and half in the template image. These two groups of points form two scatter plots, one in the test image and one in the template image. The geometric center c of a scatter plot is computed as the mean of the point coordinates. Let pi and pj be any two points of a scatter plot, let d(pi, c) and d(pj, c) denote the lengths of the vectors from c to pi and from c to pj, and let θij be the angle between the two vectors (θij ∈ [0, π]). We represent a scatter plot of K0 points by two K0 × K0 lower triangular matrices, defined as:
G = {gij | i ∈ [1, K0−1]; j ∈ [0, i−1]}    (6)
Θ = {θij | i ∈ [1, K0−1]; j ∈ [0, i−1]}    (7)
where gij = min(d(pi, c)/d(pj, c), d(pj, c)/d(pi, c)). Clearly, the two matrices are invariant to rotation and scale and are affected only by the geometric structure of the scatter plot.
Let Gc and Gt be the G matrices of the candidate region and of the template image respectively, and Θc and Θt their Θ matrices. We use the differences of the elements of Θc and Θt to filter Gc and Gt, so that abnormal matching pairs can be excluded. The filtering principle, formula (8), retains an element pair only when the angle difference |θcij − θtij| is below γ, where γ is a threshold set by experiment. The invention then uses the Euclidean distance between the filtered Gc and Gt to measure their difference, formula (9), where s is the number of non-zero matrix elements. Suppose there are K template images in the template library; every candidate region computes an e value against each of these K templates. If the minimum of these values is below a preset threshold, the template image giving the minimum is considered to be of the same class as the candidate region. The threshold setting differs between templates and is obtained by experiment.
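Formulas (6) and (7) and the filtered comparison can be sketched as follows. Two details are assumptions, since the printed formulas (8) and (9) are not fully reproduced: the filter keeps entries with angle difference below γ, and the distance e is taken as the Euclidean norm of the kept entries of Gc − Gt divided by their count s.

```python
import numpy as np

def scatter_descriptors(pts):
    # Formulas (6)-(7): lower-triangular matrices of distance ratios g_ij
    # and inter-point angles theta_ij about the centroid c.
    pts = np.asarray(pts, float)
    v = pts - pts.mean(axis=0)
    d = np.linalg.norm(v, axis=1)
    K = len(pts)
    G, T = np.zeros((K, K)), np.zeros((K, K))
    for i in range(1, K):
        for j in range(i):
            G[i, j] = min(d[i] / d[j], d[j] / d[i])
            T[i, j] = np.arccos(np.clip(v[i] @ v[j] / (d[i] * d[j]), -1, 1))
    return G, T

def geometry_distance(Gc, Tc, Gt, Tt, gamma=np.pi / 2):
    # Keep only lower-triangular entries whose angle difference is below
    # gamma (formula (8)); normalize the Euclidean distance of the kept
    # entries by their count s (one reading of formula (9)).
    keep = (np.abs(Tc - Tt) < gamma) & np.tri(len(Gc), k=-1).astype(bool)
    s = max(int(keep.sum()), 1)
    return float(np.linalg.norm(Gc[keep] - Gt[keep])) / s
```

Because both descriptors are rotation- and scale-invariant, a rotated, scaled and translated copy of the same scatter yields a distance of zero.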
The beneficial effect of the invention is that it overcomes the occlusion, deformation and rotation that frequently occur in the detection and recognition of ground arrow markings, with a high recognition rate.
Description of the drawings
Fig. 1 is the system block diagram;
Fig. 2(a) is the initial image; Fig. 2(b) is the top view after inverse perspective mapping; Fig. 2(c) is the binary image after K-means clustering;
Fig. 3(a) shows the detection and recognition results when multiple arrows appear at once; Fig. 3(b) when the arrow is worn; Fig. 3(c) when the arrow is severely deformed; Fig. 3(d) when the arrow is tilted; Fig. 3(e) when other road markings interfere.
Specific embodiment
Step 1: On real roads, the straight-left-right marking and the straight-left marking rarely occur, and the left-turn and right-turn markings are mirror-symmetric, so only one of each pair needs to be detected and recognized; the present invention therefore detects and recognizes only the straight marking (S), the left-turn marking (L) and the straight-right marking (SR).
Step 2: Assume the road in front of the camera is flat. Let I = {(u, v)} ∈ E² denote the acquired initial image and V = {(xv, yv, zv)} ∈ E³ the image after inverse perspective mapping. The scene top view we want is W = {(x, y, 0)} ∈ V. The inverse perspective mapping can be regarded as the conversion from the image coordinate plane to the body coordinate plane, that is, representing the same scene at different coordinate positions. In the invention, we set a region of interest (the half of the area near the vehicle in the body coordinate system) and obtain the transformed image W from formulas (1) and (2). The pixel value of a pixel in W equals that of the related pixel in I; if the position in I corresponding to a pixel of W falls outside the acquired image, that pixel of W is set to black. As shown in Fig. 2(b), the top view of the original image is obtained after inverse perspective mapping.
Step 3: The Euclidean distance between two points in HSV color space is approximately proportional to the perceived difference, and the space has an important property: the value component V is unrelated to the color information of the image, that is, the saturation and brightness of an image in HSV space are relatively independent. Therefore the invention first converts the RGB color image into HSV space, takes out the V layer and the S layer and recombines them into a two-channel image, and applies K-means clustering to this recombined image. In the invention, we set the number of segmentation layers to 3; clustering into 3 clusters under the Euclidean distance measure gave the best clustering effect. Finally, the layer whose pixel values exceed 200 is chosen as the final segmentation result, and the connected regions it contains are taken as candidate regions. This method makes full use of the brightness difference and color information between ground arrow markings and the road surface to detect the candidate regions in each frame of the video.
Step 4: The segmented image contains many connected regions. To screen them, we compute the width, aspect ratio and area of each connected region and perform a preliminary screening of these three quantities according to Table 1. As shown in Fig. 2(c), after K-means clustering and geometric-size screening, a binary image containing all candidate regions is obtained.
Table 1. Candidate region screening
Step 5: To make the extracted edges smoother, we apply a dilation operation to the candidate region to reduce burrs and other interference, and then run Canny edge detection on the candidate region to obtain smoother edges.
Step 6: Compute the gradient orientation at all edge points detected in the previous step and choose any edge point A. First crop an image block of size 16 × 16 centered on A, divide it into 4 units and divide the gradient-orientation range evenly into 9 bins, giving a 4 × 9 = 36-dimensional HOG feature vector. Then crop an image block of size 32 × 32 centered on A, divide it evenly into 4 sub-blocks of 16 × 16, each sub-block further divided into 4 units; computing the 36-dimensional local feature vector of each sub-block and concatenating the four vectors gives a 36 × 4 = 144-dimensional feature vector. Next, take a 64 × 64 image block centered on A, divide it evenly into 4 sub-blocks of 32 × 32 and compute a 144-dimensional feature vector for each sub-block as above; concatenating the four 144-dimensional vectors gives a 36 × 4 × 4 = 576-dimensional vector. Finally, the local HOG features at these three scales are concatenated to form the 36 + 144 + 576 = 756-dimensional feature vector of the edge point.
Step 6: For the arrow images in the template library and the candidate regions in the test image, the edge points and the 756-dimensional feature vector of each edge point are obtained according to the above steps. The feature vectors of all edge points in a candidate region constitute the feature vector set of that candidate region; likewise, the feature vectors of all edge points of a template image constitute the feature vector set of that template. Next, each candidate region is matched, point to point, against every template in the template library, in order to select the edge points in the template set that share the same local structure with the candidate region. Point-to-point matching is performed according to the steps of Table 2.
Table 2 Point-to-point matching
Step 7: Half of the matched points obtained in the previous step belong to the candidate region and form a scatter plot there; similarly, the other half form a scatter plot on the template image. Next, geometry matching is performed between the scatter plot of the candidate region and that of each template. For any two points of a scatter plot we compute the angle θij between them and the ratio gij of their distances to the center, and construct the matrices G and Θ according to formulas (6) and (7). In this way we obtain the two matrices Gc and Θc of the candidate region and Gt and Θt of the template image. Then, using the element-wise differences between Θc and Θt, the elements of Gc and Gt are screened according to formulas (8) and (9): if the angle difference exceeds 90 degrees, the corresponding matching point pairs are regarded as abnormal. Finally, the Euclidean distance between the screened Gc and Gt matrices is computed; this operation is carried out between the candidate region and every template image. If the minimum Euclidean distance is below a threshold, the corresponding template is considered to have the same class as the candidate region; otherwise the candidate region is not an arrow marking.
Fig. 3 shows the detection and identification results of the present invention on ground arrow markings under various conditions. Our method can handle multiple arrows appearing simultaneously, worn arrow markings, overall deformation of the marking, rotation of the marking, and interference from other surface markings. Even under these unfavorable conditions, the present invention still achieves a good recognition rate.

Claims (1)

1. A ground arrow marking detection and recognition method based on point-to-point matching and geometry matching, characterized in that the method detects and recognizes only the straight-ahead marking S, the left-turn marking L, and the straight-or-right-turn marking SR, and comprises the following steps:
The first step, inverse projection mapping
1.1) In the world coordinate system, Xv points forward along the vehicle's longitudinal axis, Yv points to the right perpendicular to the longitudinal axis, and Zv points upward perpendicular to the longitudinal axis. Assuming the ground is flat, the origin of the camera coordinate system is the camera optical center, and the rotation angles around the Xv, Yv, Zv axes are ψ, φ and θ respectively; the coordinates of the optical center in the vehicle-body coordinate system are t = (l, d, h). Let a point in the vehicle-body coordinate system be pv(xv, yv, zv), with coordinates pc(xc, yc, zc) in the camera coordinate system; the relationship between the two is:
Wherein,
1.2) Take any straight line L on the flat ground that is parallel to the Xv axis at a distance a from it; its parametric equation in the vehicle-body coordinate system is xv = s, yv = a, zv = 0, where s is any real number. With known camera intrinsic parameters, the parametric equation of line L in the image-plane coordinate system is:
Wherein dx, dy, fi and fj are camera intrinsic parameters: dx is the horizontal scale coefficient, dy the vertical scale coefficient, fi the focal length in the i direction and fj the focal length in the j direction; u and v are the coordinates of the image-plane coordinate system; i and j are the coordinates of the pixel coordinate system;
1.3) Choose two more straight lines on the road surface parallel to line L, and compute, according to formulas (4) and (5), the vanishing point (uh, vh) of line L and these two lines in the image coordinate system:
These three lines share the same vanishing point; using this equality relation, the extrinsic parameters ψ, φ, θ and t = (l, d, h) are computed;
1.4) Substituting the extrinsic parameters ψ, φ, θ and t = (l, d, h) into formula (3) gives, for each point on the vehicle-body coordinate plane, its corresponding point in image coordinates, realizing the conversion from the image coordinate plane to the vehicle-body coordinate plane and completing the inverse projection mapping to obtain a top view of the road scene;
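The calibration of step 1.3) hinges on the vanishing point shared by the images of parallel road lines; formulas (4)-(5) appear only as images in the patent and are not reproduced here, but the underlying geometric operation can be sketched: the vanishing point (uh, vh) is the intersection of two image lines, computed with homogeneous coordinates.

```python
import numpy as np

def vanishing_point(p1, p2, q1, q2):
    """Intersection of line p1-p2 and line q1-q2; all points given as (u, v).
    Uses the cross-product identities of projective geometry."""
    to_h = lambda p: np.array([p[0], p[1], 1.0])
    l1 = np.cross(to_h(p1), to_h(p2))     # line through p1 and p2
    l2 = np.cross(to_h(q1), to_h(q2))     # line through q1 and q2
    x = np.cross(l1, l2)                  # homogeneous intersection point
    return x[:2] / x[2]                   # back to (u, v)
```

Two lane boundaries that are parallel on the road converge in the image; feeding two points from each into this function recovers their common vanishing point, from which the extrinsic angles are then solved as the claim describes.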
Second step, image segmentation
Transform the RGB color image into the HSV color space, recombine the saturation component S and the luminance component V, and perform K-means clustering with the Euclidean distance measure to divide the image into three layers; the layer whose pixels meet the requirement is taken as the final segmentation result, and the connected regions it contains are taken as candidate regions;
Third step, screening candidate regions
Screen the candidate regions using the standard sizes of arrow markings, excluding non-arrow regions by arrow width, aspect ratio and area;
4th step, edge detection
Apply a dilation operation to the candidate regions to reduce edge burrs; perform Canny edge detection on the candidate regions to obtain smoother edges;
Fifth step, feature extraction and feature set construction
5.1) Local HOG features are extracted at the edge points. The gradient direction is computed for all edge points. Take any edge point A; an image block of size a × a centered on A is extracted and divided into 4 units, and the range of gradient directions is divided into k sub-intervals, giving a 4 × k = 4k-dimensional local HOG feature vector;
5.2) An image block of size 2a × 2a centered on edge point A is taken and divided equally into four a × a sub-blocks; for each sub-block a 4k-dimensional feature vector is computed according to 5.1), and the four 4k-dimensional vectors are concatenated into a 4k × 4 = 16k-dimensional local HOG feature vector;
5.3) An image block of size 4a × 4a centered on edge point A is extracted and divided equally into four 2a × 2a sub-blocks; for each sub-block a 16k-dimensional feature vector is computed according to 5.2), and the four 16k-dimensional vectors are concatenated into a 16k × 4 = 64k-dimensional local HOG feature vector;
5.4) The local HOG feature vectors at the above three scales are concatenated to form the 4k + 16k + 64k = 84k-dimensional feature vector of edge point A, and the connected region is represented by these 84k-dimensional feature vectors;
5.5) Construct the template library, which contains the straight-ahead marking S, the left-turn marking L and the straight-or-right-turn marking SR. For the arrow images in the template library and the candidate regions in the test image, the edge points and the 84k-dimensional feature vector of each edge point are obtained according to steps 5.1), 5.2), 5.3) and 5.4). The feature vectors of all edge points in a candidate region constitute the feature vector set of that candidate region; the feature vectors of all edge points of a template image constitute the feature vector set of that template;
Sixth step, point-to-point matching
6.1) Suppose M and N edge points have been extracted from the candidate region and a given template respectively. Construct an M × N matrix D storing the Euclidean distances between the feature vectors of the candidate region and of the template image. Di is the i-th row of matrix D; sorting the elements of Di in ascending order gives D'i. The ratios of adjacent elements of D'i constitute R = [r1, …, rj, …, rN-1], i.e., rj = d'j+1 / d'j. If rk is the first value in R greater than a preset threshold α, then the template-image edge points corresponding to the first k values of D'i are the points matched to the i-th candidate-region edge point;
6.2) Every row of matrix D is processed according to step 6.1), obtaining the template-image points matched to each candidate-region edge point and forming the matching point pairs in the candidate-region-to-template direction;
6.3) Every column of matrix D is processed according to step 6.1), obtaining the matching point pairs in the template-to-candidate-region direction; the intersection of the row and column matching results completes the point-to-point matching between the template image and the candidate region;
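Steps 6.1)-6.3) can be sketched as below: per row (and per column) of the distance matrix D, sort ascending, find the first adjacent-element ratio above α, accept the preceding k entries as matches, and keep the intersection of both directions. The threshold α = 1.5 is a placeholder; the patent leaves it as a preset value.

```python
import numpy as np

def row_matches(D, alpha=1.5):
    """Set of (i, j) index pairs accepted by the per-row ratio test of 6.1)."""
    pairs = set()
    for i, row in enumerate(D):
        order = np.argsort(row)
        d = row[order]
        ratios = d[1:] / d[:-1]            # r_j = d'_{j+1} / d'_j
        k = len(d)                         # default: accept all entries
        big = np.nonzero(ratios > alpha)[0]
        if len(big):
            k = big[0] + 1                 # first k sorted entries match
        pairs.update((i, int(order[j])) for j in range(k))
    return pairs

def point_matches(D, alpha=1.5):
    """Bidirectional matching: intersection of row-wise and column-wise tests."""
    rows = row_matches(D, alpha)
    cols = {(i, j) for (j, i) in row_matches(D.T, alpha)}
    return rows & cols
```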
7th step, geometry matching
The point-matching results form scatter plots on the template image and the test image respectively. For any two points of a scatter plot, compute the angle θij between them and the ratio gij of their distances to the center, and construct two K0 × K0 lower triangular matrices representing a scatter plot of K0 points:
G={ gij|i∈[1,K0-1];j∈[0,i-1]} (6)
Θ={ θij|i∈[1,K0-1];j∈[0,i-1]} (7)
Gc and Gt are the G matrices of the candidate region and the template image respectively, and Θc and Θt are their Θ matrices; using the element-wise differences between Θc and Θt, Gc and Gt are filtered to exclude abnormal points, yielding the screened matrices;
The Euclidean distance is used to measure the difference between the screened matrix of the candidate region and the screened matrices of all template images; if the minimum Euclidean distance is less than a preset threshold γ, the corresponding template image is considered to belong to the same category as the candidate region.
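The seventh step can be sketched as follows, assuming the two scatters are equal-length, order-matched point lists produced by step six. The pairwise-angle and centre-distance-ratio matrices of formulas (6)-(7) are stored as flattened lower triangles, entries whose angles differ by more than 90 degrees are dropped, and the rest are compared by Euclidean distance; the acceptance threshold γ remains a preset value.

```python
import numpy as np

def geometry(pts):
    """Flattened lower-triangle pairwise angles (Theta) and centre-distance
    ratios (G) of a scatter, per formulas (6)-(7)."""
    pts = np.asarray(pts, float)
    c = pts.mean(axis=0)                       # scatter centre
    r = np.linalg.norm(pts - c, axis=1)        # distance of each point to centre
    i, j = np.tril_indices(len(pts), -1)       # index pairs with i > j
    d = pts[i] - pts[j]
    theta = np.arctan2(d[:, 1], d[:, 0])       # angle between the two points
    g = r[i] / np.maximum(r[j], 1e-9)          # ratio of distances to centre
    return g, theta

def geometry_distance(cand_pts, tmpl_pts, max_dtheta=np.pi / 2):
    """Screen out pairs whose angles differ by > 90 degrees, then compare the
    remaining G entries by Euclidean distance."""
    gc, tc = geometry(cand_pts)
    gt, tt = geometry(tmpl_pts)
    keep = np.abs(tc - tt) <= max_dtheta       # exclude abnormal pairs
    return np.linalg.norm(gc[keep] - gt[keep])
```

An identical scatter yields distance zero, while a scatter with a displaced point yields a positive distance, which is the discrimination the final threshold γ acts on.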
CN201610200615.8A 2016-03-30 2016-03-30 Based on point to matching and the matched ground arrow mark detection of geometry and recognition methods Expired - Fee Related CN105825203B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610200615.8A CN105825203B (en) 2016-03-30 2016-03-30 Based on point to matching and the matched ground arrow mark detection of geometry and recognition methods

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610200615.8A CN105825203B (en) 2016-03-30 2016-03-30 Based on point to matching and the matched ground arrow mark detection of geometry and recognition methods

Publications (2)

Publication Number Publication Date
CN105825203A CN105825203A (en) 2016-08-03
CN105825203B true CN105825203B (en) 2018-12-18

Family

ID=56526615

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610200615.8A Expired - Fee Related CN105825203B (en) 2016-03-30 2016-03-30 Based on point to matching and the matched ground arrow mark detection of geometry and recognition methods

Country Status (1)

Country Link
CN (1) CN105825203B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10481609B2 (en) 2016-12-09 2019-11-19 Ford Global Technologies, Llc Parking-lot-navigation system and method
CN106651963B (en) * 2016-12-29 2019-04-26 清华大学苏州汽车研究院(吴江) A kind of installation parameter scaling method of the vehicle-mounted camera for driving assistance system
CN108437986B (en) * 2017-02-16 2020-07-03 上海汽车集团股份有限公司 Vehicle driving assistance system and assistance method
GB2569803B (en) * 2017-12-22 2021-11-24 Novarum Dx Ltd Analysis of a captured image to determine a test outcome
CN108898078A (en) * 2018-06-15 2018-11-27 上海理工大学 A kind of traffic sign real-time detection recognition methods of multiple dimensioned deconvolution neural network
CN111161140B (en) * 2018-11-08 2023-09-19 银河水滴科技(北京)有限公司 Distortion image correction method and device
CN109902718B (en) * 2019-01-24 2023-04-07 西北大学 Two-dimensional shape matching method
CN109934169A (en) * 2019-03-13 2019-06-25 东软睿驰汽车技术(沈阳)有限公司 A kind of Lane detection method and device
CN111783807A (en) * 2019-04-28 2020-10-16 北京京东尚科信息技术有限公司 Picture extraction method and device and computer-readable storage medium
JP7289723B2 (en) * 2019-05-23 2023-06-12 日立Astemo株式会社 Object recognition device
CN111210456B (en) * 2019-12-31 2023-03-10 武汉中海庭数据技术有限公司 High-precision direction arrow extraction method and system based on point cloud
CN111476157B (en) * 2020-04-07 2020-11-03 南京慧视领航信息技术有限公司 Lane guide arrow recognition method under intersection monitoring environment
CN111932621B (en) * 2020-08-07 2022-06-17 武汉中海庭数据技术有限公司 Method and device for evaluating arrow extraction confidence
CN112464737B (en) * 2020-11-04 2022-02-22 浙江预策科技有限公司 Road marking detection and identification method, electronic device and storage medium
CN113158976B (en) * 2021-05-13 2024-04-02 北京纵目安驰智能科技有限公司 Ground arrow identification method, system, terminal and computer readable storage medium
CN114440834B (en) * 2022-01-27 2023-05-02 中国人民解放军战略支援部队信息工程大学 Object space and image space matching method of non-coding mark
CN114549649A (en) * 2022-04-27 2022-05-27 江苏智绘空天技术研究院有限公司 Feature matching-based rapid identification method for scanned map point symbols

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104361350A (en) * 2014-10-28 2015-02-18 奇瑞汽车股份有限公司 Traffic sign identification system
CN104463105A (en) * 2014-11-19 2015-03-25 深圳市腾讯计算机***有限公司 Guide board recognizing method and device
CN105069419A (en) * 2015-07-27 2015-11-18 上海应用技术学院 Traffic sign detection method based on edge color pair and characteristic filters

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060098877A1 (en) * 2004-11-09 2006-05-11 Nick Barnes Detecting shapes in image data
US7831098B2 (en) * 2006-11-07 2010-11-09 Recognition Robotics System and method for visual searching of objects using lines

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104361350A (en) * 2014-10-28 2015-02-18 奇瑞汽车股份有限公司 Traffic sign identification system
CN104463105A (en) * 2014-11-19 2015-03-25 深圳市腾讯计算机***有限公司 Guide board recognizing method and device
CN105069419A (en) * 2015-07-27 2015-11-18 上海应用技术学院 Traffic sign detection method based on edge color pair and characteristic filters

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Pedestrian detection method based on multi-scale and multi-shape HOG features; Niu Jie et al.; Computer Technology and Development; September 2011; Vol. 21, No. 9; pp. 99-102, 106 *
Dynamic distance transform strategy for Voronoi diagram generation of ribbon objects; Li Chengming et al.; Remote Sensing Information; January 2000; No. 1; pp. 6-11 *

Also Published As

Publication number Publication date
CN105825203A (en) 2016-08-03

Similar Documents

Publication Publication Date Title
CN105825203B (en) Based on point to matching and the matched ground arrow mark detection of geometry and recognition methods
CN106709436B (en) Track traffic panoramic monitoring-oriented cross-camera suspicious pedestrian target tracking system
CN109886896B (en) Blue license plate segmentation and correction method
CN105488454B (en) Front vehicles detection and ranging based on monocular vision
CN105046196B (en) Front truck information of vehicles structuring output method based on concatenated convolutional neutral net
Kong et al. General road detection from a single image
CN112819094B (en) Target detection and identification method based on structural similarity measurement
CN103984946B (en) High resolution remote sensing map road extraction method based on K-means
CN104063702B (en) Three-dimensional gait recognition based on shielding recovery and partial similarity matching
CN110175576A (en) A kind of driving vehicle visible detection method of combination laser point cloud data
CN105005766B (en) A kind of body color recognition methods
CN107463918A (en) Lane line extracting method based on laser point cloud and image data fusion
CN106682586A (en) Method for real-time lane line detection based on vision under complex lighting conditions
CN105160691A (en) Color histogram based vehicle body color identification method
CN106127137A (en) A kind of target detection recognizer based on 3D trajectory analysis
CN105787481B (en) A kind of object detection method and its application based on the potential regional analysis of Objective
CN107016362B (en) Vehicle weight recognition method and system based on vehicle front windshield pasted mark
CN102663357A (en) Color characteristic-based detection algorithm for stall at parking lot
Liu et al. Real-time recognition of road traffic sign in motion image based on genetic algorithm
CN109584281A (en) It is a kind of that method of counting is layered based on the Algorithm for Overlapping Granule object of color image and depth image
CN104050447A (en) Traffic light identification method and device
CN104217217A (en) Vehicle logo detection method and system based on two-layer classification
CN108647664B (en) Lane line detection method based on look-around image
CN104298969A (en) Crowd scale statistical method based on color and HAAR feature fusion
CN112464731B (en) Traffic sign detection and identification method based on image processing

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20181218
