CN106529391B - Robust speed-limit traffic sign detection and recognition method - Google Patents

Robust speed-limit traffic sign detection and recognition method

Info

Publication number
CN106529391B
CN106529391B CN201610810614.5A
Authority
CN
China
Prior art keywords
significance
map
model
superpixel
saliency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610810614.5A
Other languages
Chinese (zh)
Other versions
CN106529391A (en)
Inventor
赵祥模
刘占文
沈超
王润民
徐江
高涛
杨楠
***
王姣姣
周洲
樊星
林杉
张珂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changan University
Original Assignee
Changan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changan University filed Critical Changan University
Priority to CN201610810614.5A priority Critical patent/CN106529391B/en
Publication of CN106529391A publication Critical patent/CN106529391A/en
Application granted granted Critical
Publication of CN106529391B publication Critical patent/CN106529391B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a robust speed-limit traffic sign detection and recognition method. A multi-feature fusion saliency model is first established; each layer of the model is then updated and iterated to obtain a hierarchical saliency map. The multilayer saliency map is then solved to obtain an optimal saliency map, ROIs are extracted from the optimal saliency map, and the ROIs are fed into a CNN model pre-trained on superpixels, which classifies them and gives the recognition result. The saliency model of the inventive algorithm, based on the prior position and boundary features, better highlights the traffic signs on both sides of the road; the saliency map after multi-level fusion efficiently exploits the structural information of the image while retaining much small-scale detail inside circular signs, keeping the target more complete and uniform, which benefits recognition efficiency and accuracy.

Description

Robust speed-limiting traffic sign detection and identification method
Technical Field
The invention belongs to the field of computer vision, relates to an image recognition method, and particularly relates to a robust speed-limiting traffic sign detection and recognition method.
Background
With the development of the economy and technology, automobiles play an increasingly important role in people's daily lives, and demands on automobiles keep growing, including the gradual emergence of various driving-assistance safety technologies, such as adaptive cruise control systems, collision-avoidance systems, anticipatory collision-perception systems, parking-space recognition systems and night-vision systems. When technology matures sufficiently, safe and efficient driverless operation will finally be realized, i.e., the unmanned vehicle. According to statistics, about one million people die in traffic accidents every year, and fatal accidents caused by human error account for the largest share; unmanned driving can greatly reduce human error and offers higher safety than conventional vehicles. Unmanned vehicles have long been a research hotspot because of their wide application prospects in many fields; in the military field, for example, they can replace personnel in reconnaissance, patrol, search, rescue, material transport and other tasks. The detection and recognition of speed-limit traffic signs on roads is a key technology in unmanned-vehicle research, and its results directly affect the speed and safety of the unmanned vehicle.
At present, many traditional methods exist for traffic sign detection and recognition and are applied to speed-limit signs, such as detection and recognition algorithms based on template matching, on HOG features with an SVM classifier, on LogitBoost waterfall-type cascade classifier combinations, and on BP neural networks. Because traffic signs are affected by service life and the external environment, problems such as fouling, fading, distortion and light reflection easily arise; these recognition methods analyze only certain features of the target image and do not make sufficient use of the effective information in it, so target detection and recognition accuracy is low and the practical detection and recognition effect is poor.
Disclosure of Invention
Aiming at the defects of the prior art, the invention aims to provide a graph model level significance detection model based on prior information constraint and multi-level feature fusion by using a human visual attention mechanism for reference, extract a region of interest (ROI), extract and classify the candidate regions by combining with a CNN (convolutional neural network) to establish a robust speed-limiting traffic sign recognition system, and solve the problem of low target detection and recognition accuracy rate in the prior art.
In order to solve the technical problems, the invention adopts the following technical scheme:
a robust speed-limiting traffic sign detection and identification method specifically comprises the following steps:
the method comprises the following steps: performing superpixel segmentation on the original image by using an over-segmentation method, and mapping the superpixel image obtained after the original image is segmented to obtain an undirected weight map;
step two: according to the undirected weighted graph obtained in the step one, establishing a multi-feature fusion significance model by using the prior position constraint feature and the local feature of the target, wherein the local feature comprises a color feature and a boundary feature;
step three: based on the multi-feature fusion saliency model obtained in step two, establishing a merging rule function for the first-layer vertices in the multilayer saliency map, and updating and iterating the multi-feature fusion saliency model of each layer in the multilayer saliency map to obtain the multilayer saliency map;
step four: solving the multilayer saliency map to obtain an optimal saliency map, and obtaining the ROI image on the optimal saliency map;
step five: selecting part of training samples to train the CNN model to obtain a trained CNN model;
step six: using the trained CNN model to identify the ROI image obtained in the step four;
the invention also has the following distinguishing technical characteristics:
further, the undirected weight graph in step one is represented as G = (V, E), where V is the set of vertices in the undirected graph, represented by the superpixel regions, V = {1, 2, …, i}, and E is the set of edges in the undirected graph.
Further, the specific steps of the second step include:
step 2.1: let x_i be a pixel in the superpixel region R_i; calculate the number of pixels x_i within R_i, denoted N(x_i); assume a prior position x_c, set the position prior probability p_c and a weight value λ, which is generally 0.1–0.2, and calculate the saliency contribution degree L_i, the formula being as follows:
step 2.2: let Ω_i be the neighborhood of the superpixel region R_i; calculate the boundary strength B(R_i, R_j) between R_i and each adjacent superpixel region R_j within Ω_i and sum the boundary strengths; calculate the number N(R_j) of superpixel regions R_j within the neighborhood Ω_i of R_i; calculate the color means C_i, C_j of the superpixel regions R_i, R_j respectively; finally, calculate the neighborhood contrast N_i of each superpixel as the saliency value of each vertex in the undirected weight graph, the calculation formula being as follows:
step 2.3: establishing a multi-feature fusion significance model based on the prior position constraint function obtained in the step 2.1 and the neighborhood contrast obtained in the step 2.2:
s_i = L_i * N_i
further, the third step specifically comprises:
step 3.1: let S(R_1, R_2) be the saliency correlation of the superpixel regions R_1 and R_2; when R_1 and R_2 are adjacent and their saliency correlation S(R_1, R_2) is the minimum within both the neighborhood Ω_1 of R_1 and the neighborhood Ω_2 of R_2, the regions are merged; otherwise they are not merged;
wherein P(R_1, R_2) denotes the merge probability of the superpixel regions R_1 and R_2; s_1, s_2 denote the saliency values of R_1, R_2 in the multi-feature fusion saliency model respectively; s_i, s_j denote the saliency values of the superpixel neighborhoods of R_1, R_2 in the multi-feature fusion saliency model.
Step 3.2: after the areas are combined, the second step is repeated, and the neighborhood contrast N is comparediAnd a priori position constraint function LiUpdating to obtain the significance map S of the next layer1And sequentially iterating until the maximum speed limit sign presents a large-scale structure in the topmost significance map, namely when the boundary strength of the target and the background is greatly different, finally obtaining a multilayer significance map.
Further, the specific steps of the fourth step include:
step 4.1: solving the multilayer saliency map by using a minimized cost function to obtain an optimal saliency map;
step 4.2: a region with an aspect ratio of 1:1 to 2:1 is used as the detection window; the minimum and maximum superpixel regions within the detection window are removed, and the detection window is slid over the optimal saliency map to obtain the ROI image;
further, the specific steps of step 4.1 include:
step 4.1.1: treat each vertex s_i in the undirected weight graph as a random event; the set of random variables S = {s_i | i ∈ V} is defined as a Markov random field on the set V with respect to the neighborhood system Ω; based on the prior position constraint feature and the local feature information, establish the saliency penalty function for each superpixel in the first layer S_0 of the multilayer saliency map:
wherein the term denotes the saliency value, in the layer-l saliency map, corresponding to region i of S_0; V_l denotes the vertex set of the graph model corresponding to S_l; θ is the parameter set required for constructing the original saliency map, comprising the three parameters λ, x_p and k_i, where λ is a weight value, x_p is the assumed prior position, and k_i is the number of superpixel regions obtained by over-segmentation;
step 4.1.2: establish the saliency penalty function for vertex interaction between layers of the multilayer saliency map:
that is, the cost required for the saliency of the layer-l vertex i to become the saliency of the layer-(l+1) vertex j after merging; the edge energy term computes the contrast only at the interface; the remaining terms are the neighborhood local contrasts of the layer-l region i and the layer-(l+1) region p respectively, and the difference of the prior position constraints of the layer-l region i and the layer-(l+1) region p, measured by Euclidean distance; a balance factor β balances the influence of the two cost terms;
step 4.1.3: finally, solving a minimized cost function based on an energy minimization algorithm of graph cut to obtain an optimal significance graph;
further, the concrete steps of the fifth step include:
step 5.1: performing superpixel segmentation on the selected training samples by using an over-segmentation method to obtain a superpixel map of each training sample, randomly selecting superpixels with the proportion of 10-30% in each training sample as the centers of the training samples, and performing extended filling on the superpixel map by using the average pixel value of the boundary of the training samples to obtain a filled image with the same size as the ROI image obtained in the step four;
step 5.2: determining a label of the filling image according to the area overlapping rate of the filling image and the training sample, wherein when the overlapping rate is more than 50%, the label of the filling image is 1; when the overlapping rate is less than 50%, the label is 0;
step 5.3: sending the filling image obtained in the step 5.2 into a CNN model for training to obtain a trained CNN model;
further, the specific steps of the sixth step are as follows: aiming at the target, classifying the ROI image obtained in the step four by adopting a trained CaffeNet model, and giving a final classification result by a classifier;
the beneficial effects of the invention are as follows:
(1) the prior position constraint characteristic and the local characteristic of the target image are comprehensively considered, the information of the target image is fully utilized, and accurate detection of the speed limit sign is facilitated.
(2) By constructing a hierarchical significance model and an optimal significance map, the significance of the speed limit sign is enhanced to the greatest extent, and the significance of the background is reduced, so that the detection effect is improved.
(3) In the training and recognition stage, a visual attention mechanism is introduced into target detection, and a superpixel pre-training strategy is adopted to train the CNN for speed-limit signs, which improves the learning and recognition capability of the CNN model and achieves an effect that traditional methods cannot reach.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a schematic diagram of a hierarchical merge rule.
Fig. 3 is a diagram illustrating a specific hierarchical merging process.
Fig. 4 is a schematic diagram of obtaining ROI based on optimal saliency map.
FIG. 5 is a superpixel pre-training strategy.
Fig. 6 is a CNN framework.
Fig. 7 shows the test results.
The invention is further explained below with reference to the drawings and the detailed description.
Detailed Description
In order to make the purpose, technical scheme and advantages of the present invention clearer, the present invention is further described in detail with reference to the accompanying drawings and embodiments; a robust speed-limiting traffic sign detection and identification method is characterized in that a graph model level significance detection model based on prior information constraint and multilevel feature fusion is provided by using human visual attention mechanism for reference, ROI (region of interest) is extracted, and feature extraction and classification are carried out on candidate regions by combining with CNN (convolutional neural network), so that a robust speed-limiting traffic sign identification system is established; the method specifically comprises the following steps:
the method comprises the following steps: performing superpixel segmentation on an original image by using an over-segmentation method, and mapping the superpixel graph obtained after the original image is segmented to obtain an undirected weight graph, represented as G = (V, E), where V is the set of vertices in the undirected graph, represented by the superpixel regions, V = {1, 2, …, i}, and E is the set of edges in the undirected graph;
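The mapping from a superpixel label image to the undirected weight graph G = (V, E) in step one can be sketched as follows. This is a minimal illustration under assumptions not stated in the patent (4-connectivity between pixels, and a hand-made toy label map standing in for a real over-segmentation result):

```python
import numpy as np

def build_undirected_graph(labels):
    """Map a superpixel label image to an undirected graph G = (V, E):
    V is the set of superpixel ids, E connects superpixels that share
    a pixel border (4-connectivity, an assumption)."""
    V = set(np.unique(labels).tolist())
    E = set()
    # compare each pixel with its right and bottom neighbour
    for a, b in [(labels[:, :-1], labels[:, 1:]),
                 (labels[:-1, :], labels[1:, :])]:
        diff = a != b
        for u, v in zip(a[diff].tolist(), b[diff].tolist()):
            E.add((min(u, v), max(u, v)))  # undirected: store sorted pair
    return V, E

# toy 4x4 "superpixel" label map with three regions
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [2, 2, 1, 1],
                   [2, 2, 2, 2]])
V, E = build_undirected_graph(labels)
# V == {0, 1, 2}; E == {(0, 1), (0, 2), (1, 2)}
```

In a real pipeline the label map would come from an over-segmentation routine such as SLIC rather than being written by hand.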
step two: on the basis of the undirected weighted graph obtained in the step one, establishing a multi-feature fusion significance model by using the prior position constraint feature and the local feature of the target, wherein the local feature comprises a color feature and a boundary feature;
step 2.1: let x_i be a pixel in the superpixel region R_i; calculate the number of pixels x_i within R_i, denoted N(x_i); assume a prior position x_c, set the position prior probability p_c and a weight value λ (generally 0.1–0.2), and calculate the saliency contribution degree L_i according to the following formula:
step 2.2: let Ω_i be the neighborhood of the superpixel region R_i; calculate the boundary strength B(R_i, R_j) between R_i and each adjacent superpixel region R_j within Ω_i and sum the boundary strengths; calculate the number N(R_j) of superpixel regions R_j within the neighborhood Ω_i of R_i; calculate the color means C_i, C_j of the superpixel regions R_i, R_j respectively; finally, calculate the neighborhood contrast N_i of each superpixel as the saliency value of each vertex in the undirected weight graph, the calculation formula being as follows:
step 2.3: establishing a multi-feature fusion significance model based on the prior position constraint function obtained in the step 2.1 and the neighborhood contrast obtained in the step 2.2:
s_i = L_i * N_i
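The fusion s_i = L_i * N_i can be illustrated with a toy sketch. The patent's exact formulas for L_i and N_i are not reproduced in this text, so the Gaussian prior-position falloff and the mean color contrast below are hypothetical stand-ins, chosen only to show how the prior position term and the neighborhood contrast combine multiplicatively:

```python
import numpy as np

# Hypothetical region descriptors: centroid in normalized coordinates
# and mean color. The real L_i / N_i formulas appear as images in the
# patent; these stand-ins only illustrate the fusion step.
regions = {
    0: {"centroid": (0.2, 0.5), "color": np.array([0.9, 0.1, 0.1])},
    1: {"centroid": (0.5, 0.5), "color": np.array([0.5, 0.5, 0.5])},
    2: {"centroid": (0.8, 0.5), "color": np.array([0.6, 0.5, 0.4])},
}
neighbours = {0: [1], 1: [0, 2], 2: [1]}

def prior_position_term(centroid, x_c=(0.5, 0.5), lam=0.15):
    """L_i stand-in: weight falling off with squared distance from the
    assumed prior position x_c; lam plays the role of the weight λ."""
    d2 = sum((a - b) ** 2 for a, b in zip(centroid, x_c))
    return float(np.exp(-d2 / lam))

def neighbourhood_contrast(i):
    """N_i stand-in: mean color distance to adjacent superpixels."""
    ci = regions[i]["color"]
    return float(np.mean([np.linalg.norm(ci - regions[j]["color"])
                          for j in neighbours[i]]))

# multiplicative fusion: s_i = L_i * N_i
saliency = {i: prior_position_term(r["centroid"]) * neighbourhood_contrast(i)
            for i, r in regions.items()}
```

With these stand-ins, the central high-contrast region scores highest, and a far-off low-contrast region scores lowest, which is the qualitative behavior the fused model is meant to produce.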
step three: based on the multi-feature fusion saliency model obtained in step two, a merging rule function for the first-layer vertices in the multilayer saliency map is established, and the multi-feature fusion saliency model of each layer is updated and iterated to obtain the multilayer saliency map S_0, S_1, S_2, …, S_k; the schematic diagram of the hierarchical merging rule is shown in FIG. 2;
step 3.1: let S(R_1, R_2) be the saliency correlation of the superpixel regions R_1 and R_2; when R_1 and R_2 are adjacent and their saliency correlation S(R_1, R_2) is the minimum within both the neighborhood Ω_1 of R_1 and the neighborhood Ω_2 of R_2, the regions are merged; otherwise they are not merged;
wherein P(R_1, R_2) denotes the merge probability of the superpixel regions R_1 and R_2; s_1, s_2 denote the saliency values of R_1, R_2 in the multi-feature fusion saliency model respectively; s_i, s_j denote the saliency values of the superpixel neighborhoods of R_1, R_2 in the multi-feature fusion saliency model.
Step 3.2: after the areas are combined, the second step is repeated, and the neighborhood contrast N is comparediAnd a priori position constraint function LiUpdating to obtain the significance map S of the next layer1And sequentially iterating until the maximum speed limit sign presents a large-scale structure in the topmost significance map, namely when a multilayer significance map S is reached4Then, a multi-layer saliency map is finally obtained, and the process of layer merging is shown in fig. 3 for a specific detection task;
step four: solving the multilayer saliency map to obtain an optimal saliency map, and sliding a detection window over the optimal saliency map to obtain the ROI image; the process of acquiring the ROI for a specific detection task is shown in FIG. 4;
step 4.1: using the equivalence of Markov random fields and the Gibbs distribution, a cost function fusing the multilayer saliency maps is constructed from two types of clique potentials, and the multilayer saliency map is solved by minimizing the cost function to obtain the optimal saliency map;
step 4.1.1: treat each vertex s_i in the undirected weight graph as a random event; the set of random variables S = {s_i | i ∈ V} is defined as a Markov random field on the set V with respect to the neighborhood system Ω; based on the prior position constraint feature and the local feature information, establish the saliency penalty function for each superpixel in the first layer S_0 of the multilayer saliency map:
wherein the term denotes the saliency value, in the layer-l saliency map, corresponding to region i of S_0; V_l denotes the vertex set of the graph model corresponding to S_l; θ is the parameter set required for constructing the original saliency map, comprising the three parameters λ, x_p and k_i, where λ is a weight value, x_p is the assumed prior position, and k_i is the number of superpixel regions obtained by over-segmentation;
step 4.1.2: establish the saliency penalty function for vertex interaction between layers of the multilayer saliency map:
that is, the cost required for the saliency of the layer-l vertex i to become the saliency of the layer-(l+1) vertex j after merging; the edge energy term computes the contrast only at the interface; the remaining terms are the neighborhood local contrasts of the layer-l region i and the layer-(l+1) region p respectively, and the difference of the prior position constraints of the layer-l region i and the layer-(l+1) region p, measured by Euclidean distance; a balance factor β balances the influence of the two cost terms;
Step 4.1.3: finally, solving a minimized cost function based on an energy minimization algorithm of graph cut to obtain an optimal significance graph;
step 4.2: according to the shape characteristics of the speed-limit sign, a region with a length-width ratio of 2:1 is used as the detection window; the minimum and maximum superpixel regions within the detection window are removed, and the detection window is slid over the optimal saliency map to obtain the ROI image;
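Sliding the detection window over the optimal saliency map can be sketched as follows; the concrete window size, the width-to-height reading of the 2:1 ratio, and the mean-saliency threshold are illustrative assumptions not specified at this granularity in the patent:

```python
import numpy as np

def extract_rois(sal_map, win_h=2, win_w=4, thresh=0.8):
    """Slide a 2:1 (width:height) detection window over the optimal
    saliency map and keep windows whose mean saliency exceeds a
    threshold; window size and threshold are illustrative choices."""
    H, W = sal_map.shape
    rois = []
    for y in range(H - win_h + 1):
        for x in range(W - win_w + 1):
            win = sal_map[y:y + win_h, x:x + win_w]
            if win.mean() > thresh:
                rois.append((y, x, win_h, win_w))
    return rois

sal_map = np.zeros((6, 8))
sal_map[2:4, 3:7] = 1.0          # one salient 2x4 block
rois = extract_rois(sal_map)
print(rois)  # [(2, 3, 2, 4)]
```

A production version would additionally discard the smallest and largest superpixel regions inside each window, as the step describes, and run the window at several scales.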
step five: selecting part of the training samples to train the CNN model to obtain a trained CNN model;
step 5.1: performing superpixel segmentation on the selected training samples by using an over-segmentation method to obtain a superpixel graph of each training sample, randomly selecting 10–30% of the superpixels in each training sample as training-sample centers (10% for samples with large areas and 30% for samples with small areas), and performing extended filling with the average pixel value of the superpixels at the training-sample boundary to obtain a filled image of the same size as the ROI image input to the CNN; the specific superpixel pre-training strategy is shown in FIG. 5;
step 5.2: determining a label of the filling image according to the area overlapping rate of the filling image and the training sample, wherein when the overlapping rate is more than 50%, the label of the filling image is 1; when the overlapping rate is less than 50%, the label is 0;
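The labeling rule of step 5.2 can be sketched as below. The (y, x, h, w) box convention and the choice of the training-sample area as the denominator of the overlap rate are assumptions, since the patent does not define them:

```python
def overlap_rate(a, b):
    """Area overlap of box a relative to box b, boxes as (y, x, h, w).
    The denominator (the sample box's area) is an assumption; the
    patent only speaks of an 'area overlapping rate'."""
    ay, ax, ah, aw = a
    by, bx, bh, bw = b
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    return (iy * ix) / (bh * bw)

def label(filled_box, sample_box):
    """Label 1 when the filled image overlaps the training sample by
    more than 50%, else 0 (the rule of step 5.2)."""
    return 1 if overlap_rate(filled_box, sample_box) > 0.5 else 0

sample = (0, 0, 10, 10)
print(label((0, 0, 8, 8), sample))    # 64% overlap -> 1
print(label((6, 6, 10, 10), sample))  # 16% overlap -> 0
```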
step 5.3: sending the filled image obtained in step 5.2 into the CNN model for training to obtain a trained CNN model, as shown in FIG. 6;
step six: identifying the ROI images obtained in step four by using the trained CNN model, specifically: classifying the ROI images obtained in step four with the trained CaffeNet architecture; the CaffeNet architecture consists of 5 convolutional layers, arranged as 9×9×84 conv → 2×2 max pooling → 3×3×126 conv → 2×2 max pooling → 4×4×252 conv → 1×1×66 conv → 3×3 max pooling → 2×2 max pooling; finally, the softmax classifier gives the final classification result.
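The spatial dimensions implied by the convolution stack above can be checked with a short calculation. The strides are assumptions (stride 1 for convolutions, stride equal to the kernel size for pooling, no padding); under these assumptions a 48×48 ROI collapses to 1×1 after the final pooling layer, which is consistent with the listed layer order:

```python
def conv(n, k):      # valid convolution, stride 1, no padding (assumed)
    return n - k + 1

def pool(n, k):      # non-overlapping max pooling, stride k (assumed)
    return n // k

n = 48               # ROI input size (48x48 pixels)
n = conv(n, 9)   # 9x9 conv        -> 40
n = pool(n, 2)   # 2x2 max pooling -> 20
n = conv(n, 3)   # 3x3 conv        -> 18
n = pool(n, 2)   # 2x2 max pooling -> 9
n = conv(n, 4)   # 4x4 conv        -> 6
n = conv(n, 1)   # 1x1 conv        -> 6
n = pool(n, 3)   # 3x3 max pooling -> 2
n = pool(n, 2)   # 2x2 max pooling -> 1
print(n)  # 1
```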
And (3) effect analysis:
in order to verify the effectiveness of the method, GTSDB is selected as the detection data set and GTSRB as the training data set; the training samples range in size from 15×15 pixels to 250×250 pixels, and are over-segmented and extended-filled to match the size of the ROI input to the CNN, which is 48×48 pixels in all cases; the training sample set based on the superpixel CNN pre-training strategy finally comprises more than one million training images and more than 500,000 sample images for cross-validation; in order for the prior probabilities of the left, middle and right regions of the target image to take reasonable proportions, the prior probabilities of the speed-limit signs are set to 26.7%, 22.2% and 52.1% respectively; the GC, BL and BSCA algorithms are selected for comparison with the proposed method, and the experimental results obtained are shown in FIG. 7;
it can be seen that, for the speed limit sign detection task, when the GC algorithm detects a test image with weak contrast as in fig. 7 by using the global contrast, the detection result is very unsatisfactory; the BSCA algorithm is based on prior information with small occurrence probability of boundary targets around the image, so that the significance of a central region is obviously high, and the robustness of a specific detection task is poor; the BL algorithm and the algorithm of the invention both reserve relatively complete image large-scale structure information; meanwhile, the algorithm can better highlight the traffic signs on two sides based on the prior position and the significance model of the boundary characteristics, effectively utilizes the structural information of the image through the significance map after multi-level fusion, and reserves a plurality of small-scale detail information in the circular signs, so that the target is more complete and uniform, and the recognition efficiency and precision are improved; in addition, training and testing are carried out on the SVM classifier based on the same data set, the total recognition accuracy is 95.73%, while the recognition accuracy of the CNN is 97.85%, and the method has better feature extraction and classification capability for specific speed-limiting traffic signs.

Claims (7)

1. A robust speed-limiting traffic sign detection and identification method comprises the following steps: performing superpixel segmentation on the original image by using an over-segmentation method, and mapping the superpixel image obtained after the original image is segmented to obtain an undirected weight map; the method is characterized by further comprising the following steps:
step two: according to the undirected weighted graph obtained in the step one, establishing a multi-feature fusion significance model by using the prior position constraint feature and the local feature of the target, wherein the local feature comprises a color feature and a boundary feature; the method specifically comprises the following steps:
step 2.1: let x_i be a pixel in the superpixel region R_i; calculate the number of pixels x_i within R_i, denoted N(x_i); assume a prior position x_c, set the position prior probability p_c and a weight value λ, which is generally 0.1–0.2, and calculate the saliency contribution degree L_i, the formula being as follows:
step 2.2: let Ω_i be the neighborhood of the superpixel region R_i; calculate the boundary strength B(R_i, R_j) between R_i and each adjacent superpixel region R_j within Ω_i and sum the boundary strengths; calculate the number N(R_j) of superpixel regions R_j within the neighborhood Ω_i of R_i; calculate the color means C_i, C_j of the superpixel regions R_i, R_j respectively; finally, calculate the neighborhood contrast N_i of each superpixel as the saliency value of each vertex in the undirected weight graph, the calculation formula being as follows:
step 2.3: establishing a multi-feature fusion significance model based on the prior position constraint function obtained in the step 2.1 and the neighborhood contrast obtained in the step 2.2:
s_i = L_i * N_i
step three: based on the multi-feature fusion saliency model obtained in step two, establishing a merging rule function for the first-layer vertices in the multilayer saliency map, and updating and iterating the multi-feature fusion saliency model of each layer in the multilayer saliency map to obtain the multilayer saliency map;
step four: solving the multilayer saliency map to obtain an optimal saliency map, and obtaining the ROI image on the optimal saliency map;
step five: selecting part of training samples to train the CNN model to obtain a trained CNN model;
step six: and (4) using the trained CNN model to identify the ROI image obtained in the step four.
2. The robust speed limit traffic sign detection and identification method according to claim 1, wherein the undirected weight graph in step one is represented as G = (V, E), where V is the set of vertices in the undirected graph, represented by the superpixel regions, V = {1, 2, …, i}, and E is the set of edges in the undirected graph.
3. The robust speed limit traffic sign detection and identification method according to claim 1, wherein the specific steps of step three include:
step 3.1: let S(R_1, R_2) be the saliency correlation of the superpixel regions R_1 and R_2; when R_1 and R_2 are adjacent and their saliency correlation S(R_1, R_2) is the minimum within both the neighborhood Ω_1 of R_1 and the neighborhood Ω_2 of R_2, the regions are merged; otherwise they are not merged;
wherein P(R_1, R_2) denotes the merge probability of the superpixel regions R_1 and R_2; s_1, s_2 denote the saliency values of R_1, R_2 in the multi-feature fusion saliency model respectively; s_i, s_j denote the saliency values of the superpixel neighborhoods of R_1, R_2 in the multi-feature fusion saliency model;
step 3.2: region(s)After merging, repeating the second step to obtain the neighborhood contrast NiAnd a priori position constraint function LiUpdating to obtain the significance map S of the next layer1And sequentially iterating until the maximum speed limit sign presents a large-scale structure in the topmost significance map, namely when the boundary strength of the target and the background is greatly different, finally obtaining a multilayer significance map S0,S1,…,Sk
4. The robust speed limit traffic sign detection and identification method according to claim 1, wherein the concrete steps of the fourth step include:
step 4.1: solving the multilayer saliency map by using a minimized cost function to obtain an optimal saliency map;
step 4.2: taking a region with an aspect ratio of 1:1 to 2:1 as the detection window, removing overly small and overly large superpixel regions within the detection window, and sliding the detection window over the optimal significance map to obtain the ROI image.
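One simplified reading of step 4.2 is an aspect-ratio and area filter over candidate bounding boxes of salient regions; the surviving boxes become ROI candidates. The function name and the area thresholds below are illustrative assumptions, not values from the patent:

```python
def roi_candidates(boxes, min_area=100, max_area=10000):
    """boxes: list of (x, y, w, h) bounding boxes of salient regions.
    Keep boxes whose height:width ratio is between 1:1 and 2:1 and whose
    area falls inside [min_area, max_area]."""
    rois = []
    for (x, y, w, h) in boxes:
        area = w * h
        ratio = h / w
        if min_area <= area <= max_area and 1.0 <= ratio <= 2.0:
            rois.append((x, y, w, h))
    return rois
```

For example, a 50x50 box passes, a 5x5 box fails the minimum-area test, and a 50x200 box fails the 2:1 ratio bound.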
5. The robust speed limit traffic sign detection and identification method according to claim 4, characterized in that the specific steps of step 4.1 include:
step 4.1.1: treating each vertex si in the undirected weight graph as a random event, the set of random variables S = {si | i ∈ V} is defined as a Markov random field on the set V with respect to the neighborhood system Ω; based on the prior position constraint feature and the local feature information, a saliency penalty function is established for each superpixel in the first layer S0 of the multilayer saliency map:
wherein the term denotes the saliency value corresponding to region i in the layer-l saliency map; Vl denotes the vertex set of the graph model corresponding to Sl; θ is the parameter set required for constructing the original saliency map, comprising the three parameters λ, xp and ki, where λ is a weight value, xp is the assumed prior position, and ki is the number of superpixel regions obtained by over-segmentation;
step 4.1.2: establishing a significance penalty function of vertex interaction between layers in the multilayer significance graph:
namely, the cost required for the saliency of vertex i in layer l to become, after merging, the saliency of vertex j in layer l+1; wherein the edge energy term computes the contrast only at the region interface; the remaining terms are the neighborhood local contrasts of region i in layer l and region p in layer l+1, respectively, and the difference of the prior position constraints of region i in layer l and region p in layer l+1, measured by Euclidean distance; the balance factor β balances the influence of the two cost terms;
step 4.1.3: finally, solving the minimized cost function with a graph-cut-based energy minimization algorithm to obtain the optimal significance map.
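The two cost terms of claim 5 can be made concrete with a hedged sketch: a per-layer data term penalising disagreement with the contrast-plus-position evidence λ·Ni + (1 − λ)·Li, and an inter-layer term combining contrast difference with a β-weighted distance of the position constraints. The patent minimises the total with graph cut; here the energy is only evaluated, and all function names are illustrative:

```python
def layer_energy(sal, N, L, lam=0.5):
    """Data cost of one layer: sal, N, L map region id -> saliency value,
    neighbourhood local contrast N_i, and prior position constraint L_i."""
    return sum((sal[i] - (lam * N[i] + (1 - lam) * L[i])) ** 2 for i in sal)

def interlayer_energy(links, N_l, N_l1, L_l, L_l1, beta=0.3):
    """Cost of merging l-layer vertex i into (l+1)-layer vertex p:
    contrast difference plus beta-weighted distance of position constraints.
    links: list of (i, p) vertex pairs across the two layers."""
    return sum(abs(N_l[i] - N_l1[p]) + beta * abs(L_l[i] - L_l1[p])
               for (i, p) in links)
```

A graph-cut solver (e.g. a max-flow/min-cut library) would then search the labelling that minimises the sum of the two energies across all layers.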
6. The robust speed limit traffic sign detection and identification method according to claim 1, wherein the concrete steps of the fifth step include:
step 5.1: performing superpixel segmentation on the selected training samples with the over-segmentation method to obtain a superpixel map of each training sample; randomly selecting 10-30% of the superpixels in each training sample as training sample centers; and padding the superpixel map outward with the average pixel value of the training sample boundary to obtain a filled image of the same size as the ROI image obtained in step four;
step 5.2: determining the label of the filled image according to the area overlap rate between the filled image and the training sample: when the overlap rate is greater than 50%, the label of the filled image is 1; when the overlap rate is less than 50%, the label is 0;
step 5.3: feeding the filled images processed in step 5.2 into the CNN model for training to obtain the trained CNN model.
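The labelling rule of step 5.2 can be sketched as follows, taking one plausible reading of "area overlap rate": intersection area over patch area, for axis-aligned boxes. The helper names are hypothetical:

```python
def overlap_rate(patch, gt):
    """patch, gt: (x, y, w, h) axis-aligned boxes; returns
    intersection area divided by the patch area."""
    ix = max(0, min(patch[0] + patch[2], gt[0] + gt[2]) - max(patch[0], gt[0]))
    iy = max(0, min(patch[1] + patch[3], gt[1] + gt[3]) - max(patch[1], gt[1]))
    return (ix * iy) / (patch[2] * patch[3])

def make_label(patch, gt):
    # label 1 when the overlap rate exceeds 50 %, else 0 (step 5.2)
    return 1 if overlap_rate(patch, gt) > 0.5 else 0
```

A patch fully inside the annotated sign region gets label 1; a patch that only grazes a corner of it gets label 0.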
7. The robust speed limit traffic sign detection and identification method according to claim 1, wherein the specific step of step six is: classifying the ROI images obtained in step four with the trained CaffeNet model for the target, the classifier giving the final classification result.
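Step six reduces to a forward pass of the trained network over each ROI followed by an arg-max over the class scores. The sketch below uses a `forward` callable as a hypothetical stand-in for the trained CaffeNet model (the real model would be loaded through the Caffe framework, which is not reproduced here):

```python
def classify_rois(rois, forward):
    """rois: iterable of ROI images; forward: callable returning a list of
    class scores for one ROI. Returns the arg-max class index per ROI."""
    results = []
    for roi in rois:
        scores = forward(roi)
        results.append(max(range(len(scores)), key=scores.__getitem__))
    return results

# toy stand-in scoring two classes: background (0) vs speed-limit sign (1)
fake_forward = lambda roi: [0.2, 0.8] if sum(roi) > 1 else [0.9, 0.1]
```

With `fake_forward`, an ROI whose pixel sum exceeds the toy threshold is assigned class 1, the rest class 0; the real classifier would instead emit one score per speed-limit category.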
CN201610810614.5A 2016-09-08 2016-09-08 A kind of speed limit road traffic sign detection of robust and recognition methods Active CN106529391B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610810614.5A CN106529391B (en) 2016-09-08 2016-09-08 A kind of speed limit road traffic sign detection of robust and recognition methods

Publications (2)

Publication Number Publication Date
CN106529391A CN106529391A (en) 2017-03-22
CN106529391B true CN106529391B (en) 2019-06-18

Family

ID=58343556

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610810614.5A Active CN106529391B (en) 2016-09-08 2016-09-08 A kind of speed limit road traffic sign detection of robust and recognition methods

Country Status (1)

Country Link
CN (1) CN106529391B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909059A (en) * 2017-11-30 2018-04-13 Central South University Bionic-vision-based traffic sign detection and recognition method for collaborative complex city scenarios
CN108898078A (en) * 2018-06-15 2018-11-27 University of Shanghai for Science and Technology Real-time traffic sign detection and recognition method using a multi-scale deconvolution neural network
CN111383473B (en) * 2018-12-29 2022-02-08 Aptiv Electronics (Suzhou) Co., Ltd. Adaptive cruise system based on traffic sign speed limit indication
CN116978233B (en) * 2023-09-22 2023-12-26 Shenzhen Urban Transport Planning &amp; Design Institute Co., Ltd. Active variable speed limit method for accident-prone areas

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226820A (en) * 2013-04-17 2013-07-31 Nanjing University of Science and Technology Improved two-dimensional maximum entropy segmentation night vision image fusion target detection algorithm
CN104462502A (en) * 2014-12-19 2015-03-25 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences Image retrieval method based on feature fusion
CN105260737A (en) * 2015-11-25 2016-01-20 Wuhan University Automatic laser scanning data physical plane extraction method fusing multi-scale characteristics
CN105868774A (en) * 2016-03-24 2016-08-17 Xidian University Vehicle logo recognition method based on selective search and convolutional neural network
CN105930868A (en) * 2016-04-20 2016-09-07 Beihang University Low-resolution airport target detection method based on hierarchical reinforcement learning

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Robust Chinese Traffic Sign Detection and Recognition with Deep Convolutional Neural Network; Rongqiang Qian et al.; 2015 11th International Conference on Natural Computation (ICNC); 20160111; pp. 791-796
Traffic Sign Recognition Using Deep Convolutional Networks and Extreme Learning Machine; Yujun Zeng et al.; International Conference on Intelligent Science and Big Data Engineering; 20151022; pp. 272-280
Visual saliency detection based on multi-scale and multi-channel mean; Lang Sun et al.; Multimedia Tools and Applications; 20160131; pp. 667-684
New advances in convolutional neural network classification models for pattern recognition; Hu Zhengping et al.; Journal of Yanshan University; 20150731; pp. 283-291
Weak-contrast [title truncated in source] based on visual attention mechanism; Liu Zhanwen et al.; China Journal of Highway and Transport; 20160831; pp. 124-133

Similar Documents

Publication Publication Date Title
CN112101221B (en) Method for real-time detection and identification of traffic signal lamp
CN102509091B (en) Airplane tail number recognition method
CN111178213A (en) Aerial photography vehicle detection method based on deep learning
CN106529391B (en) A kind of speed limit road traffic sign detection of robust and recognition methods
CN111104903A (en) Depth perception traffic scene multi-target detection method and system
CN108427919B (en) Unsupervised oil tank target detection method based on shape-guided saliency model
CN112084890A (en) Multi-scale traffic signal sign identification method based on GMM and CQFL
Li et al. Cluster naturalistic driving encounters using deep unsupervised learning
CN112990065A (en) Optimized YOLOv5 model-based vehicle classification detection method
CN111462140A (en) Real-time image instance segmentation method based on block splicing
Fan et al. Pavement cracks coupled with shadows: A new shadow-crack dataset and a shadow-removal-oriented crack detection approach
CN111950583A (en) Multi-scale traffic signal sign identification method based on GMM clustering
Kuchkorov et al. Traffic and road sign recognition using deep convolutional neural network
CN115984537A (en) Image processing method and device and related equipment
BARODI et al. Improved deep learning performance for real-time traffic sign detection and recognition applicable to intelligent transportation systems
CN104331708B (en) A kind of zebra crossing automatic detection analysis method and system
CN113468994A (en) Three-dimensional target detection method based on weighted sampling and multi-resolution feature extraction
CN116630702A (en) Pavement adhesion coefficient prediction method based on semantic segmentation network
CN112949595A (en) Improved pedestrian and vehicle safety distance detection algorithm based on YOLOv5
Lee et al. Pix2Pix-based data augmentation method for building an image dataset of black ice
Goel et al. Enhancement of Potholes Detection using SSD Algorithm
Shi et al. Traffic Sign Instances Segmentation Using Aliased Residual Structure and Adaptive Focus Localizer
Chen et al. Road segmentation via iterative deep analysis
Tang et al. A Comparison of Road Damage Detection Based on YOLOv8
Ng et al. Real-Time Detection of Objects on Roads for Autonomous Vehicles Using Deep Learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant