CN105930791B - Pavement traffic sign recognition method based on multi-camera fusion and DS evidence theory - Google Patents
- Publication number
- CN105930791B CN105930791B CN201610244995.5A CN201610244995A CN105930791B CN 105930791 B CN105930791 B CN 105930791B CN 201610244995 A CN201610244995 A CN 201610244995A CN 105930791 B CN105930791 B CN 105930791B
- Authority
- CN
- China
- Prior art keywords
- image
- area
- evidence
- interest
- proposition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/582—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/251—Fusion techniques of input or preprocessed data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/09—Recognition of logos
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
Abstract
The present invention relates to a pavement traffic sign recognition method based on multi-camera fusion and DS evidence theory, and belongs to the technical field of image processing. The method recognizes five classes of road traffic guide signs: straight, left turn, right turn, straight-or-left, and straight-or-right, and is divided into a training part and a test part. In the training stage, histogram of oriented gradients (HOG) features are extracted from training samples, and the sample features and class labels are fed into a support vector machine (SVM) for classification training, yielding a trained classifier. In the test stage, a region of interest is obtained through image preprocessing, its HOG features are extracted and fed into the classifier for classification, and, from the classifier output, the confidence that the sign to be recognized belongs to each class is obtained; combining the DS evidence theory data fusion method with a maximum-belief decision rule then determines the final recognition result. By fusing the information from multiple cameras with a data fusion method based on DS evidence theory, the invention can recognize pavement traffic signs stably and efficiently.
Description
Technical field
The invention belongs to the technical field of image processing and relates to a pavement traffic sign recognition method based on multi-camera fusion and DS evidence theory.
Background art
As an important part of intelligent transportation systems, intelligent vehicles will play an increasingly important role in people's lives, and traffic sign recognition is in turn an important part of an intelligent vehicle's environment perception. With the development of intelligent vehicle technology, the driving decision system needs to know relevant information about the vehicle's environment in order to make correct decisions.
It is well known that different lanes at an intersection serve different functions: some are for turning right, some for turning left. The lane a vehicle occupies directly determines its permissible direction of travel, yet existing navigation systems cannot identify lane directions, so an intelligent vehicle may find itself in the wrong lane when it reaches an intersection and be forced to violate traffic rules. Pavement traffic sign recognition exists precisely to tell the intelligent vehicle the permissible driving directions of its current lane and whether a lane change is needed, enhancing the guidance capability of navigation equipment and providing the decision system of the intelligent vehicle with more accurate road environment information. Existing research on traffic sign detection and recognition concentrates mainly on roadside signs; research on pavement-painted signs, which carry equally rich road information, is comparatively scarce, and most of it is based on a single camera, which leads to larger detection and recognition errors.
Summary of the invention
In view of this, the purpose of the present invention is to provide a pavement traffic sign recognition method based on multi-camera fusion and DS evidence theory. The method combines the recognition results of multiple cameras using the DS evidence theory data fusion method and effectively fuses them, so that the multiple classes of pavement traffic signs in the driving environment of an intelligent vehicle can be recognized stably and efficiently, providing the decision system of the intelligent vehicle with more road environment information and enhancing the guidance capability of navigation equipment.
In order to achieve the above objectives, the invention provides the following technical scheme:
A pavement traffic sign recognition method based on multi-camera fusion and DS evidence theory, comprising the following steps:
Step 1: divide the training set images into six classes: straight, left turn, right turn, straight-or-left, straight-or-right, and negative samples containing no guide sign, and label each image with its class;
Step 2: extract the histogram of oriented gradients (Histogram of Oriented Gradients, HOG) features of the sample images; feed the HOG features and class labels of the sample images into a support vector machine (SVM) for training, obtaining a trained classifier;
Step 3: acquire images of the road ahead from the multiple on-board cameras of the intelligent vehicle, and take the lower half of each image as the image to be processed; convert it to a grayscale image and apply median filtering; detect the lane lines of the current lane using the Hough transform and straight-line fitting, and take the region between the two lane lines as the preliminary region of interest;
Step 4: binarize the region between the lane lines of the preliminary region of interest, perform morphological filtering with opening and closing operations, then run edge detection on the binary image; fill the closed contour with the largest area in the edge-detected image, obtaining a region of interest that may contain a pavement traffic sign;
Step 5: extract the HOG features of the above region of interest on the original image and feed them into the trained support vector machine classifier for classification, obtaining the recognition result for the region of interest;
Step 6: from the support vector machine classifier, obtain the probability that the sign to be recognized in each image belongs to each class, and determine the final recognition result by combining the DS evidence theory data fusion method with the maximum-belief decision rule.
Further, step 4 specifically includes:
Step 41: binarize the image with the maximum between-class variance method (OTSU), whose threshold maximizes the between-class variance
G(T) = p1(u1 − u)² + p2(u2 − u)² = p1·p2·(u1 − u2)²,
where T is the candidate segmentation threshold between foreground and background, G(T) is the between-class variance of foreground and background, p1 and p2 are the proportions of foreground and background pixels respectively, u1 and u2 are the mean gray levels of the foreground and background regions, and u is the mean gray level of the entire preliminary region of interest; the T that maximizes the between-class variance is the final segmentation threshold;
Step 42: apply morphological filtering to the binary image to eliminate small objects, smooth the boundaries of large regions, fill tiny holes inside the target, and connect adjacent regions;
Step 43: apply Canny edge detection to the binary image to obtain the edge-detected image of the preliminary region of interest; the Canny algorithm extracts edges with a dual-threshold method using two thresholds h1 and h2; the segmentation threshold T obtained by the above OTSU algorithm is used as the high threshold h2 of the Canny edge detector, and the low threshold is set to h1 = 0.5·h2;
Step 44: compute the areas of all closed contours in the edge-detected image and fill the closed contour with the largest area; let h and w be the height and width of the bounding rectangle of this largest contour; extend the rectangle upward and downward by one sixth of its height (h/6) each and leftward and rightward by one sixth of its width (w/6) each; the resulting rectangular region is the region of interest of the image.
Further, step 6 specifically includes:
Step 61: use the support vector machine classifier to obtain the probability rij that the sign to be recognized in each image belongs to each class, where i (i = 1, 2, …, m) indexes the cameras and j (j = 1, 2, …, 6) indexes the sign classes; rij is the probability that camera i recognizes the sign to be recognized as class j;
Step 62: the DS evidence theory data fusion method refers to Dempster's combination rule (Dempster's combinational rule), also called the evidence combination formula. Its basic concepts are as follows: let Θ be the frame of discernment, a complete and mutually exclusive set of propositions, with power set 2^Θ, on which a basic probability assignment function (Basic Probability Assignment, BPA) m: 2^Θ → [0, 1] is defined, satisfying
m(∅) = 0 and Σ_{A⊆Θ} m(A) = 1,
where A is any proposition in the frame of discernment and m(A), called the basic probability assignment of A, represents the degree to which the evidence supports proposition A; if m(A) ≠ 0, A is called a focal element;
Given two inference systems with basic probability assignments m1 and m2, i.e. two independent pieces of evidence on the frame of discernment, the two pieces of evidence are combined for a proposition A by the rule
m(A) = (1/(1 − K)) · Σ_{A1∩A2=A} m1(A1)·m2(A2),
where K = Σ_{A1∩A2=∅} m1(A1)·m2(A2) is the normalization (conflict) constant and A1, A2 are elements of the power set;
For a proposition A, the belief function is defined as Bel(A) = Σ_{B⊆A} m(B), the sum of the basic probability assignments of all subsets B of A, i.e. the total belief in A; when A is a singleton proposition, Bel(A) = m(A);
Step 63: take the sign classes to be recognized S1, S2, …, S6 as the propositions in the frame of discernment Θ and the cameras C1, C2, …, Ci as sources of evidence about the sign class; the probabilities that the sign to be recognized belongs to each class, obtained when each camera performs recognition, serve as the basic probability assignments; merge the individual pieces of evidence into one new body of evidence with the above Dempster combination rule, i.e. the basic probability assignments of the different evidence bodies are fused by the combination rule into one overall belief assignment;
Step 64: according to the maximum-belief method, compute the belief function value of each proposition and select the result with the maximum belief value as the final recognition result.
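The patent gives no code; purely as an illustration of steps 62–64, here is a minimal numpy sketch of Dempster's rule restricted to singleton propositions (the only case used here, since each camera assigns mass only to the six sign classes). The two example BPAs are invented numbers, not data from the patent.

```python
import numpy as np

def dempster_combine(m1, m2):
    """Dempster's rule for BPAs over singleton propositions: for singletons
    A1 and A2 the intersection is non-empty only when A1 == A2, so the
    combined mass is the normalized elementwise product."""
    agreement = np.asarray(m1, float) * np.asarray(m2, float)
    K = 1.0 - agreement.sum()          # conflict mass: all pairs with A1 != A2
    return agreement / (1.0 - K)       # m(A) = sum over A1∩A2=A, divided by 1-K

# Hypothetical BPAs from two cameras over the six classes S1..S6.
cam1 = [0.7, 0.1, 0.05, 0.05, 0.05, 0.05]
cam2 = [0.5, 0.2, 0.1, 0.1, 0.05, 0.05]
fused = dempster_combine(cam1, cam2)
best = int(np.argmax(fused))           # maximum-belief decision (Bel = m here)
```

Note how fusion reinforces agreement: both cameras favor S1, and the fused mass on S1 exceeds either camera's individual confidence.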
The beneficial effects of the present invention are: the invention combines the recognition results of multiple cameras using the DS evidence theory data fusion method and effectively fuses them, so that the multiple classes of pavement traffic signs in the driving environment of an intelligent vehicle can be recognized stably and efficiently, providing the decision system of the intelligent vehicle with more road environment information and enhancing the guidance capability of navigation equipment.
Description of the drawings
In order to make the purpose, technical scheme and beneficial effects of the present invention clearer, the following drawings are provided for illustration:
Fig. 1 is the overall flow chart of the pavement traffic sign recognition method;
Fig. 2 is the flow chart of histogram of oriented gradients (HOG) feature extraction;
Fig. 3 shows the images produced during traffic sign region-of-interest detection (a: image after OTSU binarization; b: image after morphological filtering; c: image after Canny edge detection; d: image after filling the largest contour);
Fig. 4 is the flow chart of the multi-camera data fusion recognition method based on DS evidence theory.
Detailed description of embodiments
A preferred embodiment of the present invention is described in detail below in conjunction with the drawings.
Fig. 1 shows the flow chart of the pavement traffic sign recognition system based on multi-camera fusion and DS evidence theory. It is divided into a training part and a recognition part; the main steps of the training part are as follows:
(1) divide the training set images into six classes: straight, left turn, right turn, straight-or-left, straight-or-right, and negative samples containing no guide sign, and label each image with its class;
(2) extract the histogram of oriented gradients (Histogram of Oriented Gradients, HOG) features of each image. In this embodiment, the image is divided into cells of 8*8 pixels, and a 9-bin histogram counts the gradient information of each 8*8-pixel cell. To gain better invariance to illumination and shadow, four adjacent cells form one block, and the histogram of each block is normalized; the normalized block descriptors are called HOG descriptors, and the HOG descriptors of all blocks of the image together form the final feature vector. Fig. 2 shows the HOG feature extraction process.
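The HOG computation just described (8*8-pixel cells, 9 orientation bins, blocks of four adjacent cells, per-block normalization) can be sketched in numpy. This is a simplified stand-in, not the patent's exact implementation: it assumes unsigned gradients and omits the soft bin interpolation a production HOG would use.

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Minimal HOG sketch: per-cell orientation histograms, L2-normalized
    over 2x2-cell blocks (the patent's blocks of 4 adjacent cells)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0        # unsigned orientation
    n_cy, n_cx = img.shape[0] // cell, img.shape[1] // cell
    hist = np.zeros((n_cy, n_cx, bins))
    for i in range(n_cy):
        for j in range(n_cx):
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            idx = np.minimum((a * bins / 180.0).astype(int), bins - 1)
            hist[i, j] = np.bincount(idx.ravel(), weights=m.ravel(),
                                     minlength=bins)[:bins]
    feats = []
    for i in range(n_cy - 1):                           # overlapping 2x2 blocks
        for j in range(n_cx - 1):
            block = hist[i:i+2, j:j+2].ravel()          # 4 cells * 9 bins = 36
            feats.append(block / (np.linalg.norm(block) + 1e-6))
    return np.concatenate(feats)
```

For a 64x64 image this yields 7*7 blocks of 36 values each, a 1764-dimensional feature vector; in practice a library implementation such as `skimage.feature.hog` would be used instead.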
(3) input the HOG features and labels of the positive and negative samples into a support vector machine (SVM) with a radial basis function (RBF) kernel for training, generating the classifier.
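The training step above can be sketched with scikit-learn (an assumption; the patent names no library). The feature vectors below are synthetic stand-ins for HOG features, and only two of the six classes are shown; `probability=True` yields the per-class probabilities later used as basic probability assignments.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in "HOG" vectors (36-D): two separable clusters playing the role of
# two of the six sign classes -- real training would use the extracted HOG features.
X = np.vstack([rng.normal(0.0, 0.1, (20, 36)),
               rng.normal(1.0, 0.1, (20, 36))])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="rbf", probability=True, random_state=0)  # RBF kernel, as in (3)
clf.fit(X, y)
probs = clf.predict_proba(X[:1])[0]  # per-class probabilities for one sample
```

The choice of the RBF kernel follows the embodiment; `probability=True` enables Platt scaling so the classifier outputs calibrated class probabilities rather than only labels.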
The pavement signs captured by each camera are then recognized in real time with the trained classifier described above:
1) number the on-board cameras C1, C2, C3, …, Ci, where i is the number of cameras;
2) turn on the on-board cameras, capture images of the road ahead, and take the lower half of each image as the image to be processed;
3) convert the image to be processed to a grayscale image and apply 3*3 median filtering to it;
4) extract the lane lines in the image using the Hough transform and straight-line fitting, and take the region between the two lane lines as the preliminary region of interest;
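In practice this step would typically use a library routine such as OpenCV's `HoughLines`; as a self-contained illustration of the voting scheme behind it, here is a minimal numpy Hough transform (lines parametrized as ρ = x·cosθ + y·sinθ):

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Minimal Hough transform sketch: each edge pixel votes for every
    (rho, theta) line passing through it; peaks in the accumulator are lines."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))              # max possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))          # 0..179 degrees
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1           # one vote per theta per point
    return acc, thetas, diag
```

A vertical edge at x = 10, for instance, produces a single sharp accumulator peak at θ = 0, ρ = 10; the two strongest peaks in a road image would correspond to the two lane lines, which the embodiment then refines by straight-line fitting.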
5) binarize the region between the lane lines of the preliminary region of interest with an adaptive thresholding method, the maximum between-class variance method (OTSU). To simplify the computation and reduce running time, the between-class variance is taken as G(T) = p1·p2·(u1 − u2)², where T is the candidate segmentation threshold between foreground and background, G(T) is the between-class variance of foreground and background, p1 and p2 are the proportions of foreground and background pixels respectively, u1 and u2 are the mean gray levels of the foreground and background regions, and u is the mean gray level of the entire preliminary region of interest; the T that maximizes the between-class variance is the segmentation threshold determined by the OTSU method. The binarized image is shown in Fig. 3(a).
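A direct numpy implementation of the simplified criterion G(T) = p1·p2·(u1 − u2)² might look as follows (a sketch: it exhaustively scans all gray levels rather than using the incremental histogram update a production version, e.g. OpenCV's `THRESH_OTSU`, would use):

```python
import numpy as np

def otsu_threshold(gray):
    """Exhaustive search for the T maximizing G(T) = p1*p2*(u1 - u2)**2."""
    g = np.asarray(gray, dtype=float).ravel()
    best_t, best_g = 0, -1.0
    for t in range(1, 256):
        fg, bg = g[g >= t], g[g < t]          # foreground = bright pixels
        if fg.size == 0 or bg.size == 0:
            continue                          # degenerate split, skip
        p1, p2 = fg.size / g.size, bg.size / g.size
        var = p1 * p2 * (fg.mean() - bg.mean()) ** 2
        if var > best_g:
            best_g, best_t = var, t
    return best_t
```

On a bimodal region (bright painted sign on dark asphalt) the returned T cleanly separates the two modes; this same T is reused below as the high Canny threshold h2.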
6) apply morphological filtering to the binary image: perform a morphological opening (erosion followed by dilation) with a 3*3 structuring element; the erosion eliminates small objects and smooths the boundaries of large regions, and the dilation fills tiny holes inside the target. The image after morphological filtering is shown in Fig. 3(b).
7) apply Canny edge detection to the binary image after morphological filtering; the edge-detected image is shown in Fig. 3(c). The Canny algorithm extracts edges with a dual-threshold method using two thresholds h1 and h2; in the edge detection of the present invention, the segmentation threshold T obtained by the above OTSU algorithm is used as the high threshold h2 of the Canny edge detector, and the low threshold is set to h1 = 0.5·h2.
8) find all contours of the above edge-detected image with a contour detection method, compute all contour areas, and fill the closed contour with the largest area.
Then filter with erosion and dilation using the same structuring element to remove burrs and noise and smooth the region boundary, obtaining a binary image of the region of interest that may contain a traffic sign, as shown in Fig. 3(d).
9) let h and w be the height and width of the bounding rectangle of the above largest contour; extend the rectangle upward and downward by one sixth of its height (h/6) each and leftward and rightward by one sixth of its width (w/6) each; the resulting rectangular region is the region of interest of the image that may contain the sign.
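The rectangle expansion in step 9) is simple arithmetic; a sketch follows (clamping to the image border is an added assumption that the patent text does not spell out):

```python
def expand_roi(x, y, w, h, img_w, img_h):
    """Grow box (x, y, w, h) by h/6 up and down and w/6 left and right,
    clamped to the image border (clamping is an assumption, not from the patent)."""
    dx, dy = w // 6, h // 6
    x0, y0 = max(0, x - dx), max(0, y - dy)
    x1, y1 = min(img_w, x + w + dx), min(img_h, y + h + dy)
    return x0, y0, x1 - x0, y1 - y0
```

The margin gives the later HOG/SVM stage some context around the painted sign, so that a slightly loose contour does not crop the arrow tip.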
10) apply median filtering to the corresponding region of interest of the grayscale version of the original image, and normalize it to a 128*128 image by region interpolation.
11) extract the HOG features of the region of interest by the same method used when training the classifier.
12) input the extracted HOG features into the trained support vector machine (SVM) classifier for classification, obtaining the classification result for the image of the same sign captured by each camera.
13) the recognition result of the above SVM classifier for each image contains the probability that the sign to be recognized in the image captured by each camera belongs to each class.
14) fuse the probabilities obtained above, that the sign to be recognized in each image belongs to each class, with the DS evidence theory data fusion method. The theoretical basis is as follows: given a decision problem, the set of all its possible outcomes is denoted Θ and called the frame of discernment; it is a complete and mutually exclusive set of propositions with power set 2^Θ, on which a basic probability assignment function (Basic Probability Assignment, BPA) m: 2^Θ → [0, 1] is defined, satisfying
m(∅) = 0 and Σ_{A⊆Θ} m(A) = 1,
where A is any proposition in the frame of discernment and m(A), the basic probability assignment of A, represents the degree to which the evidence supports proposition A. If m(A) ≠ 0, A is called a focal element.
For a proposition A, the belief function is defined as Bel(A) = Σ_{B⊆A} m(B), the sum of the basic probability assignments of all subsets B of A, i.e. the total belief in A. When A is a singleton proposition, Bel(A) = m(A).
Let m1 and m2 be two bodies of evidence on the frame of discernment and A1, A2 elements of the power set; then Dempster's combination rule combines the two pieces of evidence as
m(A) = (1/(1 − K)) · Σ_{A1∩A2=A} m1(A1)·m2(A2),
where K is the normalization (conflict) constant, computed as
K = Σ_{A1∩A2=∅} m1(A1)·m2(A2).
The evidence combination formula above gives the rule for combining two pieces of evidence; multiple pieces of evidence can be combined by applying the formula repeatedly, two at a time, and the combined probability assignment after fusing all pieces of evidence is
m(A) = (m1 ⊕ m2 ⊕ … ⊕ mn)(A),
where ⊕ denotes the Dempster combination above.
For the multi-camera fused pavement sign recognition system, the target classes are the propositions, each camera is equivalent to a body of evidence, and the judgement of the target class that each camera produces by capturing and processing is the evidence.
The pavement traffic sign recognition system mainly recognizes a single pavement traffic sign; since the present invention mainly recognizes six classes of traffic signs, the frame of discernment is Θ = {S1, S2, S3, S4, S5, S6}. The sign is detected and recognized with multiple cameras, yielding the basic probability assignments shown in Table 1, where ri1 denotes the probability that camera i recognizes the sign as class S1.
Using the Dempster combination rule, the basic probability assignments of the different evidence bodies in Table 1 are fused into one overall belief assignment. Since all propositions in this frame of discernment are singletons, the belief function follows directly from its definition as Bel(A) = m(A); according to the maximum-Bel method, the target with the maximum belief value is selected as the final recognition result.
Table 1: the basic probability assignments determined by the multiple cameras
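Fusing a Table-1-style probability matrix (rows = cameras, columns = classes) by repeated pairwise combination, followed by the maximum-Bel rule, can be sketched as follows. The matrix values are invented for illustration, and the combination uses the singleton-proposition simplification of the text:

```python
import numpy as np
from functools import reduce

def dempster_combine(m1, m2):
    """Pairwise Dempster combination for singleton propositions."""
    agreement = np.asarray(m1, float) * np.asarray(m2, float)
    return agreement / agreement.sum()     # agreement.sum() equals 1 - K

def fuse_cameras(R):
    """R[i, j] = r_ij from Table 1: camera i's probability for class S_(j+1);
    fold all rows with Dempster's rule into one overall belief assignment."""
    return reduce(dempster_combine, [row for row in np.asarray(R, float)])

# Hypothetical readings: cameras 1 and 3 favor S2, camera 2 is less certain.
R = [[0.1, 0.6, 0.1, 0.1, 0.05, 0.05],
     [0.2, 0.3, 0.2, 0.1, 0.1, 0.1],
     [0.1, 0.5, 0.2, 0.1, 0.05, 0.05]]
m = fuse_cameras(R)                        # overall belief assignment
decision = int(np.argmax(m))               # Bel(S_j) = m(S_j) for singletons
```

Here the fused mass on S2 exceeds any single camera's confidence in it, which is the intended effect: concordant evidence from several cameras sharpens the decision even when each individual classifier is uncertain.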
Finally, it is stated that the preferred embodiment above is only used to illustrate the technical scheme of the present invention and not to limit it. Although the present invention has been described in detail through the above preferred embodiment, those skilled in the art should understand that various changes may be made to it in form and in detail without departing from the scope defined by the claims of the present invention.
Claims (2)
1. A pavement traffic sign recognition method based on multi-camera fusion and DS evidence theory, characterized in that the method comprises the following steps:
Step 1: divide the training set images into six classes: straight, left turn, right turn, straight-or-left, straight-or-right, and negative samples containing no guide sign, and label each image with its class;
Step 2: extract the histogram of oriented gradients features, i.e. HOG features, of the sample images; feed the HOG features and class labels of the sample images into a support vector machine for training, obtaining a trained classifier;
Step 3: acquire images of the road ahead from the multiple on-board cameras of the intelligent vehicle, and take the lower half of each image as the image to be processed; convert it to a grayscale image and apply median filtering; detect the lane lines of the current lane using the Hough transform and straight-line fitting, and take the region between the two lane lines as the preliminary region of interest;
Step 4: binarize the region between the lane lines of the preliminary region of interest, perform morphological filtering with opening and closing operations, then run edge detection on the binary image; fill the closed contour with the largest area in the edge-detected image, obtaining a region of interest that may contain a pavement traffic sign;
Step 5: extract the HOG features of the above region of interest on the original image and feed them into the trained support vector machine classifier for classification, obtaining the recognition result for the region of interest;
Step 6: from the support vector machine classifier, obtain the probability that the sign to be recognized in each image belongs to each class, and determine the final recognition result by combining the DS evidence theory data fusion method with the maximum-belief decision rule;
Step 4 specifically includes:
Step 41: binarize the image with the maximum between-class variance method OTSU, whose threshold maximizes the between-class variance
G(T) = p1(u1 − u)² + p2(u2 − u)² = p1·p2·(u1 − u2)²,
where T is the candidate segmentation threshold between foreground and background, G(T) is the between-class variance of foreground and background, p1 and p2 are the proportions of foreground and background pixels respectively, u1 and u2 are the mean gray levels of the foreground and background regions, and u is the mean gray level of the entire preliminary region of interest; the T that maximizes the between-class variance is the final segmentation threshold;
Step 42: apply morphological filtering to the binary image to eliminate small objects, smooth the boundaries of large regions, fill tiny holes inside the target, and connect adjacent regions;
Step 43: apply Canny edge detection to the binary image to obtain the edge-detected image of the preliminary region of interest; the Canny algorithm extracts edges with a dual-threshold method using two thresholds h1 and h2; the segmentation threshold T obtained by the above OTSU algorithm is used as the high threshold h2 of the Canny edge detector, and the low threshold is set to h1 = 0.5·h2;
Step 44: compute the areas of all closed contours in the edge-detected image and fill the closed contour with the largest area; let h and w be the height and width of the bounding rectangle of the largest contour; extend the rectangle upward and downward by one sixth of its height (h/6) each and leftward and rightward by one sixth of its width (w/6) each; the resulting rectangular region is the region of interest of the image.
2. The pavement traffic sign recognition method based on multi-camera fusion and DS evidence theory according to claim 1, characterized in that step 6 specifically includes:
Step 61: use the support vector machine classifier to obtain the probability rij that the sign to be recognized in each image belongs to each class, where i indexes the cameras, i = 1, 2, …, m; j indexes the sign classes, j = 1, 2, …, 6; rij is the probability that camera i recognizes the sign to be recognized as class j;
Step 62: the DS evidence theory data fusion method refers to Dempster's combination rule, also called the evidence combination formula, whose basic concepts are as follows: let Θ be the frame of discernment, a complete and mutually exclusive set of propositions with power set 2^Θ, on which a basic probability assignment function BPA, m: 2^Θ → [0, 1], is defined, satisfying:
(1) m(∅) = 0;
(2) Σ_{A⊆Θ} m(A) = 1;
where A is any proposition in the frame of discernment and m(A), the basic probability assignment of A, represents the degree to which the evidence supports proposition A; if m(A) ≠ 0, A is called a focal element;
given two inference systems with probability assignments m1 and m2, i.e. two independent pieces of evidence on the frame of discernment, the two pieces of evidence are combined for a proposition A by the rule
m(A) = (1/(1 − K)) · Σ_{A1∩A2=A} m1(A1)·m2(A2),
where K = Σ_{A1∩A2=∅} m1(A1)·m2(A2) is the normalization constant and A1, A2 are elements of the power set;
for a proposition A, the belief function is defined as Bel(A) = Σ_{B⊆A} m(B), the sum of the basic probability assignments of all subsets B of A, i.e. the total belief in A; when A is a singleton proposition, Bel(A) = m(A);
Step 63: take the sign classes to be recognized S1, S2, …, S6 as the propositions in the frame of discernment Θ and the cameras C1, C2, …, Ci as sources of evidence about the sign class; the probabilities that the sign belongs to each class, obtained when each camera performs recognition, serve as the basic probability assignments; merge the individual pieces of evidence into one new body of evidence with the above Dempster combination rule, i.e. the basic probability assignments of the different evidence bodies are fused by the combination rule into one overall belief assignment;
Step 64: according to the maximum-belief method, compute the belief function value of each proposition and select the result with the maximum belief value as the final recognition result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610244995.5A CN105930791B (en) | 2016-04-19 | 2016-04-19 | Pavement traffic sign recognition method based on multi-camera fusion and DS evidence theory
Publications (2)
Publication Number | Publication Date |
---|---|
CN105930791A CN105930791A (en) | 2016-09-07 |
CN105930791B true CN105930791B (en) | 2019-07-16 |
Family
ID=56838447
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610244995.5A Active CN105930791B (en) | Pavement traffic sign recognition method based on multi-camera fusion and DS evidence theory | 2016-04-19 | 2016-04-19
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105930791B (en) |
Families Citing this family (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106529542A (en) * | 2016-09-30 | 2017-03-22 | 中国石油天然气股份有限公司 | Indicator diagram identification method and device |
CN107101598B (en) * | 2017-03-10 | 2023-11-03 | 华南理工大学 | Automatic detection method and device for concentricity quality of piezoelectric ceramic silver sheet |
CN107066952A (en) * | 2017-03-15 | 2017-08-18 | 中山大学 | A kind of method for detecting lane lines |
CN107122737B (en) * | 2017-04-26 | 2020-07-31 | 聊城大学 | Automatic detection and identification method for road traffic signs |
US10373000B2 (en) * | 2017-08-15 | 2019-08-06 | GM Global Technology Operations LLC | Method of classifying a condition of a road surface |
CN113762252B (en) * | 2017-08-18 | 2023-10-24 | 深圳市道通智能航空技术股份有限公司 | Unmanned aerial vehicle intelligent following target determining method, unmanned aerial vehicle and remote controller |
CN107944425A (en) * | 2017-12-12 | 2018-04-20 | 北京小米移动软件有限公司 | The recognition methods of road sign and device |
CN108090459B (en) * | 2017-12-29 | 2020-07-17 | 北京华航无线电测量研究所 | Traffic sign detection and identification method suitable for vehicle-mounted vision system |
CN108229386B (en) * | 2017-12-29 | 2021-12-14 | 百度在线网络技术(北京)有限公司 | Method, apparatus, and medium for detecting lane line |
CN108280442B (en) * | 2018-02-10 | 2020-07-28 | 西安交通大学 | Multi-source target fusion method based on track matching |
CN108492877B (en) * | 2018-03-26 | 2021-04-27 | 西安电子科技大学 | Cardiovascular disease auxiliary prediction method based on DS evidence theory |
CN110390224B (en) * | 2018-04-16 | 2021-06-25 | 阿里巴巴(中国)有限公司 | Traffic sign recognition method and device |
CN109063740A (en) * | 2018-07-05 | 2018-12-21 | 高镜尧 | The detection model of ultrasonic image common-denominator target constructs and detection method, device |
CN109409246B (en) * | 2018-09-30 | 2020-11-27 | 中国地质大学(武汉) | Sparse coding-based accelerated robust feature bimodal gesture intention understanding method |
CN109447979B (en) * | 2018-11-09 | 2021-09-28 | 哈尔滨工业大学 | Target detection method based on deep learning and image processing algorithm |
CN109859509A (en) * | 2018-11-13 | 2019-06-07 | 惠州市德赛西威汽车电子股份有限公司 | Lane state based reminding method and equipment |
CN109215364B (en) * | 2018-11-19 | 2020-08-18 | 长沙智能驾驶研究院有限公司 | Traffic signal recognition method, system, device and storage medium |
CN111444749B (en) * | 2019-01-17 | 2023-09-01 | 杭州海康威视数字技术股份有限公司 | Method and device for identifying road surface guide mark and storage medium |
CN110046651B (en) * | 2019-03-15 | 2021-01-19 | 西安交通大学 | Pipeline state identification method based on monitoring data multi-attribute feature fusion |
CN110136100B (en) * | 2019-04-16 | 2021-02-19 | 华南理工大学 | Automatic classification method and device for CT slice images |
CN110472657B (en) * | 2019-07-04 | 2021-09-03 | 西北工业大学 | Image classification method based on trust function theory |
CN110633800B (en) * | 2019-10-18 | 2022-08-02 | 北京邮电大学 | Lane position determination method, apparatus, and storage medium based on autonomous vehicle |
CN111899377A (en) * | 2020-07-28 | 2020-11-06 | 中国第一汽车股份有限公司 | Road traffic sign prompting method and device, vehicle and storage medium |
CN111950456A (en) * | 2020-08-12 | 2020-11-17 | 成都成设航空科技股份公司 | Intelligent FOD detection method and system based on unmanned aerial vehicle |
CN111999441A (en) * | 2020-08-28 | 2020-11-27 | 福建美营自动化科技有限公司 | Multi-channel extremely-low-concentration combustible and explosive gas rapid detector and gas discrimination method |
CN112580717A (en) * | 2020-12-17 | 2021-03-30 | 百度在线网络技术(北京)有限公司 | Model training method, positioning element searching method and device |
CN113591727A (en) * | 2021-08-03 | 2021-11-02 | 彭刚 | Traffic signal recognition device of distribution robot |
CN116794624A (en) * | 2022-12-26 | 2023-09-22 | 南京航空航天大学 | ResNet-based data domain and image domain combined SAR target recognition method |
CN117523605B (en) * | 2023-11-03 | 2024-06-11 | 广东工业大学 | Substation animal intrusion detection method based on multi-sensor information fusion |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102262728B (en) * | 2011-07-28 | 2012-12-19 | 电子科技大学 | Road traffic sign identification method |
CN102542260A (en) * | 2011-12-30 | 2012-07-04 | 中南大学 | Method for recognizing road traffic sign for unmanned vehicle |
CN103577809B (en) * | 2013-11-12 | 2016-08-17 | 北京联合大学 | A kind of method that traffic above-ground mark based on intelligent driving detects in real time |
CN103971128B (en) * | 2014-05-23 | 2017-04-05 | 北京理工大学 | A kind of traffic sign recognition method towards automatic driving car |
CN104408324B (en) * | 2014-12-11 | 2017-06-13 | 云南师范大学 | Multiple sensor information amalgamation method based on D S evidence theories |
CN104732211B (en) * | 2015-03-19 | 2017-12-08 | 杭州电子科技大学 | A kind of method for traffic sign detection based on adaptive threshold |
CN105335701B (en) * | 2015-09-30 | 2019-01-04 | 中国科学院合肥物质科学研究院 | A kind of pedestrian detection method based on HOG Yu D-S evidence theory multi-information fusion |
CN105469124A (en) * | 2015-11-20 | 2016-04-06 | 厦门雅迅网络股份有限公司 | Traffic sign classification method |
- 2016-04-19 CN CN201610244995.5A patent/CN105930791B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN105930791A (en) | 2016-09-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105930791B (en) | The pavement marking recognition methods of multi-cam fusion based on DS evidence theory | |
WO2018072233A1 (en) | Method and system for vehicle tag detection and recognition based on selective search algorithm | |
CN105335702B (en) | A kind of bayonet model recognizing method based on statistical learning | |
CN103077407B (en) | Car logo positioning and recognition method and car logo positioning and recognition system | |
CN107798335A (en) | A kind of automobile logo identification method for merging sliding window and Faster R CNN convolutional neural networks | |
Bailo et al. | Robust road marking detection and recognition using density-based grouping and machine learning techniques | |
CN109271991A (en) | A kind of detection method of license plate based on deep learning | |
CN103136528B (en) | A kind of licence plate recognition method based on dual edge detection | |
CN105740886B (en) | A kind of automobile logo identification method based on machine learning | |
CN106529532A (en) | License plate identification system based on integral feature channels and gray projection | |
CN105205480A (en) | Complex scene human eye locating method and system | |
CN103034836A (en) | Road sign detection method and device | |
CN104978567A (en) | Vehicle detection method based on scenario classification | |
CN104200228A (en) | Recognizing method and system for safety belt | |
CN107103303A (en) | A kind of pedestrian detection method based on GMM backgrounds difference and union feature | |
CN103106409A (en) | Composite character extraction method aiming at head shoulder detection | |
CN106503748A (en) | A kind of based on S SIFT features and the vehicle targets of SVM training aids | |
CN105224945B (en) | A kind of automobile logo identification method based on joint-detection and identification algorithm | |
CN108875803A (en) | A kind of detection of harmful influence haulage vehicle and recognition methods based on video image | |
CN107330027A (en) | A kind of Weakly supervised depth station caption detection method | |
Dehshibi et al. | Persian vehicle license plate recognition using multiclass Adaboost | |
CN105354573A (en) | Container license plate identification method and system | |
CN108664969A (en) | Landmark identification method based on condition random field | |
Ingole et al. | Characters feature based Indian vehicle license plate detection and recognition | |
Elbamby et al. | Real-time automatic multi-style license plate detection in videos |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |