CN104091344B - Road dividing method - Google Patents

Road dividing method

Info

Publication number
CN104091344B
CN104091344B CN201410350481.9A
Authority
CN
China
Prior art keywords
training
road
mlp
road image
gab
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410350481.9A
Other languages
Chinese (zh)
Other versions
CN104091344A (en)
Inventor
汤淑明 (Tang Shuming)
袁俊 (Yuan Jun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN201410350481.9A priority Critical patent/CN104091344B/en
Publication of CN104091344A publication Critical patent/CN104091344A/en
Application granted granted Critical
Publication of CN104091344B publication Critical patent/CN104091344B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a road segmentation method. The method comprises the following steps: S1, a multilayer perceptron (MLP) is trained offline to fit the mapping from the features of adjacent pixels to n-link weights, so that the n-link weights can be estimated, and a Gentle AdaBoost (GAB) classifier is trained offline to fit the mapping from the neighborhood features of a pixel to t-link weights, so that the t-link weights can be estimated; S2, the n-link weights and the t-link weights are estimated with the trained MLP and GAB respectively, and the road image is segmented online by the s-t graph-cut method to obtain the road region in the image. The method is robust to interference factors in the road environment.

Description

A road segmentation method
Technical field
The present invention relates to the technical field of driver assistance for vehicles, and in particular to a road segmentation method for road detection systems.
Background technology
The automobile industry has a history of more than one hundred years. The automobile has greatly improved the way people produce and live and has promoted social and economic development, but it has also brought many problems, such as frequent traffic accidents, excessive energy consumption, and serious environmental pollution. In congested urban traffic, a driver needs on average 20-30 coordinated hand-foot actions per minute to control the vehicle, so driving under congestion is a rather complex task. As society develops, people's demands on automobiles in terms of safety, environmental friendliness, economy, and comfort keep rising. Technologies such as sensors, computers, automatic control, artificial intelligence, and machine vision are constantly evolving and are increasingly applied in traffic and transportation engineering. Against this background, the research and development of intelligent vehicles has emerged, giving vehicles the ability to drive autonomously and to assist the driver, so as to achieve a safe, energy-saving, convenient, and comfortable driving experience.
A road detection system is an important component of an intelligent vehicle's driver assistance system. Through road detection, an intelligent vehicle can obtain the drivable area as well as the position and attitude of the vehicle body relative to the road boundary. Road detection thus enables many driver assistance functions, such as assisted navigation, lane departure warning, lane keeping, adaptive cruise control, monitoring of the driver's state, and prediction of the driver's behavior and intention.
At present, most road detection methods are based on computer vision. In these methods, the process of separating road-region pixels from all other pixels is called road segmentation. Road segmentation is a challenging problem. On the one hand, owing to factors such as pavement material, weather conditions, and illumination changes, the road surface takes on a variety of appearances; on the other hand, as the vehicle moves, both the road surface and the background change dynamically, and interfering objects such as vehicles and pedestrians are usually present on the road. These factors easily degrade the accuracy of road segmentation and make it extremely difficult.
Summary of the invention
(1) Technical problem to be solved
The purpose of the present invention is to solve the technical problem that current road segmentation methods are easily disturbed by environmental factors. The present invention therefore proposes a robust road segmentation method.
(2) Technical solution
To solve the above technical problem, the present invention proposes a road segmentation method comprising the following steps:
Step S1: train a multilayer perceptron (Multilayer Perceptron, MLP) offline to fit the mapping from the features of adjacent pixels to n-link weights, so as to estimate the n-link weights; and train a Gentle AdaBoost (GAB) classifier offline to fit the mapping from the neighborhood features of a pixel to t-link weights, so as to estimate the t-link weights;
Step S2: estimate the n-link weights and the t-link weights with the trained MLP and GAB respectively, segment the road image online with the s-t graph-cut method, and obtain the road region in the road image.
(3) Beneficial effects
The present invention performs road segmentation within the graph-cut framework, which tightly combines the global and local information in the road image and is robust to local interference. By fitting the mapping between adjacent-pixel features and n-link weights with an MLP, the invention remedies the weakness of traditional graph-cut methods, in which n-link weights computed from the contrast of adjacent pixels are easily disturbed by environmental factors. By fitting the mapping from pixel neighborhood features to t-link weights with GAB, the invention weakens the influence on road segmentation of the changeable appearance of the road surface and the dynamically varying background, improving segmentation accuracy. The present invention is therefore robust to interfering factors in the road environment.
Description of the drawings
Fig. 1 is the flow chart of the road segmentation method of the present invention;
Fig. 2 is a diagram of the connections between adjacent nodes in the s-t graph of the present invention;
Fig. 3 is a diagram of the connections of the s-t graph of the present invention.
Specific embodiments
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
The present invention proposes a graph-cut method with learnable n-link weights for road segmentation. The road image is represented as an s-t graph (s-t Graph), and image segmentation is carried out within the graph-cut (Graph Cut) framework, as shown in Fig. 1. The method comprises:
Step S1: train an MLP offline to fit the mapping from the features of adjacent pixels to n-link weights, so as to estimate the n-link weights; and train a GAB classifier offline to fit the mapping from the neighborhood features of a pixel to t-link (Terminal Link) weights, so as to estimate the t-link weights.
Step S2: estimate the n-link weights and the t-link weights with the trained MLP and GAB respectively, segment the road image online with the s-t graph-cut method, and obtain the road region in the road image.
Graph cut is a combinatorial optimization method based on graph theory and is widely used in computer vision. When graph cut is applied to image segmentation, the image to be segmented is represented by an s-t graph, and the minimum cut of the s-t graph is found with a max-flow/min-cut (Max-flow/Min-cut) algorithm. Finding the minimum cut is equivalent to minimizing an energy function, so obtaining the minimum cut yields the optimal segmentation of the image under that energy function.
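As a concrete illustration of the max-flow/min-cut step (a minimal sketch under simplifying assumptions, not the implementation used by the invention), the following pure-Python Edmonds-Karp routine finds the minimum cut of a tiny s-t graph; the source-side nodes of the cut correspond to pixels labeled as road:

```python
from collections import deque

def max_flow_min_cut(capacity, s, t):
    """Edmonds-Karp max-flow; returns (flow value, nodes on the s side of the min cut)."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]

    def bfs():
        # breadth-first search for an augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        return parent

    total = 0
    while True:
        parent = bfs()
        if parent[t] == -1:
            break
        # collect the augmenting path and its bottleneck capacity
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(capacity[u][v] - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
        total += bottleneck
    # nodes still reachable from s in the residual graph form the source side of the cut
    reachable = {i for i, p in enumerate(bfs()) if p != -1}
    return total, reachable

# toy s-t graph: node 0 = source, node 3 = sink, nodes 1-2 are pixel nodes
cap = [[0, 9, 1, 0],   # strong t-link: source -> pixel 1, weak: source -> pixel 2
       [0, 0, 2, 1],   # n-link between pixels 1 and 2; weak t-link: pixel 1 -> sink
       [0, 2, 0, 9],   # strong t-link: pixel 2 -> sink
       [0, 0, 0, 0]]
value, source_side = max_flow_min_cut(cap, 0, 3)
print(value, sorted(source_side))  # pixel 1 is labeled "road", pixel 2 "non-road"
```

The cut of value 4 severs the weak edges, leaving pixel 1 attached to the source (road) and pixel 2 attached to the sink (non-road), which is exactly the labeling step S25 below relies on.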
The training process of the MLP comprises the following steps:
Step S1A: extract training samples from the labeled road image sample set;
Because a road image contains a very large number of pixels, it is unnecessary to use every pixel of the labeled images as a training sample. On the one hand, extracting too many samples sharply increases the amount of computation; on the other hand, too many samples make it hard for the MLP to focus on particular interference factors such as shadows, highlights, and lane markings. A sampling strategy is therefore needed to extract samples from the labeled images. Positive samples are taken only from road-surface regions, and negative samples are taken from the road boundary. Positive samples are not taken from non-road regions, because the n-link weights of non-road regions are of no concern, and reducing the diversity of the data benefits the convergence of the MLP. In the present invention, positive samples are drawn at random, with certain probabilities, from high-contrast regions, high-brightness regions, and ordinary regions within the road area; these probabilities determine the proportion of each region's samples among the positive samples, and hence the MLP's adaptability to different road conditions. All pixels on the road boundary are taken as negative samples, because their number is far smaller than the number of road-region pixels. The estimator output value corresponding to a positive sample is 1, and that corresponding to a negative sample is 0. Each extracted training sample is a 10 x 1 feature vector; the features and their definitions are listed in Table 1.
Table 1. Input features of the multilayer perceptron
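The sampling strategy above can be sketched as follows; the helper names (`region_of`, the probability table) and the toy masks are illustrative assumptions, not the patent's data structures:

```python
import random

def sample_training_pixels(road_mask, boundary, region_of, probs, rng):
    """Sketch of the sampling strategy (helper names are illustrative).

    road_mask: set of (row, col) road pixels; boundary: list of boundary pixels;
    region_of(p) -> 'high_contrast' | 'high_brightness' | 'ordinary';
    probs: per-region sampling probability, which fixes each region's share
    of the positive samples."""
    positives = [p for p in road_mask if rng.random() < probs[region_of(p)]]
    negatives = list(boundary)        # every boundary pixel becomes a negative sample
    labels = [1.0] * len(positives) + [0.0] * len(negatives)  # desired MLP outputs
    return positives + negatives, labels

rng = random.Random(0)
road = {(r, c) for r in range(10) for c in range(10)}   # toy 10x10 road region
boundary = [(r, 10) for r in range(10)]                 # toy boundary column
pixels, labels = sample_training_pixels(
    road, boundary, lambda p: "ordinary", {"ordinary": 0.2}, rng)
print(len(boundary), labels.count(0.0))  # all 10 boundary pixels are negatives
```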
Step S1B: train the MLP with the extracted training samples until its training error is within an acceptable range, obtaining the parameters of the MLP.
The mapping between the features of adjacent pixels and n-link weights is learned with an MLP and used to estimate the n-link weights. The present invention represents the road image as an s-t graph in which adjacent nodes are connected in the eight-connected manner; as shown in Fig. 2(a), adjacent nodes are connected by horizontal, vertical, and diagonal edges. Fig. 2(a) can be divided into elementary cells, each consisting of one node and four edges, as shown in Fig. 2(b). Of course, nodes on the image border lack some of these four edges, but the missing edges can be regarded as edges of weight 0. Accordingly, four independent MLPs can be used to fit the mappings to the n-link weights of the four directions shown in Fig. 2(b). Each MLP is a four-layer BP neural network comprising an input layer, two hidden layers, and an output layer, containing 10, 20, 20, and 1 neurons respectively. The activation function of every neuron is the sigmoid function. The input of each MLP is a 10 x 1 feature vector, defined as in Table 1. The output range of the MLP is between 0 and 1; the closer the output is to 1, the larger the corresponding n-link weight.
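A forward pass through the 10-20-20-1 sigmoid network described above can be sketched in a few lines; random weights stand in for trained RPROP parameters, so the code only illustrates the architecture and the (0, 1) output range, not a trained estimator:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mlp_forward(x, layers):
    """Forward pass of a fully connected sigmoid network.
    layers is a list of (weights, biases); weights[j] holds the input weights
    of neuron j in that layer."""
    a = x
    for weights, biases in layers:
        a = [sigmoid(sum(w * v for w, v in zip(wj, a)) + b)
             for wj, b in zip(weights, biases)]
    return a

def random_layer(n_in, n_out, rng):
    return ([[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

rng = random.Random(0)
# 10-20-20-1 architecture from the patent: input layer, two hidden layers, one output
net = [random_layer(10, 20, rng), random_layer(20, 20, rng), random_layer(20, 1, rng)]
feature = [rng.random() for _ in range(10)]   # a 10x1 feature vector (cf. Table 1)
weight = mlp_forward(feature, net)[0]
print(0.0 < weight < 1.0)  # sigmoid output is a valid n-link weight in (0, 1)
```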
The multilayer perceptron is trained with the RPROP algorithm on the training samples extracted in step S1A. To eliminate the effect of the difference between the numbers of positive and negative samples on the training result, and to balance the hit rate and the false alarm rate, a weight must be added to the negative samples during training; the weight coefficient is as follows:
where w(y) is the weight coefficient added for the negative samples, Np and Nn are the numbers of positive and negative samples respectively, and y is the class label of a sample: y = 1 for a positive sample, otherwise it is a negative sample.
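The printed formula (1) is not reproduced in this text. A common balancing rule consistent with the description, in which each negative sample is weighted by Np/Nn so that the two classes contribute equal total weight, can be sketched as follows (the exact formula in the patent may differ):

```python
def negative_sample_weight(y, n_pos, n_neg):
    """Assumed balancing rule (formula (1) is not reproduced in this text):
    weighting each negative sample by Np/Nn makes the total weight of the
    negative class equal to that of the positive class."""
    return 1.0 if y == 1 else n_pos / n_neg

n_pos, n_neg = 8000, 2000            # hypothetical sample counts
w_neg = negative_sample_weight(0, n_pos, n_neg)
print(w_neg, n_neg * w_neg == n_pos)  # 4.0 True: both classes contribute equally
```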
The training process of the GAB comprises the following steps:
Step S1a: extract training samples from the labeled road image sample set;
Positive and negative samples are extracted in a certain proportion from the road region and the non-road region. Each sample is a 13 x 1 feature vector comprising the R, G, B values of the pixel; the means and variances of the R, G, B values in a 9 x 9 window centered on the pixel; the gradient magnitude and gradient direction of the pixel; and its coordinates in the image coordinate system. The output value corresponding to a positive sample is 1, and that corresponding to a negative sample is -1.
Step S1b: train the GAB with the extracted training samples until its training error is within an acceptable range, obtaining the parameters of the t-link weight estimator.
The GAB classifier is used to fit the mapping from the neighborhood features of a pixel to the t-link weights. GAB, a variant of the AdaBoost algorithm, has good feature selection ability and numerical stability, is more robust to noise, and is a good supervised regression method. The GAB classifier consists of 500 decision stumps, each of which can be regarded as a decision tree with only one node. The output of GAB is not a class label but the weighted sum of the votes of all the weak classifiers. For a positive sample, GAB is expected to output a positive value; for a negative sample, a negative value. The GAB classifier is trained with the training samples extracted in step S1a.
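A minimal Gentle AdaBoost with regression stumps, in the formulation of Friedman, Hastie, and Tibshirani, can be sketched as follows. It uses 5 stumps on a toy one-dimensional problem instead of the 500 stumps on 13-dimensional features used by the invention, and is an illustration rather than the patent's implementation:

```python
import math

def fit_stump(xs, ys, ws):
    """Weighted least-squares regression stump: piecewise-constant in one feature."""
    best = None
    for t in sorted(set(xs)):
        lo = [(y, w) for x, y, w in zip(xs, ys, ws) if x < t]
        hi = [(y, w) for x, y, w in zip(xs, ys, ws) if x >= t]
        # each side outputs the weighted mean of the targets that fall on it
        a = sum(y * w for y, w in hi) / max(sum(w for _, w in hi), 1e-12)
        b = sum(y * w for y, w in lo) / max(sum(w for _, w in lo), 1e-12)
        err = sum(w * (y - (a if x >= t else b)) ** 2
                  for x, y, w in zip(xs, ys, ws))
        if best is None or err < best[0]:
            best = (err, t, a, b)
    _, t, a, b = best
    return lambda x, t=t, a=a, b=b: a if x >= t else b

def gentle_adaboost(xs, ys, rounds):
    """Gentle AdaBoost: F is the sum of regression stumps; sample weights are
    multiplied by exp(-y * f(x)) after each round and renormalized."""
    n = len(xs)
    ws = [1.0 / n] * n
    stumps = []
    for _ in range(rounds):
        f = fit_stump(xs, ys, ws)
        stumps.append(f)
        ws = [w * math.exp(-y * f(x)) for x, y, w in zip(xs, ys, ws)]
        s = sum(ws)
        ws = [w / s for w in ws]
    return lambda x: sum(f(x) for f in stumps)  # real-valued vote sum, not a label

# toy 1-D problem: positives (+1) on the right, negatives (-1) on the left
xs = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
ys = [-1, -1, -1, 1, 1, 1]
F = gentle_adaboost(xs, ys, rounds=5)
print(all((F(x) > 0) == (y > 0) for x, y in zip(xs, ys)))  # True
```

Note that, as in the patent, the ensemble returns a signed real-valued score rather than a class label; step S24 below rescales such a score into an edge weight.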
The online segmentation of the road image comprises the following steps:
Step S21: acquire a road image I with a vehicle-mounted image sensor;
Step S22: build the s-t graph. Each pixel of the road image I is taken as a neighborhood node (Neighborhood Node); neighborhood nodes are connected to their adjacent nodes in the four-connected or eight-connected manner; the source node (Source Node) represents the road region and is taken as foreground, and the sink node (Sink Node) represents the non-road region and is taken as background; each neighborhood node is connected to both the source node and the sink node. The s-t graph is a graph model with two kinds of nodes and two kinds of edges. The two kinds of nodes are neighborhood nodes and terminal nodes, the terminal nodes being the source node and the sink node.
The connections between the nodes of the s-t graph are shown in Fig. 3, where s denotes the source node, t denotes the sink node, and the black solid dots are neighborhood nodes. Neighborhood nodes are connected in the eight-connected manner, and the weight of each edge represents the degree of correlation between its two endpoints: the weight of the edge connecting a neighborhood node to the source node represents the probability that the node is a road-region pixel, and the weight of the edge connecting a neighborhood node to the sink node represents the probability that the node is a non-road-region pixel.
Step S23: estimate, with the trained MLPs, the weights of the edges between each neighborhood node and its adjacent nodes in the road image I;
For each neighborhood node, the edges whose weights must be estimated are those connecting it to its left, upper-left, upper, and upper-right adjacent nodes, as shown in Fig. 2(b). Nodes on the image border lack some of these four edges, but the missing edges can be regarded as edges of weight 0. The features of each neighborhood node (defined as in Table 1) are extracted and fed into the corresponding MLP. The output of the MLP is the weight of the corresponding edge, i.e.:
B_(p,q) = MLP_d(v_(p,q)),  d = dir(p, q)    (2)
where B_(p,q) is the edge weight output by the multilayer perceptron MLP_d(v_(p,q)), MLP_d denotes the multilayer perceptron used to estimate the n-link weight in the d-th direction, v_(p,q) is the input feature vector, and dir(p, q) denotes the direction of the edge connecting the adjacent pixels p and q.
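The direction function dir(p, q) can be sketched as below; the particular integer encoding of the four directions is an assumption for illustration:

```python
def dir_index(p, q):
    """Map an ordered pair of eight-connected pixels (row, col) to one of the four
    n-link directions of Fig. 2(b): left, upper-left, up, upper-right.
    The 0..3 encoding is an assumed convention, not taken from the patent."""
    dr, dc = q[0] - p[0], q[1] - p[1]
    offsets = {(0, -1): 0, (-1, -1): 1, (-1, 0): 2, (-1, 1): 3}
    if (dr, dc) in offsets:
        return offsets[(dr, dc)]
    if (-dr, -dc) in offsets:          # same undirected edge, traversed the other way
        return offsets[(-dr, -dc)]
    raise ValueError("p and q are not eight-connected neighbours")

p = (5, 5)
print(dir_index(p, (5, 4)), dir_index(p, (4, 4)),
      dir_index(p, (4, 5)), dir_index(p, (4, 6)))  # 0 1 2 3
print(dir_index((5, 4), p))  # the reversed pair maps to the same direction: 0
```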
Step S24: estimate, with the trained GAB, the weight of the edge connecting each neighborhood node to the source node and the weight of the edge connecting it to the sink node in the road image I;
The features of each neighborhood node are extracted and fed into the trained GAB classifier. The GAB classifier outputs the weighted sum of the votes of all the weak classifiers, which must further be mapped to the interval [0, 1] by a scale transformation; the weight of the connecting edge is computed as follows:
where lp is the class label of pixel p, w is a positive constant, xp is the output of GAB at pixel p, Rp(xp, lp) with lp = 1 is the weight of the edge connecting pixel p to the sink node, and Rp(xp, lp) with lp = 0 is the weight of the edge connecting pixel p to the source node.
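The printed formula (3) is not reproduced in this text. One common scale transformation consistent with the description, a sigmoid with slope w mapping the vote sum into [0, 1] with complementary source/sink weights, can be sketched as follows (the exact formula in the patent may differ):

```python
import math

def t_link_weight(x_p, l_p, w=1.0):
    """Assumed scale transformation (formula (3) is not reproduced in this text):
    a sigmoid maps the GAB vote sum x_p into [0, 1]; following the definitions
    above, l_p = 1 selects the sink-side weight and l_p = 0 the source-side
    weight, taken here as complements of each other."""
    s = 1.0 / (1.0 + math.exp(-w * x_p))
    return s if l_p == 1 else 1.0 - s

x = 2.3                       # hypothetical positive vote sum
sink_w = t_link_weight(x, 1)
src_w = t_link_weight(x, 0)
print(abs(sink_w + src_w - 1.0) < 1e-12, 0.0 < src_w < sink_w < 1.0)
```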
Step S25: find the minimum cut of the constructed s-t graph with the max-flow/min-cut algorithm, obtaining the class label L = {li | li ∈ {0, 1}, i = 1, 2, ..., N} of every pixel in the road image I, where N is the number of pixels in the road image I and li is the class label of pixel i; li = 1 indicates that pixel i is a road-region pixel, otherwise it is a non-road-region pixel.
The specific embodiments described above further explain the objects, technical solutions, and beneficial effects of the present invention in detail. It should be understood that the above are merely specific embodiments of the present invention and are not intended to limit it; any modification, equivalent substitution, improvement, and the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (6)

1. A road segmentation method, comprising the following steps:
Step S1: train a multilayer perceptron MLP offline to estimate n-link weights, and train a Gentle AdaBoost classifier GAB offline to estimate t-link weights;
Step S2: estimate the n-link weights and the t-link weights with the trained MLP and GAB respectively, segment the road image online with the s-t graph-cut method, and obtain the road region in the road image.
2. The road segmentation method of claim 1, wherein the role of the MLP is to fit the mapping from the features of adjacent pixels to the n-link weights.
3. The road segmentation method of claim 1, wherein the training of the MLP comprises the following steps:
Step S1A: extract training samples from the labeled road image sample set;
Step S1B: train the MLP with the extracted training samples until its training error is within an acceptable range, obtaining the parameters of the MLP.
4. The road segmentation method of claim 1, wherein the role of the GAB is to fit the mapping from the neighborhood features of a pixel to the t-link weights.
5. The road segmentation method of claim 1, wherein the training of the GAB comprises the following steps:
Step S1a: extract training samples from the labeled road image sample set;
Step S1b: train the GAB with the extracted training samples until its training error is within an acceptable range, obtaining the parameters of the GAB.
6. The road segmentation method of claim 1, wherein the online segmentation of the road image comprises the following steps:
Step S21: acquire the road image with a vehicle-mounted image sensor;
Step S22: build the s-t graph: each pixel of the road image is taken as a neighborhood node; neighborhood nodes are connected to their adjacent nodes in the four-connected or eight-connected manner; the source node represents the road region and is taken as foreground; the sink node represents the non-road region and is taken as background; each neighborhood node is connected to both the source node and the sink node;
Step S23: estimate, with the trained MLP, the weights of the edges between each neighborhood node and its adjacent nodes in the road image;
Step S24: estimate, with the trained GAB, the weight of the edge connecting each neighborhood node to the source node and the weight of the edge connecting it to the sink node in the road image;
Step S25: find the minimum cut of the constructed s-t graph with the max-flow/min-cut algorithm, obtaining the class label L = {li | li ∈ {0, 1}, i = 1, 2, ..., N} of every pixel in the road image, where N is the number of pixels in the road image and li is the class label of pixel i; li = 1 indicates that pixel i is a road-region pixel, otherwise it is a non-road-region pixel.
CN201410350481.9A 2014-07-22 2014-07-22 Road dividing method Active CN104091344B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410350481.9A CN104091344B (en) 2014-07-22 2014-07-22 Road dividing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410350481.9A CN104091344B (en) 2014-07-22 2014-07-22 Road dividing method

Publications (2)

Publication Number Publication Date
CN104091344A CN104091344A (en) 2014-10-08
CN104091344B true CN104091344B (en) 2017-04-19

Family

ID=51639059

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410350481.9A Active CN104091344B (en) 2014-07-22 2014-07-22 Road dividing method

Country Status (1)

Country Link
CN (1) CN104091344B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105631880B (en) * 2015-12-31 2019-03-22 百度在线网络技术(北京)有限公司 Lane line dividing method and device
CN106558058B (en) * 2016-11-29 2020-10-09 北京图森未来科技有限公司 Segmentation model training method, road segmentation method, vehicle control method and device
CN108229274B (en) * 2017-02-28 2020-09-04 北京市商汤科技开发有限公司 Method and device for training multilayer neural network model and recognizing road characteristics

Citations (3)

Publication number Priority date Publication date Assignee Title
CN101558404A (en) * 2005-06-17 2009-10-14 Microsoft Corporation Image segmentation
CN102609686A (en) * 2012-01-19 2012-07-25 宁波大学 Pedestrian detection method
CN103473767A (en) * 2013-09-05 2013-12-25 中国科学院深圳先进技术研究院 Segmentation method and system for abdomen soft tissue nuclear magnetism image

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US7212201B1 (en) * 1999-09-23 2007-05-01 New York University Method and apparatus for segmenting an image in order to locate a part thereof
US6961454B2 (en) * 2001-10-04 2005-11-01 Siemens Corporation Research, Inc. System and method for segmenting the left ventricle in a cardiac MR image
US7400757B2 (en) * 2001-10-04 2008-07-15 Siemens Medical Solutions Usa, Inc. System and method for segmenting the left ventricle in a cardiac image

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN101558404A (en) * 2005-06-17 2009-10-14 Microsoft Corporation Image segmentation
CN102609686A (en) * 2012-01-19 2012-07-25 宁波大学 Pedestrian detection method
CN103473767A (en) * 2013-09-05 2013-12-25 中国科学院深圳先进技术研究院 Segmentation method and system for abdomen soft tissue nuclear magnetism image

Non-Patent Citations (4)

Title
A Road Detection Algorithm by Boosting Using Feature Combination; Sha Yun et al.; Intelligent Vehicles Symposium; 2007-06-15; pp. 364-368 *
An Experimental Comparison of Min-Cut/Max-Flow Algorithms for Energy Minimization in Vision; Yuri Boykov et al.; IEEE Transactions on Pattern Analysis and Machine Intelligence; 2004-09-30; vol. 26, no. 9; pp. 1124-1137 *
Image Segmentation: A Survey of Graph-cut Methods; Faliu Yi et al.; Systems and Informatics; 2012-05-20; p. 1937 right column paras. 1-2, p. 1940 right column paras. 1-3 *
Road Detection and Classification in Urban Environments Using Conditional Random Field Models; Jyun-Fan Tsai et al.; Intelligent Transportation Systems Conference; 2006-09-20; pp. 963-967 *

Also Published As

Publication number Publication date
CN104091344A (en) 2014-10-08

Similar Documents

Publication Publication Date Title
CN109977812B (en) Vehicle-mounted video target detection method based on deep learning
CN110007675B (en) Vehicle automatic driving decision-making system based on driving situation map and training set preparation method based on unmanned aerial vehicle
CN106599773B (en) Deep learning image identification method and system for intelligent driving and terminal equipment
Kühnl et al. Monocular road segmentation using slow feature analysis
CN107862261A (en) Image people counting method based on multiple dimensioned convolutional neural networks
CN105844257A (en) Early warning system based on machine vision driving-in-fog road denoter missing and early warning method
CN104715244A (en) Multi-viewing-angle face detection method based on skin color segmentation and machine learning
CN111045422A (en) Control method for automatically driving and importing 'machine intelligence acquisition' model
CN105893951A (en) Multidimensional non-wearable type traffic police gesture identification method and system for driverless vehicles
CN105930800A (en) Lane line detection method and device
CN102509098A (en) Fisheye image vehicle identification method
CN110414418A (en) A kind of Approach for road detection of image-lidar image data Multiscale Fusion
CN104091344B (en) Road dividing method
CN111881802B (en) Traffic police gesture recognition method based on double-branch space-time graph convolutional network
CN104036246A (en) Lane line positioning method based on multi-feature fusion and polymorphism mean value
CN106599848A (en) Depth visual feature and support vector machine-based terrain texture recognition algorithm
CN111079675A (en) Driving behavior analysis method based on target detection and target tracking
CN109948433A (en) A kind of embedded human face tracing method and device
Fleyeh et al. Traffic sign detection based on AdaBoost color segmentation and SVM classification
CN105205460A (en) Face expression feature extraction and recognition method based on maximum direction encoding
CN111046710A (en) Image extraction method for importing SDL (software development language) model
CN113538357B (en) Shadow interference resistant road surface state online detection method
CN106529391A (en) Robust speed-limit traffic sign detection and recognition method
CN109886125A (en) A kind of method and Approach for road detection constructing Road Detection model
CN111038521A (en) Method for forming automatic driving consciousness decision model

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant