CN108846328A - Lane detection method based on geometry regularization constraint - Google Patents
- Publication number
- CN108846328A (application number CN201810527769.7A)
- Authority
- CN
- China
- Prior art keywords
- lane
- lane detection
- image
- network
- pixel
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V20/00—Scenes; Scene-specific elements › G06V20/50—Context or environment of the image › G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle › G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS › G06N3/00—Computing arrangements based on biological models › G06N3/02—Neural networks › G06N3/04—Architecture, e.g. interconnection topology › G06N3/045—Combinations of networks
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS › G06N3/00—Computing arrangements based on biological models › G06N3/02—Neural networks › G06N3/04—Architecture, e.g. interconnection topology › G06N3/048—Activation functions
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS › G06N3/00—Computing arrangements based on biological models › G06N3/02—Neural networks › G06N3/08—Learning methods
Abstract
The invention proposes a lane detection method based on geometric regularization constraints, comprising: Step S1, extracting features from an input driving-scene image to obtain preliminary lane detection and lane-line detection results; Step S2, cross-comparing the preliminary lane and lane-line detection results, correcting erroneously detected regions, and outputting the final lane detection result; Step S3, optimizing the detection results and training the network with a loss function based on structural information combined with cross-entropy loss. The method is an efficient, high-accuracy drivable-region segmentation approach: on top of an existing lane detection model, it introduces the geometric information inherent to roads in traffic scenes as a constraint, which effectively suppresses environmental interference and improves the accuracy of lane detection. The method requires no image pre-processing or post-processing, realizing end-to-end lane detection. Experimental results show a clear improvement in detection accuracy over classical detection methods.
Description
Technical field
The present invention relates to the technical field of vision-based lane detection, and specifically to a lane detection method based on geometric regularization constraints.
Background technique
Vision-based lane detection is one of the key problems in intelligent driving; its main purpose is to detect the currently drivable lane region in a traffic-scene image. Based on the lane detection result, an intelligent driving system can perform path planning and driving-behavior decision making. However, current lane detection methods are still limited in both accuracy and applicable scenes.

Existing lane detection methods fall into three main categories. The first is based on texture features: self-similar regions in the traffic scene are merged using region growing or similar methods, finally yielding the lane region. Such methods struggle with dissimilar regions inside the lane and are therefore overly sensitive to shadows and other interference. The second is based on the edge information of the lane: edges are extracted with high-pass filters or gradients, a curve-fitting algorithm fits the final lane boundary curves, and the lane region is enclosed by the fitted lane edges. Because of edge occlusion and object interference present in real scenes, the detection results of these methods are not robust. The last category uses deep learning: a semantic segmentation network first extracts abstract features of the traffic scene, then reconstructs a pixel-level lane-region probability map from the features to detect the lane. Although deep learning can generally detect the lane region, it performs poorly on details and is strongly affected by complex scenes.

In summary, existing lane detection methods consider only partial information in the traffic scene, and therefore achieve neither high accuracy nor strong robustness.
Summary of the invention
To address the shortcomings of the prior art, the present invention provides a lane detection method based on inherent geometric constraints. Building on existing research, the method performs lane detection under the geometric constraints present in the lane itself. The invention constructs a neural network with two targets, lane detection and lane-line detection, so that the network can learn the inner connection between the two. On this basis, the two targets are linked through a feature extraction network, which further realizes their mutual constraining effect. In addition, the invention provides loss functions based on geometric constraints to guide network training.

The present invention is achieved through the following technical solutions.
A lane detection method based on geometric regularization constraints comprises the following steps:

Step S1: extract features from the input driving-scene image to obtain a preliminary lane detection result and a preliminary lane-line detection result;

Step S2: cross-compare the preliminary lane and lane-line detection results, correct erroneously detected regions, and output the final lane detection result and lane-line detection result.
Preferably, step S1 comprises the following sub-steps:

Step S11: build a feature extraction network from multiple convolutional layers and down-sampling layers, and extract image features from the input driving-scene image. Specifically:

The input of the feature extraction network is the input driving-scene image after size reduction by a down-sampling layer. Through its convolutional layers, the feature extraction network progressively extracts image features from concrete to abstract.

The structure of the feature extraction network is: B-CR(32)-CR(32)-M-CR(64)-CR(64)-M-CR(128)-CR(128)-CR(128)-M-CR(256)-CR(256)-CR(256)-M-CR(512)-CR(512)-CR(512), where B denotes a batch normalization layer, C a convolutional layer, R a ReLU activation layer, and M a down-sampling layer; the number in parentheses is the number of output channels of the convolutional layer. The ReLU activation is defined as

ReLU(x) = max(0, x)

where x is the input of the ReLU activation layer.

The image feature output by the feature extraction network is f_e. To keep tensor sizes invariant, a zero tensor of the same size as f_e is concatenated to f_e, so the image feature finally output by the feature extraction network is

f_ez = [f_e, zero]_k

where [·]_k denotes concatenation of the two tensors f_e and zero along the k-th dimension.
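The encoder described above can be sketched in PyTorch. This is a minimal illustration under stated assumptions, not the patented implementation: the text does not give kernel or pooling sizes, so 3×3 convolutions and 2×2 max pooling are assumed, and the 64×64 toy input is arbitrary.

```python
import torch
import torch.nn as nn

def make_encoder():
    # B-CR(32)-CR(32)-M-CR(64)-CR(64)-M-CR(128)x3-M-CR(256)x3-M-CR(512)x3
    cfg = [32, 32, 'M', 64, 64, 'M', 128, 128, 128, 'M',
           256, 256, 256, 'M', 512, 512, 512]
    layers = [nn.BatchNorm2d(3)]                 # B: batch normalization on the RGB input
    in_ch = 3
    for v in cfg:
        if v == 'M':
            layers.append(nn.MaxPool2d(2))       # M: 2x2 down-sampling (assumed size)
        else:                                    # CR(v): conv + ReLU
            layers += [nn.Conv2d(in_ch, v, 3, padding=1), nn.ReLU(inplace=True)]
            in_ch = v
    return nn.Sequential(*layers)

encoder = make_encoder()
x = torch.randn(1, 3, 64, 64)                    # toy driving-scene image
fe = encoder(x)                                  # four M layers halve w and h four times
fez = torch.cat([fe, torch.zeros_like(fe)], dim=1)  # f_ez = [f_e, zero]_k, channel dim
print(fe.shape, fez.shape)
```

The zero-padding doubles the channel count of f_e without adding information; as step S21 shows later, this keeps the tensor shape identical between the preliminary and correction passes.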
Step S12: from the extracted image feature f_ez, perform preliminary lane-region detection on the input driving-scene image using a pixel classification network composed of deconvolution layers and up-sampling layers. Step S13: from the extracted image feature f_ez, perform preliminary lane-line detection on the input driving-scene image using a second pixel classification network of the same composition.

In steps S12 and S13 the same image feature f_ez is used, but two separate pixel classification networks realize lane detection and lane-line detection respectively.

The image feature f_ez extracted in step S11 is passed through a pixel classification network built from up-sampling and deconvolution layers to obtain a feature map with the same resolution as the input driving-scene image, and this feature map is used to classify each pixel into its category.

The pixel classification network is the mirror image of the feature extraction network. Its structure is: DR(512)-DR(512)-DR(512)-U-DR(256)-DR(256)-DR(256)-U-DR(128)-DR(128)-DR(128)-U-DR(64)-DR(64)-U-DR(32)-DS(z), where D denotes a deconvolution layer, U an up-sampling layer, and S a Sigmoid activation layer; the number in parentheses is the number of output channels of the deconvolution layer. The last deconvolution layer has z output channels; an output of 1 indicates that the pixel belongs to the lane region or lane line, and an output of 0 indicates that the pixel does not belong to the lane region or lane line.

The Sigmoid activation is defined as

Sigmoid(x) = 1 / (1 + e^(-x))

where x is the input of the Sigmoid activation layer.

With as many up-sampling layers as there were down-sampling layers, the pixel classification network restores the feature map to the resolution of the input driving-scene image, so that feature-map positions and pixels correspond one to one. The Sigmoid activation classifies pixels in the form of probabilities: the final output probability map gives, for each pixel, the probability of belonging to the lane region or the lane line, i.e. the preliminary lane detection result and lane-line detection result.
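The mirror-symmetric decoder can likewise be sketched in PyTorch. Again an illustrative sketch under assumptions: 3×3 deconvolutions with stride 1 (which preserve spatial size) and nearest-neighbour ×2 up-sampling, since the text does not fix these choices; the input channel count of 1024 corresponds to f_ez from the encoder sketch.

```python
import torch
import torch.nn as nn

def make_decoder(z=1):
    # DR(512)x3-U-DR(256)x3-U-DR(128)x3-U-DR(64)x2-U-DR(32)-DS(z)
    cfg = [512, 512, 512, 'U', 256, 256, 256, 'U',
           128, 128, 128, 'U', 64, 64, 'U', 32]
    layers, in_ch = [], 1024                     # consumes f_ez
    for v in cfg:
        if v == 'U':
            layers.append(nn.Upsample(scale_factor=2))   # U: up-sampling layer
        else:                                            # DR(v): deconv + ReLU
            layers += [nn.ConvTranspose2d(in_ch, v, 3, padding=1),
                       nn.ReLU(inplace=True)]
            in_ch = v
    # DS(z): deconv + Sigmoid -> per-pixel probability map
    layers += [nn.ConvTranspose2d(in_ch, z, 3, padding=1), nn.Sigmoid()]
    return nn.Sequential(*layers)

decoder = make_decoder()
fez = torch.randn(1, 1024, 4, 4)
prob = decoder(fez)          # four U layers restore the 64x64 input resolution
print(prob.shape)
```

The four up-sampling layers undo the four pooling layers of the encoder, so each output-map position corresponds to exactly one input pixel, as the text requires.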
Preferably, step S2 comprises the following sub-steps:

Step S21: based on the image feature f_e and the preliminary lane-line detection result, correct the lane detection result by extracting the geometric constraints contained in the lane lines;

Step S22: based on the image feature f_e and the preliminary lane detection result, correct the lane-line detection result by extracting the geometric constraints of the lane edges.
Preferably, step S21 comprises the following sub-steps:

Step S211: extract a lane-line correction feature from the preliminary lane-line detection result, to impose a geometric constraint on lane detection. Specifically:

So that the lane-line correction feature can be fused with the image feature f_e obtained in step S11, the lane-line correction feature f_mr output by the correction-feature extraction network must have the same size as f_e. Based on this, the structure of the correction-feature extraction network is: B-CR(32)-CR(32)-M-CR(64)-CR(64)-M-CR(128)-CR(128)-CR(128)-M-CR(256)-CR(256)-CR(256)-M-CR(512)-CR(512)-CR(512), where B denotes a batch normalization layer, C a convolutional layer, R an activation layer, and M a down-sampling layer; the number in parentheses is the number of output channels of the convolutional layer.

The correction-feature extraction network takes as input the feature map output by the penultimate deconvolution layer of the pixel classification network in step S13, and re-extracts features from it.

Step S212: use the lane-line correction feature f_mr to correct the lane detection result and generate an accurate lane detection result. Specifically:

The lane-line correction feature f_mr obtained in step S211 is concatenated with the image feature f_e obtained in step S11, giving the input feature finally used for lane detection:

f_el = [f_e, f_mr]_k

The input feature f_el is fed into the pixel classification network defined in step S12 and lane detection is performed with the same network parameters, finally yielding an accurate lane detection result constrained by the geometric relationship of the lane lines.
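The concatenation step above explains the purpose of the zero tensor introduced in step S11: in the preliminary pass the decoder sees [f_e, zero], and in the correction pass the zeros are replaced by f_mr of identical shape, so the same decoder weights apply to both. A short sketch with toy tensor sizes (the 4×4×512 shape is assumed from the encoder sketch, not stated in the text):

```python
import torch

# fe: backbone image feature; fmr: lane-line correction feature of the same size.
fe  = torch.randn(1, 512, 4, 4)
fmr = torch.randn(1, 512, 4, 4)

# Preliminary pass: f_ez = [f_e, zero]_k -- zeros hold the channel count fixed.
fez = torch.cat([fe, torch.zeros_like(fe)], dim=1)
# Correction pass: f_el = [f_e, f_mr]_k -- identical shape, so the pixel
# classification network of step S12 can consume both with the same parameters.
fel = torch.cat([fe, fmr], dim=1)
print(fez.shape, fel.shape)
```

Because fez and fel share one shape, no second decoder has to be trained for the correction pass; only the second half of the input channels changes.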
Preferably, step S22 comprises the following sub-steps:

Step S221: extract a lane correction feature from the preliminary lane detection result, to impose a geometric constraint on lane-line detection. Specifically:

So that the lane correction feature can be fused with the image feature f_e from step S11, the lane correction feature f_lr output by the correction-feature extraction network must have the same size as f_e. Based on this, the structure of the correction-feature extraction network is: B-CR(32)-CR(32)-M-CR(64)-CR(64)-M-CR(128)-CR(128)-CR(128)-M-CR(256)-CR(256)-CR(256)-M-CR(512)-CR(512)-CR(512), where B denotes a batch normalization layer, C a convolutional layer, R an activation layer, and M a down-sampling layer; the number in parentheses is the number of output channels of the convolutional layer.

The correction-feature extraction network takes as input the feature map output by the penultimate deconvolution layer of the pixel classification network in step S12, and re-extracts features from it.

Step S222: use the lane correction feature f_lr to correct the lane-line detection result and generate an accurate lane-line detection result. Specifically:

The lane correction feature f_lr obtained in step S221 is concatenated with the image feature f_e obtained in step S11, giving the input feature finally used for lane-line detection:

f_em = [f_e, f_lr]_k

The input feature f_em is fed into the pixel classification network defined in step S13 and lane-line detection is performed with the same network parameters, finally yielding an accurate lane-line detection result constrained by the geometry of the lane.
Preferably, the method further includes any one or more of the following features:

The driving-scene image after size reduction has size w×h×3, where w is the image width, h the image height, and 3 the number of image channels;

The image feature f_e has size (w/16)×(h/16)×512;

The image feature f_ez has size (w/16)×(h/16)×1024;

In step S12, the categories into which each pixel is classified using the feature map are: lane region and non-lane region;

In step S13, the categories into which each pixel is classified using the feature map are: lane-line region and non-lane-line region.
Preferably, the method further includes any one or more of the following features:

The lane-line correction feature f_mr has size (w/16)×(h/16)×512;

The input feature f_el has size (w/16)×(h/16)×1024;

where w is the width and h the height of the driving-scene image after size reduction.
Preferably, the method further includes any one or more of the following features:

The lane correction feature f_lr has size (w/16)×(h/16)×512;

The input feature f_em has size (w/16)×(h/16)×1024;

where w is the width and h the height of the driving-scene image after size reduction.
Preferably, the method further includes a step S3: optimize the lane detection result and lane-line detection result using loss functions based on structural information combined with the cross-entropy loss function, and train all of the above networks simultaneously, end to end.
Preferably, step S3 is specifically:

For the lane detection result:

Boundary consistency is measured with a loss function based on boundary consistency; via the intersection over union, a loss function based on the intersection-over-union ratio is obtained to optimize the lane detection result. Here, the boundary-consistency loss is a loss function that assumes lane and lane line share boundaries as an intrinsic requirement. The intersection-over-union loss l_ba is defined as:

l_ba = 1 − IoU, with IoU = Σ_i p(x_i)·y(x_i) / Σ_i (p(x_i) + y(x_i) − p(x_i)·y(x_i))

where x_i is a pixel of the input driving-scene image, p(x_i) is the probability output by the Sigmoid activation at pixel x_i, y(x_i) is the true class of pixel x_i, and · denotes pixel-level multiplication.
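The intersection-over-union loss can be written as a short differentiable function. This is a sketch of the standard "soft IoU" formulation implied by the terms p(x_i), y(x_i) and the pixel-level product; the exact form in the patent's figures may differ.

```python
import torch

def iou_loss(p, y, eps=1e-6):
    """l_ba = 1 - IoU, with a soft IoU computed from per-pixel
    probabilities p(x_i) and binary labels y(x_i)."""
    inter = (p * y).sum()                 # sum of p(x_i) * y(x_i)
    union = (p + y - p * y).sum()         # sum of p + y - p*y
    return 1.0 - inter / (union + eps)

p = torch.tensor([[0.9, 0.8], [0.1, 0.2]])   # predicted lane probabilities
y = torch.tensor([[1.0, 1.0], [0.0, 0.0]])   # ground-truth lane mask
print(float(iou_loss(p, y)))                 # 0 would mean perfect overlap
```

Unlike per-pixel cross-entropy, this loss evaluates the prediction as a region, which is what lets it penalize boundary disagreement directly.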
For the lane-line detection result:

The lane-line detection result is optimized with a loss function based on region, defined in terms of the constraint term G(x_i) = 1, which marks all pixels inside the lane region, and I_r(x_i), the probability of the lane region recovered from the lane-line detection result.

The method of recovering the lane region from the lane-line detection result relies on the spatial correlation between pixels: the most closely related pixels should contribute the same information, so the recovered lane-region probability at a pixel equals the probability of the nearest pixel on a lane line. I_r(x_i) is therefore defined as:

I_r(x_i) = I_b(x′_j), with x′_j = argmin_{m_j} d(x_i, m_j)

where d(x_i, m_j) is the Euclidean distance between pixels x_i and m_j, I_b(x′_j) is the lane-line probability at pixel x′_j, and argmin_{m_j} selects the pixel position that minimizes the function after it. The resulting region-based loss l_aa penalizes, for every pixel with G(x_i) = 1, the discrepancy between the detected probability and the recovered probability I_r(x_i).
The four loss functions are added with weights to obtain the loss function l used to train the whole network, defined as:

l = l_lce + l_mce + λ1·l_ba + λ2·l_aa

where l_lce is the loss function of the lane detection target, l_mce the loss function of the lane-line detection target, λ1 the weight of the intersection-over-union loss l_ba, and λ2 the weight of the region-based loss l_aa.
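The weighted combination can be sketched as follows. The patent gives no concrete values for λ1 and λ2, so the defaults below are illustrative; binary cross-entropy is assumed for l_lce and l_mce since both heads produce single-channel Sigmoid probability maps.

```python
import torch
import torch.nn.functional as F

def total_loss(lane_p, lane_y, line_p, line_y, l_ba, l_aa,
               lam1=0.5, lam2=0.5):
    """l = l_lce + l_mce + lam1 * l_ba + lam2 * l_aa (weights assumed)."""
    l_lce = F.binary_cross_entropy(lane_p, lane_y)  # lane detection target
    l_mce = F.binary_cross_entropy(line_p, line_y)  # lane-line detection target
    return l_lce + l_mce + lam1 * l_ba + lam2 * l_aa

lane_p = torch.tensor([0.9, 0.2]); lane_y = torch.tensor([1.0, 0.0])
line_p = torch.tensor([0.7, 0.1]); line_y = torch.tensor([1.0, 0.0])
l = total_loss(lane_p, lane_y, line_p, line_y,
               l_ba=torch.tensor(0.3), l_aa=torch.tensor(0.2))
print(float(l))
```

Since every term is differentiable, a single backward pass through l trains both branches and the correction networks at once, which is what enables the end-to-end training claimed above.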
The lane detection method based on geometric regularization constraints provided by the invention performs lane detection through mutually constraining sub-networks. Specifically, the invention constructs a multi-target network structure that learns the intrinsic geometric connection between lane and lane lines, and realizes mutual optimization of the detection results of the two targets through the feature extraction network; compared with general methods, it therefore obtains better detection results under complex scenes and interference. In addition, on the basis of existing loss functions, the invention proposes loss functions based on geometric constraints to guide network training, improving detection accuracy.

Compared with the prior art, the present invention has the following advantages:

The invention makes effective use of both the highly consistent lane-region information and the lane-line information containing curved edges in the traffic scene. Compared with existing methods, it uses several kinds of image features simultaneously, overcoming the limitations of existing methods under certain kinds of interference; it can therefore be applied to different scenes and has stronger robustness.

On the basis of a simple multi-target network, the invention adds information transfer between the targets, forming a two-stage lane detection network. By extracting features from the preliminary detection results, the invention enhances the information-sharing effect of the multi-target network.

During network training, the invention introduces loss functions based on inherent geometric constraints, explicitly bringing geometric constraints into network training and further improving detection accuracy.
Description of the drawings

Other features, objects and advantages of the invention will become more apparent upon reading the detailed description of non-limiting embodiments with reference to the following drawings:

Fig. 1 is the framework of the lane detection network based on geometric regularization constraints in one embodiment of the invention.

Fig. 2 illustrates the loss function based on boundary prior knowledge in one embodiment of the invention, where (a) compares the detected lane with the actual lane region, and (b) illustrates the loss function used to measure the boundary consistency between the detected and actual lane regions.

Fig. 3 illustrates the loss function based on region prior knowledge in one embodiment of the invention, where (a) compares the detected lane lines with the actual lane lines, and (b) shows the lane region generated from the lane-line detection result. In the figure, the solid lines in I1 and I2 are detected lane lines and the dotted line is a missed lane line; A is an arbitrary position in the lane region, and P1 and P2 are the intersections of the perpendiculars from A with the two lane lines.
Specific embodiments

The embodiments of the present invention are described in detail below. The present embodiment is implemented under the premise of the technical scheme of the invention, and detailed implementation methods and specific operation processes are given, but the protection scope of the invention is not limited to the following embodiments.
Referring to Fig. 1, a lane detection method based on geometric regularization constraints comprises the following steps:

Step S1: extract features from the input image (a driving-scene image) to obtain preliminary lane detection and lane-line detection results;

Step S2: cross-compare the preliminary lane and lane-line detection results, correct erroneously detected regions, and output the final lane detection result.

The above lane detection method based on geometric regularization constraints successfully realizes the mutual constraint of the networks and obtains high-quality lane detection results.
Preferably, step S1 comprises the following sub-steps:

Step S11: build a feature extraction network from multiple convolutional layers and down-sampling layers, and extract image features from the input driving-scene image.

The input of the feature extraction network is the driving-scene image after size reduction by a down-sampling layer; its size is w×h×3, where w is the image width, h the image height, and 3 the number of image channels. Through the convolutional layers, the feature extraction network progressively extracts image features from concrete to abstract; the down-sampling layers both keep the computational load from growing explosively with network depth and extract the most salient image features, preventing the loss of key information during down-sampling.

Specifically, the structure of the feature extraction network is: B-CR(32)-CR(32)-M-CR(64)-CR(64)-M-CR(128)-CR(128)-CR(128)-M-CR(256)-CR(256)-CR(256)-M-CR(512)-CR(512)-CR(512), where B denotes a batch normalization layer, C a convolution layer, R a ReLU activation layer, and M a down-sampling (max pooling) layer; the number in parentheses is the number of output channels of the convolutional layer.

The ReLU activation is defined as

ReLU(x) = max(0, x)

where x is the input of the ReLU activation layer.

The image feature output by the feature extraction network is f_e, of size (w/16)×(h/16)×512. To keep tensor sizes invariant, a zero tensor of the same size as f_e is concatenated to f_e, so the image feature finally output by the feature extraction network is f_ez, defined as

f_ez = [f_e, zero]_k

where [·]_k denotes concatenation of the two tensors along the k-th dimension; preferably k = 3, the channel dimension. The size of f_ez is (w/16)×(h/16)×1024.
Step S12: from the extracted image feature f_ez, perform preliminary lane-region detection on the input driving-scene image using a pixel classification network composed of deconvolution layers and up-sampling layers.

Specifically:

The image feature f_ez extracted in step S11 is passed through a pixel classification network built from up-sampling and deconvolution layers to obtain a feature map with the same resolution as the original input driving-scene image, and this feature map is used to classify each pixel into its category. Since the image resolution must be restored, the pixel classification network used here has a network structure mirror-symmetric to the feature extraction network.

The specific structure of the pixel classification network is: DR(512)-DR(512)-DR(512)-U-DR(256)-DR(256)-DR(256)-U-DR(128)-DR(128)-DR(128)-U-DR(64)-DR(64)-U-DR(32)-DS(1), where D denotes a deconvolution layer, U an up-sampling layer, and S a Sigmoid activation layer; the number in parentheses is the number of output channels of the deconvolution layer. Note that the last deconvolution layer has one output channel, in order to decide whether a pixel belongs to the lane region.

The Sigmoid activation is defined as

Sigmoid(x) = 1 / (1 + e^(-x))

where x is the input of the Sigmoid activation layer.

With as many up-sampling layers as there were down-sampling layers, the pixel classification network restores the feature map to the resolution of the input image, so that feature-map positions and pixels correspond one to one. The Sigmoid activation classifies pixels in the form of probabilities: the final output probability map gives, for each pixel, the probability of belonging to the lane region, i.e. the preliminary lane detection result.

Step S13: from the extracted image feature f_ez, perform preliminary lane-line detection on the input driving-scene image using a pixel classification network composed of deconvolution layers and up-sampling layers.

Specifically:

Using the same network structure as in step S12 (DR(512)-DR(512)-DR(512)-U-DR(256)-DR(256)-DR(256)-U-DR(128)-DR(128)-DR(128)-U-DR(64)-DR(64)-U-DR(32)-DS(1)), the image feature f_ez extracted in step S11 is passed through a pixel classification network built from up-sampling and deconvolution layers to obtain a feature map with the same resolution as the original input driving-scene image, and each pixel is classified into its category using this feature map. As in step S12, step S13 takes the same image feature f_ez as input.
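The shared-feature, two-head arrangement of steps S12 and S13 can be sketched as a small module: one f_ez tensor feeds two structurally identical but independently parameterized heads. The heads below are reduced to a single up-sampling stage and one deconvolution purely for illustration; they are not the full DR/U/DS stacks described above.

```python
import torch
import torch.nn as nn

class TwoHeadDetector(nn.Module):
    """One shared feature f_ez feeds two independent pixel-classification
    heads: lane region (step S12) and lane lines (step S13)."""
    def __init__(self, ch=1024):
        super().__init__()
        def head():
            return nn.Sequential(
                nn.Upsample(scale_factor=16),            # restore input resolution
                nn.ConvTranspose2d(ch, 1, 3, padding=1), # DS(1)-style output
                nn.Sigmoid())
        self.lane_head = head()   # separate parameters for each target
        self.line_head = head()

    def forward(self, fez):
        return self.lane_head(fez), self.line_head(fez)

net = TwoHeadDetector()
fez = torch.randn(1, 1024, 4, 4)
lane_prob, line_prob = net(fez)
print(lane_prob.shape, line_prob.shape)
```

Sharing f_ez is what lets the network learn the inner connection between the two targets, while the separate heads keep lane and lane-line predictions distinct.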
Preferably, step S2 comprises the following sub-steps:

Step S21: based on the image feature f_e and the preliminary lane-line detection result, correct the lane detection result by extracting the geometric constraints contained in the lane lines;

Step S22: based on the image feature f_e and the preliminary lane detection result, correct the lane-line detection result by extracting the geometric constraints of the lane edges.
Preferably, step S21 comprises the following sub-steps:

Step S211: extract a lane-line correction feature from the preliminary lane-line detection result, to impose a geometric constraint on lane detection.

Specifically:

So that the lane-line correction feature can be fused with the image feature f_e from step S11, the tensor f_mr output by the correction-feature extraction network (i.e. the lane-line correction feature f_mr) must have the same size as f_e. The correction-feature extraction network therefore has the same structure as in step S11 (B-CR(32)-CR(32)-M-CR(64)-CR(64)-M-CR(128)-CR(128)-CR(128)-M-CR(256)-CR(256)-CR(256)-M-CR(512)-CR(512)-CR(512)); the lane-line correction feature f_mr it outputs has size (w/16)×(h/16)×512. To improve feature extraction efficiency and accelerate network convergence, this network takes as input the feature map output by the penultimate deconvolution layer of the pixel classification network in step S13, rather than the output of the last deconvolution layer, and re-extracts features from it.

Step S212: use the lane-line correction feature f_mr to correct the lane detection result and generate an accurate lane detection result.

Specifically:

The lane-line correction feature f_mr obtained in step S211 is concatenated with the image feature f_e obtained in step S11, giving the input feature finally used for lane detection:

f_el = [f_e, f_mr]_k

Preferably k = 3; the resulting input feature f_el has size (w/16)×(h/16)×1024. The input feature f_el is fed into the pixel classification network defined in step S12 and lane detection is performed with the same network parameters, finally yielding an accurate lane detection result constrained by the geometric relationship of the lane lines.
Preferably, step S22 comprises the following sub-steps:

Step S221: extract a lane correction feature from the preliminary lane detection result, to impose a geometric constraint on lane-line detection.

Specifically:

So that the lane correction feature can be fused with the image feature f_e from step S11, the tensor f_lr output by the correction-feature extraction network (i.e. the lane correction feature f_lr) must have the same size as f_e. The correction-feature extraction network therefore has the same structure as in step S11 (B-CR(32)-CR(32)-M-CR(64)-CR(64)-M-CR(128)-CR(128)-CR(128)-M-CR(256)-CR(256)-CR(256)-M-CR(512)-CR(512)-CR(512)); the lane correction feature f_lr it outputs has size (w/16)×(h/16)×512. To improve feature extraction efficiency and accelerate network convergence, this network takes as input the feature map output by the penultimate deconvolution layer of the pixel classification network in step S12, rather than the output of the last deconvolution layer, and re-extracts features from it.

Step S222: use the lane correction feature f_lr to correct the lane-line detection result and generate an accurate lane-line detection result.

Specifically:

The lane correction feature f_lr obtained in step S221 is concatenated with the image feature f_e obtained in step S11, giving the input feature finally used for lane-line detection:

f_em = [f_e, f_lr]_3

The resulting input feature f_em has size (w/16)×(h/16)×1024. The input feature f_em is fed into the pixel classification network defined in step S13 and lane-line detection is performed with the same network parameters, finally yielding an accurate lane-line detection result constrained by the geometry of the lane.
Preferably, the lane detection method based on geometric regularization constraint further includes step S3: optimizing the lane detection result and the lane-line detection result through loss functions based on structural information combined with the cross-entropy loss function, and training the network.
Step S3 is specifically:
Referring to Fig. 2, for the lane detection result:
boundary consistency is measured by intersection-over-union (IoU), and the obtained IoU-based loss function is used to optimize the lane detection result; here, the boundary-consistency-based loss function rests on the assumption that the lane and the lane line are internally consistent along their shared boundary; the IoU-based loss function l_ba is defined as:
l_ba = 1 - IoU
where x_i is a pixel of the input driving-scene image, p(x_i) is the probability output by the Sigmoid activation layer at the position of pixel x_i, y(x_i) is the ground-truth class of pixel x_i, and * denotes pixel-level multiplication;
Referring to Fig. 3, for the lane-line detection result:
the lane-line detection result is optimized using a region-based loss function; in the region-based loss function, the constraint term G(x_i) = 1 marks all pixels inside the lane region, and I_r(x_i) denotes the recovered probability of lane-region pixel x_i derived from the lane-line detection result;
the method that recovers the lane region from the lane-line detection result relies on the spatial correlation between pixels, i.e., maximally correlated pixels should contribute the same information; therefore the recovered probability of a lane-region pixel equals the probability of the nearest pixel on the lane line, and I_r(x_i) is defined as:
I_r(x_i) = I_b(x'_j), with x'_j = argmin_{m_j} d(x_i, m_j)
where d(x_i, m_j) is the Euclidean distance between pixels x_i and m_j, I_b(x'_j) is the lane-line probability at pixel x'_j, and argmin_{m_j} selects the pixel position that minimizes the function following it; the finally obtained region-based loss function l_aa is defined over these terms.
The four different loss functions are summed with their weights to obtain the loss function l used to train the whole network, defined as follows:
l = l_lce + l_mce + λ1·l_ba + λ2·l_aa
where l_lce is the loss function of the lane detection target, l_mce is the loss function of the lane-line detection target, λ1 is the weight of the IoU-based loss function l_ba, and λ2 is the weight of the region-based loss function l_aa.
The lane detection method based on geometric regularization constraint described above addresses the drivable-region segmentation problem in intelligent driving scenes and is an efficient, high-accuracy drivable-region segmentation method. It includes: step S1, performing preliminary lane detection and lane-line detection on the input image to obtain preliminary segmentation results; step S2, cross-comparing the preliminary lane and lane-line detection results, correcting erroneously detected regions, and outputting the final lane detection result. On the basis of existing lane detection models, the present invention introduces the geometric information intrinsic to roads in traffic scenes as a constraint, which effectively suppresses environmental interference and improves lane detection accuracy. The present invention requires no image pre-processing or post-processing and achieves end-to-end lane detection. Experimental results show that, compared with classical detection methods, the present invention achieves a considerable improvement in detection accuracy.
For the lane detection method based on geometric regularization constraint provided above, the design principle and implementation steps are described in detail below.
Unlike ordinary semantic segmentation, lane detection must not only separate objects of different types in the scene but also distinguish different lanes to obtain high-precision lane regions. To better overcome the influence of self-similarity between adjacent lanes on detection quality, the present invention proposes a multi-target network structure that learns the intrinsic geometric relationship between lanes and lane lines, and realizes mutual optimization of the detection results across targets through a feature-extraction network, thereby obtaining better detection results than conventional methods under complex scenes and interference.
1. Preliminary lane and lane-line detection
Features are first extracted from the original image. The input image has size w*h*3, where w is the image width, h the image height, and 3 the number of image channels. Through its convolutional layers, the feature-extraction network extracts image features layer by layer, from concrete to abstract. Down-sampling, on the one hand, keeps the computational cost from growing explosively with network depth; on the other hand, it retains the most salient image features, preventing the loss of key information during down-sampling.
The specific feature-extraction network structure is: B-CR(32)-CR(32)-M-CR(64)-CR(64)-M-CR(128)-CR(128)-CR(128)-M-CR(256)-CR(256)-CR(256)-M-CR(512)-CR(512)-CR(512), where B denotes a batch normalization layer (Batch Normalization Layer), C a convolutional layer (Convolution Layer), R an activation layer (ReLU), and M a down-sampling layer (Max Pooling Layer); the number in parentheses is the output channel count of the convolutional layer. The activation layer ReLU is defined as:
ReLU(x) = max(0, x)
where x is the input of the activation layer ReLU. The image feature finally output by the feature-extraction network is f_e, whose size is (w/16)×(h/16)×512. Because correction features must be extracted in step S2 and fused with f_e, the channel count of the deconvolution kernels in step S2 is not equal to the channel count of f_e. To ensure that the network can be trained end to end, the present invention concatenates to the feature f_e an all-zero tensor zero of the same size as f_e; the feature finally output is f_ez, defined as:
f_ez = [f_e, zero]_3
where [·]_k denotes concatenation of two tensors along the k-th dimension. The final feature f_ez has size (w/16)×(h/16)×1024.
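As a rough sketch of this shape bookkeeping (assuming size-preserving convolutions and stride-2 max pooling, which the text implies but does not state explicitly; the helper name and the 1280×384 input size are illustrative only), the encoder layout string can be walked to recover the size of f_e, with the zero-concatenation doubling the channel count for f_ez:

```python
def encoder_output_shape(layout, w, h, c=3):
    # Walk the layout string: each M (max pooling) halves width and height,
    # each CR(n) (conv + ReLU, size-preserving) sets the channel count.
    for token in layout.split("-"):
        if token == "M":
            w, h = w // 2, h // 2
        elif token.startswith("CR("):
            c = int(token[3:-1])
    return w, h, c

LAYOUT = ("B-CR(32)-CR(32)-M-CR(64)-CR(64)-M-CR(128)-CR(128)-CR(128)-M-"
          "CR(256)-CR(256)-CR(256)-M-CR(512)-CR(512)-CR(512)")

w16, h16, c_e = encoder_output_shape(LAYOUT, 1280, 384)  # f_e shape
c_ez = 2 * c_e  # f_ez = [f_e, zero]: zero-concatenation doubles the channels
```

With four pooling layers the spatial size shrinks by 16 in each dimension, so a 1280×384 input yields f_e of 80×24×512 and f_ez with 1024 channels.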
The feature f_ez is processed by a pixel classification network composed of up-sampling layers and deconvolution layers to obtain a feature map with the same resolution as the original image, and the feature map is used to classify the category of each pixel. Since the image resolution must be restored, a network structure mirror-symmetric to the feature-extraction network is used here.
The specific network structure is DR(512)-DR(512)-DR(512)-U-DR(256)-DR(256)-DR(256)-U-DR(128)-DR(128)-DR(128)-U-DR(64)-DR(64)-U-DR(32)-DS(1), where D denotes a deconvolution layer (Deconvolution Layer), U an up-sampling layer (Up-sample Layer), and S an activation layer (Sigmoid); the number in parentheses is the output channel count of the deconvolution layer, and it should be noted that the output channel count of the last layer is 1.
The activation layer Sigmoid is defined as:
Sigmoid(x) = 1 / (1 + e^(-x))
where x is the input of the activation layer Sigmoid. With the number of up-sampling layers equal to the number of down-sampling layers, the pixel classification network restores the feature map to the resolution of the input image, so that feature-map positions and pixels correspond one to one. The Sigmoid function classifies each pixel in the form of a probability, and the final output is a probability map.
The loss function paired with the Sigmoid activation layer is the cross-entropy function, defined as:
l = -Σ_i [ y(x_i)·log p(x_i) + (1 - y(x_i))·log(1 - p(x_i)) ]
where x_i is a pixel of the image, p(x_i) is the probability output by the Sigmoid activation layer at the position of pixel x_i, and y(x_i) is the ground-truth class of pixel x_i; in the present invention, y(x_i) is 1 if x_i belongs to a lane or a lane line, and 0 otherwise.
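As a minimal sketch of this pixelwise cross-entropy (the function name is illustrative, and the patent does not state whether the sum is averaged over pixels, so the mean reduction here is an assumption):

```python
import math

def binary_cross_entropy(p, y):
    # p: Sigmoid outputs p(x_i); y: ground-truth labels y(x_i) in {0, 1}
    # (1 = the pixel belongs to a lane / lane line, 0 = it does not).
    eps = 1e-12  # guards against log(0) for saturated predictions
    return -sum(yi * math.log(pi + eps) + (1 - yi) * math.log(1 - pi + eps)
                for pi, yi in zip(p, y)) / len(p)
```

A maximally uncertain prediction (p = 0.5 everywhere) gives a loss of ln 2 ≈ 0.693 regardless of the labels, while confident correct predictions drive the loss toward zero.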
Since the present invention proposes a multi-target network, two independent pixel classification networks are needed to perform lane detection and lane-line detection respectively. The two networks use their own convolution kernels, and during training each pixel classification network is updated separately according to its own detection result. Because the feature-extraction network is shared by the two sub-networks, it is updated jointly under the influence of both detection results. The training loss function is:
l = l_lce + l_mce
where l_lce is the loss function of the lane detection target and l_mce is the loss function of the lane-line detection target; the two functions carry identical weight during training.
2. Lane and lane-line detection correction
On the basis of the first step, the present invention extracts correction features from the preliminary detection results and imposes geometric constraints on the detection. To extract correction features that can be fused with the feature f_e, the correction feature-extraction network uses the same network structure as the feature-extraction network, so that its final feature size is identical to that of f_e. Meanwhile, to reduce the number of trainable parameters and accelerate network convergence, this feature-extraction network receives the feature map output by the penultimate deconvolution layer of the pixel classification network, rather than the output of the last deconvolution layer, and performs feature extraction on it. Finally, the two correction feature-extraction networks receive the preliminary detection results of the lane and of the lane line respectively, and output the correction features f_lr and f_mr.
The correction features f_lr and f_mr are concatenated with the feature f_e to obtain the input features f_el and f_em eventually used for lane detection and lane-line detection, defined as:
f_el = [f_e, f_mr]_3
f_em = [f_e, f_lr]_3
The resulting features f_el and f_em have size (w/16)×(h/16)×1024. To realize end-to-end training and detection while reducing network parameters, the features f_el and f_em are fed into the pixel classification networks defined in the first step, and the weights are shared in both the forward and backward passes. Since the feature received by each pixel classification network in the first step is half all-zero, the corresponding part of the weights has no effect in the first step and does not participate in back-propagation there; that part of the weights participates in back-propagation only in this step. The training method of this step is described below.
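The claim that the weights fed by the all-zero half of the feature receive no first-stage gradient follows directly from the chain rule: for a linear unit, the gradient with respect to a weight equals the input it multiplies. A toy check (the function names are illustrative, not from the patent):

```python
# out = w1*x1 + w2*x2, so d(out)/d(w1) = x1 and d(out)/d(w2) = x2.
def forward(w1, w2, x1, x2):
    return w1 * x1 + w2 * x2

def weight_grads(x1, x2):
    # Gradients of `out` with respect to w1 and w2 (hand-derived above).
    return x1, x2

# x2 = 0.0 plays the role of the all-zero half of f_ez in the first stage:
g1, g2 = weight_grads(1.5, 0.0)
```

g2 is exactly zero, so w2 is untouched by first-stage updates, just as the correction-feature half of the shared classifier weights is untouched until this step.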
3. Definition of the structural loss function
To introduce the lane geometry constraint explicitly, the present invention combines loss functions based on structural information with the cross-entropy loss function, for optimizing the detection results and training the network.
For the lane detection result, the vast majority of false detections occur in the form of regions, and a plain cross-entropy loss cannot measure the degree of deviation from the lane geometry. Therefore, for lane detection the present invention uses a loss function based on boundary consistency; this loss function rests on the assumption that the lane and the lane line are internally consistent along their shared boundary. Since a naive boundary comparison may produce very large penalty values and make the network hard to train, the present invention uses intersection-over-union (IoU) to measure boundary consistency and obtains an IoU-based loss function to optimize the lane detection result. The IoU-based loss function l_ba is defined as follows:
l_ba = 1 - IoU, with IoU = Σ_i p(x_i)*y(x_i) / (Σ_i p(x_i) + Σ_i y(x_i) - Σ_i p(x_i)*y(x_i))
where x_i is a pixel of the image, p(x_i) is the probability output by the Sigmoid activation layer at the position of pixel x_i, y(x_i) is the ground-truth class of pixel x_i, and * denotes pixel-level multiplication. Since the whole computation of this loss function uses pixel-level operations, the loss function is differentiable and supports end-to-end training.
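A minimal sketch of this differentiable IoU loss (a soft IoU built from pixelwise products, as implied by the pixel-level multiplication above; the function name is illustrative):

```python
def iou_loss(p, y):
    # p: predicted probabilities p(x_i); y: ground-truth labels y(x_i).
    # Intersection and union are built from pixelwise products and sums,
    # so the expression stays differentiable in p.
    inter = sum(pi * yi for pi, yi in zip(p, y))
    union = sum(p) + sum(y) - inter
    return 1.0 - inter / union  # l_ba = 1 - IoU
```

The loss is 0 for a perfect prediction and approaches 1 as the predicted and ground-truth regions stop overlapping.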
For the lane-line detection result, the detection is more easily affected by low signal-to-noise ratio and suffers missed detections, so the present invention adopts a region-based loss function to optimize lane-line detection. In this loss function, the constraint term G(x_i) = 1 marks all pixels inside the lane region, and I_r(x_i) denotes the recovered probability of lane-region pixel x_i derived from the lane-line detection result.
The method of recovering the lane region from the lane-line detection result relies on the spatial correlation between pixels, i.e., maximally correlated pixels should contribute the same information. Therefore the recovered probability of a lane-region pixel equals the probability of the nearest pixel on the lane line, defined as:
I_r(x_i) = I_b(x'_j), with x'_j = argmin_{m_j} d(x_i, m_j)
where d(x_i, m_j) is the Euclidean distance between pixels x_i and m_j, I_b(x'_j) is the lane-line probability at pixel x'_j, and argmin_{m_j} selects the pixel position that minimizes the function following it; the finally obtained region-based loss function l_aa is defined over these terms.
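The nearest-line-pixel recovery of I_r can be sketched as a brute-force argmin over Euclidean distances (coordinates and names are illustrative; squared distance is used since it yields the same argmin):

```python
def recover_region_probs(lane_pixels, line_pixels, line_probs):
    # For each lane-region pixel x_i, find the nearest lane-line pixel x'_j
    # (argmin over Euclidean distance) and copy its probability I_b(x'_j).
    out = []
    for (xi, yi) in lane_pixels:
        d2 = [(xi - xj) ** 2 + (yi - yj) ** 2 for (xj, yj) in line_pixels]
        j = d2.index(min(d2))  # x'_j = argmin_{m_j} d(x_i, m_j)
        out.append(line_probs[j])
    return out
```

Each recovered value I_r(x_i) thus inherits the confidence of the closest detected lane-line pixel, which is what ties the region loss back to the lane-line prediction.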
To train the whole network, the present invention sums the four different loss functions with their weights; the final loss function l is defined as follows:
l = l_lce + l_mce + λ1·l_ba + λ2·l_aa.
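The weighted sum is direct to state in code (the λ values shown are placeholders, not values given in the patent):

```python
def total_loss(l_lce, l_mce, l_ba, l_aa, lam1=1.0, lam2=1.0):
    # l = l_lce + l_mce + λ1·l_ba + λ2·l_aa: the two cross-entropy terms
    # carry equal weight; the structural terms are scaled by λ1 and λ2.
    return l_lce + l_mce + lam1 * l_ba + lam2 * l_aa
```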
Specific example
For this specific example, the KITTI and RVD databases are chosen for experiments; the performance of the present embodiment is observed, compared with existing state-of-the-art methods, and the experimental results are analyzed. In order to compare the performance differences between models, the network is trained only with the training images in each database, without extra data.
The KITTI database contains 289 training images and 290 test images, covering three different road scenes: one-lane roads, multi-lane roads, and unmarked roads. A one-lane road is defined as a road with only two lanes in opposite directions; a multi-lane road has multiple lanes in one driving direction; an unmarked road has no obvious lane markings. Since parts of the lanes on unmarked roads are difficult to define, a small number of unmarked roads are excluded during actual training and testing.
The RVD database contains more than 10 hours of traffic-scene imagery acquired with multiple cameras and more than 10,000 manually annotated images, covering different weather and road conditions, including highway scenes, urban-road scenes, rainy-day scenes, and night scenes.
Implementation results
On the KITTI database, compared with currently existing methods, the present invention is greatly improved in precision (P), recall (R), F1 score, and intersection-over-union (IoU). As shown in Table 1, the present invention obtains better performance in the one-lane, multi-lane, and unmarked-road scenes, which shows that it adapts better to changes in scene and environment and is more robust.
Table 1 shows the experimental results and comparison on the KITTI dataset.
It is worth noting that, as shown in Table 2, compared with a plain multi-task network, the correction features and the structural loss function clearly improve performance. Relative to the multi-task network, the correction features raise the IoU of the validation results by 1.3% and the F1 score by 0.007, while the structural loss function raises the IoU by 0.9% and the F1 score by 0.005. The model with both the correction features and the structural loss function added achieves the best performance, raising the IoU by 2.1% and the F1 score by 0.012. Therefore both the correction features and the structural loss function play a vital role in the performance improvement.
On the RVD database, as shown in Table 2, the present invention also obtains good results. Notably, unlike other methods, the precision, recall, F1 score, and IoU of the present invention do not fluctuate sharply with scene changes, especially in night scenes. The difficulty of recognizing lane lines under illumination changes is hard for other methods to resolve, but because the present invention performs lane correction, it can apply a secondary correction to the lanes detected at night and thereby obtain better results.
Table 2 shows the experimental results and comparison on the RVD dataset.
Specific embodiments of the present invention have been described above. It should be understood that the invention is not limited to the above particular embodiments; those skilled in the art can make various variations or modifications within the scope of the claims without affecting the substance of the invention.
Claims (10)
1. A lane detection method based on geometric regularization constraint, characterized by comprising the following steps:
step S1: performing feature extraction on an input driving-scene image to obtain a preliminary lane detection result and a preliminary lane-line detection result;
step S2: cross-comparing the preliminary lane detection result and the preliminary lane-line detection result, correcting erroneously detected regions, and outputting the final lane detection result and lane-line detection result.
2. The lane detection method based on geometric regularization constraint according to claim 1, characterized in that step S1 comprises the following sub-steps:
step S11: constructing a feature-extraction network from multiple convolutional layers and down-sampling layers to extract the image features of the input driving-scene image; wherein:
the input of the feature-extraction network is the input driving-scene image after its size is reduced by the down-sampling layers; through the convolutional layers, the feature-extraction network extracts image features layer by layer, from concrete to abstract;
the network structure of the feature-extraction network is: B-CR(32)-CR(32)-M-CR(64)-CR(64)-M-CR(128)-CR(128)-CR(128)-M-CR(256)-CR(256)-CR(256)-M-CR(512)-CR(512)-CR(512); wherein B denotes a batch normalization layer, C a convolutional layer, R a ReLU activation layer, and M a down-sampling layer; the number in parentheses is the output channel count of the convolutional layer; the ReLU activation layer is defined as:
ReLU(x) = max(0, x)
where x is the input of the ReLU activation layer;
the image feature output by the feature-extraction network is f_e; to keep the tensor scale unchanged, an all-zero tensor zero of the same size as the image feature f_e is concatenated to f_e, and the image feature finally output by the feature-extraction network is f_ez, defined as:
f_ez = [f_e, zero]_k
where [·]_k denotes concatenation of the two tensors f_e and zero along the k-th dimension;
step S12: for the extracted image feature f_ez, performing preliminary lane-region detection on the input driving-scene image using a pixel classification network composed of deconvolution layers and up-sampling layers;
step S13: for the extracted image feature f_ez, performing preliminary lane-line detection on the input driving-scene image using a pixel classification network composed of deconvolution layers and up-sampling layers;
wherein steps S12 and S13 use the same image feature f_ez but realize lane and lane-line detection with two separate pixel classification networks respectively;
the image feature f_ez extracted in step S11 is passed through each pixel classification network composed of up-sampling layers and deconvolution layers to obtain a feature map with the same resolution as the input driving-scene image, and the feature map is used to classify the category of each pixel;
the pixel classification network is mirror-symmetric to the feature-extraction network; the network structure of the pixel classification network is: DR(512)-DR(512)-DR(512)-U-DR(256)-DR(256)-DR(256)-U-DR(128)-DR(128)-DR(128)-U-DR(64)-DR(64)-U-DR(32)-DS(z); wherein D denotes a deconvolution layer, U an up-sampling layer, and S a Sigmoid activation layer; the number in parentheses is the output channel count of the deconvolution layer; when the output z of the last deconvolution layer is 1, the pixel belongs to the lane region or lane line; when the output z is 0, the pixel does not belong to the lane region or lane line;
the Sigmoid activation layer is defined as:
Sigmoid(x) = 1 / (1 + e^(-x))
where x is the input of the Sigmoid activation layer;
with the number of up-sampling layers equal to the number of down-sampling layers, the pixel classification network restores the feature map to the resolution of the input driving-scene image, so that feature-map positions and pixels correspond one to one; the Sigmoid activation function classifies each pixel in the form of a probability, and the final output probability map gives the probability that each pixel belongs to the lane region or lane line, i.e., the preliminary lane detection result and lane-line detection result.
3. The lane detection method based on geometric regularization constraint according to claim 2, characterized in that step S2 comprises the following sub-steps:
step S21: based on the image feature f_e and the preliminary lane-line detection result, correcting the lane detection result by extracting the geometric constraints contained in the lane lines;
step S22: based on the image feature f_e and the preliminary lane detection result, correcting the lane-line detection result by extracting the geometric constraints of the lane edges.
4. The lane detection method based on geometric regularization constraint according to claim 3, characterized in that step S21 comprises the following sub-steps:
step S211: extracting a lane-line correction feature from the preliminary lane-line detection result, so as to impose a geometric constraint on lane detection; wherein:
in order to extract a lane-line correction feature that can be fused with the image feature f_e obtained in step S11, the lane-line correction feature f_mr output by the correction feature-extraction network is required to have the same size as the image feature f_e; on this basis, the network structure of the correction feature-extraction network is: B-CR(32)-CR(32)-M-CR(64)-CR(64)-M-CR(128)-CR(128)-CR(128)-M-CR(256)-CR(256)-CR(256)-M-CR(512)-CR(512)-CR(512); wherein B denotes a batch normalization layer, C a convolutional layer, R an activation layer, and M a down-sampling layer; the number in parentheses is the output channel count of the convolutional layer;
the correction feature-extraction network receives the feature map output by the penultimate deconvolution layer of the pixel classification network in step S13 and performs feature extraction on it;
step S212: using the lane-line correction feature f_mr to refine the lane detection result and generate an accurate lane detection result; wherein:
the lane-line correction feature f_mr obtained in step S211 is concatenated with the image feature f_e obtained in step S11 to obtain the input feature f_el eventually used for lane detection, defined as:
f_el = [f_e, f_mr]_k
the input feature f_el is fed into the pixel classification network defined in step S12, which performs lane detection with the same network parameters, finally obtaining an accurate lane detection result constrained by the lane-line geometric relationship.
5. The lane detection method based on geometric regularization constraint according to claim 3, characterized in that step S22 comprises the following sub-steps:
step S221: extracting a lane correction feature from the preliminary lane detection result, so as to impose a geometric constraint on lane-line detection; wherein:
in order to extract a lane correction feature that can be fused with the image feature f_e in step S11, the lane correction feature f_lr output by the correction feature-extraction network is required to have the same size as the image feature f_e; on this basis, the network structure of the correction feature-extraction network is: B-CR(32)-CR(32)-M-CR(64)-CR(64)-M-CR(128)-CR(128)-CR(128)-M-CR(256)-CR(256)-CR(256)-M-CR(512)-CR(512)-CR(512); wherein B denotes a batch normalization layer, C a convolutional layer, R an activation layer, and M a down-sampling layer; the number in parentheses is the output channel count of the convolutional layer;
the correction feature-extraction network receives the feature map output by the penultimate deconvolution layer of the pixel classification network in step S12 and performs feature extraction on it;
step S222: using the lane correction feature f_lr to refine the lane-line detection result and generate an accurate lane-line detection result; wherein:
the lane correction feature f_lr obtained in step S221 is concatenated with the image feature f_e obtained in step S11 to obtain the input feature f_em eventually used for lane-line detection, defined as:
f_em = [f_e, f_lr]_k
the input feature f_em is fed into the pixel classification network defined in step S13, which performs lane-line detection with the same network parameters, finally obtaining an accurate lane-line detection result constrained by the lane geometric relationship.
6. The lane detection method based on geometric regularization constraint according to claim 2, characterized by further comprising any one or more of the following features:
the size of the driving-scene image after size reduction is w*h*3, where w is the image width, h the image height, and 3 the number of image channels;
the image feature f_e has size (w/16)×(h/16)×512;
the image feature f_ez has size (w/16)×(h/16)×1024;
in step S12, the categories into which each pixel is classified using the feature map comprise: lane region and non-lane region;
in step S13, the categories into which each pixel is classified using the feature map comprise: lane-line region and non-lane-line region.
7. The lane detection method based on geometric regularization constraint according to claim 4, characterized by further comprising any one or more of the following features:
the lane-line correction feature f_mr has size (w/16)×(h/16)×512;
the input feature f_el has size (w/16)×(h/16)×1024;
where w is the width of the driving-scene image after size reduction and h is its height.
8. The lane detection method based on geometric regularization constraint according to claim 5, characterized by further comprising any one or more of the following features:
the lane correction feature f_lr has size (w/16)×(h/16)×512;
the input feature f_em has size (w/16)×(h/16)×1024;
where w is the width of the driving-scene image after size reduction and h is its height.
9. The lane detection method based on geometric regularization constraint according to any one of claims 1 to 8, characterized by further comprising step S3: optimizing the lane detection result and the lane-line detection result through loss functions based on structural information combined with the cross-entropy loss function, and training all of the above networks simultaneously, end to end.
10. The lane detection method based on geometric regularization constraint according to claim 9, characterized in that step S3 is specifically:
for the lane detection result:
boundary consistency is measured by intersection-over-union (IoU), and the obtained IoU-based loss function is used to optimize the lane detection result; the boundary-consistency-based loss function rests on the assumption that the lane and the lane line are internally consistent along their shared boundary; the IoU-based loss function l_ba is defined as:
l_ba = 1 - IoU
where x_i is a pixel of the input driving-scene image, p(x_i) is the probability output by the Sigmoid activation layer at the position of pixel x_i, y(x_i) is the ground-truth class of pixel x_i, and * denotes pixel-level multiplication;
for the lane-line detection result:
the lane-line detection result is optimized using a region-based loss function; in the region-based loss function, the constraint term G(x_i) = 1 marks all pixels inside the lane region, and I_r(x_i) denotes the recovered probability of lane-region pixel x_i derived from the lane-line detection result;
the method of recovering the lane region from the lane-line detection result relies on the spatial correlation between pixels, i.e., maximally correlated pixels should contribute the same information; therefore the recovered probability of a lane-region pixel equals the probability of the nearest pixel on the lane line, and I_r(x_i) is defined as:
I_r(x_i) = I_b(x'_j), with x'_j = argmin_{m_j} d(x_i, m_j)
where d(x_i, m_j) is the Euclidean distance between pixels x_i and m_j, I_b(x'_j) is the lane-line probability at pixel x'_j, and argmin_{m_j} selects the pixel position that minimizes the function following it; the finally obtained region-based loss function l_aa is defined over these terms;
the four different loss functions are summed with their weights to obtain the loss function l used to train the whole network, defined as follows:
l = l_lce + l_mce + λ1·l_ba + λ2·l_aa
where l_lce is the loss function of the lane detection target, l_mce is the loss function of the lane-line detection target, λ1 is the weight of the IoU-based loss function l_ba, and λ2 is the weight of the region-based loss function l_aa.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810527769.7A CN108846328B (en) | 2018-05-29 | 2018-05-29 | Lane detection method based on geometric regularization constraint |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108846328A true CN108846328A (en) | 2018-11-20 |
CN108846328B CN108846328B (en) | 2020-10-16 |
Family
ID=64207991
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810527769.7A Active CN108846328B (en) | 2018-05-29 | 2018-05-29 | Lane detection method based on geometric regularization constraint |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108846328B (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109839937A (en) * | 2019-03-12 | 2019-06-04 | 百度在线网络技术(北京)有限公司 | Determine method, apparatus, the computer equipment of Vehicular automatic driving planning strategy |
CN110009090A (en) * | 2019-04-02 | 2019-07-12 | 北京市商汤科技开发有限公司 | Neural metwork training and image processing method and device |
CN110148148A (en) * | 2019-03-01 | 2019-08-20 | 北京纵目安驰智能科技有限公司 | A kind of training method, model and the storage medium of the lower edge detection model based on target detection |
CN110163077A (en) * | 2019-03-11 | 2019-08-23 | 重庆邮电大学 | A kind of lane recognition method based on full convolutional neural networks |
CN110427860A (en) * | 2019-07-26 | 2019-11-08 | 武汉中海庭数据技术有限公司 | A kind of Lane detection method, apparatus and storage medium |
CN111209777A (en) * | 2018-11-21 | 2020-05-29 | 北京市商汤科技开发有限公司 | Lane line detection method and device, electronic device and readable storage medium |
CN111832368A (en) * | 2019-04-23 | 2020-10-27 | 长沙智能驾驶研究院有限公司 | Training method and device for travelable region detection model and application |
CN112651328A (en) * | 2020-12-23 | 2021-04-13 | 浙江中正智能科技有限公司 | Iris segmentation method based on geometric position relation loss function |
CN114463720A (en) * | 2022-01-25 | 2022-05-10 | 杭州飞步科技有限公司 | Lane line detection method based on line segment intersection-to-parallel ratio loss function |
CN115496941A (en) * | 2022-09-19 | 2022-12-20 | 哈尔滨工业大学 | Knowledge-enhanced computer vision-based structural health diagnosis method |
CN116682087A (en) * | 2023-07-28 | 2023-09-01 | 安徽中科星驰自动驾驶技术有限公司 | Self-adaptive auxiliary driving method based on space pooling network lane detection |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007218705A (en) * | 2006-02-15 | 2007-08-30 | Mitsubishi Electric Corp | White line model measurement system, measuring truck, and white line model measuring device |
CN105488492A (en) * | 2015-12-25 | 2016-04-13 | 北京大学深圳研究生院 | Color image preprocessing method, road identification method and related device |
CN108009524A (en) * | 2017-12-25 | 2018-05-08 | 西北工业大学 | A kind of method for detecting lane lines based on full convolutional network |
Application Events
- 2018-05-29: Application CN201810527769.7A filed in China; granted as CN108846328B (status: Active)
Non-Patent Citations (2)
Title |
---|
JUN LI et al.: "Deep Neural Network for Structural Prediction and Lane Detection in Traffic Scene", IEEE Transactions on Neural Networks and Learning Systems * |
WANG Zhenbo et al.: "A Multi-Lane Detection Method for Traffic Surveillance Scenes", Computer Engineering and Applications * |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111209777A (en) * | 2018-11-21 | 2020-05-29 | 北京市商汤科技开发有限公司 | Lane line detection method and device, electronic device and readable storage medium |
CN110148148A (en) * | 2019-03-01 | 2019-08-20 | 北京纵目安驰智能科技有限公司 | Training method, model, and storage medium for a lower-edge detection model based on object detection |
CN110163077A (en) * | 2019-03-11 | 2019-08-23 | 重庆邮电大学 | Lane recognition method based on fully convolutional neural networks |
CN109839937A (en) * | 2019-03-12 | 2019-06-04 | 百度在线网络技术(北京)有限公司 | Method, apparatus, and computer equipment for determining a vehicle autonomous-driving planning strategy |
CN109839937B (en) * | 2019-03-12 | 2023-04-07 | 百度在线网络技术(北京)有限公司 | Method, device and computer equipment for determining automatic driving planning strategy of vehicle |
CN110009090A (en) * | 2019-04-02 | 2019-07-12 | 北京市商汤科技开发有限公司 | Neural network training and image processing method and device |
CN111832368A (en) * | 2019-04-23 | 2020-10-27 | 长沙智能驾驶研究院有限公司 | Training method and device for travelable region detection model and application |
CN110427860B (en) * | 2019-07-26 | 2022-03-25 | 武汉中海庭数据技术有限公司 | Lane line identification method and device and storage medium |
CN110427860A (en) * | 2019-07-26 | 2019-11-08 | 武汉中海庭数据技术有限公司 | Lane line identification method, apparatus, and storage medium |
CN112651328A (en) * | 2020-12-23 | 2021-04-13 | 浙江中正智能科技有限公司 | Iris segmentation method based on geometric position relation loss function |
CN114463720A (en) * | 2022-01-25 | 2022-05-10 | 杭州飞步科技有限公司 | Lane line detection method based on a line-segment intersection-over-union (IoU) loss function |
CN115496941A (en) * | 2022-09-19 | 2022-12-20 | 哈尔滨工业大学 | Knowledge-enhanced computer vision-based structural health diagnosis method |
CN115496941B (en) * | 2022-09-19 | 2024-01-09 | 哈尔滨工业大学 | Structural health diagnosis method based on knowledge enhanced computer vision |
CN116682087A (en) * | 2023-07-28 | 2023-09-01 | 安徽中科星驰自动驾驶技术有限公司 | Self-adaptive auxiliary driving method based on space pooling network lane detection |
CN116682087B (en) * | 2023-07-28 | 2023-10-31 | 安徽中科星驰自动驾驶技术有限公司 | Self-adaptive auxiliary driving method based on space pooling network lane detection |
Also Published As
Publication number | Publication date |
---|---|
CN108846328B (en) | 2020-10-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108846328A (en) | Lane detection method based on geometry regularization constraint | |
Henry et al. | Road segmentation in SAR satellite images with deep fully convolutional neural networks | |
Prathap et al. | Deep learning approach for building detection in satellite multispectral imagery | |
CN110414387B (en) | Lane line multi-task learning detection method based on road segmentation | |
Li et al. | Road network extraction via deep learning and line integral convolution | |
CN105930868B (en) | Low-resolution airport target detection method based on hierarchical reinforcement learning | |
CN110084850B (en) | Dynamic scene visual positioning method based on image semantic segmentation | |
CN108596055B (en) | Airport target detection method for high-resolution remote sensing images with complex backgrounds | |
CN109409263A (en) | Urban change detection method for remote sensing images based on a Siamese convolutional network | |
CN113673444B (en) | Intersection multi-view target detection method and system based on angular point pooling | |
CN106778605A (en) | Road network extraction method for remote sensing images aided by navigation data | |
Tan et al. | Vehicle detection in high resolution satellite remote sensing images based on deep learning | |
CN107423747B (en) | Salient object detection method based on a deep convolutional network | |
CN113052106B (en) | Airplane take-off and landing runway identification method based on PSPNet network | |
CN106910202B (en) | Ground-object segmentation method and system for remote sensing images | |
CN109446894A (en) | Multispectral image change detection method based on probabilistic segmentation and Gaussian mixture clustering | |
CN109712071A (en) | UAV image mosaicking and localization method based on trajectory constraints | |
CN107944354A (en) | Vehicle detection method based on deep learning | |
CN114913498A (en) | Parallel multi-scale feature aggregation lane line detection method based on key point estimation | |
CN111383273B (en) | High-speed rail catenary component localization method based on an improved structure reasoning network | |
Bastani et al. | Inferring and improving street maps with data-driven automation | |
Lu et al. | Edge-reinforced convolutional neural network for road detection in very-high-resolution remote sensing imagery | |
CN113989256A (en) | Detection model optimization method, detection method, and detection device for buildings in remote sensing images | |
Yue et al. | SCFNet: Semantic correction and focus network for remote sensing image object detection | |
Gorbachev et al. | Digital processing of aerospace images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||