CN110276267A - Method for detecting lane lines based on Spatial-LargeFOV deep learning network - Google Patents
- Publication number
- CN110276267A CN110276267A CN201910454187.5A CN201910454187A CN110276267A CN 110276267 A CN110276267 A CN 110276267A CN 201910454187 A CN201910454187 A CN 201910454187A CN 110276267 A CN110276267 A CN 110276267A
- Authority
- CN
- China
- Prior art keywords
- lane
- lane line
- spatial
- largefov
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a lane line detection method based on a Spatial-LargeFOV deep learning network. The method locates lane lines by training a semantic segmentation network that partitions the image into lane-line pixel regions. To identify long, continuous targets such as lane lines more effectively, an encoder-decoder structure with LargeFOV as the base network is explored: on the one hand, a spatial convolution structure enhances spatial information; on the other hand, a combination of dilated convolutions with different dilation rates enlarges the receptive field, and skip connections perform multi-level feature fusion, achieving the goals of fusing multi-scale contextual information and refining segmentation edges. The invention achieves accurate lane line detection in different scenes and has good application prospects.
Description
Technical field
The present invention relates to the field of computer vision, and in particular to a lane line detection method based on a Spatial-LargeFOV deep learning network.
Background art
Lane line detection is one of the essential key technologies in automated driving systems. In some simple scenarios, existing lane detection algorithms already satisfy practical application requirements, but accurate lane detection in open, complex scenes remains a challenging task. The main difficulty is that the imaging conditions (weather, illumination, occlusion) and interference factors (lane lines of different patterns, ground markings, and road text) in real driving environments increase scene complexity and thereby the difficulty of lane detection.
Traditional lane line detection methods mostly combine highly customized hand-crafted feature extractors with post-processing steps. Lane lines are first located using local or global features such as color, texture, and shape, and a smooth lane line is then obtained by spline curve fitting. Most of these methods rely on strict assumptions, for example that lane lines are parallel to each other, or straight or nearly straight, and can therefore only solve the lane detection problem in limited scenes.
Deep-learning-based methods have pushed the field of computer vision to a new height. Convolutional neural networks in particular can automatically learn effective deep features from images and currently achieve excellent results in applications such as object detection and object tracking. Consequently, deep-learning-based lane line detection methods far surpass traditional methods in robustness and accuracy.
Summary of the invention
In view of the particularity of the lane line target itself, the object of the present invention is to provide a lane line detection method based on a Spatial-LargeFOV deep learning network, which uses a single semantic segmentation network to achieve accurate and fast lane line detection on acquired driving images in different scenes.
To achieve the above object, the technical solution adopted by the invention is as follows:
Step 1: after the original image is resized to a uniform size, it is input into the segmentation network for semantic segmentation of lane lines; different lane lines are assigned different ids, the probability maps of all lane line segmentation results are obtained, and lane line segmentation then continues on the next input frame.
Step 2: the probability maps of the lane line segmentation results are post-processed, i.e., the maximum-response points of each lane line are found to obtain the set of lane point coordinates of each lane line, and the lane points are connected to obtain the final lane line detection result.
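The two steps above can be sketched in code. The segmentation network is replaced here by a random stub, and the lane count, row spacing, and function names are illustrative assumptions rather than details taken from the patent:

```python
import numpy as np

NUM_LANES = 4          # number of lane ids; an illustrative assumption
H, W = 288, 800        # unified input size used by the patent

def segment(image):
    """Stand-in for the segmentation network: one probability map per lane id."""
    rng = np.random.default_rng(0)
    return rng.random((NUM_LANES, H, W))

def postprocess(probs, row_step=50, thresh=0.5):
    """Step 2: for each lane, keep the max-response column on sampled rows."""
    lanes = []
    for lane_map in probs:
        pts = []
        for y in range(0, H, row_step):
            x = int(lane_map[y].argmax())
            if lane_map[y, x] > thresh:
                pts.append((x, y))
        lanes.append(pts)
    return lanes

lanes = postprocess(segment(np.zeros((H, W, 3))))
```

In the real method the stub would be the trained Spatial-LargeFOV network, run frame by frame on the video stream.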
Further, the lane line segmentation in step 1 is performed with a convolutional neural network model, with the following specific steps:
Step 2-1: the original driving image is resized to a uniform input size of 800*288;
Step 2-2: the input image obtained in step 2-1 is fed to the convolutional neural network model for lane line segmentation; the network outputs the probability map of the lane line segmentation result, whose channel dimension corresponds to the number of output classes (number of lane line classes + background).
Further, the convolutional neural network model for lane line pixel segmentation described above is a lightweight network that combines a spatial convolution structure, a dilated convolution combination, skip connections, and depthwise separable convolutions. It is described as follows:
LargeFOV is extended into an encoder-decoder network (see Table 1). The stride of the last convolutional layer of each convolution block in the overall network is set to 2 to replace Maxpooling for downsampling, and only one Avgpooling layer after Conv5 is retained to average global information. The encoder and decoder are arranged as follows:
(1) The encoder comprises Conv7 and all structures in LargeFOV before Conv7;
(2) The decoder comprises all structures after Conv7:
a. A combination of dilated convolutions with different dilation rates is used to enlarge the receptive field, and skip connections perform multi-level feature fusion; on the one hand multi-scale contextual information is obtained to more effectively identify long, continuous targets, and on the other hand low-level features are fused to refine segmentation boundaries.
b. Before the output layer, the feature map first passes through a 3x3 convolutional layer, i.e. Conv8, to refine the feature information.
c. A spatial convolution structure (SCNN) is added before Conv8 to enhance spatial relationships, compensating to some extent for poor representation learning caused by interference factors such as illumination, weather, and occlusion.
d. Depthwise separable convolutions replace standard convolutions, reducing model complexity and computation cost while guaranteeing similar accuracy.
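The parameter saving promised by item d can be estimated with a simple count. This is a sketch assuming 3x3 kernels and ignoring biases; the channel count of 128 matches the decoder layers described below, but the comparison holds for any sizes:

```python
def standard_params(c_in, c_out, k=3):
    """Parameters of a standard k x k convolution (biases ignored)."""
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k=3):
    """Depthwise k x k conv per input channel plus a pointwise 1x1 projection."""
    return c_in * k * k + c_in * c_out

print(standard_params(128, 128))   # 147456
print(separable_params(128, 128))  # 17536, roughly 8.4x fewer parameters
```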
Table 1. Model structure
SC1: a 1x1 convolutional layer with 128 output channels is attached after the outputs of Conv7 and Conv4_3 respectively, so as to align their dimensions; the outputs of the two 1x1 convolutional layers are then added element-wise for multi-level semantic feature fusion, giving the SC1 layer output. The image downsampling rate at this point is still 8.
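The SC1 fusion can be sketched with NumPy. Only the 128-channel projection is stated in the text; the Conv7 and Conv4_3 input channel counts used here (1024 and 512) are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1x1(x, w):
    """A 1x1 convolution is a per-pixel channel projection; x: (C_in,H,W), w: (C_out,C_in)."""
    return np.einsum('oc,chw->ohw', w, x)

conv7   = rng.random((1024, 36, 100))   # assumed Conv7 output at rate 8
conv4_3 = rng.random((512, 36, 100))    # assumed Conv4_3 output at rate 8

w7, w43 = rng.random((128, 1024)), rng.random((128, 512))
sc1 = conv1x1(conv7, w7) + conv1x1(conv4_3, w43)  # dimensions aligned to 128, then fused
```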
SC2 (jump structure 2): a deconvolution with kernel size 4, stride 2, and 128 output channels is first applied to the SC1 layer to achieve 2x upsampling, giving the Deconv1 layer output. A 1x1 convolution is then applied to Conv3_3 to align dimensions. Finally the two are added for multi-scale feature fusion, giving the SC2 layer output. The image downsampling rate at this point is still 4.
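The exact 2x upsampling follows from the transposed-convolution size formula. Kernel 4 and stride 2 are stated above; the padding of 1 is an assumption needed for the output to be exactly twice the input:

```python
def deconv_out(n, k=4, s=2, p=1):
    """Output length of a transposed convolution: (n - 1) * s - 2p + k."""
    return (n - 1) * s - 2 * p + k

print(deconv_out(36), deconv_out(100))  # 72 200, i.e. rate 8 features restored to rate 4
```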
2D_Dropout layer: a two-dimensional Dropout operation with drop ratio 0.1 is applied to the Sum2 layer output to avoid overfitting, giving Sum2_Dropout.
SCNN_DULR: SCNN passes in four directions (downward, upward, leftward, rightward) are applied to the feature map output by the Sum2_Dropout layer; the slice convolution kernel width ω is set to 9 and the channel number to 128.
Conv8: a conventional 3x3 convolution is added after the SCNN; on the one hand it refines the feature information output by the depthwise separable convolutions and the SCNN structure, in favor of upsampling the detailed information back to the input size, and on the other hand it adjusts the channel number to match the number of output classes.
2D_SoftMax layer: the Conv8 output is fed into a two-dimensional SoftMax layer, which outputs the lane line probability maps. The probability maps are restored to the original size by 4x bilinear interpolation.
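The 2D SoftMax is a per-pixel softmax over the class axis of the logit map. A minimal NumPy sketch, assuming 4 lane classes plus background and a quarter-resolution map before the 4x bilinear upsampling:

```python
import numpy as np

def softmax2d(logits):
    """Softmax over the class axis of a (num_classes, H, W) logit map."""
    e = np.exp(logits - logits.max(axis=0, keepdims=True))  # shift for stability
    return e / e.sum(axis=0, keepdims=True)

logits = np.random.default_rng(2).standard_normal((5, 72, 200))  # 4 lanes + background
probs = softmax2d(logits)   # per-pixel class probabilities, sum to 1 at every pixel
```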
A further technical solution is characterized in that the maximum-response points of each lane line, i.e. the points of maximum probability value, are found in the probability map obtained in step 2-2. Only the maximum-response points whose probability value is greater than 0.5 are retained, i.e. lane points with probability greater than 0.5 are considered to really exist, and a lane line is considered to exist only when it has at least three such lane points.
A further technical solution is that, to make the lane detection results smoother across consecutive video frames, the obtained lane points are not fitted with the least squares method; instead, after outlier elimination, the lane points obtained from the probability map are directly connected as the final lane line detection result.
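The outlier elimination can be sketched as follows. The patent does not specify its criterion, so the 60-pixel maximum horizontal jump and the simple sequential rule below are illustrative assumptions:

```python
def reject_outliers(xs, max_jump=60):
    """Drop lane-point abscissas whose jump from the previous kept point is abnormal.
    xs: lane-point x coordinates ordered by row; the threshold is an assumption."""
    kept = [xs[0]]
    for x in xs[1:]:
        if abs(x - kept[-1]) <= max_jump:
            kept.append(x)
    return kept

print(reject_outliers([100, 105, 400, 110, 118]))  # the spurious 400 is dropped
```

The surviving points are then connected directly, with no least-squares fit.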
Compared with the prior art, the present invention has the following notable advantages: (1) single images are processed directly, without complicated image pre-processing or post-processing of the model output; (2) accurate lane line detection is achieved both in simple-background expressway scenes and in complex-background urban road scenes; (3) the method is robust to changes in illumination, weather, and shooting angle; (4) it is computationally cheaper than most deep-learning-based lane detection algorithms.
Brief description of the drawings
Fig. 1 is the algorithm flowchart of the invention;
Fig. 2 is the diagram of the spatial convolution structure used by the invention to enhance spatial information;
Fig. 3 is the structure of the convolutional neural network model used by the invention to segment lane lines;
Fig. 4 shows the result of finding the lane line maximum-response points in the invention;
Fig. 5 shows the detection results of the lane line detection algorithm of the invention based on the encoder-decoder lightweight neural network.
Specific embodiment
As shown in Fig. 1, the steps of the lane line detection method of the present invention based on the Spatial-LargeFOV deep learning network are as follows:
Step 1: after the original image is resized to a uniform size, it is input into the segmentation network for semantic segmentation of lane lines; different lane lines are assigned different ids, the probability maps of all lane line segmentation results are obtained, and lane line segmentation then continues on the next input frame.
Step 2: the probability maps of the lane line segmentation results are post-processed, i.e., the maximum-response points of each lane line are found to obtain the set of lane point coordinates of each lane line, and the lane points are connected to obtain the final lane line detection result.
The specific implementation is as follows.
Step 1: division of lane line pixel regions.
The input original image is directly resized to a fixed size of 800x288 using the TensorFlow API and then fed to the convolutional neural network model for lane line segmentation. The network outputs the probability map of the lane line segmentation result, whose channel dimension corresponds to the number of output classes (number of lane line classes + background).
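The resize itself is done with the TensorFlow API in the patent. As a dependency-free stand-in, a nearest-neighbor resize to 800x288 can be sketched in NumPy (the real pipeline would likely use bilinear interpolation):

```python
import numpy as np

def resize_nearest(img, out_h=288, out_w=800):
    """Nearest-neighbor stand-in for the TensorFlow resize used in the patent."""
    h, w = img.shape[:2]
    ys = (np.arange(out_h) * h / out_h).astype(int)   # source row for each output row
    xs = (np.arange(out_w) * w / out_w).astype(int)   # source column for each output column
    return img[ys][:, xs]

frame = np.zeros((720, 1280, 3), dtype=np.uint8)      # e.g. a 720p driving frame
print(resize_nearest(frame).shape)                    # unified network input size
```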
The lane line segmentation network is an encoder-decoder network extended from LargeFOV as the base network (see Table 1). The stride of the last convolutional layer of each convolution block in the overall network is set to 2 to replace Maxpooling for downsampling, and only one Avgpooling layer after Conv5 is retained to average global information. The encoder and decoder are arranged as follows:
(1) The encoder comprises Conv7 and all structures in LargeFOV before Conv7.
(2) The decoder comprises all structures after Conv7:
a. A combination of dilated convolutions with different dilation rates is used to enlarge the receptive field, and skip connections perform multi-level feature fusion; on the one hand multi-scale contextual information is obtained to more effectively identify long, continuous targets, and on the other hand low-level features are fused to refine segmentation boundaries.
b. Before the output layer, the feature map first passes through a 3x3 convolutional layer, i.e. Conv8, to refine the feature information.
c. A spatial convolution structure (SCNN) is added before Conv8 to enhance spatial relationships, compensating to some extent for poor representation learning caused by interference factors such as illumination, weather, and occlusion.
d. Depthwise separable convolutions replace standard convolutions, reducing model complexity and computation cost while guaranteeing similar accuracy.
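The receptive-field growth from item a's dilated convolutions follows the standard formula k_eff = k + (k - 1)(d - 1). The text does not list the dilation rates it combines, so the rates below are illustrative:

```python
def effective_kernel(k, d):
    """Effective kernel size of a k x k convolution with dilation rate d."""
    return k + (k - 1) * (d - 1)

def stacked_rf(specs):
    """Receptive field of stacked stride-1 convolutions, specs = [(k, d), ...]."""
    rf = 1
    for k, d in specs:
        rf += effective_kernel(k, d) - 1
    return rf

print(effective_kernel(3, 12))              # a single large-FOV style dilated 3x3
print(stacked_rf([(3, 1), (3, 2), (3, 4)])) # a stack of rates 1, 2, 4
```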
Table 1. Model structure
SC1: a 1x1 convolutional layer with 128 output channels is attached after the outputs of Conv7 and Conv4_3 respectively, so as to align their dimensions; the outputs of the two 1x1 convolutional layers are then added element-wise for multi-level semantic feature fusion, giving the SC1 layer output. The image downsampling rate at this point is still 8.
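The downsampling rate of 8 at SC1 is consistent with three stride-2 convolutions having replaced the pooling layers. A quick check of the size arithmetic, assuming 3x3 kernels with padding 1 (the kernel size and padding are assumptions; the strides are stated above):

```python
def conv_out(n, k=3, s=2, p=1):
    """Spatial size after a strided convolution (floor mode)."""
    return (n + 2 * p - k) // s + 1

w, h = 800, 288
for _ in range(3):                 # three stride-2 layers give an overall rate of 8
    w, h = conv_out(w), conv_out(h)
print(w, h)                        # feature-map size at downsampling rate 8
```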
SC2 (jump structure 2): a deconvolution with kernel size 4, stride 2, and 128 output channels is first applied to the SC1 layer to achieve 2x upsampling, giving the Deconv1 layer output. A 1x1 convolution is then applied to Conv3_3 to align dimensions. Finally the two are added for multi-scale feature fusion, giving the SC2 layer output. The image downsampling rate at this point is still 4.
2D_Dropout layer: a two-dimensional Dropout operation with drop ratio 0.1 is applied to the Sum2 layer output to avoid overfitting, giving Sum2_Dropout.
SCNN_DULR: with the structure shown in Fig. 3, SCNN passes in four directions (downward, upward, leftward, rightward) are applied to the feature map output by the Sum2_Dropout layer; the slice convolution kernel width ω is set to 9 and the channel number to 128.
Conv8: a conventional 3x3 convolution is added after the SCNN; on the one hand it refines the feature information output by the depthwise separable convolutions and the SCNN structure, in favor of upsampling the detailed information back to the input size, and on the other hand it adjusts the channel number to match the number of output classes.
2D_SoftMax layer: the Conv8 output is fed into a two-dimensional SoftMax layer, which outputs the lane line probability maps. The probability maps are restored to the original size by 4x bilinear interpolation.
Step 2: screening the maximum-response points of the lane line probability maps.
The invention finds the maximum-response points of each lane line in the probability maps obtained in the first step; Fig. 4 shows the processing result of a probability map, where the row spacing used to search for lane point maximum responses is 50 pixels. The threshold is set to 0.5, i.e. lane points with probability greater than 0.5 are considered to really exist, and a lane line is considered to exist only when it has at least three such lane points.
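The screening rule above (50-pixel row spacing, 0.5 threshold, at-least-three-points existence test) can be written directly; the function name and the synthetic probability map are illustrative:

```python
import numpy as np

def screen_lane(prob_map, row_step=50, thresh=0.5, min_points=3):
    """Sample rows every `row_step` pixels, keep the argmax column only if its
    probability exceeds `thresh`, and report the lane only when at least
    `min_points` points survive."""
    h, w = prob_map.shape
    pts = []
    for y in range(0, h, row_step):
        x = int(prob_map[y].argmax())
        if prob_map[y, x] > thresh:
            pts.append((x, y))
    return pts if len(pts) >= min_points else []

pm = np.zeros((288, 800))
pm[0, 100] = pm[50, 110] = pm[100, 120] = 0.9   # three confident responses
print(screen_lane(pm))                           # lane reported
print(screen_lane(np.zeros((288, 800))))         # no lane: empty list
```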
Step 3: output the final lane line result.
To make the lane detection results smoother across consecutive video frames, the obtained lane points are not fitted with the least squares method; instead, the lane points obtained from the probability map are directly connected as the final lane line detection result.
Fig. 5 shows the lane line detection results of the invention in different scenes, where different colors represent different lane line ids.
Claims (6)
1. A lane line detection method based on a Spatial-LargeFOV deep learning network, characterized in that it is carried out according to the following steps:
Step 1: after the original image is resized to a uniform size, it is input into the segmentation network for semantic segmentation of lane lines; different lane lines are assigned different ids, the probability maps of all lane line segmentation results are obtained, and lane line segmentation then continues on the next input frame;
Step 2: the probability maps of the lane line segmentation results are post-processed, i.e., the maximum-response points of each lane line are found to obtain the set of lane point coordinates of each lane line, and the lane points are connected to obtain the final lane line detection result.
2. The lane line detection method based on the Spatial-LargeFOV deep learning network according to claim 1, characterized in that:
the lane line segmentation in step 1 is performed with a convolutional neural network model, with the following specific steps:
Step 2-1: the original driving image is resized to a uniform input size of 800*288;
Step 2-2: the input image obtained in step 2-1 is fed to the convolutional neural network model for lane line semantic segmentation; the network outputs the probability map of the lane line segmentation result, whose channel dimension corresponds to the number of output classes (number of lane line classes and background).
3. The lane line detection method based on the Spatial-LargeFOV deep learning network according to claim 1 or 2, characterized in that:
the convolutional neural network model for lane line pixel segmentation is a lightweight network that combines a spatial convolution structure, a dilated convolution combination, skip connections, and depthwise separable convolutions, described as follows:
a decoder is added after LargeFOV to extend it into an encoder-decoder network; the stride of the last convolutional layer of each convolution block in the overall network is set to 2 to replace Maxpooling for downsampling, and only one Avgpooling layer after Conv5 is retained to average global information;
the encoder and decoder are arranged as follows:
the encoder comprises Conv7 and all structures in LargeFOV before Conv7;
the decoder comprises all structures after Conv7:
a. a combination of dilated convolutions with different dilation rates is used to enlarge the receptive field, and skip connections perform multi-level feature fusion; on the one hand multi-scale contextual information is obtained to more effectively identify long, continuous targets, and on the other hand low-level features are fused to refine segmentation boundaries;
b. before the output layer, the feature map first passes through a 3x3 convolutional layer, i.e. Conv8, to refine the feature information;
c. a spatial convolution structure (SCNN) is added before Conv8 to enhance spatial relationships, compensating to some extent for poor representation learning caused by interference factors such as illumination, weather, and occlusion;
d. depthwise separable convolutions replace standard convolutions, reducing model complexity and computation cost while guaranteeing similar accuracy.
4. The lane line detection method based on the Spatial-LargeFOV deep learning network according to claim 2, characterized in that:
the maximum-response points of each lane line, i.e. the points of maximum probability value, are found in the probability map obtained in step 2-2.
5. Among the maximum-response points, only points whose probability value is greater than 0.5 are retained, i.e. lane points with probability greater than 0.5 are considered to really exist, and a lane line is considered to exist only when it has at least three such lane points.
6. The lane line detection method based on the Spatial-LargeFOV deep learning network according to claim 1, characterized in that:
to make the lane detection results smoother across consecutive video frames, the lane points obtained in step 2 are not fitted with the least squares method; instead, after outlier elimination, the lane points obtained from the probability map are directly connected as the final lane line detection result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910454187.5A CN110276267A (en) | 2019-05-28 | 2019-05-28 | Method for detecting lane lines based on Spatial-LargeFOV deep learning network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110276267A true CN110276267A (en) | 2019-09-24 |
Family
ID=67959127
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910454187.5A Pending CN110276267A (en) | 2019-05-28 | 2019-05-28 | Method for detecting lane lines based on Spatial-LargeFOV deep learning network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110276267A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108537197A (en) * | 2018-04-18 | 2018-09-14 | 吉林大学 | A kind of lane detection prior-warning device and method for early warning based on deep learning |
CN108764137A (en) * | 2018-05-29 | 2018-11-06 | 福州大学 | Vehicle traveling lane localization method based on semantic segmentation |
CN109101975A (en) * | 2018-08-20 | 2018-12-28 | 电子科技大学 | Image, semantic dividing method based on full convolutional neural networks |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111008986A (en) * | 2019-11-20 | 2020-04-14 | 天津大学 | Remote sensing image segmentation method based on multitask semi-convolution |
CN111008986B (en) * | 2019-11-20 | 2023-09-05 | 天津大学 | Remote sensing image segmentation method based on multitasking semi-convolution |
CN111160205A (en) * | 2019-12-24 | 2020-05-15 | 江苏大学 | Embedded multi-class target end-to-end unified detection method for traffic scene |
CN111160205B (en) * | 2019-12-24 | 2023-09-05 | 江苏大学 | Method for uniformly detecting multiple embedded types of targets in traffic scene end-to-end |
CN111144304A (en) * | 2019-12-26 | 2020-05-12 | 上海眼控科技股份有限公司 | Vehicle target detection model generation method, vehicle target detection method and device |
CN111340694A (en) * | 2020-02-07 | 2020-06-26 | 腾讯科技(深圳)有限公司 | Image processing method, image processing device, computer-readable storage medium and computer equipment |
CN111340694B (en) * | 2020-02-07 | 2023-10-27 | 腾讯科技(深圳)有限公司 | Image processing method, apparatus, computer readable storage medium and computer device |
CN111914797A (en) * | 2020-08-17 | 2020-11-10 | 四川大学 | Traffic sign identification method based on multi-scale lightweight convolutional neural network |
CN111914797B (en) * | 2020-08-17 | 2022-08-12 | 四川大学 | Traffic sign identification method based on multi-scale lightweight convolutional neural network |
CN112633177A (en) * | 2020-12-24 | 2021-04-09 | 浙江大学 | Lane line detection segmentation method based on attention space convolution neural network |
WO2022134996A1 (en) * | 2020-12-25 | 2022-06-30 | Zhejiang Dahua Technology Co., Ltd. | Lane line detection method based on deep learning, and apparatus |
CN113128576A (en) * | 2021-04-02 | 2021-07-16 | 中国农业大学 | Crop row detection method and device based on deep learning image segmentation |
CN113343778B (en) * | 2021-05-14 | 2022-02-11 | 淮阴工学院 | Lane line detection method and system based on LaneSegNet |
WO2022237139A1 (en) * | 2021-05-14 | 2022-11-17 | 淮阴工学院 | Lanesegnet-based lane line detection method and system |
CN113343778A (en) * | 2021-05-14 | 2021-09-03 | 淮阴工学院 | Lane line detection method and system based on LaneSegNet |
CN113591614A (en) * | 2021-07-14 | 2021-11-02 | 西北工业大学 | Remote sensing image road extraction method based on adjacent spatial feature learning |
CN113591614B (en) * | 2021-07-14 | 2024-05-28 | 西北工业大学 | Remote sensing image road extraction method based on close-proximity spatial feature learning |
CN113591756A (en) * | 2021-08-06 | 2021-11-02 | 南京空天宇航科技有限公司 | Lane line detection method based on heterogeneous information interaction convolutional network |
CN113780132A (en) * | 2021-08-31 | 2021-12-10 | 武汉理工大学 | Lane line detection method based on convolutional neural network |
CN113780132B (en) * | 2021-08-31 | 2023-11-24 | 武汉理工大学 | Lane line detection method based on convolutional neural network |
CN113705575B (en) * | 2021-10-27 | 2022-04-08 | 北京美摄网络科技有限公司 | Image segmentation method, device, equipment and storage medium |
CN113705575A (en) * | 2021-10-27 | 2021-11-26 | 北京美摄网络科技有限公司 | Image segmentation method, device, equipment and storage medium |
CN115565148A (en) * | 2022-11-09 | 2023-01-03 | 福思(杭州)智能科技有限公司 | Road image detection method, road image detection device, storage medium and electronic device |
CN116052110A (en) * | 2023-03-28 | 2023-05-02 | 四川公路桥梁建设集团有限公司 | Intelligent positioning method and system for pavement marking defects |
CN116052110B (en) * | 2023-03-28 | 2023-06-13 | 四川公路桥梁建设集团有限公司 | Intelligent positioning method and system for pavement marking defects |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110276267A (en) | Method for detecting lane lines based on Spatial-LargeFOV deep learning network | |
CN109740465B (en) | Lane line detection algorithm based on example segmentation neural network framework | |
CN107274445B (en) | Image depth estimation method and system | |
CN109753913B (en) | Multi-mode video semantic segmentation method with high calculation efficiency | |
CN110659664B (en) | SSD-based high-precision small object identification method | |
CN107564009B (en) | Outdoor scene multi-target segmentation method based on deep convolutional neural network | |
CN105100640B (en) | A kind of local registration parallel video joining method and system | |
CN110399840B (en) | Rapid lawn semantic segmentation and boundary detection method | |
CN110348383B (en) | Road center line and double line extraction method based on convolutional neural network regression | |
CN112767418B (en) | Mirror image segmentation method based on depth perception | |
CN112364865B (en) | Method for detecting small moving target in complex scene | |
CN111401150A (en) | Multi-lane line detection method based on example segmentation and adaptive transformation algorithm | |
CN110717921B (en) | Full convolution neural network semantic segmentation method of improved coding and decoding structure | |
CN113807188A (en) | Unmanned aerial vehicle target tracking method based on anchor frame matching and Simese network | |
CN109242776B (en) | Double-lane line detection method based on visual system | |
CN109712071A (en) | Unmanned plane image mosaic and localization method based on track constraint | |
Fu et al. | Ad 2 attack: Adaptive adversarial attack on real-time uav tracking | |
CN114037938A (en) | NFL-Net-based low-illumination target detection method | |
CN113095371A (en) | Feature point matching method and system for three-dimensional reconstruction | |
CN115019340A (en) | Night pedestrian detection algorithm based on deep learning | |
CN111914596A (en) | Lane line detection method, device, system and storage medium | |
CN114241436A (en) | Lane line detection method and system for improving color space and search window | |
CN112101113B (en) | Lightweight unmanned aerial vehicle image small target detection method | |
CN111881914B (en) | License plate character segmentation method and system based on self-learning threshold | |
CN113627481A (en) | Multi-model combined unmanned aerial vehicle garbage classification method for smart gardens |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20190924 |