CN115376082A - Lane line detection method integrating traditional feature extraction and deep neural network - Google Patents

Lane line detection method integrating traditional feature extraction and deep neural network

Info

Publication number
CN115376082A
CN115376082A CN202210919555.0A CN202210919555A
Authority
CN
China
Prior art keywords
lane line
road
neural network
deep neural
feature extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210919555.0A
Other languages
Chinese (zh)
Other versions
CN115376082B (en)
Inventor
魏超
张美迪
李路兴
随淑鑫
钱歆昊
胡乐云
徐扬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yangtze River Delta Research Institute Of Beijing University Of Technology Jiaxing
Beijing Institute of Technology BIT
Original Assignee
Yangtze River Delta Research Institute Of Beijing University Of Technology Jiaxing
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yangtze River Delta Research Institute Of Beijing University Of Technology Jiaxing, Beijing Institute of Technology BIT filed Critical Yangtze River Delta Research Institute Of Beijing University Of Technology Jiaxing
Priority to CN202210919555.0A priority Critical patent/CN115376082B/en
Publication of CN115376082A publication Critical patent/CN115376082A/en
Application granted granted Critical
Publication of CN115376082B publication Critical patent/CN115376082B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a lane line detection method that fuses traditional feature extraction with a deep neural network, comprising the following steps: extracting prior features of the lane lines from an input road picture to obtain a lane line prior feature map; splicing the lane line prior feature map with the road picture to obtain a road feature map; and inputting the road feature map into a deep neural network model, which performs feature extraction and key point prediction on it to obtain the position coordinates of each key point of each lane line. By fusing traditional features with a deep neural network, the method fully exploits the prior traditional features of lane lines: these priors are extracted with a traditional feature extraction method before the image is fed to the deep neural network, so that the strengths of traditional feature-based lane line detection and of deep learning complement each other, improving the robustness and accuracy of the lane line detection algorithm while still meeting real-time requirements.

Description

Lane line detection method integrating traditional feature extraction and deep neural network
Technical Field
The invention relates to the technical field of automatic driving, in particular to a lane line detection method integrating traditional feature extraction and a deep neural network.
Background
In today's society, the automobile has become one of the most convenient and important means of daily transportation. With the popularization of automobiles, however, traffic safety problems pose a serious threat to people's lives and property. To reduce traffic accidents and give drivers a better driving experience, unmanned driving technology has been strongly promoted, and unmanned vehicles have become a research hotspot in the vehicle industry. As one of the key links in unmanned driving, lane line detection provides important information for functions such as road environment perception, lane departure warning, collision warning, and path planning.
Lane line detection methods developed to date fall mainly into two categories: traditional detection methods based on lane line features or models, and lane line detection methods based on deep learning, which have emerged at home and abroad with the development of deep learning.
Traditional lane marking detection methods rely on a combination of manually extracted features and heuristics to identify lane segments, which typically exhibit distinct features such as color, shape, edge gradient, and intensity compared with the rest of the road image. A conventional method extracts lane line features, detects the points belonging to the lane line, and then fits a predefined model to them to recover the complete lane line. Traditional methods place high demands on illumination conditions and on the occlusion and wear state of the lane markings, and their accuracy in complex road scenes leaves room for improvement; on the other hand, the algorithms are simple, run in real time, generalize reasonably across various scenes, and are highly interpretable.
With the continuous development of deep learning, more and more researchers apply neural networks to lane line detection. The unique hierarchical structure of convolutional neural networks gives them strong feature extraction capability: the lower convolutional layers mostly learn edge information of the lane lines, while deeper layers progressively extract higher-level information such as color, texture, and contour. This lets the network handle lane line detection in more complex environments, so its accuracy in complex road scenes is higher; however, the model complexity is also higher, the dependence on the dataset is heavy, and the adaptability and robustness across different scenes are poorer.
In summary, both the traditional methods and the deep learning-based methods have drawbacks when used alone for lane line detection. A method that combines traditional feature extraction with a deep neural network is therefore needed, so that the advantages of the two approaches compensate for each other's shortcomings.
Disclosure of Invention
The invention provides a lane line detection method integrating traditional feature extraction and a deep neural network, aiming to reduce the computational complexity and the number of model parameters of the network, guarantee real-time performance, and improve the accuracy and robustness of lane line detection by taking the prior traditional features of lane lines into account.
In order to achieve the purpose, the invention provides the following scheme:
a lane line detection method fusing traditional feature extraction and a deep neural network comprises the following steps:
s1, extracting prior characteristics of a lane line in a road picture based on an input road picture to obtain a lane line prior characteristic map;
s2, splicing the lane line prior feature map and the road picture to obtain a road feature map;
and S3, inputting the road characteristic graph into a deep neural network model, and performing characteristic extraction and key point prediction on the road characteristic graph to obtain the position coordinates of key points of each lane line in a preset grid unit, wherein the key points are points belonging to the lane lines in the grid unit.
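Taken together, S1 to S3 amount to the short pipeline sketched below. This is a minimal illustration, not the patent's implementation: extract_prior_features and the model.predict interface are hypothetical placeholders for the S1 and S3 modules detailed in the embodiments.

```python
import numpy as np

def detect_lane_lines(road_image, model):
    """road_image: H x W x 3 camera frame (uint8); model: trained deep network."""
    # S1: traditional prior feature extraction -> single-channel prior map
    prior_map = extract_prior_features(road_image)        # hypothetical helper
    # S2: channel-wise splicing -> H x W x 4 road feature map
    road_feature_map = np.concatenate(
        [road_image, prior_map[..., None]], axis=-1)
    # S3: deep network predicts per-row key point coordinates for each lane
    return model.predict(road_feature_map)                # hypothetical API
```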
Preferably, in S1, acquiring the input road picture comprises:
fixing a vehicle-mounted camera at the center of the vehicle roof with a bracket, adjusting the camera's installation angle so that it is aimed at the area to be detected, and using the vehicle-mounted camera to collect road images of the area to be detected in front of the vehicle.
Preferably, obtaining the lane line prior feature map comprises:
S1.1, graying the road picture with a weighted average method, configuring the proportions of the RGB components so as to preserve the brightness information of the lane lines, and obtaining a single-channel grayscale map of the road;
S1.2, sequentially applying median filtering, linear gray stretching, and OTSU automatic threshold segmentation to the single-channel grayscale map, and then selecting a region of interest according to the installation angle of the vehicle-mounted camera and the characteristics of the detected road environment;
S1.3, performing Canny edge detection on the region-of-interest image and taking a weighted average of the gray values of corresponding pixels in the images before and after edge detection; the resulting single-channel image is the lane line prior feature map.
Preferably, in S2, obtaining the road feature map comprises:
splicing the lane line prior feature map and the road picture along the channel dimension to obtain the channel-merged road feature map.
Preferably, the deep neural network model comprises a feature extraction network module and a key point prediction network module; the feature extraction network module learns features of the lane lines at different scales in the road feature map, and the key point prediction network module receives the lane line features from the feature extraction network module and outputs the position coordinates of each key point of each lane line.
Preferably, in S3, the feature extraction on the road feature map comprises: applying an image transform to the road feature map, namely resizing the image, converting it to a tensor, and normalizing it, to obtain the transformed road feature map; and inputting the transformed road feature map into the feature extraction network module to obtain lane line features at different scales.
Preferably, the feature extraction network module consists of a ResNet50 network with the fully connected layer removed, in which a convolution layer replaces the downsampling module of the ResNet50 network and an inverted bottleneck architecture replaces the bottleneck architecture of the ResNet50 residual blocks, reducing the information loss caused by dimension compression in the residual blocks.
Preferably, the key point prediction on the road feature map comprises:
inputting the lane line features learned by the feature extraction network module into the key point prediction network module, and predicting the coordinates of the key points of each lane line within the grid cells using the two fully connected layers of the key point prediction network module; then multiplying the preset grid cell width by the predicted coordinates obtained from the deep neural network, computing the coordinates of the lane line points in the input road picture, and outputting a prediction map of the lane line positions.
Preferably, the grid cells are the grid cells preset in the key point prediction network module.
Preferably, predicting the position of each lane line in the grid cells comprises:
obtaining the position of a lane line from the index k of the grid cell at which its prediction probability P_{i,j} over all grid cells is maximal: the prediction probabilities Prob_{i,j,:} at the different grid cells are first obtained with the softmax function, and the expectation of k is then taken as the predicted position of the lane line in the grid cells:

Prob_{i,j,:} = softmax(P_{i,j,1:w})

Loc_{i,j} = \sum_{k=1}^{w} k \cdot Prob_{i,j,k}

where P_{i,j,1:w} is the predicted probability distribution of the i-th lane line over the j-th row, Prob_{i,j,k} is the predicted probability that the i-th lane line lies in the k-th column of the j-th row, Loc_{i,j} is the predicted position of the i-th lane line in the j-th row, and w is the preset number of grid cell columns.
The invention has the beneficial effects that:
according to the method, from the perspective of fusion of the traditional characteristics and the deep neural network, the prior traditional characteristics of the lane line are fully considered, the prior characteristics of the lane line are obtained by using a traditional characteristic extraction method before the image is input into the deep neural network, so that the advantages of the traditional feature-based lane line detection method and the deep learning method are complementary, and the robustness and the accuracy of a lane line detection algorithm are improved on the premise of meeting the real-time requirement.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from them without inventive effort.
Fig. 1 is a flowchart of the lane line detection method combining traditional feature extraction and a deep neural network in an embodiment of the present invention;
Fig. 2 is a flow diagram of the traditional feature extraction module in an embodiment of the present invention;
Fig. 3 is a schematic diagram of the deep neural network model according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the deep neural network residual block structure according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the present invention.
To make the above objects, features, and advantages of the present invention more comprehensible, the invention is described in detail below with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, the present embodiment provides a lane line detection method integrating traditional feature extraction and a deep neural network, including the following steps:
step 1, fixing a vehicle-mounted camera at the center of the vehicle roof with a bracket, adjusting the camera angle so that it is aimed at the area to be detected and has a wide field of view, and using the vehicle-mounted camera to collect images of the road ahead of the vehicle, obtaining input road pictures 1920 pixels wide, 1080 pixels high, and 3 channels deep;
step 2, as shown in fig. 2, passing the input road picture to the traditional feature extraction module, which performs prior feature extraction of the lane lines modeled on the preprocessing operations of traditional lane line detection, yielding a single-channel lane line prior feature map:
step 2.1, applying weighted-average graying to the road picture, uniformly configuring the proportions of the RGB components in the result so as to preserve the brightness information of the lane lines, and obtaining a single-channel grayscale map of the road;
step 2.2, sequentially applying median filtering, linear gray stretching, and OTSU automatic threshold segmentation to the single-channel grayscale map to enhance the brightness of the lane line regions; then, according to the camera installation angle and the characteristics of the detected road environment, taking the trapezoidal area enclosed by (0, 1080), (700, 650), (1220, 650), and (1920, 1080) as the region of interest, setting the gray values outside this area to 0 and discarding the invalid parts of the image;
step 2.3, performing Canny edge detection on the image and taking a weighted average of the gray values of corresponding pixels in the images before and after edge detection; the resulting single-channel image is the lane line prior feature map. In this map the important features of the lane lines are preserved: the background area of lowest interest is black, the road area of moderate interest is gray, and the lane line area of highest interest is white. Because the prior feature maps are markedly similar across different road scenes, a deep neural network trained on a public dataset can later detect lane lines without an obvious drop in accuracy;
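The following OpenCV sketch illustrates steps 2.1 to 2.3 under stated assumptions: the grayscale weighting coefficients, median kernel size, Canny thresholds, and blend weights are illustrative choices, since the patent fixes only the processing order and the ROI vertices.

```python
import cv2
import numpy as np

def extract_prior_features(bgr):
    """bgr: 1080 x 1920 x 3 road picture -> single-channel prior feature map."""
    # Step 2.1: weighted-average graying (standard luma weights, illustrative)
    b, g, r = cv2.split(bgr.astype(np.float32))
    gray = (0.114 * b + 0.587 * g + 0.299 * r).astype(np.uint8)

    # Step 2.2: median filtering, linear gray stretch, OTSU threshold
    gray = cv2.medianBlur(gray, 5)
    gray = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Trapezoidal region of interest from the embodiment; outside pixels -> 0
    roi = np.zeros_like(binary)
    pts = np.array([[0, 1080], [700, 650], [1220, 650], [1920, 1080]], np.int32)
    cv2.fillPoly(roi, [pts], 255)
    masked = cv2.bitwise_and(binary, roi)

    # Step 2.3: Canny edges, then blend the pre- and post-edge images
    edges = cv2.Canny(masked, 50, 150)
    return cv2.addWeighted(masked, 0.5, edges, 0.5, 0)
```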
step 3, fusing the single-channel lane line prior feature map, which highlights the effective lane line features, with the three-channel input road picture by splicing along the channel dimension, obtaining a road feature map with 4 channels. The road feature map contains both the road picture captured directly by the vehicle-mounted camera and the prior feature map of extracted lane line priors, so it fully exposes the lane line feature information and facilitates detection by the deep neural network;
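Step 3 then reduces to a single array operation; variable names continue the sketch above.

```python
prior = extract_prior_features(bgr)                      # 1080 x 1920, uint8
road_feature_map = np.concatenate([bgr, prior[..., None]], axis=-1)
assert road_feature_map.shape == (1080, 1920, 4)         # 4-channel feature map
```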
step 4, inputting the road feature map into a pre-trained deep neural network model, which comprises a feature extraction network module and a key point prediction network module; the specific structure is shown in fig. 3.
Step 4.1, before the deep neural network model is input, image transformation is carried out on the road characteristic graph, including the steps of adjusting the size of the image to 288 x 800, converting the characteristic graph into tensor and normalization processing, so that the road characteristic graph is convenient for network learning;
step 4.2, inputting the transformed road feature map into the feature extraction network module to further learn lane line features at different scales; the feature extraction network module is an optimized ResNet50 network with the fully connected layer removed. First, a large-kernel convolution layer with kernel size 4, stride 4, and 64 kernels replaces the ResNet50 downsampling module, which consists of a convolution layer with kernel size 7, 64 kernels, and stride 2 followed by a max-pooling layer with pooling kernel size 3; the resulting feature map is 72 × 200 × 64, identical in size and depth to the output of the original ResNet50 downsampling module. Because the kernel size of the large-kernel convolution equals its stride, the receptive fields of successive convolution operations do not overlap, which reduces information redundancy and improves network efficiency without changing the output size or depth. After downsampling, an inverted bottleneck architecture with many middle channels and few channels at the two ends replaces the bottleneck architecture of the ResNet50 residual blocks, which has few middle channels and many channels at the two ends, reducing the information loss caused by dimension compression inside the residual blocks. As in ResNet50, the network is deepened with 4 residual layers composed of 3, 4, 6, and 3 residual blocks respectively; the residual layers yield feature maps at 1/2, 1/4, and 1/8 scale containing higher-level lane line information, and the final output feature map is 9 × 25 × 1024. At the same time, the use of activation and normalization layers inside the residual blocks is reduced, avoiding the adverse effect of frequent nonlinear mappings on network learning; the two types of inverted bottleneck residual blocks of the improved network are shown in fig. 4;
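A PyTorch sketch of the two modifications described in step 4.2: the 4 × 4, stride-4 convolution stem replacing the 7 × 7 convolution plus max pooling, and an inverted bottleneck residual block with a wide middle and narrow ends that uses a single normalization layer and a single activation. Channel widths, the expansion factor, and layer placement are illustrative assumptions in the spirit of fig. 4 (and of ConvNeXt-style blocks), not the patent's exact layers.

```python
import torch
import torch.nn as nn

# Stem: kernel size 4, stride 4, 64 filters; since kernel size equals stride,
# the receptive fields of adjacent positions do not overlap.
stem = nn.Conv2d(in_channels=4, out_channels=64, kernel_size=4, stride=4)

class InvertedBottleneck(nn.Module):
    """Residual block: narrow ends, wide middle (expansion factor assumed)."""
    def __init__(self, dim, expansion=4):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(dim, dim * expansion, kernel_size=1),      # expand
            nn.BatchNorm2d(dim * expansion),                     # single norm
            nn.Conv2d(dim * expansion, dim * expansion,
                      kernel_size=3, padding=1),
            nn.GELU(),                                           # single activation
            nn.Conv2d(dim * expansion, dim, kernel_size=1),      # project back
        )

    def forward(self, x):
        return x + self.block(x)       # identity shortcut

x = torch.randn(1, 4, 288, 800)        # transformed road feature map
y = stem(x)                            # -> 1 x 64 x 72 x 200
z = InvertedBottleneck(64)(y)          # shape preserved
```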
step 4.3, inputting the 9 × 25 × 1024 feature map learned by the feature extraction network module into the key point prediction network module. Using the image feature X and a classifier f, and combining the two fully connected layers, the module predicts the probability of a lane line appearing in each grid cell of a (W + 1) × H × S prediction volume defined over S lane lines and an area of the original image preset with H rows and W columns of grid cells, where the number of grid rows H is 18, the number of grid columns W is 200, and the number of predicted lane lines S is 4:

P_{i,j,:} = f^{ij}(X), s.t. i ∈ [1, S], j ∈ [1, H]

The position of a lane line is then obtained from the index k of the grid cell at which its prediction probability P_{i,j} over all grid cells in a row is maximal: the prediction probabilities Prob_{i,j,:} at the different grid cells are first obtained with the softmax function, and the expectation of k is then taken as the predicted position of the lane line in the grid cells:

Prob_{i,j,:} = softmax(P_{i,j,1:w})

Loc_{i,j} = \sum_{k=1}^{w} k \cdot Prob_{i,j,k}
step 4.4, multiplying the preset grid cell width by the predicted coordinates obtained from the deep neural network to compute the predicted coordinates of the lane line points in the input road picture, and outputting the prediction map of the lane line positions.
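A sketch of the prediction head and the decoding in steps 4.3 and 4.4, with S = 4 lanes, H = 18 grid rows, and W = 200 grid columns as in the embodiment. The hidden width of the head, the role of the extra (W + 1)-th column as a no-lane bin, and the cell-to-pixel scaling are assumptions; the formulation mirrors row-anchor methods such as Ultra-Fast Lane Detection.

```python
import torch
import torch.nn as nn

S, H, W = 4, 18, 200                   # lanes, grid rows, grid columns
IMG_W = 1920                            # width of the input road picture

head = nn.Sequential(                   # the two fully connected layers
    nn.Flatten(),
    nn.Linear(9 * 25 * 1024, 2048), nn.ReLU(),   # hidden width assumed
    nn.Linear(2048, S * H * (W + 1)),
)

feat = torch.randn(1, 1024, 9, 25)               # feature extraction output
P = head(feat).view(1, S, H, W + 1)              # extra bin assumed "no lane"

# Prob_{i,j,:} = softmax(P_{i,j,1:w}); Loc_{i,j} = sum_k k * Prob_{i,j,k}
prob = torch.softmax(P[..., :W], dim=-1)
k = torch.arange(1, W + 1, dtype=torch.float32)
loc = (prob * k).sum(dim=-1)                     # grid-cell units, 1 x S x H

# Step 4.4: multiply by the grid cell width to recover pixel x coordinates
x_pixels = loc * (IMG_W / W)
```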
In summary, the invention is a lane line detection method based on the fusion of a traditional feature detection method and a deep neural network detection method: from a road image collected by a vision sensor, a traditional feature extraction module extracts the prior features of the lane lines; the resulting lane line prior feature map is channel-spliced with the original road image to obtain a road feature map; the road feature map is input into a pre-built deep neural network model, whose feature extraction network module and key point prediction network module yield the position coordinates of each key point of each lane line within the grid cells; finally, the coordinates of the lane line points in the road image are computed from the preset grid cell size, and a position prediction map of the lane lines is output.
The above-described embodiments merely illustrate preferred embodiments of the present invention and do not limit its scope. Various modifications and improvements made by those of ordinary skill in the art to the technical solutions of the present invention without departing from its spirit shall fall within the protection scope defined by the claims.

Claims (10)

1. A lane line detection method fusing traditional feature extraction and a deep neural network is characterized by comprising the following steps:
s1, extracting prior characteristics of a lane line in a road picture based on an input road picture to obtain a lane line prior characteristic map;
s2, splicing the lane line prior characteristic graph and the road picture to obtain a road characteristic graph;
and S3, inputting the road characteristic graph into a deep neural network model, and performing characteristic extraction and key point prediction on the road characteristic graph to obtain the position coordinates of key points of each lane line in a preset grid unit, wherein the key points are points belonging to the lane lines in the grid unit.
2. The lane line detection method fusing traditional feature extraction and a deep neural network according to claim 1, wherein, in S1, obtaining the input road picture comprises:
fixing a vehicle-mounted camera at the center of the vehicle roof with a bracket, adjusting the camera's installation angle so that it is aimed at the area to be detected, and using the vehicle-mounted camera to collect road images of the area to be detected in front of the vehicle.
3. The lane line detection method fusing traditional feature extraction and a deep neural network according to claim 2, wherein obtaining the lane line prior feature map comprises:
S1.1, graying the road picture with a weighted average method, configuring the proportions of the RGB components so as to preserve the brightness information of the lane lines, and obtaining a single-channel grayscale map of the road;
S1.2, sequentially applying median filtering, linear gray stretching, and OTSU automatic threshold segmentation to the single-channel grayscale map, and then selecting a region of interest according to the installation angle of the vehicle-mounted camera and the characteristics of the detected road environment;
S1.3, performing Canny edge detection on the region-of-interest image and taking a weighted average of the gray values of corresponding pixels in the images before and after edge detection; the resulting single-channel image is the lane line prior feature map.
4. The lane line detection method fusing traditional feature extraction and a deep neural network according to claim 1, wherein, in S2, obtaining the road feature map comprises:
splicing the lane line prior feature map and the road picture along the channel dimension to obtain the channel-merged road feature map.
5. The lane line detection method fusing traditional feature extraction and a deep neural network according to claim 1, wherein the deep neural network model comprises a feature extraction network module and a key point prediction network module; the feature extraction network module learns features of the lane lines at different scales in the road feature map, and the key point prediction network module receives the lane line features from the feature extraction network module and outputs the position coordinates of each key point of each lane line.
6. The lane line detection method fusing traditional feature extraction and a deep neural network according to claim 5, wherein, in S3, the feature extraction on the road feature map comprises: applying an image transform to the road feature map, namely resizing the image, converting it to a tensor, and normalizing it, to obtain the transformed road feature map; and inputting the transformed road feature map into the feature extraction network module to obtain lane line features at different scales.
7. The lane line detection method fusing traditional feature extraction and a deep neural network according to claim 6, wherein the feature extraction network module comprises a ResNet50 network with the fully connected layer removed, in which a convolution layer replaces the downsampling module of the ResNet50 network and an inverted bottleneck architecture replaces the bottleneck architecture of the ResNet50 residual blocks, so as to reduce the information loss caused by dimension compression in the residual blocks.
8. The lane line detection method fusing traditional feature extraction and a deep neural network according to claim 7, wherein the key point prediction on the road feature map comprises:
inputting the lane line features learned by the feature extraction network module into the key point prediction network module, and predicting the coordinates of the key points of each lane line within the grid cells using the two fully connected layers of the key point prediction network module; and multiplying the preset grid cell width by the predicted coordinates obtained from the deep neural network, computing the coordinates of the lane line points in the input road picture, and outputting a prediction map of the lane line positions.
9. The method of claim 8, wherein the grid cells are preset in the keypoint prediction network module.
10. The method of claim 8, wherein predicting the position of each lane line in the grid cells comprises:
obtaining the position of a lane line from the index k of the grid cell at which its prediction probability P_{i,j} over all grid cells is maximal: the prediction probabilities Prob_{i,j,:} at the different grid cells are first obtained with the softmax function, and the expectation of k is then taken as the predicted position of the lane line in the grid cells:

Prob_{i,j,:} = softmax(P_{i,j,1:w})

Loc_{i,j} = \sum_{k=1}^{w} k \cdot Prob_{i,j,k}

where P_{i,j,1:w} is the predicted probability distribution of the i-th lane line over the j-th row, Prob_{i,j,k} is the predicted probability that the i-th lane line lies in the k-th column of the j-th row, Loc_{i,j} is the predicted position of the i-th lane line in the j-th row, and w is the preset number of grid cell columns.
CN202210919555.0A 2022-08-02 2022-08-02 Lane line detection method integrating traditional feature extraction and deep neural network Active CN115376082B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210919555.0A CN115376082B (en) 2022-08-02 2022-08-02 Lane line detection method integrating traditional feature extraction and deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210919555.0A CN115376082B (en) 2022-08-02 2022-08-02 Lane line detection method integrating traditional feature extraction and deep neural network

Publications (2)

Publication Number Publication Date
CN115376082A true CN115376082A (en) 2022-11-22
CN115376082B CN115376082B (en) 2023-06-09

Family

ID=84063059

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210919555.0A Active CN115376082B (en) 2022-08-02 2022-08-02 Lane line detection method integrating traditional feature extraction and deep neural network

Country Status (1)

Country Link
CN (1) CN115376082B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109345547A (en) * 2018-10-19 2019-02-15 天津天地伟业投资管理有限公司 Traffic lane line detecting method and device based on deep learning multitask network
CN109829403A (en) * 2019-01-22 2019-05-31 淮阴工学院 A kind of vehicle collision avoidance method for early warning and system based on deep learning
CN112966624A (en) * 2021-03-16 2021-06-15 北京主线科技有限公司 Lane line detection method and device, electronic equipment and storage medium
CN113313047A (en) * 2021-06-11 2021-08-27 中国科学技术大学 Lane line detection method and system based on lane structure prior
CN114120272A (en) * 2021-11-11 2022-03-01 东南大学 Multi-supervision intelligent lane line semantic segmentation method fusing edge detection

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115775377A (en) * 2022-11-25 2023-03-10 北京化工大学 Automatic driving lane line segmentation method with image and steering wheel steering angle fused
CN115775377B (en) * 2022-11-25 2023-10-20 北京化工大学 Automatic driving lane line segmentation method with fusion of image and steering angle of steering wheel
TWI832591B (en) * 2022-11-30 2024-02-11 鴻海精密工業股份有限公司 Method for detecting lane line, computer device and storage medium

Also Published As

Publication number Publication date
CN115376082B (en) 2023-06-09

Similar Documents

Publication Publication Date Title
CN111368687B (en) Sidewalk vehicle illegal parking detection method based on target detection and semantic segmentation
CN110069986B (en) Traffic signal lamp identification method and system based on hybrid model
CN108875608B (en) Motor vehicle traffic signal identification method based on deep learning
CN115376082B (en) Lane line detection method integrating traditional feature extraction and deep neural network
CN109711264B (en) Method and device for detecting occupation of bus lane
CN110796009A (en) Method and system for detecting marine vessel based on multi-scale convolution neural network model
CN113095152B (en) Regression-based lane line detection method and system
CN103824081A (en) Method for detecting rapid robustness traffic signs on outdoor bad illumination condition
CN111008633A (en) License plate character segmentation method based on attention mechanism
CN113326846B (en) Rapid bridge apparent disease detection method based on machine vision
CN115019043B (en) Cross-attention mechanism-based three-dimensional object detection method based on image point cloud fusion
CN115601717B (en) Deep learning-based traffic offence behavior classification detection method and SoC chip
CN114120272A (en) Multi-supervision intelligent lane line semantic segmentation method fusing edge detection
CN111209923A (en) Deep learning technology-based muck truck cover or uncover identification method
CN113205107A (en) Vehicle type recognition method based on improved high-efficiency network
CN117197763A (en) Road crack detection method and system based on cross attention guide feature alignment network
CN112085018A (en) License plate recognition system based on neural network
CN112115800A (en) Vehicle combination recognition system and method based on deep learning target detection
CN112528994B (en) Free angle license plate detection method, license plate recognition method and recognition system
CN112053407B (en) Automatic lane line detection method based on AI technology in traffic law enforcement image
CN116630702A (en) Pavement adhesion coefficient prediction method based on semantic segmentation network
CN116863227A (en) Hazardous chemical vehicle detection method based on improved YOLOv5
CN111104944A (en) License plate character detection and segmentation method based on R-FCN
CN116630920A (en) Improved lane line type identification method of YOLOv5s network model
CN113192018B (en) Water-cooled wall surface defect video identification method based on fast segmentation convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant