CN112101163A - Lane line detection method - Google Patents
- Publication number
- CN112101163A (application CN202010924093.2A)
- Authority
- CN
- China
- Prior art keywords
- line
- lane line
- lane
- cluster
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications (leaf codes; all under G—PHYSICS, G06—COMPUTING; CALCULATING OR COUNTING)
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
- G06F18/23—Clustering techniques
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/28—Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/56—Extraction of image or video features relating to colour
Abstract
The invention relates to the technical field of automobile driving assistance and discloses a lane line detection method. The method comprises the steps of: 1) detecting the vanishing point by vote mapping and establishing an adaptive region of interest (ROI); 2) converting the RGB color values of the image in the ROI into YCbCr color values, extracting the Y component of the white lane line and generating a binary image of the white lane line; 3) applying agglomerative hierarchical clustering to the lane-marking binary image; 4) outputting continuous lane markings by a fitting method. Compared with the prior art, the lane line detection method has the following advantages: it has good robustness, effectively reduces computational complexity by establishing the adaptive ROI, improves the efficiency of lane line detection under different illumination conditions, and improves the real-time performance of the algorithm.
Description
Technical Field
The invention belongs to the technical field of automobile auxiliary driving, and particularly relates to a lane line detection method.
Background
With the rapid economic development of recent years and the rising standard of living, there are more and more vehicles on the road, and driving safety has drawn increasing attention. Statistically, about 50% of automobile traffic accidents are caused by vehicles deviating from their normal driving lane, so research on driver-assistance technology for lane keeping is very meaningful. Lane line detection is an important component of automobile driving-assistance technology and plays a significant role in its research and development.
At present, many lane line detection methods exist; vision-based methods such as inverse perspective mapping, particle filtering and the Hough transform have been proposed, but these methods have high computational complexity and perform poorly under complex illumination. It is therefore important to develop algorithms that can work in inclement weather, at night and under a variety of lighting conditions while reducing computational complexity.
Disclosure of Invention
The purpose of the invention is as follows: to overcome the shortcomings of existing lane line detection methods, the invention provides a lane line detection method in which an adaptive ROI is established by a voting method. This reduces the computational complexity of the algorithm and improves lane line detection efficiency under changing illumination; the adaptive ROI compensates for the shortcomings of a fixed region-of-interest determination and handles well the situation where the lower half of the image alone is insufficient for lane line detection; and the agglomerative hierarchical clustering method gives the detection good robustness.
The technical scheme is as follows: the invention discloses a lane line detection method, which comprises the following steps:
step 1: performing vanishing point detection on the input original image;
step 2: establishing a self-adaptive region of interest (ROI) according to the position of the vanishing point detected in the step 1;
step 3: converting the RGB color values of the image in the ROI into YCbCr color values, extracting the Y component and generating a binary image of the white lane line, to obtain a candidate area of the white lane;
step 4: outputting a group of straight-line sets with similar slope and Y-axis intercept from the binary image of the white lane line by an agglomerative hierarchical clustering method;
step 5: applying a fitting method to the straight-line set of step 4 and outputting continuous lane markings, to obtain the corresponding lane lines.
Further, the specific process of the vanishing point detection in the step 1 is as follows:
step 1.1: performing graying processing on an input original image, performing edge detection by using a canny edge detector, and outputting a plurality of small line segments;
step 1.2: carrying out line segment detection on the plurality of small line segments by using Hough transformation to obtain a plurality of detection lines;
step 1.3: and (3) calculating the intersection point of each detection line in the step 1.2, generating a voting graph of the accumulated detection line intersection points, finding the central point of the region with the most votes, and defining the central point as a vanishing point.
Further, in step 2, the vanishing point from step 1 is indicated by a cross mark; the height of the region of interest (ROI) is determined by the region below the horizontal vanishing line through the vertical coordinate of the vanishing point, and the ROI is marked in the original image.
Further, in step 3, the RGB color space of the image is converted into the YCBCR color space and the Y component is extracted, where the conversion formula is:
Y=0.299R+0.587G+0.114B
R, G and B represent the three components of an RGB image: the red, green and blue components, respectively.
Further, the binarized image of the white lane line in step 3 may be represented as:
C(x, y) = 1, if S_Y(E(x, y)) ≥ T; C(x, y) = 0, otherwise,
where C(x, y) is the binary image of the Y component, T is a segmentation threshold, E(x, y) is the Y-component image, and S_Y(E(x, y)) is the cumulative histogram of the Y component evaluated at the pixel's grey level, which can be expressed as: S_Y(E(x, y)) = H_Y(1) + H_Y(2) + ··· + H_Y(E(x, y)), where H_Y(i), i = 1, 2, ..., 255, is the proportion of pixels with grey level i in the total pixel count of the image.
Further, T is set to be 0.95-0.97.
Further, before the clustering method of step 4 is applied to the obtained candidate region of the white lane, the candidate region of the white lane line is preprocessed, specifically: small linear structures of the lanes are extracted from the binary image of the white lane line using a Sobel gradient operator and a Canny edge detector; line segments are then detected using the Hough transform and grouped by their slope and Y-axis intercept, first by slope and then, within each slope group, again by Y-axis intercept.
Further, the method for the condensed hierarchical clustering specifically comprises the following steps:
step 4.1: inputting a set of samples X = {x1, x2, x3, ..., xn} representing line segments, with N a threshold used to stop sub-cluster merging;
step 4.2: starting with n disjoint clusters, one per sample;
step 4.3: calculating a similarity measure between each pair of clusters, the similarity measure between clusters being an average distance from all elements of one cluster to all elements of another cluster;
step 4.4: finding the most similar pair of clusters in the current set and merging them into one cluster; if their similarity measure is less than or equal to N, the merged result is kept as one cluster for further processing;
step 4.5: the number of clusters decreases by one; repeating steps 4.3 and 4.4 until no two clusters are closer than N or a single cluster remains;
step 4.6: a cluster sample is returned, which is a set of straight lines with similar slopes and Y-intercept.
Further, in the step 5, the straight line set is fitted by using a least square method, the disconnected straight line set is fitted into a straight line, and finally the continuous lane mark is obtained.
Has the advantages that:
1. the invention uses voting mapping to detect vanishing points to establish an adaptive ROI, which reduces the computational complexity of the vanishing point detection stage.
2. The invention adopts a self-adaptive ROI to make up for the defects of a fixed determination method of the region of interest, and can well solve the problem that the lower half part is insufficient for detecting the lane line.
3. The invention better addresses the difficulty of detecting lane lines under different illumination conditions.
Drawings
FIG. 1 is a flow chart of a lane marking detection method of the present invention;
FIG. 2 is a schematic diagram of the adaptive ROI established from the position of the vanishing point detected by voting according to the present invention;
FIG. 3 is a schematic diagram of vanishing point detection by the voting method according to the present invention, where a represents the original road lane line image and b represents the detected position of the vanishing point;
FIG. 4 is a binary image (candidate area of white lane) of the Y component of the present invention;
FIG. 5 is a linear set obtained by the clustering method according to the present invention;
FIG. 6 is a lane line obtained by least squares fitting according to the present invention.
Detailed Description
The technical solutions described in the present application are further described below with reference to the accompanying drawings and embodiments.
The lane line detection method implemented by the present invention is described by taking the lane line detection on a specific road as an example, and as shown in the flow chart of fig. 1, the method includes the following steps:
Step 1: perform vanishing point detection on the input original image.
Step 1.1: performing graying processing on an input original image, performing edge detection by using a canny edge detector, and outputting a plurality of small line segments;
step 1.2: carrying out line segment detection on the plurality of small line segments by using Hough transformation to obtain a plurality of detection lines;
step 1.3: and (3) calculating the intersection points of the detection lines in the step 1.2, generating a voting graph of the intersection points of the accumulated detection lines, finding the central point of the region with the most votes, and defining the central point as a vanishing point.
Referring to the schematic view of the road shown in fig. 3, the cross mark is the vanishing point.
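As an illustration of the voting stage of step 1.3, the following sketch (not the patent's own code) assumes the Canny/Hough stage of steps 1.1 and 1.2 has already produced detection lines as (slope, intercept) pairs; the helper name and the `cell` grid size are hypothetical choices. It accumulates pairwise line intersections into a coarse voting grid and returns the centre of the winning cell as the vanishing point:

```python
import numpy as np
from itertools import combinations

def vote_vanishing_point(lines, shape, cell=10):
    """Accumulate pairwise intersections of detection lines into a coarse
    voting grid and return the centre of the cell with the most votes.
    lines: list of (slope, intercept) pairs; shape: (height, width)."""
    h, w = shape
    votes = np.zeros((h // cell + 1, w // cell + 1), dtype=int)
    for (m1, b1), (m2, b2) in combinations(lines, 2):
        if m1 == m2:                      # parallel lines: no intersection
            continue
        x = (b2 - b1) / (m1 - m2)         # solve m1*x + b1 = m2*x + b2
        y = m1 * x + b1
        if 0 <= x < w and 0 <= y < h:
            votes[int(y) // cell, int(x) // cell] += 1
    r, c = np.unravel_index(votes.argmax(), votes.shape)
    # centre of the winning cell = vanishing point estimate
    return (c * cell + cell / 2, r * cell + cell / 2)
```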
Step 2: establish an adaptive region of interest (ROI) based on the position of the detected vanishing point (as shown in fig. 2). The vanishing point from step 1 is indicated by a cross mark, and the area below the horizontal vanishing line through the vertical coordinate of the vanishing point determines the height of the ROI. The ROI is marked in the original image, and all subsequent steps process only this marked region.
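A minimal sketch of the ROI step, assuming the ROI is simply the horizontal band below the vanishing line (the function name and the `margin` parameter, which keeps a few rows above the line, are hypothetical):

```python
import numpy as np

def adaptive_roi(image, vanishing_y, margin=10):
    """Crop the adaptive region of interest: the band below the horizontal
    vanishing line through the vanishing point's vertical coordinate.
    Returns the cropped image and the row offset of the crop."""
    top = max(0, int(vanishing_y) - margin)
    return image[top:, :], top
```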
Step 3: convert the RGB color values of the image into YCbCr color values and extract the Y component of the white lane line, performing the color space conversion according to the following formula:
Y=0.299R+0.587G+0.114B
R, G and B represent the three components of an RGB image: the red, green and blue components, respectively.
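The conversion formula above can be applied to a whole image at once; the following vectorized sketch (illustrative, not the patent's code) does exactly that:

```python
import numpy as np

def extract_y(rgb):
    """Extract the luma (Y) component of an H x W x 3 RGB image
    using Y = 0.299 R + 0.587 G + 0.114 B, as in the formula above."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return 0.299 * r + 0.587 * g + 0.114 * b
```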
In the color space conversion formula of step 3, the values of R, G and B at a pixel under two different illumination conditions m and n are related by a diagonal matrix transformation:
R_m = a1·R_n, G_m = a2·G_n, B_m = a3·B_n,
where a1, a2 and a3 are the diagonal coefficients between illumination conditions m and n. Under varying illumination the ordering of values therefore remains the same, and the Y component in the YCbCr color space is related in the same way: Y_m = A·Y_n. Since white has the highest value in the Y channel under various lighting conditions, the white lane line can be detected.
Therefore, using the above characteristics in step 3, the binarized image of the white lane line can be obtained by the following formula:
C(x, y) = 1, if S_Y(E(x, y)) ≥ T; C(x, y) = 0, otherwise,
where C(x, y) is the binary image of the Y component, T is a segmentation threshold, E(x, y) is the Y-component image, and S_Y(E(x, y)) is the cumulative histogram of the Y component evaluated at the pixel's grey level, which can be expressed as: S_Y(E(x, y)) = H_Y(1) + H_Y(2) + ··· + H_Y(E(x, y)), where H_Y(i), i = 1, 2, ..., 255, is the proportion of pixels with grey level i in the total pixel count of the image.
T is set to 0.95-0.97, and is generally set to 0.97 based on experience.
Step 3 generates the binary image of the white lane line according to the above formula, yielding a candidate area for white lane detection. Referring to fig. 4, fig. 4 is the binarized image of the region of interest (ROI) of the road shown in fig. 3.
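Under the assumption that S_Y is the cumulative histogram evaluated at each pixel's own grey level (as the comparison with T requires), the binarization can be sketched as follows; this is an illustrative implementation, not the patent's code, and the function name is hypothetical:

```python
import numpy as np

def binarize_white_lane(y_img, T=0.97):
    """Keep only the brightest pixels: those whose cumulative-histogram
    value S_Y(E(x, y)) meets or exceeds the proportion threshold T.
    y_img: 2-D uint8 Y-component image. Returns a 0/1 mask."""
    hist = np.bincount(y_img.ravel(), minlength=256) / y_img.size
    cum = np.cumsum(hist)       # S_Y for each grey level 0..255
    mask = cum[y_img] >= T      # pixel passes if its cumulative share >= T
    return mask.astype(np.uint8)
```

With T = 0.97, only roughly the brightest 3% of pixels survive, which matches the observation that white lane markings have the highest Y values.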
Step 4: apply an agglomerative hierarchical clustering method to the candidate region of the white lane obtained in step 3 and output a group of straight-line sets with similar slope and Y-axis intercept. Before the agglomerative hierarchical clustering, small linear structures of the lanes are extracted from the binary image of the white lane line using a Sobel gradient operator and Canny edge detection; line segments are then detected using the Hough transform and grouped by slope and Y-axis intercept, first by slope and then, within each slope group, again by Y-axis intercept.
The concrete method of agglomerative hierarchical clustering in step 4 is as follows: in the line-segment grouping of lane line detection, thresholds are set on the slope and the Y-axis intercept for grouping, and the agglomerative hierarchical clustering proceeds:
(1) Input a set of samples X = {x1, x2, x3, ..., xn} representing line segments; the threshold N is used to stop sub-cluster merging.
(2) Start with n disjoint clusters, one per sample.
(3) A similarity measure between each pair of clusters is calculated, the similarity measure between clusters being the average distance from all elements of one cluster to all elements of another cluster.
(4) Find the most similar pair of clusters in the current set and merge them; if their similarity measure is less than or equal to N, the merged result is kept as one cluster for further processing.
(5) The number of clusters decreases by one.
(6) Repeat steps (3) and (4) until no two clusters are closer than N or a single cluster remains.
(7) A cluster sample is returned, which is a set of straight lines with similar slopes and Y-intercept. Referring to fig. 5, fig. 5 is a straight line set obtained by a clustering method in the current situation of the highway.
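Steps (1)-(7) above can be sketched as follows. This is a naive average-linkage implementation for illustration only (the patent does not give code); it assumes each segment is represented by its (slope, Y-intercept) pair and uses Euclidean distance in that 2-D feature space:

```python
import numpy as np

def cluster_lines(lines, N):
    """Agglomerative hierarchical clustering of line segments.
    lines: list of (slope, y_intercept) pairs; N: distance threshold that
    stops sub-cluster merging. Returns clusters as lists of indices."""
    pts = np.asarray(lines, dtype=float)
    clusters = [[i] for i in range(len(pts))]   # one cluster per segment
    while len(clusters) > 1:
        best = None
        # average-linkage distance between every pair of clusters
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = np.mean([np.linalg.norm(pts[i] - pts[j])
                             for i in clusters[a] for j in clusters[b]])
                if best is None or d < best[0]:
                    best = (d, a, b)
        if best[0] > N:        # no two clusters closer than N: stop
            break
        _, a, b = best
        clusters[a] = clusters[a] + clusters[b]  # merge the closest pair
        del clusters[b]
    return clusters
```

The O(n^3) pairwise scan is fine for the few dozen segments a Hough stage typically yields; a production version would use a linkage matrix instead.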
Step 5: from the clusters obtained in step 4, output a group of straight-line sets with similar slope and Y-axis intercept. Because the line segments are inherently disconnected, the straight-line sets are fitted using the least squares method, and the disconnected segments are finally fitted into one straight line, yielding the continuous lane marking, i.e. the corresponding lane line. Referring to fig. 6, the black line is the finally detected white lane line.
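The least squares fit of step 5 can be sketched as below (illustrative only; `fit_lane` is a hypothetical helper that fits one straight line through the endpoints of a cluster's broken segments using `np.polyfit` with degree 1):

```python
import numpy as np

def fit_lane(points):
    """Least squares fit of one straight lane line through the endpoints
    of a cluster's disconnected segments.
    points: list of (x, y) endpoints; returns (slope, intercept)."""
    xs, ys = zip(*points)
    slope, intercept = np.polyfit(xs, ys, 1)
    return slope, intercept
```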
The technical means disclosed by the invention are not limited to those disclosed in the above embodiments, but also include technical solutions formed by any combination of the above technical features. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications are also considered to be within the scope of the present invention.
Claims (9)
1. A lane line detection method is characterized by comprising the following steps:
step 1: performing vanishing point detection on the input original image;
step 2: establishing a self-adaptive region of interest (ROI) according to the position of the vanishing point detected in the step 1;
step 3: converting the RGB color values of the image in the ROI into YCbCr color values, extracting the Y component and generating a binary image of the white lane line, to obtain a candidate area of the white lane;
step 4: outputting a group of straight-line sets with similar slope and Y-axis intercept from the binary image of the white lane line by an agglomerative hierarchical clustering method;
step 5: applying a fitting method to the straight-line set of step 4 and outputting continuous lane markings, to obtain the corresponding lane lines.
2. The lane line detection method according to claim 1, wherein the specific process of the vanishing point detection in step 1 is as follows:
step 1.1: performing graying processing on an input original image, performing edge detection by using a canny edge detector, and outputting a plurality of small line segments;
step 1.2: carrying out line segment detection on the plurality of small line segments by using Hough transformation to obtain a plurality of detection lines;
step 1.3: and (3) calculating the intersection point of each detection line in the step 1.2, generating a voting graph of the accumulated detection line intersection points, finding the central point of the region with the most votes, and defining the central point as a vanishing point.
3. The lane line detection method according to claim 1, wherein in step 2, the vanishing point in step 1 is indicated by a cross mark, the height of the ROI is determined by a region under a horizontal vanishing line of vertical coordinates of the vanishing point, and the ROI is marked in the original image.
4. The lane line detecting method according to claim 1, wherein the RGB color space of the image is converted into the YCBCR color space and the Y component is extracted in step 3, and the conversion formula is:
Y=0.299R+0.587G+0.114B
R, G and B represent the three components of an RGB image: the red, green and blue components, respectively.
5. The lane line detection method according to claim 4, wherein the binarized image of the white lane line in step 3 is represented as:
C(x, y) = 1, if S_Y(E(x, y)) ≥ T; C(x, y) = 0, otherwise,
where C(x, y) is the binary image of the Y component, T is a segmentation threshold, E(x, y) is the Y-component image, and S_Y(E(x, y)) is the cumulative histogram of the Y component evaluated at the pixel's grey level, which can be expressed as: S_Y(E(x, y)) = H_Y(1) + H_Y(2) + ··· + H_Y(E(x, y)), where H_Y(i), i = 1, 2, ..., 255, is the proportion of pixels with grey level i in the total pixel count of the image.
6. The lane line detection method according to claim 5, wherein T is set to 0.95 to 0.97.
7. The lane line detection method according to claim 1, wherein before the clustering method of step 4 is applied to the obtained candidate region of the white lane, the candidate region of the white lane line is preprocessed, specifically: small linear structures of the lanes are extracted from the binary image of the white lane line using a Sobel gradient operator and a Canny edge detector; line segments are then detected using the Hough transform and grouped by their slope and Y-axis intercept, first by slope and then, within each slope group, again by Y-axis intercept.
8. The lane line detection method according to claim 1, wherein the agglomerative hierarchical clustering method comprises the following steps:
step 4.1: inputting a set of samples X = {x1, x2, x3, ..., xn} representing line segments, with N a threshold used to stop sub-cluster merging;
step 4.2: starting with n disjoint clusters, one per sample;
step 4.3: calculating a similarity measure between each pair of clusters, the similarity measure between clusters being an average distance from all elements of one cluster to all elements of another cluster;
step 4.4: finding the most similar pair of clusters in the current set and merging them into one cluster; if their similarity measure is less than or equal to N, the merged result is kept as one cluster for further processing;
step 4.5: the number of clusters decreases by one; repeating steps 4.3 and 4.4 until no two clusters are closer than N or a single cluster remains;
step 4.6: a cluster sample is returned, which is a set of straight lines with similar slopes and Y-intercept.
9. The lane line detection method according to any one of claims 1 to 8, wherein in the step 5, the set of straight lines is fitted by using a least square method, and the set of disconnected straight lines is fitted into one straight line, so as to finally obtain the continuous lane mark.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010924093.2A CN112101163A (en) | 2020-09-04 | 2020-09-04 | Lane line detection method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112101163A true CN112101163A (en) | 2020-12-18 |
Family
ID=73758533
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010924093.2A Pending CN112101163A (en) | 2020-09-04 | 2020-09-04 | Lane line detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112101163A (en) |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101470807A (en) * | 2007-12-26 | 2009-07-01 | 河海大学常州校区 | Accurate detection method for highroad lane marker line |
CN103617412A (en) * | 2013-10-31 | 2014-03-05 | 电子科技大学 | Real-time lane line detection method |
US20140185879A1 (en) * | 2011-09-09 | 2014-07-03 | Industry-Academic Cooperation Foundation, Yonsei University | Apparatus and method for detecting traffic lane in real time |
CN104063711A (en) * | 2014-06-23 | 2014-09-24 | 西北工业大学 | Corridor vanishing point rapid detection algorithm based on K-means method |
CN104318258A (en) * | 2014-09-29 | 2015-01-28 | 南京邮电大学 | Time domain fuzzy and kalman filter-based lane detection method |
CN105893949A (en) * | 2016-03-29 | 2016-08-24 | 西南交通大学 | Lane line detection method under complex road condition scene |
US20160350603A1 (en) * | 2015-05-28 | 2016-12-01 | Tata Consultancy Services Limited | Lane detection |
CN106682586A (en) * | 2016-12-03 | 2017-05-17 | 北京联合大学 | Method for real-time lane line detection based on vision under complex lighting conditions |
CN108052904A (en) * | 2017-12-13 | 2018-05-18 | 辽宁工业大学 | The acquisition methods and device of lane line |
CN108647664A (en) * | 2018-05-18 | 2018-10-12 | 河海大学常州校区 | It is a kind of based on the method for detecting lane lines for looking around image |
CN108986453A (en) * | 2018-06-15 | 2018-12-11 | 华南师范大学 | A kind of traffic movement prediction method based on contextual information, system and device |
CN109002797A (en) * | 2018-07-16 | 2018-12-14 | 腾讯科技(深圳)有限公司 | Vehicle lane change detection method, device, storage medium and computer equipment |
CN109583280A (en) * | 2017-09-29 | 2019-04-05 | 比亚迪股份有限公司 | Lane detection method, apparatus, equipment and storage medium |
CN110163109A (en) * | 2019-04-23 | 2019-08-23 | 浙江大华技术股份有限公司 | A kind of lane line mask method and device |
CN110414385A (en) * | 2019-07-12 | 2019-11-05 | 淮阴工学院 | A kind of method for detecting lane lines and system based on homography conversion and characteristic window |
- 2020-09-04: application CN202010924093.2A filed in China; published as CN112101163A, status active/Pending
Non-Patent Citations (4)
Title |
---|
RUDRA HOTA et al.: "A Simple and Efficient Lane Detection using Clustering and Weighted Regression", 15th International Conference on Management of Data (COMAD 2009), pages 1-9 |
TOAN MINH HOANG et al.: "Road Lane Detection Robust to Shadows Based on a Fuzzy System Using a Visible Light Camera Sensor", Sensors, pages 1-29 |
成春阳 et al.: "Lane line detection algorithm based on active infrared filter surround-view imaging", Laser & Optoelectronics Progress, page 121014-1 |
鱼兆伟 et al.: "Illumination-independent lane line detection algorithm based on dynamic region of interest", Computer Engineering, vol. 43, no. 2, pages 43-56 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113449659A (en) * | 2021-07-05 | 2021-09-28 | 淮阴工学院 | Method for detecting lane line |
CN113449659B (en) * | 2021-07-05 | 2024-04-23 | 淮阴工学院 | Lane line detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||