CN109086671B - Night lane marking line video detection method suitable for unmanned driving - Google Patents

Info

Publication number
CN109086671B
CN109086671B CN201810723815.0A
Authority
CN
China
Prior art keywords
road
region
lane
interest
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810723815.0A
Other languages
Chinese (zh)
Other versions
CN109086671A (en)
Inventor
贺璇
游峰
陈川
段征宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
South China University of Technology SCUT
Original Assignee
Tongji University
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University and South China University of Technology (SCUT)
Priority to CN201810723815.0A
Publication of CN109086671A
Application granted
Publication of CN109086671B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/24 Aligning, centring, orientation detection or correction of the image
    • G06V 10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/48 Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a night lane marking line video detection method suitable for unmanned driving, comprising the following steps: 1) acquire a night road image and preprocess it, the preprocessing comprising suppressing image noise with median filtering and enhancing road edges with a Sobel operator to remove useless information from the image; 2) generate a splayed adaptive region of interest from the preprocessed night road image; 3) classify the road boundary feature points within the splayed adaptive region of interest and, according to the classification result, obtain a left-lane interest quadrilateral region and a right-lane interest quadrilateral region; 4) fit and recognize the lane marking lines with an improved Hough transform. Compared with the prior art, the method narrows the range over which road boundary points are screened, classifies the lane boundary points, and shortens lane-line detection time.

Description

Night lane marking line video detection method suitable for unmanned driving
Technical Field
The invention relates to the field of intelligent traffic active safety, in particular to a night lane marking line video detection method suitable for unmanned driving.
Background
With the rapid development of China's urban economy, car ownership has grown year by year and traffic safety problems have become increasingly apparent. According to official data from the National Bureau of Statistics, there were about 180,000 traffic accidents in China in 2015, with casualties exceeding 250,000 and an accident fatality rate of 30%. Serious casualty accidents frequently occur when an inattentive driver lets the vehicle drift out of its lane. It is therefore important to develop active safety techniques for road marking detection.
Road detection technology can be used in driver assistance systems and is also a key technology in the development of unmanned vehicles. In Europe and America, several lane detection systems have been developed, such as the RALPH, Start, AURORA and ALVINN systems. These systems detect roads using different road models and different boundary extraction techniques. However, their detection algorithms are mainly suited to uniformly lit daytime scenes and lack adaptability to changes in illumination.
At present, there is no product in China specifically for road detection in night scenes, and related research has mainly remained focused on uniformly lit daytime scenes. Night scenes are complex: illumination is uneven and road-surface shadows are complicated, so little research addresses them. Yet unmanned vehicles inevitably need to drive at night. Reliable night road detection technology is therefore needed to improve the safety and practicality of unmanned driving.
In addition, the region of interest in current road detection algorithms is mainly either fixed or established adaptively with a Kalman filter, and is usually rectangular. In fact, the lane lines are distributed in a splayed (八) shape in the image, more than 95% of a rectangular region of interest is useless area, and a rectangular region of interest is easily disturbed by noise. Recent foreign research has established "Λ"-shaped regions of interest. However, at night the lane markings are lit from one side, the left and right lanes are distributed differently after edge enhancement, and the region of interest still has room for optimization. Optimizing the region of interest of night images can speed up the algorithm and thus meet the real-time requirements of unmanned vehicles.
In general, China currently lacks road detection products aimed at night scenes, and existing research is insufficient for the development of unmanned driving technology, lacking reliability and real-time performance. To avoid safety accidents caused by lane departure, the invention provides a night lane marking line video detection technique suitable for unmanned driving.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a night lane marking video detection method suitable for unmanned driving.
The purpose of the invention can be realized by the following technical scheme:
a night lane marking video detection method suitable for unmanned driving comprises the following steps:
1) acquiring a night road image, and preprocessing the night road image, wherein the preprocessing comprises the steps of suppressing image noise by adopting median filtering and enhancing road edges by adopting a Sobel operator to eliminate useless information in the image;
2) generating a splayed self-adaptive interested region according to the preprocessed night road image;
3) classifying road boundary characteristic points in the splayed self-adaptive interest region, and respectively obtaining a left lane interest quadrilateral region and a right lane interest quadrilateral region according to a classification result;
4) and adopting improved Hough transformation to fit and recognize the lane marking line.
The step 2) specifically comprises the following steps:
21) establishing the initialized splayed region of interest:
a rectangular initialization area B_i is set in the first edge-enhanced image; B_i is enlarged by setting a fluctuation coefficient f_l; a multi-directional search is carried out in the enlarged initialization region B_i to obtain the end points of the road boundary points, forming the initialized splayed region of interest in the first image;
22) building the splayed adaptive region of interest:
according to the region of interest R_a of the previous edge-enhanced night road image, R_a is enlarged by a lateral expansion coefficient to obtain an enlarged region R_b, and a multi-directional search is performed in R_b to obtain the end points of the road boundary points in the current image.
The specific steps of the multi-directional search for the end points of the road boundary points are as follows:
for any pair of laterally adjacent edge points a and b within the initialization area, if d_min ≤ y_b − y_a ≤ d_max, then the two points are end points of road boundary points and are added to the road boundary point set B, where (x_a, y_a) and (x_b, y_b) are the pixel coordinates of edge points a and b, and d_min and d_max are the minimum and maximum adjacent distances.
In step 3), the classification of the road boundary feature points is specifically:
for road boundary points in the left half-plane:
if f(x, y) ∈ B and f(x, y) ∈ R_cl and S_y > 0, then f(x, y) ∈ B_lo;
if f(x, y) ∈ B and f(x, y) ∈ R_cl and S_y < 0, then f(x, y) ∈ B_li;
for road boundary points in the right half-plane:
if f(x, y) ∈ B and f(x, y) ∈ R_cr and S_y > 0, then f(x, y) ∈ B_ro;
if f(x, y) ∈ B and f(x, y) ∈ R_cr and S_y < 0, then f(x, y) ∈ B_ri;
where f(x, y) is a pixel position in the splayed adaptive region of interest, B is the road boundary point set, B_lo is the left outer boundary set, B_ro the right outer boundary set, B_li the left inner boundary set, B_ri the right inner boundary set, R_cl the left part and R_cr the right part of the splayed adaptive region of interest, and S_y the pixel response computed with a 3×3 vertical Sobel operator template.
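As a sketch, the four classification rules can be expressed as a single function; the boolean membership flags and the string set labels are illustrative assumptions:

```python
def classify_boundary_point(in_B, in_Rcl, in_Rcr, S_y):
    """Assign a road boundary point to B_lo / B_li / B_ro / B_ri by the
    sign of its vertical 3x3 Sobel response S_y, following the four
    rules above. Returns None when no rule applies."""
    if not in_B:
        return None
    if in_Rcl and S_y > 0:
        return "B_lo"  # left outer boundary
    if in_Rcl and S_y < 0:
        return "B_li"  # left inner boundary
    if in_Rcr and S_y > 0:
        return "B_ro"  # right outer boundary
    if in_Rcr and S_y < 0:
        return "B_ri"  # right inner boundary
    return None
```

The sign of S_y flips between the two sides of a painted marking, which is why one Sobel response is enough to separate the inner and outer edges of each line.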
In step 4), the image is divided into left and right halves, the lane markings are identified separately, and the left and right boundaries of the road are obtained.
In step 4), a left fluctuation angle A_fl and a right fluctuation angle A_fr are set, and the angles formed by the left and right lane boundaries with the x-axis are restricted to ranges determined by the normal angles of the side lines a_l d_l, b_l c_l, a_r d_r and b_r c_r of the two quadrilateral regions of interest, widened by A_fl and A_fr respectively.
Compared with the prior art, the invention has the following advantages:
(1) establishing an adaptive splayed region of interest
Quadrilateral regions of interest are established independently according to the distribution of the left and right lane boundary points, so the method suits night scenes with unbalanced road brightness, effectively avoids noise interference, and greatly narrows the range over which road boundary points are screened, reducing image processing time.
(2) Classifying lane boundary points based on road boundary texture features
Road boundary texture features are identified, and on the established splayed region of interest the lane boundary points are divided into left outer, left inner, right outer and right inner boundary sets according to these features, facilitating subsequent lane line identification.
(3) Road sign line recognition based on improved Hough transformation
During Hough-transform detection of the road markings, the search angles of the left and right lane lines are constrained by the angles of the side lines of the two quadrilaterals in the established splayed region of interest, shortening lane-line detection time.
Drawings
FIG. 1 is a flow chart of a night lane marking video detection algorithm suitable for unmanned driving.
Fig. 2 is an acquired original image of a road at night.
Fig. 3 is a night road image after median filtering.
Fig. 4 is an edge enhancement diagram of Sobel operator.
Fig. 5 is a flow chart of initializing a region of interest.
FIG. 6 is a schematic diagram of the end points found by the multi-directional search for road boundary points. The area within the white frame is the rectangular initialization area B_i; the white arrows indicate the search direction, the dense white points are the screened lane boundary points, and the white dots indicate the end points of the searched road boundary points.
Fig. 7 is an initialized region of interest map. The white boxed area in the figure is the initialized region of interest established according to the method shown in fig. 5.
Fig. 8 is a flow chart of building the splayed adaptive region of interest.
FIG. 9 is a schematic diagram of the end points found by the multi-directional search for road boundary points. The white area in the figure is R_b, the enlarged region of interest from the previous frame; the white arrows indicate the search direction, the dense white points are the screened lane boundary points, and the white dots indicate the end points of the searched road boundary points.
Fig. 10 is a diagram of the splayed adaptive region of interest. The white area inside the frame is the splayed adaptive region of interest created according to the method shown in fig. 8.
Fig. 11 is a view of the Hough transform coordinate system. Quadrilateral a_l b_l c_l d_l and quadrilateral a_r b_r c_r d_r are the regions of interest formed for the left and right lanes, respectively; α and β are the angles between the boundaries of the left and right lanes, respectively, and the x-axis.
Fig. 12 is a diagram showing the detection result of the road marking at night.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments.
Examples
The invention provides a night lane marking line video detection method suitable for unmanned driving: the image is preprocessed with median filtering and a Sobel operator, an adaptive splayed region of interest is then established based on the geometric characteristics of the lane lines, the boundary points are classified according to the texture characteristics of the road boundary, and the road boundary is detected with an improved Hough transform. The invention can detect the road at night, avoid traffic accidents caused by lane departure due to distracted driving, serve unmanned vehicles, and ensure the safety of travellers. The specific steps are as follows:
step 1; carrying out difficult point analysis of road detection at night: road detection at night is more complicated than road detection under the even illumination scene of daytime, mainly reflects in:
1. the gray scale of the road image at night is integrally low, and the difficulty in extracting the lane boundary points is high.
2. Alternate light spots appear on the road surface, and edge deletion still exists after edge enhancement.
3. The night image is influenced by the car lights, the brightness of the near scene is high, the brightness of the far scene is low, and the lane line edge of the far scene is easy to lose.
4. The vehicle windshield is easy to reflect light, a brightness area is formed in an image, and the gray value of pixels in the area is enlarged.
Step 2; preprocessing a night road image: the difficulty of road detection is increased due to the complexity of the night images, so that the processing method is different from the processing of the acquired images under the uniform illumination scene in the daytime. The method comprises the steps of firstly adopting median filtering to inhibit image noise, overcoming the problem of image detail blurring caused by linear filters such as mean filtering and least mean square filtering under a certain condition, and effectively protecting edge and contour information. Meanwhile, the method does not need the statistical characteristics of the image in the actual operation, so the processing speed is relatively high, and the method is suitable for the road image preprocessing of the unmanned vehicle. And then carrying out road edge enhancement based on a Sobel operator. The Sobel operator is a first-order differential operator and can effectively eliminate most useless information in the image.
Step 3; constructing an adaptive interest region in a shape of 'eight':
firstly, establishing an initialization region of interest: inputting the image subjected to Sobel operator edge enhancement and setting a rectangular initialization region BiSuppose BiThe pixel of the inner transverse adjacent edge point is f (x)a,ya) And f (x)b,yb) Set of road boundaries B, adjacent minimum distance dminAdjacent maximum distance dmaxExtracting road boundary points according to the distance characteristics of the boundary points of the inner road and the outer road. If y isb-ya≥dmin∩yb-ya≤dmaxThen f (x)a,ya)∈B,f(xb,yb) E.g. B. In BiAnd searching the end points of the road boundary points in multiple directions. In order to take into account as many road boundary points as possible, the region of interest fluctuation coefficient f is setlAnd enlarging the width of the interest region to form an initialized splayed interest region.
Then, building an adaptive interest region of the Chinese character 'eight': calculating the interested region R of the road boundary endpoint based on the previous pictureaLinear relationship of the boundary and according to a transverse expansion coefficient ehEnlarging RaTo obtain Rb. Inputting new image after Sobel operator edge enhancement at RbAnd extracting road boundary points according to the distance features. At RbAnd searching the end points of the road boundary points in multiple directions. Setting a region of interest fluctuation coefficient flAnd expanding the width of the interesting area to form a splayed self-adaptive interesting area.
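The lateral expansion of R_a into R_b can be sketched as follows; representing the region as a rectangle (x_left, x_right, y_top, y_bottom) is an illustrative simplification of the splayed region:

```python
def expand_roi(roi, e_h, img_width):
    """Widen the previous frame's region of interest R_a by the lateral
    expansion coefficient e_h (> 1) to obtain the search region R_b,
    clamped to the image."""
    x_left, x_right, y_top, y_bottom = roi
    growth = (x_right - x_left) * (e_h - 1.0) / 2.0
    return (max(0, round(x_left - growth)),
            min(img_width - 1, round(x_right + growth)),
            y_top, y_bottom)
```

Carrying the previous frame's region forward and widening it slightly tolerates inter-frame lane drift while keeping the search area far smaller than the full image.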
Step 4; road boundary point classification: defining a set of road boundary points in the interested region as B, and establishing a left part of the interest region in a shape like a Chinese character 'ba' as RclThe right part is RcrLeft outer boundary set BloLeft inner boundary set BliAnd, the outer right boundary set BroRight inner boundary set Bri. Assuming that the current pixel f is located at (x, y) in the image, the following classification is performed on each boundary point according to the texture features of the road boundary: for the road boundary points on the left plane. If f (x, y) belongs to B ∈ f (x, y) belongs to Rcl∩SyIf > 0, f (x, y) is belonged to Blo. ② if f (x, y) belongs to B ∈ f (x, y) belongs to Rcl∩SyIf < 0, f (x, y) is E.g. Bli. For the road boundary point on the right plane. ③ if f (x, y) belongs to B ^ f (x, y) belongs to Rcr∩SyIf > 0, f (x, y) is belonged to Bro. Fourthly, if f (x, y) belongs to B ∈ f (x, y) belongs to Rcr∩SyIf < 0, f (x, y) is E.g. Bri. The classification method is carried out on the road boundary characteristic points in the splayed self-adaptive interested region and can be used forThe noise interference is effectively avoided, and the accuracy requirement is met.
Step 5; and (3) identifying a road marking line: the lane model is selected as a straight line model, and an improved Hough transformation is selected to identify the lane boundary. Quadrangle a in Hough transformation coordinate system diagramlblcldlAnd quadrangle arbrcrdrRespectively, the regions of interest of the left and right formed lanes. Assume a straight line a in the left regionldlThe linear relation in the rectangular coordinate system is that y is klox+bloAt an angle of normal of
Figure BDA0001719118010000061
Straight line blclIs that y is equal to klix+bliAt an angle of normal of
Figure BDA0001719118010000062
And 0 < klo≤kli(ii) a Straight line a in the right regionrdrIs that y is equal to krix+briAt an angle of normal of
Figure BDA0001719118010000063
Straight line ardrIs that y is equal to krox+briAt an angle of normal of
Figure BDA0001719118010000064
And k isri≤kro<0。
In order to improve the calculation efficiency, the following search preconditions are proposed: the left lane is located on the left plane of the image, and the right lane is located on the half plane of the image. Therefore, in the lane detection process, the image is divided into a left part and a right part, and the left boundary and the right boundary of the road are respectively identified; let the left and right search fluctuation angles be AflAnd AfrThe included angles formed by the boundaries of the left lane and the right lane and the x axis are respectively alpha and beta, and the calculation ranges of the alpha and the beta are controlled to be within the range
Figure BDA0001719118010000065
③ in the quadrangle alblcldlSearching left lane line in the region of interest and obtaining a quadrangle arbrcrdrAnd searching a right lane line in the region of interest, and reducing the time for detecting the lane line.
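An angle-restricted Hough fit over one region's boundary points might look like the sketch below. The discretisation parameters and the (rho, theta) parameterisation rho = x·cosθ + y·sinθ are assumptions; OpenCV's HoughLines could equally be used, with its output filtered by angle:

```python
import numpy as np

def hough_fit_line(points, theta_range, n_theta=90, rho_res=1.0):
    """Fit one line with a Hough transform whose normal angle theta is
    restricted to theta_range (radians), in the spirit of the improved
    Hough step above. Returns (rho, theta) of the strongest line."""
    thetas = np.linspace(theta_range[0], theta_range[1], n_theta)
    pts = np.asarray(points, dtype=float)
    # rho = x*cos(theta) + y*sin(theta) for every point/angle pair
    rhos = pts[:, 0:1] * np.cos(thetas) + pts[:, 1:2] * np.sin(thetas)
    rho_bins = np.round(rhos / rho_res).astype(int)
    best, best_votes = None, -1
    for t in range(n_theta):
        vals, counts = np.unique(rho_bins[:, t], return_counts=True)
        if counts.max() > best_votes:
            best_votes = counts.max()
            best = (vals[counts.argmax()] * rho_res, thetas[t])
    return best
```

Restricting theta to a narrow band around each quadrilateral's sideline angles shrinks the accumulator from a full half-turn to a few degrees, which is where the speed-up of the improved transform comes from.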
Step 6; and (3) algorithm optimization: in order to meet the road detection practicability and safety of the unmanned vehicle, the road detection algorithm is provided with the following optimization measures: when the k of the built splay-shaped region of interest is establishedlo、kli、kriAnd kroThe difference of the corresponding value of the last road detection is larger than a threshold value cfThen the region of interest is deemed to be established with errors. And setting the error k as a corresponding value of the last road detection, enlarging the interesting area by utilizing a linear relation, and continuing to perform subsequent operations. When the area of the established interested area is smaller than the threshold value sfAnd expanding the interesting area by utilizing the linear relation and continuing to perform subsequent operation. And thirdly, when the road boundary end points searched in the direction are wrong, the road boundary end points are considered to be caused by too few road boundary points after the Sobel operator edge enhancement due to low brightness. Change 1.2.1 to Canny edge enhancement and operate again. And fourthly, when errors of the road sign line are identified by using Hough change, the Hough change fitting requirement cannot be met due to the fact that the number of road boundary points is too small. The processing method is the same as the third step.

Claims (4)

1. A night lane marking video detection method suitable for unmanned driving is characterized by comprising the following steps:
1) acquiring a night road image, and preprocessing the night road image, wherein the preprocessing comprises the steps of suppressing image noise by adopting median filtering and enhancing road edges by adopting a Sobel operator to eliminate useless information in the image;
2) generating a splayed adaptive region of interest from the preprocessed night road image, specifically comprising the following steps:
21) establishing the initialized splayed region of interest:
a rectangular initialization area B_i is set in the first edge-enhanced image; B_i is enlarged by setting a fluctuation coefficient f_l; a multi-directional search is carried out in the enlarged initialization region B_i to obtain the end points of the road boundary points, forming the initialized splayed region of interest in the first image;
22) building the splayed adaptive region of interest:
according to the region of interest R_a of the previous edge-enhanced night road image, R_a is enlarged by a lateral expansion coefficient to obtain an enlarged region R_b, and a multi-directional search is performed in R_b to obtain the end points of the road boundary points in the current image, the multi-directional search for the end points of the road boundary points specifically comprising:
for any pair of laterally adjacent edge points a and b within the initialization area, if d_min ≤ y_b − y_a ≤ d_max, then the two points are end points of road boundary points and are added to the road boundary point set B, where (x_a, y_a) and (x_b, y_b) are the pixel coordinates of edge points a and b, and d_min and d_max are the minimum and maximum adjacent distances;
3) classifying road boundary characteristic points in the splayed self-adaptive interest region, and respectively obtaining a left lane interest quadrilateral region and a right lane interest quadrilateral region according to a classification result;
4) and adopting improved Hough transformation to fit and recognize the lane marking line.
2. The method for detecting the nighttime lane marking line video suitable for unmanned driving according to claim 1, wherein in step 3), the classification of the road boundary feature points is specifically:
for road boundary points in the left half-plane:
if f(x, y) ∈ B and f(x, y) ∈ R_cl and S_y > 0, then f(x, y) ∈ B_lo;
if f(x, y) ∈ B and f(x, y) ∈ R_cl and S_y < 0, then f(x, y) ∈ B_li;
for road boundary points in the right half-plane:
if f(x, y) ∈ B and f(x, y) ∈ R_cr and S_y > 0, then f(x, y) ∈ B_ro;
if f(x, y) ∈ B and f(x, y) ∈ R_cr and S_y < 0, then f(x, y) ∈ B_ri;
where f(x, y) is a pixel position in the splayed adaptive region of interest, B is the road boundary point set, B_lo is the left outer boundary set, B_ro the right outer boundary set, B_li the left inner boundary set, B_ri the right inner boundary set, R_cl the left part and R_cr the right part of the splayed adaptive region of interest, and S_y the pixel response computed with a 3×3 vertical Sobel operator template.
3. The method as claimed in claim 1, wherein in step 4), the image is divided into left and right halves and the lane markings are identified separately, obtaining the left and right boundaries of the road.
4. The method as claimed in claim 1, wherein in step 4), a left fluctuation angle A_fl and a right fluctuation angle A_fr are set; the angle α formed by the boundary of the left lane and the x-axis is restricted to a range determined by the normal angles of the left sideline a_l d_l and the right sideline b_l c_l of the left-lane region of interest a_l b_l c_l d_l, widened by A_fl; and the angle β formed by the boundary of the right lane and the x-axis is restricted to a range determined by the normal angles of the left sideline a_r d_r and the right sideline b_r c_r of the right-lane region of interest a_r b_r c_r d_r, widened by A_fr.
CN201810723815.0A 2018-07-04 2018-07-04 Night lane marking line video detection method suitable for unmanned driving Active CN109086671B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810723815.0A CN109086671B (en) 2018-07-04 2018-07-04 Night lane marking line video detection method suitable for unmanned driving

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810723815.0A CN109086671B (en) 2018-07-04 2018-07-04 Night lane marking line video detection method suitable for unmanned driving

Publications (2)

Publication Number Publication Date
CN109086671A CN109086671A (en) 2018-12-25
CN109086671B true CN109086671B (en) 2021-05-11

Family

ID=64837270

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810723815.0A Active CN109086671B (en) 2018-07-04 2018-07-04 Night lane marking line video detection method suitable for unmanned driving

Country Status (1)

Country Link
CN (1) CN109086671B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163166B (en) * 2019-05-27 2021-06-25 北京工业大学 Robust detection method for LED lighting lamp of highway tunnel
CN111931560B * 2020-06-23 2022-11-01 Southeast University Linear acceleration lane marking line detection method suitable for driverless formula racing cars
CN118276093A (en) * 2024-06-03 2024-07-02 南通大学 Traffic road detection method based on millimeter wave radar image

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101382997A (en) * 2008-06-13 2009-03-11 青岛海信电子产业控股股份有限公司 Vehicle detecting and tracking method and device at night
CN103297754A (en) * 2013-05-02 2013-09-11 上海交通大学 Monitoring video self-adaption interesting area coding system
EP2813073A1 (en) * 2012-02-10 2014-12-17 Google, Inc. Adaptive region of interest
CN107895151A (en) * 2017-11-23 2018-04-10 长安大学 Method for detecting lane lines based on machine vision under a kind of high light conditions

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101382997A (en) * 2008-06-13 2009-03-11 青岛海信电子产业控股股份有限公司 Vehicle detecting and tracking method and device at night
EP2813073A1 (en) * 2012-02-10 2014-12-17 Google, Inc. Adaptive region of interest
CN103297754A (en) * 2013-05-02 2013-09-11 上海交通大学 Monitoring video self-adaption interesting area coding system
CN107895151A (en) * 2017-11-23 2018-04-10 长安大学 Method for detecting lane lines based on machine vision under a kind of high light conditions

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on a Nighttime Road Detection Algorithm Based on Boundary Point Distribution Features (基于边界点分布特征的夜间道路检测算法研究); You Feng et al.; Journal of Transport Information and Safety (交通信息与安全); 2011-08-20; Vol. 29, No. 4; pp. 112-115 *
Adaptive Region-of-Interest Lane Detection Algorithm (自适应感兴趣区域车道检测算法); Gao Jianming; China Master's Theses Full-text Database, Information Science and Technology; 2017-02-15; No. 2; sections 3.3-3.4 *

Also Published As

Publication number Publication date
CN109086671A (en) 2018-12-25

Similar Documents

Publication Publication Date Title
CN105893949B (en) A kind of method for detecting lane lines under complex road condition scene
CN107330376B (en) Lane line identification method and system
CN111563412B (en) Rapid lane line detection method based on parameter space voting and Bessel fitting
CN110210451B (en) Zebra crossing detection method
KR100975749B1 (en) Method for recognizing lane and lane departure with Single Lane Extraction
CN104318206B (en) A kind of obstacle detection method and device
WO2020000253A1 (en) Traffic sign recognizing method in rain and snow
CN109086671B (en) Night lane marking line video detection method suitable for unmanned driving
CN101334836A (en) License plate positioning method incorporating color, size and texture characteristic
CN108052904B (en) Method and device for acquiring lane line
TWI401473B (en) Night time pedestrian detection system and method
CN110263635B (en) Marker detection and identification method based on structural forest and PCANet
CN104700072A (en) Lane line historical frame recognition method
CN106887004A (en) A kind of method for detecting lane lines based on Block- matching
CN109190483B (en) Lane line detection method based on vision
CN107563331B (en) Road sign line detection method and system based on geometric relationship
CN107832674B (en) Lane line detection method
CN109886175B (en) Method for detecting lane line by combining straight line and circular arc
CN110427979B (en) Road water pit identification method based on K-Means clustering algorithm
CN112666573B (en) Detection method for retaining wall and barrier behind mine unloading area vehicle
CN103279755A (en) Vehicle bottom shadow characteristics-based rapid license plate positioning method
CN103996031A (en) Self adaptive threshold segmentation lane line detection system and method
CN114693716A (en) Driving environment comprehensive identification information extraction method oriented to complex traffic conditions
CN107918775B (en) Zebra crossing detection method and system for assisting safe driving of vehicle
Schreiber et al. Detecting symbols on road surface for mapping and localization using OCR

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant