CN112270690B - Self-adaptive night lane line detection method based on improved CLAHE and sliding window search - Google Patents
- Publication number
- CN112270690B CN112270690B CN202011083289.XA CN202011083289A CN112270690B CN 112270690 B CN112270690 B CN 112270690B CN 202011083289 A CN202011083289 A CN 202011083289A CN 112270690 B CN112270690 B CN 112270690B
- Authority
- CN
- China
- Prior art keywords
- image
- lane line
- clahe
- sliding window
- img
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/30—Erosion or dilatation, e.g. thinning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration using histogram techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/155—Segmentation; Edge detection involving morphological operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a self-adaptive night lane line detection method based on improved CLAHE and sliding window search, which comprises the following steps: a polygonal region in the lower half of the image is set as the ROI; an improved CLAHE algorithm for enhancing high-illumination images is proposed, reducing the influence of over-bright regions on lane line detection; a DF fusion method is proposed to separate the lane line from the complex background; lane line pixel points are determined with an optimized sliding window search algorithm, and the lane line boundary equation is fitted with a second-order polynomial; the lane line is projected onto the original image by inverse perspective transformation to complete detection. The invention can detect and track lane lines under different conditions such as night illumination, weak illumination, strong illumination and normal illumination, and also solves the problem that curved lane lines are difficult to detect.
Description
Technical Field
The invention relates to the technical field of image processing and road safety, in particular to a self-adaptive night lane line detection method based on improved CLAHE and sliding window search.
Background
Lane line detection methods fall into several categories. Nima Zarbakht et al. propose converting an image from the RGB color space to the YCbCr and HSV color spaces and detecting lane lines with a gradient detection operator. Jamel Baili et al. propose a feature-based lane detection method that simplifies edge detection with a horizontal difference filter and groups the detected edge points into straight lines with an improved Hough transform. Chiyder et al. use the Sobel x-direction edge detection operator to extract lane line edge information, then apply an improved Hough transform within the region of interest to detect candidate lane lines. These methods are easily affected by illumination, road-surface shadows and worn lane markings, cannot detect curved lane lines, struggle with complex urban roads, and are sensitive to other edge noise.
Disclosure of Invention
The purpose of the invention is as follows: the invention provides a self-adaptive night lane line detection method based on improved CLAHE and sliding window search, which solves the problem of poor algorithm adaptability at night or under the condition of non-uniform light and solves the problem of difficult detection of a bent lane line.
The technical scheme is as follows: the invention discloses a self-adaptive night lane line detection method based on improved CLAHE and sliding window search, which comprises the following steps of:
(1) obtaining a video image, scaling the image with a down-sampling algorithm and fixing its size to obtain a processed image IMG;
(2) determining 6 vertexes and performing region-of-interest processing on IMG, setting the pixel values of the image inside the region to 0 and keeping the pixel values outside the region unchanged to obtain a processed image IMG1;
(3) optimizing the CLAHE algorithm shearing point formula, preprocessing the image IMG1, setting a pixel threshold of [145,255], and thresholding the preprocessed image to obtain a binary image IMG2;
(4) filtering IMG1 with a density Prewitt operator, setting a pixel threshold of [1,50], and thresholding the filtered image to obtain a binary image IMG3;
(5) proposing a DF dual feature fusion algorithm, fusing the features of IMG2 and IMG3, and filtering the fused image with morphological operations to obtain an image IMG4;
(6) determining the lane line pixel points in IMG4 with an optimized sliding window search algorithm, and fitting a polynomial to the pixel points to determine a boundary equation of the lane line;
(7) projecting the lane line onto IMG with the inverse perspective transformation according to the determined boundary equation, completing lane line tracking and lane line region visualization.
Further, the image size in step (1) is in the range of [600,380] pixels in length and [460,200] pixels in width.
Further, the step (3) comprises the following steps:
(31) according to the dynamic range of each block in the image, the CLAHE shearing point formula is optimized and a shearing point cp is set adaptively to process the image; the shearing point formula is as follows:
where cp is the shearing point of the CLAHE algorithm, N is the number of pixel points in each block after partitioning, R is the dynamic range of the pixels within a block, α is a clipping factor, σ is the variance of the pixels within a block, Avg is the average pixel value within a block, and ε is a small constant; the optimization lies in the adaptive shearing point, with the variance and average gray level characterizing the uniformity of the pixels within a block;
(32) the clipped image is subjected to Gaussian filtering to further reduce image noise, and the image is then processed with a threshold of [145,255] into the binary image IMG2.
Further, the step (4) comprises the steps of:
(41) graying IMG1 and filtering the image with the horizontal and vertical Prewitt operators, which are:

g_x = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]],  g_y = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]

where g_x is the transverse (horizontal) Prewitt operator and g_y is the longitudinal (vertical) Prewitt operator;
(42) calculating S_xy, with the calculation formula as follows:
where S_x and S_y are the results of filtering the grayed IMG1 image with g_x and g_y respectively, and max() denotes the maximum value;
(43) and setting the threshold value to be [1,50] and carrying out thresholding processing on the image to obtain a binary image IMG 3.
Further, the step (5) includes the steps of:
(51) proposing a DF dual feature fusion algorithm, fusing the binary images IMG2 and IMG3, and separating the lane line from the complex background:
g(x,y)=f(x,y)∨h(x,y)
wherein f (x, y) and h (x, y) represent IMG2 and IMG3, respectively;
(52) dilating and eroding the fused image with morphological operations, i.e., performing a closing operation on the image, filling holes in the lane line and eliminating the small particle noise it contains, to obtain the processed image IMG4; the convolution kernel B used is a 5 × 5 matrix whose entries are all 1.
further, the step (6) comprises the steps of:
(61) setting four vertexes, performing perspective transformation on IMG4 to obtain a top view IMG5, computing a pixel histogram of the top view IMG5, and determining the initial positions of the left and right sliding windows from the pixel density distribution;
(62) a sliding window optimization scheme of frame skipping search is provided, namely all lane line pixel points are searched from bottom to top by using a sliding window every 10 frames;
(63) fitting a quadratic polynomial to the horizontal and vertical coordinates (x, y) of all non-zero pixel points counted in the windows:

x = f(y) = by² + cy + d

where x is the abscissa of the lane line, y is the ordinate of the lane line, y ranges over [1, H] with H the height of the image IMG5, and b, c and d are the polynomial coefficients.
Beneficial effects: compared with the prior art, the invention has the following advantages: 1. the optimized CLAHE algorithm effectively overcomes the influence of over-strong local illumination at night on lane line detection; 2. the proposed DF algorithm adapts to the night lane environment and separates the lane line from the background, yielding a high-quality binary lane line image; 3. an improved sliding window search algorithm is proposed, improving algorithm efficiency; 4. the proposed algorithm takes 0.0153 seconds to determine the lane line boundary in a single-frame image, giving good real-time performance.
Drawings
FIG. 1 is a general process flow diagram of the present invention;
FIG. 2 is a ROI area set for detecting a lane line;
FIG. 3 is a binary image of a lane line after being processed by the improved CLAHE algorithm;
FIG. 4 is a binary image of the lane line after processing by the Prewitt algorithm;
FIG. 5 is a binary image of a lane line filtered by the DF algorithm;
FIG. 6 is a fitted lane line boundary;
fig. 7 is a detected lane line region.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
This embodiment involves a large number of parameters; each parameter is described in Table 1.
Table 1 description of variables
As shown in fig. 1, the method for detecting a lane line at night based on improved CLAHE and sliding window search provided by the present invention specifically includes the following steps:
step 1: and acquiring a video image, and in order to ensure the real-time performance and the efficiency of the algorithm, zooming the image by using a down-sampling algorithm, fixing the size of the image, wherein the image length range is [600,380] pixels, and the image width range is [460,200] pixels, and acquiring a processed image IMG.
The resolution of the video image processed by the invention is 1280 × 720; to improve the efficiency and applicability of the algorithm, the image resolution is reduced by down-sampling. The algorithm is mainly realized by Gaussian convolution with a one-dimensional five-tap convolution kernel w[5]:

w = [1/4 − a/2, 1/4, a, 1/4, 1/4 − a/2]

where a takes the value 0.6.
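As a minimal sketch of this down-sampling step (assuming the standard five-tap generating-kernel form written above; `pyr_kernel`, `smooth_1d` and `downsample` are illustrative names, not the patent's):

```python
import numpy as np

def pyr_kernel(a=0.6):
    # 5-tap generating kernel; `a` is the centre weight (0.6 in this document)
    return np.array([0.25 - a / 2, 0.25, a, 0.25, 0.25 - a / 2])

def smooth_1d(arr, w, axis):
    # reflect-pad each row/column, then convolve with the 1-D kernel
    def conv(v):
        return np.convolve(np.pad(v, 2, mode="reflect"), w, mode="valid")
    return np.apply_along_axis(conv, axis, arr)

def downsample(img, a=0.6):
    # separable Gaussian-style blur, then drop every other row and column
    w = pyr_kernel(a)
    blurred = smooth_1d(img.astype(float), w, axis=1)
    blurred = smooth_1d(blurred, w, axis=0)
    return blurred[::2, ::2]
```

Applying `downsample` twice to a 1280 × 720 frame would bring it into the size range quoted in step 1.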
Step 2: determine 6 vertexes and perform region-of-interest processing on IMG: the pixel values of the image inside the region are set to 0 and the pixel values outside the region are kept unchanged, giving the processed image IMG1.
In the actual processing process, in order to effectively extract the ROI region including the lane line, the 6 coordinate points set in the experiment are: p1(0,0), p2(imgw,0), p3(imgw,380), p4(425,165), p5(290,165), p6(0,380), wherein imgw is the length of the image IMG. The pixel value outside the ROI area is set to 0, which can effectively improve the operation efficiency of the lane line algorithm, and fig. 2 is the ROI area set for detecting the lane line.
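The masking in step 2 can be sketched with a pure-NumPy even-odd polygon rasteriser (the function names are illustrative; the six experiment vertices p1..p6 would be passed as `pts`):

```python
import numpy as np

def polygon_mask(h, w, pts):
    # even-odd rule: a pixel is inside if a leftward ray crosses an odd
    # number of polygon edges
    ys, xs = np.mgrid[0:h, 0:w]
    inside = np.zeros((h, w), dtype=bool)
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        if y1 == y2:
            continue  # horizontal edges never toggle the parity
        straddles = (ys >= min(y1, y2)) & (ys < max(y1, y2))
        xint = x1 + (ys - y1) * (x2 - x1) / (y2 - y1)
        inside ^= straddles & (xs < xint)
    return inside

def blank_polygon(img, pts):
    # step 2: zero the pixels inside the polygon, keep the rest unchanged
    out = img.copy()
    out[polygon_mask(out.shape[0], out.shape[1], pts)] = 0
    return out
```

In an OpenCV pipeline the same effect is usually obtained with `cv2.fillPoly` on a mask followed by `cv2.bitwise_and`.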
Step 3: an improved CLAHE (Contrast Limited Adaptive Histogram Equalization) algorithm is proposed to preprocess the image IMG1; a pixel threshold of [145,255] is then set and the preprocessed image is thresholded to obtain the binary image IMG2.
In the night environment, vehicle lamps shine on the lane line, so within the overall dark scene the illumination intensity of parts of the lane line on the road surface is too high; the improved CLAHE algorithm is therefore proposed to preprocess the image IMG1. Brightness is strongly influenced by the shearing point: experiments show that high shearing points are needed where illumination is uneven and low shearing points where it is even, so setting the shearing point adaptively is very important for the CLAHE algorithm.
The invention provides a CLAHE algorithm enhanced for high-illumination images to process night lane line images; the algorithm sets the shearing point adaptively according to the illumination distribution of the image, with the following formula:
where cp is the shearing point of the CLAHE algorithm, N is the number of pixel points in each block after partitioning, R is the dynamic range of the pixels within a block, α is a clipping factor, σ is the variance of the pixels within a block, Avg is the average pixel value within a block, and ε is a small constant.
Gaussian filtering is applied to the clipped image to further reduce image noise; a pixel threshold of [145,255] is then set and the preprocessed image is thresholded. Fig. 3 shows the binary image IMG2 obtained after processing by the improved CLAHE algorithm.
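The clip-point formula itself is not reproduced in this text; one plausible reading of the variables listed above is cp = (N/R) · (1 + α·σ/(Avg + ε)), sketched here together with the histogram-clipping step of standard CLAHE (the formula is an assumption, not the patent's verbatim definition):

```python
import numpy as np

def adaptive_clip_point(tile, alpha=0.5, eps=1e-6):
    # assumed form: cp = (N / R) * (1 + alpha * sigma / (Avg + eps))
    # N: pixels per tile, R: dynamic range, sigma: variance, Avg: mean
    n = tile.size
    r = int(tile.max()) - int(tile.min()) + 1
    return (n / r) * (1 + alpha * tile.var() / (tile.mean() + eps))

def clip_histogram(hist, cp):
    # clip each bin at cp and redistribute the excess uniformly,
    # as in standard CLAHE
    excess = np.maximum(hist - cp, 0).sum()
    return np.minimum(hist, cp) + excess / len(hist)
```

A uniform tile (σ = 0) yields the baseline clip point N/R, and the clip point rises with the tile's non-uniformity, matching the behaviour described above for unevenly lit regions.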
Step 4: filter IMG1 with the density Prewitt operator, then set a pixel threshold of [1,50] and threshold the filtered image to obtain the binary image IMG3.
The IMG1 image is first grayed and then filtered with the horizontal and vertical Prewitt operators g_x and g_y respectively, which are:

g_x = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]],  g_y = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]

where g_x is the horizontal Prewitt operator and g_y is the vertical Prewitt operator.
The results processed by the g_x and g_y operators are then substituted into the following formula to obtain the Prewitt-filtered image:
where S_x and S_y are the results of filtering the grayed IMG1 image with g_x and g_y respectively, and max() denotes the maximum value.
The result S_xy is then thresholded with the selected range [1,50], processing the image into a binary image; the result is shown in Fig. 4.
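Assuming S_xy takes the per-pixel maximum of the two absolute Prewitt responses (the combining formula is elided above, so this is a reading, not the patent's verbatim definition), the step could be sketched as:

```python
import numpy as np

G_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])   # horizontal Prewitt
G_Y = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]])   # vertical Prewitt

def filter2(img, k):
    # plain 3x3 correlation; the one-pixel border is left at zero
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = np.sum(img[i - 1:i + 2, j - 1:j + 2] * k)
    return out

def prewitt_binary(gray, lo=1, hi=50):
    # per-pixel maximum of the absolute responses, then threshold to [lo, hi]
    g = gray.astype(float)
    sxy = np.maximum(np.abs(filter2(g, G_X)), np.abs(filter2(g, G_Y)))
    return ((sxy >= lo) & (sxy <= hi)).astype(np.uint8) * 255
```

The upper bound of 50 discards very strong edges (e.g. headlight glare boundaries) while keeping moderate lane line gradients.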
Step 5: a DF (Dual-Feature Fusion) algorithm is proposed to fuse the features of IMG2 and IMG3; the fused image is then filtered with morphological operations to obtain the image IMG4.
A DF dual feature fusion algorithm is proposed to fuse the binary images IMG2 and IMG3 and separate the lane line from the complex background:
g(x,y)=f(x,y)∨h(x,y)
Wherein f (x, y) and h (x, y) represent IMG2 and IMG3, respectively.
The fused image is dilated and eroded with morphological operations, i.e., a closing operation is applied to fill holes in the lane line and eliminate the small particle noise it contains: a logical OR operation is performed on the binary images IMG2 and IMG3 to fuse the features of the two images, and a morphological closing is then applied to the result with a convolution kernel B of size 5 × 5 whose entries are all 1. Filtering the fused image in this way yields the processed image IMG4; the effect is shown in Fig. 5.
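The fusion-plus-closing step can be sketched as follows (plain NumPy stand-ins for the morphological operators; an OpenCV version would use `cv2.bitwise_or` and `cv2.morphologyEx` with `cv2.MORPH_CLOSE`):

```python
import numpy as np

def _morph(img, k, reducer):
    # slide a k x k window and apply max (dilation) or min (erosion)
    p = k // 2
    padded = np.pad(img, p, mode="constant")
    h, w = img.shape
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = reducer(padded[i:i + k, j:j + k])
    return out

def df_fuse(img2, img3, k=5):
    # DF fusion: logical OR of the two binary maps (g = f OR h),
    # then a k x k closing (dilate, then erode) to fill small holes
    fused = np.logical_or(img2 > 0, img3 > 0).astype(np.uint8) * 255
    return _morph(_morph(fused, k, np.max), k, np.min)
```

The 5 × 5 all-ones kernel B corresponds to `k=5` here.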
step 6: and determining the lane line pixel points in the IMG4 by adopting an optimized sliding window searching algorithm, and fitting the pixel points by utilizing a polynomial to determine a boundary equation of the lane line.
Four vertexes are set, and IMG4 is perspective-transformed into a top view IMG5. Histogram statistics are computed on the top view: the abscissa range of the histogram equals the length of the IMG5 image and the ordinate is the pixel density. The initial positions sl and sr of the left and right lane line feature extraction windows are determined from the peaks of the statistical histogram. The feature extraction window is 100 px long and 50 px wide; starting from sl and sr, the abscissa x and ordinate y of the non-zero pixel points inside the window are counted on the IMG5 image from the bottom upward. A quadratic polynomial is then fitted to these coordinate points to obtain the boundary equation of the lane line.
A sliding window optimization scheme of frame skipping search is provided, namely all lane line pixel points are searched from bottom to top by using a sliding window every 10 frames, and the execution efficiency of an algorithm can be effectively improved.
A quadratic polynomial is fitted to the horizontal and vertical coordinates (x, y) of all non-zero pixel points counted in the windows:

x = f(y) = by² + cy + d

where x is the abscissa of the lane line, y is the ordinate of the lane line, y ranges over [1, H] with H the height of the image IMG5, and b, c and d are the polynomial coefficients.
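A compact sketch of the histogram seeding and the quadratic boundary fit (function names are illustrative; the full 100 × 50 px window walk and the 10-frame skip are omitted):

```python
import numpy as np

def window_start_columns(binary):
    # column histogram of the bottom half of the top view; the peak on
    # each side of the image centre seeds the left/right sliding windows
    h, w = binary.shape
    hist = (binary[h // 2:] > 0).sum(axis=0)
    mid = w // 2
    return int(np.argmax(hist[:mid])), int(mid + np.argmax(hist[mid:]))

def fit_boundary(xs, ys):
    # quadratic boundary equation x = b*y**2 + c*y + d
    b, c, d = np.polyfit(ys, xs, 2)
    return b, c, d
```

Fitting x as a function of y (rather than the reverse) keeps the fit well defined for near-vertical lane lines in the top view.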
Step 7: according to the determined boundary equation, project the lane line onto IMG with the inverse perspective transformation, completing lane line tracking and lane line region visualization.
According to the left and right boundary equations determined in step 6, the pixels between the left and right lane lines are color-labeled to obtain the fitted lane line boundary, as shown in Fig. 6. The inverse perspective transformation matrix from the top view back to IMG4 is computed, and the color-labeled lane line region is then projected onto IMG according to this matrix, completing the tracking and visualization of the lane line region, as shown in Fig. 7.
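The perspective mapping underlying steps 6 and 7 can be sketched with a direct linear transform over the four vertices (an OpenCV pipeline would instead use `cv2.getPerspectiveTransform` and `cv2.warpPerspective`; this pure-NumPy version is only a sketch):

```python
import numpy as np

def homography(src, dst):
    # direct linear transform: solve A h = 0 from four (x, y) -> (u, v) pairs
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def project(h, pts):
    # apply the homography to an (n, 2) array of points
    p = np.hstack([np.asarray(pts, dtype=float), np.ones((len(pts), 1))])
    mapped = p @ h.T
    return mapped[:, :2] / mapped[:, 2:3]
```

The inverse mapping of step 7 is simply the homography computed with `src` and `dst` swapped (or the matrix inverse of the forward transform).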
The method can detect and track lane lines under different conditions such as night illumination, weak illumination, strong illumination and normal illumination. Experimental results show that a single-frame lane line image takes 0.0143 seconds on average (70 FPS), giving good real-time performance, and the method is robust to interference from headlight illumination, road shadows and traffic signs.
Claims (4)
1. A self-adaptive night lane line detection method based on improved CLAHE and sliding window search is characterized by comprising the following steps:
(1) obtaining a video image, scaling the image with a down-sampling algorithm and fixing its size to obtain a processed image IMG;
(2) determining 6 vertexes and performing region-of-interest processing on IMG, setting the pixel values of the image inside the region to 0 and keeping the pixel values outside the region unchanged to obtain a processed image IMG1;
(3) optimizing the CLAHE algorithm shearing point formula, preprocessing the image IMG1, setting a pixel threshold of [145,255], and thresholding the preprocessed image to obtain a binary image IMG2;
(4) filtering IMG1 with a density Prewitt operator, setting a pixel threshold of [1,50], and thresholding the filtered image to obtain a binary image IMG3;
(5) proposing a DF dual feature fusion algorithm, fusing the features of IMG2 and IMG3, and filtering the fused image with morphological operations to obtain an image IMG4; the DF dual feature fusion algorithm performs a logical OR operation on the binary images IMG2 and IMG3, fusing the features of the two images:
g(x,y)=f(x,y)∨h(x,y)
wherein f (x, y) and h (x, y) represent IMG2 and IMG3, respectively;
(6) determining the lane line pixel points in IMG4 with an optimized sliding window search algorithm, and fitting a polynomial to the pixel points to determine a boundary equation of the lane line;
(7) projecting the lane line onto IMG with the inverse perspective transformation according to the determined boundary equation, completing lane line tracking and lane line region visualization;
the step (3) comprises the following steps:
(31) according to the dynamic range of each block in the image, the CLAHE shearing point formula is optimized and a shearing point cp is set adaptively to process the image; the shearing point formula is as follows:
where cp is the shearing point of the CLAHE algorithm, N is the number of pixel points in each block after partitioning, R is the dynamic range of the pixels within a block, α is a clipping factor, σ is the variance of the pixels within a block, Avg is the average pixel value within a block, and ε is a small constant; the optimization lies in the adaptive shearing point, with the variance and average gray level characterizing the uniformity of the pixels within a block;
(32) performing Gaussian filtering on the clipped image to further reduce image noise, then setting the threshold to [145,255] and processing the image into the binary image IMG2;
the step (4) comprises the following steps:
(41) graying IMG1 and filtering the image with the horizontal and vertical Prewitt operators, which are:

g_x = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]],  g_y = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]

where g_x is the transverse (horizontal) Prewitt operator and g_y is the longitudinal (vertical) Prewitt operator;
(42) calculating S_xy, with the calculation formula as follows:
where S_x and S_y are the results of filtering the grayed IMG1 image with g_x and g_y respectively, and max() denotes the maximum value;
(43) setting the threshold to [1,50] and thresholding the image to obtain the binary image IMG3.
2. The adaptive night lane line detection method based on improved CLAHE and sliding window search of claim 1, wherein the image size of step (1) is in the length range of [600,380] pixels and the width range of [460,200] pixels.
3. The adaptive night lane detection method based on improved CLAHE and sliding window search as claimed in claim 1, wherein said step (5) comprises the steps of:
(51) proposing a DF dual feature fusion algorithm, fusing the binary images IMG2 and IMG3, and separating the lane line from the complex background:
g(x,y)=f(x,y)∨h(x,y)
wherein f (x, y) and h (x, y) represent IMG2 and IMG3, respectively;
(52) dilating and eroding the fused image with morphological operations, i.e., performing a closing operation on the image, filling holes in the lane line and eliminating the small particle noise it contains, to obtain the processed image IMG4; the convolution kernel B used is a 5 × 5 matrix whose entries are all 1.
4. the adaptive night lane detection method based on improved CLAHE and sliding window search as claimed in claim 1, wherein said step (6) comprises the steps of:
(61) setting four vertexes, performing perspective transformation on IMG4 to obtain a top view IMG5, computing a pixel histogram of the top view IMG5, and determining the initial positions of the left and right sliding windows from the pixel density distribution;
(62) a sliding window optimization scheme of frame skipping search is provided, namely all lane line pixel points are searched from bottom to top by using a sliding window every 10 frames;
(63) fitting a quadratic polynomial to the horizontal and vertical coordinates (x, y) of all non-zero pixel points counted in the windows:

x = f(y) = by² + cy + d

where x is the abscissa of the lane line, y is the ordinate of the lane line, y ranges over [1, H] with H the height of the image IMG5, and b, c and d are the polynomial coefficients.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011083289.XA CN112270690B (en) | 2020-10-12 | 2020-10-12 | Self-adaptive night lane line detection method based on improved CLAHE and sliding window search |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011083289.XA CN112270690B (en) | 2020-10-12 | 2020-10-12 | Self-adaptive night lane line detection method based on improved CLAHE and sliding window search |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112270690A CN112270690A (en) | 2021-01-26 |
CN112270690B true CN112270690B (en) | 2022-04-26 |
Family
ID=74338876
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011083289.XA Active CN112270690B (en) | 2020-10-12 | 2020-10-12 | Self-adaptive night lane line detection method based on improved CLAHE and sliding window search |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112270690B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116681721B (en) * | 2023-06-07 | 2023-12-29 | 东南大学 | Linear track detection and tracking method based on vision |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102288121A (en) * | 2011-05-12 | 2011-12-21 | 电子科技大学 | Method for measuring and pre-warning lane departure distance based on monocular vision |
CN103605953A (en) * | 2013-10-31 | 2014-02-26 | 电子科技大学 | Vehicle interest target detection method based on sliding window search |
CN109085823A (en) * | 2018-07-05 | 2018-12-25 | 浙江大学 | The inexpensive automatic tracking running method of view-based access control model under a kind of garden scene |
CN109359602A (en) * | 2018-10-22 | 2019-02-19 | 长沙智能驾驶研究院有限公司 | Method for detecting lane lines and device |
CN110414385A (en) * | 2019-07-12 | 2019-11-05 | 淮阴工学院 | A kind of method for detecting lane lines and system based on homography conversion and characteristic window |
CN110569704A (en) * | 2019-05-11 | 2019-12-13 | 北京工业大学 | Multi-strategy self-adaptive lane line detection method based on stereoscopic vision |
CN111126306A (en) * | 2019-12-26 | 2020-05-08 | 江苏罗思韦尔电气有限公司 | Lane line detection method based on edge features and sliding window |
CN111242037A (en) * | 2020-01-15 | 2020-06-05 | 华南理工大学 | Lane line detection method based on structural information |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011118890A (en) * | 2009-11-04 | 2011-06-16 | Valeo Schalter & Sensoren Gmbh | Method and system for detecting whole lane boundary |
JP5906272B2 (en) * | 2014-03-28 | 2016-04-20 | 富士重工業株式会社 | Stereo image processing apparatus for vehicle |
US10872246B2 (en) * | 2017-09-07 | 2020-12-22 | Regents Of The University Of Minnesota | Vehicle lane detection system |
CN110147698A (en) * | 2018-02-13 | 2019-08-20 | Kpit技术有限责任公司 | System and method for lane detection |
CN110647850A (en) * | 2019-09-27 | 2020-01-03 | 福建农林大学 | Automatic lane deviation measuring method based on inverse perspective principle |
Non-Patent Citations (1)
Title |
---|
Research and Implementation of Image Enhancement Algorithm for Low-Light-Level Night Vision Device; He Cong; China Master's Theses Full-text Database, Information Science and Technology; 2020-02-15 (No. 2); pp. 26-27 * |
Also Published As
Publication number | Publication date |
---|---|
CN112270690A (en) | 2021-01-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111145161B (en) | Pavement crack digital image processing and identifying method | |
CN111310558B (en) | Intelligent pavement disease extraction method based on deep learning and image processing method | |
CN107862290B (en) | Lane line detection method and system | |
CN110414385B (en) | Lane line detection method and system based on homography transformation and characteristic window | |
CN108038416B (en) | Lane line detection method and system | |
US9183617B2 (en) | Methods, devices, and computer readable mediums for processing a digital picture | |
CN107784669A | Light spot extraction and centroid determination method | |
US8532339B2 (en) | System and method for motion detection and the use thereof in video coding | |
CN107895151A | Lane line detection method based on machine vision under strong light conditions | |
CN109064411B (en) | Illumination compensation-based road surface image shadow removing method | |
CN110197153A | Automatic wall identification method in floor plans | |
CN112070717B (en) | Power transmission line icing thickness detection method based on image processing | |
CN112200742A (en) | Filtering and denoising method applied to edge detection | |
CN111598814B (en) | Single image defogging method based on extreme scattering channel | |
CN112270690B (en) | Self-adaptive night lane line detection method based on improved CLAHE and sliding window search | |
CN110047041B (en) | Space-frequency domain combined traffic monitoring video rain removing method | |
Pan et al. | Single-image dehazing via dark channel prior and adaptive threshold | |
CN108921147B (en) | Black smoke vehicle identification method based on dynamic texture and transform domain space-time characteristics | |
CN114241436A (en) | Lane line detection method and system for improving color space and search window | |
Wang et al. | Adaptive binarization: A new approach to license plate characters segmentation | |
Wang et al. | Automatic TV logo detection, tracking and removal in broadcast video | |
CN116030430A (en) | Rail identification method, device, equipment and storage medium | |
Wang et al. | A robust vehicle detection approach | |
CN111539967B (en) | Method and system for identifying and processing interference fringe region in terahertz imaging of focal plane | |
Rui | Lane line detection technology based on machine vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
EE01 | Entry into force of recordation of patent licensing contract | ||
Application publication date: 2021-01-26; Assignee: Jiangsu Kesheng Xuanyi Technology Co., Ltd.; Assignor: Huaiyin Institute of Technology; Contract record no.: X2022320000363; Denomination of invention: An Adaptive Night Lane Detection Method Based on Improved CLAHE and Sliding Window Search; Granted publication date: 2022-04-26; License type: Common License; Record date: 2022-12-10 |