CN112270690A - Self-adaptive night lane line detection method based on improved CLAHE and sliding window search - Google Patents


Info

Publication number
CN112270690A
Authority
CN
China
Prior art keywords
image
lane line
clahe
sliding window
adaptive
Prior art date
Legal status: Granted
Application number
CN202011083289.XA
Other languages: Chinese (zh)
Other versions: CN112270690B (en)
Inventor
高尚兵
蔡创新
相林
于永涛
朱全银
张�浩
于坤
汪长春
陈浩霖
李文婷
Current Assignee
Huaiyin Institute of Technology
Original Assignee
Huaiyin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Huaiyin Institute of Technology filed Critical Huaiyin Institute of Technology
Priority to CN202011083289.XA priority Critical patent/CN112270690B/en
Publication of CN112270690A publication Critical patent/CN112270690A/en
Application granted granted Critical
Publication of CN112270690B publication Critical patent/CN112270690B/en
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/20: Image enhancement or restoration using local operators
    • G06T 5/30: Erosion or dilatation, e.g. thinning
    • G06T 5/40: Image enhancement or restoration using histogram techniques
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/70: Denoising; Smoothing
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/136: Segmentation; Edge detection involving thresholding
    • G06T 7/155: Segmentation; Edge detection involving morphological operators
    • G06T 7/194: Segmentation; Edge detection involving foreground-background segmentation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a self-adaptive night lane line detection method based on improved CLAHE and sliding window search, which comprises the following steps: a polygonal region in the lower half of the image is set as the ROI; an improved CLAHE algorithm for enhancing high-illumination images is proposed, reducing the influence of over-bright regions on lane line detection; a DF fusion method is proposed to separate the lane lines from a complex background; lane line pixels are located with an optimized sliding window search algorithm, and the lane boundary equation is fitted with a second-order polynomial; the lane lines are projected back onto the original image by inverse perspective transformation to realize lane line detection. The invention detects and tracks lane lines under varied night illumination conditions, including weak, strong and normal illumination, and also addresses the difficulty of detecting curved lane lines.

Description

Self-adaptive night lane line detection method based on improved CLAHE and sliding window search
Technical Field
The invention relates to the technical field of image processing and road safety, in particular to a self-adaptive night lane line detection method based on improved CLAHE and sliding window search.
Background
Existing lane line detection methods fall into several categories. Nima Zarbakht et al. propose converting the image from the RGB color space to the YCbCr and HSV color spaces and detecting lane lines with a gradient detection operator. Jamel Baili et al. propose a feature-based lane detection method that simplifies edge detection with a horizontal difference filter and groups the detected edge points into straight lines with an improved Hough transform. Chiyder et al. use the Sobel-x edge detection operator to extract lane line edge information and then apply an improved Hough transform within a region of interest to detect candidate lane lines. These methods are easily affected by illumination, road-surface shadows and broken lane markings, cannot detect curved lane lines, struggle with complex urban roads, and are sensitive to other edge noise.
Disclosure of Invention
The purpose of the invention is as follows: the invention provides a self-adaptive night lane line detection method based on improved CLAHE and sliding window search, which addresses the poor adaptability of existing algorithms at night or under non-uniform illumination and the difficulty of detecting curved lane lines.
The technical scheme is as follows: the invention discloses a self-adaptive night lane line detection method based on improved CLAHE and sliding window search, which comprises the following steps of:
(1) obtaining a video image, zooming the image by using a down-sampling algorithm, and fixing the size of the image to obtain a processed image IMG;
(2) determining 6 vertexes, performing region-of-interest processing on the IMG, setting the pixel values of the image inside the region to 0 and keeping the pixel values outside the region unchanged, to obtain a processed image IMG1;
(3) optimizing a CLAHE algorithm shearing point formula, preprocessing an image IMG1, setting a pixel threshold value to be [145,255], and performing thresholding processing on the preprocessed image to obtain a binary image IMG 2;
(4) filtering the IMG1 by using a density Prewitt operator, setting a pixel threshold value to be [1,50], and carrying out thresholding treatment on the filtered image to obtain a binary image IMG 3;
(5) a DF dual feature fusion algorithm is provided, the features of IMG2 and IMG3 are fused, and morphological operation is adopted to filter the fused image to obtain an image IMG 4;
(6) determining lane line pixel points in IMG4 by adopting an optimized sliding window search algorithm, and fitting the pixel points by utilizing a polynomial to determine a boundary equation of a lane line;
(7) and according to the determined boundary equation, projecting the lane line to the IMG by adopting inverse perspective transformation to finish lane line tracking and lane line region visualization.
Further, the image size in step (1) is in the range of [600,380] pixels in length and [460,200] pixels in width.
Further, characterized in that said step (3) comprises the steps of:
(31) according to the dynamic range of each block in the image, the CLAHE clipping-point formula is optimized, and a clipping point cp is set adaptively to process the image; the clipping-point formula is as follows:
cp = (N / R) · (1 + α·σ / (Avg + ε))
the method comprises the following steps that cp is a shearing point of a CLAHE algorithm, N is the number of pixel points in each block after the blocks are partitioned, R is the dynamic range of pixels in the blocks, alpha is a cutting factor, sigma is the variance of the pixels in the blocks, Avg is the average pixel value in the blocks, epsilon is a minimum number, an optimization point is a self-adaptive shearing point, and the variance and the average gray level are used for representing the uniformity degree of the pixels in the blocks;
(32) the clipped image is Gaussian-filtered to further reduce image noise, and the image is then thresholded with the range [145,255] into the binary image IMG2.
Further, the step (4) comprises the steps of:
(41) graying the IMG1, and filtering the image using the Prewitt operators in the horizontal and vertical directions, wherein the Prewitt operators in the vertical and horizontal directions are as follows:
g_x = [ -1 0 1; -1 0 1; -1 0 1 ],  g_y = [ -1 -1 -1; 0 0 0; 1 1 1 ]
where g_x is the transverse Prewitt operator and g_y is the longitudinal Prewitt operator;
(42) calculating S_xy; the calculation formula is as follows:
S_xy = max(S_x, S_y)
where S_x and S_y are the results of filtering the grayed IMG1 image with g_x and g_y, respectively, and max() denotes the element-wise maximum;
(43) and setting the threshold value to be [1,50] and carrying out thresholding processing on the image to obtain a binary image IMG 3.
Further, the step (5) includes the steps of:
(51) and (3) providing a DF dual feature fusion algorithm, fusing the binary images IMG2 and IMG3, and separating the lane line from the complex background:
g(x,y)=f(x,y)∨h(x,y)
wherein f (x, y) and h (x, y) represent IMG2 and IMG3, respectively;
(52) performing dilation and erosion on the fused image with morphological operations, i.e. a closing operation, filling holes in the lane lines and eliminating small particle noise contained in them, to obtain a processed image IMG4; the 5×5 convolution kernel B used is as follows:
B = [ 1 1 1 1 1
      1 1 1 1 1
      1 1 1 1 1
      1 1 1 1 1
      1 1 1 1 1 ]
further, the step (6) comprises the steps of:
(61) setting four vertexes, applying a perspective transformation to IMG4 to obtain a top view IMG5, computing the pixel histogram of IMG5, and determining the initial positions of the left and right sliding windows from the pixel density distribution;
(62) a sliding window optimization scheme of frame skipping search is provided, namely all lane line pixel points are searched from bottom to top by using a sliding window every 10 frames;
(63) fitting a quadratic polynomial to the abscissa x and ordinate y values of all non-zero pixels counted in the windows:
x = b·y² + c·y + d
where x is the abscissa of the lane line, y is the ordinate of the lane line with range (1, H), H being the height of the image IMG5, and b, c and d are polynomial coefficients.
Beneficial effects: compared with the prior art, the invention has the following advantages: 1. the optimized CLAHE algorithm effectively overcomes the influence of over-strong local illumination at night on lane line detection; 2. the proposed DF algorithm adapts to the night lane environment and separates the lane lines from the background, yielding a high-quality binary lane line image; 3. an improved sliding window search algorithm raises algorithm efficiency; 4. the proposed algorithm takes 0.0153 seconds to determine the lane line boundary in a single-frame image, giving good real-time performance.
Drawings
FIG. 1 is a general process flow diagram of the present invention;
FIG. 2 is a ROI area set for detecting a lane line;
FIG. 3 is a binary image of a lane line after being processed by the improved CLAHE algorithm;
FIG. 4 is a binary image of the lane line after processing by the Prewitt algorithm;
FIG. 5 is a binary image of a lane line filtered by the DF algorithm;
FIG. 6 is a fitted lane line boundary;
fig. 7 is a detected lane line region.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
A large number of parameters are involved in the present embodiment, and each parameter will be described below as shown in table 1.
Table 1 description of variables
IMG: down-sampled input image; IMG1: image after region-of-interest processing; IMG2: binary image from improved CLAHE and thresholding; IMG3: binary image from Prewitt filtering and thresholding; IMG4: binary image after DF fusion and morphological closing; IMG5: top view of IMG4
cp: adaptive clipping point of the CLAHE algorithm; N: number of pixels per block; R: dynamic range of pixels in a block; α: clipping factor; σ: variance of pixels in a block; Avg: average pixel value in a block; ε: small positive constant
g_x, g_y: horizontal and vertical Prewitt operators; S_x, S_y: the corresponding filter responses; S_xy: fused response
B: 5×5 morphological convolution kernel; sl, sr: initial positions of the left and right feature-extraction windows; b, c, d: lane-boundary polynomial coefficients; H: height of IMG5; imgw: length of IMG
As shown in fig. 1, the method for detecting a lane line at night based on improved CLAHE and sliding window search provided by the present invention specifically includes the following steps:
step 1: and acquiring a video image, and in order to ensure the real-time performance and the efficiency of the algorithm, zooming the image by using a down-sampling algorithm, fixing the size of the image, wherein the image length range is [600,380] pixels, and the image width range is [460,200] pixels, and acquiring a processed image IMG.
The resolution of the video images processed by the invention is 1280×720; to improve the efficiency and applicability of the algorithm, the resolution is reduced by down-sampling. The down-sampling is realized mainly with Gaussian convolution, using the one-dimensional five-tap convolution kernel w[5]:
w = [ 1/4 - a/2,  1/4,  a,  1/4,  1/4 - a/2 ]
wherein the value of a is 0.6.
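The down-sampling step can be sketched as follows. The original kernel equation was rendered as an image, so the five-tap generating-kernel form parameterized by a is an assumption; `pyr_down` is an illustrative helper, not code from the patent:

```python
import numpy as np

def pyr_down(img: np.ndarray, a: float = 0.6) -> np.ndarray:
    """Smooth with the separable 5-tap kernel w, then keep every second row/column."""
    # Assumed kernel form: w = [1/4 - a/2, 1/4, a, 1/4, 1/4 - a/2] (sums to 1).
    w = np.array([0.25 - a / 2, 0.25, a, 0.25, 0.25 - a / 2])
    pad = np.pad(img.astype(float), 2, mode="reflect")
    # Separable convolution: filter rows first, then columns.
    tmp = np.apply_along_axis(lambda r: np.convolve(r, w, mode="valid"), 1, pad)
    out = np.apply_along_axis(lambda c: np.convolve(c, w, mode="valid"), 0, tmp)
    return out[::2, ::2]  # decimate by 2 in each dimension
```

Because the kernel sums to 1, uniform regions keep their brightness after smoothing, and only the decimation changes the image size.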
Step 2: and determining 6 vertexes, carrying out region-of-interest processing on the IMG, setting the value of a pixel point of the image in the region to be 0, and keeping the value of the pixel point outside the region unchanged to obtain a processed image IMG 1.
In the actual processing process, in order to effectively extract the ROI region including the lane line, the 6 coordinate points set in the experiment are: p1(0,0), p2(imgw,0), p3(imgw,380), p4(425,165), p5(290,165), p6(0,380), wherein imgw is the length of the image IMG. The pixel value outside the ROI area is set to 0, which can effectively improve the operation efficiency of the lane line algorithm, and fig. 2 is the ROI area set for detecting the lane line.
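The ROI masking of step 2 can be sketched in pure NumPy. `polygon_mask` and `mask_roi` are hypothetical helpers using even-odd ray casting, not code from the patent, and the square polygon in the usage below is illustrative rather than the patent's six vertices:

```python
import numpy as np

def polygon_mask(h: int, w: int, pts) -> np.ndarray:
    """Boolean mask of shape (h, w), True for pixels inside the polygon (even-odd rule)."""
    ys, xs = np.mgrid[0:h, 0:w]
    inside = np.zeros((h, w), dtype=bool)
    n = len(pts)
    for i in range(n):
        (x0, y0), (x1, y1) = pts[i], pts[(i + 1) % n]
        crosses = (ys < y0) != (ys < y1)  # the edge spans this pixel row
        with np.errstate(divide="ignore", invalid="ignore"):
            x_cross = x0 + (ys - y0) * (x1 - x0) / (y1 - y0)
            inside ^= crosses & (xs < x_cross)  # toggle on each crossing to the right
    return inside

def mask_roi(img: np.ndarray, pts) -> np.ndarray:
    """Zero the pixels inside the given polygon, leaving the rest unchanged."""
    out = img.copy()
    out[polygon_mask(img.shape[0], img.shape[1], pts)] = 0
    return out
```

Zeroing one polygon and leaving the rest untouched matches the step-2 description of masking out the non-lane region.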
Step 3: an improved CLAHE (Contrast Limited Adaptive Histogram Equalization) algorithm is proposed to preprocess the image IMG1; a pixel threshold of [145,255] is then set and the preprocessed image is thresholded to obtain the binary image IMG2.
In the night environment, vehicle lamps shine on the lane lines, so that in an otherwise dark scene the illumination intensity of parts of the road surface becomes too high; the improved CLAHE algorithm is therefore proposed to preprocess the image IMG1. Brightness is strongly influenced by the clipping point, and experiments show that a high clipping point is needed where illumination is uneven and a low clipping point where it is even, so setting the clipping point adaptively is essential to the CLAHE algorithm.
The invention proposes a CLAHE algorithm enhanced for high-illumination images to process night lane line images; the algorithm sets the clipping point adaptively according to the illumination distribution of the image, with the following formula:
cp = (N / R) · (1 + α·σ / (Avg + ε))
where cp is the clipping point of the CLAHE algorithm, N is the number of pixels in each block after partitioning, R is the dynamic range of the pixels in the block, α is a clipping factor, σ is the variance of the pixels in the block, Avg is the average pixel value in the block, and ε is a small positive constant.
Gaussian filtering is then applied to the clipped image to further reduce image noise, a pixel threshold of [145,255] is set, and the preprocessed image is thresholded; FIG. 3 shows the binary image IMG2 obtained after processing with the improved CLAHE algorithm.
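The adaptive clipping idea can be sketched per block as follows. The clipping-point form cp = (N/R)·(1 + α·σ/(Avg + ε)) is a reconstruction from the variable definitions above (the patent's exact formula was rendered as an image), and the α default is an illustrative assumption:

```python
import numpy as np

def clipped_histogram(block: np.ndarray, alpha: float = 0.01, eps: float = 1e-6):
    """Per-block histogram clipping with an adaptive clip point cp."""
    N = block.size                                    # pixels in the block
    R = int(block.max()) - int(block.min()) + 1       # dynamic range of the block
    sigma = block.var()                               # variance of pixels in the block
    avg = block.mean()                                # average pixel value in the block
    cp = (N / R) * (1 + alpha * sigma / (avg + eps))  # assumed adaptive clip point
    hist, _ = np.histogram(block, bins=256, range=(0, 256))
    excess = np.maximum(hist - cp, 0).sum()
    hist = np.minimum(hist, cp) + excess / 256        # redistribute clipped counts evenly
    return hist, cp
```

A block with high variance relative to its mean (uneven illumination) gets a larger cp, i.e. weaker clipping, which matches the qualitative rule stated above.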
Step 4: the IMG1 is filtered with the density Prewitt operator, a pixel threshold of [1,50] is set, and the filtered image is thresholded to obtain the binary image IMG3.
The IMG1 image is first grayed, and the image is then filtered with the horizontal and vertical Prewitt operators g_x and g_y, which are as follows:
g_x = [ -1 0 1; -1 0 1; -1 0 1 ],  g_y = [ -1 -1 -1; 0 0 0; 1 1 1 ]
where g_x is the horizontal Prewitt operator and g_y is the vertical Prewitt operator.
The results of the g_x and g_y filtering are then substituted into the following formula to obtain the Prewitt-filtered image:
S_xy = max(S_x, S_y)
where S_x and S_y are the results of filtering the grayed IMG1 image with g_x and g_y, respectively, and max() denotes the element-wise maximum.
The S_xy result is then thresholded with the range [1,50] to produce a binary image; the processing result is shown in FIG. 4.
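This filtering and thresholding step can be sketched in pure NumPy. Cross-correlation is used in place of strict convolution, which only flips the sign pattern and is irrelevant after taking absolute values; the element-wise max fusion follows the S_xy reconstruction above:

```python
import numpy as np

G_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])  # horizontal Prewitt operator
G_Y = G_X.T                                           # vertical Prewitt operator

def filter2(img: np.ndarray, k: np.ndarray) -> np.ndarray:
    """3x3 cross-correlation with edge padding."""
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * pad[dy:dy + h, dx:dx + w]
    return out

def prewitt_binary(gray: np.ndarray, lo: float = 1, hi: float = 50) -> np.ndarray:
    """S_xy = max(S_x, S_y), then threshold into a 0/255 binary image."""
    s_x = np.abs(filter2(gray, G_X))
    s_y = np.abs(filter2(gray, G_Y))
    s_xy = np.maximum(s_x, s_y)
    return np.where((s_xy >= lo) & (s_xy <= hi), 255, 0).astype(np.uint8)
```

The band threshold [1,50] keeps moderate gradients and suppresses both flat road surface (response 0) and very strong edges such as headlight glare boundaries.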
Step 5: a DF (Dual-feature Fusion) algorithm is proposed to fuse the features of IMG2 and IMG3, and the fused image is then filtered with morphological operations to obtain the image IMG4.
And a DF dual feature fusion algorithm is provided, the binary images IMG2 and IMG3 are fused, and the lane lines are separated from the complex background.
g(x,y)=f(x,y)∨h(x,y)
Wherein f (x, y) and h (x, y) represent IMG2 and IMG3, respectively.
The fused image is processed with morphological dilation and erosion, i.e. a closing operation, which fills holes in the lane lines and eliminates small particle noise contained in them. Concretely, a logical OR is applied to the binary images IMG2 and IMG3 to fuse the features of the two images, and a morphological closing with a 5×5 convolution kernel B of all ones is then applied to the result, filtering the fused image into the processed image IMG4; the effect is shown in FIG. 5. The convolution kernel B used is as follows:
B = [ 1 1 1 1 1
      1 1 1 1 1
      1 1 1 1 1
      1 1 1 1 1
      1 1 1 1 1 ]
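The DF fusion and closing can be sketched on 0/1 binary arrays as follows. This is a minimal pure-NumPy illustration, assuming the border is treated as background during erosion; a real pipeline would use a morphology library:

```python
import numpy as np

def dilate(b: np.ndarray, k: int = 5) -> np.ndarray:
    """Binary dilation with a k x k all-ones kernel."""
    p = k // 2
    pad = np.pad(b, p, mode="constant", constant_values=0)
    h, w = b.shape
    out = np.zeros_like(b)
    for dy in range(k):
        for dx in range(k):
            out |= pad[dy:dy + h, dx:dx + w]
    return out

def erode(b: np.ndarray, k: int = 5) -> np.ndarray:
    """Binary erosion with a k x k all-ones kernel (border counts as background)."""
    p = k // 2
    pad = np.pad(b, p, mode="constant", constant_values=0)
    h, w = b.shape
    out = np.ones_like(b)
    for dy in range(k):
        for dx in range(k):
            out &= pad[dy:dy + h, dx:dx + w]
    return out

def df_fuse(img2: np.ndarray, img3: np.ndarray) -> np.ndarray:
    """g(x,y) = f(x,y) OR h(x,y), followed by a closing with the 5x5 kernel B."""
    fused = img2 | img3
    return erode(dilate(fused))  # closing = dilation then erosion
```

The OR keeps every pixel flagged by either the CLAHE branch or the Prewitt branch, and the closing fills small gaps inside the lane marks.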
step 6: and determining the lane line pixel points in the IMG4 by adopting an optimized sliding window searching algorithm, and fitting the pixel points by utilizing a polynomial to determine a boundary equation of the lane line.
Four vertices are set, and IMG4 is perspective-transformed into the top view IMG5. Histogram statistics are then computed on the top view: the abscissa range equals the length of the IMG5 image and the ordinate is pixel density. The starting positions sl and sr of the left and right lane line feature-extraction windows are determined from the peaks of the statistical histogram. With a feature-extraction window of length 100 px and width 50 px, the abscissa x and ordinate y of non-zero pixels are counted window by window from the bottom of the IMG5 image upward according to the values of sl and sr. The coordinate points are then fitted with a quadratic polynomial to obtain the boundary equation of the lane line.
A sliding window optimization scheme of frame skipping search is provided, namely all lane line pixel points are searched from bottom to top by using a sliding window every 10 frames, and the execution efficiency of an algorithm can be effectively improved.
A quadratic polynomial is fitted to the abscissa x and ordinate y values of all non-zero pixels counted in the windows:
x = b·y² + c·y + d
where x is the abscissa of the lane line, y is the ordinate of the lane line with range (1, H), H being the height of the image IMG5, and b, c and d are polynomial coefficients.
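The histogram seeding, sliding-window search and polynomial fit can be sketched for one lane line as follows. The window count, margin and minimum-pixel values are illustrative assumptions (the patent specifies 100×50 px windows and searches both lane lines):

```python
import numpy as np

def fit_lane(binary: np.ndarray, n_windows: int = 9, margin: int = 50, minpix: int = 10):
    """Sliding-window search on a bird's-eye binary image, then fit x = b*y^2 + c*y + d."""
    h, w = binary.shape
    hist = binary[h // 2:, :].sum(axis=0)   # pixel-density histogram of the lower half
    x_cur = int(np.argmax(hist))            # window start column for one lane line
    ys, xs = np.nonzero(binary)
    win_h = h // n_windows
    keep = np.zeros(xs.shape, dtype=bool)
    for i in range(n_windows):              # search from the bottom of the image upward
        y_lo, y_hi = h - (i + 1) * win_h, h - i * win_h
        in_win = (ys >= y_lo) & (ys < y_hi) & (np.abs(xs - x_cur) < margin)
        keep |= in_win
        if in_win.sum() > minpix:
            x_cur = int(xs[in_win].mean())  # recenter the next window on found pixels
    b, c, d = np.polyfit(ys[keep], xs[keep], 2)
    return b, c, d
```

Recentering each window on the mean column of its pixels is what lets the search follow a curved lane instead of a fixed vertical strip.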
Step 7: according to the determined boundary equation, the lane lines are projected onto IMG by inverse perspective transformation, completing lane line tracking and lane line region visualization.
According to the left and right boundary equations determined in step 6, the pixels between the left and right lane lines are color-labeled to obtain the fitted lane line boundary shown in FIG. 6. An inverse perspective transformation matrix from the top view back to the road view is computed, and the color-labeled lane region is projected onto IMG according to this matrix, completing the tracking and visualization of the lane line region, as shown in FIG. 7.
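The projection step can be sketched with a small homography helper. The DLT solve below stands in for a library routine such as OpenCV's getPerspectiveTransform, and the point values in the usage are illustrative, not the patent's vertices:

```python
import numpy as np

def perspective_matrix(src, dst) -> np.ndarray:
    """3x3 homography mapping the 4 src points onto the 4 dst points (DLT, h33 = 1)."""
    A, rhs = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); rhs.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); rhs.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(rhs, float))
    return np.append(h, 1.0).reshape(3, 3)

def project(pts, M) -> np.ndarray:
    """Apply a homography to an (n, 2) array of (x, y) points."""
    homo = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))])
    mapped = homo @ M.T
    return mapped[:, :2] / mapped[:, 2:3]  # divide out the homogeneous coordinate
```

Inverting the forward (road-to-top-view) matrix with np.linalg.inv gives the inverse perspective transformation used to map the fitted lane region back onto IMG.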
The method detects and tracks lane lines under night illumination conditions including weak, strong and normal illumination. Experimental results show an average processing time of 0.0143 seconds per single-frame lane line image (70 FPS), good real-time performance, and good robustness to interference from headlight illumination, road shadows and traffic signs.

Claims (6)

1. A self-adaptive night lane line detection method based on improved CLAHE and sliding window search is characterized by comprising the following steps:
(1) obtaining a video image, zooming the image by using a down-sampling algorithm, and fixing the size of the image to obtain a processed image IMG;
(2) determining 6 vertexes, performing region-of-interest processing on the IMG, setting the pixel values of the image inside the region to 0 and keeping the pixel values outside the region unchanged, to obtain a processed image IMG1;
(3) optimizing a CLAHE algorithm shearing point formula, preprocessing an image IMG1, setting a pixel threshold value to be [145,255], and performing thresholding processing on the preprocessed image to obtain a binary image IMG 2;
(4) filtering the IMG1 by using a density Prewitt operator, setting a pixel threshold value to be [1,50], and carrying out thresholding treatment on the filtered image to obtain a binary image IMG 3;
(5) a DF dual feature fusion algorithm is provided, the features of IMG2 and IMG3 are fused, and morphological operation is adopted to filter the fused image to obtain an image IMG 4;
(6) determining lane line pixel points in IMG4 by adopting an optimized sliding window search algorithm, and fitting the pixel points by utilizing a polynomial to determine a boundary equation of a lane line;
(7) and according to the determined boundary equation, projecting the lane line to the IMG by adopting inverse perspective transformation to finish lane line tracking and lane line region visualization.
2. The adaptive nighttime lane detection method based on an improved CLAHE and sliding window search of claim 1, wherein step (1) the image size is in the range of [600,380] pixels in length and [460,200] pixels in width.
3. The adaptive nighttime lane detection method based on an improved CLAHE and a sliding window search according to claim 1, wherein the step (3) comprises the steps of:
(31) according to the dynamic range of each block in the image, the CLAHE clipping-point formula is optimized, and a clipping point cp is set adaptively to process the image; the clipping-point formula is as follows:
cp = (N / R) · (1 + α·σ / (Avg + ε))
the method comprises the following steps that cp is a shearing point of a CLAHE algorithm, N is the number of pixel points in each block after the blocks are partitioned, R is the dynamic range of pixels in the blocks, alpha is a cutting factor, sigma is the variance of the pixels in the blocks, Avg is the average pixel value in the blocks, epsilon is a minimum number, an optimization point is a self-adaptive shearing point, and the variance and the average gray level are used for representing the uniformity degree of the pixels in the blocks;
(32) the clipped image is Gaussian-filtered to further reduce image noise, and the image is then thresholded with the range [145,255] into the binary image IMG2.
4. The adaptive nighttime lane detection method based on an improved CLAHE and a sliding window search according to claim 1, wherein the step (4) comprises the steps of:
(41) graying the IMG1, and filtering the image using the Prewitt operators in the horizontal and vertical directions, wherein the Prewitt operators in the vertical and horizontal directions are as follows:
g_x = [ -1 0 1; -1 0 1; -1 0 1 ],  g_y = [ -1 -1 -1; 0 0 0; 1 1 1 ]
where g_x is the transverse Prewitt operator and g_y is the longitudinal Prewitt operator;
(42) calculating S_xy; the calculation formula is as follows:
S_xy = max(S_x, S_y)
where S_x and S_y are the results of filtering the grayed IMG1 image with g_x and g_y, respectively, and max() denotes the element-wise maximum;
(43) and setting the threshold value to be [1,50] and carrying out thresholding processing on the image to obtain a binary image IMG 3.
5. The adaptive night lane detection method based on improved CLAHE and sliding window search as claimed in claim 1, wherein said step (5) comprises the steps of:
(51) and (3) providing a DF dual feature fusion algorithm, fusing the binary images IMG2 and IMG3, and separating the lane line from the complex background:
g(x,y)=f(x,y)∨h(x,y)
wherein f (x, y) and h (x, y) represent IMG2 and IMG3, respectively;
(52) performing dilation and erosion on the fused image with morphological operations, i.e. a closing operation, filling holes in the lane lines and eliminating small particle noise contained in them, to obtain a processed image IMG4; the 5×5 convolution kernel B used is as follows:
B = [ 1 1 1 1 1
      1 1 1 1 1
      1 1 1 1 1
      1 1 1 1 1
      1 1 1 1 1 ]
6. the adaptive night lane detection method based on improved CLAHE and sliding window search as claimed in claim 1, wherein said step (6) comprises the steps of:
(61) setting four vertexes, applying a perspective transformation to IMG4 to obtain a top view IMG5, computing the pixel histogram of IMG5, and determining the initial positions of the left and right sliding windows from the pixel density distribution;
(62) a sliding window optimization scheme of frame skipping search is provided, namely all lane line pixel points are searched from bottom to top by using a sliding window every 10 frames;
(63) fitting a quadratic polynomial to the abscissa x and ordinate y values of all non-zero pixels counted in the windows:
x = b·y² + c·y + d
where x is the abscissa of the lane line, y is the ordinate of the lane line with range (1, H), H being the height of the image IMG5, and b, c and d are polynomial coefficients.
CN202011083289.XA 2020-10-12 2020-10-12 Self-adaptive night lane line detection method based on improved CLAHE and sliding window search Active CN112270690B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011083289.XA CN112270690B (en) 2020-10-12 2020-10-12 Self-adaptive night lane line detection method based on improved CLAHE and sliding window search

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011083289.XA CN112270690B (en) 2020-10-12 2020-10-12 Self-adaptive night lane line detection method based on improved CLAHE and sliding window search

Publications (2)

Publication Number Publication Date
CN112270690A 2021-01-26
CN112270690B 2022-04-26

Family

ID=74338876

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011083289.XA Active CN112270690B (en) 2020-10-12 2020-10-12 Self-adaptive night lane line detection method based on improved CLAHE and sliding window search

Country Status (1)

Country Link
CN (1) CN112270690B (en)


Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011118890A (en) * 2009-11-04 2011-06-16 Valeo Schalter & Sensoren Gmbh Method and system for detecting whole lane boundary
CN102288121A (en) * 2011-05-12 2011-12-21 电子科技大学 Method for measuring and pre-warning lane departure distance based on monocular vision
CN103605953A (en) * 2013-10-31 2014-02-26 电子科技大学 Vehicle interest target detection method based on sliding window search
US20150279017A1 (en) * 2014-03-28 2015-10-01 Fuji Jukogyo Kabushiki Kaisha Stereo image processing device for vehicle
US20190073542A1 (en) * 2017-09-07 2019-03-07 Regents Of The University Of Minnesota Vehicle lane detection system
EP3525132A1 (en) * 2018-02-13 2019-08-14 KPIT Technologies Ltd. System and method for lane detection
CN109085823A (en) * 2018-07-05 2018-12-25 Zhejiang University Low-cost vision-based automatic tracking driving method for park scenes
CN109359602A (en) * 2018-10-22 2019-02-19 Changsha Intelligent Driving Research Institute Co., Ltd. Lane line detection method and device
CN110569704A (en) * 2019-05-11 2019-12-13 Beijing University of Technology Multi-strategy adaptive lane line detection method based on stereo vision
CN110414385A (en) * 2019-07-12 2019-11-05 Huaiyin Institute of Technology Lane line detection method and system based on homography transformation and feature window
CN110647850A (en) * 2019-09-27 2020-01-03 Fujian Agriculture and Forestry University Automatic lane deviation measuring method based on inverse perspective principle
CN111126306A (en) * 2019-12-26 2020-05-08 Jiangsu Roswell Electric Co., Ltd. Lane line detection method based on edge features and sliding window
CN111242037A (en) * 2020-01-15 2020-06-05 South China University of Technology Lane line detection method based on structural information

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XI, Q. et al., "An automatic active contour method for sea cucumber segmentation in natural underwater environments", Computers and Electronics in Agriculture *
HE, Cong, "Research and Implementation of an Image Enhancement Algorithm for Low-Light-Level Night Vision Devices", China Masters' Theses Full-text Database, Information Science and Technology Series *
HUANG, Yong, "Research on Low-Illumination Image Enhancement Based on Bilateral Filtering and an Improved CLAHE Algorithm", China Masters' Theses Full-text Database, Information Science and Technology Series *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116681721A (en) * 2023-06-07 2023-09-01 东南大学 Linear track detection and tracking method based on vision
CN116681721B (en) * 2023-06-07 2023-12-29 东南大学 Linear track detection and tracking method based on vision

Also Published As

Publication number Publication date
CN112270690B (en) 2022-04-26

Similar Documents

Publication Publication Date Title
CN111145161B (en) Pavement crack digital image processing and identifying method
CN107862290B (en) Lane line detection method and system
US20210049744A1 (en) Method for image dehazing based on adaptively improved linear global atmospheric light of dark channel
CN108038416B (en) Lane line detection method and system
CN110414385B (en) Lane line detection method and system based on homography transformation and characteristic window
US9183617B2 (en) Methods, devices, and computer readable mediums for processing a digital picture
TWI607901B (en) Image inpainting system and method using the same
US8532339B2 (en) System and method for motion detection and the use thereof in video coding
CN107784669A (en) A method for light spot extraction and centroid determination
CN107895151A (en) Lane line detection method based on machine vision under strong light conditions
CN109064411B (en) Illumination compensation-based road surface image shadow removing method
CN110197153A (en) Automatic wall identification method in floor plans
CN112070717B (en) Power transmission line icing thickness detection method based on image processing
CN116823686B (en) Night infrared and visible light image fusion method based on image enhancement
CN112200742A (en) Filtering and denoising method applied to edge detection
CN111598814B (en) Single image defogging method based on extreme scattering channel
CN112270690B (en) Self-adaptive night lane line detection method based on improved CLAHE and sliding window search
Pan et al. Single-image dehazing via dark channel prior and adaptive threshold
CN110047041B (en) Space-frequency domain combined traffic monitoring video rain removing method
CN108921147B (en) Black smoke vehicle identification method based on dynamic texture and transform domain space-time characteristics
Wang et al. Adaptive binarization: A new approach to license plate characters segmentation
Wang et al. A robust vehicle detection approach
Wang et al. Automatic TV logo detection, tracking and removal in broadcast video
CN116030430A (en) Rail identification method, device, equipment and storage medium
CN111539967B (en) Method and system for identifying and processing interference fringe region in terahertz imaging of focal plane

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210126

Assignee: Jiangsu Kesheng Xuanyi Technology Co.,Ltd.

Assignor: HUAIYIN INSTITUTE OF TECHNOLOGY

Contract record no.: X2022320000363

Denomination of invention: An Adaptive Night Lane Detection Method Based on Improved CLAHE and Sliding Window Search

Granted publication date: 20220426

License type: Common License

Record date: 20221210