CN109871759B - Lane line identification method based on TensorFlow and OpenCV - Google Patents

Lane line identification method based on TensorFlow and OpenCV

Info

Publication number
CN109871759B
Authority
CN
China
Prior art keywords: image, lane line, value, pixel, filter
Legal status: Active
Application number: CN201910036301.2A
Other languages: Chinese (zh)
Other versions: CN109871759A (en)
Inventor
王权
卢思源
尹升
刘胜
Current Assignee: Jiangsu University
Original Assignee: Jiangsu University
Application filed by Jiangsu University
Priority to CN201910036301.2A
Publication of CN109871759A
Application granted
Publication of CN109871759B
Status: Active

Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02T: Climate change mitigation technologies related to transportation
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a lane line identification method based on TensorFlow and OpenCV, which identifies lane lines in road images and establishes a lane line fitting equation, thereby addressing the many interference items and low recognition rate of traditional lane line identification algorithms. First, TensorFlow is used to build a convolution-deconvolution neural network model; after training, the model performs semantic segmentation of the road image, i.e., pixel-level classification of its lane line regions. Next, a variable-threshold image binarization method binarizes the network's output image to distinguish the lane line regions in the road image. Finally, a lane line coordinate point extraction method is built with OpenCV to further eliminate interference items, extract lane line coordinate points from the binarized image, and establish the lane line equation. The proposed algorithm is efficient, achieves a high recognition rate, and shows good robustness in practical use.

Description

Lane line identification method based on TensorFlow and OpenCV
Technical Field
The invention belongs to the field of image recognition for automated driving of vehicles, and in particular relates to a lane line recognition method based on TensorFlow and OpenCV.
Background
The lane line recognition algorithm is an important component of an automatic driving system. It uses image processing techniques to extract points on lane lines from the original road images captured by a front-mounted CCD camera and establishes a lane line equation, thereby recognizing the lane lines in the road image. However, because road images are complex and contain many interference items, conventional image processing methods cannot meet the lane line recognition accuracy required by automated vehicles.
Disclosure of Invention
To address these problems, the invention provides a lane line identification method based on TensorFlow and OpenCV. The adopted algorithm combines the advantages of modern image recognition techniques and traditional image processing: the former extracts lane line features from the road image through a neural network model, realizing pixel-level classification of the lane line regions; the latter applies a series of image processing algorithms designed around the characteristics of lane lines, which further remove interference items and improve the identification result. The proposed algorithm is efficient, achieves a high recognition rate, and shows good robustness in practical use. The design method provided by the invention comprises the following steps:
Step 1, creating the image dataset: the original image data are color road images captured by a front-mounted CCD camera; the invention uses these images to generate the dataset used to train the neural network. Dataset creation comprises image labeling, image processing, and dataset generation. In image labeling, images are annotated manually: the lane line regions in the original images are outlined with the LabelMe annotation tool, and a binary image is output as the labeled image. In image processing, the labeled image and the original image are processed separately to produce the two images needed to train the neural network model: the network input image used during training and the input image used when computing the cross entropy. In dataset generation, these two images are packed into an image dataset using TensorFlow's TFRecord format.
Step 2, designing the neural network model: the model is built with TensorFlow and, once trained, is used to extract lane line features. The model has 8 layers: convolution layer 1, convolution layer 2, convolution layer 3, a max-pooling layer, an inverse max-pooling layer, deconvolution layer 3, deconvolution layer 2, and deconvolution layer 1. Each layer has a specific filter size, filter depth, and filter stride, and some layers also use all-zero padding. By design, the input of deconvolution layer 3 equals a linear weighted sum of the output of convolution layer 3 and the output of the inverse max-pooling layer.
Step 3, variable-threshold image binarization: in actual use, the trained neural network model outputs images with distinct lane line features. The variable-threshold binarization method binarizes these output images, removing further interference items so that the lane line features become more prominent. The value of each pixel in the image to be processed is compared with a threshold: if the pixel value is greater than the threshold it is set to 255, otherwise to 0. The variable threshold is a linear function of the sum of the pixel values of the image to be processed, and the slope and intercept of this linear function can be adjusted to the actual conditions.
Step 4, extracting the lane line coordinate points: the invention builds a lane line coordinate point extraction algorithm with OpenCV and extracts the coordinate points from the binarized road image. The method sets an intermediate value and then performs row scanning and column scanning of the image, with a row increment of 8 and a column increment of 1. During a row scan, if a pixel satisfies either of the following two conditions, the next two columns are skipped and the pixel one column over is recorded as a lane line coordinate point:
(a) the column number of the pixel is less than or equal to the intermediate value, the pixel value is greater than 100, the pixel in the same row offset by 1 column in the positive x-direction has a value greater than 100, and the pixel in the same row offset by 4 columns in the positive x-direction has a value less than 100;
(b) the column number of the pixel is greater than the intermediate value, the pixel value is greater than 100, the pixel in the same row offset by 1 column in the negative x-direction has a value greater than 100, and the pixel in the same row offset by 4 columns in the negative x-direction has a value less than 100.
Step 5, establishing the lane line fitting equation: a fitting equation is established from the extracted lane line coordinate points and then solved. The invention adopts a polynomial function as the basic form of the fitting equation. For each lane line, the number of unknowns equals the number of coordinate points extracted on that line, and the highest degree of the polynomial is one less than the number of unknowns, which guarantees that the equation system is fully solvable.
The beneficial effects of the invention are as follows: the proposed lane line identification method combines the efficiency of traditional OpenCV algorithms with the high recognition rate of deep learning. A program designed according to the method runs in about 68.469 ms on a machine with an Intel(R) Core(TM) i5-7400 and an NVIDIA GeForce GTX 1050 Ti, achieves a lane line coordinate point recognition accuracy of 90.446%, and is robust to external conditions such as vehicle shadows and illumination.
Drawings
FIG. 1 is a schematic diagram of an image dataset production process according to the present invention;
FIG. 2 is a schematic diagram of a neural network model according to the present invention;
FIG. 3 is an image scanning coordinate system in accordance with the present invention;
fig. 4 is a flowchart of the present invention for extracting lane line coordinate points.
Detailed Description
So that those skilled in the art may better understand the solution provided by the invention, it is described in detail below with reference to the accompanying drawings.
1. The method for manufacturing the image data set comprises the following steps:
the image dataset is used for training a neural network model, and the trained neural network model has the function of extracting lane line characteristics, and is a schematic diagram of the image dataset manufacturing process as shown in fig. 1. The original image data for making the image data set is derived from color road images acquired by the CCD camera in front of the vehicle, and the color images are subjected to three processes of image labeling, image processing and image data set generation to obtain the image data set.
Image labeling uses the LabelMe tool to annotate lane line regions in the color road images. During labeling, the lane line region is defined as the area occupied by lane dashed lines, lane solid lines, lane double dashed lines, and lane double solid lines. The labeled image is a binary image in which pixels of the lane line region have the value 255 and all other pixels have the value 0; the lane line region therefore appears white and the rest black.
After image labeling, the image processing stage follows. For the labeled output image, it comprises image cropping, image scaling, and image binarization; for the color road image, it comprises image cropping, image scaling, and image graying. Image cropping removes, by a fixed size along the image height, the non-lane-line region of the image to be processed, keeping as much of the lane line region as possible; the image width is unchanged. Image scaling uses bilinear interpolation to resize the cropped image to a fixed size with equal width and height, so as to meet the neural network model's input size requirement. Image binarization compares each pixel value of the scaled image with a fixed threshold and modifies it accordingly: pixels below the threshold are set to 0, the rest to 255. It is applied to the cropped and scaled labeled image so that it becomes a strictly binary map, which improves the quality of neural network training. Image graying converts the image to grayscale; it is applied to the cropped and scaled color road image and reduces the data volume while preserving the image information as far as possible.
The image processing stage thus produces the two images required to train the neural network model: the training input image and the cross-entropy input image. The training input image is obtained from the color road image by cropping, scaling, and graying; the cross-entropy input image is obtained by first labeling the color road image and then cropping, scaling, and binarizing the labeled output image.
After image processing, the dataset generation stage follows: the two images obtained in image processing, namely the training input image and the cross-entropy input image, are packed into a dataset in the TFRecord file format.
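A minimal sketch of the TFRecord packing step is shown below. The feature names 'image' and 'label' and the raw-bytes encoding are choices made for this sketch; the patent only specifies that the two images are packed in TFRecord format.

```python
import numpy as np
import tensorflow as tf

def write_dataset(pairs, path):
    """Pack (training input image, cross-entropy input image) pairs into a
    TFRecord file.  Images are assumed to be uint8 numpy arrays."""
    with tf.io.TFRecordWriter(path) as writer:
        for img, label in pairs:
            example = tf.train.Example(features=tf.train.Features(feature={
                'image': tf.train.Feature(
                    bytes_list=tf.train.BytesList(value=[img.tobytes()])),
                'label': tf.train.Feature(
                    bytes_list=tf.train.BytesList(value=[label.tobytes()])),
            }))
            writer.write(example.SerializeToString())
```

At training time the records would be read back with `tf.data.TFRecordDataset` and decoded with the matching feature description.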
2. The design method of the neural network model comprises the following steps:
the neural network model is built by TensorFlow, the neural network model built by the invention adopts the idea of up-sampling, and the size of an image is not changed, and a schematic diagram of the neural network model is shown in FIG. 2. The trained neural network model may be used to extract lane line features including dotted lines, solid lines, double-dotted lines, and double realizations in the road image.
The neural network model built by the invention has 8 layers: convolution layer 1, convolution layer 2, convolution layer 3, a max-pooling layer, an inverse max-pooling layer, deconvolution layer 3, deconvolution layer 2, and deconvolution layer 1. Convolution layer 1 has a 2×2 filter, depth 32, stride 1×1, and no all-zero padding; convolution layer 2 has a 2×2 filter, depth 64, stride 1×1, and no all-zero padding; convolution layer 3 has a 3×3 filter, depth 128, stride 1×1, and no all-zero padding; the max-pooling layer has a 2×2 filter, stride 2×2, with all-zero padding; the inverse max-pooling layer has a 2×2 filter, stride 2×2, with all-zero padding; deconvolution layer 3 has a 3×3 filter, depth 64, stride 1×1, and no all-zero padding; deconvolution layer 2 has a 2×2 filter, depth 32, stride 1×1, and no all-zero padding; deconvolution layer 1 has a 2×2 filter, depth 1, stride 1×1, and no all-zero padding. The input of deconvolution layer 3 equals a linear weighted sum of the output of convolution layer 3 and the output of the inverse max-pooling layer.
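The layer arrangement above can be sketched with the Keras functional API. Several details below are assumptions, not taken from the patent: ReLU/sigmoid activations (the patent names none), `UpSampling2D` standing in for the inverse max-pooling layer (true max-unpooling would reuse the pooling argmax indices), and equal 0.5 weights for the linear weighted sum feeding deconvolution layer 3.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_model(size=128):
    """Sketch of the 8-layer convolution-deconvolution network described above."""
    inp = layers.Input((size, size, 1))
    c1 = layers.Conv2D(32, 2, padding='valid', activation='relu')(inp)   # conv layer 1
    c2 = layers.Conv2D(64, 2, padding='valid', activation='relu')(c1)    # conv layer 2
    c3 = layers.Conv2D(128, 3, padding='valid', activation='relu')(c2)   # conv layer 3
    p = layers.MaxPooling2D(2, strides=2, padding='same')(c3)            # max pooling
    up = layers.UpSampling2D(2)(p)             # stand-in for inverse max pooling
    skip = layers.Average()([c3, up])          # 0.5*conv3 + 0.5*unpooled (assumed weights)
    d3 = layers.Conv2DTranspose(64, 3, padding='valid', activation='relu')(skip)
    d2 = layers.Conv2DTranspose(32, 2, padding='valid', activation='relu')(d3)
    d1 = layers.Conv2DTranspose(1, 2, padding='valid', activation='sigmoid')(d2)
    return tf.keras.Model(inp, d1)
```

With valid padding the three convolutions shrink a 128×128 input to 124×124; pooling then unpooling restores 124×124 so the skip connection shapes match, and the three transposed convolutions grow the result back to 128×128, matching the input size as the patent requires.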
The dataset used for model training is the image dataset generated by the invention. During training, a training input image is first taken from the dataset; after passing through the neural network model, it produces a model output image of the same size. The cross-entropy input image is then taken from the dataset. Finally, the model output image and the cross-entropy input image are used together to compute the cross entropy; the resulting value serves as the loss for a gradient descent algorithm, which updates the model parameters to achieve the training effect.
3. The image binarization method of the variable threshold comprises the following steps:
the trained neural network model can output images with obvious lane line characteristics, the image binarization method with variable threshold values can carry out binarization processing on the output images, and the processed images can remove more interference items so that the lane line characteristics are more obvious.
The principle of the method is to compare the value of each pixel in the image to be processed with a threshold: if the pixel value is greater than the threshold it is set to 255, otherwise to 0. The threshold, however, is not fixed; it varies with the sum of the pixel values of each image to be processed, to which it is related by a linear function whose slope and intercept can be adjusted to the actual conditions. In general, determining the slope and intercept requires running the variable-threshold binarization on a large number of images and deriving empirical values from the quality of the results.
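The rule above can be sketched in a few lines. The default slope and intercept are placeholder values; as noted, the patent treats them as empirical parameters tuned on a large set of images.

```python
import numpy as np

def variable_threshold_binarize(img, slope=1.0e-7, intercept=40.0):
    """Binarize `img` with a threshold that is a linear function of the sum
    of its pixel values.  `slope` and `intercept` are assumed placeholders."""
    threshold = slope * float(img.sum()) + intercept   # linear function of the pixel sum
    out = np.where(img > threshold, 255, 0).astype(np.uint8)
    return out, threshold
```

A brighter network output (larger pixel sum) thus raises the threshold, which is what keeps the binarization stable across frames with different overall intensity.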
4. The method for extracting the lane line coordinate points comprises the following steps:
the invention establishes an algorithm for extracting lane line coordinate points by using OpenCV, and extracts lane line coordinate points from an image processed by a variable threshold image binarization method. First, the method establishes an image scanning coordinate system with the upper left corner of the image as the origin, the right direction as the positive direction of the x-axis, and the lower direction as the positive direction of the y-axis, as shown in fig. 3. Then, the method establishes a scanning method of coordinate points by using OpenCV, which is to establish a condition mechanism, record the coordinate points meeting the condition as collected lane line coordinate points in the scanning process of the image.
The image scan consists of row scanning and column scanning. Row scanning runs from row 0 to the last row in the positive y-direction with an increment of 8; column scanning runs from column 0 to the last column in the positive x-direction with an increment of 1. Together they traverse the pixels of the scanned rows that meet the scanning conditions. Fig. 4 is a flowchart of the lane line coordinate point extraction. Assuming the image to be scanned is 100×100, the scan visits most pixels of rows 0, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, and 96, while a few pixels are not visited. During a row scan, if a pixel satisfies either of the following two conditions, the next two columns are skipped and the pixel one column over is recorded as a lane line coordinate point:
(a) the column number of the pixel is less than or equal to the intermediate value, the pixel value is greater than 100, the pixel in the same row offset by 1 column in the positive x-direction has a value greater than 100, and the pixel in the same row offset by 4 columns in the positive x-direction has a value less than 100;
(b) the column number of the pixel is greater than the intermediate value, the pixel value is greater than 100, the pixel in the same row offset by 1 column in the negative x-direction has a value greater than 100, and the pixel in the same row offset by 4 columns in the negative x-direction has a value less than 100.
For example, when scanning row 16 with the intermediate value set to 50: if the pixel in column 18 has the value 200, the pixel in column 19 the value 160, and the pixel in column 22 the value 88, condition (a) is met. The pixel at row 16, column 19 is then recorded as a lane line coordinate point, and condition (a) is not evaluated for the pixels in columns 19 and 20.
The conditions also use an intermediate value, which is user-defined and typically about half the image width. During scanning, the intermediate value divides the pixels into two regions: a pixel whose column number is less than or equal to the intermediate value is considered to be on the left, otherwise on the right. Left-side pixels are tested with condition (a) and right-side pixels with condition (b). This mechanism improves the recognition accuracy of lane line coordinate points on both sides of the road in practical use.
The following is a detailed step of OpenCV extracting lane line coordinate points in an image scanning coordinate system:
step 1: setting an intermediate value;
step 2: setting the value of y to 0;
step 3: setting the value of x to 0;
step 4: if x is less than or equal to the intermediate value, entering step 5, otherwise entering step 8;
step 5: if P (x, y) is greater than 100, go to step 6, otherwise go to step 14;
step 6: if P (x+1, y) is greater than 100, go to step 7, otherwise go to step 14;
step 7: if P (x+4, y) is less than 100, go to step 11, otherwise go to step 14;
step 8: if P (x, y) is greater than 100, go to step 9, otherwise go to step 14;
step 9: if P (x-1, y) is greater than 100, go to step 10, otherwise go to step 14;
step 10: if P (x-4, y) is less than 100, go to step 11, otherwise go to step 14;
step 11: x=x+1;
step 12: recording P (x, y) as an extracted lane line coordinate point;
step 13: x=x+1;
step 14: x=x+1;
step 15: if x is greater than 100, entering step 16, otherwise returning to step 4;
step 16: y=y+8;
Step 17: if y is greater than 100, the scan ends, otherwise return to step 3.
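The steps above can be sketched in Python. The version below generalizes the hard-coded bound of 100 to the actual image size and keeps the 1-column and 4-column offset look-ups in bounds, which the listing above leaves implicit.

```python
import numpy as np

def extract_lane_points(img, mid=None):
    """Sketch of steps 1-17 above, generalized from the 100x100 example to
    any image size.  `mid` is the user-defined intermediate value,
    defaulting to half the image width."""
    h, w = img.shape
    if mid is None:
        mid = w // 2
    points = []
    for y in range(0, h, 8):                  # row scan, increment 8
        x = 0
        while x < w - 4:                      # column scan, increment 1
            if x <= mid:                      # left region: condition (a)
                hit = img[y, x] > 100 and img[y, x + 1] > 100 and img[y, x + 4] < 100
            else:                             # right region: condition (b)
                hit = x >= 4 and img[y, x] > 100 and img[y, x - 1] > 100 and img[y, x - 4] < 100
            if hit:
                points.append((x + 1, y))     # record the next column's pixel (steps 11-12)
                x += 3                        # skip the two following columns (steps 13-14)
            else:
                x += 1
    return points
```

Run on the worked example above (row 16, columns 18/19/22 holding 200/160/88 with the intermediate value 50), this records the point at column 19, row 16.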
5. The method for establishing the lane line fitting equation comprises the following steps:
and establishing a lane line fitting equation and solving the equation to finally obtain a lane line identification result. The lane line fitting equation establishes a mathematical model by using the collected lane line coordinate points, and then fits the lane lines by using a mathematical expression. The invention adopts a polynomial function as a basic form of a lane line fitting equation, and when each lane line fitting equation is established, the number of unknown quantities of the lane line fitting equation is equal to the number of coordinate points extracted on the lane line. For example, by the algorithm of the invention, 4 lane coordinate points are acquired on a certain lane, and are respectively P 1 (x 1 ,y 1 )、P 2 (x 2 ,y 2 )、P 3 (x 3 ,y 3 ) And P 4 (x 4 ,y 4 ) The following mathematical equation can be established:
y = ax³ + bx² + cx + d    (1)
The unknowns to be solved are a, b, c, and d; in principle, substituting the coordinates of P1, P2, P3, and P4 into the equation determines these 4 unknowns. To solve equation (1), the invention uses the inverse-matrix method: the lane line coordinate points are substituted into the equation to be solved, and the resulting system is arranged in matrix form, as shown in equation (2):
[a]   [x1³  x1²  x1  1]⁻¹  [y1]
[b] = [x2³  x2²  x2  1]    [y2]
[c]   [x3³  x3²  x3  1]    [y3]
[d]   [x4³  x4²  x4  1]    [y4]     (2)
In equation (2), a, b, c, and d are the unknowns to be solved; their values are obtained by multiplying the inverse of the coefficient matrix on the right-hand side by the vector (y1, y2, y3, y4). Substituting the obtained a, b, c, and d back into equation (1) yields the lane line equation.
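The inverse-matrix solution of equations (1) and (2) can be sketched with NumPy. The four sample points below are illustrative only; they are chosen to lie on y = x³ + 1 so the expected coefficients are known.

```python
import numpy as np

# Four illustrative lane-line coordinate points (x_i, y_i), on y = x^3 + 1.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([1.0, 2.0, 9.0, 28.0])

# Coefficient matrix of equation (2): one row [x^3, x^2, x, 1] per point.
M = np.column_stack([xs**3, xs**2, xs, np.ones_like(xs)])

# Inverse-matrix solution for the unknowns of equation (1).
a, b, c, d = np.linalg.inv(M) @ ys

def lane(x):
    """Fitted lane line equation y = ax^3 + bx^2 + cx + d."""
    return a * x**3 + b * x**2 + c * x + d
```

The matrix is invertible whenever the four x-coordinates are distinct, which is the condition for the system to be fully solvable as stated in step 5.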
The detailed description above covers only specific practical embodiments of the invention and is not intended to limit its scope; all equivalent embodiments or modifications that do not depart from the spirit of the invention fall within its scope.

Claims (8)

1. The lane line identification method based on TensorFlow and OpenCV is characterized by comprising the following steps of:
step 1, creating an image dataset: the original image data for the dataset are color road images captured by a front-mounted CCD camera, and these images are used to generate the image dataset for training the neural network;
step 2, designing a neural network model: adopting a convolutional neural network model;
the convolutional neural network model comprises 8 layers: convolution layer 1, convolution layer 2, convolution layer 3, a max-pooling layer, an inverse max-pooling layer, deconvolution layer 3, deconvolution layer 2, and deconvolution layer 1; convolution layer 1 has a filter size of 2×2, a filter depth of 32, and a filter stride of 1×1; convolution layer 2 has a filter size of 2×2, a filter depth of 64, and a filter stride of 1×1; convolution layer 3 has a filter size of 3×3, a filter depth of 128, and a filter stride of 1×1; the max-pooling layer has a filter size of 2×2, a filter stride of 2×2, and uses all-zero padding; the inverse max-pooling layer has a filter size of 2×2, a filter stride of 2×2, and uses all-zero padding; deconvolution layer 3 has a filter size of 3×3, a filter depth of 64, and a filter stride of 1×1; deconvolution layer 2 has a filter size of 2×2, a filter depth of 32, and a filter stride of 1×1; deconvolution layer 1 has a filter size of 2×2, a filter depth of 1, and a filter stride of 1×1;
wherein the input of deconvolution layer 3 equals a linear weighted sum of the output of convolution layer 3 and the output of the inverse max-pooling layer;
step 3, binarizing the image with the variable threshold value: performing binarization processing on an image with lane line characteristics output by the convolutional neural network model;
step 4, extracting lane line coordinate points: establishing a lane line coordinate point extraction algorithm by using OpenCV, and extracting lane line coordinate points from the road image subjected to binarization processing;
step 5, establishing a lane line fitting equation: and establishing a lane line fitting equation by using the extracted lane line coordinate points, and solving the equation to obtain a lane line identification result.
2. The lane line recognition method based on TensorFlow and OpenCV according to claim 1, wherein in step 1, the creating of the image dataset includes: image annotation, image processing and generating an image dataset;
the image labeling is performed manually: the lane line regions in the original road image are outlined with the LabelMe annotation tool, and the output labeled image is a binary image;
the image processing processes the labeled image and the original image separately to generate the two images required for neural network model training, namely the network input image used during training and the input image used when computing the cross entropy; specifically: when processing the labeled output image, the image processing comprises image cropping, image scaling, and image binarization; when processing the color road image, it comprises image cropping, image scaling, and image graying; the image cropping removes, by a fixed size along the image height, the non-lane-line region of the image to be processed, with the image width unchanged; the image scaling uses bilinear interpolation to resize the cropped output image to a fixed size with equal width and height; the image binarization compares each pixel value of the scaled output image with a threshold and modifies it according to the result: when the pixel value is smaller than the threshold it is set to 0, otherwise to 255; the image graying converts the scaled output image to grayscale;
the generating of the image dataset packs the two images from image processing into an image dataset using TensorFlow's TFRecord image dataset format.
3. The lane line recognition method based on TensorFlow and OpenCV according to claim 2, wherein the lane line region in the original road image includes lane dashed lines, lane solid lines, lane double dashed lines, and lane double solid lines;
the output images of the image processing comprise the output image obtained by preprocessing the labeled image and the output image obtained by preprocessing the color road image.
4. The lane line recognition method based on TensorFlow and OpenCV according to claim 1, wherein in step 3 the binarization processing compares the value of each pixel in the image to be processed with a variable threshold: if the pixel value is greater than the threshold, it is set to 255, otherwise it is set to 0; the variable threshold is a linear function of the sum of the pixel values of the image to be processed.
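A sketch of the variable-threshold binarization of claim 4. The patent only states that the threshold is a linear function of the image's pixel sum; the coefficients k and b below are illustrative assumptions, not values from the patent.

```python
import numpy as np


def variable_threshold_binarize(img, k=2e-7, b=60.0):
    # Threshold t is linear in the sum of all pixel values:
    # t = k * sum(img) + b  (k, b are illustrative coefficients).
    t = k * float(img.sum()) + b
    # Pixels above the threshold -> 255, otherwise -> 0.
    return np.where(img > t, 255, 0).astype(np.uint8)
```

Making the threshold depend on the total brightness lets the same rule adapt to darker or lighter frames without retuning a fixed cutoff.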
5. The lane line recognition method based on TensorFlow and OpenCV according to claim 1, wherein the specific implementation of step 4 comprises: setting an intermediate value and then performing row and column scans of the image, the row scan incrementing by 8 and the column scan by 1; during the row scan, if a pixel satisfies either of the following two conditions, the two pixels in the next two columns are no longer scanned, and the pixel in the next column is recorded as a lane line coordinate point:
(a) the column index of the pixel is less than or equal to the intermediate value, its pixel value is greater than 100, the pixel in the same row offset by 1 column in the positive x direction has a value greater than 100, and the pixel in the same row offset by 4 columns in the positive x direction has a value less than 100;
(b) the column index of the pixel is greater than the intermediate value, its pixel value is greater than 100, the pixel in the same row offset by 1 column in the negative x direction has a value greater than 100, and the pixel in the same row offset by 4 columns in the negative x direction has a value less than 100.
6. The lane line recognition method based on TensorFlow and OpenCV according to claim 5, wherein the lane line coordinate points are points in an image scanning coordinate system; the image scanning coordinate system takes the upper-left corner of the image as the origin, the rightward direction as the positive x axis, and the downward direction as the positive y axis; the value of an image pixel in this coordinate system is denoted P(x, y), where x denotes the column index of the pixel and y denotes its row index.
7. The lane line recognition method based on the TensorFlow and OpenCV according to claim 6, wherein the specific step of extracting the lane line coordinate point in the image scanning coordinate system by using OpenCV is as follows:
step 1: setting an intermediate value;
step 2: setting the value of y to 0;
step 3: setting the value of x to 0;
step 4: if x is less than or equal to the intermediate value, entering step 5, otherwise entering step 8;
step 5: if P (x, y) is greater than 100, go to step 6, otherwise go to step 14;
step 6: if P (x+1, y) is greater than 100, go to step 7, otherwise go to step 14;
step 7: if P (x+4, y) is less than 100, go to step 11, otherwise go to step 14;
step 8: if P (x, y) is greater than 100, go to step 9, otherwise go to step 14;
step 9: if P (x-1, y) is greater than 100, go to step 10, otherwise go to step 14;
step 10: if P (x-4, y) is less than 100, go to step 11, otherwise go to step 14;
step 11: x=x+1;
step 12: recording P (x, y) as an extracted lane line coordinate point;
step 13: x=x+1;
step 14: x=x+1;
step 15: if x is greater than 100, entering step 16, otherwise returning to step 4;
step 16: y=y+8;
step 17: if y is greater than 100, the scan ends, otherwise return to step 3.
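The 17-step scan of claim 7 can be written compactly as two nested loops. This is a sketch under the claim's own bounds (x and y scanned up to 100, so the image is assumed to be at least about 101 pixels in each direction); the default intermediate value of half the image width is an assumption, since the claim does not fix it.

```python
import numpy as np


def extract_lane_points(img, mid=None):
    """Extract lane line coordinate points per claim 7.

    img is a 2-D uint8 array indexed as img[row, col]; the patent's
    P(x, y) corresponds to img[y, x] with x = column, y = row.
    """
    if mid is None:
        mid = img.shape[1] // 2  # assumed intermediate value (step 1)
    points = []
    y = 0
    while y <= 100:              # row scan, increment 8 (steps 16-17)
        x = 0
        while x <= 100:          # column scan, increment 1
            if x <= mid:         # condition (a): edge rises to the right
                hit = (img[y, x] > 100 and img[y, x + 1] > 100
                       and img[y, x + 4] < 100)
            else:                # condition (b): edge rises to the left
                hit = (img[y, x] > 100 and img[y, x - 1] > 100
                       and img[y, x - 4] < 100)
            if hit:
                x += 1                  # step 11: move to the next column
                points.append((x, y))   # step 12: record P(x, y)
                x += 2                  # steps 13-14: skip two more columns
            else:
                x += 1                  # step 14
        y += 8                          # step 16
    return points
```

The asymmetric conditions (a) and (b) look for a bright-to-dark transition of a few pixels' width, oriented outward from the image center, which is what a thin lane marking produces on each side of the lane.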
8. The lane line recognition method based on TensorFlow and OpenCV according to claim 1, wherein in step 5 a polynomial function is adopted as the basic form of the lane line fitting equation, and when each lane line fitting equation is established, the number of unknown coefficients equals the number of coordinate points extracted on that lane line.
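A sketch of the fitting rule in claim 8: with as many unknown coefficients as extracted points, the polynomial of degree n − 1 through n points interpolates every coordinate exactly. Fitting x as a function of y is an assumption made here because lane lines run roughly vertically in the image; the patent does not specify the fitting direction.

```python
import numpy as np


def fit_lane_line(points):
    # points: list of (x, y) lane coordinates from the scan step.
    # Degree = len(points) - 1, so the number of unknown coefficients
    # equals the number of extracted points and the fit is exact.
    xs = np.array([p[0] for p in points], dtype=float)
    ys = np.array([p[1] for p in points], dtype=float)
    coeffs = np.polyfit(ys, xs, deg=len(points) - 1)
    return np.poly1d(coeffs)  # evaluate as x = f(y)
```

Note that exact interpolation is sensitive to outliers and oscillates for many points (Runge's phenomenon), which is why practical systems often cap the degree at 2 or 3; the code above follows the claim as stated.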
CN201910036301.2A 2019-01-15 2019-01-15 Lane line identification method based on TensorFlow and OpenCV Active CN109871759B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910036301.2A CN109871759B (en) 2019-01-15 2019-01-15 Lane line identification method based on TensorFlow and OpenCV


Publications (2)

Publication Number Publication Date
CN109871759A CN109871759A (en) 2019-06-11
CN109871759B true CN109871759B (en) 2023-05-09

Family

ID=66917659

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910036301.2A Active CN109871759B (en) 2019-01-15 2019-01-15 Lane line identification method based on TensorFlow and OpenCV

Country Status (1)

Country Link
CN (1) CN109871759B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102682292B (en) * 2012-05-10 2014-01-29 清华大学 Method based on monocular vision for detecting and roughly positioning edge of road
CN104794447A (en) * 2015-04-22 2015-07-22 深圳市航盛电子股份有限公司 Vehicle-mounted tunnel recognition method and system based on OpenCv Kalman filter
CN104820822A (en) * 2015-04-22 2015-08-05 深圳市航盛电子股份有限公司 Vehicle-mounted road offset identification method and system based on OpenCv Kalman filter


Similar Documents

Publication Publication Date Title
CN111814722B (en) Method and device for identifying table in image, electronic equipment and storage medium
CN111723585B (en) Style-controllable image text real-time translation and conversion method
CN110738207B (en) Character detection method for fusing character area edge information in character image
Singh et al. Sniper: Efficient multi-scale training
CN108229490B (en) Key point detection method, neural network training method, device and electronic equipment
CN110414507B (en) License plate recognition method and device, computer equipment and storage medium
CN111160352B (en) Workpiece metal surface character recognition method and system based on image segmentation
CN110647795B (en) Form identification method
CN114529459B (en) Method, system and medium for enhancing image edge
CN111275034B (en) Method, device, equipment and storage medium for extracting text region from image
CN110598566A (en) Image processing method, device, terminal and computer readable storage medium
CN111680690A (en) Character recognition method and device
CN111507337A (en) License plate recognition method based on hybrid neural network
CN113591831A (en) Font identification method and system based on deep learning and storage medium
CN110991440B (en) Pixel-driven mobile phone operation interface text detection method
CN112509026A (en) Insulator crack length identification method
CN113657225B (en) Target detection method
CN107145888A (en) Video caption real time translating method
CN114519788A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111079516B (en) Pedestrian gait segmentation method based on deep neural network
CN109871759B (en) Lane line identification method based on TensorFlow and OpenCV
CN110610177A (en) Training method of character recognition model, character recognition method and device
CN116645325A (en) Defect marking method and device for photovoltaic panel, medium and electronic equipment
CN113705571B (en) Method and device for removing red seal based on RGB threshold, readable medium and electronic equipment
Valiente et al. A process for text recognition of generic identification documents over cloud computing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant