CN107832674B - Lane line detection method - Google Patents

Lane line detection method

Info

Publication number
CN107832674B
CN107832674B (application number CN201710957864.6A)
Authority
CN
China
Prior art keywords
point
points
pixel
fine
straight line
Prior art date
Legal status
Active
Application number
CN201710957864.6A
Other languages
Chinese (zh)
Other versions
CN107832674A (en)
Inventor
赵小明
刘飞
王永红
郑鹏珍
朱大炜
陈前
邵晓鹏
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University
Priority to CN201710957864.6A
Publication of CN107832674A
Application granted
Publication of CN107832674B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a lane line detection method comprising the following steps: acquiring an image in front of a vehicle and converting it into a grayscale image; coarsely positioning the lane line using the gray-level information of the image and recording the coarse positioning points; finely positioning the coarse positioning points and retaining the fine points; classifying the fine points to obtain a plurality of straight-line classes; and obtaining the lane line from the straight-line classes. In this technical scheme the whole image is downsampled by dividing it longitudinally into strips and blocks, and edge detection exploits the fact that lane markings are arranged longitudinally and change sharply in the horizontal direction within each strip, so that suspected lane edge points can be determined and then processed further. This reduces the amount of computation, eliminates interference points, and improves both the real-time performance of the algorithm and the reliability of lane detection.

Description

Lane line detection method
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a lane line detection method.
Background
With the rapid development of driver-assistance and autonomous-vehicle technology, whether a machine-vision sensor can accurately acquire the markings, signs and lane line information around a vehicle is one of the most important parts of a driver-assistance system. Real-time lane line detection and early warning keep the vehicle travelling in its own lane and play an important role in lane-departure warning, lane keeping and similar functions.
Currently, lane line detection research mainly models lanes with a straight-line model, a quadratic-curve model, a piecewise-switching model, and the like. With the straight-line model, lane edges are mostly detected by an edge-search method and the lanes are then identified by a Hough transform. The edge search usually relies on the Canny edge detector, wavelet transforms, the Sobel operator and similar algorithms, but these detect the edge information of many non-lane objects that are not arranged longitudinally. This interferes with the subsequent Hough transform, wastes CPU resources, slows the algorithm down, and makes it unusable on embedded hardware with limited general-purpose computing power. Moreover, line detection based on the Hough transform must map lines from image space into a parameter space, accumulate votes for every point that may lie on a line boundary, and finally determine the probability that those points belong to a line; the resulting computational load and poor real-time performance are a bottleneck of Hough-based lane detection. Since most in-vehicle systems are embedded systems, Hough-based lane detection is difficult to deploy widely on them.
To address the time consumption of the Hough transform, most approaches restrict the region of the image in which a lane may appear, for example by removing a certain range on the left and right of the image and part of the background at the top, keeping the remainder as a region of interest (ROI) and performing lane detection only within it. This does not fundamentally solve the problem of algorithm run time. It is therefore necessary to make fuller use of the characteristic information of the lane and of the differences between lane and non-lane objects in order to screen out the lane edges. In addition, because such a method only removes background blocks from a global point of view, it does not substantially reduce the interference from non-lane information, and it is unsuitable when the image acquired by the sensor does not contain a large background region.
Developing a lane line detection method that is not limited by the image content, requires little computation and can run on an embedded system is therefore an active research direction for those skilled in the art.
Disclosure of Invention
In view of the above problems, the present invention provides a lane line detection method; the specific implementation is as follows.
The embodiment of the invention provides a lane line detection method, which comprises the following steps:
step 1: acquiring an image in front of a vehicle, and converting the image into a gray scale image;
step 2: carrying out coarse positioning on the lane line by utilizing the gray information of the image, and recording a coarse positioning point;
step 3: carrying out fine positioning on the coarse positioning points, and retaining the fine points;
step 4: classifying the fine points to obtain a plurality of straight line classes;
step 5: acquiring the lane line according to the plurality of straight line classes.
In one embodiment of the present invention, the step 2 comprises:
step 21, dividing the image into a plurality of strips equally by utilizing the gray information of the image, and dividing each strip into a plurality of pixel blocks equally;
step 22, summing the pixel gray values of each pixel block to obtain the Sum value of each pixel block;
step 23, obtaining the gradient data of each strip according to the Sum value of each pixel block;
step 24, searching a maximum value point and a minimum value point in the gradient data of each strip;
and 25, recording the maximum value point and the minimum value point as the rough positioning point.
In one embodiment of the present invention, the step 3 comprises:
step 31, selecting a coarse positioning point Pi(x, y); taking the coarse positioning point Pi(x, y) as the center, taking M pixel rows above and below it and xOffset pixel columns to its left and right respectively;
step 32, performing a convolution operation on each pixel row around the coarse positioning point to obtain the pixel extreme point of that row;
step 33, computing the average value X1 of the abscissas of the pixel extreme points in the upper M pixel rows, and the average value X2 of the abscissas of the pixel extreme points in the lower M pixel rows;
step 34, judging whether the absolute value of the difference between each pixel extreme point in the upper M pixel rows and the average value X1 is greater than a preset value; if so, discarding that pixel extreme point, and if not, retaining it;
step 35, judging whether the absolute value of the difference between each pixel extreme point in the lower M pixel rows and the average value X2 is greater than a preset value; if so, discarding that pixel extreme point, and if not, retaining it;
step 36, sequentially executing steps 31 to 35 on all the rough positioning points;
and step 37, converting the retained coordinates of the plurality of pixel extreme points into fine point coordinates.
In one embodiment of the present invention, the step 36 comprises:
the abscissa of the fine point Pia(x, y) is:
Pia.x = Pim.x + (sumx1 + sumx2)/(N1 + N2)
wherein Pim.x is the abscissa value of the retained pixel extreme point; sumx1 is the sum of the abscissas of the pixel extreme points retained in the upper M pixel rows; N1 is the number of pixel extreme points retained in the upper M pixel rows; sumx2 is the sum of the abscissas of the pixel extreme points retained in the lower M pixel rows; N2 is the number of pixel extreme points retained in the lower M pixel rows.
In one embodiment of the present invention, the step 4 comprises:
step 41, taking the i-th point of the plurality of fine points as a reference point, and determining a first straight line class according to the reference point, wherein i = 1, 2, 3, 4, ...;
step 42, comparing the (i + 1) th fine point with the reference point;
if the (i + 1) th fine point is in the range determined by the reference point, recording the (i + 1) th fine point in the first straight line class, and taking the (i + 1) th fine point as a new reference point of the first straight line class;
if the (i + 1) th fine point is not in the range determined by the reference point, adding a new straight line class, and taking the (i + 1) th fine point as the reference point of the new straight line class;
and 43, comparing the plurality of non-classified fine points with the reference points of the first straight line class and the reference points of the newly added straight line class respectively until the classification of the plurality of fine points is finished.
In one embodiment of the present invention, the step 42 includes:
step 421, judging whether the Y value of the (i + 1) th fine point coordinate is equal to the Y value of the reference point coordinate;
if they are equal, the (i + 1)th fine point is not in the range determined by the reference point;
if not, go to step 422;
step 422, judging whether the slope of the (i + 1) th fine point is within a preset slope range;
if it is within the preset slope range, the (i + 1)th fine point is in the range determined by the reference point;
if it is not within the preset slope range, the (i + 1)th fine point is not in the range determined by the reference point.
In an embodiment of the present invention, the preset slope range is the slope range corresponding to the preset region in which the reference point is located, and is obtained as follows:
acquiring a slope mean value Kavg of a preset area where the reference point is located according to a preset slope information configuration table;
multiplying the slope mean value by a maximum slope coefficient and a minimum slope coefficient to obtain a slope range in the area where the reference point is located;
wherein the maximum slope coefficient is 1.2 and the minimum slope coefficient is 0.8.
In one embodiment of the present invention, the step 5 comprises:
screening the plurality of straight line classes to determine a left straight line class and a right straight line class of the lane line;
dividing the points of the left straight-line class into an upper part and a lower part according to the number of points, and calculating the average coordinates (x_upper_left, y_upper_left) and (x_lower_left, y_lower_left) respectively;
dividing the points of the right straight-line class into an upper part and a lower part according to the number of points, and calculating the average coordinates (x_upper_right, y_upper_right) and (x_lower_right, y_lower_right) respectively;
calculating the intermediate coordinates x_upper = (x_upper_left + x_upper_right)/2, y_upper = (y_upper_left + y_upper_right)/2; x_lower = (x_lower_left + x_lower_right)/2, y_lower = (y_lower_left + y_lower_right)/2;
connecting the intermediate coordinates (x_upper, y_upper) and (x_lower, y_lower) to obtain the lane line.
The invention has the beneficial effects that:
1. In this technical scheme the whole image is downsampled by dividing it longitudinally into strips and blocks, and edge detection exploits the longitudinal arrangement of lane markings and their sharp horizontal change within each strip: a simple gradient operator computes the horizontal gradient of the block data in each strip, and the suspected lane edge points can be coarsely detected by using the facts that the gradients of the left and right lane edges have opposite signs and that the gradient extreme points appear in pairs within the lane-width neighbourhood. This reduces the interference from edge information that is not arranged in the longitudinal direction and improves both the real-time performance of the algorithm and the reliability of lane detection.
2. After the coarse positioning points are finely positioned, the fine point positions of the lane edges are obtained; the slope information between fine points is then checked against the data of a preset lane configuration table, points satisfying the conditions are assigned to different straight-line classes, the straight-line classes whose point counts exceed a threshold and contain the most points are selected as the left and right sides of the lane line, and the upper and lower positions of these classes are averaged to obtain the lane line, completing the detection. Interference points that are not arranged in the longitudinal direction therefore need not be processed, which removes a large amount of useless computation, greatly reduces the processing load of the system and increases the processing speed.
Drawings
Fig. 1 is a flowchart of a lane line detection method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the image segmentation and blocking according to an embodiment of the present invention;
FIG. 3 is a schematic representation of gradient data for a single stripe in an embodiment of the present invention;
FIG. 4 is a schematic diagram of a coarse positioning point according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an embodiment of the present invention after fine positioning;
FIG. 6 is a diagram illustrating an embodiment of the present invention after classification;
FIG. 7 is a schematic view of a line display according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
As shown in FIG. 1 to FIG. 7 (listed in the Drawings section above), the working principle of the lane line detection method is described in more detail below.
As shown in FIG. 1, in the lane line detection method provided by the embodiment of the present invention, the image to be processed is first converted into a grayscale image, and coarse positioning, fine positioning, point classification and straight-line display are then performed in sequence. The method specifically comprises the following steps:
step 1: acquiring an image in front of a vehicle, and converting the image into a gray scale image;
step 2: carrying out coarse positioning on the lane line by utilizing the gray information of the image, and recording a coarse positioning point;
step 3: carrying out fine positioning on the coarse positioning points, and retaining the fine points;
step 4: classifying the fine points to obtain a plurality of straight line classes;
step 5: acquiring the lane line according to the plurality of straight line classes.
< coarse positioning >
Further, the rough positioning of the lane line is performed by using the gray information of the image, and the rough positioning point is recorded, including:
step 21, dividing the image into a plurality of strips equally by utilizing the gray information of the image, and dividing each strip into a plurality of pixel blocks equally;
step 22, summing the pixel gray values of each pixel block to obtain the Sum value of each pixel block;
step 23, obtaining the gradient data of each strip according to the Sum value of each pixel block;
step 24, searching a maximum value point and a minimum value point in the gradient data of each strip;
and 25, recording the maximum value point and the minimum value point as the rough positioning point.
Specifically, the method comprises the following steps:
For coarse positioning, for a W × H image (720 × 200 is taken as an example), the image is divided longitudinally into a plurality of strips (typically 40), each strip containing h rows of image data (h is typically 5), and each strip is divided into blocks w pixels wide (w is typically 5 or 10), giving W/w blocks per strip. After the blocks are divided, the following steps are carried out in sequence:
1. Sum the pixel gray values of each block of size w × h in each strip to obtain Sum, i.e. obtain the (W/w)-dimensional data Sum of each strip.
2. Perform a horizontal gradient calculation on the summed data Sum[i] of each block i by convolving the data with a template; the template can be a simple operator such as [-1, 0, 1] or [1, 0, -1].
Taking [ -1,0,1] as an example, the gradient data of a single band is obtained as:
Diff[i]=Sum[i+1]-Sum[i-1],i=1,2,3...(W/w-1) (1)
Taking the 720 × 200 image in FIG. 2 as an example, with the upper-left corner as the coordinate origin, the strip whose starting row is y = 50 is partitioned with h = 5 and w = 5 and the pixel-value sums are computed; a horizontal gradient calculation is then performed on each block of the strip using formula (1), giving the gradient data shown in FIG. 3.
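To make steps 1 and 2 above concrete, the following is a minimal NumPy sketch of the strip/block downsampling and of the horizontal gradient of formula (1); the function name block_gradients and the synthetic test image are illustrative assumptions rather than material from the patent.

```python
import numpy as np

def block_gradients(gray, h=5, w=5):
    """Strip/block downsampling: sum the gray values of every w*h block in
    each strip, then take the horizontal gradient Diff[i] = Sum[i+1] - Sum[i-1]
    (template [-1, 0, 1]) along the block axis of each strip."""
    H, W = gray.shape
    n_strips, n_blocks = H // h, W // w
    blocks = gray[:n_strips * h, :n_blocks * w].reshape(n_strips, h, n_blocks, w)
    sums = blocks.astype(np.int64).sum(axis=(1, 3))       # Sum, one row per strip
    diff = np.zeros_like(sums)
    diff[:, 1:-1] = sums[:, 2:] - sums[:, :-2]             # Diff, formula (1)
    return sums, diff

# Example with the 720 x 200 image size used in the text (40 strips, 144 blocks).
gray = np.random.randint(0, 256, size=(200, 720), dtype=np.uint8)
sums, diff = block_gradients(gray)
print(sums.shape, diff.shape)                              # (40, 144) (40, 144)
```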
3. Search the gradient data sequentially for gradient extreme-point pairs. The suspected lane edge points can be coarsely detected by using the facts that the gradient changes of the left and right lane edges are opposite in sign and that the gradient extreme points appear in pairs within the lane-width neighbourhood.
In the embodiment of the invention, when the template [-1, 0, 1] is used to calculate the gradient, the left lane edge point is a positive gradient maximum and the right edge point is a negative gradient minimum; the gradient amplitude of an extremum must exceed the threshold MaxMinTh, and the number of blocks separating the maximum and the minimum must lie within the lane-width threshold. That is, for each maximum point, its column-block index is denoted nl (left lane edge); when its gradient amplitude exceeds MaxMinTh, a minimum point whose gradient amplitude also exceeds MaxMinTh is searched for within the width threshold to its right, and its column-block index is denoted nr (right lane edge). If such a pair of extreme points (nl, nr) is found, the pair is regarded as a suspected lane, and the found pair of extreme points (nl, nr) is substituted into formula (2):
[Formula (2), published only as images in the original document: the expressions that map the block-index pair (nl, nr) of a strip back to pixel coordinates in the original image.]
Mapping back to the original image gives the coarse positioning point coordinates Pi(x, y), Pi+1(x, y) on the left and right sides of the lane; the position information of these points is stored, and the search for further extreme-point pairs in the current strip continues as described in item 3. Otherwise, the maximum point that was found is deleted from the candidate points, and the gradient data of the strip are searched sequentially for the next extreme-value pair.
Alternatively, the gradient can be calculated with the template [1, 0, -1]; in that case a negative minimum is searched for first, and a positive maximum is then searched for within the width-neighbourhood blocks.
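A minimal sketch of the extreme-point pair search of item 3, assuming the [-1, 0, 1] convention (positive maximum at the left edge, negative minimum within the lane-width window to its right); the adaptive threshold described next is passed in here as a precomputed number, and the names find_pairs and width_max are assumptions of this illustration.

```python
import numpy as np

def find_pairs(diff_row, max_min_th, width_max):
    """Scan one strip's gradient data for block-index pairs (nl, nr): a positive
    maximum followed, within width_max blocks, by a negative minimum, with both
    gradient amplitudes exceeding max_min_th."""
    pairs, used = [], set()
    maxima = [i for i in np.argsort(diff_row)[::-1] if diff_row[i] > max_min_th]
    for nl in maxima:                                 # strongest positive gradients first
        if nl in used:
            continue
        window = diff_row[nl + 1: nl + 1 + width_max]
        if window.size == 0:
            continue
        nr = nl + 1 + int(np.argmin(window))          # most negative gradient to the right
        if diff_row[nr] < -max_min_th:                # right edge found within the lane width
            pairs.append((nl, nr))
            used.update((nl, nr))
    return pairs

# Example on one strip's Diff row (illustrative values).
row = np.array([0, 5, 120, 8, -4, -110, 3, 0, 90, 2, -95, 0])
print(find_pairs(row, max_min_th=50, width_max=4))    # -> [(2, 5), (8, 10)]
```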
MaxMinTh is an adaptive threshold: it is adjusted according to the background brightness near the extremum and the position of the extremum in the image. The threshold MaxMinTh is calculated as follows:
First, a basic gradient threshold baseTh is set. Because a lane line is close to vertical in the middle of the image and increasingly inclined towards its two sides, the block-summed pixel values depend on position: the sums are larger towards the middle and smaller towards the two sides. A scale coefficient locateRate is therefore set for different positions of the image, and the basic gradient threshold baseTh is scaled accordingly in each position region. The mean brightness of the m blocks in the neighbourhood of each found extreme point is calculated as the background value bkgrd of that extreme point, and bkgrd is compared with a preset background maximum MaxBkgrdTh.
When bkgrd is less than or equal to MaxBkgrdTh:
MaxMinTh=baseTh*locateRate+bkgrd*LumaRate (3)
When bkgrd is greater than MaxBkgrdTh, the image may contain illumination interference that reduces the gradient change; the gradient-amplitude threshold should therefore be reduced correspondingly, by overTimes of the difference bkgrd - MaxBkgrdTh between the background value bkgrd at the extremum and the background maximum MaxBkgrdTh:
[Formula (4), published only as an image in the original document: the threshold expression for the case bkgrd > MaxBkgrdTh.]
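A minimal sketch of the adaptive threshold: the first branch follows formula (3) as printed above, while the over-bright branch is only a guess at the lost formula (4), lowering the threshold in proportion to bkgrd - MaxBkgrdTh via overTimes, and all default parameter values are illustrative assumptions.

```python
def max_min_th(base_th, locate_rate, bkgrd,
               luma_rate=0.1, max_bkgrd_th=180, over_times=0.05):
    """Adaptive gradient-amplitude threshold MaxMinTh for one extreme point.
    base_th: basic gradient threshold baseTh; locate_rate: position-dependent
    scale coefficient locateRate; bkgrd: mean brightness of the m blocks in
    the neighbourhood of the extremum."""
    th = base_th * locate_rate + bkgrd * luma_rate             # formula (3)
    if bkgrd > max_bkgrd_th:
        # Possible illumination interference: gradients shrink, so reduce the
        # threshold with the excess brightness (guess at the lost formula (4)).
        th -= (bkgrd - max_bkgrd_th) * over_times
    return max(th, 0.0)

# Example: the same extremum over a normally lit vs. an over-bright background.
print(max_min_th(base_th=200, locate_rate=1.0, bkgrd=120))     # 200*1.0 + 120*0.1 = 212.0
print(max_min_th(base_th=200, locate_rate=1.0, bkgrd=220))     # 222.0 - 40*0.05 = 220.0
```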
For the gradient data shown in FIG. 3, for example, the block coordinates of the extreme-point pairs finally found are (40, 42) and (81, 84); mapping back to the original image gives the coarse positioning point coordinates (202, 52), (212, 52), (407, 52) and (422, 52). The coarse positioning points of the whole image are shown in FIG. 4.
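It is worth noting that, although formula (2) survives only as images, the worked numbers above are consistent with mapping a block index n to the centre of its block, i.e. x ≈ n*w + w/2 and y ≈ y_start + h/2; with w = h = 5 and y_start = 50 this gives 40*5 + 2 ≈ 202, 42*5 + 2 = 212, 81*5 + 2 = 407, 84*5 + 2 = 422 and y ≈ 52, matching the coordinates listed above. This block-centre reading is an editorial consistency check, not the published formula.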
4. The above operations are carried out in sequence on the other strips of the image, finding the coarse positioning points Pi(x, y), Pi+1(x, y) on the left and right sides of the suspected lane in all strips of the whole image.
In this method, after the image is divided into strips and blocks, only a horizontal gradient calculation is performed on the block pixel-value sums; the lane extreme points are then searched for subject to the lane-width limit, the adaptively adjusted gradient-amplitude threshold and the direction constraint, which realises edge detection. The longitudinal arrangement of the lane and the directional information of the gradient changes on its left and right sides are fully exploited, largely avoiding the many interference edges produced when a Canny or Sobel operator is used for edge detection.
Dividing the image into strips downsamples the lane candidate points, and the subsequent calculations operate on only the few points remaining after coarse positioning; the approach of accelerating detection by globally removing image blocks at the top and on the left and right is avoided. The method can therefore be applied when the data acquired by the camera contain no sky background, and it can detect all lane lines in the image, from those distributed vertically in the middle to those on the left and right sides slanting gradually towards the middle.
< Fine localization >
Fine positioning is performed on the coarse positioning points and the fine points are retained; the fine positioning comprises the following steps:
step 31, selecting a coarse positioning point Pi(x, y); taking the coarse positioning point Pi(x, y) as the center, taking M pixel rows above and below it and xOffset pixel columns to its left and right respectively;
step 32, performing a convolution operation on each pixel row around the coarse positioning point to obtain the pixel extreme point of that row;
step 33, computing the average value X1 of the abscissas of the pixel extreme points in the upper M pixel rows, and the average value X2 of the abscissas of the pixel extreme points in the lower M pixel rows;
step 34, judging whether the absolute value of the difference between each pixel extreme point in the upper M pixel rows and the average value X1 is greater than a preset value; if so, discarding that pixel extreme point, and if not, retaining it;
step 35, judging whether the absolute value of the difference between each pixel extreme point in the lower M pixel rows and the average value X2 is greater than a preset value; if so, discarding that pixel extreme point, and if not, retaining it;
step 36, sequentially executing steps 31 to 35 on all the rough positioning points;
and step 37, converting the retained coordinates of the plurality of pixel extreme points into fine point coordinates.
After the coarse positioning points are finely positioned, the fine point positions of the lane edges are obtained; during fine positioning, interference points that do not obey the lane's rule of continuous, compact longitudinal arrangement are eliminated, and the fine edge-point coordinates of each strip and the slope information of the edges are stored. The specific steps are as follows:
1. For each detected coarse point Pi(x, y), gradient extreme points are searched for in the xOffset-column neighbourhoods of the m rows above and below Pi(x, y). For each of these rows the initial x coordinate is set to the x coordinate of the strip's coarse positioning point Pi(x, y). Within the xOffset range of Pi(x, y), the difference of the sums of the 4 or 5 pixel values in front of and behind each position is computed, i.e. the image data are convolved with the template [-1, -1, -1, -1, 0, 1, 1, 1, 1] or [-1, -1, -1, -1, -1, 0, 1, 1, 1, 1, 1], and the point with the largest gradient change within the xOffset range is taken as the extreme-value coordinate Pim(x, y) of that row. The extreme points Pim(x, y) of the m rows above and the m rows below are obtained in turn.
In the embodiment of the present invention, preferred values of m are 3 for 5 × 5 blocks and 5 for 10 × 10 blocks. xOffset is typically determined by the x coordinate of Pi(x, y): 30 is taken if the coarse positioning point lies in the first or last third of the image, and 15 if it lies in the middle third.
2. For the extreme points Pim(x, y) of the m rows above and the m rows below respectively, the mean values X1 and X2 of their x coordinates are computed; the x coordinate of each extreme point Pim(x, y) of the upper and lower m rows is then compared with the corresponding mean X1 or X2, and only the points within xTh pixels of the mean coordinate are retained (xTh is preferably 3). That is, the difference between each pixel extreme point and the corresponding mean coordinate is computed and its absolute value taken, giving the distance from the mean that determines whether the point is retained.
The number of points retained in the upper M pixel rows is denoted N1, and the number of points retained in the lower M pixel rows is denoted N2.
3. When N1 and N2 are both greater than a preset value PnTh, the sums sumx1 and sumx2 of the x coordinates of the remaining upper and lower points are computed, i.e. the x coordinates of all pixel extreme points retained in the upper M pixel rows are summed, and likewise for the lower M pixel rows; the respective means avg1 and avg2 are then calculated as avg1 = sumx1/N1 and avg2 = sumx2/N2.
It should be noted that, when M is 5, the preset value PnTh is 2, and when M is 10, the preset value PnTh is 3.
4. Using the means of the retained points, the x coordinate of the strip's coarse positioning point Pim(x, y) is converted into the x coordinate of the fine point Pia(x, y) according to the following formula:
Pia.x = Pim.x + (sumx1 + sumx2)/(N1 + N2).
5. the fine point positions of all the coarse positioning points are obtained, as shown in fig. 5.
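A minimal sketch of the fine positioning of a single coarse point, assuming a NumPy grayscale image; the 9-tap template and the values of m, xOffset, xTh and PnTh follow the preferred values above, while the helper name refine_point, the border handling and the reading of sumx1 and sumx2 as offsets from the coarse x coordinate are assumptions of this illustration.

```python
import numpy as np

TEMPLATE = np.array([-1, -1, -1, -1, 0, 1, 1, 1, 1])     # 4-point sum difference

def refine_point(gray, px, py, m=3, x_offset=15, x_th=3, pn_th=2):
    """Refine the coarse point (px, py): find the strongest gradient column in
    each of the m rows above and below, reject outliers against the upper and
    lower means, and average the survivors into the fine x coordinate."""
    H, W = gray.shape
    lo, hi = max(px - x_offset, 0), min(px + x_offset + 1, W)
    extrema = {}
    for r in range(max(py - m, 0), min(py + m + 1, H)):
        if r == py:
            continue
        # Correlate the row window with the template (np.convolve flips the kernel).
        grad = np.convolve(gray[r, lo:hi].astype(np.int64), TEMPLATE[::-1], mode="same")
        extrema[r] = lo + int(np.argmax(np.abs(grad)))    # column of strongest change

    def survivors(xs):
        if not xs:
            return []
        mean = float(np.mean(xs))                          # X1 or X2
        return [x for x in xs if abs(x - mean) <= x_th]    # keep points near the mean

    keep_u = survivors([x for r, x in extrema.items() if r < py])
    keep_l = survivors([x for r, x in extrema.items() if r > py])
    if len(keep_u) <= pn_th or len(keep_l) <= pn_th:
        return None                                        # too few consistent rows: discard
    # Pia.x = Pim.x + (sumx1 + sumx2) / (N1 + N2), with the sums taken as offsets here.
    offsets = [x - px for x in keep_u + keep_l]
    return px + sum(offsets) / len(offsets), py
```

Bookkeeping such as the strip index and the stored slope information of each retained fine point is omitted from this sketch.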
< Classification Point >
Because the gradient changes of the left and right lane edges are opposite in sign and the gradient extreme points appear in pairs within the lane-width neighbourhood, in the embodiment of the invention either each single fine point or all of the retained paired fine points can be classified: the slope of each fine point is calculated and tested against a preset range, and when the slopes satisfy the maximum and minimum slope limits the fine points are assigned to the same straight-line class, i.e. regarded as points on the same straight line. The classification method comprises the following steps:
step 41, taking the i-th point of the plurality of fine points as a reference point, and determining a first straight line class according to the reference point, wherein i = 1, 2, 3, 4, ...;
step 42, comparing the (i + 1) th fine point with the reference point;
if the (i + 1) th fine point is in the range determined by the reference point, recording the (i + 1) th fine point in the first straight line class, and taking the (i + 1) th fine point as a new reference point of the first straight line class;
if the (i + 1) th fine point is not in the range determined by the reference point, adding a new straight line class, and taking the (i + 1) th fine point as the reference point of the new straight line class;
and 43, comparing the plurality of non-classified fine points with the reference points of the first straight line class and the reference points of the newly added straight line class respectively until the classification of the plurality of fine points is finished.
Wherein step 42 comprises:
step 421, judging whether the Y value of the (i + 1) th fine point coordinate is equal to the Y value of the reference point coordinate;
if they are equal, the (i + 1)th fine point is not in the range determined by the reference point;
if not, go to step 422;
step 422, judging whether the slope of the (i + 1) th fine point is within a preset slope range;
if it is within the preset slope range, the (i + 1)th fine point is in the range determined by the reference point;
if it is not within the preset slope range, the (i + 1)th fine point is not in the range determined by the reference point.
Specifically, the paired fine point classification is taken as an example:
1. In the embodiment of the invention, the first pair of fine points P0a(x, y), P1a(x, y) is taken as the pair of reference points. Starting from the second pair of fine points Pia(x, y), P(i+1)a(x, y): when the y coordinates of this pair and of the reference pair are judged to be unequal and the difference between the y coordinates is less than the threshold yTh, the pairs are not in the same strip and are within the threshold range of the longitudinal comparison; the slope information of the fine-point pair is then calculated and stored, namely, for Pia(x, y) versus P0a(x, y) and P(i+1)a(x, y) versus P1a(x, y), the slopes ki = (Pia.x - P0a.x)/(Pia.y - P0a.y) and ki+1 = (P(i+1)a.x - P1a.x)/(P(i+1)a.y - P1a.y);
When the y coordinates are equal, the pair is stored as the reference points of a new straight-line class, and the next pair of fine points is then selected for the above operation.
In this embodiment, the threshold yTh is 5.
2. For the calculated slope information ki, ki+1 of the i-th pair of fine points, the slope mean Kavg of the corresponding fine points is obtained from the data of the preset slope-information configuration table for the different image position regions. When MinRate*Kavg <= ki <= MaxRate*Kavg and MinRate*Kavg <= ki+1 <= MaxRate*Kavg, the fine points and their corresponding reference points belong to the same straight lines; they are assigned to the respective straight-line classes, and the fine-point pair replaces the original reference-point pair as the new reference points of those classes.
Subsequent fine points are compared with the new reference points of the straight-line classes to judge whether they are points of those classes.
If the slope of a fine point is judged not to lie in the slope range, the point is stored as the reference point of a new straight-line class, and the next fine point is then selected for the above operation.
Typically, the minimum slope coefficient MinRate takes 0.8 and the maximum slope coefficient MaxRate takes 1.2.
The classification process is carried out on all the fine points, and finally the retained fine points respectively form a plurality of straight line classes.
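A minimal sketch of the slope-based classification for single fine points; kavg_lookup stands in for the preset slope-information configuration table, which the patent does not reproduce, and ordering the slope bounds so that negative Kavg values also work is an implementation assumption.

```python
def classify_points(fine_points, kavg_lookup, y_th=5, min_rate=0.8, max_rate=1.2):
    """Group fine points (x, y) into straight-line classes: a point joins a class
    when its slope to the class reference point lies in [MinRate*Kavg, MaxRate*Kavg]."""
    classes = []                               # each class: {"points": [...], "ref": (x, y)}
    for p in fine_points:
        placed = False
        for cls in classes:
            rx, ry = cls["ref"]
            if p[1] == ry or abs(p[1] - ry) >= y_th:
                continue                       # same strip, or too far apart vertically
            k = (p[0] - rx) / (p[1] - ry)      # slope of the candidate point
            kavg = kavg_lookup(rx, ry)         # preset slope mean for the reference region
            lo, hi = sorted((min_rate * kavg, max_rate * kavg))
            if lo <= k <= hi:
                cls["points"].append(p)
                cls["ref"] = p                 # the new point becomes the class reference
                placed = True
                break
        if not placed:
            classes.append({"points": [p], "ref": p})
    return classes

# Example: preset slope mean -1 on the left half of a 720-wide image, +1 on the right half.
pts = [(100, 190), (104, 186), (108, 182), (600, 190), (596, 186)]
classes = classify_points(pts, kavg_lookup=lambda x, y: -1.0 if x < 360 else 1.0)
print([len(c["points"]) for c in classes])     # -> [3, 2]
```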
The process of classifying fine-point pairs is the same as that of classifying single fine points. In the embodiment of the invention, after all fine points have been classified, the straight-line classes are screened to determine the left and right straight-line classes of the lane line:
1. After all fine points are classified, each straight-line class is screened by its point count PmNum: a class whose point count is larger than a preset value is representative and is retained, while a class whose point count is particularly small, i.e. smaller than the preset value, is not representative and is discarded. The preset value is usually 4.
For the remaining line classes, the slope Km is calculated.
2. The point count PmNum of each straight-line class is adjusted using its slope, so that the point-count weight of lines located in the middle region of the image is reduced and that of the inclined lines on the left and right sides is increased. The adjustment formula is:
PmNum2=PmNum/(moffset-poffset*Km) (5)
Typically, moffset is taken as 1500 and poffset as 2; step 5 is then executed.
3. As shown in FIG. 6, for the selected straight-line classes, when the extension of the line computed from the slope Km falls below 0 or beyond the image width, the line represented by that class is an inclined line in the image, i.e. it lies in the left or right region; among the classes whose point counts exceed the threshold BandNumLR, the two classes with the largest point counts are selected as the left and right sides of the final lane line. Otherwise the line lies in the middle region of the image and is discarded.
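A minimal sketch of the screening and weighting of items 1 to 3, taking each class as a list of (x, y) fine points; the least-squares estimate of Km, the extension test at the top and bottom image rows and the value of band_num_lr are assumptions, since the patent names these quantities without fixing them. Its input would be the point lists produced by the classification sketch above.

```python
def select_left_right(classes, img_w, img_h, min_points=4,
                      moffset=1500, poffset=2, band_num_lr=3):
    """Screen straight-line classes and return the two chosen as the left and
    right sides of the lane line."""
    scored = []
    for pts in classes:
        n = len(pts)
        if n < min_points:                                # not representative: discard
            continue
        mean_x = sum(p[0] for p in pts) / n
        mean_y = sum(p[1] for p in pts) / n
        denom = sum((p[1] - mean_y) ** 2 for p in pts)
        km = (sum((p[0] - mean_x) * (p[1] - mean_y) for p in pts) / denom
              if denom else 0.0)                          # slope Km = dx/dy of the class
        weight = n / (moffset - poffset * km)             # PmNum2, formula (5)
        # One reading of item 3: extend the line to the top and bottom rows and
        # keep the class only if the extension leaves the image laterally.
        x_top = mean_x + km * (0 - mean_y)
        x_bottom = mean_x + km * (img_h - mean_y)
        inclined = not (0 <= x_top <= img_w and 0 <= x_bottom <= img_w)
        if inclined and n > band_num_lr:
            scored.append((weight, pts))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [pts for _, pts in scored[:2]]                 # left and right line classes
```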
In the embodiment of the invention, the Hough transform is not used to calculate line parameters during line detection; instead, the slope information on the left and right sides of the lane is compared with preset information, and the screened fine points are distributed into different straight-line classes on that basis. This essentially removes the time-consuming parameter calculation of Hough detection, greatly increases the operation speed, and makes real-time lane line detection possible on an embedded system.
< straight line display >
The points of the left straight-line class are divided into an upper part and a lower part according to the number of points, and the average coordinates (x_upper_left, y_upper_left) and (x_lower_left, y_lower_left) are calculated respectively;
the points of the right straight-line class are divided into an upper part and a lower part according to the number of points, and the average coordinates (x_upper_right, y_upper_right) and (x_lower_right, y_lower_right) are calculated respectively;
the intermediate coordinates are calculated as x_upper = (x_upper_left + x_upper_right)/2, y_upper = (y_upper_left + y_upper_right)/2; x_lower = (x_lower_left + x_lower_right)/2, y_lower = (y_lower_left + y_lower_right)/2;
as shown in FIG. 7, the intermediate coordinates (x_upper, y_upper) and (x_lower, y_lower) are connected to obtain the lane line, completing the lane line detection.
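A minimal sketch of this final display step; the function name lane_midline and its (top, bottom) return value are naming assumptions, and its inputs are the two line classes selected by the screening step.

```python
def lane_midline(left_pts, right_pts):
    """Average the upper and lower halves of the left and right line classes and
    connect the two intermediate coordinates to obtain the lane line to display."""
    def half_means(pts):
        assert len(pts) >= 2, "a screened line class has at least a few points"
        pts = sorted(pts, key=lambda p: p[1])             # sort by y: upper half first
        half = len(pts) // 2
        mean = lambda q: (sum(p[0] for p in q) / len(q), sum(p[1] for p in q) / len(q))
        return mean(pts[:half]), mean(pts[half:])

    (x_ul, y_ul), (x_ll, y_ll) = half_means(left_pts)     # upper-left, lower-left means
    (x_ur, y_ur), (x_lr, y_lr) = half_means(right_pts)    # upper-right, lower-right means
    top = ((x_ul + x_ur) / 2, (y_ul + y_ur) / 2)          # (x_upper, y_upper)
    bottom = ((x_ll + x_lr) / 2, (y_ll + y_lr) / 2)       # (x_lower, y_lower)
    return top, bottom                                     # segment drawn as the lane line

# Example with two small point sets (illustrative values).
left = [(120, 100), (110, 130), (100, 160), (90, 190)]
right = [(600, 100), (610, 130), (620, 160), (630, 190)]
print(lane_midline(left, right))   # -> ((360.0, 115.0), (360.0, 175.0))
```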
In summary, specific examples have been used above to describe the implementation of the lane line detection method provided by the embodiments of the present invention; the description of these embodiments is only intended to help in understanding the scheme and its core idea. A person skilled in the art may, following the idea of the present invention, vary the specific embodiments and the scope of application. Accordingly, the content of this specification should not be construed as limiting the present invention, whose scope is defined by the appended claims.

Claims (7)

1. A lane line detection method, comprising:
step 1: acquiring an image in front of a vehicle, and converting the image into a gray scale image;
step 2: carrying out coarse positioning on the lane line by utilizing the gray information of the image, and recording a coarse positioning point;
step 3: carrying out fine positioning on the coarse positioning points, and retaining the fine points;
step 4: classifying the fine points to obtain a plurality of straight line classes;
step 5: acquiring the lane line according to the plurality of straight line classes;
the step 2 comprises the following steps:
step 21, dividing the image into a plurality of strips equally by utilizing the gray information of the image, and dividing each strip into a plurality of pixel blocks equally;
step 22, summing the pixel gray values of each pixel block to obtain the Sum value of each pixel block;
step 23, obtaining the gradient data of each strip according to the Sum value of each pixel block;
step 24, searching a maximum value point and a minimum value point in the gradient data of each strip;
step 25, recording the maximum value point and the minimum value point as the rough positioning point;
specifically, during coarse positioning, for an image of size W × H, the image is longitudinally divided into a plurality of strips, each strip containing h rows of image data; each strip is divided into blocks w pixels wide, W/w blocks in total, and the following steps are performed in sequence after the division into blocks:
summing the pixel gray values of each block of size w × h in each strip to obtain Sum, i.e. obtaining the (W/w)-dimensional data Sum of each strip; performing a horizontal gradient calculation on the summed data Sum[i] of each block i by convolving the data with a template, wherein the template can be a simple operator such as [-1, 0, 1] or [1, 0, -1];
taking [ -1,0,1] as an example, the gradient data of a single band is obtained as:
Diff[i]=Sum[i+1]-Sum[i-1],i=1,2,3...(W/w-1)
and taking the upper-left corner of the image as the coordinate origin, partitioning the strips of the image according to the sizes h and w to calculate the pixel-value sums, and then performing the horizontal gradient calculation on each block of data of the strips using the above gradient-data formula.
2. The method of claim 1, wherein step 3 comprises:
step 31, selecting a coarse positioning point Pi (x, y), taking the coarse positioning point Pi (x, y) as a center, taking M pixel lines up and down respectively, and taking xOffset pixel points left and right respectively;
step 32, performing convolution operation on the coarse positioning point of each pixel row to obtain a pixel extreme point of each pixel row;
step 33, computing the average value X1 of the abscissas of the pixel extreme points in the upper M pixel rows, and the average value X2 of the abscissas of the pixel extreme points in the lower M pixel rows;
step 34, judging whether the absolute value of the difference between each pixel extreme point in the upper M pixel rows and the average value X1 is greater than a preset value; if so, discarding that pixel extreme point, and if not, retaining it;
step 35, judging whether the absolute value of the difference between each pixel extreme point in the lower M pixel rows and the average value X2 is greater than a preset value; if so, discarding that pixel extreme point, and if not, retaining it;
step 36, sequentially executing steps 31 to 35 on all the rough positioning points;
and step 37, converting the retained coordinates of the plurality of pixel extreme points into fine point coordinates.
3. The method of claim 2, wherein the step 36 comprises:
the abscissa of the fine point Pia (x, y) is:
Pia.x=Pim.x+(sumx1+sumx2)/(N1+N2)
wherein Pim.x is the abscissa value of the retained pixel extreme point; sumx1 is the sum of the abscissas of the pixel extreme points retained in the upper M pixel rows; N1 is the number of pixel extreme points retained in the upper M pixel rows; sumx2 is the sum of the abscissas of the pixel extreme points retained in the lower M pixel rows; N2 is the number of pixel extreme points retained in the lower M pixel rows.
4. The method of claim 2, wherein the step 4 comprises:
step 41, taking the i-th point of the plurality of fine points as a reference point, and determining a first straight line class according to the reference point, wherein i = 0, 1, 2, 3, 4, ...;
step 42, comparing the (i + 1) th fine point with the reference point;
if the (i + 1) th fine point is in the range determined by the reference point, recording the (i + 1) th fine point in the first straight line class, and taking the (i + 1) th fine point as a new reference point of the first straight line class;
if the (i + 1) th fine point is not in the range determined by the reference point, adding a new straight line class, and taking the (i + 1) th fine point as the reference point of the new straight line class;
and 43, comparing the plurality of non-classified fine points with the reference points of the first straight line class and the reference points of the newly added straight line class respectively until the classification of the plurality of fine points is finished.
5. The method of claim 4, wherein said step 42 comprises:
step 421, judging whether the Y value of the (i + 1) th fine point coordinate is equal to the Y value of the reference point coordinate;
if they are equal, the (i + 1)th fine point is not in the range determined by the reference point;
if not, go to step 422;
step 422, judging whether the slope of the (i + 1) th fine point is within a preset slope range;
if it is within the preset slope range, the (i + 1)th fine point is in the range determined by the reference point;
if it is not within the preset slope range, the (i + 1)th fine point is not in the range determined by the reference point.
6. The method of claim 5, wherein the preset slope range is a corresponding slope range in a preset area where the reference point coordinate is located, and the method comprises:
acquiring a slope mean value Kavg of a preset area where the reference point is located according to a preset slope information configuration table;
multiplying the slope mean value by a maximum slope coefficient and a minimum slope coefficient to obtain a slope range in the area where the reference point is located;
wherein the maximum slope coefficient is 1.2 and the minimum slope coefficient is 0.8.
7. The method of claim 4, wherein the step 5 comprises:
screening the plurality of straight line classes to determine a left straight line class and a right straight line class of the lane line;
dividing the points of the left straight-line class into an upper part and a lower part according to the number of points, and calculating the average coordinates (x_upper_left, y_upper_left) and (x_lower_left, y_lower_left) respectively;
dividing the points of the right straight-line class into an upper part and a lower part according to the number of points, and calculating the average coordinates (x_upper_right, y_upper_right) and (x_lower_right, y_lower_right) respectively;
calculating the intermediate coordinates x_upper = (x_upper_left + x_upper_right)/2, y_upper = (y_upper_left + y_upper_right)/2; x_lower = (x_lower_left + x_lower_right)/2, y_lower = (y_lower_left + y_lower_right)/2;
connecting the intermediate coordinates (x_upper, y_upper) and (x_lower, y_lower) to obtain the lane line.
CN201710957864.6A 2017-10-16 2017-10-16 Lane line detection method Active CN107832674B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710957864.6A CN107832674B (en) 2017-10-16 2017-10-16 Lane line detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710957864.6A CN107832674B (en) 2017-10-16 2017-10-16 Lane line detection method

Publications (2)

Publication Number Publication Date
CN107832674A CN107832674A (en) 2018-03-23
CN107832674B true CN107832674B (en) 2021-07-09

Family

ID=61647976

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710957864.6A Active CN107832674B (en) 2017-10-16 2017-10-16 Lane line detection method

Country Status (1)

Country Link
CN (1) CN107832674B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109117757B (en) * 2018-07-27 2022-02-22 四川大学 Method for extracting guy cable in aerial image
CN110490033B (en) * 2018-10-29 2022-08-23 毫末智行科技有限公司 Image processing method and device for lane detection
CN110135252A (en) * 2019-04-11 2019-08-16 长安大学 A kind of adaptive accurate lane detection and deviation method for early warning for unmanned vehicle
CN111178193A (en) * 2019-12-18 2020-05-19 深圳市优必选科技股份有限公司 Lane line detection method, lane line detection device and computer-readable storage medium
CN111460072B (en) * 2020-04-01 2023-10-03 北京百度网讯科技有限公司 Lane line detection method, device, equipment and storage medium
CN112581473B (en) * 2021-02-22 2021-05-18 常州微亿智造科技有限公司 Method for realizing surface defect detection gray level image positioning algorithm

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140132210A (en) * 2013-05-07 2014-11-17 숭실대학교산학협력단 Lane detection method and system
CN105224909A (en) * 2015-08-19 2016-01-06 奇瑞汽车股份有限公司 Lane line confirmation method in lane detection system
CN105426863A (en) * 2015-11-30 2016-03-23 奇瑞汽车股份有限公司 Method and device for detecting lane line
CN107025432A (en) * 2017-02-28 2017-08-08 合肥工业大学 A kind of efficient lane detection tracking and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Vision-based approach towards lane line detection and vehicle localization;Xinxin Du et al.;《Machine Vision and Applications》;20151119;175-191 *
Research on identification technology of dangerous driving states of passenger vehicles based on environment perception technology (基于环境感知技术的客运车辆危险行驶状态辨识技术研究); Liu Yongtao (刘永涛); China Doctoral Dissertations Full-text Database, Engineering Science and Technology II; 20170215; C035-27 *

Also Published As

Publication number Publication date
CN107832674A (en) 2018-03-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant