CN107451566B - Lane line display method and device and computer-readable storage medium - Google Patents


Info

Publication number
CN107451566B
Authority
CN
China
Prior art keywords
lane line
lane
position parameters
scene image
lines
Prior art date
Legal status
Expired - Fee Related
Application number
CN201710653058.XA
Other languages
Chinese (zh)
Other versions
CN107451566A (en)
Inventor
高语函
李阳
高伟杰
Current Assignee
Hisense Co Ltd
Original Assignee
Hisense Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Co Ltd
Priority to CN201710653058.XA
Publication of CN107451566A
Application granted
Publication of CN107451566B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road


Abstract

The invention discloses a lane line display method and device and a computer-readable storage medium, belonging to the technical field of automobile driving assistance. The method comprises the following steps: detecting lane lines in a scene image based on the Hough transform; when exactly one lane line is detected in the scene image, taking the detected lane line as a first lane line, and searching the stored position parameters of a plurality of lane lines, based on the position parameter of the first lane line and a lane line constraint criterion, for the position parameter of a second lane line that belongs to the same group as the first lane line; and when the position parameter of the second lane line exists among the stored position parameters, displaying the first lane line and the second lane line in the scene image. This avoids the situation in which a lane line determined by a tracking algorithm fails to match the actual lane line when only one lane line is detected in the scene image, thereby improving the accuracy of lane line display and helping to prevent traffic accidents.

Description

Lane line display method and device and computer-readable storage medium
Technical Field
The invention relates to the technical field of automobile driving assistance, and in particular to a lane line display method and device and a computer-readable storage medium.
Background
While a vehicle is being driven, driver error or other causes may shift the relative position of the vehicle and the lane line, which can lead to a collision with a vehicle in an adjacent lane and cause a traffic accident. Lane lines can therefore be captured during driving and displayed, so that the driver can be warned when the relative position of the vehicle and the lane line shifts, helping to avoid traffic accidents.
In the related art, to detect such a shift in the relative position of the vehicle and the lane line, a camera may be installed at the front of the vehicle to capture images of the scene ahead. For each captured frame, the scene image is converted to grayscale, a region of interest is extracted, filtering and contour preprocessing are applied to the region of interest, and lane lines are determined from the processed region of interest through the Hough transform. However, when a lane line is occluded by an obstacle such as another vehicle, or is unclear due to wear, the Hough transform may find only one lane line, or none at all, in the scene image. In that case, two lane lines may be determined through a tracking algorithm based on the lane lines found in the N consecutive frames of scene images captured before the current one, and the resulting lane lines are displayed, where N is a positive integer greater than or equal to 1.
However, when only one lane line is determined from the scene image, the lane line produced by the tracking algorithm may not coincide with the actual lane line. If the relative position between the vehicle and the actual lane line has shifted but the relative position between the vehicle and the tracked lane line has not, the driver may be misled, which can easily cause a traffic accident. In addition, if N is set too small and no lane line can be determined in those N frames, no lane line can be output afterwards, so the lane line is lost; if N is set too large, a lane line is still displayed even when the vehicle is traveling in an area with no lanes, producing a display error.
Disclosure of Invention
In order to solve the problem of low lane line display accuracy in the related art, embodiments of the present invention provide a lane line display method and device and a computer-readable storage medium. The technical solution is as follows:
according to a first aspect of embodiments of the present invention, there is provided a lane line display method, including:
detecting a lane line in the scene image based on Hough transform;
when one lane line is detected from the scene image, the detected lane line is used as a first lane line, and based on the position parameters of the first lane line and a lane line constraint criterion, the position parameters of a second lane line which belongs to the same group as the first lane line are searched from the stored position parameters of a plurality of lane lines, wherein the first lane line is a left lane line or a right lane line;
when the position parameter of the second lane line exists in the position parameters of the plurality of lane lines, the first lane line and the second lane line are displayed in the scene image based on the position parameter of the first lane line and the position parameter of the second lane line.
Optionally, the searching, based on the position parameter of the first lane line and the lane line constraint criterion, for the position parameter of a second lane line belonging to the same group as the first lane line among the stored position parameters of the plurality of lane lines includes:
determining the distance differences and corresponding angle differences between the first lane line and the plurality of lane lines based on the position parameter of the first lane line and the position parameters of the plurality of lane lines;
and when a distance difference smaller than a preset distance exists among the plurality of distance differences and the corresponding angle difference is smaller than a preset angle, determining that the position parameter of the second lane line exists among the position parameters of the plurality of lane lines.
Optionally, the detecting a lane line in the captured scene image based on the Hough transform includes:
acquiring a region of interest in the scene image, and performing image preprocessing on the region of interest to obtain a binarized image;
acquiring the position parameters, in a polar coordinate system, corresponding to the pixel points with a gray value of 255 in the binarized image;
and when the acquired position parameters do not meet the lane line constraint criterion but include a position parameter that satisfies a preset similarity constraint condition with the position parameter of a stable lane line, determining that one lane line is detected in the scene image.
Optionally, after acquiring the position parameters corresponding to the pixel points with a gray value of 255 in the binarized image in the polar coordinate system, the method further includes:
determining the overlap count of each acquired position parameter, and sorting the acquired position parameters in descending order of overlap count;
obtaining the top L position parameters from the sorting result, where L is a positive integer not less than 2;
and acquiring, from the L position parameters, the position parameters corresponding to lane lines that lie in a preset region of the binarized image and that can be detected in each of the previous S consecutive frames of scene images adjacent to the scene image, where S is a positive integer greater than 1.
Optionally, after detecting a lane line in the captured scene image based on the Hough transform, the method further includes:
when no lane line is detected in the scene image, determining the number of frames in which no lane line was detected among the previous consecutive frames of scene images adjacent to the scene image;
and if the determined number of frames is less than T, displaying a stable lane line in the scene image based on the position parameter of the stable lane line, where T is a positive integer greater than 1.
According to a second aspect of embodiments of the present invention, there is provided a display device of a lane line, the device including:
the detection module is used for detecting a lane line in the scene image based on the Hough transform;
the searching module is used for taking the detected lane line as a first lane line when one lane line is detected in the scene image, and searching the stored position parameters of a plurality of lane lines, based on the position parameter of the first lane line and the lane line constraint criterion, for the position parameter of a second lane line that belongs to the same group as the first lane line, where the first lane line is a left lane line or a right lane line;
a first display module, configured to display the first lane line and the second lane line in the scene image based on the position parameter of the first lane line and the position parameter of the second lane line when the position parameter of the second lane line exists in the position parameters of the plurality of lane lines.
Optionally, the search module includes:
the first determining submodule is used for determining distance difference values and corresponding angle difference values between the first lane line and the plurality of lane lines on the basis of the position parameters of the first lane line and the position parameters of the plurality of lane lines;
and the second determining submodule is used for determining that the position parameter of the second lane line exists in the position parameters of the lane lines when the distance difference smaller than the preset distance exists in the plurality of distance differences and the corresponding angle difference is smaller than the preset angle.
Optionally, the detection module includes:
the processing submodule is used for acquiring a region of interest in the scene image and performing image preprocessing on the region of interest to obtain a binarized image;
the first acquisition submodule is used for acquiring the position parameters, in a polar coordinate system, corresponding to the pixel points with a gray value of 255 in the binarized image;
and the third determining submodule is used for determining that one lane line is detected in the scene image when the acquired position parameters do not meet the lane line constraint criterion but include a position parameter that satisfies a preset similarity constraint condition with the position parameter of a stable lane line.
Optionally, the detection module further includes:
the fourth determining submodule is used for determining the overlap count of each acquired position parameter and sorting the acquired position parameters in descending order of overlap count;
a second acquisition submodule, configured to obtain the top L position parameters from the sorting result, where L is a positive integer not less than 2;
and a third acquisition submodule, configured to acquire, from the L position parameters, the position parameters corresponding to lane lines that lie in a preset region of the binarized image and that can be detected in each of the previous S consecutive frames of scene images adjacent to the scene image, where S is a positive integer greater than 1.
Optionally, the apparatus further comprises:
a determining module configured to determine, when a lane line is not detected from the scene image, a number of frames of the scene image in which the lane line is not detected in a plurality of frames of consecutive scene images adjacent to the scene image;
and the second display module is used for displaying the stable lane line in the scene image based on the position parameter of the stable lane line if the determined frame number is less than T, wherein T is a positive integer greater than 1.
According to a third aspect of the embodiments of the present disclosure, there is provided a display device of a lane line, the device including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the method of the first aspect.
According to a fourth aspect of embodiments of the present invention, there is provided a computer-readable storage medium having stored thereon instructions which, when executed by a processor, implement the steps of the method of the first aspect.
The technical solution provided by the embodiments of the present invention has the following beneficial effects. A lane line in the scene image is detected based on the Hough transform. When one lane line is detected in the scene image, the detected lane line is taken as a first lane line; the position parameter of a second lane line belonging to the same group as the first lane line is searched for among the stored position parameters of a plurality of lane lines, based on the position parameter of the first lane line and the lane line constraint criterion; and the first and second lane lines are displayed in the scene image based on their position parameters. In other words, when only one stable lane line exists in the current scene, the other lane line is determined by matching rather than by directly displaying a lane line produced by a tracking algorithm. The embodiments of the invention thereby improve the accuracy of lane line display and help to avoid traffic accidents.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a lane line display method according to an embodiment of the present invention;
fig. 2 is a flowchart of another lane line display method according to an embodiment of the present invention;
fig. 3A is a schematic structural diagram of a first lane line display device according to an embodiment of the present invention;
fig. 3B is a schematic structural diagram of a second lane line display device according to an embodiment of the present invention;
fig. 3C is a schematic structural diagram of a third lane line display device according to an embodiment of the present invention;
fig. 3D is a schematic structural diagram of a fourth lane line display device according to an embodiment of the present invention;
fig. 3E is a schematic structural diagram of a fifth lane line display device according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
For convenience of understanding, before explaining the embodiments of the present invention in detail, terms and application scenarios related to the embodiments of the present invention will be described.
First, some terms related to embodiments of the present invention will be described and explained.
Position parameter
A position parameter is the polar-coordinate representation, in the polar coordinate system, of a point in the rectangular coordinate system. For example, in the embodiment of the present disclosure, when a pixel point with a gray value of 255 is detected in the binarized image, the pixel point can be mapped into the polar coordinate system, yielding a curve formed by a plurality of polar coordinate points; the polar coordinates (ρ, θ) of each such point may be called a position parameter of the pixel point in the polar coordinate system. Here ρ is the distance from the origin of the rectangular coordinate system to the corresponding straight line, and θ is the angle between the x-axis and the normal of that straight line.
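As a sketch of this mapping (not code from the patent), the following assumes a 1° angular resolution; each pixel traces the sinusoid ρ = x·cosθ + y·sinθ, and pixels lying on the same straight line yield curves that intersect at that line's (ρ, θ):

```python
import math

def pixel_to_curve(x, y, theta_steps=180):
    """Map one pixel (x, y) to its curve in the polar coordinate system:
    for each angle theta, rho = x*cos(theta) + y*sin(theta).
    Returns a list of (rho, theta_deg) points."""
    curve = []
    for t in range(theta_steps):
        theta = math.radians(t)
        rho = x * math.cos(theta) + y * math.sin(theta)
        curve.append((rho, t))
    return curve

# Two pixels on the line y = x: their curves intersect at theta = 135 deg, rho = 0.
c1 = dict((t, r) for r, t in pixel_to_curve(1, 1))
c2 = dict((t, r) for r, t in pixel_to_curve(2, 2))
```

The shared intersection (ρ ≈ 0, θ = 135°) is exactly the position parameter of the line through both pixels.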
Image enhancement
Image enhancement is an image processing method that turns an originally unclear image into a clear one, or emphasizes certain features of interest while suppressing those of no interest, improving image quality, enriching information content, and strengthening image interpretation and recognition. It highlights certain information in an image as desired while attenuating or removing unwanted information, generally to improve the visual appearance so that the processed image is more suitable for a particular application than the original.
Image filtering
Image filtering is a processing method that suppresses image noise while retaining the detailed features of the image as much as possible. The quality of the filtering directly affects the effectiveness and reliability of subsequent image processing and analysis. Examples include nonlinear filtering and median filtering.
Nonlinear filtering applies a nonlinear mapping to the input signal; for example, a particular kind of noise can be mapped to approximately zero while the main features of the signal are preserved. Median filtering replaces the value at the center point of a region of an image or sequence with the median value of that region.
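A minimal median filter might look as follows; the 3×3 window and reflected borders are illustrative choices, not specified by the patent:

```python
import numpy as np

def median_filter(img, k=3):
    """Replace each pixel with the median of its k x k neighbourhood
    (edges handled by reflecting the border)."""
    pad = k // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A single salt-noise pixel in a flat region is removed entirely.
img = np.full((5, 5), 10, dtype=np.uint8)
img[2, 2] = 255
filtered = median_filter(img)
```

Because the median of eight 10s and one 255 is 10, the noise pixel vanishes while the flat region is untouched, which is exactly the noise-suppressing, detail-preserving behaviour described above.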
Image binarization
Image binarization sets the gray value of each pixel point in an image to either 0 or 255, so that the whole image shows a distinct black-and-white effect: black pixel points have a gray value of 0 and white pixel points have a gray value of 255.
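Binarization as just described can be sketched in a few lines; the threshold value 128 is an arbitrary placeholder (a real pipeline would choose it per image):

```python
import numpy as np

def binarize(gray, threshold=128):
    """Set pixels at or above the threshold to 255 (white), the rest to 0 (black)."""
    return np.where(gray >= threshold, 255, 0).astype(np.uint8)

gray = np.array([[12, 200],
                 [128, 90]], dtype=np.uint8)
binary = binarize(gray)
```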
Spatial domain method
The spatial domain method operates directly on the gray values of the pixels in an image, and mainly includes the gray stretching algorithm and the histogram equalization algorithm.
Gray stretching changes the dynamic range of the gray levels through a piecewise linear transformation function to enhance the detail at each gray level of an image. Histogram equalization transforms the gray-level histogram of the original image from a relatively concentrated gray-level interval to a uniform distribution over the whole gray range, achieving an image enhancement effect.
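The histogram equalization step might be sketched as below, using the classic CDF-remapping formula; the exact variant the patent intends is not specified:

```python
import numpy as np

def equalize_histogram(gray):
    """Remap gray levels so their cumulative distribution becomes (roughly)
    linear, stretching a narrow gray range across the full 0-255 scale."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    total = gray.size
    # Classic equalization: scale the shifted CDF into [0, 255].
    lut = np.round((cdf - cdf_min) / max(total - cdf_min, 1) * 255).astype(np.uint8)
    return lut[gray]

# An image crowded into the narrow range [100, 103] spreads over [0, 255].
gray = np.array([[100, 101],
                 [102, 103]], dtype=np.uint8)
eq = equalize_histogram(gray)
```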
Next, an application scenario related to the embodiment of the present invention is described.
While a vehicle is being driven, driver error or other causes may shift the relative position of the vehicle and the lane line, which can lead to a collision with a vehicle in an adjacent lane and cause a traffic accident. The lane line display method provided here therefore detects lane lines in scene images captured by a camera installed at the front of the vehicle and displays the detected lane lines, so that the driver is reminded when the relative position of the vehicle and the lane line shifts. Of course, the method may also be applied to an unmanned vehicle: if a shift in the relative position of the vehicle and the lane line is detected, the driving direction of the vehicle is adjusted, thereby avoiding traffic accidents.
After introducing the terms and application scenarios related to the embodiments of the present invention, the following describes the embodiments of the present invention in detail.
Fig. 1 is a flowchart of a method for displaying a lane line according to an embodiment of the present invention. Referring to fig. 1, the method includes the following steps.
Step 101: and detecting a lane line in the scene image based on Hough transform.
Step 102: when one lane line is detected from the scene image, the detected lane line is used as a first lane line, and based on the position parameters of the first lane line and a lane line constraint criterion, the position parameters of a second lane line which belongs to the same group with the first lane line are searched from the stored position parameters of a plurality of lane lines, wherein the first lane line is a left lane line or a right lane line.
Step 103: when the position parameter of the second lane line exists in the position parameters of the plurality of lane lines, the first lane line and the second lane line are displayed in the scene image based on the position parameter of the first lane line and the position parameter of the second lane line.
In summary, in the method provided by the embodiment of the present invention, a lane line in the scene image is detected based on the Hough transform. When one lane line is detected in the scene image, it is taken as the first lane line, and the position parameter of a second lane line belonging to the same group is searched for among the stored position parameters of a plurality of lane lines, based on the position parameter of the first lane line and the lane line constraint criterion. The first and second lane lines are then displayed in the scene image based on their position parameters. That is, when only one stable lane line exists in the current scene, the other lane line is determined by matching, which avoids directly displaying a lane line determined by a tracking algorithm that may fail to match the actual lane line.
Optionally, searching for the position parameter of a second lane line belonging to the same group as the first lane line among the stored position parameters of the plurality of lane lines, based on the position parameter of the first lane line and the lane line constraint criterion, includes:
determining the distance differences and corresponding angle differences between the first lane line and the plurality of lane lines based on the position parameter of the first lane line and the position parameters of the plurality of lane lines;
and when a distance difference smaller than a preset distance exists among the plurality of distance differences and the corresponding angle difference is smaller than a preset angle, determining that the position parameter of the second lane line exists among the position parameters of the plurality of lane lines.
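A hedged sketch of this matching step is shown below; the threshold values `max_dist` and `max_angle` and the function name are illustrative placeholders, since the patent only speaks of a "preset distance" and "preset angle":

```python
def find_second_lane_line(first, stored, max_dist=200.0, max_angle=30.0):
    """Search the stored (rho, theta) position parameters for a lane line in
    the same group as the first one: the distance difference must be below the
    preset distance AND the corresponding angle difference below the preset
    angle.  Thresholds here are arbitrary placeholders, not patent values."""
    for rho, theta in stored:
        if abs(rho - first[0]) < max_dist and abs(theta - first[1]) < max_angle:
            return (rho, theta)
    return None  # no stored parameter satisfies the constraint criterion

first_line = (120.0, 40.0)                       # detected first lane line
stored_lines = [(500.0, 140.0), (260.0, 55.0)]   # stored position parameters
second = find_second_lane_line(first_line, stored_lines)
```

Here (500.0, 140.0) fails both constraints, while (260.0, 55.0) satisfies them and is returned as the second lane line of the group.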
Optionally, detecting a lane line in the captured scene image based on the Hough transform includes:
acquiring a region of interest in the scene image, and performing image preprocessing on the region of interest to obtain a binarized image;
acquiring the position parameters, in a polar coordinate system, corresponding to the pixel points with a gray value of 255 in the binarized image;
and when the acquired position parameters do not meet the lane line constraint criterion but include a position parameter that satisfies a preset similarity constraint condition with the position parameter of a stable lane line, determining that one lane line is detected in the scene image.
Optionally, after acquiring the position parameters corresponding to the pixel points with a gray value of 255 in the binarized image in the polar coordinate system, the method further includes:
determining the overlap count of each acquired position parameter, and sorting the acquired position parameters in descending order of overlap count;
obtaining the top L position parameters from the sorting result, where L is a positive integer not less than 2;
and acquiring, from the L position parameters, the position parameters corresponding to lane lines that lie in a preset region of the binarized image and that can be detected in each of the previous S consecutive frames of scene images adjacent to the scene image, where S is a positive integer greater than 1.
Optionally, after detecting a lane line in the captured scene image based on the Hough transform, the method further includes:
when a lane line is not detected from the scene image, determining the number of frames of the scene image in which the lane line is not detected in a plurality of frames of consecutive scene images adjacent to the scene image;
and if the determined frame number is less than T, displaying the stable lane line in the scene image based on the position parameter of the stable lane line, wherein T is a positive integer greater than 1.
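This fallback logic might be sketched as follows; the function and parameter names are hypothetical, and T = 3 in the example is arbitrary (the patent only requires T to be a positive integer greater than 1):

```python
def decide_display(detected, missed_frames, T=10, stable_params=None):
    """Decide what to show for the current frame.
    Returns (params_to_display, updated_missed_frames)."""
    if detected is not None:
        return detected, 0                       # fresh detection resets the counter
    missed_frames += 1
    if missed_frames < T and stable_params is not None:
        return stable_params, missed_frames      # bridge a short gap with the stable line
    return None, missed_frames                   # gap too long: display nothing

# One missed frame: the stable lane line is still shown.
shown, missed = decide_display(None, 0, T=3, stable_params=(120.0, 45.0))
# Third consecutive miss: the counter reaches T, so nothing is shown.
shown_late, missed_late = decide_display(None, 2, T=3, stable_params=(120.0, 45.0))
```

Keeping the counter small avoids both failure modes described in the Background: losing the lane line too early (T too small) and displaying phantom lines in lane-free areas (T too large).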
All the above optional technical solutions can be combined arbitrarily to form an optional embodiment of the present invention, which is not described in detail herein.
Fig. 2 is a flowchart of a lane line display method according to an embodiment of the present invention, and referring to fig. 2, the method includes the following steps.
In order to display the lane line in front of the vehicle, a camera may be installed in front of the vehicle, a scene image in front of the vehicle is captured by the camera, and then the lane line in the scene image is detected to determine whether the scene image includes the lane line. Specifically, the detection of the lane lines in the scene image can be realized according to steps 201 to 204.
Step 201: and acquiring an interested area in the scene image, and carrying out image preprocessing on the interested area to obtain a binary image.
In the embodiment of the present disclosure, the region of interest in the scene image is the region that may contain a lane line. Because the position of this region in the scene image varies with the installation position of the camera, the region of interest can be acquired based on where the camera is installed. If the region of interest is a color image, it is first converted into a grayscale image, and then image enhancement, image filtering, and image binarization are applied in sequence to obtain a binarized image. If the region of interest is already a grayscale image, image enhancement, image filtering, and image binarization are applied to it directly.
The position of the region of interest in the scene image may be preset by the user based on the installation position of the camera. For example, when the camera is installed at the middle of the front of the vehicle, the lower-middle area of the scene image is taken as the region of interest.
Because the scene image covers a large area while the lane lines may occupy only part of it, acquiring the region of interest based on the camera's installation position and processing only that region, rather than the whole scene image, reduces the amount of computation and improves the display efficiency of the lane lines.
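A minimal sketch of the cropping step, assuming the camera is mounted so that the lane lines fall in the lower half of the frame (the actual fraction depends on the installation position):

```python
import numpy as np

def get_roi(frame, top_fraction=0.5):
    """Crop the lower part of the frame as the region of interest.
    The fraction 0.5 is an assumption, not a value from the patent."""
    h = frame.shape[0]
    return frame[int(h * top_fraction):, :]

frame = np.zeros((480, 640), dtype=np.uint8)   # e.g. a 640x480 grayscale frame
roi = get_roi(frame)
```

Later steps (enhancement, filtering, binarization, Hough voting) then run on `roi` only, which is where the computational saving comes from.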
In one possible implementation, the region of interest is converted into a grayscale image as follows: based on the R, G, and B components of the pixel value of each pixel point in the region of interest, the gray value of the corresponding pixel point in the grayscale image is calculated according to formula (1) below;
Vgray = 0.30R + 0.59G + 0.11B (1)
where Vgray is the gray value of the pixel point in the grayscale image, and R, G, and B are the three components of the RGB color model of the pixel value of the pixel point in the region of interest.
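Formula (1) translates directly into code; rounding the result to an 8-bit gray value is an implementation assumption:

```python
import numpy as np

def to_gray(rgb):
    """Convert an RGB region of interest to gray using formula (1):
    Vgray = 0.30*R + 0.59*G + 0.11*B."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    return np.round(0.30 * r + 0.59 * g + 0.11 * b).astype(np.uint8)

# A pure-white pixel stays 255; a pure-green pixel maps to 0.59 * 255, about 150.
rgb = np.array([[[255, 255, 255], [0, 255, 0]]], dtype=np.uint8)
gray = to_gray(rgb)
```

The weights sum to 1.0, so the full dynamic range [0, 255] is preserved, with green weighted most heavily to match human luminance perception.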
Step 202: and acquiring the corresponding position parameter of the pixel point with the gray value of 255 in the binary image in the polar coordinate system.
Because a point in the rectangular coordinate system corresponds to a curve in the polar coordinate system, and a lane line in the scene image may consist of a plurality of pixel points with a gray value of 255 in the binarized image, the coordinates of those pixel points in the rectangular coordinate system can be converted, based on the Hough transform, into corresponding curves in the polar coordinate system. The polar coordinates of the intersection points of the converted curves are then determined, and the obtained polar coordinates are taken as the position parameters, in the polar coordinate system, corresponding to the pixel points with a gray value of 255 in the binarized image.
Of course, the above is only one possible way of determining these position parameters; in practice, the position parameters corresponding to the pixel points with a gray value of 255 may also be determined by other methods.
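The curve-intersection procedure described above is usually implemented with an explicit accumulator that counts votes per (ρ, θ) cell; the 1-pixel/1-degree resolution and the `rho_max` bound below are illustrative assumptions:

```python
import numpy as np

def hough_accumulate(white_pixels, theta_steps=180, rho_max=100):
    """Vote in a (rho, theta) accumulator: each white pixel adds one vote to
    every (rho, theta) cell its curve passes through.  Cells with many votes
    are the intersection points, i.e. candidate position parameters."""
    acc = np.zeros((2 * rho_max + 1, theta_steps), dtype=np.int32)
    thetas = np.deg2rad(np.arange(theta_steps))
    for x, y in white_pixels:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + rho_max, np.arange(theta_steps)] += 1  # shift rho to a valid index
    return acc

# Three collinear pixels on y = x all vote for (rho = 0, theta = 135 deg).
pixels = [(1, 1), (2, 2), (3, 3)]
acc = hough_accumulate(pixels)
```

The cell at (ρ index 100, θ = 135°) collects one vote per pixel, so its count equals the number of collinear pixels, which is the "overlap count" used in the screening steps below.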
It should be noted that, after the position parameters of the pixel points with a gray value of 255 in the binarized image are obtained in the polar coordinate system, interference straight lines may exist in the scene image, so interference position parameters may exist among the obtained position parameters. The obtained position parameters may therefore be screened first, and the screened position parameters are then judged against the lane line constraint criterion, which improves the display efficiency of the lane lines. In practical applications, when the possibility that the scene image contains interference straight lines is low, the obtained position parameters may instead be judged directly against the lane line constraint criterion.
Specifically, the screening of the plurality of positional parameters may be achieved by the following steps (1) to (3).
(1) Determining the overlapping times of each of the obtained position parameters, and sorting the obtained position parameters in the order of overlapping times from large to small.
The number of times of overlapping of any position parameter is the number of curves passing through the polar coordinate point corresponding to the position parameter in the polar coordinate system.
For example, the binarized image includes 25 pixel points with a gray value of 255, and after the 25 pixel points are converted into a polar coordinate system, 25 curves in the polar coordinate system are obtained, 15 curves in the 25 curves pass through the position parameter (2, 30 °), 13 curves pass through the position parameter (3, 30 °), 10 curves pass through the position parameter (2, 60 °), 5 curves pass through the position parameter (4, 45 °), and 3 curves pass through the position parameter (3, 75 °). That is, the number of times of superimposition of the position parameter (2, 30 °) is 15 times, the number of times of superimposition of the position parameter (3, 30 °) is 13 times, the number of times of superimposition of the position parameter (2, 60 °) is 10 times, the number of times of superimposition of the position parameter (4, 45 °) is 5 times, and the number of times of superimposition of the position parameter (3, 75 °) is 3 times. At this time, the obtained position parameters are sorted according to the sequence of the overlapping times from large to small, and the sorting result is as follows: position parameter (2, 30 °), position parameter (3, 30 °), position parameter (2, 60 °), position parameter (4, 45 °), position parameter (3, 75 °).
It should be noted that, when the obtained position parameters are sorted, they need not be sorted only in the order of overlapping times from large to small; in practical applications, they may also be sorted in the order from small to large.
(2) The top L position parameters are obtained from the sorting result, and L is a positive integer not less than 2.
Because an interference straight line or other straight lines may exist in the scene image, and the number of times of overlapping of the position parameters corresponding to the pixel points of the interference straight line or other straight lines in the binarized image may be smaller than the number of times of overlapping of the position parameters corresponding to the pixel points of the lane lines in the binarized image, the obtained position parameters can be preliminarily screened through the number of times of overlapping of the position parameters, so that the calculation amount is reduced.
Continuing with the above example, assuming L is 4, the first 4 position parameters are obtained, position parameter (2, 30 °), position parameter (3, 30 °), position parameter (2, 60 °), and position parameter (4, 45 °).
When the acquired position parameters are sorted in the order of overlapping times from small to large, the last L position parameters may be acquired from the sorting result.
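Steps (1) and (2) amount to sorting the accumulator by vote count and truncating. A sketch using the overlap counts from the worked example above (the function name is assumed; angles are in degrees):

```python
def top_l_parameters(votes, l):
    """Sort (rho, theta) position parameters in descending order of
    overlapping times and keep the first L candidates."""
    ranked = sorted(votes.items(), key=lambda kv: kv[1], reverse=True)
    return [param for param, _ in ranked[:l]]

# Overlap counts from the worked example.
votes = {(2, 30): 15, (3, 30): 13, (2, 60): 10, (4, 45): 5, (3, 75): 3}
```

With L = 4 this keeps (2, 30°), (3, 30°), (2, 60°) and (4, 45°), matching the example.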
(3) And acquiring position parameters corresponding to lane lines which can be detected in the previous continuous S frame scene images adjacent to the scene image and are positioned in a preset area in the binary image from L position parameters, wherein S is a positive integer greater than 1.
The preset area may be preset based on an installation position of the camera, for example, when the camera is installed in a middle position in front of the vehicle, the preset area may be a position area having a certain width and located on left and right sides of the region of interest in the scene image.
In the binarized image, a certain distance exists between the lane lines of the same group; that is, the same group of lane lines is expected to lie in a preset region of the binarized image. Moreover, because the time interval between two adjacent frames of the scene image acquired by the camera is small, the lane lines detectable in S consecutive frames should all be the same. Therefore, the obtained position parameters can be further screened based on whether the straight line corresponding to each position parameter lies in the preset region of the binarized image, and whether the same position parameter corresponds to the lane lines detectable in the previous consecutive S frames adjacent to the scene image, so as to determine the possible lane lines, thereby improving the accuracy and display efficiency of the lane line display.
Continuing with the above example, assuming that the straight line corresponding to the position parameter (2, 30 °), the straight line corresponding to the position parameter (3, 30 °) and the straight line corresponding to the position parameter (4, 45 °) are located in the preset region in the binarized image, the straight line corresponding to the position parameter (2, 60 °) is located in the non-preset region, the position parameter (2, 30 °) is detectable in 5 consecutive frames of scene images, the position parameter (2, 60 °) is detectable in 3 consecutive frames of scene images, the position parameter (4, 45 °) is detectable only in the captured scene image, and assuming that S is 3, the straight line corresponding to the position parameter (2, 30 °) and the straight line corresponding to the position parameter (4, 45 °) can be determined as possible lane lines.
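Step (3) can be sketched as a filter over the L candidates; the region test and the per-candidate consecutive-frame counts are assumed to be supplied by the caller, and the function name is illustrative:

```python
def screen_candidates(candidates, in_preset_region, consecutive_frames, s):
    """Keep a candidate (rho, theta) only if its straight line lies in the
    preset region of the binarized image and it was also detectable in the
    previous S consecutive frames of scene images."""
    return [p for p in candidates
            if in_preset_region(p) and consecutive_frames.get(p, 0) >= s]
```

A usage sketch with made-up predicates: if (2, 60°) lies outside the preset region and only (2, 30°) and (3, 30°) persist for at least S = 3 frames, only those two survive.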
Step 203: and judging whether the acquired position parameters meet a lane line constraint criterion, wherein the lane line constraint criterion refers to a condition met by the same group of lane lines.
Since the lane line constraint criterion refers to a condition that is satisfied by the same group of lane lines, in order to determine whether the acquired position parameters include a lane line belonging to the same group (where the same group of lane lines generally includes two lane lines), it may be determined first whether the acquired position parameters satisfy the lane line constraint criterion. And when the acquired position parameters do not meet the lane line constraint criterion, determining that two lane lines do not exist in the scene image.
In a possible implementation manner, since the lane line constraint criterion refers to a condition that is satisfied by the same group of lane lines, and the same group of lane lines generally includes two lane lines, in order to facilitate description of the lane line constraint criterion, two position parameters for determining whether the lane line constraint criterion is satisfied may be referred to as a first position parameter and a second position parameter, respectively, and at this time, the lane line constraint criterion may be represented by the following formula (2) and formula (3):
||ρ1|-|ρ2||<T1 (2)
||θ1|-|θ2||<T2 (3)
in the above equations (2) and (3), ρ 1 is a distance coordinate in the first position parameter, ρ 2 is a distance coordinate in the second position parameter, T1 is a preset distance, θ 1 is an angle coordinate in the first position parameter, θ 2 is an angle coordinate in the second position parameter, and T2 is a preset angle.
It should be noted that, when the acquired position parameters include two position parameters, whether two straight lines corresponding to the two position parameters are the same group of lane lines may be determined according to the lane line constraint criterion, however, when the acquired position parameters are one position parameter, the determination of the lane line constraint criterion may not be required, and it may be directly determined that the acquired position parameters do not satisfy the lane line constraint criterion.
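Formulas (2) and (3) together can be checked in a few lines; T1 and T2 are the preset distance and angle, and the function name is an assumption:

```python
def same_group(p1, p2, t1, t2):
    """Lane line constraint criterion, formulas (2) and (3): two straight
    lines belong to the same group of lane lines when the difference of
    their |rho| values is below T1 and the difference of their |theta|
    values is below T2."""
    rho1, theta1 = p1
    rho2, theta2 = p2
    return (abs(abs(rho1) - abs(rho2)) < t1 and
            abs(abs(theta1) - abs(theta2)) < t2)
```

For instance, with T1 = 2 and T2 = 5°, the parameters (2, 30°) and (3, 31°) satisfy both formulas, while (2, 30°) and (3, 75°) fail formula (3).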
Further, when the acquired position parameter meets the lane line constraint criterion, it is determined that the scene image includes two lane lines. At this time, the straight line corresponding to the acquired position parameter may be determined as a lane line, and it is further determined that the scene image includes two lane lines. Certainly, in order to improve the accuracy of displaying the lane lines and avoid the false display caused by the sudden change of the road condition, the preset similarity constraint condition judgment may be performed on the two lane lines, that is, when the acquired position parameter meets the lane line constraint criterion and the acquired position parameter and the position parameter of the lane line displayed when the last scene image of the scene image is acquired meet the preset similarity constraint condition, it is determined that the scene image includes the two lane lines.
In one possible implementation, the preset similarity constraint may be represented by the following formula (4):
|ρ-ρ′|+|θ-θ′|×α<T3 (4)
in the above formula (4), ρ is the distance coordinate of the acquired position parameter, ρ′ is the distance coordinate of the position parameter of the stable lane line, θ is the angle coordinate of the acquired position parameter, θ′ is the angle coordinate of the position parameter of the stable lane line, α is a correction coefficient for the angle and is a constant, and T3 is a preset value.
Since the dimension of ρ differs from that of θ in the position parameters (ρ, θ), the θ term may be corrected by multiplying it by the coefficient α to make the preset similarity constraint condition easy to compute, and T3 may be preset, for example, to 50, 75 or 100.
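Formula (4) can likewise be expressed directly; α and T3 are the constants described above, and the function name is an assumption:

```python
def similar_to_stable(param, stable, alpha, t3):
    """Preset similarity constraint, formula (4): compare an acquired
    (rho, theta) with the stable lane line's (rho', theta'), weighting the
    angle difference by alpha to correct its dimension."""
    rho, theta = param
    rho_s, theta_s = stable
    return abs(rho - rho_s) + abs(theta - theta_s) * alpha < t3
```

For example, with α = 2 and T3 = 50, the pair (100, 30°) vs. (104, 32°) gives 4 + 2 × 2 = 8 < 50 and is judged similar, while (100, 30°) vs. (200, 60°) is not.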
Step 204: and when the acquired position parameters do not meet the lane line constraint criterion and position parameters meeting preset similarity constraint conditions with the position parameters of the stable lane line exist in the acquired position parameters, determining to detect one lane line from the scene image.
The stable lane line refers to a lane line which can be detected in continuous M frames of scene images, and M is a positive integer greater than 1.
When the acquired position parameters do not meet the lane line constraint criterion, a lane line may exist in the scene image, so that the acquired position parameters may be judged based on the preset similarity constraint condition, and if the acquired position parameters and the position parameters of the stable lane line meet the preset similarity constraint condition, it is determined that a lane line is detected from the scene image.
Specifically, since the time interval between two adjacent frames is small, a lane line present in the current frame of the scene image is the same as, or similar to, the stable lane line determined based on the scene image immediately preceding it. Therefore, the possible lane lines screened from the scene image may be judged based on the preset similarity constraint condition. Specifically, each of the acquired position parameters is compared with the position parameter of the stable lane line under the preset similarity constraint condition; if one of the acquired position parameters satisfies the preset similarity constraint condition with the position parameter of the stable lane line, it is determined that a lane line is found, that is, that the scene image includes the lane line.
If one position parameter exists in the obtained position parameters and the position parameter of the right lane line in the stable lane lines meets a preset similarity constraint condition, determining to find the right lane line; and if one position parameter in the acquired position parameters and the position parameter of the left lane line in the stable lane lines meet the preset similarity constraint condition, determining to find the left lane line.
It should be noted that, before the acquired position parameter is determined based on the preset similarity constraint condition, the position parameter of the stable lane line determined when the last scene image of the scene image is acquired may be acquired. Because of the change of the road condition, the position parameters of the lane lines respectively detected by the scene images separated by multiple frames may have differences, and therefore, the stable lane line of each frame of scene image may be re-determined based on the lane lines that can be detected in the M frames of scene images that are consecutive to the stable lane line, that is, after each frame of scene image is acquired, the corresponding stable lane line needs to be determined, and the stable lane lines corresponding to each frame of scene image may be different. Where M may be set in advance, for example, may be 15, 20, or 25.
In a possible implementation manner, when a stable lane line corresponding to one frame of scene image is obtained every time the stable lane line is determined, the position parameter of the determined stable lane line may be stored, and for example, the position parameter of the determined stable lane line may be stored in the lane line database. For the convenience of distinguishing, when the position parameter of the lane line corresponding to each frame of scene image is stored, the frame number of each frame of scene image can be correspondingly stored. Then, when the position parameter of the stable lane line determined when the last scene image of the scene image is acquired, the position parameter of the corresponding stable lane line may be acquired from the lane line database based on the frame number of the last scene image.
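A minimal sketch of such a lane line database, keyed by frame number so that the stable lane line determined for the previous frame can be retrieved (the class and method names are assumptions):

```python
class LaneLineDatabase:
    """Stores the stable lane line's position parameters per frame number."""

    def __init__(self):
        self._by_frame = {}

    def save(self, frame_no, params):
        """Record the stable lane line (rho, theta) determined for this frame."""
        self._by_frame[frame_no] = params

    def stable_of_previous(self, frame_no):
        """Position parameters of the stable lane line determined when the
        last scene image (frame_no - 1) was acquired, or None if absent."""
        return self._by_frame.get(frame_no - 1)
```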
Further, when the acquired position parameters do not meet the lane line constraint criterion and the position parameters meeting the preset similarity constraint condition with the position parameters of the stable lane line are not found from the acquired position parameters, it is determined that the scene image does not include the lane line. When determining that the scene image does not include the lane line, namely when the lane line is not detected from the scene image, determining the frame number of the scene image of which the lane line is not detected in the previous multiple frames of continuous scene images adjacent to the scene image; and if the determined frame number is less than T, displaying a stable lane line in the scene image based on the position parameter of the stable lane line, wherein T is a positive integer greater than 1.
In practical applications, due to factors such as changes in road conditions, a lane line may fail to be detected in the scene image even though one exists in the actual lane. To avoid misleading the driver, when no lane line is detected in the scene image, the number of consecutive preceding frames adjacent to the scene image in which no lane line was detected is determined; if that number is less than T, the lane line is displayed based on the position parameters of the stable lane line determined when the last frame of the scene image was acquired, where T is a positive integer greater than 1. If, before the scene image is acquired, the number of consecutively detected scene images without a lane line is not less than T, it is determined that the current lane has no lane line, and no lane line is displayed.
That is, when it is detected that the scene image does not include the lane line, if the number of frames of the scene images that are continuously detected before the scene image and do not include the lane line is less than T, the stable lane line determined when the last frame of scene image is captured is displayed, otherwise, the lane line is not displayed.
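This miss-handling rule can be summarized as a small per-frame state update; the tuple-returning interface is an illustrative assumption:

```python
def update_display(detected, miss_streak, t):
    """Decide what to display for the current frame. When no lane line is
    detected, keep showing the stable lane line only while fewer than T
    consecutive frames have missed; otherwise display nothing.
    Returns (action, new_miss_streak)."""
    if detected:
        return "show_detected", 0
    miss_streak += 1
    if miss_streak < t:
        return "show_stable", miss_streak
    return "hide", miss_streak
```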
Step 205: and taking the detected lane line as a first lane line, and searching the position parameters of a second lane line which belongs to the same group with the first lane line from the stored position parameters of the plurality of lane lines on the basis of the position parameters of the first lane line and the lane line constraint criterion.
Specifically, when one lane line is detected from the scene image, it may be determined that there are two lane lines in the scene image. In order to accurately display the two lane lines, the detected lane line can be used as a first lane line, and the distance difference value and the corresponding angle difference value between the first lane line and the plurality of lane lines are determined based on the position parameter of the first lane line and the position parameters of the plurality of lane lines; and when the distance difference smaller than the preset distance exists in the plurality of distance differences and the corresponding angle difference is smaller than the preset angle, determining that the position parameter of the second lane line exists in the position parameters of the plurality of lane lines. Wherein, the first lane line is a left lane line or a right lane line.
The multiple lane lines are lane lines detected from multiple frames of scene images acquired before the current time. For example, the position parameters corresponding to the plurality of lane lines may be position parameters corresponding to lane lines stored in a lane line database.
In one possible implementation manner, determining distance differences between the distance coordinates of the position parameters of the first lane line and the distance coordinates of the position parameters of the plurality of lane lines to obtain a plurality of distance differences; and determining the angle difference between the angle coordinate of the position parameter of the first lane line and the angle coordinates of the position parameters of the plurality of lane lines to obtain a plurality of angle differences. And when the distance difference value smaller than the preset distance exists in the absolute values of the distance difference values and the corresponding angle difference value is smaller than the angle difference value of the preset angle, determining that the position parameter of the second lane line exists in the position parameters of the lane lines.
That is, based on the position parameter of the first lane line, the position parameter of the lane line satisfying the above formula (2) and formula (3) with the position parameter of the first lane line is searched from the lane line database, and the searched position parameter of the lane line is determined as the position parameter of the second lane line belonging to the same group as the first lane line.
For example, when it is detected that the first lane line included in the scene image is the right lane line, based on the position parameter of the right lane line and the above formula (2) and formula (3), the position parameter of the left lane line that belongs to the same group as the right lane line is searched from the position parameters stored in the lane line database; when the first lane line included in the scene image is detected to be the left lane line, the position parameter of the right lane line which belongs to the same group with the left lane line is searched from the position parameters stored in the database based on the position parameter of the left lane line and the formula (2) and the formula (3).
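The search in step 205 can be sketched as a linear scan of the stored position parameters using formulas (2) and (3); the function name and thresholds are assumptions:

```python
def find_second_lane_line(first, stored, t1, t2):
    """Return the first stored (rho, theta) that belongs to the same group
    as the detected first lane line, i.e. whose |rho| difference with the
    first line is below T1 and whose |theta| difference is below T2, or
    None if no such position parameter exists."""
    rho1, theta1 = first
    for rho2, theta2 in stored:
        if (abs(abs(rho1) - abs(rho2)) < t1 and
                abs(abs(theta1) - abs(theta2)) < t2):
            return (rho2, theta2)
    return None
```

For example, given a detected right lane line (2, 30°) and stored parameters [(50, 80°), (3, 31°)], the scan returns (3, 31°) with T1 = 2 and T2 = 5°.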
Step 206: when the position parameter of the second lane line exists in the position parameters of the plurality of lane lines, the first lane line and the second lane line are displayed in the scene image based on the position parameter of the first lane line and the position parameter of the second lane line.
And determining the position parameter of the first lane line detected from the scene image and the position parameter of the searched second lane line as the same group of lane lines, and displaying the same group of lane lines in the scene image.
In summary, in the method provided in the embodiment of the present invention, the region of interest in the scene image collected by the camera is obtained, image preprocessing is performed to obtain the binarized image, and the position parameters corresponding to the pixel points with the gray scale value of 255 in the binarized image in the polar coordinate system are obtained based on hough transform. When the position parameters are acquired, whether two lane lines are included in the acquired position parameters is determined based on a lane line constraint criterion, when the acquired position parameters do not meet the lane line constraint criterion, whether one lane line exists in the acquired position parameters is determined based on a preset similarity constraint condition, and when the preset similarity constraint condition is met between the acquired position parameters and the position parameters of the stable lane line, one lane line exists in the scene image. And taking the lane line as a first lane line, and searching the position parameter of a second lane line which belongs to the same group of lane lines with the first lane line from the stored position parameters of a plurality of lane lines based on the lane line constraint criterion. 
When the position parameter of a second lane line belonging to the same group as the first lane line is found, the first lane line and the second lane line are displayed in the scene image based on their respective position parameters. That is, when only one stable lane line exists in the current scene, the lane line display method of the embodiment of the invention determines the other lane line corresponding to the first lane line in a matching manner, thereby avoiding the phenomenon that the displayed lane line cannot match the actual lane line, which arises when the lane line determined by a tracking algorithm is displayed directly while only one stable lane line exists in the current scene.
Fig. 3A is a display device of a lane line according to an embodiment of the present invention. As shown in fig. 3A, the apparatus includes:
a detection module 301, configured to detect a lane line in a scene image based on hough transform;
the searching module 302 is configured to, when a lane line is detected from the scene image, use the detected lane line as a first lane line, and search, based on a position parameter of the first lane line and a lane line constraint criterion, a position parameter of a second lane line that belongs to the same group as the first lane line from among stored position parameters of a plurality of lane lines, where the first lane line is a left lane line or a right lane line;
the first display module 303 is configured to display the first lane line and the second lane line in the scene image based on the position parameter of the first lane line and the position parameter of the second lane line when the position parameter of the second lane line exists in the position parameters of the plurality of lane lines.
Optionally, as shown in fig. 3B, the lookup module 302 includes:
a first determining submodule 3021, configured to determine distance differences and corresponding angle differences between the first lane line and the plurality of lane lines based on the position parameter of the first lane line and the position parameters of the plurality of lane lines;
the second determining submodule 3022 is configured to determine that the position parameter of the second lane line exists in the position parameters of the multiple lane lines when a distance difference smaller than a preset distance exists among the multiple distance differences, and the corresponding angle difference is smaller than a preset angle.
Optionally, as shown in fig. 3C, the detection module 301 includes:
the processing sub-module 3011 is configured to obtain a region of interest in the scene image, and perform image preprocessing on the region of interest to obtain a binary image;
the first obtaining submodule 3012 is configured to obtain a corresponding position parameter of a pixel point with a gray value of 255 in the binarized image in the polar coordinate system;
the third determining submodule 3013 is configured to determine that a lane line is detected from the scene image when none of the obtained location parameters meets the lane line constraint criterion, and a location parameter meeting a preset similarity constraint condition exists between the obtained location parameters and the location parameter of the stable lane line.
Optionally, as shown in fig. 3D, the detection module 301 further includes:
the fourth determining submodule 3014 is configured to determine the number of overlapping times of any one of the obtained location parameters, and sequence the obtained location parameters according to a descending order of the number of overlapping times;
a second obtaining sub-module 3015, configured to obtain the top L location parameters from the sorting result, where L is a positive integer not less than 2;
the third obtaining sub-module 3016 is configured to obtain, from L location parameters, location parameters corresponding to lane lines that can be detected in previous consecutive S frame scene images that are located in a preset area in the binarized image and adjacent to the scene image, where S is a positive integer greater than 1.
Optionally, as shown in fig. 3E, the apparatus further includes:
a determining module 304, configured to determine, when no lane line is detected from the scene image, a number of frames of the scene image in which no lane line is detected in a plurality of frames of consecutive scene images adjacent to the scene image;
a second display module 305, configured to display a stable lane line in the scene image based on the position parameter of the stable lane line if the determined frame number is less than T, where T is a positive integer greater than 1.
In summary, in the apparatus provided in the embodiment of the present invention, a binarized image is obtained by preprocessing each collected scene image, the position parameters, in the polar coordinate system, of the pixel points with a gray value of 255 in the binarized image are obtained, and the obtained position parameters are further screened. When the scene image is detected to include one lane line, the position parameters of the lane line belonging to the same group as that lane line are obtained from the stored position parameters of the multiple lane lines based on the detected lane line's position parameters and the lane line constraint criterion, and the two lane lines are then displayed based on the detected lane line's position parameters and the obtained position parameters. Because the other lane line corresponding to the detected lane line is determined in a matching manner, the phenomenon is avoided in which the displayed lane line cannot match the actual lane line when the lane line determined by a tracking algorithm is displayed directly while only one stable lane line exists in the current scene.
Fig. 4 is a schematic structural diagram of a terminal according to an embodiment of the present invention, where the terminal may be a vehicle-mounted terminal, and certainly may also be a mobile terminal, and is configured to implement the lane line display method shown in fig. 1 and fig. 2. Referring to fig. 4, the terminal 400 may include components such as a communication unit 410, a memory 420 including one or more computer-readable storage media, an input unit 430, a display unit 440, a sensor 450, an audio circuit 460, a wireless communication unit 470 such as a WIFI (wireless fidelity) module 470, a processor 480 including one or more processing cores, and a power supply 490. Those skilled in the art will appreciate that the terminal configuration shown in fig. 4 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the communication unit 410 may be used for receiving and transmitting signals during a message or call, the communication unit 410 may be a Radio Frequency (RF) circuit, a router, a modem, or like network communication device, and particularly, when the communication unit 410 is a RF circuit, the communication unit receives downstream information from a base station and processes the received downstream information by one or more processors 480, and further, transmits data related to an upstream to the base station, in General, the RF circuit as a communication unit includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, L NA (L Noise Amplifier, duplexer, etc., and further, the communication unit 410 may communicate with a network and other devices via wireless communication, which may use any communication standard or protocol, including, but not limited to, GSM, GPRS (General Packet Radio Service), CDMA (Code division, services), flash memory, etc., may also operate with a memory device such as a memory device, a memory, a terminal, a memory, a communication.
The input unit 430 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. Preferably, the input unit 430 may include a touch-sensitive surface 431 and other input devices 432. The touch-sensitive surface 431, also referred to as a touch display screen or a touch pad, may collect touch operations by a user on or near the touch-sensitive surface 431 (e.g., operations by a user on or near the touch-sensitive surface 431 using any suitable object or attachment such as a finger, a stylus, etc.) and drive the corresponding connection device according to a predetermined program. Alternatively, the touch sensitive surface 431 may comprise both a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 480, and receives and executes commands sent from the processor 480. In addition, the touch-sensitive surface 431 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 430 may include other input devices 432 in addition to the touch-sensitive surface 431. Preferably, other input devices 432 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 440 may be configured to display information input by or provided to a user, as well as the various graphical user interfaces of the terminal 400, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 440 may include a display panel 441, which may optionally be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface 431 may overlay the display panel 441; when the touch-sensitive surface 431 detects a touch operation on or near it, the operation is communicated to the processor 480 to determine the type of touch event, and the processor 480 then provides a corresponding visual output on the display panel 441 depending on the type of touch event.
The terminal 400 can also include at least one sensor 450, such as a light sensor, motion sensor, and other sensors. The light sensor may include an ambient light sensor that adjusts the brightness of the display panel 441 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 441 and/or a backlight when the terminal 400 is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when the mobile phone is stationary, and can be used for applications of recognizing the posture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured in the terminal 400, detailed descriptions thereof are omitted.
The audio circuit 460, speaker 461, and microphone 462 may provide an audio interface between a user and the terminal 400. On one hand, the audio circuit 460 may transmit the electrical signal converted from received audio data to the speaker 461, which converts it into a sound signal for output; on the other hand, the microphone 462 converts a collected sound signal into an electrical signal, which the audio circuit 460 receives and converts into audio data; the audio data is then output to the processor 480 for processing and transmitted through the communication unit 410 to, for example, another terminal, or output to the memory 420 for further processing. The audio circuit 460 may also include an earbud jack to allow communication between a peripheral headset and the terminal 400.
In order to implement wireless communication, a wireless communication unit 470 may be configured on the terminal; the wireless communication unit 470 may be a Wi-Fi module. Wi-Fi is a short-range wireless transmission technology, through which the terminal 400 can help a user send and receive e-mails, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although the wireless communication unit 470 is shown in fig. 4, it is understood that it is not an essential part of the terminal 400 and may be omitted entirely as needed without changing the essence of the invention.
The processor 480 is the control center of the terminal 400. It connects the various parts of the entire mobile phone using various interfaces and lines, and performs the various functions of the terminal 400 and processes data by running or executing software programs and/or modules stored in the memory 420 and calling data stored in the memory 420, thereby monitoring the mobile phone as a whole. Optionally, the processor 480 may include one or more processing cores; preferably, the processor 480 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 480.
The terminal 400 also includes a power supply 490 (e.g., a battery) for powering the various components. Preferably, the power supply 490 may be logically connected to the processor 480 via a power management system, so that charging, discharging, and power consumption are managed through the power management system. The power supply 490 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other such components.
Although not shown, the terminal 400 may further include a camera, a bluetooth module, etc., which will not be described herein.
In this embodiment, the terminal further includes one or more programs, the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs include instructions for performing the lane line display method described in fig. 1 and 2 above and provided by the embodiment of the present invention.
In an exemplary embodiment, a non-transitory computer readable storage medium including instructions, such as the memory 420 including instructions, executable by the processor 480 of the terminal to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer-readable storage medium in which instructions, when executed by a processor of a terminal, enable the terminal to perform a lane line display method, the method comprising:
detecting a lane line in the scene image based on the Hough transform.
When one lane line is detected from the scene image, the detected lane line is used as a first lane line, and based on the position parameter of the first lane line and a lane line constraint criterion, the position parameter of a second lane line that belongs to the same group as the first lane line is searched for among the stored position parameters of a plurality of lane lines, wherein the first lane line is a left lane line or a right lane line.
When the position parameter of the second lane line exists in the position parameters of the plurality of lane lines, the first lane line and the second lane line are displayed in the scene image based on the position parameter of the first lane line and the position parameter of the second lane line.
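Displaying a lane line from its polar position parameters amounts to converting the Hough normal form x·cos(θ) + y·sin(θ) = ρ into a drawable segment. The sketch below illustrates this step under the assumption that each lane line is stored as a (rho, theta) pair in image coordinates; the function name `polar_to_segment` is hypothetical and not from the patent, and the actual drawing call (e.g., an image library's line primitive) is omitted:

```python
import math

def polar_to_segment(rho, theta, y_top, y_bottom):
    """Map a line in Hough normal form x*cos(theta) + y*sin(theta) = rho
    to a pixel segment spanning image rows y_top..y_bottom."""
    c, s = math.cos(theta), math.sin(theta)
    if abs(c) < 1e-9:
        # Near-horizontal line: solving for x is ill-conditioned, and such a
        # line is not a plausible lane marking anyway.
        return None
    x_top = (rho - y_top * s) / c        # x at the upper row
    x_bot = (rho - y_bottom * s) / c     # x at the lower row
    return (int(round(x_top)), y_top), (int(round(x_bot)), y_bottom)

# Both the first and the matched second lane line would be converted this way
# and then overlaid on the scene image with the renderer of choice.
```

For example, a perfectly vertical line at column 100 (rho = 100, theta = 0) maps to the segment ((100, y_top), (100, y_bottom)).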
Optionally, searching for a position parameter of a second lane line belonging to the same group as the first lane line from the stored position parameters of the plurality of lane lines based on the position parameter of the first lane line and a lane line constraint criterion, including:
determining distance difference values and corresponding angle difference values between the first lane line and the plurality of lane lines based on the position parameters of the first lane line and the position parameters of the plurality of lane lines;
and when the distance difference smaller than the preset distance exists in the plurality of distance differences and the corresponding angle difference is smaller than the preset angle, determining that the position parameter of the second lane line exists in the position parameters of the plurality of lane lines.
Optionally, detecting a lane line in the captured scene image based on hough transform includes:
acquiring an interested area in the scene image, and carrying out image preprocessing on the interested area to obtain a binary image;
acquiring corresponding position parameters of pixel points with the gray scale value of 255 in the binary image in a polar coordinate system;
and when the acquired position parameters do not meet the lane line constraint criterion and position parameters meeting preset similarity constraint conditions with the position parameters of the stable lane line exist in the acquired position parameters, determining to detect one lane line from the scene image.
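The preprocessing-plus-Hough step above can be illustrated as follows: the region of interest is thresholded into a binary image, and each foreground pixel (gray value 255) votes for the polar parameters (ρ, θ) of every line it could lie on. This is only a schematic, pure-Python sketch of the standard Hough voting scheme; the function names `binarize` and `hough_votes` are hypothetical, and a real implementation would use an optimized library routine:

```python
import math

def binarize(gray, thresh=128):
    """Toy preprocessing: threshold a grayscale image (list of rows) so
    foreground pixels take the value 255 and background pixels 0."""
    return [[255 if px >= thresh else 0 for px in row] for row in gray]

def hough_votes(binary, thetas):
    """Accumulate Hough votes: for each pixel with gray value 255, compute
    rho = x*cos(theta) + y*sin(theta) for every candidate theta and count
    how often each (rho, theta index) position parameter occurs."""
    acc = {}
    for y, row in enumerate(binary):
        for x, value in enumerate(row):
            if value != 255:
                continue
            for i, theta in enumerate(thetas):
                rho = round(x * math.cos(theta) + y * math.sin(theta))
                acc[(rho, i)] = acc.get((rho, i), 0) + 1
    return acc
```

A vertical stroke of five foreground pixels at column 2 yields five votes for (rho = 2, theta = 0), i.e., the position parameter overlaps five times.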
Optionally, after obtaining the corresponding position parameter of the pixel point with the gray value of 255 in the binarized image in the polar coordinate system, the method further includes:
determining the overlapping times of any one of the obtained position parameters, and sequencing the obtained position parameters according to the sequence of the overlapping times from large to small;
obtaining the top L position parameters from the sequencing result, wherein L is a positive integer not less than 2;
and acquiring position parameters corresponding to lane lines which can be detected in the previous continuous S frame scene images adjacent to the scene image and are positioned in a preset area in the binary image from L position parameters, wherein S is a positive integer greater than 1.
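Ranking the position parameters by their overlap counts and keeping the top L candidates is a simple sort over the vote accumulator. A sketch under the assumption that the accumulator maps each position parameter to its overlap count (the name `top_candidates` is hypothetical):

```python
def top_candidates(votes, L):
    """Sort position parameters by overlap count, descending, and return
    the top L candidates (L should be a positive integer >= 2)."""
    ranked = sorted(votes.items(), key=lambda kv: kv[1], reverse=True)
    return [params for params, _count in ranked[:L]]
```

The top-L candidates would then be filtered against the preset area of the binarized image and the lane lines detected in the previous S consecutive frames, as described above.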
Optionally, after detecting a lane line in the captured scene image based on hough transform, the method further includes:
when a lane line is not detected from the scene image, determining the number of frames of the scene image in which the lane line is not detected in a plurality of frames of consecutive scene images adjacent to the scene image;
and if the determined frame number is less than T, displaying the stable lane line in the scene image based on the position parameter of the stable lane line, wherein T is a positive integer greater than 1.
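The fallback described above can be sketched as a small tracker that counts consecutive frames without a detection and keeps redrawing the stable lane line while that count stays below T. The class name `LaneTracker` and its interface are illustrative assumptions, not the patent's implementation:

```python
class LaneTracker:
    """Redraw the stable lane line for up to T-1 consecutive missed frames."""

    def __init__(self, T):
        self.T = T           # miss threshold (positive integer > 1)
        self.missed = 0      # consecutive frames without a detection
        self.stable = None   # position parameter of the stable lane line

    def update(self, detected):
        """detected: (rho, theta) of this frame's lane line, or None.
        Returns the position parameter to display, or None."""
        if detected is not None:
            self.stable = detected
            self.missed = 0
            return detected
        self.missed += 1
        if self.stable is not None and self.missed < self.T:
            return self.stable   # keep showing the stable lane line
        return None              # too many misses: display nothing
```

With T = 3, the stable line is still shown for two missed frames after a detection, then dropped on the third.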
To sum up, the method provided in the embodiment of the present invention detects lane lines in a scene image based on the Hough transform. When one lane line is detected from the scene image, the detected lane line is used as a first lane line, and the position parameter of a second lane line belonging to the same group as the first lane line is searched for among the stored position parameters of a plurality of lane lines, based on the position parameter of the first lane line and a lane line constraint criterion; the first lane line and the second lane line are then displayed in the scene image based on their respective position parameters. Because the other lane line corresponding to the first lane line is determined by matching, rather than by directly displaying a lane line determined by a tracking algorithm, the method avoids the phenomenon that the displayed lane line does not coincide with the actual lane line when only one stable lane line exists in the current scene. The embodiment of the invention thus improves the accuracy of lane line display and helps avoid traffic accidents.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (7)

1. A method for displaying a lane line, the method comprising:
acquiring an interested area in a scene image, and carrying out image preprocessing on the interested area to obtain a binary image;
acquiring corresponding position parameters of pixel points with the gray scale value of 255 in the binarized image in a polar coordinate system;
determining the overlapping times of any one of the obtained position parameters, and sequencing the obtained position parameters according to the sequence of the overlapping times from large to small;
obtaining the top L position parameters from the sorting result, wherein L is a positive integer not less than 2;
acquiring position parameters corresponding to lane lines which are located in a preset area in the binary image and can be detected in previous continuous S frame scene images adjacent to the scene image from the L position parameters, wherein S is a positive integer greater than 1;
when the acquired position parameters do not meet the lane line constraint criterion and position parameters meeting the preset similarity constraint condition exist between the acquired position parameters and the position parameters of the stable lane lines, determining to detect one lane line from the scene image, wherein the lane line constraint criterion refers to the condition met by the same group of lane lines, the stable lane line refers to the lane lines which can be detected in the continuous M frames of scene images, M is a positive integer greater than 1, and the preset similarity constraint condition is used for determining whether the lane lines exist in the scene image;
when one lane line is detected from the scene image, the detected lane line is used as a first lane line, and the position parameter of a second lane line which belongs to the same group as the first lane line is searched from the stored position parameters of a plurality of lane lines on the basis of the position parameter of the first lane line and the lane line constraint criterion, wherein the first lane line is a left lane line or a right lane line;
when the position parameter of the second lane line exists in the position parameters of the plurality of lane lines, the first lane line and the second lane line are displayed in the scene image based on the position parameter of the first lane line and the position parameter of the second lane line.
2. The method of claim 1, wherein the searching for the location parameter of the second lane line that belongs to the same group as the first lane line from the stored location parameters of the plurality of lane lines based on the location parameter of the first lane line and the lane line constraint criteria comprises:
determining distance difference values and corresponding angle difference values between the first lane line and the plurality of lane lines based on the position parameters of the first lane line and the position parameters of the plurality of lane lines;
and when the distance difference smaller than the preset distance exists in the plurality of distance differences and the corresponding angle difference is smaller than the preset angle, determining that the position parameter of the second lane line exists in the position parameters of the plurality of lane lines.
3. The method of claim 1, wherein after detecting the lane lines in the scene image based on the hough transform, the lane line constraint criterion, and the preset similarity condition, further comprising:
determining the number of frames of a scene image in which a lane line is not detected in previous multiple frames of consecutive scene images adjacent to the scene image when the lane line is not detected from the scene image;
and if the determined frame number is less than T, displaying a stable lane line in the scene image based on the position parameter of the stable lane line, wherein the stable lane line refers to the lane line which can be detected in continuous M frames of scene images, and both T and M are positive integers greater than 1.
4. A lane line display apparatus, comprising:
the detection module is used for detecting the lane lines in the scene image based on Hough transform, lane line constraint criteria and preset similarity conditions, wherein the lane line constraint criteria refer to conditions met by the same group of lane lines, and the preset similarity constraints are used for determining whether the lane lines exist in the scene image or not;
the searching module is used for taking the detected lane line as a first lane line when one lane line is detected from the scene image, and searching the position parameters of a second lane line which belongs to the same group with the first lane line from the stored position parameters of a plurality of lane lines on the basis of the position parameters of the first lane line and the constraint criterion of the lane line, wherein the first lane line is a left lane line or a right lane line;
a first display module, configured to display the first lane line and the second lane line in the scene image based on the position parameter of the first lane line and the position parameter of the second lane line when the position parameter of the second lane line exists in the position parameters of the plurality of lane lines;
the detection module comprises:
the processing submodule is used for acquiring an interested area in the scene image and carrying out image preprocessing on the interested area to obtain a binary image;
the first obtaining submodule is used for obtaining corresponding position parameters of pixel points with the gray value of 255 in the binarized image in a polar coordinate system;
a third determining submodule, configured to determine that a lane line is detected from the scene image when the acquired position parameters do not meet the lane line constraint criterion and a position parameter meeting the preset similarity constraint condition exists between the acquired position parameters and a position parameter of a stable lane line, where the stable lane line is a lane line that can be detected in consecutive M-frame scene images, and M is a positive integer greater than 1;
the detection module further comprises:
the fourth determining submodule is used for determining the overlapping times of any one of the obtained position parameters and sequencing the obtained position parameters according to the sequence of the overlapping times from large to small;
the second obtaining submodule is used for obtaining the top L position parameters from the sorting result, wherein L is a positive integer not less than 2;
and the third obtaining submodule is used for obtaining the position parameters corresponding to the lane lines which are positioned in the preset area in the binary image and can be detected in the previous continuous S frame scene images adjacent to the scene image from L position parameters, wherein S is a positive integer larger than 1.
5. The apparatus of claim 4, wherein the lookup module comprises:
the first determining submodule is used for determining distance difference values and corresponding angle difference values between the first lane line and the plurality of lane lines on the basis of the position parameters of the first lane line and the position parameters of the plurality of lane lines;
and the second determining submodule is used for determining that the position parameter of the second lane line exists in the position parameters of the lane lines when the distance difference smaller than the preset distance exists in the plurality of distance differences and the corresponding angle difference is smaller than the preset angle.
6. A lane line display apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of any of the methods of claims 1-3.
7. A computer-readable storage medium having instructions stored thereon, wherein the instructions, when executed by a processor, implement the steps of any of the methods of claims 1-3.
CN201710653058.XA 2017-08-02 2017-08-02 Lane line display method and device and computer-readable storage medium Expired - Fee Related CN107451566B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710653058.XA CN107451566B (en) 2017-08-02 2017-08-02 Lane line display method and device and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710653058.XA CN107451566B (en) 2017-08-02 2017-08-02 Lane line display method and device and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN107451566A CN107451566A (en) 2017-12-08
CN107451566B true CN107451566B (en) 2020-07-24

Family

ID=60490771

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710653058.XA Expired - Fee Related CN107451566B (en) 2017-08-02 2017-08-02 Lane line display method and device and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN107451566B (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101608924B (en) * 2009-05-20 2011-09-14 电子科技大学 Method for detecting lane lines based on grayscale estimation and cascade Hough transform
CN103738243B (en) * 2013-10-29 2015-12-30 惠州华阳通用电子有限公司 A kind of lane departure warning method
CN104036246B (en) * 2014-06-10 2017-02-15 电子科技大学 Lane line positioning method based on multi-feature fusion and polymorphism mean value
CN104700072B (en) * 2015-02-06 2018-01-19 中国科学院合肥物质科学研究院 Recognition methods based on lane line historical frames
CN106128115B (en) * 2016-08-01 2018-11-30 青岛理工大学 Fusion method for detecting road traffic information based on double cameras
CN106803061A (en) * 2016-12-14 2017-06-06 广州大学 A kind of simple and fast method for detecting lane lines based on dynamic area-of-interest

Also Published As

Publication number Publication date
CN107451566A (en) 2017-12-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200724