CN108229406B - Lane line detection method, device and terminal - Google Patents


Info

Publication number
CN108229406B
CN108229406B (application CN201810024993.4A)
Authority
CN
China
Prior art keywords: lane line, line, pixel, value, image
Prior art date
Legal status: Active (the legal status is an assumption, not a legal conclusion; Google has not performed a legal analysis)
Application number
CN201810024993.4A
Other languages
Chinese (zh)
Other versions
CN108229406A (en)
Inventor
李阳 (Li Yang)
高语函 (Gao Yuhan)
Current Assignee
Hisense Co Ltd
Original Assignee
Hisense Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Co Ltd filed Critical Hisense Co Ltd
Priority to CN201810024993.4A
Publication of CN108229406A
Application granted
Publication of CN108229406B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/13: Edge detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/28: Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/48: Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20048: Transform domain processing
    • G06T 2207/20061: Hough transform
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20228: Disparity calculation for image-based rendering
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30248: Vehicle exterior or interior
    • G06T 2207/30252: Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30256: Lane; Road marking

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a lane line detection method, device, and terminal for the technical field of driving assistance. The method comprises: acquiring an image to be detected and the V disparity map corresponding to the image, and determining a candidate lane line in the image and a ground related line in the V disparity map; determining the second pixel point on the ground related line in the same row as a first pixel point on the candidate lane line, and determining the first pixel point as an effective pixel point if the absolute difference between the first disparity value of the first pixel point and the second disparity value of the second pixel point satisfies a first preset condition; and determining the candidate lane line as a target lane line if the proportion of effective pixel points on the candidate lane line is greater than a preset threshold. Applying this method eliminates the interference that obstacle contour lines in the image cause to lane line detection and improves the accuracy of the detection result.

Description

Lane line detection method, device and terminal
Technical Field
The application relates to the technical field of driving assistance, and in particular to a lane line detection method, device, and terminal.
Background
A lane departure warning system helps a driver reduce traffic accidents caused by unintended lane departure by issuing alerts, and lane line detection and recognition is a particularly important link in its workflow.
At present, lane lines are mainly identified in road images by exploiting their straight-line characteristics: the grayscale road image is binarized, straight lines are detected in the binarized image by Hough line detection, and the detected lines are then filtered by two parameters, line distance and inclination angle, to determine the lane lines. In practice, however, obstacles on the road surface interfere with this process, and the Hough detection algorithm often falsely detects part of an obstacle as a lane line, yielding inaccurate detection results.
Disclosure of Invention
In view of this, to solve the prior-art problem that the correct lane line cannot be detected due to interference from road-surface obstacles, the present application provides a lane line detection method, device, and terminal that determine accurate lane lines among the detected candidate lane lines and improve the accuracy of the detection result.
Specifically, the method is realized through the following technical scheme:
according to a first aspect of embodiments of the present application, there is provided a lane line detection method, the method including:
acquiring an image to be detected and the V disparity map corresponding to the image, and determining a candidate lane line in the image and a ground related line in the V disparity map; determining the second pixel point on the ground related line in the same row as a first pixel point on the candidate lane line, and determining the first pixel point as an effective pixel point if the absolute difference between the first disparity value of the first pixel point and the second disparity value of the second pixel point satisfies a first preset condition; and determining the candidate lane line as a target lane line if the proportion of effective pixel points on the candidate lane line is greater than a preset threshold.
Optionally, the first preset condition is that the absolute difference between the first disparity value of the first pixel point and the second disparity value of the second pixel point is less than or equal to a first difference value.
Optionally, determining the first pixel point as an effective pixel point if the absolute difference between the first disparity value of the first pixel point and the second disparity value of the second pixel point satisfies a first preset condition specifically includes: dividing the V disparity map into a plurality of sub V disparity maps along the direction of increasing disparity value; and, within each sub V disparity map, determining that the absolute difference between the first disparity value of the first pixel point and the second disparity value of the second pixel point is less than or equal to a second difference value, where the second difference value increases along the direction of increasing disparity value.
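The banded tolerance described above can be sketched as follows; the band width and base tolerance are illustrative assumptions, not values fixed by the patent:

```python
def split_bands(max_disp, band=16):
    """Partition the disparity axis [0, max_disp] into consecutive bands,
    one sub V-disparity map per band."""
    return [(lo, min(lo + band - 1, max_disp))
            for lo in range(0, max_disp + 1, band)]

def second_diff_for(d, band=16, base=1):
    """Hypothetical growing tolerance: the second difference value used in
    the band containing disparity d increases with d, because nearer ground
    points spread over more disparity values."""
    return base + d // band
```

With `band=16`, disparities 0..15 tolerate a difference of 1, disparities 16..31 tolerate 2, and so on.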
According to a second aspect of the embodiments of the present application, there is provided another lane line detection method, including:
acquiring an image to be detected, determining a candidate lane line in the image, and taking the pixel points of the candidate lane line in the same row as candidate pixel points; determining, according to the disparity values of the candidate pixel points, those candidate pixel points whose disparity values do not satisfy a second preset condition as interference pixel points; and determining the candidate lane line as a target lane line if the proportion of interference pixel points on the candidate lane line is smaller than a preset ratio.
Optionally, determining the candidate pixel points whose disparity values do not satisfy the second preset condition as interference pixel points includes: determining a neighborhood centered on a target pixel point according to a preset radius, where the target pixel point is the candidate pixel point corresponding to the maximum disparity value; and determining the target pixel point as an interference pixel point if no other candidate pixel point exists in the neighborhood.
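A minimal sketch of this isolated-point test, assuming each candidate pixel point of a row is given as a (column, disparity) pair and the neighborhood is a circle of the preset radius in that plane (the patent does not fix the coordinate space, so this representation is an assumption):

```python
import numpy as np

def is_isolated_max(candidates, radius=2.0):
    """Take the candidate pixel point with the largest disparity value and
    check whether any other candidate lies inside a circle of the preset
    radius around it; if not, it is flagged as an interference pixel point."""
    pts = np.asarray(candidates, dtype=float)   # rows of (column, disparity)
    target = pts[np.argmax(pts[:, 1])]
    dists = np.hypot(*(pts - target).T)
    # distance 0 is the target itself, so "alone" means exactly one hit
    return int((dists <= radius).sum()) == 1
```

A lone pixel with a far larger disparity than its row neighbors is flagged, while a tight cluster of similar disparities is kept.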
Optionally, determining the candidate pixel points whose disparity values do not satisfy the second preset condition as interference pixel points includes: calculating the mean and standard deviation of the disparity values of the candidate pixel points and, from them, determining the dispersion of each candidate pixel point's disparity value; and determining the candidate pixel point corresponding to the maximum disparity value as an interference pixel point if a neighborhood centered on the dispersion of the maximum disparity value contains no dispersion corresponding to any other disparity value.
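One hedged reading of this dispersion test treats the dispersion as a z-score against the row's mean and standard deviation and flags the maximum disparity when it stands far apart from the others; the threshold `k` is an illustrative assumption:

```python
import numpy as np

def max_disp_is_outlier(disps, k=2.0):
    """Flag the largest disparity value in a row as belonging to an
    interference pixel point when its distance from the mean, measured in
    standard deviations, exceeds k."""
    disps = np.asarray(disps, dtype=float)
    mu, sigma = disps.mean(), disps.std()
    if sigma == 0:
        return False            # all disparities identical: nothing isolated
    return (disps.max() - mu) / sigma > k
```

A row of nearly equal disparities passes, while one pixel with a much larger disparity (e.g. an obstacle contour crossing the row) is flagged.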
According to a third aspect of the embodiments of the present application, there is provided a lane line detection apparatus including:
the first acquisition module is used for acquiring an image to be detected and a V disparity map corresponding to the image, and determining a candidate lane line in the image and a ground related line in the V disparity map; an effective pixel point determining module, configured to determine a second pixel point on the ground-related line in the same row as a first pixel point on the candidate lane line, and if an absolute difference between a first parallax value of the first pixel point and a second parallax value of the second pixel point satisfies a first preset condition, determine the first pixel point as an effective pixel point; and the first target lane line determining module is used for determining the candidate lane line as a target lane line if the proportion of the effective pixel points on the candidate lane line is greater than a preset threshold value.
According to a fourth aspect of the embodiments of the present application, there is provided another lane line detecting apparatus including:
the second acquisition module is used for acquiring an image to be detected, determining a candidate lane line in the image and determining pixel points of the candidate lane line on the same line as candidate pixel points; the interference pixel point determining module is used for determining the candidate pixel points of which the parallax values do not meet a second preset condition as interference pixel points according to the parallax values of the candidate pixel points; and the second target lane line determining module is used for determining the candidate lane line as the target lane line if the proportion of the interference pixel points on the candidate lane line is smaller than a preset ratio.
According to a fifth aspect of the embodiments of the present application, there is provided a lane line detection terminal, including a memory, a processor, a communication interface, a camera assembly, and a communication bus; the memory, the processor, the communication interface and the camera assembly are communicated with each other through the communication bus; the camera assembly is used for collecting an image to be detected and sending the image to be detected to the processor through the communication bus; the memory is used for storing a computer program; the processor is configured to execute the computer program stored in the memory, and when the processor executes the computer program, the processor implements the steps of any lane line detection method provided in the embodiment of the present application on the image to be detected.
According to a sixth aspect of the embodiments of the present application, there is provided a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of any one of the lane line detection methods provided by the embodiments of the present application.
In one mode, since the lane line lies on the road surface while an obstacle stands above it, whether a detected candidate lane line is an interfering line or the target lane line can be judged by determining whether the candidate lane line lies on the road surface. A ground related line is determined in the V disparity map; for each first pixel point on the candidate lane line, the absolute difference between its disparity value and that of the pixel point on the ground related line in the same row is examined; if the absolute difference satisfies a first preset condition, the first pixel point is taken as an effective pixel point; finally, the target lane line is determined according to the proportion of effective pixel points.
In another mode, because the lane line lies on the road surface, the disparity values of lane line pixel points at the same horizontal position fluctuate little. Based on this, the application proposes that, in the disparity map, candidate pixel points of the candidate lane line in the same row whose disparity values do not satisfy a second preset condition are determined as interference pixel points, and the candidate lane line is determined as the target lane line only when the proportion of interference pixel points is less than a preset ratio.
In summary, the lane line detection methods provided by the application avoid interference from road-surface obstacles and improve the accuracy of the lane line detection result.
Drawings
FIG. 1 is an example of a road binarized image captured by an onboard camera;
FIG. 2 is an exemplary diagram of a lane line candidate obtained by Hough line detection in FIG. 1;
FIG. 3 is a disparity distribution diagram corresponding to the lane line candidate detected in FIG. 2;
fig. 4 is a flowchart of a lane line detection method according to a first embodiment of the present application;
fig. 5 is an exemplary diagram of determining an effective pixel point in a first manner according to the first embodiment of the present application;
fig. 6 is an exemplary diagram of determining an effective pixel point in a second manner according to the first embodiment of the present application;
fig. 7 is a flowchart of a lane line detection method according to a second embodiment of the present application;
fig. 8 is an exemplary diagram of determining candidate pixels according to a second embodiment of the present application;
FIG. 9 is a block diagram of one embodiment of a lane marking detection apparatus of the present application;
FIG. 10 is a block diagram of another embodiment of a lane marking detection apparatus of the present application;
fig. 11 is a hardware configuration diagram of a lane line detection terminal according to a fifth embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as recited in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms; they only distinguish one type of information from another. For example, first information could also be referred to as second information and, similarly, second information could be referred to as first information without departing from the scope of the present application. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
For convenience of understanding, before explaining the embodiments of the present invention in detail, terms related to the embodiments of the present invention will be explained.
Disparity (parallax) image: computed from left and right images captured simultaneously by a binocular camera. One of the two images serves as the reference image and the other as the comparison image. Each pixel point in the comparison image is matched with the pixel points at the same Y coordinate in the reference image, and the difference between the abscissas of each matched pair is calculated; this difference is the disparity value between the two pixel points. Taking the disparity value as the pixel value of the corresponding pixel point in the reference image yields a disparity image of the same size as the reference image.
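The matching step can be sketched with a minimal single-pixel absolute-difference search along the row; real systems use windowed or semi-global stereo matching, so this is only an illustration of the definition, with the function name and search range chosen here for the sketch:

```python
import numpy as np

def disparity_map(ref, cmp_img, max_disp=4):
    """Single-pixel absolute-difference matching: for each pixel of the
    reference image, try horizontal offsets d = 0..max_disp along the same
    row of the comparison image and keep the best-matching offset as the
    disparity value."""
    h, w = ref.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            # candidate offsets that keep the matched pixel inside the row
            costs = [abs(int(ref[y, x]) - int(cmp_img[y, x - d]))
                     for d in range(min(max_disp, x) + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

With the left image as the reference, a scene point visible at column x in the left image appears at column x - d in the right image, so shifting the reference columns left by 2 yields a disparity of 2 almost everywhere.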
V disparity map: obtained by horizontally compressing the disparity image while preserving its number of rows. Specifically, the ordinate stays unchanged and the abscissa becomes the disparity value: the pixel value at each point (x1, y1) of the V disparity map is the total number of pixels in row y1 of the disparity image whose disparity value equals x1.
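This definition translates directly into a per-row histogram of disparity values; a minimal sketch:

```python
import numpy as np

def v_disparity(disp, max_disp):
    """Row-wise histogram of a disparity map: cell (row, d) counts how many
    pixels in that row have disparity value d."""
    h = disp.shape[0]
    vmap = np.zeros((h, max_disp + 1), dtype=np.int32)
    for y in range(h):
        vals, counts = np.unique(disp[y], return_counts=True)
        vmap[y, vals] = counts
    return vmap
```

Because ground disparity shrinks smoothly with distance, the road surface projects into this map as a slanted line, which is what the ground related line fit recovers.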
A Lane Departure Warning System (LDWS) is an important component of automotive safety-assisted driving: it helps the driver reduce, or even avoid, traffic accidents caused by lane departure by issuing alerts. Lane line detection and recognition is an important link in its workflow, and the accuracy of the detection result directly affects the system's output.
Next, an application scenario related to the embodiment of the present invention will be described.
With urbanization and the popularization of automobiles, traffic problems have become increasingly prominent; cars are expected to be not only safe but also intelligent to some degree, so research has turned to driving-assistance systems aimed at unmanned, fully automatic, safe driving. Current driving-assistance systems process road-condition data acquired by radar, sensors, or cameras with image processing and computer vision techniques, predict pedestrians and obstacles ahead, and warn the driver or trigger emergency braking when a potential danger exists. The lane departure warning system is essential in assisted driving, and a wrong lane line detection result can cause false alarms.
As noted in the background, existing lane line detection typically binarizes the captured road image and then applies Hough line detection, using the straight-line characteristic of lane lines to identify them. In practice, however, the road contains many interfering objects, such as cars, fences, and curb stones. Because some of their pixel values exceed the binarization threshold, pixel points of these interfering objects survive binarization and disturb lane line detection.
For example, fig. 1 shows a binarized road image captured by a vehicle-mounted camera. The white pixel points inside dashed box 101 belong to a vehicle body, mostly its chassis and tires; Hough line detection can fit a slanted line through these points, which then competes with the slanted lines fitted to real lane lines. Fig. 2 shows the lane line candidates obtained from fig. 1 by Hough line detection: slope lines 201, 202, 203, and 204. Clearly, a vehicle body part other than a lane line has also been falsely detected as a lane line (slope line 204). In such a case, the prior art generally offers two ways to eliminate the interference of slope line 204 and determine the target lane line among the candidates:
in the first mode, if an image shot by a monocular camera is adopted, whether an interference straight line exists can be judged according to the detected geometric relations such as angles, distances, intersection positions and the like among candidate lane lines in the image, and when the interference straight line happens to meet the geometric relations, the interference straight line cannot be removed in the prior art, so that the lane line detection is inaccurate.
In the second mode, if an image captured by a binocular camera is used, a disparity map can be obtained through a stereo matching algorithm, and the disparity values of the white pixels each of the four candidate lane lines passes through can be collected into a disparity distribution per candidate. As shown in fig. 3, broken line 301 is the disparity distribution of slope line 201 in fig. 2, broken line 302 that of slope line 202, broken line 303 that of slope line 203, and broken line 304 that of slope line 204. Observing the four broken lines, the fluctuation of broken line 304 is clearly larger than that of the other three, and experience suggests that its candidate lane line is an interfering slanted line. This way of eliminating interfering lane lines, however, relies on visual judgment and is therefore quite arbitrary.
Hence, conventional lane line detection based on binarized images cannot eliminate interference quickly and accurately, and its results are inaccurate. The present application therefore provides a lane line detection method that avoids, as far as possible, the interference that road obstacles cause to lane line detection, and improves the accuracy of the detection result.
The following examples are provided to explain the lane line detection method provided in the present application.
The first embodiment is as follows:
referring to fig. 4, a flowchart of an embodiment of the lane line detection method according to the present application is shown, where the method includes the following steps:
step S201: the method comprises the steps of obtaining an image to be detected and a V disparity map corresponding to the image, and determining candidate lane lines in the image and ground related lines in the V disparity map.
Specifically, a binocular camera is usually mounted on the automobile to acquire images; it can be mounted at the front of the automobile on its longitudinal axis and calibrated after installation. While the automobile is driving, the binocular camera acquires images containing continuous road sections through its left and right cameras simultaneously. The image acquired by the left camera is called the left image and that acquired by the right camera the right image; either the left image serves as the reference image and the right image as the comparison image, or vice versa.
After the binocular camera acquires an image, it sends the image to the terminal, which processes it to obtain a disparity map and then computes the V disparity map from that map; for the specific steps of computing the disparity map and the V disparity map, refer to the prior art, which is not detailed here.
It should be noted that the computed disparity map and V disparity map have a mapping relationship: when the terminal receives images, it computes and stores a disparity map and a V disparity map for each frame, and in subsequent processing the target disparity map and target V disparity map can be located through this mapping relationship.
Optionally, in the embodiment of the present application, the grayscale image of the road image acquired by the camera may serve as the image to be detected, or a region of interest may be delimited on the grayscale image and the corresponding partial image used as the image to be detected; the application does not limit this choice. Taking the region-of-interest case as an example, those skilled in the art will understand that the region of interest can be determined on the grayscale image in various ways: it may be framed manually, it may be cut out at a preset height ratio (for example, the lower 3/4 of the image), or a road vanishing point may be determined and the portion below it taken as the region of interest. The application does not limit the specific process of determining the region of interest on the grayscale image.
After the image to be detected is obtained and its corresponding V disparity map determined, straight-line detection is performed in the image to be detected and the ground related line is determined in the V disparity map. Both the candidate lane lines in the image to be detected and the ground related line in the V disparity map can be determined with prior-art techniques, which are not limited here.
Optionally, since lane lines are generally white or yellow with large gray values while the road surface is close to black with small gray values, lane line edges can be detected with gradient information, for example a first-order difference or the Roberts, Sobel, Laplacian, or Canny operators (not described one by one); applying an edge detection operator to the image to be detected yields a binarized image.
Then straight lines are detected in the binarized image by the Hough transform to obtain the candidate lane lines. The basic principle of the Hough transform is to map pixels from image space into a parameter space, count collinear points there, and finally decide by a threshold whether they form a qualifying straight line. In a rectangular coordinate system, a straight line is written in the form of equation (1):
y=mx+b (1)
where m is the slope and b is the y-intercept; once m and b are determined, the line is uniquely determined. For a vertical line in the image, however, m becomes infinite, so another parameterization is used: describing the line with polar parameters instead of slope and intercept, it can be rewritten in the form of equation (2):
ρ = x·cosθ + y·sinθ (2)
where ρ is the Euclidean distance from the origin to the line and θ is the angle between the line's normal and the x-axis. Plotting θ on the abscissa and ρ on the ordinate yields the Hough space, represented by an accumulator matrix H. One point in the rectangular coordinate system corresponds to one sinusoid in Hough space. A straight line consists of many points, i.e. many sinusoids in Hough space, and these sinusoids intersect at one point (ρ0, θ0); substituting this point into equation (2) determines the unique straight line. Based on this principle, when identifying lines by the Hough transform, the straight lines representing lane lines are detected by finding local maxima in Hough space. Taking fig. 2 as an example, assume 4 candidate lane lines are obtained in the image to be detected.
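A minimal accumulator implementation of this voting scheme (a sketch of the standard algorithm, not the patent's code):

```python
import numpy as np

def hough_lines(binary, n_theta=180):
    """Vote every foreground pixel of a binarized image into (rho, theta)
    space using rho = x*cos(theta) + y*sin(theta); local maxima of the
    accumulator correspond to straight lines."""
    h, w = binary.shape
    thetas = np.deg2rad(np.arange(n_theta))   # theta in [0, 180) degrees
    diag = int(np.ceil(np.hypot(h, w)))       # largest possible |rho|
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(binary)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1   # one vote per (rho, theta)
    return acc, diag
```

A vertical line at column x = 3 then produces an accumulator peak at θ = 0, ρ = 3, and thresholding the peaks recovers the candidate lines.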
Next, the ground-related line can be determined in the V disparity map using existing techniques such as the least squares method or the RANSAC (RANdom SAmple Consensus) algorithm. Optionally, extracting the straight line where the ground lies with RANSAC in the V disparity map proceeds as follows: set a parameter model; points in the data that fit the model are regarded as inliers and the rest as outliers; repeatedly select a random subset of the data, estimate the model parameters from it, verify the remaining points against the estimated model, and score the model by its error rate on the inliers. After a fixed number of iterations, each generated model is either discarded because it has too few inliers or selected because it is better than the existing model, finally yielding a reasonably accurate model. As shown in fig. 5, a ground-related line 501 is obtained by fitting the points in the V disparity map.
For details of determining the candidate lane lines in the image to be detected by Hough transform and determining the ground-related line by RANSAC, those skilled in the art may refer to the related descriptions in the prior art; they are not repeated here.
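The RANSAC loop summarized above can be sketched as follows in slope-intercept form. The iteration count, inlier tolerance, and seed are illustrative; the patent does not fix these values:

```python
import random

def ransac_line(points, iters=200, inlier_tol=1.0, seed=0):
    """Fit a line y = m*x + b by the RANSAC loop: repeatedly hypothesize
    a model from a random minimal subset, count inliers, and keep the
    model with the largest consensus set."""
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:          # skip a degenerate (vertical) sample
            continue
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = sum(1 for x, y in points
                      if abs(y - (m * x + b)) <= inlier_tol)
        if inliers > best_inliers:
            best_model, best_inliers = (m, b), inliers
    return best_model, best_inliers
```

Fed ten points on y = 2x + 1 plus two far-off outliers, the loop recovers the line and reports ten inliers, i.e., the outliers are discarded exactly as the description above requires.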
Step S202: and determining a second pixel point on the ground related line on the same line as a first pixel point on the candidate lane line, and if the absolute difference between the first parallax value of the first pixel point and the second parallax value of the second pixel point meets a first preset condition, determining the first pixel point as an effective pixel point.
Because the lane lines lie on the ground while the interfering oblique lines are contour lines of obstacles at a certain height above the ground, interfering lines that are not on the road can be removed from the candidate lane lines based on this principle, solving the problem of inaccurate lane line detection in the prior art. Two ways of determining the valid pixel points are given below:
The first way: the first preset condition is that the absolute difference between the first parallax value of the first pixel and the second parallax value of the second pixel is smaller than or equal to a first difference value. Those skilled in the art will appreciate that the first difference value is preset empirically, for example 2. Assuming the parallax value of a pixel on a candidate lane line is D, the parallax value of the corresponding ground-related line on the row where the pixel is located is d, and the first difference value is T, whether the pixel on the candidate lane line is a valid pixel is judged according to formula (3); if it is a valid pixel its flag is marked 1, and finally the number of pixels with flag 1 on each candidate lane line is counted:
|D − d| ≤ T, flag = 1 (3)
As an example, continuing with fig. 2 and fig. 5, a coordinate system is established with the upper left corner of fig. 5 as the origin, the horizontal axis representing the disparity value and the vertical axis the row number. A point A is marked on the candidate lane line 204 of fig. 2; it maps to a point A' in the V disparity map of fig. 5. Next, the point B on the ground-related line with the same vertical coordinate as A' is determined, i.e., A' and B are on the same row of the V disparity map, and the absolute difference between the disparity values of A' and B is calculated. If the absolute difference is smaller than or equal to the first difference value T, the candidate lane line point A is a valid pixel; otherwise A is an invalid pixel. Repeating these steps, the valid pixels on the candidate lane lines are judged in turn.
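Formula (3) amounts to a per-row comparison; a minimal sketch (the function name and return shape are my own):

```python
def count_valid_pixels(cand_disp, ground_disp, T=2):
    """Apply formula (3): a candidate-lane-line pixel is valid (flag = 1)
    when |D - d| <= T against the ground-related line on the same row.
    cand_disp / ground_disp: per-row disparity values; T is the preset
    first difference value (2 here, as in the example above)."""
    flags = [1 if abs(D - d) <= T else 0
             for D, d in zip(cand_disp, ground_disp)]
    return flags, sum(flags)
```

With the disparities from the first rows of table 1 below (candidate 3, 7, 32 against ground 2, 4, 30) this yields flags 1, 0, 1.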
The second way: divide the V disparity map into a plurality of sub V disparity maps along the direction in which the disparity value changes from small to large; in each sub V disparity map, determine whether the absolute difference between the first disparity value of the first pixel and the second disparity value of the second pixel is smaller than or equal to a second difference value, where the second difference value increases along the direction in which the disparity values change from small to large.
Because the fluctuation of the road-related line follows the rule "large when near, small when far", the V disparity map is divided into several sub V disparity maps. A sub V disparity map with small disparity values corresponds to the far region of the image, where the line fluctuates little, so its second difference value is set small; a sub V disparity map with large disparity values corresponds to the near region, where the line fluctuates more, so its second difference value is set large. Whether a pixel on a candidate lane line is a valid pixel is judged according to formula (4), where t_i is the second difference value set for the i-th (i = 1, 2, ...) sub V disparity map, and the 1st sub V disparity map represents the farthest region, hence t_1 < t_2 < ... < t_i.
|D − d| ≤ t_i, flag = 1 (4)
For example, assume the V disparity map is evenly divided into two sub V disparity maps, as shown in fig. 6. For convenience of description, they are named the first sub-map 61 (the farther region) and the second sub-map 62 (the nearer region); the second difference value set for the first sub-map 61 is difference value one (a smaller value, e.g., 2), and the second difference value set for the second sub-map 62 is difference value two (a larger value, e.g., 11). Then, in the first sub-map 61, it is judged whether the absolute difference between the disparity value of a candidate lane line point and that of the ground-related line point in the same row is less than or equal to 2; as shown in fig. 6, the points between the broken lines 611 and 612 are the points in the first sub-map 61 satisfying the condition. In the second sub-map 62, it is judged whether the absolute difference between the disparity value of a candidate lane line point and that of the ground-related line point in the same row is less than or equal to 11; as shown in fig. 6, the points between the broken lines 621 and 622 are the points in the second sub-map 62 satisfying the condition.
For example, whether a pixel on a candidate lane line is a valid pixel may be recorded in the manner shown in tables 1 and 2 below. Of course, the present application does not limit how valid pixels are recorded and stored; those skilled in the art may choose flexibly:
TABLE 1 Determination of valid pixels in the first sub-map

Row number | Disparity value of ground-related line | Disparity value of candidate lane line | flag
1 | 2 | 3 | 1
2 | 4 | 7 | 0
... | ... | ... | ...
m | 30 | 32 | 1
TABLE 2 Determination of valid pixels in the second sub-map

Row number | Disparity value of ground-related line | Disparity value of candidate lane line | flag
m+1 | 32 | 36 | 1
m+2 | 35 | 43 | 1
... | ... | ... | ...
n | 63 | 76 | 0
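The per-sub-map check of formula (4) can be sketched as follows, assuming each sub map is described by the last row it covers and its own second difference value t_i. The helper names and the two-sub-map example values (boundary row and thresholds 2 and 11) are illustrative:

```python
def submap_threshold(row, boundaries, thresholds):
    """Pick the second difference value t_i of the sub V disparity map
    that contains `row`. `boundaries` lists the last row of each sub map
    except the final one; thresholds increase toward the near field,
    i.e. t1 < t2 < ..."""
    for bound, t in zip(boundaries, thresholds):
        if row <= bound:
            return t
    return thresholds[-1]

def is_valid(row, D, d, boundaries, thresholds):
    """Formula (4): flag = 1 when |D - d| <= t_i for the sub map of `row`."""
    return 1 if abs(D - d) <= submap_threshold(row, boundaries, thresholds) else 0
```

With a boundary at row 100 and thresholds (2, 11), the rows of tables 1 and 2 reproduce: row 2 with disparities 7 vs 4 fails (difference 3 > 2), while row m+1 with 36 vs 32 passes (difference 4 ≤ 11).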
Step S203: and if the proportion of the effective pixel points on the candidate lane line is greater than a preset threshold value, determining the candidate lane line as a target lane line.
Specifically, through step S202 the number of pixels with flag 1 on each candidate lane line is determined; these are the valid pixels. The total number of pixels on each candidate lane line is also counted, and the ratio of valid pixels to total pixels determines whether the candidate lane line is an interfering line or a target lane line: an interfering line is deleted, and a target lane line is retained.
For example, next, taking fig. 2 as an example, the proportions of the effective pixel points on the candidate lane lines 201 and 204 are respectively determined, as shown in the following table 3:
TABLE 3 Proportion of valid pixels on each candidate lane line

Candidate lane line 201 | Candidate lane line 202 | Candidate lane line 203 | Candidate lane line 204
87.2% | 90.3% | 91.5% | 10.2%
If the preset threshold is 80%, it can be seen from table 3 that the proportion of valid pixels on candidate lane lines 201-203 exceeds 80%, so they are determined as target lane lines, while the proportion on candidate lane line 204 is only 10.2%, so it is an interfering line and is deleted.
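Step S203's ratio test can be sketched as follows; representing each candidate as a name mapped to its per-pixel flag list is an assumption for illustration:

```python
def filter_target_lanes(candidates, threshold=0.8):
    """Keep a candidate lane line as a target lane line when the
    proportion of valid pixels (flag = 1) exceeds the preset threshold
    (80% here, as in the example above).
    candidates: mapping from a candidate's name to its flag list."""
    targets = []
    for name, flags in candidates.items():
        ratio = sum(flags) / len(flags)
        if ratio > threshold:
            targets.append(name)
    return targets
```

A candidate with 90% valid pixels survives; one with 10% (like line 204 in table 3) is dropped.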
This concludes the first embodiment of the present invention. Since a lane line lies on the ground, its disparity values should match those of the ground. Accordingly, whether a pixel of a candidate lane line is valid is judged from the absolute difference between its disparity value and the disparity value of the point on the ground-related line in the corresponding row; the proportion of valid pixels on the candidate lane line is then used to identify and eliminate interfering lines among the candidates, improving the accuracy of lane line detection.
Example two:
Another way of implementing lane line detection is presented: instead of a V disparity map, a binarized image is used together with disparity values to determine which candidates are interfering lane lines. Fig. 7 is a flowchart of the lane line detection method in the second embodiment of the present application; the steps of the second embodiment are described in detail with reference to fig. 7:
step S301, acquiring an image to be detected, determining a candidate lane line in the image, and determining pixel points of the candidate lane line on the same line as candidate pixel points.
Specifically, the candidate lane lines may be determined with reference to step S201 in the first embodiment, which is not repeated here. As shown in fig. 8, 4 candidate lane lines are detected in step S301, and a row is selected from the binarized image, yielding 4 candidate pixels, denoted K1-K4.
Step S302, according to the parallax value of the candidate pixel point, determining the candidate pixel point of which the parallax value does not meet a second preset condition as an interference pixel point.
Specifically, following the above example, the disparity values corresponding to the candidate pixels K1-K4 can be determined in the disparity image according to their positions in the binarized image; assume they are 12, 12, 12, and 20. Then, based on the disparity values of the 4 candidate pixels, two methods for determining the interference pixel are provided:
the first method is as follows:
determining a neighborhood taking a target pixel point as a center according to a preset radius, wherein the target pixel point is a candidate pixel point corresponding to the maximum parallax value; and if other candidate pixel points do not exist in the neighborhood except the target pixel point, determining the target pixel point as the interference pixel point.
Continuing the above example, among the four candidate pixels the disparity value of candidate pixel K4 is the largest, so the target pixel is K4. Assuming the preset radius is 2, a neighborhood centered on the target pixel with radius 2 is determined, and the other candidate pixels K1-K3 are searched for within it, i.e., it is judged whether the absolute difference between each other candidate pixel's disparity value and the target pixel's is less than 2. Clearly, K1-K3 do not fall into the neighborhood, so the target pixel K4 is determined as an interference pixel; similarly, its flag may be marked 0.
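The first method can be sketched as follows; the function name and return convention are illustrative, and radius 2 matches the example:

```python
def interference_by_neighborhood(disps, radius=2):
    """Mode one above: take the candidate pixel with the largest disparity
    on the row; flag it as interference when no other candidate's
    disparity falls within `radius` of it. Returns the flagged index,
    or None when a neighbour exists."""
    k = max(range(len(disps)), key=lambda i: disps[i])
    for i, d in enumerate(disps):
        if i != k and abs(d - disps[k]) < radius:
            return None   # another candidate lies in the neighborhood
    return k
```

For the example disparities 12, 12, 12, 20 the pixel K4 (index 3) is flagged; if another candidate sat at disparity 19, nothing would be flagged.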
The second method comprises the following steps:
calculating the mean value and the standard deviation of the disparity values of the candidate pixel points, and determining the dispersion of each candidate pixel point's disparity value; and if a neighborhood centered on the dispersion corresponding to the maximum disparity value contains no dispersion corresponding to the other disparity values, determining the candidate pixel point corresponding to the maximum disparity value as the interference pixel point.
For example, continuing the above example, the mean of the four points is obtained according to equation (5) and the standard deviation according to equation (6); for the four candidate pixels above, the mean is μ = 14 and σ ≈ 6.9. The degree of deviation of each candidate pixel is then calculated according to equation (7): o₁ ≈ −0.29, o₂ ≈ −0.29, o₃ ≈ −0.29, o₄ ≈ 0.86. The maximum deviation o₄ ≈ 0.86 is determined; a neighborhood centered on this maximum deviation with radius 0.3 is taken, and since the deviations of the other three candidate pixels do not fall in this neighborhood, the candidate pixel with o₄ ≈ 0.86 is determined as an interference pixel and its flag is marked 0.
μ = (d_1 + d_2 + ... + d_n)/n (5)

σ = √((d_1 − μ)² + (d_2 − μ)² + ... + (d_n − μ)²) (6)

o_i = (d_i − μ)/σ (7)

where d_i is the disparity value of the i-th candidate pixel point and n is the number of candidate pixel points on the row.
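The second method can be sketched as follows. Note the dispersion scale is computed as the square root of the sum of squared deviations, the form consistent with the worked numbers above (σ ≈ 6.9 for disparities 12, 12, 12, 20); the function name and return values are illustrative:

```python
import math

def interference_by_dispersion(disps, radius=0.3):
    """Mode two above: compute the mean mu, the scale
    sigma = sqrt(sum((d - mu)^2)), and the per-point deviation
    o_i = (d_i - mu) / sigma; flag the maximum-deviation point as
    interference when no other o_j lies within `radius` of it."""
    mu = sum(disps) / len(disps)
    sigma = math.sqrt(sum((d - mu) ** 2 for d in disps))
    o = [(d - mu) / sigma for d in disps]
    k = max(range(len(o)), key=lambda i: o[i])
    if all(abs(o[i] - o[k]) >= radius for i in range(len(o)) if i != k):
        return k, o          # flagged index plus the deviations
    return None, o
```

For 12, 12, 12, 20 this reproduces the worked example: deviations ≈ −0.29, −0.29, −0.29, 0.86, with index 3 flagged as interference.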
Step S303, if the proportion of the interference pixel points on the candidate lane line is smaller than a preset ratio, determining the candidate lane line as a target lane line.
Specifically, according to step S302, the interference pixel points on the candidate lane lines can be determined, so that the proportion of the interference pixel points on each candidate lane line can be determined, and if the proportion is smaller than a preset ratio, for example, 10%, the candidate lane line is determined as the target lane line.
These are the implementation steps of the second embodiment of the present application. Since normal lane lines all lie on the road surface, the disparity values of their pixels on the same image row are all similar. On this basis, whether interference pixels exist on a candidate lane line is judged from the dispersion among the candidate pixels of the same row; and to tolerate noise, some redundancy is allowed, namely the candidate lane line is determined as a target lane line only when the proportion of interference pixels is smaller than the preset ratio.
Example three:
referring to fig. 9, a block diagram of an embodiment of a lane line detection apparatus according to the present application is shown, where the apparatus may include:
a first obtaining module 901, configured to obtain an image to be detected and a V disparity map corresponding to the image, and determine a candidate lane line in the image and a ground related line in the V disparity map;
an effective pixel point determining module 902, configured to determine a second pixel point on the ground-related line in the same row as a first pixel point on the candidate lane line, and if an absolute difference between a first parallax value of the first pixel point and a second parallax value of the second pixel point satisfies a first preset condition, determine the first pixel point as an effective pixel point;
optionally, the first preset condition is: and the absolute difference value between the first parallax value of the first pixel point and the second parallax value of the second pixel point is less than or equal to the first difference value.
Optionally, the effective pixel point determining module 902 is further configured to divide the V disparity map to obtain a plurality of sub V disparity maps according to a direction in which the disparity value changes from small to large; and in the sub-V disparity map, determining that the absolute difference value between the first disparity value of the first pixel point and the second disparity value of the second pixel point is smaller than or equal to a second difference value, wherein the second difference value is increased along the direction of the change of the disparity values from small to large.
A first target lane line determining module 903, configured to determine the candidate lane line as a target lane line if the proportion of the effective pixels on the candidate lane line is greater than a preset threshold.
The above is a specific description of the third embodiment, and each module may refer to the lane line detection method described in the first embodiment.
Example four:
referring to fig. 10, a block diagram of another embodiment of the lane line detection apparatus according to the present application is shown, where the apparatus may include:
a second obtaining module 1001, configured to obtain an image to be detected, determine a candidate lane line in the image, and determine a pixel point of the candidate lane line on the same row as a candidate pixel point;
an interference pixel determining module 1002, configured to determine, according to the disparity value of the candidate pixel, a candidate pixel whose disparity value does not satisfy a second preset condition as an interference pixel.
Optionally, the interference pixel determining module 1002 is further configured to determine, according to a preset radius, a neighborhood centered on a target pixel, where the target pixel is a candidate pixel corresponding to the maximum disparity value; and if other candidate pixel points do not exist in the neighborhood except the target pixel point, determining the target pixel point as the interference pixel point.
Optionally, the interference pixel determining module 1002 is further configured to calculate the mean value and the standard deviation of the disparity values of the candidate pixel points and determine the dispersion of each candidate pixel point's disparity value; and if a neighborhood centered on the dispersion corresponding to the maximum disparity value contains no dispersion corresponding to the other disparity values, determine the candidate pixel point corresponding to the maximum disparity value as the interference pixel point.
A second target lane line determining module 1003, configured to determine the candidate lane line as the target lane line if the proportion of the interference pixel points on the candidate lane line is smaller than a preset ratio.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
The embodiment of the lane line detection device can be applied to the lane line detection terminal. The device embodiments may be implemented by software, or by hardware, or by a combination of hardware and software. The software implementation is taken as an example, and as a device in a logical sense, a processor of the lane line detection terminal where the device is located reads corresponding computer program instructions in the nonvolatile memory into the memory for operation.
Example five:
As shown in fig. 11, a hardware configuration diagram of a lane line detection terminal according to the fifth embodiment of the present invention, the processor 1101 is the control center of the lane line detection device 1100. It connects the various parts of the entire device through various interfaces and lines, and performs the functions and data processing of the device 1100 by running or executing the software programs and/or modules stored in the memory 1102 and calling the data stored in the memory 1102, thereby monitoring the device as a whole.
Optionally, processor 1101 may include (not shown in FIG. 11) one or more processing cores; optionally, the processor 1101 may integrate an application processor and a modem processor, wherein the application processor mainly handles operating systems, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 1101.
The memory 1102 may be used to store software programs and modules, and the processor 1101 executes various functional applications and data processing by operating the software programs and modules stored in the memory 1102. The memory 1102 mainly includes (not shown in fig. 11) a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the lane line detection apparatus 1100 (such as a captured image, a calculated parallax image, or a processed grayscale image), and the like.
Further, the memory 1102 may include (not shown in FIG. 11) high speed random access memory, and may also include (not shown in FIG. 11) non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, memory 1102 may also include (not shown in FIG. 11) a memory controller to provide processor 1101 with access to memory 1102.
In some embodiments, the apparatus 1100 may further include: a peripheral interface 1103 and at least one peripheral. The processor 1101, memory 1102 and peripheral interface 1103 may be connected by a communication bus or signal line (not shown in fig. 11). Various peripheral devices may be connected to the peripheral interface 1103 by communication buses or signal lines. Specifically, the peripheral device may include: at least one of a radio frequency component 1104, a touch screen display 1105, a camera component 1106, an audio component 1107, a positioning component 1108, and a power component 1109.
Wherein the camera assembly 1106 is used to collect an image to be detected. Optionally, camera head assembly 1106 may include at least two cameras. In some embodiments, the at least two cameras may be left and right cameras, respectively, of a binocular camera.
In some embodiments, camera assembly 1106 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
In addition to the hardware illustrated in fig. 11, the lane line detection terminal where the device is located in the embodiment may also include other hardware, which is not described again, generally according to the actual function of the lane line detection terminal.
It can be understood by those skilled in the art that the lane line detection terminal illustrated in fig. 11 can be applied to an automobile, and can also be applied to other devices such as a computer and a smart phone, which is not limited in the present application.
The present application further provides a computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the steps of any lane line detection method provided in the embodiments of the present application are implemented.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (6)

1. A lane line detection method, comprising:
acquiring an image to be detected and a V disparity map corresponding to the image, and determining a candidate lane line in the image and a ground related line in the V disparity map;
determining second pixel points which are positioned on the ground related line on the same line with the first pixel points on the candidate lane line, and dividing the V disparity map according to the direction of the disparity value from small to large to obtain a plurality of sub V disparity maps;
in the sub-V disparity map, determining that an absolute difference value between a first disparity value of the first pixel and a second disparity value of the second pixel is smaller than or equal to a second difference value, wherein the second difference value increases along a direction that the disparity values change from small to large;
if the absolute difference value between the first parallax value of the first pixel point and the second parallax value of the second pixel point meets a first formula, determining the first pixel point as an effective pixel point; the first formula is:

|D − d| ≤ t_i, flag = 1

where t_i denotes the second difference value set for the i-th (i = 1, 2, ...) sub V disparity map, D is the parallax value of the pixel on the candidate lane line, and d is the parallax value of the corresponding ground related line on the line where the pixel is located; if the pixel is an effective pixel point, flag is marked as 1;
and if the proportion of the effective pixel points on the candidate lane line is greater than a preset threshold value, determining the candidate lane line as a target lane line.
2. The method of claim 1, wherein the effective pixel points on the candidate lane line are recorded in a table, and the table at least comprises: the line number, the disparity value of the ground related line, the disparity value of the candidate lane line, and the flag.
3. The method of claim 1, wherein the proportion of effective pixel points on the candidate lane lines is recorded in a tabular manner.
4. A lane line detection apparatus, characterized in that the apparatus comprises:
the first acquisition module is used for acquiring an image to be detected and a V disparity map corresponding to the image, and determining a candidate lane line in the image and a ground related line in the V disparity map;
the effective pixel point determining module is used for determining second pixel points which are positioned on the ground related line on the same row with the first pixel points on the candidate lane line, and dividing the V disparity map according to the direction of the disparity value changing from small to large to obtain a plurality of sub V disparity maps;
in the sub-V disparity map, determining that an absolute difference value between a first disparity value of the first pixel and a second disparity value of the second pixel is smaller than or equal to a second difference value, wherein the second difference value increases along a direction that the disparity values change from small to large;
if the absolute difference value between the first parallax value of the first pixel point and the second parallax value of the second pixel point meets a first formula, determine the first pixel point as an effective pixel point; the first formula is:

|D − d| ≤ t_i, flag = 1

where t_i denotes the second difference value set for the i-th (i = 1, 2, ...) sub V disparity map, D is the parallax value of the pixel on the candidate lane line, and d is the parallax value of the corresponding ground related line on the line where the pixel is located; if the pixel is an effective pixel point, flag is marked as 1;
and the first target lane line determining module is used for determining the candidate lane line as a target lane line if the proportion of the effective pixel points on the candidate lane line is greater than a preset threshold value.
5. A lane line detection terminal is characterized by comprising a memory, a processor, a communication interface, a camera assembly and a communication bus;
the memory, the processor, the communication interface and the camera assembly are communicated with each other through the communication bus;
the camera assembly is used for collecting an image to be detected and sending the image to be detected to the processor through the communication bus;
the memory is used for storing a computer program;
the processor is configured to execute the computer program stored in the memory, and when the processor executes the computer program, the processor implements the steps of the method according to any one of claims 1 to 3 on the image to be detected.
6. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 3.
CN201810024993.4A 2018-01-11 2018-01-11 Lane line detection method, device and terminal Active CN108229406B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810024993.4A CN108229406B (en) 2018-01-11 2018-01-11 Lane line detection method, device and terminal


Publications (2)

Publication Number Publication Date
CN108229406A CN108229406A (en) 2018-06-29
CN108229406B true CN108229406B (en) 2022-03-04

Family

ID=62640556

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810024993.4A Active CN108229406B (en) 2018-01-11 2018-01-11 Lane line detection method, device and terminal

Country Status (1)

Country Link
CN (1) CN108229406B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109145751A (en) * 2018-07-23 2019-01-04 安徽淘云科技有限公司 Page turning detection method and device
CN109711242B (en) * 2018-10-31 2021-04-20 百度在线网络技术(北京)有限公司 Lane line correction method, lane line correction device, and storage medium
CN109583327A (en) * 2018-11-13 2019-04-05 青岛理工大学 A kind of binocular vision wheat seeding trace approximating method
CN109583418B (en) * 2018-12-13 2021-03-12 武汉光庭信息技术股份有限公司 Lane line deviation self-correction method and device based on parallel relation
WO2020132965A1 (en) * 2018-12-26 2020-07-02 深圳市大疆创新科技有限公司 Method and apparatus for determining installation parameters of on-board imaging device, and driving control method and apparatus
CN111738034B (en) * 2019-03-25 2024-02-23 杭州海康威视数字技术股份有限公司 Lane line detection method and device
CN111443704B (en) * 2019-12-19 2021-07-06 苏州智加科技有限公司 Obstacle positioning method and device for automatic driving system
CN111460072B (en) * 2020-04-01 2023-10-03 北京百度网讯科技有限公司 Lane line detection method, device, equipment and storage medium
CN113139399B (en) * 2021-05-13 2024-04-12 阳光电源股份有限公司 Image wire frame identification method and server

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679121B (en) * 2012-09-14 2017-04-12 株式会社理光 Method and system for detecting roadside using visual difference image
CN103679127B (en) * 2012-09-24 2017-08-04 株式会社理光 The method and apparatus for detecting the wheeled region of pavement of road
CN104166834B (en) * 2013-05-20 2017-10-10 株式会社理光 Pavement detection method and apparatus
CN104376297B (en) * 2013-08-12 2017-06-23 株式会社理光 The detection method and device of the line style Warning Mark on road
CN105975957B (en) * 2016-05-30 2019-02-05 大连理工大学 A kind of road plane detection method based on lane line edge

Also Published As

Publication number Publication date
CN108229406A (en) 2018-06-29

Similar Documents

Publication Publication Date Title
CN108229406B (en) Lane line detection method, device and terminal
CN110861639B (en) Parking information fusion method and device, electronic equipment and storage medium
CN108629292B (en) Curved lane line detection method and device and terminal
CN110660186B (en) Method and device for identifying target object in video image based on radar signal
US9846812B2 (en) Image recognition system for a vehicle and corresponding method
CN106326822B (en) Method and device for detecting lane line
WO2019000945A1 (en) On-board camera-based distance measurement method and apparatus, storage medium, and electronic device
CN112598922B (en) Parking space detection method, device, equipment and storage medium
CN108416306B (en) Continuous obstacle detection method, device, equipment and storage medium
CN108596899B (en) Road flatness detection method, device and equipment
CN108197590B (en) Pavement detection method, device, terminal and storage medium
Youjin et al. A robust lane detection method based on vanishing point estimation
CN107748882B (en) Lane line detection method and device
CN111213153A (en) Target object motion state detection method, device and storage medium
CN108052921B (en) Lane line detection method, device and terminal
CN111243003A (en) Vehicle-mounted binocular camera and method and device for detecting road height limiting rod
CN113255444A (en) Training method of image recognition model, image recognition method and device
CN110515376B (en) Evaluation method, terminal and storage medium for track deduction correction
CN115578468A (en) External parameter calibration method and device, computer equipment and storage medium
CN113380038A (en) Vehicle dangerous behavior detection method, device and system
CN108090425B (en) Lane line detection method, device and terminal
CN112417976B (en) Pavement detection and identification method and device, intelligent terminal and storage medium
CN108615025B (en) Door identification and positioning method and system in home environment and robot
CN108416305B (en) Pose estimation method and device for continuous road segmentation object and terminal
CN112400094B (en) Object detecting device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant