CN114120258A - Lane line identification method and device and storage medium - Google Patents

Lane line identification method and device and storage medium

Info

Publication number
CN114120258A
Authority
CN
China
Prior art keywords
lane line
coordinate system
lane
point
road surface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210088440.1A
Other languages
Chinese (zh)
Other versions
CN114120258B (en)
Inventor
敖争光
刘国清
杨广
王启程
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Youjia Innovation Technology Co.,Ltd.
Original Assignee
Shenzhen Minieye Innovation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Minieye Innovation Technology Co Ltd
Priority to CN202210088440.1A
Publication of CN114120258A
Application granted
Publication of CN114120258B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a lane line identification method and device and a storage medium. The method comprises the following steps: acquiring a plurality of lane line sampling point pairs of a flat road surface and a slope road surface in an image coordinate system; acquiring a conversion relationship between the image coordinate system and a vehicle body coordinate system, and obtaining the lane line identification point pairs of the flat road surface from the lane line sampling point pairs according to the conversion relationship, wherein each lane line identification point pair comprises a left identification point and a right identification point corresponding to the left identification point; calculating the lane width corresponding to the lane line identification point pairs of the flat road surface according to the difference between the X-axis coordinates of the left and right identification points; calculating a point set of lane line identification point pairs corresponding to the lane line sampling points of the gradient road surface according to the conversion relationship between the image coordinate system and the vehicle body coordinate system and the lane width; and performing polynomial function fitting on the point set to obtain a lane line fitting equation. The embodiment of the invention can effectively improve the accuracy of lane line identification on gradient road surfaces.

Description

Lane line identification method and device and storage medium
Technical Field
The invention relates to the technical field of automatic driving, in particular to a lane line identification method, a lane line identification device and a storage medium.
Background
In intelligent driving products, lane line identification accuracy directly affects the deployment of intelligent driving functions (LKA, LCC and the like): the lane lines describe the relative position of the vehicle within the lane, on which the control algorithms are then built. Existing lane line identification methods identify lane lines on the premise that the road surface where the ego vehicle is located is flat; in actual scenarios, however, the road surface is not always flat, so the lane line identification accuracy of existing methods is low.
Disclosure of Invention
The invention provides a lane line identification method, a lane line identification device and a storage medium, which aim to solve the technical problem that the existing lane line identification method is low in lane line identification precision.
One embodiment of the present invention provides a lane line identification method, including:
acquiring a plurality of lane line sampling point pairs of a flat road surface and a slope road surface under an image coordinate system, wherein each sampling point pair comprises a left side sampling point and a right side sampling point corresponding to the left side sampling point;
acquiring a conversion relation between an image coordinate system and a vehicle body coordinate system, and converting a lane line sampling point pair of the flat road surface from the image coordinate system to the vehicle body coordinate system according to the conversion relation to obtain a lane line identification point pair of the flat road surface, wherein the lane line identification point pair comprises a left side identification point and a right side identification point corresponding to the left side identification point;
calculating to obtain the lane width corresponding to the lane line identification point pair of the flat road surface according to the difference between the X-axis coordinates of the left side identification point and the right side identification point;
calculating to obtain a point set of lane line identification point pairs corresponding to the lane line sampling points on the gradient road surface according to the conversion relation between the image coordinate system and the vehicle body coordinate system and the lane width;
and performing polynomial function fitting on the point set to obtain a lane line fitting equation.
Further, acquiring a conversion relation between the image coordinate system and the vehicle body coordinate system, specifically:
based on the camera pinhole imaging principle, the conversion relation between the image coordinate system and the vehicle body coordinate system is obtained.
Further, based on the camera pinhole imaging principle, the conversion relation between the image coordinate system and the vehicle body coordinate system is obtained, specifically:
according to the pinhole imaging principle, we have:
Z_C · [u, v, 1]^T = [f_u, 0, u_0, 0; 0, f_v, v_0, 0; 0, 0, 1, 0] · [R, t; 0^T, 1] · [X, Y, Z, 1]^T
let:
A_{3x4} = [f_u, 0, u_0, 0; 0, f_v, v_0, 0; 0, 0, 1, 0] · [R, t; 0^T, 1], whose row vectors are a_1, a_2, a_3;
then the conversion relationship between the image coordinate system and the vehicle body coordinate system is:
Z_C · [u, v, 1]^T = A_{3x4} · [X, Y, Z, 1]^T
wherein (u, v) are the image coordinate system coordinates, (X, Y, Z) are the vehicle body coordinate system coordinates, Z_C is the Z-direction coordinate of the point in the camera coordinate system, A_{3x4} is a matrix with 3 rows and 4 columns, a_1, a_2, a_3 are the row vectors of A_{3x4}, f_u and f_v are the normalized focal lengths of the camera in the lateral and longitudinal directions respectively, (u_0, v_0) is the principal point, R is the rotation matrix of the camera, t is the displacement, and 0^T is a zero row vector.
Further, a conversion relation between the image coordinate system and the vehicle body coordinate system is obtained, and the lane line sampling point pairs of the flat road surface are converted from the image coordinate system to the vehicle body coordinate system according to the conversion relation to obtain the lane line identification point pairs of the flat road surface, specifically:
setting the set of the lane line sampling point pairs as: tracelane={[UV0,left, UV0,right],[UV1,left, UV1,right], ..., [UVn,left, UVn,right]};
Wherein, UVn,leftFor the nth left sample point, UVn,rightIs the nth right sampling point, U is the abscissa of the image coordinate system, and V is the ordinate of the image coordinate system;
according to the conversion relation between the image coordinate system and the vehicle body coordinate system, the lane line sampling point pair [ UV ]n,left, UVn,right]Converting into lane line identification point pairs [ XYZ ] under a vehicle body coordinate systemn,left,XYZn, right]Wherein XYZn,leftFor the nth left recognition point, XYZn, rightIs the nth right identification point.
Further, calculating a lane width corresponding to the lane line identification point pair of the flat road surface according to a difference between the X-axis coordinates of the left side identification point and the right side identification point, specifically:
W_{lane,n} = XYZ_{n,right}.x - XYZ_{n,left}.x = x_right - x_left
wherein W_{lane,n} is the lane width corresponding to the nth lane line identification point pair, x_right is the X-axis coordinate of the right identification point, and x_left is the X-axis coordinate of the left identification point.
Further, according to the conversion relationship between the image coordinate system and the vehicle body coordinate system and the lane width, a point set of a lane line identification point pair corresponding to a lane line sampling point on the gradient road surface is obtained by calculation, specifically:
setting a cost function F_cost as:
F_cost = min{x_right - x_left - W_{lane,n}}
Let F_cost = 0, and calculate the point set of the lane line identification point pairs by combining the conversion relation, wherein the Z-axis coordinate of each lane line identification point is the gradient height of that lane line identification point.
Further, performing polynomial function fitting on the point set to obtain a lane line fitting equation, specifically:
and performing polynomial function fitting on the point set by adopting a least square fitting algorithm to obtain a lane line fitting equation.
Further, acquiring a plurality of lane line sampling point pairs under an image coordinate system specifically comprises:
acquiring an image set of a target area, performing target detection on a lane line by adopting deep learning or machine learning, and detecting to obtain a left lane line and a right lane line;
and simultaneously acquiring sampling points of the left lane line and sampling points of the right lane line at preset time intervals to obtain a plurality of lane line sampling point pairs.
An embodiment of the present invention provides a lane line recognition apparatus including:
the system comprises a sampling point pair acquisition module, a road surface detection module and a road surface detection module, wherein the sampling point pair acquisition module is used for acquiring a plurality of lane line sampling point pairs of a flat road surface and a slope road surface under an image coordinate system, and each sampling point pair comprises a left sampling point and a right sampling point corresponding to the left sampling point;
the identification point pair calculation module is used for acquiring a conversion relation between an image coordinate system and a vehicle body coordinate system, converting the lane line sampling point pair of the flat road surface from the image coordinate system to the vehicle body coordinate system according to the conversion relation, and obtaining a lane line identification point pair of the flat road surface, wherein the lane line identification point pair comprises a left side identification point and a right side identification point corresponding to the left side identification point;
the lane width calculation module is used for calculating the lane width corresponding to the lane line identification point of the flat road surface according to the difference of the X-axis coordinates of the left side identification point and the right side identification point;
the point set total calculation module is used for calculating and obtaining a point set of a lane line identification point pair corresponding to the lane line sampling point of the gradient road surface according to the conversion relation between the image coordinate system and the vehicle body coordinate system and the lane width;
and the lane line fitting module is used for performing polynomial function fitting on the point set to obtain a lane line fitting equation.
An embodiment of the present invention provides a computer-readable storage medium, which includes a stored computer program, wherein when the computer program runs, the apparatus on which the computer-readable storage medium is located is controlled to execute the lane line identification method as described above.
The embodiment of the invention obtains the conversion relation between the image coordinate system and the vehicle body coordinate system so as to convert the lane line sampling point pairs of the flat road surface from the image coordinate system to the vehicle body coordinate system and obtain the lane line identification point pairs, from which the lane width on the flat road surface is obtained. The lane line sampling point pairs of the gradient road surface are then corrected based on this lane width to obtain accurate lane line point pairs on the gradient road surface, and polynomial function fitting is performed on the coordinates of these points to obtain an accurate lane line fitting equation. Since the gradient of the road ahead of the vehicle is taken into account during lane line identification, the lane line identification accuracy under gradient road conditions is effectively improved, which in turn effectively improves the reliability of automatic driving.
Drawings
Fig. 1 is a schematic flow chart of a lane line identification method according to an embodiment of the present invention;
FIG. 2 is a schematic view of a lane line provided by an embodiment of the present invention;
FIG. 3 is a schematic illustration of a grade provided by an embodiment of the present invention;
FIG. 4 is another schematic illustration of a grade provided by an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a lane line identification device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description of the present application, it is to be understood that the terms "first", "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless otherwise specified.
In the description of the present application, it is to be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present application can be understood in a specific case by those of ordinary skill in the art.
Referring to fig. 1, an embodiment of the present invention provides a lane line identification method, including:
s1, acquiring a plurality of lane line sampling point pairs of a flat road surface and a slope road surface under an image coordinate system, wherein each sampling point pair comprises a left side sampling point and a right side sampling point corresponding to the left side sampling point;
In the embodiment of the invention, the lane line sampling point pairs are acquired from images captured by a camera. In a specific implementation, an image set of the target area is acquired by the camera, target detection is performed on the lane lines and vehicles using deep learning or machine learning, and the positions of the lane lines and vehicles in the image coordinate system are obtained, that is, a preliminary detection result of the left lane line and the right lane line. The sampling points of the left lane line and of the right lane line are then acquired simultaneously at preset time intervals to obtain a plurality of lane line sampling point pairs, which include lane line sampling point pairs on the flat road surface and lane line sampling point pairs on the gradient road surface. For example, the left lane line and the right lane line are sampled at the same time t to obtain the tth left sampling point and the tth right sampling point, which together form the tth sampling point pair.
It should be noted that the image coordinate system is a two-dimensional coordinate system established with the central point of the pixel plane as the origin, the lateral direction as the u direction, and the longitudinal direction as the v direction. The flat road surface in the embodiment of the invention is a near road surface in front of the vehicle, and the gradient road surface is a far road surface in front of the vehicle.
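As an illustrative, non-limiting sketch, assuming the detector outputs an ordered list of (u, v) pixel points for each lane line, the pairing of synchronized left and right samples in step S1 could look like this (the numeric values are placeholders):

from typing import List, Tuple

PixelPoint = Tuple[float, float]  # (u, v) in the image coordinate system

def build_sampling_pairs(left_pts: List[PixelPoint],
                         right_pts: List[PixelPoint]) -> List[Tuple[PixelPoint, PixelPoint]]:
    # Pair the i-th left sample with the i-th right sample taken at the same instant,
    # dropping any unmatched samples at the far end.
    n = min(len(left_pts), len(right_pts))
    return [(left_pts[i], right_pts[i]) for i in range(n)]

# Example: three synchronized samples of the detected left and right lane lines
trace_lane = build_sampling_pairs([(410.0, 700.0), (430.0, 650.0), (455.0, 600.0)],
                                  [(870.0, 700.0), (845.0, 650.0), (815.0, 600.0)])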
S2, obtaining a conversion relation between the image coordinate system and the vehicle body coordinate system, converting the lane line sampling point pairs of the flat road surface from the image coordinate system to the vehicle body coordinate system according to the conversion relation, and obtaining lane line identification point pairs of the flat road surface, wherein the lane line identification point pairs comprise left side identification points and right side identification points corresponding to the left side identification points;
In the embodiment of the invention, the vehicle body coordinate system is a three-dimensional coordinate system whose origin is the projection of the vehicle body center onto the ground, whose Y axis points in the forward direction of the vehicle, whose X axis is perpendicular to the forward direction (lateral), and whose Z axis is perpendicular to the ground; the coordinate system follows the right-hand rule.
S3, calculating the lane width corresponding to the lane line identification point of the flat road surface according to the difference of the X-axis coordinates of the left identification point and the right identification point;
s4, calculating a point set of the lane line identification point pair corresponding to the lane line sampling point of the gradient road surface according to the conversion relation between the image coordinate system and the vehicle body coordinate system and the lane width;
In the embodiment of the invention, the point set of the lane line identification point pairs corresponding to the lane line sampling points on the gradient road surface is calculated using the conversion relation and the lane width; that is, the vehicle body coordinates of each lane line identification point are obtained by calculation, and the Z coordinate is the gradient height of that lane line identification point.
And S5, performing polynomial function fitting on the point set to obtain a lane line fitting equation.
The image coordinate system, being a two-dimensional coordinate system, corresponds to an ideal flat road surface and cannot reflect the slope conditions the vehicle actually encounters while driving. The embodiment of the invention therefore obtains the conversion relation between the image coordinate system and the vehicle body coordinate system, converts the lane line sampling point pairs of the flat road surface from the image coordinate system to the vehicle body coordinate system to obtain the lane line identification point pairs, and determines the lane width of the flat road surface from them. The lane line sampling point pairs of the gradient road surface are then corrected based on this lane width, yielding accurate lane line point pairs on the gradient road surface, and polynomial function fitting is performed on their coordinates to obtain an accurate lane line fitting equation. In other words, the sampling points of the gradient road surface are corrected by the lane width of the flat road surface, taking into account the gradient encountered by the vehicle in its actual running state, so that the lane lines can be accurately identified.
In one embodiment, the conversion relationship between the image coordinate system and the vehicle body coordinate system is obtained by:
based on the camera pinhole imaging principle, the conversion relation between the image coordinate system and the vehicle body coordinate system is obtained.
Specifically, based on the camera pinhole imaging principle, the conversion relationship between the image coordinate system and the vehicle body coordinate system is obtained, which specifically comprises the following steps:
according to the pinhole imaging principle, we have:
Z_C · [u, v, 1]^T = [f_u, 0, u_0, 0; 0, f_v, v_0, 0; 0, 0, 1, 0] · [R, t; 0^T, 1] · [X, Y, Z, 1]^T
let:
A_{3x4} = [f_u, 0, u_0, 0; 0, f_v, v_0, 0; 0, 0, 1, 0] · [R, t; 0^T, 1], whose row vectors are a_1, a_2, a_3;
then the conversion relationship between the image coordinate system and the vehicle body coordinate system is:
Z_C · [u, v, 1]^T = A_{3x4} · [X, Y, Z, 1]^T
wherein (u, v) are the image coordinate system coordinates, (X, Y, Z) are the vehicle body coordinate system coordinates, Z_C is the Z-direction coordinate of the point in the camera coordinate system, A_{3x4} is a matrix with 3 rows and 4 columns, a_1, a_2, a_3 are the row vectors of A_{3x4}, f_u and f_v are the normalized focal lengths of the camera in the lateral and longitudinal directions respectively, (u_0, v_0) is the principal point, R is the rotation matrix of the camera, t is the displacement, and 0^T is a zero row vector.
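As an illustrative, non-limiting sketch, the projection matrix A_{3x4} above can be assembled from assumed calibration values; the intrinsics (f_u, f_v and the principal point) and the extrinsics (R, t) used below are placeholders, not calibrated values:

import numpy as np

def projection_matrix(f_u, f_v, c_u, c_v, R, t):
    # A_{3x4} = K [R | t], so that Z_C * [u, v, 1]^T = A @ [X, Y, Z, 1]^T
    K = np.array([[f_u, 0.0, c_u],
                  [0.0, f_v, c_v],
                  [0.0, 0.0, 1.0]])
    Rt = np.hstack([R, t.reshape(3, 1)])   # assumed body-to-camera rotation and translation
    return K @ Rt

R = np.eye(3)                              # placeholder rotation matrix
t = np.array([0.0, 0.0, 1.5])              # placeholder displacement (metres)
A = projection_matrix(1000.0, 1000.0, 640.0, 360.0, R, t)
a1, a2, a3 = A                             # the row vectors a_1, a_2, a_3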
In one embodiment, a conversion relationship between an image coordinate system and a vehicle body coordinate system is obtained, and a lane line sampling point pair of a flat road is converted from the image coordinate system to the vehicle body coordinate system according to the conversion relationship, so as to obtain a lane line identification point pair of the flat road, specifically:
the set of the lane line sampling point pairs is set as: Trace_lane = {[UV_{0,left}, UV_{0,right}], [UV_{1,left}, UV_{1,right}], ..., [UV_{n,left}, UV_{n,right}]};
wherein UV_{n,left} is the nth left sampling point, UV_{n,right} is the nth right sampling point, U is the abscissa in the image coordinate system, and V is the ordinate in the image coordinate system;
according to the conversion relation between the image coordinate system and the vehicle body coordinate system, the lane line sampling point pair [UV_{n,left}, UV_{n,right}] is converted into the lane line identification point pair [XYZ_{n,left}, XYZ_{n,right}] in the vehicle body coordinate system, wherein XYZ_{n,left} is the nth left identification point and XYZ_{n,right} is the nth right identification point.
In the embodiment of the invention, the process of converting from the image coordinate system to the vehicle body coordinate system is an inverse projection transformation, and the image coordinate system coordinates can be obtained from a visual perception module in the vehicle.
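As an illustrative, non-limiting sketch of this inverse projection, an image point can be back-projected into the vehicle body coordinate system by fixing the height Z (Z = 0 under the flat road surface assumption) and solving the two linear equations given by the pinhole relation; A with rows a_1, a_2, a_3 is taken from the sketch above:

import numpy as np

def backproject_to_ground(u, v, A, Z=0.0):
    # Solve Z_C * [u, v, 1]^T = A @ [X, Y, Z, 1]^T for (X, Y) with Z fixed.
    a1, a2, a3 = A
    # From u = (a1 . p) / (a3 . p) and v = (a2 . p) / (a3 . p), with p = [X, Y, Z, 1]:
    M = np.array([[a1[0] - u * a3[0], a1[1] - u * a3[1]],
                  [a2[0] - v * a3[0], a2[1] - v * a3[1]]])
    b = -np.array([(a1[2] - u * a3[2]) * Z + (a1[3] - u * a3[3]),
                   (a2[2] - v * a3[2]) * Z + (a2[3] - v * a3[3])])
    X, Y = np.linalg.solve(M, b)
    return X, Y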
In one embodiment, the method for calculating the lane width corresponding to each lane line identification point on the flat road surface according to the difference between the X-axis coordinates of the left identification point and the right identification point specifically includes:
W_{lane,n} = XYZ_{n,right}.x - XYZ_{n,left}.x = x_right - x_left
wherein W_{lane,n} is the lane width corresponding to the nth lane line identification point pair, x_right is the X-axis coordinate of the right identification point, and x_left is the X-axis coordinate of the left identification point.
Referring to fig. 3-4, the lane width can be represented by the difference between the X coordinates of a lane line identification point pair obtained by the inverse projection transformation. Specifically, W_n = Abs(XYZ_{n,left}.x - XYZ_{n,right}.x), where W_n is the width. At y = 0 (i.e., where the height of the ground point relative to the camera satisfies H_gt = H_camera), the distance W_0 between the two lane lines can be calculated, which is the width of the current lane. A smooth lane width W_lane is obtained from the historical lane widths and the currently detected width and used as the output. In one specific embodiment, the lane width may also be represented by the difference between the x coordinates of the left and right lane line sampling points in the vehicle body coordinate system, that is:
W_{lane,n} = XYZ_{n,right}.x - XYZ_{n,left}.x = x_right - x_left
Meanwhile, the width of a lane is consistent no matter how far away it is, i.e., W_lane = W_{lane,n}.
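As an illustrative, non-limiting sketch, the width of one identification point pair and its smoothing against historical widths may be written as follows; the exponential moving average is an assumed smoothing rule, since the embodiment only states that historical and current widths are combined:

def pair_width(xyz_left, xyz_right):
    # W_{lane,n} = x_right - x_left for one identification point pair of (X, Y, Z) tuples
    return xyz_right[0] - xyz_left[0]

def smooth_lane_width(width_history, current_width, alpha=0.2):
    # Blend the newest measurement into the previous smoothed value
    previous = width_history[-1] if width_history else current_width
    return (1.0 - alpha) * previous + alpha * current_width

history = []
w = pair_width((-1.8, 5.0, 0.0), (1.9, 5.0, 0.0))   # example flat-road pair
history.append(smooth_lane_width(history, w))        # smoothed W_lane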
In one embodiment, a point set of the lane line identification point pair corresponding to the lane line sampling point on the gradient road surface is calculated according to the conversion relationship between the image coordinate system and the vehicle body coordinate system and the lane width, specifically:
setting a cost function F_cost as:
F_cost = min{x_right - x_left - W_{lane,n}}
Let F_cost = 0, and calculate the point set of the lane line identification point pairs by combining the conversion relation, wherein the Z-axis coordinate of each lane line identification point is the gradient height of that lane line identification point.
In the embodiment of the invention, the vehicle body coordinates of the lane line sampling points are calculated according to the conversion relation between the image coordinate system and the vehicle body coordinate system:
Z_C · [u, v, 1]^T = A_{3x4} · [X, Y, Z, 1]^T
where Z is the gradient height at that location, and X and Y are the lateral and longitudinal coordinates on the gradient, respectively. In the embodiment of the invention, different Z values yield different X values; Z is obtained through iterative calculation, and the X values obtained for different Z are used to minimize F_cost. In the embodiment of the invention, F_cost is 0, which determines the value of X and, in turn, the value of Y. A least squares fitting algorithm is then used to fit the lane line equation based on the finally determined X and Y values, so that the lane lines of the road are accurately identified.
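As an illustrative, non-limiting sketch, the gradient height Z of one sampling point pair can be found by scanning candidate Z values and keeping the one that brings x_right - x_left closest to W_lane (i.e., |F_cost| closest to 0); the grid search is an assumed concrete iteration scheme, and backproject_to_ground is the helper from the earlier sketch:

import numpy as np

def solve_slope_point_pair(uv_left, uv_right, A, w_lane, z_range=(-2.0, 8.0), steps=400):
    # Returns the left and right identification points (X, Y, Z) on the gradient road surface.
    best = None
    for Z in np.linspace(z_range[0], z_range[1], steps):
        xl, yl = backproject_to_ground(uv_left[0], uv_left[1], A, Z=Z)
        xr, yr = backproject_to_ground(uv_right[0], uv_right[1], A, Z=Z)
        cost = abs((xr - xl) - w_lane)                # |F_cost| for this candidate Z
        if best is None or cost < best[0]:
            best = (cost, (xl, yl, Z), (xr, yr, Z))   # Z is the gradient height of the pair
    return best[1], best[2]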
In one embodiment, the point set is subjected to polynomial function fitting to obtain a lane line fitting equation, which specifically includes:
and performing polynomial function fitting on the point set by adopting a least square fitting algorithm to obtain a lane line fitting equation.
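As an illustrative, non-limiting sketch, the final least squares polynomial fit can be performed with numpy; fitting the lateral coordinate X as a cubic in the longitudinal coordinate Y is an assumption, since the embodiment only specifies a polynomial fitted by least squares:

import numpy as np

def fit_lane_line(points, order=3):
    # points: iterable of (X, Y, Z) identification points of one lane line
    pts = np.asarray(points, dtype=float)
    coeffs = np.polyfit(pts[:, 1], pts[:, 0], order)  # least squares fit of X as a polynomial in Y
    return np.poly1d(coeffs)

left_fit = fit_lane_line([(-1.80, 5.0, 0.0), (-1.75, 12.0, 0.1),
                          (-1.65, 22.0, 0.5), (-1.50, 35.0, 1.2)])
x_at_20m = left_fit(20.0)   # lateral offset of the left lane line 20 m ahead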
The embodiment of the invention has the following beneficial effects:
the embodiment of the invention obtains the conversion relation between the image coordinate system and the vehicle body coordinate system so as to convert the lane line sampling point pairs of the flat road surface from the image coordinate system to the vehicle body coordinate system and obtain the lane line identification point pairs, from which the lane width on the flat road surface is obtained. The lane line sampling point pairs of the gradient road surface are then corrected based on this lane width to obtain accurate lane line point pairs on the gradient road surface, and polynomial function fitting is performed on the coordinates of these points to obtain an accurate lane line fitting equation. Since the gradient of the road ahead of the vehicle is taken into account during lane line identification, the lane line identification accuracy under gradient road conditions is effectively improved, which in turn effectively improves the reliability of automatic driving.
Referring to fig. 5, based on the same inventive concept as the above embodiment, an embodiment of the present invention provides a lane line identification apparatus, including:
the sampling point pair obtaining module 10 is used for obtaining a plurality of lane line sampling point pairs of a flat road surface and a slope road surface under an image coordinate system, wherein each sampling point pair comprises a left sampling point and a right sampling point corresponding to the left sampling point;
the identification point pair calculation module 20 is configured to obtain a conversion relationship between the image coordinate system and the vehicle body coordinate system, convert the lane line sampling point pairs of the flat road surface from the image coordinate system to the vehicle body coordinate system according to the conversion relationship, and obtain lane line identification point pairs of the flat road surface, where the lane line identification point pairs include left identification points and right identification points corresponding to the left identification points;
the lane width calculation module 30 is used for calculating the lane width corresponding to the lane line identification point pairs of the flat road surface according to the difference between the X-axis coordinates of the left side identification point and the right side identification point;
the point set calculation module 40 is used for calculating a point set of lane line identification point pairs corresponding to the lane line sampling points of the gradient road surface according to the conversion relation between the image coordinate system and the vehicle body coordinate system and the lane width;
and the lane line fitting module 50 is used for performing polynomial function fitting on the point set to obtain a lane line fitting equation.
In one embodiment, the identification point pair calculation module 20 is specifically configured to:
based on the camera pinhole imaging principle, the conversion relation between the image coordinate system and the vehicle body coordinate system is obtained.
In one embodiment, the identification point pair calculation module 20 is further configured to:
according to the pinhole imaging principle, we have:
Z_C · [u, v, 1]^T = [f_u, 0, u_0, 0; 0, f_v, v_0, 0; 0, 0, 1, 0] · [R, t; 0^T, 1] · [X, Y, Z, 1]^T
let:
A_{3x4} = [f_u, 0, u_0, 0; 0, f_v, v_0, 0; 0, 0, 1, 0] · [R, t; 0^T, 1], whose row vectors are a_1, a_2, a_3;
then the conversion relationship between the image coordinate system and the vehicle body coordinate system is:
Z_C · [u, v, 1]^T = A_{3x4} · [X, Y, Z, 1]^T
wherein (u, v) are the image coordinate system coordinates, (X, Y, Z) are the vehicle body coordinate system coordinates, Z_C is the Z-direction coordinate of the point in the camera coordinate system, A_{3x4} is a matrix with 3 rows and 4 columns, a_1, a_2, a_3 are the row vectors of A_{3x4}, f_u and f_v are the normalized focal lengths of the camera in the lateral and longitudinal directions respectively, (u_0, v_0) is the principal point, R is the rotation matrix of the camera, t is the displacement, and 0^T is a zero row vector.
In one embodiment, the identification point pair calculation module 20 is further configured to:
the set of the lane line sampling point pairs is set as: Trace_lane = {[UV_{0,left}, UV_{0,right}], [UV_{1,left}, UV_{1,right}], ..., [UV_{n,left}, UV_{n,right}]};
wherein UV_{n,left} is the nth left sampling point, UV_{n,right} is the nth right sampling point, U is the abscissa in the image coordinate system, and V is the ordinate in the image coordinate system;
according to the conversion relation between the image coordinate system and the vehicle body coordinate system, the lane line sampling point pair [UV_{n,left}, UV_{n,right}] is converted into the lane line identification point pair [XYZ_{n,left}, XYZ_{n,right}] in the vehicle body coordinate system, wherein XYZ_{n,left} is the nth left identification point and XYZ_{n,right} is the nth right identification point.
In one embodiment, the lane width calculation module 30 is specifically configured to:
W_{lane,n} = XYZ_{n,right}.x - XYZ_{n,left}.x = x_right - x_left
wherein W_{lane,n} is the lane width corresponding to the nth lane line identification point pair, x_right is the X-axis coordinate of the right identification point, and x_left is the X-axis coordinate of the left identification point.
In one embodiment, point set total calculation module 40 is specifically configured to:
setting a cost function F_cost as:
F_cost = min{x_right - x_left - W_{lane,n}}
Let F_cost = 0, and calculate the point set of the lane line identification point pairs by combining the conversion relation, wherein the Z-axis coordinate of each lane line identification point is the gradient height of that lane line identification point.
In one embodiment, lane line fitting module 50 is specifically configured to:
and performing polynomial function fitting on the point set by adopting a least square fitting algorithm to obtain a lane line fitting equation.
In one embodiment, the sampling point pair obtaining module 10 is specifically configured to:
acquiring an image set of a target area, performing target detection on a lane line by adopting deep learning or machine learning, and detecting to obtain a left lane line and a right lane line;
and simultaneously acquiring sampling points of the left lane line and sampling points of the right lane line at preset time intervals to obtain a plurality of lane line sampling point pairs.
An embodiment of the present invention provides a computer-readable storage medium, which includes a stored computer program, wherein when the computer program runs, the apparatus on which the computer-readable storage medium is located is controlled to execute the lane line identification method as described above.
The foregoing is a preferred embodiment of the present invention, and it should be noted that it would be apparent to those skilled in the art that various modifications and enhancements can be made without departing from the principles of the invention, and such modifications and enhancements are also considered to be within the scope of the invention.

Claims (10)

1. A lane line identification method is characterized by comprising the following steps:
acquiring a plurality of lane line sampling point pairs of a flat road surface and a slope road surface under an image coordinate system, wherein each sampling point pair comprises a left side sampling point and a right side sampling point corresponding to the left side sampling point;
acquiring a conversion relation between an image coordinate system and a vehicle body coordinate system, and converting a lane line sampling point pair of the flat road surface from the image coordinate system to the vehicle body coordinate system according to the conversion relation to obtain a lane line identification point pair of the flat road surface, wherein the lane line identification point pair comprises a left side identification point and a right side identification point corresponding to the left side identification point;
calculating to obtain the lane width corresponding to the lane line identification point pair of the flat road surface according to the difference between the X-axis coordinates of the left side identification point and the right side identification point;
calculating to obtain a point set of lane line identification point pairs corresponding to the lane line sampling points on the gradient road surface according to the conversion relation between the image coordinate system and the vehicle body coordinate system and the lane width;
and performing polynomial function fitting on the point set to obtain a lane line fitting equation.
2. The lane line identification method according to claim 1, wherein the obtaining of the conversion relationship between the image coordinate system and the vehicle body coordinate system specifically comprises:
based on the camera pinhole imaging principle, the conversion relation between the image coordinate system and the vehicle body coordinate system is obtained.
3. The lane line identification method according to claim 2, wherein the conversion relationship between the image coordinate system and the vehicle body coordinate system is acquired based on a camera pinhole imaging principle, and specifically:
according to the pinhole imaging principle, we have:
Z_C · [u, v, 1]^T = [f_u, 0, u_0, 0; 0, f_v, v_0, 0; 0, 0, 1, 0] · [R, t; 0^T, 1] · [X, Y, Z, 1]^T
let:
A_{3x4} = [f_u, 0, u_0, 0; 0, f_v, v_0, 0; 0, 0, 1, 0] · [R, t; 0^T, 1], whose row vectors are a_1, a_2, a_3;
then the conversion relationship between the image coordinate system and the vehicle body coordinate system is:
Z_C · [u, v, 1]^T = A_{3x4} · [X, Y, Z, 1]^T
wherein (u, v) are the image coordinate system coordinates, (X, Y, Z) are the vehicle body coordinate system coordinates, Z_C is the Z-direction coordinate of the point in the camera coordinate system, A_{3x4} is a matrix with 3 rows and 4 columns, a_1, a_2, a_3 are the row vectors of A_{3x4}, f_u and f_v are the normalized focal lengths of the camera in the lateral and longitudinal directions respectively, (u_0, v_0) is the principal point, R is the rotation matrix of the camera, t is the displacement, and 0^T is a zero row vector.
4. The lane line identification method according to claim 1, wherein a conversion relationship between an image coordinate system and a vehicle body coordinate system is obtained, and the lane line sampling point pairs of the flat road surface are converted from the image coordinate system to the vehicle body coordinate system according to the conversion relationship, so as to obtain the lane line identification point pairs of the flat road surface, specifically:
setting the set of the lane line sampling point pairs as: Trace_lane = {[UV_{0,left}, UV_{0,right}], [UV_{1,left}, UV_{1,right}], ..., [UV_{n,left}, UV_{n,right}]};
wherein UV_{n,left} is the nth left sampling point, UV_{n,right} is the nth right sampling point, U is the abscissa in the image coordinate system, and V is the ordinate in the image coordinate system;
according to the conversion relation between the image coordinate system and the vehicle body coordinate system, the lane line sampling point pair [UV_{n,left}, UV_{n,right}] is converted into the lane line identification point pair [XYZ_{n,left}, XYZ_{n,right}] in the vehicle body coordinate system, wherein XYZ_{n,left} is the nth left identification point and XYZ_{n,right} is the nth right identification point.
5. The lane line identification method according to claim 1, wherein the lane width corresponding to the lane line identification point pair on the flat road surface is calculated from a difference between X-axis coordinates of the left side identification point and the right side identification point, specifically:
W_{lane,n} = XYZ_{n,right}.x - XYZ_{n,left}.x = x_right - x_left
wherein W_{lane,n} is the lane width corresponding to the nth lane line identification point pair, x_right is the X-axis coordinate of the right identification point, and x_left is the X-axis coordinate of the left identification point.
6. The method for identifying lane lines according to claim 1, wherein a point set of the lane line identification point pairs corresponding to the lane line sampling points on the gradient road surface is calculated according to a conversion relationship between the image coordinate system and the vehicle body coordinate system and the lane width, and specifically includes:
setting a cost function F_cost as:
F_cost = min{x_right - x_left - W_{lane,n}}
Let F_cost = 0, and calculate the point set of the lane line identification point pairs by combining the conversion relation, wherein the Z-axis coordinate of each lane line identification point is the gradient height of that lane line identification point.
7. The lane line identification method according to claim 1, wherein a polynomial function fitting is performed on the point set to obtain a lane line fitting equation, specifically:
and performing polynomial function fitting on the point set by adopting a least square fitting algorithm to obtain a lane line fitting equation.
8. The lane line identification method according to claim 1, wherein the obtaining of a plurality of lane line sampling point pairs in an image coordinate system specifically comprises:
acquiring an image set of a target area, performing target detection on a lane line by adopting deep learning or machine learning, and detecting to obtain a left lane line and a right lane line;
and simultaneously acquiring sampling points of the left lane line and sampling points of the right lane line at preset time intervals to obtain a plurality of lane line sampling point pairs.
9. A lane line identification apparatus, comprising:
the system comprises a sampling point pair acquisition module, a road surface detection module and a road surface detection module, wherein the sampling point pair acquisition module is used for acquiring a plurality of lane line sampling point pairs of a flat road surface and a slope road surface under an image coordinate system, and each sampling point pair comprises a left sampling point and a right sampling point corresponding to the left sampling point;
the identification point pair calculation module is used for acquiring a conversion relation between an image coordinate system and a vehicle body coordinate system, converting the lane line sampling point pair of the flat road surface from the image coordinate system to the vehicle body coordinate system according to the conversion relation, and obtaining a lane line identification point pair of the flat road surface, wherein the lane line identification point pair comprises a left side identification point and a right side identification point corresponding to the left side identification point;
the lane width calculation module is used for calculating the lane width corresponding to the lane line identification point of the flat road surface according to the difference of the X-axis coordinates of the left side identification point and the right side identification point;
the point set total calculation module is used for calculating and obtaining a point set of a lane line identification point pair corresponding to the lane line sampling point of the gradient road surface according to the conversion relation between the image coordinate system and the vehicle body coordinate system and the lane width;
and the lane line fitting module is used for performing polynomial function fitting on the point set to obtain a lane line fitting equation.
10. A computer-readable storage medium, comprising a stored computer program, wherein when the computer program is run, the computer-readable storage medium controls an apparatus to execute the lane line identification method according to any one of claims 1 to 8.
CN202210088440.1A 2022-01-26 2022-01-26 Lane line identification method and device and storage medium Active CN114120258B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210088440.1A CN114120258B (en) 2022-01-26 2022-01-26 Lane line identification method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210088440.1A CN114120258B (en) 2022-01-26 2022-01-26 Lane line identification method and device and storage medium

Publications (2)

Publication Number Publication Date
CN114120258A true CN114120258A (en) 2022-03-01
CN114120258B CN114120258B (en) 2022-05-03

Family

ID=80361066

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210088440.1A Active CN114120258B (en) 2022-01-26 2022-01-26 Lane line identification method and device and storage medium

Country Status (1)

Country Link
CN (1) CN114120258B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115655205A (en) * 2022-11-16 2023-01-31 清智汽车科技(苏州)有限公司 Method and device for assisting distance measurement by using lane

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100238283A1 (en) * 2009-03-18 2010-09-23 Hyundai Motor Company Lane departure warning method and system using virtual lane-dividing line
WO2012011713A2 (en) * 2010-07-19 2012-01-26 주식회사 이미지넥스트 System and method for traffic lane recognition
US20150161454A1 (en) * 2013-12-11 2015-06-11 Samsung Techwin Co., Ltd. Lane detection system and method
US20150279017A1 (en) * 2014-03-28 2015-10-01 Fuji Jukogyo Kabushiki Kaisha Stereo image processing device for vehicle
CN107133985A (en) * 2017-04-20 2017-09-05 常州智行科技有限公司 A kind of vehicle-mounted vidicon automatic calibration method for the point that disappeared based on lane line
CN110197151A (en) * 2019-05-28 2019-09-03 大连理工大学 A kind of lane detection system and method for combination double branching networks and custom function network
CN110361015A (en) * 2018-09-30 2019-10-22 长城汽车股份有限公司 Roadway characteristic point extracting method and system
CN112307953A (en) * 2020-10-29 2021-02-02 无锡物联网创新中心有限公司 Clustering-based adaptive inverse perspective transformation lane line identification method and system
CN112348752A (en) * 2020-10-28 2021-02-09 武汉极目智能技术有限公司 Lane line vanishing point compensation method and device based on parallel constraint
CN112733812A (en) * 2021-03-01 2021-04-30 知行汽车科技(苏州)有限公司 Three-dimensional lane line detection method, device and storage medium
US20210295061A1 (en) * 2020-07-20 2021-09-23 Beijing Baidu Netcom Science and Technology Co., Ltd Lane line determination method and apparatus, lane line positioning accuracy evaluation method and apparatus, and device

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100238283A1 (en) * 2009-03-18 2010-09-23 Hyundai Motor Company Lane departure warning method and system using virtual lane-dividing line
WO2012011713A2 (en) * 2010-07-19 2012-01-26 주식회사 이미지넥스트 System and method for traffic lane recognition
US20150161454A1 (en) * 2013-12-11 2015-06-11 Samsung Techwin Co., Ltd. Lane detection system and method
US20150279017A1 (en) * 2014-03-28 2015-10-01 Fuji Jukogyo Kabushiki Kaisha Stereo image processing device for vehicle
CN107133985A (en) * 2017-04-20 2017-09-05 常州智行科技有限公司 A kind of vehicle-mounted vidicon automatic calibration method for the point that disappeared based on lane line
CN110361015A (en) * 2018-09-30 2019-10-22 长城汽车股份有限公司 Roadway characteristic point extracting method and system
CN110197151A (en) * 2019-05-28 2019-09-03 大连理工大学 A kind of lane detection system and method for combination double branching networks and custom function network
US20210295061A1 (en) * 2020-07-20 2021-09-23 Beijing Baidu Netcom Science and Technology Co., Ltd Lane line determination method and apparatus, lane line positioning accuracy evaluation method and apparatus, and device
CN112348752A (en) * 2020-10-28 2021-02-09 武汉极目智能技术有限公司 Lane line vanishing point compensation method and device based on parallel constraint
CN112307953A (en) * 2020-10-29 2021-02-02 无锡物联网创新中心有限公司 Clustering-based adaptive inverse perspective transformation lane line identification method and system
CN112733812A (en) * 2021-03-01 2021-04-30 知行汽车科技(苏州)有限公司 Three-dimensional lane line detection method, device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yu Mei et al.: "A new method for lane and obstacle detection using fast road surface reconstruction", Chinese Journal of Scientific Instrument (《仪器仪表学报》) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115655205A (en) * 2022-11-16 2023-01-31 清智汽车科技(苏州)有限公司 Method and device for assisting distance measurement by using lane

Also Published As

Publication number Publication date
CN114120258B (en) 2022-05-03

Similar Documents

Publication Publication Date Title
US10424081B2 (en) Method and apparatus for calibrating a camera system of a motor vehicle
CN106529587B (en) Vision course recognition methods based on object detection
CN109345593B (en) Camera posture detection method and device
CN107305632B (en) Monocular computer vision technology-based target object distance measuring method and system
CN103630122B (en) Monocular vision lane line detection method and distance measurement method thereof
CN110490936B (en) Calibration method, device and equipment of vehicle camera and readable storage medium
CN112037159B (en) Cross-camera road space fusion and vehicle target detection tracking method and system
CN109948470B (en) Hough transform-based parking line distance detection method and system
US10187630B2 (en) Egomotion estimation system and method
CN103559711A (en) Motion estimation method based on image features and three-dimensional information of three-dimensional visual system
CN112184792B (en) Road gradient calculation method and device based on vision
CN108151713A (en) A kind of quick position and orientation estimation methods of monocular VO
CN113393524B (en) Target pose estimation method combining deep learning and contour point cloud reconstruction
CN114565510A (en) Lane line distance detection method, device, equipment and medium
CN114120258B (en) Lane line identification method and device and storage medium
CN106228531B (en) Automatic vanishing point calibration method and system based on horizon line search
CN112489106A (en) Video-based vehicle size measuring method and device, terminal and storage medium
CN108151824A (en) Water level recognition methods and system based on vehicle-mounted panoramic image
CN112819711A (en) Monocular vision-based vehicle reverse positioning method utilizing road lane line
CN110033492A (en) Camera marking method and terminal
CN115239822A (en) Real-time visual identification and positioning method and system for multi-module space of split type flying vehicle
CN110197104B (en) Distance measurement method and device based on vehicle
CN106408589A (en) Vehicle-mounted overlooking camera based vehicle movement measurement method
CN104471436A (en) Method and device for calculating a change in an image scale of an object
CN109934140B (en) Automatic reversing auxiliary parking method and system based on detection of ground transverse marking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Floor 25, Block A, Zhongzhou Binhai Commercial Center Phase II, No. 9285, Binhe Boulevard, Shangsha Community, Shatou Street, Futian District, Shenzhen, Guangdong 518000

Patentee after: Shenzhen Youjia Innovation Technology Co.,Ltd.

Address before: 518051 401, building 1, Shenzhen new generation industrial park, No. 136, Zhongkang Road, Meidu community, Meilin street, Futian District, Shenzhen, Guangdong Province

Patentee before: SHENZHEN MINIEYE INNOVATION TECHNOLOGY Co.,Ltd.