CN115546319A - Lane keeping method, lane keeping apparatus, computer device, and storage medium - Google Patents

Lane keeping method, lane keeping apparatus, computer device, and storage medium

Info

Publication number
CN115546319A
CN115546319A (application CN202211482598.3A)
Authority
CN
China
Prior art keywords
lane
result
lane line
fitting
lane keeping
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211482598.3A
Other languages
Chinese (zh)
Other versions
CN115546319B (en
Inventor
席华炜
董洪泉
卢兵
王博
宋士佳
孙超
王文伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BYD Auto Co Ltd
Shenzhen Automotive Research Institute of Beijing University of Technology
Original Assignee
Shenzhen Automotive Research Institute of Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Automotive Research Institute of Beijing University of Technology filed Critical Shenzhen Automotive Research Institute of Beijing University of Technology
Priority to CN202211482598.3A priority Critical patent/CN115546319B/en
Publication of CN115546319A publication Critical patent/CN115546319A/en
Application granted granted Critical
Publication of CN115546319B publication Critical patent/CN115546319B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V10/763Non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Traffic Control Systems (AREA)

Abstract

An embodiment of the invention discloses a lane keeping method, a lane keeping apparatus, a computer device, and a storage medium. The method comprises the following steps: acquiring camera calibration data; extracting lane lines according to the camera calibration data to obtain an extraction result; performing lane line fitting according to the extraction result to obtain a fitting result; constructing a lane line potential field based on the lateral position of the vehicle according to the fitting result; and fusing the lane line potential field with a lane keeping model to obtain a lane keeping function. By implementing the method of this embodiment, low-probability events in real driving scenes can be handled, curves and dashed lane lines can be detected, and stability during cornering control is improved.

Description

Lane keeping method, lane keeping apparatus, computer device, and storage medium
Technical Field
The invention relates to driver assistance systems, and in particular to a lane keeping method, a lane keeping apparatus, a computer device, and a storage medium.
Background
The lane keeping function is an important component of a driver assistance system: through active lateral control it automatically corrects deviations so that the vehicle keeps to its given lane. A vehicle with a lane keeping function can therefore greatly improve driving safety and reduce driver fatigue. Lane line detection is a precondition for a lane keeping function, and extensive algorithm research on the lane line detection problem has been carried out at home and abroad. Lane line detection algorithms fall into two major classes: those based on traditional image feature extraction and those based on deep learning.
Deep-learning-based detection can obtain higher-precision lane line detection results, but model training places severe demands on computing power and data. Owing to the lack of interpretability and to incomplete data sets, the models are difficult to converge, and a low-probability event in a real driving scene that the model cannot handle may induce a traffic accident. Deploying a deep learning model also poses a fresh challenge for the hardware as a whole, and its heavy computational load seriously affects the real-time performance of the algorithm. Therefore, to guarantee real-time performance and stability, traditional lane line detection algorithms based on image preprocessing, feature extraction, and lane line fitting remain the mainstream in practical applications. Most traditional approaches are Hough-transform-based lane line detection algorithms, which run in real time but are overly sensitive to illumination and unsuitable for detecting curves and dashed lines. Moreover, when the lane center line is used as the target trajectory for computing the lateral error, the vehicle is prone to instability when keeping the lane on a curve of large curvature.
Therefore, it is necessary to design a new method that can cope with low-probability events in real driving scenes, is suitable for detecting curves and dashed lines, and improves stability during cornering control.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a lane keeping method, a lane keeping apparatus, a computer device, and a storage medium.
In order to achieve the purpose, the invention adopts the following technical scheme: a lane keeping method comprising:
acquiring camera calibration data;
extracting a lane line according to the camera calibration data to obtain an extraction result;
performing lane line fitting according to the extraction result to obtain a fitting result;
constructing a lane line potential field based on the transverse position of the vehicle according to the fitting result;
and fusing the lane line potential field and a lane keeping model to obtain a lane keeping function.
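The five claimed steps can be sketched as a pipeline skeleton; every function name below is illustrative (not from the patent), and each stub merely records its stage in order:

```python
def acquire_camera_calibration(trace):
    # Step 1: calibrate camera intrinsics/extrinsics, collect 2D pixel coordinates.
    return trace + ["calibration"]

def extract_lane_lines(trace):
    # Step 2: grayscale processing, inverse perspective transform, feature extraction.
    return trace + ["extraction"]

def fit_lane_lines(trace):
    # Step 3: moving-least-squares curve fitting of the feature points.
    return trace + ["fitting"]

def build_potential_field(trace):
    # Step 4: lane line potential field from the vehicle's lateral position.
    return trace + ["potential_field"]

def fuse_with_lane_keeping_model(trace):
    # Step 5: fuse the potential field with the lane keeping model.
    return trace + ["lane_keeping"]

def lane_keeping_pipeline():
    # Chain the five claimed steps in the order the claim lists them.
    trace = acquire_camera_calibration([])
    trace = extract_lane_lines(trace)
    trace = fit_lane_lines(trace)
    trace = build_potential_field(trace)
    return fuse_with_lane_keeping_model(trace)
```

Each stub is a placeholder for the corresponding step elaborated in the detailed description; the skeleton only fixes the data flow between them.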
The further technical scheme is as follows: the acquiring of camera calibration data comprises:
calibrating the internal parameters and the external parameters of the camera based on the camera parameters to obtain a calibration result;
and acquiring 2D pixel coordinates according to the calibration result to obtain camera calibration data.
The further technical scheme is as follows: the extracting lane lines according to the camera calibration data to obtain an extraction result comprises:
performing image gray processing on the camera calibration data to obtain a processing result;
carrying out inverse perspective transformation on the processing result to obtain a transformation result;
and performing feature extraction on the transformation result to obtain an extraction result.
The further technical scheme is as follows: the extracting the feature of the transformation result to obtain an extraction result includes:
extracting clustering features from the transformation result for objects of the same density using a DBSCAN clustering algorithm to obtain feature points;
and identifying and screening the connection points of the feature points through parameter tuning to obtain an extraction result.
The further technical scheme is as follows: the performing lane line fitting according to the extraction result to obtain a fitting result includes:
and performing curve fitting on the extraction result based on a moving least square method to obtain a fitting result.
The further technical scheme is as follows: constructing a lane line potential field based on the transverse position of the vehicle according to the fitting result, comprising:
determining a curve coordinate mapping relation of the lane line according to the fitting result;
and designing a potential field function of the lane line based on the transverse position of the vehicle according to the curve coordinate mapping relation of the lane line so as to construct the potential field of the lane line.
The further technical scheme is as follows: the fusing the lane line potential field with a lane keeping model to obtain a lane keeping function includes:
calculating a lateral position and a longitudinal position of the vehicle from the lane line based on the lane keeping model;
designing a target function by combining the transverse distance, the reference transverse target and the lane line potential field;
and combining the lane line potential field, the objective function and the vehicle three-degree-of-freedom dynamic model to construct an optimization problem so as to obtain a lane keeping function.
The present invention also provides a lane keeping apparatus including:
the data acquisition unit is used for acquiring camera calibration data;
the lane line extraction unit is used for extracting a lane line according to the camera calibration data to obtain an extraction result;
the fitting unit is used for fitting the lane line according to the extraction result to obtain a fitting result;
the construction unit is used for constructing a lane line potential field based on the transverse position of the vehicle according to the fitting result;
and the fusion unit is used for fusing the lane line potential field with the lane keeping model to obtain a lane keeping function.
The invention also provides computer equipment which comprises a memory and a processor, wherein the memory is stored with a computer program, and the processor realizes the method when executing the computer program.
The invention also provides a storage medium storing a computer program which, when executed by a processor, implements the method described above.
Compared with the prior art, the invention has the following beneficial effects: the method calibrates the internal and external parameters of the camera to obtain camera calibration data, extracts and fits the lane lines, constructs a lane line potential field using the lateral position of the vehicle, and fuses the lane line potential field with a lane keeping model to obtain a lane keeping function that can be used for lane keeping prediction. The method can cope with low-probability events in real driving scenes, is suitable for detecting curves and dashed lines, and improves stability during cornering control.
The invention is further described below with reference to the figures and the specific embodiments.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the description below are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on the drawings without creative efforts.
Fig. 1 is a schematic view of an application scenario of a lane keeping method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating a lane keeping method according to an embodiment of the present invention;
FIG. 3 is a sub-flowchart of a lane keeping method according to an embodiment of the present invention;
FIG. 4 is a schematic sub-flow diagram of a lane keeping method according to an embodiment of the present invention;
FIG. 5 is a sub-flowchart of a lane keeping method according to an embodiment of the present invention;
FIG. 6 is a schematic sub-flow chart of a lane keeping method according to an embodiment of the present invention;
FIG. 7 is a sub-flowchart of a lane keeping method according to an embodiment of the present invention;
FIG. 8 is a first graph of the fitting effect provided by the embodiment of the present invention;
FIG. 9 is a second graph of the fitting effect provided by the embodiment of the present invention;
FIG. 10 is a third graph of the fitting effect provided by the embodiment of the present invention;
FIG. 11 is a fourth graph of the fitting effect provided by the embodiment of the present invention;
fig. 12 is a schematic diagram of acquiring coordinates of a lane line curve according to an embodiment of the present invention;
FIG. 13 is a schematic view of a lateral distance of a vehicle provided by an embodiment of the present invention;
fig. 14 is a schematic diagram of a 2D lane line potential field provided by an embodiment of the present invention;
fig. 15 is a schematic block diagram of a lane keeping apparatus provided in an embodiment of the present invention;
fig. 16 is a schematic block diagram of a data acquisition unit of the lane keeping apparatus provided by the embodiment of the present invention;
fig. 17 is a schematic block diagram of a lane line extraction unit of the lane keeping apparatus provided by the embodiment of the present invention;
fig. 18 is a schematic block diagram of an extracting subunit of the lane keeping device provided by the embodiment of the present invention;
fig. 19 is a schematic block diagram of a construction unit of the lane keeping device provided by the embodiment of the present invention;
fig. 20 is a schematic block diagram of a fusion unit of the lane keeping apparatus provided by the embodiment of the present invention;
FIG. 21 is a schematic block diagram of a computer apparatus provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1 and fig. 2, fig. 1 is a schematic view of an application scenario of a lane keeping method according to an embodiment of the present invention, and fig. 2 is a schematic flow chart of the method. The lane keeping method is applied to a server. The server exchanges data with the camera so as to combine accurate lane line detection with vehicle stability during lane keeping. First, the internal and external parameters of the camera are calibrated based on the camera parameters, and image grayscale processing and inverse perspective transformation are completed. Second, lane line feature extraction is completed with DBSCAN (Density-Based Spatial Clustering of Applications with Noise), and lane line fitting is realized with the moving least squares method. Finally, the lane line potential field is fused with lane-keeping model predictive control, and a longitudinal-lateral cooperative vehicle control module completes the lane keeping function. This solves the problem of algorithm compatibility in solid-line and dashed-line scenarios and improves the stability of the algorithm during cornering control, giving the vehicle better tracking performance and tracking potential.
Fig. 2 is a schematic flow chart of a lane keeping method according to an embodiment of the present invention. As shown in fig. 2, the method includes the following steps S110 to S150.
And S110, acquiring camera calibration data.
In the present embodiment, the camera calibration data refers to the 2D pixel coordinates produced by the camera imaging the lane.
In an embodiment, referring to fig. 3, the step S110 may include steps S111 to S112.
And S111, calibrating the internal parameters and the external parameters of the camera based on the camera parameters to obtain a calibration result.
In this embodiment, the calibration result refers to the calibration content of the internal and external parameters of the camera.
And S112, acquiring 2D pixel coordinates according to the calibration result to obtain camera calibration data.
In this embodiment, camera parameter identification is a precondition for expressing the mapping between spatial coordinates and pixel coordinates; the external and internal parameters of the camera are identified by combining information such as the camera mounting position, the lens size, and the size of the output image. The perspective transformation of any coordinate in 3D space into 2D pixel coordinates is summarized as

Z_c · [u, v, 1]^T = K · [R | T] · [X_w, Y_w, Z_w, 1]^T

where K is the camera intrinsic matrix, R and T are the camera extrinsic parameters, (X_w, Y_w, Z_w) is the position of the point in the geodetic coordinate system, Z_c is the Z-axis coordinate of the point in the camera coordinate system, and (u, v) are the pixel coordinates of the point. This formula converts the 3D coordinate (X_w, Y_w, Z_w) into the 2D pixel coordinate (u, v). The output of this step is the 2D pixel coordinates (u, v), which serve as the input for lane line extraction.
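As a minimal sketch of the perspective projection above (pure Python; the intrinsic matrix K, identity rotation R, and zero translation T below are illustrative numbers, not the patent's actual calibration):

```python
def project_point(K, R, T, Pw):
    # Pinhole model: Zc * [u, v, 1]^T = K [R | T] [Xw, Yw, Zw, 1]^T
    # Camera-frame point: Pc = R * Pw + T
    Pc = [sum(R[i][j] * Pw[j] for j in range(3)) + T[i] for i in range(3)]
    Xc, Yc, Zc = Pc
    fx, fy = K[0][0], K[1][1]   # focal lengths in pixels
    cx, cy = K[0][2], K[1][2]   # principal point
    u = fx * Xc / Zc + cx
    v = fy * Yc / Zc + cy
    return u, v

# Illustrative calibration: fx = fy = 1000 px, principal point (320, 240),
# camera axes aligned with the world frame.
K = [[1000, 0, 320], [0, 1000, 240], [0, 0, 1]]
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
T = [0, 0, 0]
```

With these assumed parameters, a point 10 m straight ahead on the optical axis, (0, 0, 10), projects to the principal point (320, 240).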
And S120, extracting the lane line according to the camera calibration data to obtain an extraction result.
In the present embodiment, the extracted result refers to the extracted lane line.
In an embodiment, referring to fig. 4, the step S120 may include steps S121 to S123.
And S121, performing image gray scale processing on the camera calibration data to obtain a processing result.
In this embodiment, the processing result refers to data obtained by performing image gradation processing on the camera calibration data.
The image gray scale processing of data belongs to the prior art, and is not described herein again.
And S122, carrying out inverse perspective transformation on the processing result to obtain a transformation result.
In this embodiment, the transformation result is a result obtained by performing inverse perspective transformation on the processing result.
Specifically, the inverse perspective transformation converts the pixel coordinate system into the world coordinate system, rendering the input image as a bird's-eye view in which real-world lane lines appear parallel and of equal width, thereby improving lane line detection accuracy. The coordinate conversion is realized by

(X, Y, Z)^T = IPM · (u, v, 1)^T

where IPM is the inverse perspective matrix. The pixel coordinates (u, v) are input and inversely transformed by the inverse perspective matrix to output the 3D coordinates (X, Y, Z).
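For points on a flat road, the perspective map reduces to a 3x3 homography H, and the inverse perspective mapping is its inverse applied to homogeneous pixel coordinates. A sketch under that flat-ground assumption (the matrices used in the usage example are illustrative stand-ins, not a calibrated IPM):

```python
def invert_3x3(M):
    # Inverse via the adjugate (cofactor) matrix.
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [
        [e * i - f * h, c * h - b * i, b * f - c * e],
        [f * g - d * i, a * i - c * g, c * d - a * f],
        [d * h - e * g, b * g - a * h, a * e - b * d],
    ]
    return [[adj[r][col] / det for col in range(3)] for r in range(3)]

def pixel_to_ground(H, u, v):
    # [X, Y, W]^T = H^{-1} [u, v, 1]^T, then divide by the scale W.
    Hi = invert_3x3(H)
    X, Y, W = (row[0] * u + row[1] * v + row[2] for row in Hi)
    return X / W, Y / W
```

For example, with the toy homography H = [[2, 0, 0], [0, 2, 0], [0, 0, 1]] (a pure scale), pixel (4, 6) maps back to ground coordinates (2, 3).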
And S123, performing feature extraction on the transformation result to obtain an extraction result.
In an embodiment, referring to fig. 4, the step S123 may include steps S1231 to S1232.
And S1231, extracting clustering features from the transformation result for objects of the same density using a DBSCAN clustering algorithm to obtain feature points.
In this embodiment, the feature points are the result of extracting clustering features for objects of the same density with the DBSCAN clustering algorithm.
And S1232, identifying and screening the connection points of the feature points through parameter tuning to obtain an extraction result.
Specifically, after a series of 3D coordinates is obtained, the DBSCAN clustering algorithm extracts clustering features for objects of the same density, and the connection points are identified and screened through parameter tuning. At this point, the lane lines have been extracted as a series of feature points.
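A minimal pure-Python DBSCAN sketch of the clustering step (the `eps` and `min_pts` values in the example are illustrative, not the patent's tuned parameters):

```python
def dbscan(points, eps, min_pts):
    # Label each 2D point with a cluster id; -1 marks noise.
    n = len(points)
    labels = [None] * n

    def neighbors(i):
        xi, yi = points[i]
        return [j for j in range(n)
                if (points[j][0] - xi) ** 2 + (points[j][1] - yi) ** 2 <= eps ** 2]

    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:          # not a core point: provisionally noise
            labels[i] = -1
            continue
        labels[i] = cluster              # start a new cluster and expand it
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:          # noise reached from a core point
                labels[j] = cluster      #   becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:       # j is itself a core point: keep growing
                seeds.extend(jn)
        cluster += 1
    return labels
```

On two tight groups of lane-line feature points plus one stray point, the two groups receive cluster ids 0 and 1 and the stray point is labeled -1 (noise).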
And S130, performing lane line fitting according to the extraction result to obtain a fitting result.
In this embodiment, the fitting result refers to the curve formed by fitting the series of extracted feature points. Specifically, curve fitting is performed on the extraction result using the moving least squares method, which guarantees the curvature continuity of the lane line, as shown in figs. 8 to 11.
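A sketch of moving least squares fitting with a local linear model and Gaussian weights (the kernel choice and bandwidth `h` are assumptions; the patent does not disclose its kernel or parameters):

```python
import math

def mls_fit(xs, ys, query_xs, h=2.0):
    # Moving least squares: at each query point, solve a weighted linear
    # least-squares problem with Gaussian weights centered on the query.
    out = []
    for xq in query_xs:
        w = [math.exp(-((x - xq) / h) ** 2) for x in xs]
        sw = sum(w)
        sx = sum(wi * x for wi, x in zip(w, xs))
        sxx = sum(wi * x * x for wi, x in zip(w, xs))
        sy = sum(wi * y for wi, y in zip(w, ys))
        sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        det = sw * sxx - sx * sx
        slope = (sw * sxy - sx * sy) / det
        intercept = (sxx * sy - sx * sxy) / det
        out.append(intercept + slope * xq)
    return out
```

Because the fit is recomputed at every query point, the resulting curve varies smoothly, which is the property the text relies on for curvature continuity. On feature points lying exactly on y = 2x + 1, the fit reproduces the line exactly.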
And S140, constructing a lane line potential field based on the transverse position of the vehicle according to the fitting result.
In this embodiment, the lane line potential field refers to a lane line composite potential field.
In an embodiment, referring to fig. 6, the step S140 may include steps S141 to S142.
And S141, determining the curve coordinate mapping relation of the lane line according to the fitting result.
In this embodiment, after the fitting of the lane line is completed, the absolute coordinate of the lane line is obtained, and then the conversion between the absolute coordinate and the road coordinate is realized based on the coordinate conversion theory, so as to obtain the curve coordinate mapping relationship of the lane line, as shown in fig. 12.
And S142, designing a potential field function of the lane line based on the transverse position of the vehicle according to the curve coordinate mapping relation of the lane line so as to construct the potential field of the lane line.
The main aim of the lane keeping function is to keep the vehicle running stably within a certain range of the lane center line, rather than sacrificing vehicle stability in order to follow the center line strictly. The lateral distance of the vehicle from the lane line in the curve coordinate system is denoted d; see fig. 13.
Specifically, a lane line potential field function is designed based on the computed lateral position of the vehicle, and a composite lane line potential field is constructed. Potential field functions are defined separately on the lane boundaries and the lane center line of the structured road.

[potential field formulas reproduced only as images in the source]

The two functions are, respectively, the repulsive field at the lane line boundaries and the attractive (gravitational) field at the lane center line in curve coordinates; b and a are the adjustment coefficients of the lane boundary and lane center potential fields; d_LR and d_LL are the abscissas of the right and left lane lines; d is the abscissa of the ego vehicle; and d_LC is the abscissa of the lane center line. The lane line potential field is constructed by combining the identified lane lines with the potential field function, as shown in fig. 14.
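Since the potential field formulas themselves are reproduced only as images, the sketch below substitutes commonly used forms: a quadratic attractive field toward the center line and inverse-square repulsive fields at the boundaries. Both the functional forms and the coefficient values are assumptions; only the symbols (a, b, d, d_LL, d_LR, d_LC) follow the text.

```python
def lane_potential(d, d_LL, d_LR, a=1.0, b=0.05):
    # Assumed attractive field: quadratic pull toward the lane center line.
    d_LC = 0.5 * (d_LL + d_LR)
    u_att = a * (d - d_LC) ** 2
    # Assumed repulsive field: grows without bound at either lane boundary.
    u_rep = b * (1.0 / (d - d_LL) ** 2 + 1.0 / (d_LR - d) ** 2)
    return u_att + u_rep
```

With this shape the composite potential is lowest near the center line and rises steeply toward either boundary, matching the qualitative behavior the text describes.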
And S150, fusing the lane line potential field with a lane keeping model to obtain a lane keeping function.
In the present embodiment, the lane-keeping function refers to a function for predicting lane keeping.
In an embodiment, referring to fig. 7, the step S150 may include steps S151 to S153.
And S151, calculating the transverse position and the longitudinal position of the vehicle from the lane line based on the lane keeping model.
In the embodiment, an optimization problem is constructed based on a model predictive control theory, the conversion between geodetic coordinates and road coordinates is realized based on a coordinate conversion theory, and the transverse position and the longitudinal position of the vehicle from a lane line are calculated. The prediction and reference values in the prediction domain are generated as:
[prediction and reference formulas reproduced only as images in the source]
and S152, designing an objective function by combining the transverse distance, the reference transverse target and the lane line potential field.
In this embodiment, the objective function is

[objective function formula reproduced only as an image in the source]

where V_pre represents the N_P-step predicted state and V_ref represents the N_P-step state reference; the tracking and control terms are weighted by a state weight coefficient matrix and a control weight coefficient matrix, respectively.
And S153, combining the lane line potential field, the objective function and the vehicle three-degree-of-freedom dynamic model to construct an optimization problem so as to obtain a lane keeping function.
Combining step S142, step S151, step S152, and the vehicle three-degree-of-freedom dynamic model, a nonlinear, multi-objective, multi-constraint convex optimization problem is constructed:

[optimization problem formula reproduced only as an image in the source]

where A_d and B_d represent the coefficient matrices of the discretized state equation, C_d and D_d represent the coefficient matrices of the observation equation, and a relaxation factor converts the soft constraint into a semi-hard constraint. The semi-hard constraint allows the vehicle to deviate from the lane center line within a certain range, at the cost of a penalty.
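An illustrative evaluation of such an objective for one candidate control sequence: a quadratic tracking term over the prediction horizon, a control-effort term, and a slack penalty that realizes the semi-hard constraint. All weights (q, r, rho) are assumptions, since the patent's objective appears only as an image.

```python
def mpc_cost(v_pre, v_ref, u_seq, q=1.0, r=0.1, rho=100.0, slack=0.0):
    # Tracking: penalize deviation of predicted states from the reference.
    track = sum(q * (vp - vr) ** 2 for vp, vr in zip(v_pre, v_ref))
    # Effort: penalize large control inputs (e.g. steering commands).
    effort = sum(r * ui ** 2 for ui in u_seq)
    # Slack: the soft lane constraint becomes semi-hard via a heavy penalty.
    return track + effort + rho * slack ** 2
```

A real solver would minimize this cost over `u_seq` subject to the discretized three-degree-of-freedom vehicle model; here the function only scores a given sequence, which is the inner evaluation any such solver performs.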
The method of this embodiment first completes the calibration of the camera's internal and external parameters based on the camera parameters, together with image grayscale processing and inverse perspective transformation; second, it extracts lane line features with a spatial density clustering algorithm and fits the lane lines with the moving least squares method; finally, it fuses the lane line potential field with lane-keeping model predictive control and completes the longitudinal-lateral cooperative vehicle control module to realize the lane keeping function. The objective function is designed around the model-predicted lateral distance, the reference lateral target, and the lane line potential field, targeting both traffic efficiency and tracking accuracy; combining the corresponding formulas yields a nonlinear, multi-objective, multi-constraint convex optimization problem. The lane keeping function thus couples lane line detection accuracy with vehicle stability during lane keeping, achieves good compatibility in solid-line and dashed-line scenarios, and is more stable during cornering control. Compared with lateral-only control, tracking performance and tracking potential are improved.
According to the lane keeping method described above, camera calibration data are obtained by calibrating the internal and external parameters of the camera; the lane lines are extracted and fitted; a lane line potential field is constructed using the lateral position of the vehicle; and the lane line potential field is fused with a lane keeping model to obtain a lane keeping function that can be used for lane keeping prediction. The method can cope with low-probability events in real driving scenes, is suitable for detecting curves and dashed lines, and improves stability during cornering control.
Fig. 15 is a schematic block diagram of a lane keeping apparatus 300 according to an embodiment of the present invention. As shown in fig. 15, the present invention also provides a lane keeping device 300 corresponding to the above lane keeping method. The lane keeping apparatus 300 includes a unit for performing the above-described lane keeping method, and the apparatus may be configured in a server. Specifically, referring to fig. 15, the lane keeping device 300 includes a data acquisition unit 301, a lane line extraction unit 302, a fitting unit 303, a construction unit 304, and a fusion unit 305.
A data acquisition unit 301 configured to acquire camera calibration data; a lane line extraction unit 302, configured to extract a lane line according to the camera calibration data to obtain an extraction result; a fitting unit 303, configured to perform lane line fitting according to the extraction result to obtain a fitting result; a construction unit 304, configured to construct a lane line potential field based on a lateral position of the vehicle according to the fitting result; a fusion unit 305, configured to fuse the lane line potential field with a lane keeping model to obtain a lane keeping function.
In one embodiment, as shown in fig. 16, the data acquisition unit 301 includes a calibration subunit 3011 and a coordinate acquisition subunit 3012.
The calibration subunit 3011 is configured to calibrate the camera internal parameter and the camera external parameter based on the camera parameter to obtain a calibration result; and a coordinate obtaining subunit 3012, configured to obtain 2D pixel coordinates according to the calibration result, so as to obtain camera calibration data.
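The patent does not specify how the calibration subunit 3011 and coordinate obtaining subunit 3012 are implemented. As an illustration only, the sketch below shows how a calibrated intrinsic matrix and extrinsics yield 2D pixel coordinates via the standard pinhole model; the function name `project_to_pixels` and the sample values of `K`, `R`, and `t` are hypothetical, not taken from the patent.

```python
import numpy as np

def project_to_pixels(points_world, K, R, t):
    """Project 3D world points to 2D pixel coordinates using the
    calibrated intrinsic matrix K and extrinsics (R, t)."""
    pts_cam = R @ points_world.T + t.reshape(3, 1)  # world -> camera frame
    pts_img = K @ pts_cam                           # camera -> image plane
    return (pts_img[:2] / pts_img[2]).T             # perspective divide

# Hypothetical intrinsics: 800 px focal length, principal point (320, 240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                  # camera axes aligned with world axes
t = np.zeros(3)
# A point on the optical axis maps to the principal point; an offset
# point shifts by f * X / Z pixels.
uv = project_to_pixels(np.array([[0.0, 0.0, 10.0],
                                 [1.0, 0.0, 10.0]]), K, R, t)
```

In practice the intrinsics and extrinsics would come from a calibration routine such as OpenCV's chessboard-based `calibrateCamera`, rather than being written out by hand as here.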
In one embodiment, as shown in fig. 17, the lane line extraction unit 302 includes a processing subunit 3021, a transformation subunit 3022, and an extraction subunit 3023.
A processing subunit 3021, configured to perform image grayscale processing on the camera calibration data to obtain a processing result; a transformation subunit 3022, configured to perform inverse perspective transformation on the processing result to obtain a transformation result; an extracting subunit 3023, configured to perform feature extraction on the transform result to obtain an extraction result.
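The grayscale processing and inverse perspective transformation performed by subunits 3021 and 3022 can be sketched as follows. This is a numpy-only stand-in (an OpenCV pipeline would typically use `cvtColor`, `getPerspectiveTransform`, and `warpPerspective`); the trapezoid/rectangle correspondence coordinates are invented for illustration and are not from the patent.

```python
import numpy as np

def grayscale(img_rgb):
    """Luma-weighted grayscale conversion (ITU-R BT.601 weights)."""
    return img_rgb @ np.array([0.299, 0.587, 0.114])

def homography_from_points(src, dst):
    """Solve the 3x3 homography mapping src -> dst from four point
    correspondences via the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)          # null vector = homography entries
    return H / H[2, 2]

def apply_h(H, pt):
    """Map one pixel through the homography (perspective divide)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Inverse perspective mapping: a road trapezoid in the image is mapped
# to a rectangle, producing a bird's-eye view of the lane.
src = [(200, 480), (440, 480), (380, 300), (260, 300)]
dst = [(200, 480), (440, 480), (440, 0), (200, 0)]
H = homography_from_points(src, dst)
```

A full implementation would warp every pixel through `H` (or its inverse, for backward mapping) to produce the transformed image on which lane line features are then extracted.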
In one embodiment, as shown in fig. 18, the extracting subunit 3023 includes a feature point extracting module 30231 and a filtering module 30232.
A feature point extraction module 30231, configured to perform clustering-based feature extraction on the transformation result for objects of the same density using a DBSCAN clustering algorithm, to obtain feature points; and a screening module 30232, configured to screen the feature points for connection points through parameter adjustment and identification, to obtain an extraction result.
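The patent names DBSCAN but gives no parameters or code. Below is a minimal from-scratch DBSCAN over 2D candidate pixels, to make the density-clustering step concrete; the `eps` and `min_pts` values and the sample points are invented (a real system would tune them and would likely use a library implementation such as scikit-learn's `DBSCAN`).

```python
import numpy as np

def dbscan(points, eps=1.0, min_pts=4):
    """Minimal DBSCAN: assign each point a cluster id (-1 = noise).
    Points with at least min_pts neighbours within eps seed clusters,
    which then expand through density-connected neighbours."""
    n = len(points)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    neighbours = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbours[i]) < min_pts:
            continue                       # already clustered, or not a core
        labels[i] = cluster
        stack = list(neighbours[i])
        while stack:
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbours[j]) >= min_pts:
                    stack.extend(neighbours[j])  # core point: keep expanding
        cluster += 1
    return labels

# Two dense groups of candidate pixels (e.g. two lane lines) plus one
# isolated outlier, which DBSCAN rejects as noise.
pts = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5], [0.5, 0.5], [0.25, 0.25],
                [10.0, 10.0], [10.5, 10.0], [10.0, 10.5], [10.5, 10.5],
                [10.25, 10.25], [5.0, 5.0]])
labels = dbscan(pts, eps=1.0, min_pts=4)
```

The noise rejection is what makes the clustering step robust: stray bright pixels that do not belong to a dense lane line ridge are labelled `-1` and excluded from fitting.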
In an embodiment, the fitting unit 303 is configured to perform curve fitting on the extracted result based on a moving least square method to obtain a fitting result.
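The moving least squares fit used by the fitting unit 303 can be sketched in one dimension as follows: at each evaluation abscissa a local polynomial is fitted with Gaussian weights centred there, so the fit adapts to curvature along the lane. The bandwidth `h` and polynomial degree are illustrative assumptions, not values from the patent.

```python
import numpy as np

def mls_fit(x, y, x_eval, degree=2, h=2.0):
    """Moving least squares: at each evaluation abscissa, solve a
    Gaussian-weighted local polynomial fit and return its value there."""
    out = []
    for xe in np.atleast_1d(x_eval):
        w = np.exp(-((x - xe) / h) ** 2)     # locality weights
        V = np.vander(x - xe, degree + 1)    # local polynomial basis
        A = V * w[:, None]                   # weighted design matrix
        coef = np.linalg.solve(V.T @ A, V.T @ (w * y))
        out.append(coef[-1])                 # constant term = value at xe
    return np.array(out)

# On exact quadratic data, a local quadratic fit reproduces the curve.
x = np.linspace(0.0, 5.0, 11)
y = x ** 2
fitted = mls_fit(x, y, [2.5])
```

Unlike a single global polynomial fit, the weights are recomputed at every evaluation point, which is why the method handles varying curvature (straights flowing into curves) gracefully.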
In an embodiment, as shown in fig. 19, the building unit 304 includes a relationship determining subunit 3041 and a function building subunit 3042.
A relationship determination subunit 3041, configured to determine a curve coordinate mapping relationship of the lane line according to the fitting result; the function building subunit 3042 is configured to design a lane line potential field function based on the vehicle transverse position according to the curve coordinate mapping relationship of the lane line, so as to build the lane line potential field.
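The patent does not disclose the explicit form of the lane line potential field function designed by subunit 3042. One common choice, shown here purely as an assumed illustration, is inverse-square repulsion from each lane boundary, so the potential is minimal near the lane centre and rises steeply toward either line; the gain `k` and lane width are hypothetical.

```python
import numpy as np

def lane_potential(y, y_left=0.0, y_right=3.5, k=1.0):
    """Hypothetical lane line potential over lateral position y:
    minimal near the lane centre, diverging toward either boundary
    (inverse-square repulsion from each lane line)."""
    d_left = np.maximum(y - y_left, 1e-6)    # distance to left line
    d_right = np.maximum(y_right - y, 1e-6)  # distance to right line
    return k * (1.0 / d_left ** 2 + 1.0 / d_right ** 2)
```

Evaluated along the fitted lane line curve coordinates, such a function penalises lateral positions that drift toward either boundary, which is what lets the controller trade tracking accuracy against boundary clearance.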
In one embodiment, as shown in FIG. 20, the fusion unit 305 includes a position calculation subunit 3051, an objective function design subunit 3052, and a problem construction subunit 3053.
A position calculation subunit 3051 configured to calculate a lateral position and a longitudinal position of the vehicle from the lane line based on the lane keeping model; an objective function design subunit 3052, configured to design an objective function in combination with the lateral distance, the reference lateral target, and the lane line potential field; and the problem construction subunit 3053 is configured to combine the lane line potential field, the objective function, and the vehicle three-degree-of-freedom dynamic model to construct an optimization problem, so as to obtain a lane keeping function.
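The objective function and optimization problem built by subunits 3052 and 3053 can be illustrated with a deliberately simplified one-step cost: lateral tracking error plus a lane line potential term plus control effort, minimized here by a coarse grid search standing in for the convex optimization solver. All weights (`q`, `r`, `k`) and the grid are invented for illustration; the patent's actual problem is a multi-constraint formulation over a prediction horizon with a three-degree-of-freedom vehicle model.

```python
import numpy as np

def lane_keeping_cost(y, u, y_ref, q=1.0, r=0.1, k=0.05,
                      y_left=0.0, y_right=3.5):
    """Hypothetical one-step lane keeping cost: lateral tracking error
    + lane line potential term + steering (control) effort."""
    d_l = max(y - y_left, 1e-6)
    d_r = max(y_right - y, 1e-6)
    potential = k * (1.0 / d_l ** 2 + 1.0 / d_r ** 2)
    return q * (y - y_ref) ** 2 + potential + r * u ** 2

def best_lateral_position(y_ref):
    """Grid search stand-in for the optimisation solver: pick the
    lateral position with minimal cost (control input held at zero)."""
    grid = np.linspace(0.05, 3.45, 701)
    costs = [lane_keeping_cost(g, 0.0, y_ref) for g in grid]
    return float(grid[int(np.argmin(costs))])
```

The behaviour to note: when the reference sits near the lane centre the optimum tracks it exactly, but a reference close to a boundary is pulled back toward the centre by the potential term, which is the fusion of tracking objective and lane line potential field that the patent describes.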
It should be noted that, as can be clearly understood by those skilled in the art, the detailed implementation process of the lane keeping device 300 and each unit may refer to the corresponding description in the foregoing method embodiment, and for convenience and conciseness of description, no further description is provided herein.
The lane keeping apparatus 300 described above may be implemented in the form of a computer program that can be run on a computer device as shown in fig. 21.
Referring to fig. 21, fig. 21 is a schematic block diagram of a computer device according to an embodiment of the present application. The computer device 500 may be a server, wherein the server may be an independent server or a server cluster composed of a plurality of servers.
Referring to fig. 21, the computer device 500 includes a processor 502, memory, and a network interface 505 connected by a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 may store an operating system 5031 and computer programs 5032. The computer program 5032 comprises program instructions that, when executed, may cause the processor 502 to perform a lane keeping method.
The processor 502 is used to provide computing and control capabilities to support the operation of the overall computer device 500.
The internal memory 504 provides an environment for running the computer program 5032 in the non-volatile storage medium 503, and when the computer program 5032 is executed by the processor 502, the processor 502 may be caused to execute a lane keeping method.
The network interface 505 is used for network communication with other devices. Those skilled in the art will appreciate that the configuration shown in fig. 21 is a block diagram of only part of the configuration relevant to the present application and does not limit the computer device 500 to which the present application is applied; a particular computer device 500 may include more or fewer components than those shown, combine certain components, or arrange the components differently.
Wherein the processor 502 is configured to run the computer program 5032 stored in the memory to perform the steps of:
acquiring camera calibration data; extracting a lane line according to the camera calibration data to obtain an extraction result; performing lane line fitting according to the extraction result to obtain a fitting result; constructing a lane line potential field based on the transverse position of the vehicle according to the fitting result; and fusing the lane line potential field and a lane keeping model to obtain a lane keeping function.
In an embodiment, when the processor 502 implements the step of acquiring the camera calibration data, the following steps are specifically implemented:
calibrating the internal parameters and the external parameters of the camera based on the camera parameters to obtain a calibration result; and acquiring 2D pixel coordinates according to the calibration result to obtain camera calibration data.
In an embodiment, when the processor 502 implements the step of extracting the lane line according to the camera calibration data to obtain the extraction result, the following steps are specifically implemented:
performing image gray processing on the camera calibration data to obtain a processing result; carrying out inverse perspective transformation on the processing result to obtain a transformation result; and performing feature extraction on the transformation result to obtain an extraction result.
In an embodiment, when implementing the step of performing feature extraction on the transformation result to obtain an extraction result, the processor 502 specifically implements the following steps:
adopting a DBSCAN clustering algorithm to perform clustering feature extraction on the transformation result for objects of the same density to obtain feature points; and screening the feature points for connection points through parameter adjustment and identification to obtain an extraction result.
In an embodiment, when implementing the step of performing lane line fitting according to the extraction result to obtain a fitting result, the processor 502 specifically implements the following steps:
and performing curve fitting on the extraction result based on a moving least square method to obtain a fitting result.
In an embodiment, when implementing the step of constructing the lane line potential field based on the lateral position of the vehicle according to the fitting result, the processor 502 specifically implements the following steps:
determining a curve coordinate mapping relation of the lane line according to the fitting result; and designing a potential field function of the lane line based on the transverse position of the vehicle according to the curve coordinate mapping relation of the lane line so as to construct the potential field of the lane line.
In an embodiment, when the step of fusing the lane line potential field and the lane keeping model to obtain the lane keeping function is implemented by the processor 502, the following steps are specifically implemented:
calculating a lateral position and a longitudinal position of the vehicle from the lane line based on the lane keeping model; designing a target function by combining the transverse distance, the reference transverse target and the lane line potential field; and constructing an optimization problem by combining the lane line potential field, the objective function and the vehicle three-degree-of-freedom dynamic model to obtain a lane keeping function.
It should be understood that, in the embodiment of the present application, the processor 502 may be a Central Processing Unit (CPU); the processor 502 may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
It will be understood by those skilled in the art that all or part of the flow of the method implementing the above embodiments may be implemented by a computer program instructing associated hardware. The computer program includes program instructions, and the computer program may be stored in a storage medium, which is a computer-readable storage medium. The program instructions are executed by at least one processor in the computer system to implement the flow steps of the embodiments of the method described above.
Accordingly, the present invention also provides a storage medium. The storage medium may be a computer-readable storage medium. The storage medium stores a computer program, wherein the computer program, when executed by a processor, causes the processor to perform the steps of:
acquiring camera calibration data; extracting lane lines according to the camera calibration data to obtain an extraction result; performing lane line fitting according to the extraction result to obtain a fitting result; constructing a lane line potential field based on the transverse position of the vehicle according to the fitting result; and fusing the lane line potential field and a lane keeping model to obtain a lane keeping function.
In an embodiment, when the step of obtaining camera calibration data is implemented by executing the computer program, the processor specifically implements the following steps:
calibrating the internal parameters and the external parameters of the camera based on the camera parameters to obtain a calibration result; and acquiring 2D pixel coordinates according to the calibration result to obtain camera calibration data.
In an embodiment, when the processor executes the computer program to realize the step of extracting the lane line according to the camera calibration data to obtain the extraction result, the following steps are specifically realized:
performing image gray processing on the camera calibration data to obtain a processing result; carrying out inverse perspective transformation on the processing result to obtain a transformation result; and performing feature extraction on the transformation result to obtain an extraction result.
In an embodiment, when the processor executes the computer program to implement the step of performing feature extraction on the transformation result to obtain an extraction result, the following steps are specifically implemented:
adopting a DBSCAN clustering algorithm to perform clustering feature extraction on the transformation result for objects of the same density to obtain feature points; and screening the feature points for connection points through parameter adjustment and identification to obtain an extraction result.
In an embodiment, when the processor executes the computer program to implement the step of performing lane line fitting according to the extraction result to obtain a fitting result, the following steps are specifically implemented:
and performing curve fitting on the extraction result based on a moving least square method to obtain a fitting result.
In an embodiment, when the step of constructing the lane line potential field based on the lateral position of the vehicle according to the fitting result is implemented by the processor executing the computer program, the following steps are specifically implemented:
determining a curve coordinate mapping relation of the lane line according to the fitting result; and designing a potential field function of the lane line based on the transverse position of the vehicle according to the curve coordinate mapping relation of the lane line so as to construct the potential field of the lane line.
In an embodiment, when the processor executes the computer program to realize the step of fusing the lane line potential field and the lane keeping model to obtain the lane keeping function, the following steps are specifically realized:
calculating a lateral position and a longitudinal position of the vehicle from the lane line based on the lane keeping model; designing a target function by combining the transverse distance, the reference transverse target and the lane line potential field; and constructing an optimization problem by combining the lane line potential field, the objective function and the vehicle three-degree-of-freedom dynamic model to obtain a lane keeping function.
The storage medium may be a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, an optical disk, or any other computer-readable storage medium capable of storing a computer program.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of the two; to illustrate clearly the interchangeability of hardware and software, the components and steps of the examples have been described above in general functional terms. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative. For example, the division of each unit is only one logic function division, and there may be another division manner in actual implementation. For example, various elements or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented.
The steps in the method of the embodiment of the invention can be sequentially adjusted, combined and deleted according to actual needs. The units in the device of the embodiment of the invention can be merged, divided and deleted according to actual needs. In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a terminal, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A lane keeping method, characterized by comprising:
acquiring camera calibration data;
extracting a lane line according to the camera calibration data to obtain an extraction result;
performing lane line fitting according to the extraction result to obtain a fitting result;
constructing a lane line potential field based on the transverse position of the vehicle according to the fitting result;
and fusing the lane line potential field and a lane keeping model to obtain a lane keeping function.
2. The lane keeping method of claim 1, wherein said acquiring camera calibration data comprises:
calibrating the internal parameters and the external parameters of the camera based on the camera parameters to obtain a calibration result;
and acquiring 2D pixel coordinates according to the calibration result to obtain camera calibration data.
3. The lane keeping method of claim 1, wherein said extracting lane lines from said camera calibration data to obtain an extraction result comprises:
performing image gray processing on the camera calibration data to obtain a processing result;
carrying out inverse perspective transformation on the processing result to obtain a transformation result;
and performing feature extraction on the transformation result to obtain an extraction result.
4. The lane keeping method according to claim 3, wherein the performing feature extraction on the transformation result to obtain an extraction result comprises:
adopting a DBSCAN clustering algorithm to perform clustering feature extraction on the transformation result for objects of the same density to obtain feature points;
and screening the feature points for connection points through parameter adjustment and identification to obtain an extraction result.
5. The lane keeping method according to claim 1, wherein the performing lane line fitting according to the extraction result to obtain a fitting result comprises:
and performing curve fitting on the extraction result based on a moving least square method to obtain a fitting result.
6. The lane-keeping method of claim 1, wherein said constructing a lane-line potential field based on vehicle lateral position from said fitting results comprises:
determining a curve coordinate mapping relation of the lane line according to the fitting result;
and designing a potential field function of the lane line based on the transverse position of the vehicle according to the curve coordinate mapping relation of the lane line so as to construct the potential field of the lane line.
7. The lane keeping method of claim 6, wherein said fusing said lane line potential field with a lane keeping model to derive a lane keeping function comprises:
calculating a lateral position and a longitudinal position of the vehicle from the lane line based on the lane keeping model;
designing a target function by combining the transverse distance, the reference transverse target and the lane line potential field;
and combining the lane line potential field, the objective function and the vehicle three-degree-of-freedom dynamic model to construct an optimization problem so as to obtain a lane keeping function.
8. A lane keeping device, characterized by comprising:
the data acquisition unit is used for acquiring camera calibration data;
the lane line extraction unit is used for extracting a lane line according to the camera calibration data to obtain an extraction result;
the fitting unit is used for fitting the lane line according to the extraction result to obtain a fitting result;
the construction unit is used for constructing a lane line potential field based on the transverse position of the vehicle according to the fitting result;
and the fusion unit is used for fusing the lane line potential field with a lane keeping model to obtain a lane keeping function.
9. A computer device, characterized in that the computer device comprises a memory, on which a computer program is stored, and a processor, which when executing the computer program implements the method according to any of claims 1 to 7.
10. A storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 7.
CN202211482598.3A 2022-11-24 2022-11-24 Lane keeping method, lane keeping apparatus, computer device, and storage medium Active CN115546319B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211482598.3A CN115546319B (en) 2022-11-24 2022-11-24 Lane keeping method, lane keeping apparatus, computer device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211482598.3A CN115546319B (en) 2022-11-24 2022-11-24 Lane keeping method, lane keeping apparatus, computer device, and storage medium

Publications (2)

Publication Number Publication Date
CN115546319A true CN115546319A (en) 2022-12-30
CN115546319B CN115546319B (en) 2023-03-28

Family

ID=84720587

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211482598.3A Active CN115546319B (en) 2022-11-24 2022-11-24 Lane keeping method, lane keeping apparatus, computer device, and storage medium

Country Status (1)

Country Link
CN (1) CN115546319B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112319469A (en) * 2020-11-16 2021-02-05 深圳市康士柏实业有限公司 Lane keeping auxiliary system and method based on machine vision
CN113525375A (en) * 2020-04-21 2021-10-22 郑州宇通客车股份有限公司 Vehicle lane changing method and device based on artificial potential field method
WO2021253245A1 (en) * 2020-06-16 2021-12-23 华为技术有限公司 Method and device for identifying vehicle lane changing tendency
US20220348201A1 (en) * 2021-04-30 2022-11-03 Nissan North America, Inc. Intelligent Pedal Lane Change Assist

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113525375A (en) * 2020-04-21 2021-10-22 郑州宇通客车股份有限公司 Vehicle lane changing method and device based on artificial potential field method
WO2021253245A1 (en) * 2020-06-16 2021-12-23 华为技术有限公司 Method and device for identifying vehicle lane changing tendency
CN112319469A (en) * 2020-11-16 2021-02-05 深圳市康士柏实业有限公司 Lane keeping auxiliary system and method based on machine vision
US20220348201A1 (en) * 2021-04-30 2022-11-03 Nissan North America, Inc. Intelligent Pedal Lane Change Assist

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GAO ZHENHAI 等: "Optimal Preview Trajectory Decision Model of Lane-keeping System with Driver Behavior Simulation and Artificial Potential Field", 《2009 IEEE INTELLIGENT VEHICLES SYMPOSIUM》 *
HU Zhenguo et al.: "Lane keeping *** based on the artificial potential field method", 《Chinese Journal of Automotive Engineering》 *

Also Published As

Publication number Publication date
CN115546319B (en) 2023-03-28

Similar Documents

Publication Publication Date Title
Zhou et al. Automated evaluation of semantic segmentation robustness for autonomous driving
CN110728658A (en) High-resolution remote sensing image weak target detection method based on deep learning
WO2022007776A1 (en) Vehicle positioning method and apparatus for target scene region, device and storage medium
CN110487286B (en) Robot pose judgment method based on point feature projection and laser point cloud fusion
CN109214422B (en) Parking data repairing method, device, equipment and storage medium based on DCGAN
Xie et al. A binocular vision application in IoT: Realtime trustworthy road condition detection system in passable area
CN112258565B (en) Image processing method and device
CN113724379B (en) Three-dimensional reconstruction method and device for fusing image and laser point cloud
CN111105452A (en) High-low resolution fusion stereo matching method based on binocular vision
CN116612468A (en) Three-dimensional target detection method based on multi-mode fusion and depth attention mechanism
CN114280583B (en) Laser radar positioning accuracy verification method and system without GPS signal
CN115457492A (en) Target detection method and device, computer equipment and storage medium
CN114280626A (en) Laser radar SLAM method and system based on local structure information expansion
CN114578807A (en) Active target detection and obstacle avoidance method for unmanned target vehicle radar vision fusion
CN113916565B (en) Steering wheel zero deflection angle estimation method and device, vehicle and storage medium
CN116702607A (en) BIM-FEM-based bridge structure digital twin body and method
WO2024041447A1 (en) Pose determination method and apparatus, electronic device and storage medium
CN115546319B (en) Lane keeping method, lane keeping apparatus, computer device, and storage medium
CN113420590A (en) Robot positioning method, device, equipment and medium in weak texture environment
CN113191427A (en) Multi-target vehicle tracking method and related device
CN109816710B (en) Parallax calculation method for binocular vision system with high precision and no smear
CN111861931A (en) Model training method, image enhancement method, model training device, image enhancement device, electronic equipment and storage medium
CN115937449A (en) High-precision map generation method and device, electronic equipment and storage medium
CN111462177B (en) Multi-clue-based online multi-target tracking method and system
CN115236643A (en) Sensor calibration method, system, device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 301, 302, Floor 3, Building A, Subzone 3, Leibai Zhongcheng Life Science Park, No. 22, Jinxiu East Road, Jinsha Community, Kengzi Street, Pingshan District, Shenzhen, Guangdong 518000

Applicant after: Shenzhen Automotive Research Institute Beijing University of Technology

Address before: Floor 19, block a, innovation Plaza, 2007 Pingshan street, Pingshan District, Shenzhen, Guangdong 518000

Applicant before: Shenzhen Automotive Research Institute Beijing University of Technology

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230515

Address after: No. 301, 302, Floor 3, Building A, Subzone 3, Leibai Zhongcheng Life Science Park, No. 22, Jinxiu East Road, Jinsha Community, Kengzi Street, Pingshan District, Shenzhen, Guangdong 518000

Patentee after: Shenzhen Automotive Research Institute Beijing University of Technology

Patentee after: BYD AUTO Co.,Ltd.

Address before: No. 301, 302, Floor 3, Building A, Subzone 3, Leibai Zhongcheng Life Science Park, No. 22, Jinxiu East Road, Jinsha Community, Kengzi Street, Pingshan District, Shenzhen, Guangdong 518000

Patentee before: Shenzhen Automotive Research Institute Beijing University of Technology

TR01 Transfer of patent right