CN113869203A - Vehicle positioning method and system - Google Patents

Vehicle positioning method and system

Info

Publication number
CN113869203A
CN113869203A
Authority
CN
China
Prior art keywords
vehicle
positioning
vector map
pose
top view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111137868.2A
Other languages
Chinese (zh)
Inventor
李赵
张旸
陈诚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AutoCore Intelligence Technology Nanjing Co Ltd
Original Assignee
AutoCore Intelligence Technology Nanjing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by AutoCore Intelligence Technology Nanjing Co Ltd filed Critical AutoCore Intelligence Technology Nanjing Co Ltd
Priority to CN202111137868.2A priority Critical patent/CN113869203A/en
Publication of CN113869203A publication Critical patent/CN113869203A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/04 Context-preserving transformations, e.g. by using an importance map
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T 2207/30256 Lane; Road marking

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a vehicle positioning method comprising the following steps. Step 1: collect road surface image information and vehicle motion information. Step 2: convert the collected road surface image information into a top view by inverse perspective transformation, and match the road surface image information in the top view against a vector map to obtain the position of the top view in the vector map. Step 3: obtain the position of the vehicle in the vector map from the position of the vehicle in the top view, thereby obtaining the pose information of the vehicle. The invention also provides a vehicle positioning system. The invention is inexpensive to use and positions well; positioning is completed quickly and accurately, which is particularly important for automatically driven vehicles; moreover, the method and the system provided by the invention place low requirements on computer computing power.

Description

Vehicle positioning method and system
Technical Field
The invention belongs to the field of automatic driving, and particularly relates to a vehicle positioning method and system.
Background
Vehicle positioning is a key technology in the field of automatic driving. It is typically realized by combining sensors such as high-precision integrated navigation units, multi-line lidar and cameras with a high-precision map, mainly using algorithms such as Kalman filtering and SLAM. High-precision integrated navigation is expensive overall and costly to use; multi-line lidar positioning demands large computing power, is expensive, and places high requirements on computing platform resources. At present, absolute positioning is achieved with expensive GPS and IMU equipment, but this method shows large positioning deviations wherever the GPS signal or the magnetic field environment is unstable, for example when the vehicle passes through a long tunnel.
Disclosure of Invention
The purpose of the invention is as follows: aiming at the problems in the prior art, the invention provides a vehicle positioning method which is based on low-cost sensors and has an accurate positioning effect.
The technical scheme is as follows: in order to achieve the above object, the present invention provides a vehicle positioning method, comprising the steps of:
step 1: collecting road surface image information and vehicle motion information;
step 2: converting the collected road surface image information into a top view by an inverse perspective transformation method; matching the road surface image information in the top view with the vector map to obtain the position of the top view in the vector map;
and step 3: obtaining the position of the vehicle in the vector map according to the position of the vehicle in the top view, thereby obtaining the pose information of the vehicle;
wherein, the step 3 comprises transverse positioning, longitudinal positioning and fusion positioning;
the transverse positioning obtains the transverse offset distance of the vehicle in the lane from the top view; the vehicle motion information acquired by the sensor, the transverse offset distance of the vehicle in the lane and the pose information of the vehicle obtained by fusion positioning are optimized by a first Kalman filter to obtain the state quantities output by the first Kalman filter, which comprise the lateral offset distance of the vehicle in the road, the lateral acceleration of the vehicle, the speed of the vehicle and the yaw angle of the vehicle;
the longitudinal positioning registers the road surface identifications in the top view with the road surface identifications in the vector map to obtain the longitudinal displacement of the vehicle in the vector map; the vehicle motion information acquired by the sensor, the longitudinal displacement of the vehicle in the vector map and the pose information of the vehicle obtained by fusion positioning are optimized by a second Kalman filter to obtain the state quantities output by the second Kalman filter, which are the longitudinal displacement of the vehicle in the vector map, the running speed of the vehicle and the yaw angle of the vehicle;
the fusion localization comprises the following steps:
step 301: obtaining the pose V_pos of the vehicle in the vector map according to the output values of the transverse positioning sub-module and the longitudinal positioning sub-module;
step 302: taking a plurality of reference poses centered on the pose V_pos;
step 303: taking out from the vector map the sets of lane line discrete points near the pose V_pos and near the plurality of reference poses, where S_map^i denotes the set of lane line discrete points near the i-th pose in the vector map, i ∈ (1, 2, …, T+1);
step 304: taking out from the top view the sets of lane line discrete points near the pose V_pos and near the plurality of reference poses, where S_img^ii denotes the set of lane line discrete points near the ii-th pose in the top view;
step 305: separately computing, for each pose, the minimum distance between the set S_map^i and the set S_img^ii, and taking the pose corresponding to the smallest minimum distance as the pose of the current vehicle; the pose of the vehicle comprises the abscissa and the ordinate of the vehicle in the vector map and the heading angle of the vehicle.
Further, the method for obtaining the top view in step 2 is as follows. According to the formulas:

tan α = (v - c_y) / f_y

Z_c = h · cos α / sin(α + β)

[x, y, 1]^T = M'^{-1} · Z_c · [u, v, 1]^T

the collected road surface image information is converted into a top view; x and y are respectively the abscissa value and the ordinate value in the converted top view, h is the height of the camera from the ground, β is the pitch angle of the camera, and α is the deflection angle of a pixel point in the road surface image relative to the focus of the camera; u and v are the abscissa and ordinate of the pixel in the road surface image collected by the camera; f_y and c_y are the longitudinal focal length and longitudinal optical-axis offset in the internal parameter matrix of the camera; M'^{-1} is the inverse transformation of M', which is obtained from M = I · T by fixing the height of the projection plane, where I is the internal parameter matrix of the camera and T is the external parameter matrix from the camera to the vehicle. Using the reconstructed M matrix makes the calculation more convenient.
Further, the method for acquiring the pitch angle β of the camera is as follows: in the initial state, a plurality of pitch angles are selected by taking the measured pitch angle as a reference; top-view conversion is performed with each of them, the lane width in the resulting top view is compared with the lane width in the vector map, and the difference between the two is taken; the pitch angle with the minimum difference is the pitch angle used in the top-view conversion calculation. The pitch angle β obtained in this way is more accurate, and so is the top view converted with it.
Further, the method for selecting the plurality of pitch angles is as follows: taking the pitch angle of the camera measured in the initial state as the reference pitch angle, 20 pitch angles are taken on each side of the reference pitch angle at equal angular intervals of 0.1°. This effectively satisfies the required calculation precision while also speeding up the calculation.
Furthermore, in the transverse positioning an alarm signal is sent out according to the degree to which the vehicle deviates from the center line of the road: the degree of deviation is measured by the lateral offset |d|, and if |d| ≥ (R_w - V_w) / 2 - s_d, an alarm signal is sent out; d is the lateral offset of the vehicle in the lane, V_w is the width of the vehicle body, R_w is the lane width, and s_d is a safe distance. In this way it can be quickly known whether the vehicle is running safely.
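For illustration, a minimal Python sketch of this alarm check, assuming the reconstructed inequality above (the function and parameter names are illustrative, not part of the invention):

```python
def lane_departure_alarm(d: float, vehicle_width: float,
                         lane_width: float, safe_distance: float) -> bool:
    """Return True if an alarm signal should be sent.

    d: lateral offset of the vehicle in the lane (m)
    vehicle_width: vehicle body width Vw (m)
    lane_width: lane width Rw (m)
    safe_distance: safe distance sd (m)
    """
    # (Rw - Vw) / 2 is the clearance between the vehicle side and the lane
    # edge when the vehicle is centered; the alarm fires once the lateral
    # offset eats into the safe distance sd.
    return abs(d) >= (lane_width - vehicle_width) / 2.0 - safe_distance


# Example: a 1.8 m wide vehicle offset 0.9 m in a 3.5 m lane with a
# 0.2 m safe distance triggers the alarm (0.9 >= 0.65).
assert lane_departure_alarm(0.9, 1.8, 3.5, 0.2)
```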
Further, the method for acquiring the minimum distance between the set S_map^i and the set S_img^ii is as follows: the set with fewer points is selected as the reference set; each point in the reference set is traversed against each point in the set with more points to calculate the distance between the two points, and the minimum distance value corresponding to each point in the reference set is selected, giving the minimum distance set of the reference set; the values in the minimum distance set are added up to obtain the minimum distance between the set S_map^i and the set S_img^ii. In this way positioning can be completed more quickly and accurately, and the method places low requirements on the hardware of the computer platform.
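A minimal Python sketch of this set-to-set distance (a directed, summed nearest-neighbour distance; the list-of-tuples point representation is an assumption of the sketch):

```python
import math

def set_min_distance(set_a, set_b):
    """Minimum distance between two sets of lane-line discrete points:
    the set with fewer points is the reference set, each reference point
    is matched to its nearest point in the other set, and the per-point
    minima are summed."""
    ref, other = (set_a, set_b) if len(set_a) <= len(set_b) else (set_b, set_a)
    total = 0.0
    for xr, yr in ref:
        # Nearest neighbour of this reference point in the larger set.
        total += min(math.hypot(xr - xo, yr - yo) for xo, yo in other)
    return total
```

The brute-force traversal matches the description above; if the point sets grow large, a k-d tree could replace the inner loop without changing the result.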
The invention also provides a vehicle positioning system, which comprises a data acquisition module, an image conversion matching module and a vehicle positioning module; wherein,
the data acquisition module is used for acquiring vehicle operation data and road surface image information around the vehicle;
the image conversion matching module converts the road surface image information around the vehicle, which is acquired by the data acquisition module, into a top view through an inverse perspective transformation method; matching the road surface image information in the top view with the vector map to obtain the position of the top view in the vector map;
the vehicle positioning module obtains the position of the vehicle in the vector map according to the position of the vehicle in the top view, thereby obtaining the positioning information of the vehicle.
Further, the vehicle positioning module comprises a transverse positioning sub-module, a longitudinal positioning sub-module and a fusion positioning sub-module, the transverse positioning sub-module and the longitudinal positioning sub-module input results into the fusion positioning sub-module in real time, and the fusion positioning sub-module outputs the positioning results to the terminal in real time and respectively sends the positioning results to the transverse positioning sub-module and the longitudinal positioning sub-module for optimization; wherein:
the transverse positioning sub-module obtains the transverse offset distance of the vehicle in the lane from the top view; the vehicle motion information acquired by the sensor, the transverse offset distance of the vehicle in the lane and the pose information of the vehicle obtained by fusion positioning are optimized by a first Kalman filter, whose output state quantities comprise the lateral offset distance of the vehicle in the road, the lateral acceleration of the vehicle, the speed of the vehicle and the yaw angle of the vehicle;
the longitudinal positioning sub-module registers the road surface identifications in the top view with the road surface identifications in the vector map to obtain the longitudinal displacement of the vehicle in the vector map; the vehicle motion information acquired by the sensor, the longitudinal displacement of the vehicle in the vector map and the pose information of the vehicle obtained by fusion positioning are optimized by a second Kalman filter, whose output state quantities are the longitudinal displacement of the vehicle in the vector map, the running speed of the vehicle and the yaw angle of the vehicle;
the positioning method of the fusion positioning submodule comprises the following steps:
obtaining the pose V_pos of the vehicle in the vector map according to the output values of the transverse positioning sub-module and the longitudinal positioning sub-module;
taking a plurality of reference poses centered on the pose V_pos;
taking out from the vector map the sets of lane line discrete points near the pose V_pos and near the plurality of reference poses, where S_map^i denotes the set of lane line discrete points near the i-th pose in the vector map, i ∈ (1, 2, …, T+1);
taking out from the top view the sets of lane line discrete points near the pose V_pos and near the plurality of reference poses, where S_img^ii denotes the set of lane line discrete points near the ii-th pose in the top view;
separately computing, for each pose, the minimum distance between the set S_map^i and the set S_img^ii, and taking the pose corresponding to the smallest minimum distance as the pose of the current vehicle; the pose of the vehicle comprises the abscissa and the ordinate of the vehicle in the vector map and the heading angle of the vehicle.
Further, the method for acquiring the minimum distance between the set S_map^i and the set S_img^ii is as follows: the set with fewer points is selected as the reference set; each point in the reference set is traversed against each point in the set with more points to calculate the distance between the two points, and the minimum distance value corresponding to each point in the reference set is selected, giving the minimum distance set of the reference set; the values in the minimum distance set are added up to obtain the minimum distance between the set S_map^i and the set S_img^ii.
Further, the data acquisition module comprises a camera, the camera is arranged at the top of the vehicle, and the pitch angle and the roll angle of the camera are both 0.
Advantageous effects: compared with the prior art, the invention is inexpensive to use and positions well; positioning is completed quickly and accurately, which is particularly important for automatically driven vehicles; moreover, the method and the system provided by the invention place low requirements on computer computing power.
Drawings
FIG. 1 is a schematic diagram of a system according to the present invention;
FIG. 2 is a flow chart of a method provided by the present invention;
FIG. 3 is a schematic view of the camera projection when the roll angle and yaw angle are 0;
FIG. 4 is a schematic view of a vehicle positioning module;
FIG. 5 is a schematic view of vehicle parameters optimized by the lateral positioning sub-module;
FIG. 6 is a schematic view of vehicle parameters optimized by the longitudinal positioning sub-module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the examples of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1: as shown in fig. 1, the present embodiment provides a vehicle positioning system mainly comprising a data acquisition module, an image conversion matching module and a vehicle positioning module. The data acquisition module mainly uses a camera and an inertial measurement unit (IMU) to acquire the relevant data of the vehicle. The camera is arranged at the top of the vehicle with its pitch angle and roll angle both 0, and is mainly used for collecting road surface image information, including lane lines, turning arrows, stop lines, speed-limit road signs and the like; the IMU is mainly used for collecting vehicle motion information, including the acceleration and rotation angle of the vehicle motion. The image conversion matching module converts the road surface image information acquired by the data acquisition module into a top view by inverse perspective transformation and matches the road surface image information in the top view with the vector map to obtain the position of the top view in the vector map. The vehicle positioning module obtains the position of the vehicle in the vector map according to the position of the vehicle in the top view, thereby obtaining the positioning information of the vehicle.
As shown in fig. 2, the present embodiment provides a positioning method based on a vehicle positioning system, which mainly includes the following steps:
step 1: the data acquisition module acquires road surface image information and vehicle motion information; the vehicle motion information includes acceleration, rotation angle, and the like of the vehicle motion;
step 2: the image conversion matching module converts the road surface image information acquired by the data acquisition module into a top view by inverse perspective transformation, and matches the road surface image information in the top view with the vector map to obtain the position of the top view in the vector map. Specifically, the conversion is completed according to the formula:

[x, y, 1]^T = M'^{-1} · Z_c · [u, v, 1]^T, with Z_c = h · cos α / sin(α + β)

where h is the height of the camera from the ground and β is the pitch angle of the camera; in the initial state, the pitch angle β is measured with a ruler or obtained by an external parameter calibration method.
The specific working principle is as follows:
according to the camera imaging principle, the transformation from the space point under the vehicle coordinate system to the image coordinate system is as follows:
Figure BDA0003282949330000061
wherein u and v represent the abscissa and ordinate of the pixel in the road surface image collected by the camera, and I is the internal reference of the camera
Figure BDA0003282949330000062
Wherein, cxRepresenting the amount of displacement of the optical axis of the camera in the transverse direction in the image coordinate system, cyRepresenting the amount of displacement of the optical axis of the camera in the horizontal and vertical directions in the image coordinate system, fxAnd fyRespectively, the focal lengths in the lateral and longitudinal directions. T is the external reference from the camera to the vehicle, in the embodiment
Figure BDA0003282949330000063
Wherein r represents a rotation vector which is a 3 x 3 matrix and is obtained by a pitch angle, a roll angle and a course angle. t is a 3 x 1 matrix, which is composed of translation values of x, y and z respectively; x, y and z are respectively expressed as an abscissa value, an ordinate value and a height coordinate value in a vehicle coordinate system; i.e. the abscissa, ordinate and height coordinates in the top view. Zc is the actual depth value corresponding to the pixel point in the camera coordinate system. Wherein, I is a 3 x 3 matrix, and T is a 3 x 4 matrix. Let M ═ I × T, then M ═ 3 × 4 matrix.
Figure BDA0003282949330000064
Combining the formula (1) to obtain:
Figure BDA0003282949330000065
it can be seen from the formula (2) that any point [ x, y, z ] in the three-dimensional space can be projected into [ u, v ] in the two-dimensional image, and if [ u, v ] of the two-dimensional image is projected into the three-dimensional space [ x, y, z ], the M matrix cannot be inverted, which is not true. But if a certain latitude value in three-dimensional space is assumed to be confirmed. The position of a pixel coordinate point on a known plane can be solved by mainly reconstructing an M matrix and calculating Zc. The specific method comprises the following steps:
(1) Let the height value z of the projection plane be h. Then:

Z_c · [u, v, 1]^T = M · [x, y, h, 1]^T    (3)

Folding the fixed height z = h into the matrix simplifies the above formula. With m_1, m_2, m_3 and m_4 denoting the columns of M:

M' = [m_1, m_2, h · m_3 + m_4]    (4)

Z_c · [u, v, 1]^T = M' · [x, y, 1]^T    (5)

M' is a 3 × 3 matrix, i.e. a square matrix, so its inverse transformation can be solved:

[x, y, 1]^T = M'^{-1} · Z_c · [u, v, 1]^T    (6)
as shown in fig. 3, where o is the origin of the camera, oa is the projection height (assuming oa is h), h is the height of the camera from the ground, of is the focal length of the camera, gh is the image plane of the camera, d is a point on the ground, b is the corresponding point of d on the image, and β is the pitch angle, oc is the actual depth value corresponding to the pixel point in the camera coordinate system. Alpha is the declination angle of the point b relative to the focal point of the camera.
The coordinate values of the camera internal parameters and b in the image and the projection height of the camera can be obtained:
Figure BDA0003282949330000072
Figure BDA0003282949330000073
oc=od*cosα (9)
Figure BDA0003282949330000074
oc is equal to z of the camera at roll and yaw angles of 0cThe value, in combination with equations (10) and (6), can be found at the image space point [ u v ] at that time](x, y) coordinates in the z-h plane:
Figure BDA0003282949330000075
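A compact Python sketch of equations (3) to (11), projecting a pixel onto the ground plane z = h (numpy-based; the final normalization of the homogeneous result is an added safeguard of this sketch, not part of the derivation):

```python
import numpy as np

def pixel_to_ground(u, v, K, T_ext, h, beta):
    """Project image pixel (u, v) onto the ground plane z = h.

    K:     3x3 camera internal parameter matrix I
    T_ext: 3x4 camera-to-vehicle external parameter matrix [r | t]
    h:     camera height above the ground (m)
    beta:  camera pitch angle (rad)
    """
    M = K @ T_ext                                      # M = I * T, 3x4
    # Fold the fixed height z = h into the last column to obtain the
    # square matrix M' of equation (4).
    M_prime = np.column_stack((M[:, 0], M[:, 1], h * M[:, 2] + M[:, 3]))

    fy, cy = K[1, 1], K[1, 2]
    alpha = np.arctan((v - cy) / fy)                   # equation (7)
    Zc = h * np.cos(alpha) / np.sin(alpha + beta)      # equation (10)

    xy1 = np.linalg.inv(M_prime) @ (Zc * np.array([u, v, 1.0]))  # eq. (11)
    xy1 = xy1 / xy1[2]   # third component is 1 when Zc is exact
    return xy1[0], xy1[1]
```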
the conversion relationship obtainable from equation (11) is mainly determined by the altitude h and the pitch angle β. Wherein the height value is higher by geometric measurement. However, the pitch angle measurement method of the camera is difficult, and a certain deviation exists due to the undulation of the road surface in the running process of the vehicle, so that the detection of the lane line is unstable. At this time, the lane width in the transformed top view and the lane width in the vector map can be compared, so as to obtain the optimal pitch angle. Taking the measured pitch angle as a reference Pm, and respectively taking 20 pitch angles before and after the reference pitch angle Pm at the same angle interval; wherein the angular interval is 0.1 deg.. And respectively carrying out inverse perspective transformation by taking each elevation angle as a parameter, and carrying out difference on the transformed lane width and the lane width in the vector map. And taking the pitch angle corresponding to the minimum difference Dmin as a standard pitch angle, and carrying out inverse perspective transformation to obtain a lane line and road surface signals such as a steering arrow, a stop line, a speed-limiting road sign, a diamond mark and the like so as to complete the conversion of the top view. The minimum difference Dmin needs to be smaller than the threshold Dt, if the minimum difference Dmin is not smaller than the threshold Dt, it indicates that a large deviation occurs in the pitch angle, the calculation is incorrect, the threshold Dt is set, the accuracy and the effectiveness of the calculation can be effectively ensured, and the threshold Dt is 10 in this embodiment.
Step 3: the vehicle positioning module obtains the position of the vehicle in the vector map according to the position of the vehicle in the top view, thereby obtaining the positioning information of the vehicle. As shown in fig. 4, the vehicle positioning module specifically comprises a transverse positioning sub-module, a longitudinal positioning sub-module and a fusion positioning sub-module; the transverse positioning sub-module and the longitudinal positioning sub-module input their results into the fusion positioning sub-module in real time, and the fusion positioning sub-module outputs the positioning result to the terminal in real time and also sends it back to the transverse positioning sub-module and the longitudinal positioning sub-module. The transverse positioning sub-module and the longitudinal positioning sub-module both work on the basis of a Kalman filter, and both work in the Frenet coordinate system.
The transverse positioning sub-module specifically comprises the following steps:
step 311: as shown in fig. 5, the central axis of the lane in the top view is set as a reference central axis, and the distance between the vehicle central point and the reference central axis is calculated to obtain the lateral offset distance d of the vehicle in the lane.
Step 312: the degree to which the vehicle body deviates from the center line of the road is calculated from the lateral offset d. If |d| is less than (R_w - V_w) / 2 - s_d, the vehicle is running safely; otherwise warning information is reported. V_w denotes the width of the vehicle body, R_w denotes the lane width, and s_d denotes the safe distance.
Step 313: the vehicle motion information acquired by the sensor, the lateral offset distance of the vehicle in the road and the like are input into the first Kalman filter for optimization, and the state quantity X_1 output by the first Kalman filter is the output of the transverse positioning sub-module. The state quantity X_1 output by the first Kalman filter is (d, a, v, θ), where d is the lateral offset distance of the vehicle in the road; a is the lateral acceleration of the vehicle; v is the running speed of the vehicle; and θ, the angle between the central axis of the vehicle and the reference central axis, is the yaw angle of the vehicle.
The motion model is as follows:

d_t = d_{t-1} + v_{t-1} · sin θ · Δt + (1/2) · a · Δt²

where d_t is the distance from the vehicle center point to the reference central axis at time t; d_{t-1} is the distance from the vehicle center point to the reference central axis at time t-1; v_{t-1} is the running speed of the vehicle at time t-1; θ is the angle between the central axis of the vehicle and the reference central axis; a is the lateral acceleration of the vehicle; and Δt is the time difference between time t and time t-1. The lateral acceleration a of the vehicle is obtained according to the formula:

a = a_imu - v² / R

where a_imu is the acceleration of the vehicle collected by the IMU and R is the radius of curvature of the road identified in the vector map.
A motion prediction model can be obtained from the motion model:

X_{1,t} = F · X_{1,t-1},  P_t = F · P_{t-1} · F^T + Q

where X_{1,t} is the value of each state of the state quantity X_1 of the first Kalman filter at time t, X_{1,t-1} is the value of each state of the state quantity X_1 at time t-1, F is the state transition matrix of the prediction model, P is the covariance of the prediction model, and Q is the motion noise of the prediction model;
the conversion relation between the observation state and the state quantity is as follows:
Figure BDA0003282949330000091
in the above formula Z1For the observation state input to the first Kalman filter, Z1=[d,a,v]The observation state may be output data of the fusion positioning sub-module or data collected by the sensor, and T represents a matrix transposition operation. R is observation noise which is set according to the observation state, namely the output data of the positioning submodule and the number collected by the sensor are fusedThe observation noise is set based on an empirical value, depending on the observation noise used as the observation state. And obtaining a stable system state value according to the Kalman filter.
Longitudinal positioning sub-module: the visually detected road surface identifications are registered with the road surface identifications in the vector map to obtain the position of the vehicle in the vector map, and the Kalman filter is designed accordingly. The method specifically comprises the following steps:

Step 321: as shown in fig. 6, the visually detected road surface identifications are registered with the ground identifications in the vector map, thereby obtaining the longitudinal displacement s of the vehicle in the vector map. The area of the vehicle in the vector map is found according to the similarity between the road surface identifications in the top view and those in the vector map, and the longitudinal displacement s of the vehicle in the vector map is determined through fixed road surface identifications.

Step 322: the vehicle motion information acquired by the sensor and the longitudinal displacement of the vehicle in the vector map obtained in step 321 are input into the second Kalman filter for optimization; the state quantity X_2 output by the Kalman filter is the output of the longitudinal positioning sub-module. The state quantity X_2 output by the second Kalman filter is (s, v, θ), where s is the longitudinal displacement of the vehicle in the vector map; v is the running speed of the vehicle; and θ, the angle between the central axis of the vehicle and the reference central axis, is the yaw angle of the vehicle.
The motion model is as follows:

s_t = s_{t-1} + v_{t-1} · cos θ · Δt

where s_t is the longitudinal displacement of the vehicle at time t, s_{t-1} is the longitudinal displacement of the vehicle at time t-1, v_{t-1} is the running speed of the vehicle at time t-1, θ is the angle between the central axis of the vehicle and the reference central axis, and Δt is the time difference between time t and time t-1.
A motion prediction model can be obtained from the motion model:

X_{2,t} = F · X_{2,t-1},  P_t = F · P_{t-1} · F^T + Q

where X_{2,t} is the value of each state of the state quantity X_2 of the second Kalman filter at time t, X_{2,t-1} is the value of each state of the state quantity X_2 at time t-1, F is the state transition matrix of the prediction model, P is the covariance of the prediction model, and Q is the motion noise of the prediction model;
the conversion relation between the observation state and the state quantity is as follows:
Figure BDA0003282949330000101
in the above formula Z2For the observation state input to the second Kalman filter, Z2=[s,v,θ]The observation state may be output data of the fusion positioning sub-module or data collected by the sensor, and T represents a matrix transposition operation. And R is observation noise, the observation noise is set according to the observation state, namely the output data of the fusion positioning sub-module and the data collected by the sensor are different when the data are used as the observation state, and the observation noise is set according to an empirical value. And obtaining a stable system state value according to the Kalman filter.
Fusion positioning sub-module: the output values of the transverse positioning sub-module and the longitudinal positioning sub-module, the visually detected lane information and the vector map are fused and matched to obtain an accurate positioning result of the vehicle in the vector map.
The method comprises the following implementation steps:
step 331: obtaining pose information V of the vehicle in the vector map according to output values of the transverse positioning submodule and the longitudinal positioning submodulepos(x1,y1Ya). Since the top view is matched with the vector map, the lateral offset distance d of the vehicle in the road output by the lateral positioning sub-module, the longitudinal displacement s of the vehicle in the vector map output by the longitudinal positioning sub-module and the included angle theta between the central axis of the vehicle and the reference central axis can be obtained by calculating through the lateral offset distance d of the vehicle in the road, the longitudinal displacement s of the vehicle in the vector map output by the longitudinal positioning sub-module and the included angle theta between the central axis of the vehicle and the reference central axis in the spoke coordinate systemPose information V of vehicle in vector mappos(x1,y1,yaw);x1Representing the abscissa, y, of the vehicle in a vector map1The ordinate of the vehicle in the vector map is shown, and yaw shows the heading angle of the vehicle, i.e. the value in the freset coordinate system is converted into the vector map coordinate system.
Step 332: at a pose VposTaking T reference poses as a center; in this embodiment, pose VposCentered at the forward direction of the vehicle and away from the pose VposFront and back 5 m, off position Vpos10 reference points are randomly arranged within the range of 2 meters left and right.
Step 333: the sets of lane line discrete points near the pose V_pos and near the T reference poses are respectively taken out of the vector map, where S_map^i denotes the set of lane line discrete points near the i-th pose in the vector map, i ∈ (1, 2, …, T+1); in the present embodiment, "near" means within 20 m ahead of the pose V_pos or of the T reference poses.

Step 334: the sets of lane line discrete points near V_pos and near the T reference poses are respectively taken out of the top view, where S_img^ii denotes the set of lane line discrete points near the ii-th pose in the top view, ii ∈ (1, 2, …, T+1); in the present embodiment, "near" means within 20 m ahead of the pose V_pos or of the 10 reference poses. The coordinates in the vector map of the lanes appearing in the top view are obtained according to the positional relationship between the vehicle and the lanes in the top view, thereby obtaining the coordinates of each point of S_img^ii in the vector map.
Step 335: the minimum distance between the set S_map^i and the set S_img^ii is computed separately for each pose, and the pose corresponding to the smallest minimum distance is the pose of the current vehicle.

The minimum distance between a set S_map^i and a set S_img^ii is calculated by selecting the set with fewer points among S_map^i and S_img^ii as the reference set, traversing each point in the reference set and calculating its distance to each point in the set with more points to obtain the minimum distance corresponding to each point in the reference set, thereby obtaining the set of minimum distances of the reference set; the minimum distances are summed to obtain the minimum distance between the set S_map^i and the set S_img^ii corresponding to the pose.
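Steps 331 to 335 can be sketched end-to-end as follows, reusing set_min_distance from above; map_points_near and img_points_near are assumed helpers returning the lane-line discrete points within 20 m ahead of a given pose, and keeping the heading fixed while sampling is a simplification of this sketch:

```python
import random

def fuse_localize(v_pos, map_points_near, img_points_near, t=10):
    """Sample t reference poses around V_pos = (x, y, yaw) and return the
    candidate whose nearby vector-map lane points best match the top-view
    lane points under the summed nearest-neighbour distance."""
    x, y, yaw = v_pos
    candidates = [v_pos]
    for _ in range(t):
        # Randomly arranged within 5 m in front of / behind and 2 m to the
        # left / right of V_pos (longitudinal = x, lateral = y assumed here).
        candidates.append((x + random.uniform(-5.0, 5.0),
                           y + random.uniform(-2.0, 2.0),
                           yaw))
    scores = [set_min_distance(map_points_near(c), img_points_near(c))
              for c in candidates]
    return candidates[scores.index(min(scores))]
```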

Claims (10)

1. A vehicle positioning method, characterized by comprising the following steps:
step 1: collecting road surface image information and vehicle motion information;
step 2: converting the collected road surface image information into a top view by an inverse perspective transformation method; matching the road surface image information in the top view with the vector map to obtain the position of the top view in the vector map;
and step 3: obtaining the position of the vehicle in the vector map according to the position of the vehicle in the top view, thereby obtaining the pose information of the vehicle;
wherein, the step 3 comprises transverse positioning, longitudinal positioning and fusion positioning;
the transverse positioning obtains the transverse offset distance of the vehicle in the lane from the top view; the vehicle motion information acquired by the sensor, the transverse offset distance of the vehicle in the lane and the pose information of the vehicle obtained by fusion positioning are optimized by a first Kalman filter to obtain the state quantities output by the first Kalman filter, which comprise the lateral offset distance of the vehicle in the road, the lateral acceleration of the vehicle, the speed of the vehicle and the yaw angle of the vehicle;
the longitudinal positioning registers the road surface identifications in the top view with the road surface identifications in the vector map to obtain the longitudinal displacement of the vehicle in the vector map; the vehicle motion information acquired by the sensor, the longitudinal displacement of the vehicle in the vector map and the pose information of the vehicle obtained by fusion positioning are optimized by a second Kalman filter to obtain the state quantities output by the second Kalman filter, which are the longitudinal displacement of the vehicle in the vector map, the running speed of the vehicle and the yaw angle of the vehicle;
the fusion localization comprises the following steps:
step 301: obtaining the pose V_pos of the vehicle in the vector map according to the output values of the transverse positioning sub-module and the longitudinal positioning sub-module;
step 302: taking a plurality of reference poses centered on the pose V_pos;
step 303: taking out from the vector map the sets of lane line discrete points near the pose V_pos and near the plurality of reference poses, where S_map^i denotes the set of lane line discrete points near the i-th pose in the vector map, i ∈ (1, 2, …, T+1);
step 304: taking out from the top view the sets of lane line discrete points near the pose V_pos and near the plurality of reference poses, where S_img^ii denotes the set of lane line discrete points near the ii-th pose in the top view;
step 305: separately computing, for each pose, the minimum distance between the set S_map^i and the set S_img^ii, and taking the pose corresponding to the smallest minimum distance as the pose of the current vehicle; the pose of the vehicle comprises the abscissa and the ordinate of the vehicle in the vector map and the heading angle of the vehicle.
2. The vehicle positioning method according to claim 1, characterized in that the method for obtaining the top view in step 2 is as follows: according to the formulas:

tan α = (v - c_y) / f_y

Z_c = h · cos α / sin(α + β)

[x, y, 1]^T = M'^{-1} · Z_c · [u, v, 1]^T

the collected road surface image information is converted into a top view, where x and y are respectively the abscissa value and the ordinate value in the converted top view, h is the height of the camera from the ground, β is the pitch angle of the camera, and α is the deflection angle of a pixel point in the road surface image collected by the camera relative to the focus of the camera; u and v are the abscissa and ordinate of the pixel in the road surface image collected by the camera; f_y and c_y are the longitudinal focal length and longitudinal optical-axis offset in the internal parameter matrix of the camera; M'^{-1} is the inverse transformation of M', where M = I · T, I is the internal parameter matrix of the camera and T is the external parameter matrix from the camera to the vehicle.
3. The vehicle positioning method according to claim 2, characterized in that the method for acquiring the pitch angle β of the camera is as follows: in the initial state, a plurality of pitch angles are selected by taking the measured pitch angle as a reference; top-view conversion is performed with each of them, the lane width in the resulting top view is compared with the lane width in the vector map and the difference between the two is taken; the pitch angle with the minimum difference is the pitch angle used in the top-view conversion calculation.
4. The vehicle positioning method according to claim 3, characterized in that the method for selecting the plurality of pitch angles is as follows: taking the pitch angle of the camera measured in the initial state as the reference pitch angle, 20 pitch angles are taken on each side of the reference pitch angle at equal angular intervals of 0.1°.
5. The vehicle positioning method according to claim 1, characterized in that in the transverse positioning an alarm signal is sent out according to the degree to which the vehicle deviates from the center line of the road: the degree of deviation is measured by the lateral offset |d|, and if |d| ≥ (R_w - V_w) / 2 - s_d, an alarm signal is sent out; d is the lateral offset of the vehicle in the lane, V_w is the width of the vehicle body, R_w is the lane width, and s_d is a safe distance.
6. The vehicle positioning method according to claim 1, characterized in that the method for acquiring the minimum distance between the set S_map^i and the set S_img^ii is as follows: the set with fewer points is selected as the reference set; each point in the reference set is traversed against each point in the set with more points to calculate the distance between the two points, and the minimum distance value corresponding to each point in the reference set is selected, giving the minimum distance set of the reference set; the values in the minimum distance set are added up to obtain the minimum distance between the set S_map^i and the set S_img^ii.
7. A vehicle positioning system, characterized by comprising a data acquisition module, an image conversion matching module and a vehicle positioning module; wherein:
the data acquisition module is used for acquiring vehicle operation data and road surface image information around the vehicle;
the image conversion matching module converts the road surface image information around the vehicle, which is acquired by the data acquisition module, into a top view through an inverse perspective transformation method; matching the road surface image information in the top view with the vector map to obtain the position of the top view in the vector map;
the vehicle positioning module obtains the position of the vehicle in the vector map according to the position of the vehicle in the top view, thereby obtaining the positioning information of the vehicle.
8. The vehicle locating system of claim 7, wherein: the vehicle positioning module comprises a transverse positioning sub-module, a longitudinal positioning sub-module and a fusion positioning sub-module, the transverse positioning sub-module and the longitudinal positioning sub-module input results into the fusion positioning sub-module in real time, and the fusion positioning sub-module outputs the positioning results to a terminal in real time and respectively sends the positioning results to the transverse positioning sub-module and the longitudinal positioning sub-module for optimization; wherein:
the transverse positioning sub-module obtains the transverse offset distance of the vehicle in the lane from the top view; the vehicle motion information acquired by the sensor, the transverse offset distance of the vehicle in the lane and the pose information of the vehicle obtained by fusion positioning are optimized by a first Kalman filter, whose output state quantities comprise the lateral offset distance of the vehicle in the road, the lateral acceleration of the vehicle, the speed of the vehicle and the yaw angle of the vehicle;
the longitudinal positioning sub-module registers the road surface identifications in the top view with the road surface identifications in the vector map to obtain the longitudinal displacement of the vehicle in the vector map; the vehicle motion information acquired by the sensor, the longitudinal displacement of the vehicle in the vector map and the pose information of the vehicle obtained by fusion positioning are optimized by a second Kalman filter, whose output state quantities are the longitudinal displacement of the vehicle in the vector map, the running speed of the vehicle and the yaw angle of the vehicle;
the positioning method of the fusion positioning submodule comprises the following steps:
obtaining the pose V_pos of the vehicle in the vector map according to the output values of the transverse positioning sub-module and the longitudinal positioning sub-module;
taking a plurality of reference poses centered on the pose V_pos;
taking out from the vector map the sets of lane line discrete points near the pose V_pos and near the plurality of reference poses, where S_map^i denotes the set of lane line discrete points near the i-th pose in the vector map, i ∈ (1, 2, …, T+1);
taking out from the top view the sets of lane line discrete points near the pose V_pos and near the plurality of reference poses, where S_img^ii denotes the set of lane line discrete points near the ii-th pose in the top view;
separately computing, for each pose, the minimum distance between the set S_map^i and the set S_img^ii, and taking the pose corresponding to the smallest minimum distance as the pose of the current vehicle; the pose of the vehicle comprises the abscissa and the ordinate of the vehicle in the vector map and the heading angle of the vehicle.
9. The vehicle positioning system of claim 8, characterized in that the method for acquiring the minimum distance between the set S_map^i and the set S_img^ii is as follows: the set with fewer points is selected as the reference set; each point in the reference set is traversed against each point in the set with more points to calculate the distance between the two points, and the minimum distance value corresponding to each point in the reference set is selected, giving the minimum distance set of the reference set; the values in the minimum distance set are added up to obtain the minimum distance between the set S_map^i and the set S_img^ii.
10. The vehicle locating system of claim 7, wherein: the data acquisition module comprises a camera, the camera is arranged at the top of the vehicle, and the pitch angle and the roll angle of the camera are both 0.
CN202111137868.2A 2021-09-27 2021-09-27 Vehicle positioning method and system Pending CN113869203A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111137868.2A CN113869203A (en) 2021-09-27 2021-09-27 Vehicle positioning method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111137868.2A CN113869203A (en) 2021-09-27 2021-09-27 Vehicle positioning method and system

Publications (1)

Publication Number Publication Date
CN113869203A true CN113869203A (en) 2021-12-31

Family

ID=78991422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111137868.2A Pending CN113869203A (en) 2021-09-27 2021-09-27 Vehicle positioning method and system

Country Status (1)

Country Link
CN (1) CN113869203A (en)

Similar Documents

Publication Publication Date Title
Ghallabi et al. LIDAR-Based road signs detection For Vehicle Localization in an HD Map
Rose et al. An integrated vehicle navigation system utilizing lane-detection and lateral position estimation systems in difficult environments for GPS
CN112189225B (en) Lane line information detection apparatus, method, and computer-readable recording medium storing computer program programmed to execute the method
CN110745140B (en) Vehicle lane change early warning method based on continuous image constraint pose estimation
JP4232167B1 (en) Object identification device, object identification method, and object identification program
KR101454153B1 (en) Navigation system for unmanned ground vehicle by sensor fusion with virtual lane
CN107422730A (en) The AGV transportation systems of view-based access control model guiding and its driving control method
CN109946732A (en) A kind of unmanned vehicle localization method based on Fusion
CN112166059A (en) Position estimation device for vehicle, position estimation method for vehicle, and computer-readable recording medium storing computer program programmed to execute the method
JP4978615B2 (en) Target identification device
JP2023021098A (en) Map construction method, apparatus, and storage medium
Shunsuke et al. GNSS/INS/on-board camera integration for vehicle self-localization in urban canyon
US10907972B2 (en) 3D localization device
Kellner et al. Road curb detection based on different elevation mapping techniques
WO2022147924A1 (en) Method and apparatus for vehicle positioning, storage medium, and electronic device
CN111426320A (en) Vehicle autonomous navigation method based on image matching/inertial navigation/milemeter
Wiest et al. Localization based on region descriptors in grid maps
US20210240195A1 (en) Systems and methods for utilizing images to determine the position and orientation of a vehicle
JP2022027593A (en) Positioning method and device for movable equipment, and movable equipment
CN113252051A (en) Map construction method and device
Hoang et al. 3D motion estimation based on pitch and azimuth from respective camera and laser rangefinder sensing
CN114413958A (en) Monocular vision distance and speed measurement method of unmanned logistics vehicle
CN115265493A (en) Lane-level positioning method and device based on non-calibrated camera
CN115079143A (en) Multi-radar external parameter rapid calibration method and device for double-axle steering mine card
US20220155455A1 (en) Method and system for ground surface projection for autonomous driving

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 210012 room 401-404, building 5, chuqiaocheng, No. 57, Andemen street, Yuhuatai District, Nanjing, Jiangsu Province

Applicant after: AUTOCORE INTELLIGENT TECHNOLOGY (NANJING) Co.,Ltd.

Address before: 211800 building 12-289, 29 buyue Road, Qiaolin street, Pukou District, Nanjing City, Jiangsu Province

Applicant before: AUTOCORE INTELLIGENT TECHNOLOGY (NANJING) Co.,Ltd.

CB02 Change of applicant information