CN114001742B - Vehicle positioning method, device, vehicle and readable storage medium - Google Patents

Vehicle positioning method, device, vehicle and readable storage medium

Info

Publication number
CN114001742B
CN114001742B
Authority
CN
China
Prior art keywords
pose
data
window
sliding window
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111226894.2A
Other languages
Chinese (zh)
Other versions
CN114001742A (en)
Inventor
高峻
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xiaopeng Autopilot Technology Co Ltd
Original Assignee
Guangzhou Xiaopeng Autopilot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xiaopeng Autopilot Technology Co Ltd filed Critical Guangzhou Xiaopeng Autopilot Technology Co Ltd
Priority to CN202111226894.2A priority Critical patent/CN114001742B/en
Publication of CN114001742A publication Critical patent/CN114001742A/en
Application granted granted Critical
Publication of CN114001742B publication Critical patent/CN114001742B/en

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00: Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38: Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39: Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system, the system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42: Determining position
    • G01S19/48: Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)
  • Position Fixing By Use Of Radio Waves (AREA)

Abstract

The application provides a vehicle positioning method, a vehicle positioning device, a vehicle and a readable storage medium. The method comprises: initializing a first window of a data sliding window with current data of a target data source and determining a fusion pose of the first window, where the target data source comprises the vehicle's combined navigation pose, dead reckoning pose and lane line perception data; letting a current window holding the latest target data source data enter the data sliding window, and predicting the fusion pose of the current window from the fusion pose of the previous window; and updating the fusion pose of each window in the data sliding window based on each target data source in the data sliding window, and outputting the positioning result of the vehicle according to the fusion poses in the data sliding window when the window number of the data sliding window meets a preset condition. By combining the historical and current data of multiple data sources through the sliding window and updating the pose multiple times, good positioning accuracy and robustness are achieved.

Description

Vehicle positioning method, device, vehicle and readable storage medium
Technical Field
The application relates to the technical field of vehicle positioning, in particular to a vehicle positioning method, a vehicle positioning device, a vehicle and a readable storage medium.
Background
Robust lane-level positioning has long been the goal of high-precision positioning for autonomous driving systems. Conventional positioning methods include combined navigation positioning based on GNSS (Global Navigation Satellite System), the IMU (Inertial Measurement Unit) and the wheel speed meter; dead reckoning positioning based on the IMU and the wheel speed meter; and fusion positioning based on camera vision and high-precision maps. However, these methods all suffer from limited positioning accuracy: for example, the track may jump abruptly when GNSS signal quality degrades, dead reckoning accumulates error over time, and positioning errors occur when the map is not updated in time. It is therefore necessary to provide a vehicle positioning method with good positioning accuracy and robustness.
Disclosure of Invention
The application provides a vehicle positioning method, a vehicle positioning device, a vehicle and a readable storage medium, which combine the historical and current data of multiple data sources through a sliding window to update the pose multiple times, achieving good positioning accuracy and robustness.
In one aspect, the present application provides a vehicle positioning method, comprising the steps of:
S1: in the running process of a vehicle, current data of a target data source is obtained at the time t, wherein the target data source comprises a dead reckoning pose of the vehicle, lane line sensing data and a combined navigation pose based on satellite positioning data;
S2: initializing a first window of a data sliding window by adopting the current data at the time t, determining an initial pose according to the current data to serve as a fusion pose of the first window, wherein the maximum window number of the data sliding window is larger than 1;
S3: a current window with a latest acquired target data source enters the data sliding window, and the fusion pose of the current window is predicted according to the fusion pose of the previous window;
S4: updating the fusion pose of each window in the data sliding window based on each target data source in the data sliding window, and outputting the positioning result of the vehicle according to the fusion pose in the data sliding window when the window number of the data sliding window meets the preset condition.
Optionally, the determining an initial pose according to the current data to serve as a fusion pose of the first window includes:
S21: determining the dead reckoning pose at the time t or the combined navigation pose based on satellite positioning data as the initial pose, to serve as the fusion pose of the first window.
Optionally, the step S3 includes:
S31: a current window with a latest acquired target data source enters the data sliding window, and the relation between the dead reckoning pose of the current window and the dead reckoning pose of the previous window is determined;
S32: predicting the fusion pose of the current window according to the relation and the fusion pose of the previous window.
Optionally, the updating the fusion pose of each window in the data sliding window based on each target data source in the data sliding window includes:
S41: acquiring fusion poses of all windows in the data sliding window as poses to be fused;
S42: correcting the pose to be fused based on the relative transformation relation between the combined navigation pose based on satellite positioning data and the pose to be fused in the data sliding window, and the relative transformation relation between the lane line perception data and the map lane line data, so as to update the fusion pose of each window in the data sliding window.
Optionally, the step S42 includes:
S421: determining parameters for correcting the pose to be fused based on the relative transformation relation between the combined navigation pose based on satellite positioning data and the pose to be fused in the data sliding window and the relative transformation relation between the lane line perception data and the map lane line data;
S422: correcting the pose to be fused according to the parameters so as to update the fusion pose of each window in the data sliding window.
Optionally, the step S421 includes:
S4211: matching the lane line perception data with the map lane line data to obtain a first rigid body transformation parameter; and performing a rigid body transformation from the pose to be fused to the combined navigation pose based on the satellite positioning data to obtain a second rigid body transformation parameter;
S4212: calculating a third rigid body transformation parameter as the parameter for correcting the pose to be fused, according to the first rigid body transformation parameter, the second rigid body transformation parameter and their weights.
Optionally, before the step S4212, the method further includes:
S4213: determining the weight of the first rigid body transformation parameter according to the error between the lane line perception data and the map lane line data; and determining the weight of the second rigid body transformation parameter according to the error between the result of the rigid body transformation from the pose to be fused to the combined navigation pose based on the satellite positioning data and that combined navigation pose.
Optionally, the outputting the positioning result of the vehicle according to the fusion pose in the data sliding window includes:
Outputting a positioning result of the vehicle according to the fusion pose of the current window predicted in step S3, and/or outputting the positioning result of the vehicle according to the updated fusion pose.
Optionally, the outputting the positioning result of the vehicle according to the updated fusion pose includes:
Outputting the updated fusion pose of the current window as a positioning result of the vehicle; and/or
outputting the updated fusion pose of the current first window of the data sliding window as a positioning result of the vehicle.
Optionally, after step S4, the method further includes:
S5: after the positioning result of the vehicle is output, judging whether the pose to be fused in the data sliding window has gone uncorrected for more than a preset time length or a preset distance;
S6: if yes, correcting the pose to be fused according to the relative transformation relation between the combined navigation pose based on satellite positioning data and the pose to be fused, so as to update the fusion pose of each window in the data sliding window; or resetting the pose to be fused in the data sliding window to the dead reckoning pose or the combined navigation pose based on satellite positioning data, and returning to step S3.
The application also provides a vehicle positioning device, comprising:
the data acquisition unit is used for acquiring current data of a target data source at the time t in the running process of the vehicle, wherein the target data source comprises a dead reckoning pose of the vehicle, lane line perception data and a combined navigation pose based on satellite positioning data;
The initialization unit is used for initializing a first window of the data sliding window by adopting the current data at the moment t, determining an initial pose according to the current data to serve as a fusion pose of the first window, and the maximum window number of the data sliding window is larger than 1;
The fusion pose prediction unit is used for predicting the fusion pose of the current window according to the fusion pose of the previous window after the current window with the latest acquired target data source enters the data sliding window;
And the updating and outputting unit is used for updating the fusion pose of each window in the data sliding window based on each target data source in the data sliding window, and outputting the positioning result of the vehicle according to the fusion pose in the data sliding window when the window number of the data sliding window meets the preset condition.
The present application also provides a vehicle including: the vehicle positioning system comprises a memory and a processor, wherein a processing program is stored in the memory, and the processing program realizes the steps of the vehicle positioning method when being executed by the processor. The present application also provides a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the vehicle locating method as described above.
As described above, the present application provides a vehicle positioning method, apparatus, vehicle and readable storage medium, the method comprising: initializing a first window of a data sliding window with current data of a target data source and determining a fusion pose of the first window, where the target data source comprises the vehicle's combined navigation pose, dead reckoning pose and lane line perception data; letting a current window holding the latest target data source data enter the data sliding window, and predicting the fusion pose of the current window from the fusion pose of the previous window; and updating the fusion pose of each window in the data sliding window based on each target data source in the data sliding window, and outputting the positioning result of the vehicle according to the fusion poses in the data sliding window when the window number of the data sliding window meets a preset condition. By combining the historical and current data of multiple data sources through the sliding window and updating the pose multiple times, good positioning accuracy and robustness are achieved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flowchart illustration of a vehicle positioning method according to a first embodiment of the present application.
Fig. 2 is a schematic diagram illustrating a moving process of a data sliding window according to a first embodiment of the present application.
Fig. 3 is a flowchart showing a vehicle positioning method according to a first embodiment of the present application.
Fig. 4 is a schematic diagram of acquiring a first rigid transformation parameter according to a first embodiment of the present application.
Fig. 5 is a schematic diagram of acquiring a second rigid transformation parameter according to the first embodiment of the present application.
Fig. 6 is a flowchart of a vehicle positioning method according to a second embodiment of the present application.
Fig. 7 is a schematic structural view of a vehicle positioning device according to a fourth embodiment of the present application.
Fig. 8 is a schematic structural view of a vehicle according to a fifth embodiment of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments. Specific embodiments of the present application have been shown by way of the above drawings and will be described in more detail below. The drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but rather to illustrate the inventive concepts to those skilled in the art by reference to the specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the accompanying claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises it. Furthermore, elements having the same name in different embodiments of the application may have the same or different meanings; the particular meaning is determined by its interpretation in, or the context of, the specific embodiment. In this document, step numbers such as S1 and S2 are used to describe the corresponding contents more clearly and briefly and do not constitute a substantive limitation on their order; those skilled in the art may, for example, perform S2 before S1 when implementing the present application, and such variations remain within the scope of protection of the present application.
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
First embodiment
Fig. 1 is a flowchart illustration of a vehicle positioning method according to a first embodiment of the present application. As shown in fig. 1, the vehicle positioning method of the present application includes the steps of:
S1: In the running process of the vehicle, current data of a target data source is acquired at the time t, wherein the target data source comprises the dead reckoning pose of the vehicle, lane line sensing data and a combined navigation pose based on satellite positioning data;
Optionally, the time t is a moment during vehicle travel; it may be, for example, the moment at which the vehicle has traveled 100 meters or has been driving for one minute. After the vehicle starts to run, the dead reckoning pose of the vehicle, the lane line sensing data and the combined navigation pose based on satellite positioning data can be calculated, and obtaining the current data at time t marks the start of pose fusion, that is, pose correction.
The combined navigation pose based on satellite positioning data is the output of positioning based on GNSS, the IMU and the wheel speed meter, where the position in the pose information is the three-dimensional position in space and the attitude is the three-dimensional rotation information. The dead reckoning (DR) pose is the output of dead reckoning based on the IMU and the wheel speed meter, without using satellite positioning data. The lane line perception data are lane line data obtained from the vehicle's visual perception, stitched together using the vehicle poses.
Optionally, when the lane line sensing data is acquired, at least one frame of image shot by the vehicle-mounted camera is acquired first; there may be several vehicle-mounted cameras, in which case the wide-view-angle image obtained by seamlessly stitching images shot from multiple angles can be regarded as one frame of image. The images are then transformed to a top-down (bird's-eye) view so that the scale across images is consistent and the contrast is clear, which facilitates the subsequent extraction of lane line data. Because the camera pose changes with the pose of the vehicle, each frame of image corresponds to a different vehicle pose. The vehicle pose can be obtained by fusing sensor information such as the global navigation satellite system, the inertial measurement unit and the wheel speed meter; in this embodiment, dead reckoning based on the IMU and the wheel speed meter is preferred, since the dead reckoning pose has small local error and therefore improves the accuracy of the lane line stitching result. After the vehicle pose corresponding to each frame of image is determined, the images are stitched in order along the road direction according to the pose information, yielding a road image over a certain distance. Finally, visual perception processing is performed on the road image to obtain the visual perception information of the road, including the lane line perception data.
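To make the stitching step concrete, the following minimal Python sketch accumulates detected lane points into a common frame using dead reckoning poses. Here to_birds_eye_view and detect_lane_points are hypothetical placeholders for the top-down image transform and the visual lane detector; the patent does not name these functions.

```python
import numpy as np

def stitch_lane_lines(frames, dr_poses, to_birds_eye_view, detect_lane_points):
    """Accumulate lane-line points in one common frame using DR poses.

    frames   : list of camera images, one per timestamp
    dr_poses : list of (R, t) dead reckoning poses; R is 3x3, t is a 3-vector
    """
    stitched = []
    for frame, (R, t) in zip(frames, dr_poses):
        bev = to_birds_eye_view(frame)      # top-down view with consistent scale
        pts = detect_lane_points(bev)       # Nx3 lane points in the vehicle frame
        stitched.append(pts @ R.T + t)      # transform into the common frame
    return np.vstack(stitched)              # the lane line perception data
```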
S2: initializing a first window of a data sliding window by adopting current data at the moment t, determining an initial pose according to the current data to serve as a fusion pose of the first window, wherein the maximum window number of the data sliding window is greater than 1;
Optionally, the maximum window number of the data sliding window is greater than 1, and each window corresponds to the data of the target data source over a certain distance or period, for example 2 meters or 2 seconds; the maximum window number may be, for example, 30. As the data sliding window moves with the vehicle, new data enters the data sliding window and the data already in it becomes historical data. Once the window number reaches the maximum, each new window entering the data sliding window causes the oldest window to be deleted, so the data sliding window keeps moving while holding the maximum number of windows.
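A minimal sketch of such a bounded sliding window follows, assuming each window simply stores the three target data sources plus the fusion pose; the field names and the maximum of 30 windows are illustrative, not mandated by the patent.

```python
from collections import deque

class SlidingWindow:
    """FIFO buffer of windows; the oldest window is evicted once the buffer is full."""
    def __init__(self, max_windows=30):       # e.g. one window per 2 m or 2 s
        self.windows = deque(maxlen=max_windows)

    def push(self, dr_pose, lane_obs, gnss_pose, fused_pose=None):
        self.windows.append({
            "dr": dr_pose,        # dead reckoning pose
            "lane": lane_obs,     # lane line perception data
            "gnss": gnss_pose,    # combined navigation pose
            "fused": fused_pose,  # fusion pose (predicted, then updated)
        })

    def is_full(self):
        return len(self.windows) == self.windows.maxlen
```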
Optionally, in step S2, determining an initial pose according to the current data to serve as a fusion pose of the first window includes:
S21: and determining the dead reckoning pose at the time t or the combined navigation pose based on satellite positioning data as an initial pose to serve as a fusion pose of a first window.
Optionally, each window in the data sliding window contains the dead reckoning pose of the vehicle, the lane line sensing data and the combined navigation pose based on satellite positioning data, as well as a fusion pose, which is the pose obtained by fusing the target data sources. For the first window, accurate parameters for pose fusion are not yet available, so the fusion pose may be initialized with the dead reckoning pose or the combined navigation pose based on satellite positioning data. In this embodiment, the combined navigation pose based on satellite positioning data at time t is preferably used as the fusion pose of the first window, since it guarantees the accuracy of the initial fusion pose over a large scale.
S3: the current window with the latest acquired target data source enters a data sliding window, and the fusion pose of the current window is predicted according to the fusion pose of the previous window;
Optionally, after the time t, the current window with the current data of the latest acquired target data source sequentially enters the data sliding window along with the movement of the vehicle, and the fusion pose of the current window is predicted by the fusion pose of the previous window. In this embodiment, step S3 includes:
S31: a current window with the latest acquired target data source enters the data sliding window, and the relation between the dead reckoning pose of the current window and that of the previous window is determined;
S32: predicting the fusion pose of the current window according to the relation between the dead reckoning poses of the current window and the previous window, and the fusion pose of the previous window.
Because the dead reckoning pose has good local accuracy, the fusion pose of the current window is predicted from the relation between the dead reckoning poses of the current window and the previous window, so the fusion pose of the current window retains the good local behavior of the previous window's fusion pose and the prediction accuracy is improved. Optionally, the fusion pose of a window may be represented by {p, q}, where p is the position and q is the orientation quaternion. Writing the dead reckoning pose as {p_dr, q_dr}, the prediction from window k to window k+1 is:

q_{k+1} = q_k ⊗ (q_dr_k)^(-1) ⊗ q_dr_{k+1}

p_{k+1} = p_k + R(q_k) · R(q_dr_k)^T · (p_dr_{k+1} - p_dr_k)

where ⊗ denotes quaternion multiplication and R(q) is the rotation matrix corresponding to the quaternion q.
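A short Python sketch of this prediction step using SciPy's rotation class is given below; the function name and argument layout are assumptions for illustration only.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def predict_fused_pose(p_k, q_k, p_dr_k, q_dr_k, p_dr_k1, q_dr_k1):
    """Propagate the fusion pose {p_k, q_k} of window k to window k+1 using the
    relative motion between consecutive dead reckoning poses.
    Quaternions use SciPy's (x, y, z, w) convention."""
    r_k = R.from_quat(q_k)
    r_dr_k, r_dr_k1 = R.from_quat(q_dr_k), R.from_quat(q_dr_k1)
    # Relative DR motion expressed in the frame of window k
    dp = r_dr_k.inv().apply(np.asarray(p_dr_k1) - np.asarray(p_dr_k))
    dq = r_dr_k.inv() * r_dr_k1
    # Apply the same relative motion to the fused pose
    p_k1 = np.asarray(p_k) + r_k.apply(dp)
    q_k1 = (r_k * dq).as_quat()
    return p_k1, q_k1
```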
S4: updating the fusion pose of each window in the data sliding window based on each target data source in the data sliding window, and outputting the positioning result of the vehicle according to the fusion pose in the data sliding window when the window number of the data sliding window meets the preset condition.
The preset condition on the window number of the data sliding window includes, but is not limited to, the window number being equal to the maximum window number, or being equal to a preset number smaller than the maximum window number; in this embodiment the positioning result of the vehicle is preferably output when the window number equals the maximum window number, for better positioning accuracy. Optionally, after the fusion pose of the current window is predicted from the fusion pose of the previous window, every window of the data sliding window has a fusion pose; at this point the fusion poses of the windows are further updated according to each target data source in the data sliding window to complete one correction of the poses. The method then returns to step S3 to keep predicting for newly entering windows and to update the fusion pose of the whole data sliding window again, updating the fusion poses many times as the data sliding window moves, until the window number meets the preset condition and the positioning result is output. In this way, as the data sliding window moves, the fusion pose of each window can be corrected many times according to the current and historical data of the multiple target data sources, effectively improving accuracy and robustness.
The moving process of the data sliding window and the prediction and update of the fusion pose are described below with reference to fig. 2. As shown in fig. 2 (a), the first window of the data sliding window contains the dead reckoning pose of the vehicle, the lane line sensing data, the combined navigation pose based on satellite positioning data, and the fusion pose; for this first window, accurate parameters for pose fusion are not yet available, so the fusion pose is initialized with the combined navigation pose based on satellite positioning data or with the dead reckoning pose. As the data sliding window moves, as shown in fig. 2 (b), the second window enters the data sliding window. It likewise contains the dead reckoning pose, the lane line sensing data and the combined navigation pose, except that its fusion pose is predicted from the fusion pose of the previous window; the fusion pose of each window in the data sliding window is then updated based on step S4, completing one correction of the vehicle pose, so the fusion poses of the first and second windows have each accumulated one correction. As the data sliding window continues to move, as shown in fig. 2 (c), the k-1 window and the k window successively enter the data sliding window by the same process. When the k window enters, the fusion pose of the k-1 window has already been corrected k-2 times; the fusion pose of the k window is predicted from the updated fusion pose of the k-1 window, and the fusion pose of each window is then updated based on step S4, completing the (k-1)-th correction of the vehicle pose. The fusion poses of the first and second windows have thus accumulated k-1 corrections, the k window has accumulated one, and the number of accumulated corrections decreases window by window toward the k window. As the data sliding window moves further, as shown in fig. 2 (d), the k+1 window enters the data sliding window; assuming the maximum window number of the data sliding window is k, the first window leaves the data sliding window and is deleted, and the fusion pose of the k+1 window is predicted and the fusion pose of the whole data sliding window is updated based on steps S3 to S4. In this way, as the vehicle moves, the data sliding window also moves continuously, so the fusion pose of each window can be corrected many times using the current and historical data of the multiple target data sources.
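The flow just described can be summarized in a brief sketch; it reuses the SlidingWindow buffer and prediction function from the sketches above, with predict, update_all and output passed in as placeholders for steps S3, S4 and the output stage.

```python
def positioning_loop(window_stream, sw, predict, update_all, output):
    """sw: a SlidingWindow; window_stream yields one dict of window data at a time."""
    for data in window_stream:
        if not sw.windows:                      # S2: initialize the first window
            data["fused"] = data["gnss"]        # e.g. the combined navigation pose
        else:                                   # S3: predict from the previous window
            data["fused"] = predict(sw.windows[-1], data)
        sw.windows.append(data)                 # oldest window evicted when full
        update_all(sw)                          # S4: correct every window's fusion pose
        if sw.is_full():                        # preset condition on the window number
            output(sw.windows[-1]["fused"])     # positioning result of the vehicle
```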
Optionally, in step S4, outputting the positioning result of the vehicle according to the fusion pose in the data sliding window includes: outputting the positioning result of the vehicle according to the predicted fusion pose of the current window, and/or outputting the positioning result of the vehicle according to the updated fusion pose. Positioning results meeting different requirements can thus be output. In this embodiment the positioning result is output according to the predicted fusion pose of the current window, so that once the window number meets the preset condition the result can be output from the predicted data without waiting for the fusion pose update to finish, giving better real-time performance. Referring to fig. 1 and 3, in the specific flow of this embodiment, steps S1 to S3 in fig. 3 are the same as steps S1 to S3 in fig. 1 and are not repeated; step S4 includes:
S40: updating the fusion pose of each window in the data sliding window based on each target data source in the data sliding window, and returning to the step S3;
S43: judging whether the number of windows is equal to the maximum number of windows;
S44: if the window number of the data sliding window is equal to the maximum window number, outputting a positioning result of the vehicle according to the predicted fusion pose of the current window.
Optionally, the step S40 updates the fusion pose of each window in the data sliding window, which specifically includes:
S41: acquiring fusion poses of all windows in the data sliding window as poses to be fused;
S42: correcting the pose to be fused based on the relative transformation relation between the combined navigation pose based on satellite positioning data and the pose to be fused in the data sliding window, and the relative transformation relation between the lane line perception data and the map lane line data, so as to update the fusion pose of each window in the data sliding window.
Optionally, step S42 includes:
S421: determining parameters for correcting the pose to be fused, based on the relative transformation relation between the combined navigation pose based on satellite positioning data and the pose to be fused in the data sliding window, and the relative transformation relation between the lane line perception data and the map lane line data;
S422: correcting the pose to be fused according to the parameters so as to update the fusion pose of each window in the data sliding window.
Optionally, step S421 includes:
S4211: matching the lane line perception data with the map lane line data to obtain a first rigid body transformation parameter; and performing a rigid body transformation from the pose to be fused to the combined navigation pose based on the satellite positioning data to obtain a second rigid body transformation parameter;
S4212: calculating a third rigid body transformation parameter, used as the parameter for correcting the pose to be fused, from the first rigid body transformation parameter, the second rigid body transformation parameter and their weights.
Optionally, the map lane line data is the lane line data corresponding to the road area of the data sliding window in the high-precision map; a lane line in the map is composed of a series of feature points, whose coordinates make up the lane line data. The map lane line data and the lane line perception data can be matched and fused in the same coordinate system using a matching-and-fusion algorithm such as ICP (Iterative Closest Point). In the ICP algorithm, the closest feature points between the target point cloud and the source point cloud are found at the same physical scale under certain constraints, and the optimal matching parameters (a rotation and a translation) are then computed so that the error function is minimized. Referring to fig. 4, when the lane line sensing data and the map lane line data are matched by ICP point cloud registration, the lane line sensing data, having been stitched using the dead reckoning poses, is bound to the dead reckoning pose (the track shown by triangles in fig. 4); after alignment, the ICP match also yields a new pose (the track shown by circles in fig. 4). The relative transformation between the lane line sensing data and the map lane line data is therefore equivalent to a transformation between poses, so the first rigid body transformation parameter T_cp of the pose can be obtained from the ICP matching result. Referring to fig. 5, the pose to be fused consists of the fusion poses of all windows in the data sliding window; performing a rigid body transformation from the pose to be fused to the combined navigation pose based on satellite positioning data yields the second rigid body transformation parameter T_loc.
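For reference, the core of each ICP iteration, the least-squares rigid transform between already-matched point sets, can be sketched with the standard SVD (Kabsch) solution; the patent does not prescribe this particular implementation.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares (R, t) mapping matched source points onto target points."""
    src, dst = np.asarray(src), np.asarray(dst)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                   # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(H.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    Rot = Vt.T @ D @ U.T
    t = cd - Rot @ cs
    return Rot, t

# A full ICP loop alternates: match nearest neighbors, solve for (R, t),
# re-match, and repeat until the error function stops decreasing.
```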
Optionally, based on T_cp and T_loc, fusion is performed on the Lie algebra se(3) to obtain a third rigid body transformation parameter, as follows:

T_fuse = exp{ α·log(T_cp) + (1 - α)·log(T_loc) }

where T_fuse is the third rigid body transformation parameter; the logarithmic and exponential maps convert between the special Euclidean group SE(3) and its corresponding Lie algebra se(3); and α ∈ [0, 1] is a weight determined by the errors of T_cp and T_loc.
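A compact numerical sketch of this fusion treats T_cp and T_loc as 4x4 homogeneous matrices and uses generic matrix log/exp in place of the closed-form SE(3) maps; this substitution is an assumption made for brevity, and a production implementation would use the closed-form formulas.

```python
import numpy as np
from scipy.linalg import expm, logm

def fuse_se3(T_cp, T_loc, alpha):
    """T_fuse = exp{alpha*log(T_cp) + (1 - alpha)*log(T_loc)} on 4x4 transforms."""
    xi = alpha * logm(T_cp) + (1.0 - alpha) * logm(T_loc)  # blend on se(3)
    return np.real(expm(xi))  # drop tiny imaginary residue left by logm
```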
Optionally, to determine the weights of T_cp and T_loc, before step S4212, the method further comprises:
S4213: determining the weight of the first rigid body transformation parameter according to the error between the lane line perception data and the map lane line data; and determining the weight of the second rigid body transformation parameter according to the error between the result of the rigid body transformation from the pose to be fused to the combined navigation pose based on satellite positioning data and that combined navigation pose.
A certain error exists between the lane line perception data and the map lane line data, caused for example by perception and recognition errors, errors introduced while stitching the perceived lane lines, or map errors; this error can be determined when the lane line perception data and the map lane line data are aligned. Likewise, when the pose to be fused is rigidly transformed to the combined navigation pose based on satellite positioning data, the transformation result does not necessarily coincide exactly with that combined navigation pose, so an error exists there as well. The weights of the first and second rigid body transformation parameters can therefore be determined from their respective errors: the result with the larger error receives the smaller weight, and the result with the smaller error receives the larger weight. In this way the third rigid body transformation parameter is loosely coupled to the first and second rigid body transformation parameters, the correction parameters can be adjusted flexibly, and robustness is improved.
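The patent fixes only the qualitative rule (a larger error yields a smaller weight); one simple scheme consistent with it, shown purely as an illustration, is inverse-error normalization:

```python
def alpha_from_errors(err_cp, err_loc, eps=1e-9):
    """Weight for T_cp in (0, 1); the source with the smaller error dominates."""
    w_cp, w_loc = 1.0 / (err_cp + eps), 1.0 / (err_loc + eps)
    return w_cp / (w_cp + w_loc)
```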
Through the above process, as the data sliding window moves, the vehicle pose can be corrected in real time using multiple data sources, fully combining their respective strengths: the combined navigation pose based on satellite positioning data is accurate at large scale, the dead reckoning pose is locally accurate, and the lane line sensing data is highly accurate once fused with the map data, which improves both positioning accuracy and robustness.
With continued reference to fig. 3, when the window number of the data sliding window equals the maximum window number, the positioning result of the vehicle may be output according to the fusion pose of the current window predicted in step S3; the positioning result includes the trajectory and the vehicle heading angle, meeting the real-time requirement of vehicle positioning. For example, after the fusion pose of the k window is predicted, the vehicle positioning result can be output without waiting for the update of the fusion pose of the k window to finish, giving better real-time performance. Although the fusion pose of the current window is then only a predicted result, the fusion pose of the previous window used for the prediction is a corrected result, so the accuracy requirement is met and robustness remains good. After the positioning result is output, the method returns to step S3 to keep predicting for windows entering the data sliding window and updating the fusion pose of the whole data sliding window.
Optionally, when the number of windows is not equal to the maximum number of windows, that is, when the number of windows does not reach the maximum number of windows, returning to the step S3, and continuing to predict the window entering the data sliding window later and updating the fusion pose of the whole data sliding window.
Optionally, the method of this embodiment, after step S4, further includes:
S5: after the positioning result of the vehicle is output, judging whether the pose to be fused in the data sliding window has gone uncorrected for more than a preset time length or a preset distance;
S6: if yes, correcting the pose to be fused according to the relative transformation relation between the combined navigation pose based on satellite positioning data and the pose to be fused, so as to update the fusion pose of each window in the data sliding window; or resetting the pose to be fused in the data sliding window to the dead reckoning pose or the combined navigation pose based on satellite positioning data, and returning to step S3.
After the positioning result of the vehicle is output, the pose to be fused in the data sliding window may go uncorrected for more than the preset time length or the preset distance, for example because of equipment or user reasons. In that case, the pose to be fused can be corrected using only the relative transformation relation between the combined navigation pose based on satellite positioning data and the pose to be fused, which prevents the accumulated error of the dead reckoning pose, carried into the lane line perception data, from degrading the correction result. Alternatively, the pose to be fused in the data sliding window can be reset to the dead reckoning pose or the combined navigation pose based on satellite positioning data, and the method returns to step S3 to restart correction. In this way, when the pose to be fused has not been corrected for a long time, it can be brought back to an accurate result in time.
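A minimal sketch of this fallback check follows; the thresholds and class layout are illustrative assumptions, not values from the patent.

```python
import time

class CorrectionWatchdog:
    """Tracks how long / how far the fused poses have gone without correction."""
    def __init__(self, max_seconds=10.0, max_meters=200.0):
        self.max_seconds, self.max_meters = max_seconds, max_meters
        self.last_time, self.dist = time.monotonic(), 0.0

    def on_correction(self):                 # call whenever step S42 succeeds
        self.last_time, self.dist = time.monotonic(), 0.0

    def add_distance(self, meters):          # call as the vehicle moves
        self.dist += meters

    def needs_fallback(self):
        # True -> correct with the combined navigation pose only, or reset the
        # window poses and return to step S3
        return (time.monotonic() - self.last_time > self.max_seconds
                or self.dist > self.max_meters)
```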
Second embodiment
Fig. 6 is a flowchart of a vehicle positioning method according to a second embodiment of the present application. As shown in fig. 6, the vehicle positioning method of the present application includes the steps of:
S1: in the running process of the vehicle, current data of a target data source is acquired at the time t, wherein the target data source comprises the dead reckoning pose of the vehicle, lane line sensing data and a combined navigation pose based on satellite positioning data;
S2: initializing a first window of a data sliding window by adopting current data at the moment t, determining an initial pose according to the current data to serve as a fusion pose of the first window, wherein the maximum window number of the data sliding window is greater than 1;
S3: a current window formed by the latest acquired current data enters a data sliding window, and the fusion pose of the current window is predicted according to the fusion pose of the previous window;
S40: updating the fusion pose of each window in the data sliding window based on each target data source in the data sliding window, and returning to the step S3;
S45: judging whether the number of windows meets preset conditions or not;
S46: if the window number of the data sliding window meets the preset condition, outputting the positioning result of the vehicle according to the updated fusion pose.
In this embodiment, the preset condition on the window number of the data sliding window is preferably that the window number equals the maximum window number. Unlike the first embodiment, this embodiment outputs the positioning result of the vehicle according to the updated fusion pose, that is, the fusion pose updated in step S40. After outputting the positioning result, the method returns to step S3 to keep predicting for windows entering the data sliding window and updating the fusion pose of the whole data sliding window. Optionally, if the window number does not meet the preset condition, that is, has not reached the maximum window number, the method likewise returns to step S3 and continues predicting and updating.
The implementation process of steps S1 to S40 is the same as that of steps S1 to S40 in the first embodiment, and will not be described here again.
Optionally, in step S46, outputting a positioning result of the vehicle according to the updated fusion pose, including:
outputting the updated fusion pose of the current window as a positioning result of the vehicle; and/or
outputting the updated fusion pose of the current first window of the data sliding window as a positioning result of the vehicle.
Unlike the first embodiment, the positioning result output by this embodiment is based on the updated fusion pose. For example, after the fusion pose of the k window is predicted, the fusion pose of the whole data sliding window is updated and the fusion pose of the k window is then output; at this point the fusion pose of the k window has been corrected once, giving better accuracy and good robustness without noticeably affecting real-time performance. Alternatively, after predicting the fusion pose of the k window and updating the fusion poses in the whole data sliding window, the updated fusion pose of the first window can be output; the first window has then been updated k-1 times, giving still better accuracy and robustness, which suits cases with lower real-time requirements, such as offline testing or specific user settings.
Third embodiment
The difference between the embodiment and the first and second embodiments is that if the number of windows of the data sliding window meets the preset condition, the positioning result of the vehicle is output according to the fusion pose of the current window predicted in the step S3, and the positioning result of the vehicle is output according to the updated fusion pose in the step S40. That is, the predicted result and the updated result of the pose can be output together for the user to reference at the same time.
Other steps of the present embodiment are the same as those of the first embodiment or the second embodiment, and will not be described here again.
As described above, the vehicle positioning method provided by the application initializes a first window of the data sliding window with current data of a target data source and determines the fusion pose of the first window, where the target data source comprises the vehicle's combined navigation pose, dead reckoning pose and lane line perception data; a current window holding the latest target data source data enters the data sliding window, and its fusion pose is predicted from the fusion pose of the previous window; the fusion pose of each window in the data sliding window is then updated based on each target data source in the data sliding window, and the positioning result of the vehicle is output according to the fusion poses in the data sliding window when the window number of the data sliding window meets the preset condition. By combining the historical and current data of multiple data sources through the sliding window and updating the pose multiple times, good positioning accuracy and robustness are achieved.
Fourth embodiment
Fig. 7 is a schematic structural view of a vehicle positioning device according to a fourth embodiment of the present application. As shown in fig. 7, the present application also provides a vehicle positioning device, including:
A data acquisition unit 601, configured to acquire current data of a target data source at time t during a vehicle driving process, where the target data source includes a dead reckoning pose of the vehicle, lane line sensing data, and a combined navigation pose based on satellite positioning data;
An initializing unit 602, configured to initialize a first window of the data sliding window by using current data at time t, and determine an initial pose according to the current data to serve as a fusion pose of the first window, where the maximum window number of the data sliding window is greater than 1;
The fusion pose prediction unit 603 is configured to predict a fusion pose of a current window according to a fusion pose of a previous window after the current window with the latest acquired target data source enters the data sliding window;
And the updating and outputting unit 604 is configured to update the fusion pose of each window in the data sliding window based on each target data source in the data sliding window, and output the positioning result of the vehicle according to the fusion pose in the data sliding window when the number of windows in the data sliding window meets the preset condition.
Optionally, the vehicle positioning device provided by the application may further include:
The judging unit is used for judging, after the updating and outputting unit 604 outputs the positioning result of the vehicle, whether the pose to be fused in the data sliding window has gone uncorrected for more than a preset duration or a preset distance;
The fusion pose resetting unit is used for correcting the pose to be fused according to the relative transformation relation between the combined navigation pose based on satellite positioning data and the pose to be fused when the pose to be fused in the data sliding window has gone uncorrected for more than the preset duration or the preset distance, so as to update the fusion pose of each window in the data sliding window; or for resetting the pose to be fused in the data sliding window to the dead reckoning pose or the combined navigation pose based on satellite positioning data, and returning to the fusion pose prediction unit for further processing.
The detailed working process and working sub-flow of each unit module are the same as those of the corresponding steps described in any one of the first to third embodiments, and are not described herein.
Fifth embodiment
Fig. 8 is a schematic structural view of a vehicle according to a fifth embodiment of the present application. As shown in fig. 8, the present application also provides a vehicle including: a memory 701, and a processor 702, wherein the memory 701 stores a processing program that when executed by the processor 702 implements the steps of the vehicle positioning method according to any one of the first to third embodiments.
The present application also provides a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the vehicle locating method as described above.
Embodiments of the present application also provide a computer program product comprising computer program code which, when run on a computer, causes the computer to perform the method as in the various possible embodiments described above.
The embodiment of the application also provides a chip, which comprises a memory and a processor, wherein the memory is used for storing a computer program, and the processor is used for calling and running the computer program from the memory, so that the device provided with the chip executes the method in the various possible implementation manners.
In the embodiments of the chip, the computer program product and the readable storage medium provided by the present application, all technical features of each embodiment of the above method are included, and the expansion and explanation contents of the description are basically the same as those of each embodiment of the above method, which are not repeated herein.
The foregoing description covers only the preferred embodiments of the present application and is not intended to limit its scope; any equivalent structure or equivalent process transformation made using the contents of this specification and the accompanying drawings, whether applied directly or indirectly in other related technical fields, falls likewise within the scope of protection of the present application.

Claims (12)

1. A vehicle positioning method, characterized by comprising the steps of:
S1: in the running process of a vehicle, current data of a target data source is obtained at the time t, wherein the target data source comprises a dead reckoning pose of the vehicle, lane line sensing data and a combined navigation pose based on satellite positioning data;
S2: initializing a first window of a data sliding window by adopting the current data at the time t, determining an initial pose according to the current data to serve as a fusion pose of the first window, wherein the maximum window number of the data sliding window is larger than 1;
S3: a current window with a latest acquired target data source enters the data sliding window, and the fusion pose of the current window is predicted according to the fusion pose of the previous window;
S4: updating the fusion pose of each window in the data sliding window based on each target data source in the data sliding window, and outputting a positioning result of the vehicle according to the fusion pose in the data sliding window when the window number of the data sliding window meets a preset condition;
The updating the fusion pose of each window in the data sliding window based on each target data source in the data sliding window comprises the following steps:
S41: acquiring fusion poses of all windows in the data sliding window as poses to be fused;
S42: correcting the pose to be fused based on the relative transformation relation between the combined navigation pose based on satellite positioning data and the pose to be fused in the data sliding window, and the relative transformation relation between the lane line perception data and the map lane line data, so as to update the fusion pose of each window in the data sliding window.
2. The method of claim 1, wherein determining an initial pose from the current data as a fused pose of the first window comprises:
S21: determining the dead reckoning pose at the time t or the combined navigation pose based on satellite positioning data as the initial pose, to serve as the fusion pose of the first window.
3. The method of claim 1, wherein the step S3 comprises:
S31: a current window with a latest acquired target data source enters the data sliding window, and the relation between the dead reckoning pose of the current window and the dead reckoning pose of the previous window is determined;
S32: predicting the fusion pose of the current window according to the relation and the fusion pose of the previous window.
4. The method of claim 1, wherein the step S42 comprises:
S421: determining parameters for correcting the pose to be fused, based on the relative transformation relation between the combined navigation pose based on satellite positioning data and the pose to be fused in the data sliding window, and the relative transformation relation between the lane line perception data and the map lane line data;
S422: correcting the pose to be fused according to the parameters so as to update the fusion pose of each window in the data sliding window.
5. The method of claim 4, wherein the step S421 includes:
S4211: matching the lane line perception data with the map lane line data to obtain a first rigid body transformation parameter; and performing a rigid body transformation from the pose to be fused to the combined navigation pose based on the satellite positioning data to obtain a second rigid body transformation parameter;
S4212: calculating a third rigid body transformation parameter, used as the parameter for correcting the pose to be fused, from the first rigid body transformation parameter, the second rigid body transformation parameter and their weights.
6. The method of claim 5, wherein, prior to step S4212, the method further comprises:
S4213: determining the weight of the first rigid body transformation parameter according to the error between the lane line perception data and the map lane line data, and determining the weight of the second rigid body transformation parameter according to the error between the integrated navigation pose based on satellite positioning data and the result of the rigid body transformation from the poses to be fused to that integrated navigation pose.
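One natural reading of S4213 is inverse-residual weighting: the smaller a parameter's alignment error, the larger its weight. The 1/(rms + eps) form below is an illustrative choice, not the claimed formula.

```python
import numpy as np


def weight_from_error(residuals: np.ndarray, eps: float = 1e-6) -> float:
    """Map per-point alignment residuals to a scalar weight via inverse RMS."""
    rms = float(np.sqrt(np.mean(residuals ** 2)))
    return 1.0 / (rms + eps)


# w1 from lane residuals (perceived lane points, after applying the first
# parameter, vs. map lane points); w2 from pose residuals (transformed poses
# to be fused vs. the integrated navigation poses). Values are hypothetical.
w1 = weight_from_error(np.array([0.12, 0.08, 0.15]))
w2 = weight_from_error(np.array([0.40, 0.55, 0.35]))
```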
7. The method of claim 1, wherein outputting the positioning result of the vehicle according to the fusion pose in the data sliding window comprises:
outputting the positioning result of the vehicle according to the fusion pose of the current window predicted in step S3, and/or outputting the positioning result of the vehicle according to the updated fusion pose.
8. The method of claim 7, wherein outputting the positioning result of the vehicle according to the updated fusion pose comprises:
outputting the updated fusion pose of the current window as the positioning result of the vehicle; and/or
outputting the updated fusion pose of the current first window of the data sliding window as the positioning result of the vehicle.
9. The method according to any one of claims 1 to 8, further comprising, after the step S4:
S5: judging whether the poses to be fused in the data sliding window have gone uncorrected for longer than a preset duration or farther than a preset distance;
S6: if so, correcting the poses to be fused according to the relative transformation relation between the integrated navigation pose based on satellite positioning data and the poses to be fused, so as to update the fusion pose of each window in the data sliding window; or resetting the poses to be fused in the data sliding window to the dead reckoning pose or the integrated navigation pose based on satellite positioning data, and returning to step S3.
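A minimal sketch of the S5 staleness check, assuming the fusion state records when (in time and odometer distance) the last correction happened; the thresholds and field names are hypothetical, and on a True result the caller would take one of the two S6 branches (correction from the integrated navigation pose alone, or a window reset followed by a return to S3).

```python
from dataclasses import dataclass

MAX_STALE_S = 5.0     # assumed "preset duration"; the claim leaves it open
MAX_STALE_M = 100.0   # assumed "preset distance"


@dataclass
class CorrectionState:
    last_correction_time_s: float
    last_correction_odo_m: float


def needs_recovery(state: CorrectionState, now_s: float, odo_m: float) -> bool:
    """S5: True when the poses to be fused have gone uncorrected too long."""
    return (now_s - state.last_correction_time_s > MAX_STALE_S
            or odo_m - state.last_correction_odo_m > MAX_STALE_M)
```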
10. A vehicle positioning device, characterized by comprising:
a data acquisition unit, configured to acquire, during travel of the vehicle, current data of target data sources at time t, wherein the target data sources comprise a dead reckoning pose of the vehicle, lane line perception data, and an integrated navigation pose based on satellite positioning data;
an initialization unit, configured to initialize the first window of a data sliding window with the current data at time t, and to determine an initial pose from the current data to serve as the fusion pose of the first window, wherein the maximum number of windows in the data sliding window is greater than 1;
a fusion pose prediction unit, configured to predict the fusion pose of the current window from the fusion pose of the previous window after a current window containing the latest acquired target data sources is added to the data sliding window;
an updating and outputting unit, configured to update the fusion pose of each window in the data sliding window based on the target data sources in the data sliding window, and to output a positioning result of the vehicle according to the fusion poses in the data sliding window when the number of windows in the data sliding window satisfies a preset condition;
wherein updating the fusion pose of each window in the data sliding window based on the target data sources in the data sliding window comprises:
S41: acquiring the fusion poses of all windows in the data sliding window as the poses to be fused; and
S42: correcting the poses to be fused based on the relative transformation relation between the integrated navigation pose based on satellite positioning data in the data sliding window and the poses to be fused, and the relative transformation relation between the lane line perception data and the map lane line data, so as to update the fusion pose of each window in the data sliding window.
11. A vehicle, characterized by comprising: a memory and a processor, wherein the memory has stored thereon a processing program which, when executed by the processor, implements the steps of the vehicle positioning method according to any one of claims 1 to 9.
12. A readable storage medium, characterized in that the readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the vehicle positioning method according to any one of claims 1 to 9.
CN202111226894.2A 2021-10-21 2021-10-21 Vehicle positioning method, device, vehicle and readable storage medium Active CN114001742B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111226894.2A CN114001742B (en) 2021-10-21 2021-10-21 Vehicle positioning method, device, vehicle and readable storage medium

Publications (2)

Publication Number Publication Date
CN114001742A (en) 2022-02-01
CN114001742B (en) 2024-06-04

Family

ID=79923427

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111226894.2A Active CN114001742B (en) 2021-10-21 2021-10-21 Vehicle positioning method, device, vehicle and readable storage medium

Country Status (1)

Country Link
CN (1) CN114001742B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109116397A * 2018-07-25 2019-01-01 吉林大学 Vehicle-mounted multi-camera vision positioning method, device, equipment and storage medium
CN110207714A * 2019-06-28 2019-09-06 广州小鹏汽车科技有限公司 Method, onboard system and vehicle for determining vehicle pose
CN110806215A * 2019-11-21 2020-02-18 北京百度网讯科技有限公司 Vehicle positioning method, device, equipment and storage medium
CN111220154A * 2020-01-22 2020-06-02 北京百度网讯科技有限公司 Vehicle positioning method, device, equipment and medium
CN111551186A * 2019-11-29 2020-08-18 福瑞泰克智能系统有限公司 Vehicle real-time positioning method and system and vehicle
CN111649739A * 2020-06-02 2020-09-11 北京百度网讯科技有限公司 Positioning method and apparatus, autonomous vehicle, electronic device, and storage medium
WO2020237996A1 * 2019-05-30 2020-12-03 魔门塔(苏州)科技有限公司 Vehicle pose correction method and device
CN112304302A * 2019-07-26 2021-02-02 北京初速度科技有限公司 Multi-scene high-precision vehicle positioning method and device and vehicle-mounted terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10699438B2 (en) * 2017-07-06 2020-06-30 Siemens Healthcare Gmbh Mobile device localization in complex, three-dimensional scenes

Similar Documents

Publication Publication Date Title
US11227168B2 (en) Robust lane association by projecting 2-D image into 3-D world using map information
Atia et al. A low-cost lane-determination system using GNSS/IMU fusion and HMM-based multistage map matching
US11193782B2 (en) Vehicle position estimation apparatus
KR20200044420A (en) Method and device to estimate position
KR102441073B1 (en) Apparatus for compensating sensing value of gyroscope sensor, system having the same and method thereof
KR20220033477A (en) Appratus and method for estimating the position of an automated valet parking system
US11158065B2 (en) Localization of a mobile unit by means of a multi hypothesis kalman filter method
US20190187297A1 (en) System and method for locating a moving object
CN114252082B (en) Vehicle positioning method and device and electronic equipment
CN110637209B (en) Method, apparatus and computer readable storage medium having instructions for estimating a pose of a motor vehicle
CN114136315A (en) Monocular vision-based auxiliary inertial integrated navigation method and system
CN114396943A (en) Fusion positioning method and terminal
CN113405555B (en) Automatic driving positioning sensing method, system and device
US20220057517A1 (en) Method for constructing point cloud map, computer device, and storage medium
CN110132280B (en) Vehicle positioning method and device in indoor scene and vehicle
CN115060257A (en) Vehicle lane change detection method based on civil-grade inertia measurement unit
CN114001742B (en) Vehicle positioning method, device, vehicle and readable storage medium
KR101837821B1 (en) Method for estimating position using multi-structure filter and System thereof
CN111351497B (en) Vehicle positioning method and device and map construction method and device
CN114019954B (en) Course installation angle calibration method, device, computer equipment and storage medium
JP6820762B2 (en) Position estimator
CN115205828B (en) Vehicle positioning method and device, vehicle control unit and readable storage medium
WO2023017624A1 (en) Drive device, vehicle, and method for automated driving and/or assisted driving
CN117805866A (en) Multi-sensor fusion positioning method, device and medium based on high-precision map
CN117346802A (en) Vehicle positioning method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant