CN116045964A - High-precision map updating method and device - Google Patents


Info

Publication number
CN116045964A
Authority
CN
China
Prior art keywords: information, vehicle, pose, precision map, observation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310080648.3A
Other languages
Chinese (zh)
Inventor
颜扬治
张婷
张志萌
李凯
Current Assignee
Ecarx Hubei Tech Co Ltd
Original Assignee
Ecarx Hubei Tech Co Ltd
Priority date
Filing date
Publication date
Application filed by Ecarx Hubei Tech Co Ltd filed Critical Ecarx Hubei Tech Co Ltd
Priority claimed from application CN202310080648.3A
Publication of CN116045964A
Legal status: Pending

Links

Images

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38: Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804: Creation or updating of map data
    • G01C21/3807: Creation or updating of map data characterised by the type of data
    • G01C21/3815: Road data
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38: Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804: Creation or updating of map data
    • G01C21/3833: Creation or updating of map data characterised by the source of data
    • G01C21/3848: Data obtained from both position sensors and additional sensors
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

The application discloses a high-precision map updating method and device. The method comprises the following steps: acquiring first pose information of a vehicle at a first acquisition time, and acquiring a local high-precision map corresponding to the first pose information from a cloud module, wherein the local high-precision map comprises road vector feature information; acquiring multi-frame road images and vehicle dead reckoning information collected in a preset time period, and determining fused observation feature information from the multi-frame road images and the vehicle dead reckoning information; registering the road vector feature information and the fused observation feature information to obtain registration pose information of the vehicle; and uploading the fused observation feature information to the cloud module when the registration pose information meets a preset condition, so as to update the local high-precision map. The method and device solve the technical problem in the related art that updating a high-precision map is either costly or yields a poor-quality map.

Description

High-precision map updating method and device
Technical Field
The application relates to the technical field of map updating, in particular to a high-precision map updating method and device.
Background
A high-precision map (High Definition Map, HD Map), also called a high-precision electronic map, differs from a traditional navigation map in that it can provide lane-level navigation information in addition to road-level navigation information. Because it far exceeds traditional navigation maps in information richness and precision, it is widely applied in scenarios such as vehicle navigation and automatic driving.
Because actual roads change due to factors such as construction and traffic diversion, a high-precision map must be updated continuously to preserve its real-time validity, known in the industry as freshness. Two methods exist in the related art. One is to periodically collect data with professional acquisition vehicles equipped with high-precision equipment; this yields high update quality, but the cost is so high that the map cannot be updated frequently and in time. The other is to update the map in a crowdsourcing manner using general-purpose sensors fitted on ordinary vehicles; this is low-cost and timely, but the update quality is low and the technical difficulty is high.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiment of the application provides a high-precision map updating method and device, which at least solve the technical problems of higher cost or poor map quality when a high-precision map is updated in the related technology.
According to an aspect of the embodiments of the present application, there is provided a high-precision map updating method, including: acquiring first pose information of a vehicle at a first acquisition moment, and acquiring a local high-precision map corresponding to the first pose information from a cloud module, wherein the local high-precision map comprises road vector feature information; acquiring multi-frame road images and vehicle dead reckoning information acquired in a preset time period, and determining fusion observation characteristic information according to the multi-frame road images and the vehicle dead reckoning information, wherein the vehicle dead reckoning information is used for dead reckoning; registering the road vector characteristic information and the fusion observation characteristic information to obtain registration pose information of the vehicle; and uploading the fusion observation characteristic information to a cloud module when the registration pose information meets preset conditions so as to update the local high-precision map.
Optionally, acquiring the first pose information of the vehicle at the first acquisition time includes: when the first acquisition time is an initial acquisition time, acquiring the first pose information of the vehicle from a global navigation satellite system, or acquiring first pose information input by a target object in a human-computer interaction interface of the vehicle; when the first acquisition time is not the initial acquisition time, acquiring first dead reckoning information of the vehicle at the first acquisition time, acquiring second pose information and second dead reckoning information of the vehicle at a second acquisition time, and determining the first pose information of the vehicle at the first acquisition time according to the second pose information, the first dead reckoning information and the second dead reckoning information, wherein the second acquisition time is the acquisition time immediately preceding the first acquisition time.
Optionally, acquiring the multi-frame road images and the vehicle dead reckoning information collected in the preset time period includes: determining each third acquisition time in a preset time period before the first acquisition time; acquiring the road image collected by a first type of sensor at each third acquisition time, wherein the first type of sensor comprises one of the following: a camera, a lidar; and acquiring the vehicle dead reckoning information collected by a second type of sensor at each third acquisition time, wherein the second type of sensor comprises one of the following: an inertial measurement unit, a wheel speed sensor, an odometer.
Optionally, determining the fused observation feature information from the multi-frame road images and the vehicle dead reckoning information includes: for each road image frame, extracting first observation feature information from the road image, and converting the first observation feature information in the sensor coordinate system into second observation feature information in the vehicle coordinate system according to the extrinsics of the first type of sensor; determining the relative pose between the vehicle dead reckoning information collected at the third acquisition time corresponding to that road image and the vehicle dead reckoning information collected at the first acquisition time, and converting the second observation feature information into third observation feature information according to the relative pose; and fusing the third observation feature information corresponding to every road image frame to obtain the fused observation feature information.
Optionally, registering the road vector feature information and the fusion observation feature information to obtain registration pose information of the vehicle, including: converting the fused observation characteristic information in the vehicle coordinate system into fourth observation characteristic information in the world coordinate system according to the first pose information; establishing a cost function according to a first reprojection error between the road vector characteristic information and the fourth observation characteristic information; and determining the registration pose information of the vehicle in a mode of minimizing the cost function.
Optionally, sending the fused observation feature information to the cloud module when the registration pose information meets the preset condition includes: converting the fused observation feature information in the vehicle coordinate system into fifth observation feature information in the world coordinate system according to the registration pose information; determining a second reprojection error between the road vector feature information and the fifth observation feature information, and determining the confidence of the registration pose information according to the second reprojection error; and sending the fused observation feature information to the cloud module when the confidence is greater than a preset confidence threshold and the second reprojection error is greater than a preset error threshold.
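The upload gate described here can be sketched as a simple predicate: upload only when registration is trusted (high confidence) and the map still visibly disagrees with what was observed (large residual reprojection error), meaning the map likely needs updating. The threshold values and names below are illustrative assumptions, not values from the patent.

```python
# Assumed example thresholds (the patent does not specify concrete values).
CONF_THRESHOLD = 0.9
ERROR_THRESHOLD = 0.5   # metres of residual reprojection error

def should_upload(confidence, reprojection_error):
    """Upload fused observation features only when the registration is
    confident AND the residual error suggests the map itself is stale."""
    return confidence > CONF_THRESHOLD and reprojection_error > ERROR_THRESHOLD
```

A poorly registered vehicle (low confidence) or an unchanged road (small residual) therefore uploads nothing, which keeps low-quality crowdsourced data out of the cloud.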
Optionally, uploading the fused observation feature information to a cloud module to update the local high-precision map, including: and uploading the fusion observation feature information to a cloud module, wherein the cloud module is used for managing the received multiple groups of fusion observation feature information, counting the occurrence frequency of the same fusion observation feature information, determining the fusion observation feature information with the highest occurrence frequency as target fusion observation feature information, and updating the local high-precision map according to the target fusion observation feature information.
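A minimal sketch of this cloud-side majority vote follows; the string feature encoding and variable names are illustrative assumptions (a real system would compare quantized geometric descriptors), but the logic is the frequency count the paragraph describes: keep the variant reported most often and discard one-off mis-detections.

```python
from collections import Counter

# Fused observation features uploaded by several vehicles for one road segment.
uploads = [
    "dashed_line@seg42",   # assumed quantized feature descriptor
    "dashed_line@seg42",
    "solid_line@seg42",    # a one-off mis-detection from a single vehicle
    "dashed_line@seg42",
]

# Count occurrences of identical fused features; the most frequent one becomes
# the target feature used to update the local high-precision map.
counts = Counter(uploads)
target_feature, votes = counts.most_common(1)[0]
```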
Optionally, after the registration pose information of the vehicle is obtained, the method further comprises: determining target positioning information of the vehicle at the first acquisition time according to the registration pose information; and displaying the local high-precision map and the target positioning information in the human-computer interaction interface of the vehicle.
According to another aspect of the embodiments of the present application, there is also provided a high-precision map updating apparatus, including: the first acquisition module is used for acquiring first pose information of the vehicle at a first acquisition time, and acquiring a local high-precision map corresponding to the first pose information from the cloud module, wherein the local high-precision map comprises road vector characteristic information; the second acquisition module is used for acquiring multi-frame road images and vehicle dead reckoning information acquired in a preset time period and determining fusion observation characteristic information according to the multi-frame road images and the vehicle dead reckoning information, wherein the vehicle dead reckoning information is used for dead reckoning; the registration module is used for registering the road vector feature information and the fusion observation feature information to obtain registration pose information of the vehicle; and the updating module is used for uploading the fusion observation characteristic information to the cloud module when the registration pose information meets the preset condition so as to update the local high-precision map.
According to another aspect of the embodiments of the present application, there is also provided an in-vehicle apparatus including: the system comprises a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the high-precision map updating method through the computer program.
In the embodiment of the application, first pose information of the vehicle at a first acquisition time is acquired, and a local high-precision map corresponding to the first pose information is acquired from a cloud module, wherein the local high-precision map comprises road vector feature information; multi-frame road images and vehicle dead reckoning information collected in a preset time period are acquired, and fused observation feature information is determined from the multi-frame road images and the vehicle dead reckoning information; the road vector feature information and the fused observation feature information are registered to obtain registration pose information of the vehicle; and finally, when the registration pose information meets a preset condition, the fused observation feature information is uploaded to the cloud module to update the local high-precision map. After the fused observation feature information is obtained by extracting features from the road images and dead reckoning with the vehicle dead reckoning information, it is registered against the road vector feature information in the high-precision map; this yields real-time, accurate registration pose information for the vehicle without high-cost acquisition equipment. When the registration pose information meets the preset condition, the accuracy of the map update information is guaranteed, and the information is uploaded to the cloud, which completes crowdsourced updating of the high-precision map from multiple sets of such update information. The technical problem in the related art that updating a high-precision map is either costly or yields a poor-quality map is thus effectively solved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1 is a flow diagram of an alternative high-precision map updating method according to an embodiment of the present application;
FIG. 2 is a schematic illustration of an alternative high-precision map and vehicle positioning according to an embodiment of the present application;
FIG. 3 is a schematic illustration of an alternative registration pose determination process according to embodiments of the present application;
FIG. 4 is a schematic illustration of an alternative vehicle interacting with a cloud in accordance with an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an alternative high-precision map updating apparatus according to an embodiment of the present application.
Detailed Description
In order to make the solution of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of protection of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and the accompanying drawings are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
For a better understanding of the embodiments of the present application, some nouns or translations of terms that appear during the description of the embodiments of the present application are explained first as follows:
positioning technology: one of the foundational core technologies of robotic applications such as automatic driving, it provides the robot with position and attitude, i.e. pose information. According to the positioning principle, positioning techniques can be categorized into geometric positioning, dead reckoning (DR), feature positioning, and the like.
Geometric positioning measures distances or angles to reference devices with known positions and then determines its own position through geometric calculation; it includes technologies such as GNSS (Global Navigation Satellite System), UWB (Ultra-Wide Band), Bluetooth and 5G, and provides absolute positioning information. Among smart-vehicle applications, GNSS is the most widely used. GNSS positioning is based on satellite positioning technology and is divided into single-point positioning, differential GPS positioning and RTK (Real-Time Kinematic) GPS positioning: single-point positioning provides 3-10 m accuracy, differential GPS provides 0.5-2 m accuracy, and RTK GPS provides centimetre-level accuracy. Its limitations are the dependence on positioning infrastructure and susceptibility to signal occlusion and reflection, so it fails in scenarios such as tunnels and under overpasses.
Dead reckoning starts from the position at the previous time and calculates the position at the next time from the motion data of sensors such as an IMU (Inertial Measurement Unit) and a wheel speed sensor, providing relative positioning information. For example, when calculating the relative pose from point a to point b, let the pose at point a be T_a and the pose at point b be T_b; the relative pose between a and b is then T_ba = T_a^-1 * T_b, where T_a^-1 is the matrix inverse of T_a. The limitation of dead reckoning is that the accumulated error grows continuously as the reckoned distance increases.
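As an illustration of this relative-pose formula, the sketch below uses planar SE(2) poses as 3x3 homogeneous matrices (a deliberate simplification of the full 6-DoF case; all numeric values are invented for the example):

```python
import numpy as np

def pose_se2(x, y, theta):
    """Build a 3x3 homogeneous transform for a planar (x, y, heading) pose."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1.0]])

# Assumed poses of the vehicle at points a and b in the world frame.
T_a = pose_se2(10.0, 5.0, 0.0)
T_b = pose_se2(12.0, 5.0, np.pi / 2)

# Relative pose between a and b: T_ba = inverse(T_a) @ T_b.
T_ba = np.linalg.inv(T_a) @ T_b

# Composing the start pose with the relative pose recovers the end pose,
# which is exactly how dead reckoning chains motions together.
T_b_recovered = T_a @ T_ba
```

The relative translation here is 2 m forward, as expected from the chosen poses; chaining many such relative steps is what lets the reckoned error accumulate.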
Feature positioning first acquires features of the surrounding environment, such as base-station IDs, Wi-Fi fingerprints, images and lidar point clouds, then matches the observed features against a feature map established in advance and determines the position within that feature map, providing absolute positioning information. The direct factors influencing feature positioning are the number, quality and distinctiveness of the features; its limitation is that positioning accuracy and stability degrade when scene, environmental and other factors impair feature observation.
Coordinate system: in positioning technology, a world coordinate system, a carrier coordinate system and a sensor coordinate system are generally involved.
The world coordinate system, denoted W, keeps a fixed relation to the actual geographic location; a geocentric Earth-fixed coordinate system, ECEF (Earth-Centered, Earth-Fixed), is generally used.
The carrier coordinate system, denoted B, is the vehicle coordinate system in the case of a vehicle; it is centered on a fixed position of the carrier, e.g. the vehicle's rear axle. The pose of the vehicle is the 6-DoF (Degrees of Freedom) pose of the vehicle coordinate system in the world coordinate system, denoted T_WB.
The sensor coordinate system, denoted S, is also called the observation coordinate system; the measurement data acquired by a sensor are all expressed in its sensor coordinate system. Since the sensor is usually fixed on the carrier and moves rigidly with it, there is a fixed transformation T_BS between the sensor coordinate system and the carrier coordinate system, also known as the sensor extrinsics (external parameters).
Example 1
According to the embodiments of the present application, a high-precision map updating method applicable to an in-vehicle apparatus is provided. It should be noted that the steps shown in the flowchart of the drawings may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in a different order.
Fig. 1 is a schematic flow chart of an alternative high-precision map updating method according to an embodiment of the present application, as shown in fig. 1, the method at least includes steps S102-S108, wherein:
step S102, acquiring first pose information of a vehicle at a first acquisition time, and acquiring a local high-precision map corresponding to the first pose information from a cloud module, wherein the local high-precision map comprises road vector feature information.
Because the map updating is mainly performed according to the real-time positioning of the vehicle, the first acquisition time mainly refers to the current latest acquisition time. As an alternative embodiment, the first pose information of the vehicle at the first acquisition time may be acquired by:
When the first acquisition time is the initial acquisition time, the first pose information of the vehicle, denoted T_WB1, can be acquired from the global navigation satellite system. However, since the global navigation satellite system is easily affected by signal occlusion and reflection and fails in scenarios such as tunnels and under overpasses, the first pose information can instead be obtained from input by the target object in the human-computer interaction interface of the vehicle; that is, the current pose information of the vehicle is determined in a user-assisted manner, where the input may be text, voice, or the designation of a specific position on the interface.
When the first acquisition time is not the initial acquisition time, first dead reckoning information of the vehicle at the first acquisition time can be acquired, second pose information and second dead reckoning information of the vehicle at the second acquisition time can be acquired, and the first pose information of the vehicle at the first acquisition time is determined according to the second pose information, the first dead reckoning information and the second dead reckoning information; the second acquisition time is the acquisition time immediately preceding the first acquisition time.
For example, the acquired second pose information is T_WB2. Dead reckoning with the first dead reckoning information and the second dead reckoning information gives the relative pose of the vehicle between the first and second acquisition times as T_B1B2; the first pose information is then determined as T_WB1 = T_WB2 * T_B1B2.
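A numeric sketch of this pose propagation, again with planar poses for simplicity (the odometry values are invented for illustration): the previous registered pose is composed with the dead-reckoned relative motion to obtain the current pose prior.

```python
import numpy as np

def pose_se2(x, y, theta):
    """3x3 homogeneous transform for a planar (x, y, heading) pose."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0, 0, 1.0]])

# Registered pose at the previous (second) acquisition time, in the world frame.
T_WB2 = pose_se2(100.0, 50.0, 0.0)

# Dead-reckoned poses at the two acquisition times, in the odometry frame.
T_odo2 = pose_se2(0.0, 0.0, 0.0)
T_odo1 = pose_se2(3.0, 0.0, 0.0)      # vehicle moved 3 m straight ahead

# Relative motion between the two times, from dead reckoning alone.
T_rel = np.linalg.inv(T_odo2) @ T_odo1

# Prior pose at the first (current) acquisition time: compose previous pose
# with the dead-reckoned relative motion.
T_WB1 = T_WB2 @ T_rel
```

The resulting prior is 3 m ahead of the previous registered position, which the subsequent registration step then refines.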
After the first pose information is obtained, the local high-precision map corresponding to the first pose information can be obtained from the cloud module. The cloud module stores a complete high-precision map in which road information is stored as vector information, including, but not limited to, road object information such as lamp posts, guideboards and road edges, and road marking information such as solid lines, broken lines, arrows and characters on the road surface; fig. 2 is a schematic diagram of a typical high-precision map.
Specifically, in the running process of the vehicle, the vehicle-mounted device may send a request for obtaining the high-precision map to the cloud module, where the request includes current first pose information of the vehicle, and after the cloud module receives the request, the cloud module may send a local high-precision map corresponding to the first pose information to the vehicle-mounted device, for example, a local high-precision map within 1 km around the current position of the vehicle.
Step S104, acquiring multi-frame road images and vehicle dead reckoning information acquired in a preset time period, and determining fusion observation characteristic information according to the multi-frame road images and the vehicle dead reckoning information, wherein the vehicle dead reckoning information is used for dead reckoning.
Considering that the observed feature information from a single road image frame may be incomplete because of occlusion, the limited detection range of the sensor and other factors, the embodiment of the application proposes stitching and fusing the observed features of multiple road image frames for subsequent pose calibration and map updating.
The preset time period is a period of preset duration ending at (and including) the first acquisition time; the preset duration can be set as required and is not specifically limited here. The preset time period contains a number of acquisition times at a fixed interval, where the fixed interval is determined by the acquisition frequency of the acquisition device.
As an alternative embodiment, the multi-frame road images and the vehicle dead reckoning information collected in the preset time period may first be acquired as follows: determine each third acquisition time in a preset time period before the first acquisition time; acquire the road image collected by the first type of sensor at each third acquisition time, where the first type of sensor comprises one of the following: a camera, a lidar; and acquire the vehicle dead reckoning information collected by the second type of sensor at each third acquisition time, where the second type of sensor comprises one of the following: an inertial measurement unit, a wheel speed sensor, an odometer.
Then, for each road image frame, the first observation feature information in the road image can be extracted by detection, segmentation, recognition and similar methods, and converted from the sensor coordinate system into second observation feature information in the vehicle coordinate system according to the extrinsics of the first type of sensor. For example, if the first observation feature information in the sensor coordinate system is P_S and the extrinsics of the first type of sensor are T_BS, the second observation feature information in the vehicle coordinate system is obtained as P_B = T_BS * P_S (treating the features as homogeneous column vectors).
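The sensor-to-vehicle conversion can be illustrated as follows. The extrinsics and point values are assumptions invented for the example, and points are treated as homogeneous column vectors, in which convention the conversion reads P_B = T_BS applied to P_S:

```python
import numpy as np

# Assumed extrinsics T_BS: pose of the sensor in the vehicle (body) frame.
# Here: a camera mounted 1.5 m forward of the rear axle, with no rotation.
T_BS = np.eye(4)
T_BS[0, 3] = 1.5

# First observation feature information: points in the sensor frame,
# as homogeneous column vectors (x, y, z, 1) - e.g. two lane-marking points.
P_S = np.array([[5.0, -1.0, 0.0, 1.0],
                [5.0,  1.0, 0.0, 1.0]]).T

# Second observation feature information: the same points in the vehicle frame.
P_B = T_BS @ P_S
```

With this mounting offset, each point simply shifts 1.5 m forward along the vehicle's x-axis; a real extrinsic calibration would also include a rotation.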
To stitch and fuse the observation feature information of the multiple road image frames, all of it must be expressed in the same coordinate system. Therefore, for each road image frame, the relative pose between the vehicle dead reckoning information collected at the third acquisition time corresponding to that road image and the vehicle dead reckoning information collected at the first acquisition time can be determined, and the second observation feature information converted into third observation feature information according to that relative pose; finally, the third observation feature information corresponding to every road image frame is fused to obtain the fused observation feature information.
For example, the second observation feature information corresponding to each third acquisition time is {P_1, P_2, …, P_n} and the corresponding vehicle dead reckoning information is {T_1, T_2, …, T_n}, where n denotes the current latest first acquisition time. For the i-th third acquisition time, the relative pose of the vehicle between that acquisition time and the first acquisition time is T_ni = T_i^-1 * T_n, and the third observation feature information for that acquisition time is nP_i = T_ni * P_i. The final fused observation feature information is the collection of all transformed features, {nP_1, nP_2, …, nP_n}.
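The multi-frame stitching step can be sketched as below. One convention note: with points as homogeneous column vectors, the transform taking a point from frame i into the latest frame n works out to inv(T_n) @ T_i (the patent's T_ni, up to notation convention); this is an assumption of the sketch. All poses and points are invented planar examples chosen so that every frame observes the same physical point, so the fused set collapses to one location:

```python
import numpy as np

def pose_se2(x, y, theta):
    """3x3 homogeneous transform for a planar (x, y, heading) pose."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0, 0, 1.0]])

# Dead-reckoning poses T_1..T_n for three acquisition times (last is latest).
T = [pose_se2(0.0, 0.0, 0.0), pose_se2(2.0, 0.0, 0.0), pose_se2(4.0, 0.0, 0.0)]

# Per-frame observed feature points in the vehicle frame at each time
# (homogeneous 2D points; one point per frame for brevity). They are the
# same physical point seen as the vehicle drives past it.
P = [np.array([5.0, 1.0, 1.0]), np.array([3.0, 1.0, 1.0]), np.array([1.0, 1.0, 1.0])]

# Transform every frame's points into the latest frame n, then collect them.
T_n = T[-1]
fused = []
for T_i, P_i in zip(T, P):
    T_ni = np.linalg.inv(T_n) @ T_i      # frame i -> frame n
    fused.append(T_ni @ P_i)
fused = np.stack(fused)
```

All three transformed observations land on the same coordinates in the latest vehicle frame, which is exactly what makes multi-frame fusion extend the effective field of view without smearing the features.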
And S106, registering the road vector feature information and the fusion observation feature information to obtain registration pose information of the vehicle.
The registration process solves for an optimal pose T_WB-best that minimizes the distance between the road vector feature information and the semantic observations corresponding to the fused observation feature information.
As an alternative embodiment, the road vector feature information and the fused observation feature information may be registered as follows: taking the first pose information as the initial value of the optimization, convert the fused observation feature information in the vehicle coordinate system into fourth observation feature information in the world coordinate system according to the first pose information; then establish a cost function from the first reprojection error between the road vector feature information and the fourth observation feature information; and determine the registration pose information of the vehicle by minimizing the cost function.
For example, assume each road element in the road vector feature information is {M_1, M_2, …, M_n} and each observation element in the fused observation feature information is {F_1, F_2, …, F_n}. With the first pose information T_WB1 as the optimization initial value, the cost function is constructed as:

f(T_WB) = SUM(DIST(T_WB * F_i, M_i))

where T_WB * F_i is the fourth observation feature information, DIST(T_WB * F_i, M_i) denotes the first reprojection error between observation element F_i and road element M_i, and SUM(DIST(T_WB * F_i, M_i)) denotes summing the first reprojection errors over all elements.
The optimization process can be expressed as:

T_WB-best = argmin(f(T_WB))

where argmin(f(T_WB)) denotes finding the optimal T_WB-best that minimizes the value of the cost function f(T_WB).
Through the above process, optimized high-precision registration pose information T_WB-best can be obtained from the low-accuracy first pose information T_WB1.
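For the special case of known one-to-one matches between observation elements and road elements with a squared-distance DIST, the minimization of f(T_WB) has a closed-form solution: the 2D Kabsch/Umeyama rigid alignment. This is an illustrative sketch under those assumptions, not the patent's solver, which may instead run an iterative optimizer seeded with T_WB1.

```python
import numpy as np

def register_2d(F, M):
    """Closed-form least-squares rigid registration (Kabsch/Umeyama, 2D).

    F: (N, 2) fused observation features in the vehicle frame.
    M: (N, 2) matched road-vector map elements in the world frame.
    Returns (R, t) minimizing sum ||R @ F_i + t - M_i||^2, i.e. the rotation
    and translation realizing the registered pose T_WB-best.
    """
    f_mean, m_mean = F.mean(axis=0), M.mean(axis=0)
    Fc, Mc = F - f_mean, M - m_mean          # centre both point sets
    H = Fc.T @ Mc                            # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = m_mean - R @ f_mean
    return R, t
```

In practice the one-to-one matches F_i ↔ M_i would come from nearest-neighbour association seeded by the initial value T_WB1; the closed form then refines the pose in a single step.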
Fig. 3 shows a schematic diagram of an alternative process for determining the registration pose information. When the first acquisition time is not the initial acquisition time, dead reckoning is performed from the pose information of the previous acquisition time to obtain the first pose information, and the corresponding local high-precision map is obtained; with the first pose information as the initial value, feature registration is performed between the road vector feature information and the fused observation feature information to obtain the registration pose information of the vehicle.
As an optional implementation, after the registration pose information of the vehicle is obtained, target positioning information of the vehicle at the first acquisition time can be determined from the registration pose information, and the local high-precision map and the target positioning information are displayed in the human-computer interaction interface of the vehicle, as shown in Fig. 2, where the five-pointed star marks the current position of the vehicle.
And S108, uploading the fusion observation feature information to a cloud module when the registration pose information meets the preset condition so as to update the local high-precision map.
Specifically, the fused observation feature information in the vehicle coordinate system can be converted into fifth observation feature information in the world coordinate system according to the registration pose information; a second reprojection error between the road vector feature information and the fifth observation feature information is then determined, and the confidence of the registration pose information is determined from the second reprojection error; when the confidence is greater than a preset confidence threshold and the second reprojection error is greater than a preset error threshold, the fused observation feature information is sent to the cloud module.
An optional confidence calculation formula is:

Conf = SUM(DIST(T_WB-best * F_i, M_i))

where Conf denotes the confidence, T_WB-best * F_i is the fifth observation feature information, DIST(T_WB-best * F_i, M_i) denotes the second reprojection error, and SUM(DIST(T_WB-best * F_i, M_i)) denotes summing the second reprojection errors over all elements.
In order to ensure the accuracy of map updating, the fused observation feature information is uploaded to the cloud module only when the registration pose information meets the preset condition. Specifically, when the confidence Conf is greater than the preset confidence threshold CT, the registration pose information is considered reliable and the current vehicle positioning accurate; when, in addition, the second reprojection error DIST(T_WB-best * F_i, M_i) of an element is greater than the preset error threshold DT, the map element M_i has changed, i.e. the high-precision map needs to be updated. At this time F_i needs to be written into the high-precision map as a new map element, denoted MF_i.
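The upload gate just described can be sketched as follows. The function name `select_updates`, the numeric values of CT and DT, and the list-of-errors encoding are illustrative assumptions, not the patent's interface.

```python
# CT and DT are the thresholds named in the text; the values are illustrative.
CT = 0.8   # preset confidence threshold
DT = 0.5   # preset per-element reprojection-error threshold

def select_updates(errors, confidence):
    """Gate the upload: only when the registration is trusted (confidence > CT)
    are elements whose reprojection error exceeds DT treated as changed map
    elements MF_i to be sent to the cloud module."""
    if confidence <= CT:
        return []   # positioning unreliable: upload nothing this trip
    return [i for i, e in enumerate(errors) if e > DT]
```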
Considering that a map update based on a single vehicle's single-trip observations is unreliable owing to factors such as occlusion, the cloud high-precision map is updated following a crowdsourcing idea.
As an optional implementation manner, the cloud module manages the received multiple groups of fusion observation feature information, counts the occurrence frequency of the same fusion observation feature information, determines the fusion observation feature information with the highest occurrence frequency as target fusion observation feature information, and updates the local high-precision map according to the target fusion observation feature information.
Specifically, the cloud module collates the map update data for the same local high-precision map uploaded by multiple vehicles within a certain time interval, recorded as the set SET{MF_ij}, and determines the map update data as follows:

MF_i-max = MAX{SET{MF_ij}}

where MAX{SET{MF_ij}} denotes taking, over the multi-trip map update data indexed by j, the most frequently occurring value for element i.
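The per-element mode selection over multi-trip uploads can be sketched with a vote count. The dict encoding `{element_id: observed_value}` and the function name are illustrative assumptions about how uploads are keyed.

```python
from collections import Counter

def crowdsource_update(uploads):
    """Pick the most frequently observed value for each map element across
    many vehicles' uploads (the MAX{SET{MF_ij}} mode selection).

    uploads: list of dicts {element_id: observed_value}, one per trip.
    Returns {element_id: winning_value}.
    """
    votes = {}
    for trip in uploads:
        for elem_id, value in trip.items():
            votes.setdefault(elem_id, Counter())[value] += 1
    return {elem_id: counter.most_common(1)[0][0]
            for elem_id, counter in votes.items()}
```

An element observed identically by most trips wins even if one occluded trip reports something else, which is exactly the robustness the crowdsourcing step is after.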
Fig. 4 shows a schematic diagram of the complete interaction between the vehicle side and the cloud side to update the high-precision map. When the first acquisition time is not the initial acquisition time, dead reckoning is performed on the data acquired by the first-type and second-type sensors to obtain the first pose information. The vehicle-mounted device sends a request for the high-precision map to the cloud module; the request contains the current first pose information of the vehicle, and on receiving it the cloud module returns the local high-precision map corresponding to the first pose information, which contains the road vector feature information. Feature extraction, dead reckoning and feature fusion are performed on the data acquired by the first-type and second-type sensors to obtain the fused observation feature information. The road vector feature information and the fused observation feature information are registered to obtain the registration pose information of the vehicle, which can also be used for location-based services (LBS). When the registration pose information meets the preset condition, the fused observation feature information is uploaded to the cloud module as map update data; the cloud module statistically fuses the multi-trip map update data it receives and determines the target map update data with which to perform crowdsourced updating of the high-precision map.
In the embodiment of the present application, first pose information of the vehicle at a first acquisition time is first acquired, and a local high-precision map corresponding to the first pose information, containing road vector feature information, is acquired from the cloud module. Multiple frames of road images and vehicle dead reckoning information acquired within a preset time period are then obtained, and fused observation feature information is determined from them, the vehicle dead reckoning information being used for dead reckoning. The road vector feature information and the fused observation feature information are registered to obtain registration pose information of the vehicle. Finally, when the registration pose information meets the preset condition, the fused observation feature information is uploaded to the cloud module to update the local high-precision map. After feature extraction and dead reckoning produce the fused observation feature information, registering it against the road vector feature information in the high-precision map yields real-time, accurate registration pose information of the vehicle without requiring high-cost acquisition equipment. Requiring the registration pose information to meet the preset condition guarantees the accuracy of the map update information uploaded to the cloud, and the cloud can then complete crowdsourced updating of the high-precision map from multiple sets of such map update information. This effectively solves the technical problems of high cost or poor map quality when updating high-precision maps in the related art.
Example 2
According to an embodiment of the present application, there is further provided a high-precision map updating apparatus for implementing the high-precision map updating method in embodiment 1, as shown in fig. 5, where the high-precision map updating apparatus includes at least a first obtaining module 51, a second obtaining module 52, a registration module 53, and an updating module 54, where:
the first obtaining module 51 is configured to obtain first pose information of the vehicle at a first collection time, and obtain a local high-precision map corresponding to the first pose information from the cloud module, where the local high-precision map includes road vector feature information.
Because the map updating is mainly performed according to the real-time positioning of the vehicle, the first acquisition time mainly refers to the current latest acquisition time. As an alternative embodiment, the first obtaining module may obtain the first pose information of the vehicle at the first acquisition time by:
when the first acquisition time is the initial acquisition time, the first acquisition module can acquire first pose information of the vehicle from the global satellite navigation system, but the global satellite navigation system is easily influenced by signal shielding, reflection and the like and can fail in the scenes such as tunnels, overhead and the like, so that the first pose information of a target object input in a human-computer interaction interface of the vehicle can also be acquired, namely, the current pose information of the vehicle is determined in a user-assisted mode, and a specific input mode can be text input or voice input or can be the specification of a specific position on the human-computer interaction interface.
When the first acquisition time is not the initial acquisition time, the first acquisition module can acquire first dead reckoning information of the vehicle at the first acquisition time, acquire second pose information and second dead reckoning information of the vehicle at a second acquisition time, and determine the first pose information of the vehicle at the first acquisition time from the second pose information, the first dead reckoning information and the second dead reckoning information; the second acquisition time is the acquisition time immediately preceding the first acquisition time.
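Under that reading — previous pose plus the dead-reckoned relative motion between the two acquisition times — the propagation can be sketched with homogeneous 2D matrices. The function name and the vehicle-to-odometry matrix convention are assumptions for illustration.

```python
import numpy as np

def propagate_pose(pose_2, dr_2, dr_1):
    """Estimate the pose at the first (current) acquisition time from the
    pose at the second (previous) acquisition time plus the relative motion
    measured by dead reckoning between the two times.

    pose_2:     3x3 homogeneous world pose at the second acquisition time.
    dr_2, dr_1: 3x3 dead-reckoned poses (odometry frame) at the second and
                first acquisition times.
    """
    delta = np.linalg.inv(dr_2) @ dr_1   # relative motion from time 2 to time 1
    return pose_2 @ delta
```

If dead reckoning reports one metre of forward motion between the two times, the propagated pose is simply the previous world pose advanced by that metre.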
After the first pose information is obtained, the first acquisition module can obtain the local high-precision map corresponding to the first pose information from the cloud module. The cloud module stores a complete high-precision map in which road information is stored as vector information such as points, lines and surfaces, including but not limited to road object information such as lamp posts, signboards and curbs, and road marking information such as solid lines, broken lines, arrows and characters on the road surface.
Specifically, in the running process of the vehicle, the first obtaining module may send a request for obtaining the high-precision map to the cloud module, where the request includes current first pose information of the vehicle, and after the cloud module receives the request, the cloud module may send a local high-precision map corresponding to the first pose information to the first obtaining module.
The second obtaining module 52 is configured to obtain a plurality of frames of road images and vehicle dead reckoning information acquired in a preset period of time, and determine fusion observation feature information according to the plurality of frames of road images and the vehicle dead reckoning information, where the vehicle dead reckoning information is used for dead reckoning.
Considering that the observation feature information from a single frame of road image may be incomplete owing to occlusion, the limited detection range of the sensor and similar factors, the embodiment of the present application splices and fuses the observation features of multiple frames of road images for subsequent pose calibration and map updating.
The preset time period is a time period of preset duration ending at (and including) the first acquisition time; the preset duration can be set as needed and is not specifically limited here. The preset time period contains several acquisition times at a fixed interval, which is determined by the acquisition frequency of the acquisition device.
As an optional implementation manner, the second obtaining module may first obtain the multi-frame road image and the vehicle navigation position information collected in the preset time period by: determining each third acquisition time in a preset time period before the first acquisition time; acquiring road images acquired by the first type of sensor at each third acquisition time, wherein the first type of sensor comprises one of the following: a camera, a laser radar; acquiring vehicle navigation information acquired by a second type sensor at each third acquisition time, wherein the second type sensor comprises one of the following: inertial measurement unit, wheel speed meter, speedometer.
Then, for each frame of road image, the second acquisition module can extract first observation feature information from the road image by detection, segmentation, recognition and similar methods, and convert the first observation feature information in the sensor coordinate system into second observation feature information in the vehicle coordinate system according to the extrinsic parameters of the first-type sensor.
To splice and fuse the observation feature information of the multiple frames of road images, all of it must be expressed in the same coordinate system. Therefore, for each frame of road image, the second acquisition module can determine the relative pose between the vehicle navigation position information acquired at the third acquisition time corresponding to that road image and the vehicle navigation position information acquired at the first acquisition time, and convert the second observation feature information into third observation feature information according to the relative pose; finally, the third observation feature information corresponding to each frame of road image is fused to obtain the fused observation feature information.
And the registration module 53 is used for registering the road vector feature information and the fusion observation feature information to obtain registration pose information of the vehicle.
The registration process solves an optimal pose, so that the distance between the road vector characteristic information and the semantic observation corresponding to the fusion observation characteristic information is minimum.
As an alternative embodiment, the registration module may register the road vector feature information and the fused observation feature information as follows: with the first pose information as the optimization initial value, convert the fused observation feature information in the vehicle coordinate system into fourth observation feature information in the world coordinate system according to the first pose information; then establish a cost function from a first reprojection error between the road vector feature information and the fourth observation feature information; and determine the registration pose information of the vehicle by minimizing the cost function.
As an optional implementation manner, the high-precision map updating device in the embodiment of the present application further includes a real-time positioning module, configured to determine, after obtaining registration pose information of the vehicle, target positioning information of the vehicle at the first acquisition time according to the registration pose information, and display the local high-precision map and the target positioning information in a man-machine interaction interface of the vehicle.
And the updating module 54 is configured to upload the fused observation feature information to the cloud module when the registration pose information meets a preset condition, so as to update the local high-precision map.
Specifically, the updating module may convert the fused observation feature information in the vehicle coordinate system into fifth observation feature information in the world coordinate system according to the registration pose information; then determine a second reprojection error between the road vector feature information and the fifth observation feature information, and determine the confidence of the registration pose information from the second reprojection error; when the confidence is greater than a preset confidence threshold and the second reprojection error is greater than a preset error threshold, the fused observation feature information is sent to the cloud module.
Considering that a map update based on a single vehicle's single-trip observations is unreliable owing to factors such as occlusion, the cloud high-precision map is updated following a crowdsourcing idea.
As an optional implementation manner, the cloud module manages the received multiple groups of fusion observation feature information, counts the occurrence frequency of the same fusion observation feature information, determines the fusion observation feature information with the highest occurrence frequency as target fusion observation feature information, and updates the local high-precision map according to the target fusion observation feature information.
It should be noted that the modules of the high-precision map updating apparatus in the embodiment of the present application correspond one-to-one to the implementation steps of the high-precision map updating method in Embodiment 1; since these have been described in detail in Embodiment 1, details not repeated in this embodiment may be found in Embodiment 1 and are not described again here.
Example 3
According to an embodiment of the present application, there is further provided a nonvolatile storage medium including a stored program, where a device in which the nonvolatile storage medium is located executes the high-precision map updating method in embodiment 1 by running the program.
Specifically, the device where the nonvolatile storage medium is located executes the following steps by running the program: acquiring first pose information of a vehicle at a first acquisition moment, and acquiring a local high-precision map corresponding to the first pose information from a cloud module, wherein the local high-precision map comprises road vector feature information; acquiring multi-frame road images and vehicle dead reckoning information acquired in a preset time period, and determining fusion observation characteristic information according to the multi-frame road images and the vehicle dead reckoning information, wherein the vehicle dead reckoning information is used for dead reckoning; registering the road vector characteristic information and the fusion observation characteristic information to obtain registration pose information of the vehicle; and uploading the fusion observation characteristic information to a cloud module when the registration pose information meets preset conditions so as to update the local high-precision map.
According to an embodiment of the present application, there is also provided a processor for running a program, wherein the program executes the high-precision map updating method in embodiment 1.
Specifically, the program execution realizes the following steps: acquiring first pose information of a vehicle at a first acquisition moment, and acquiring a local high-precision map corresponding to the first pose information from a cloud module, wherein the local high-precision map comprises road vector feature information; acquiring multi-frame road images and vehicle dead reckoning information acquired in a preset time period, and determining fusion observation characteristic information according to the multi-frame road images and the vehicle dead reckoning information, wherein the vehicle dead reckoning information is used for dead reckoning; registering the road vector characteristic information and the fusion observation characteristic information to obtain registration pose information of the vehicle; and uploading the fusion observation characteristic information to a cloud module when the registration pose information meets preset conditions so as to update the local high-precision map.
According to an embodiment of the present application, there is also provided an in-vehicle apparatus including: a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the high-precision map updating method in embodiment 1 by the computer program.
In particular, the processor is configured to implement the following steps by computer program execution: acquiring first pose information of a vehicle at a first acquisition moment, and acquiring a local high-precision map corresponding to the first pose information from a cloud module, wherein the local high-precision map comprises road vector feature information; acquiring multi-frame road images and vehicle dead reckoning information acquired in a preset time period, and determining fusion observation characteristic information according to the multi-frame road images and the vehicle dead reckoning information, wherein the vehicle dead reckoning information is used for dead reckoning; registering the road vector characteristic information and the fusion observation characteristic information to obtain registration pose information of the vehicle; and uploading the fusion observation characteristic information to a cloud module when the registration pose information meets preset conditions so as to update the local high-precision map.
The foregoing embodiment numbers of the present application are merely for describing, and do not represent advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present application, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology content may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary, and the division of units may be a logic function division, and there may be another division manner in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution, in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server or a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely a preferred embodiment of the present application and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present application and are intended to be comprehended within the scope of the present application.

Claims (10)

1. A high-precision map updating method, characterized by comprising:
acquiring first pose information of a vehicle at a first acquisition moment, and acquiring a local high-precision map corresponding to the first pose information from a cloud module, wherein the local high-precision map comprises road vector feature information;
acquiring multi-frame road images and vehicle dead reckoning information acquired in a preset time period, and determining fusion observation characteristic information according to the multi-frame road images and the vehicle dead reckoning information, wherein the vehicle dead reckoning information is used for dead reckoning;
registering the road vector characteristic information and the fusion observation characteristic information to obtain registration pose information of the vehicle;
and uploading the fusion observation feature information to the cloud module when the registration pose information meets a preset condition so as to update the local high-precision map.
2. The method of claim 1, wherein acquiring first pose information of the vehicle at a first acquisition time comprises:
acquiring the first pose information of the vehicle from a global satellite navigation system when the first acquisition time is an initial acquisition time; or acquiring the first pose information input by the target object in the human-computer interaction interface of the vehicle;
when the first acquisition time is not the initial acquisition time, acquiring first dead reckoning information of the vehicle at the first acquisition time, acquiring second pose information and second dead reckoning information of the vehicle at a second acquisition time, and determining the first pose information of the vehicle at the first acquisition time according to the second pose information, the first dead reckoning information and the second dead reckoning information; wherein the second acquisition time is the acquisition time immediately preceding the first acquisition time.
3. The method of claim 1, wherein acquiring the plurality of frames of road images and vehicle navigation information acquired over the predetermined period of time comprises:
determining each third acquisition time in the preset time period before the first acquisition time;
Acquiring the road image acquired by a first type sensor at each third acquisition time, wherein the first type sensor comprises one of the following components: a camera, a laser radar;
acquiring the vehicle navigation information acquired by a second type sensor at each third acquisition time, wherein the second type sensor comprises one of the following components: inertial measurement unit, wheel speed meter, speedometer.
4. A method according to claim 3, wherein determining fusion observed characteristic information from the multi-frame road image and the vehicle navigation position information comprises:
extracting first observation feature information in the road image for each frame of the road image, and converting the first observation feature information under a sensor coordinate system into second observation feature information under a vehicle coordinate system according to external parameters of the first sensor;
determining a relative pose between the vehicle navigation position information acquired at the third acquisition time corresponding to the road image and the vehicle navigation position information acquired at the first acquisition time, and converting the second observation characteristic information into third observation characteristic information according to the relative pose;
And fusing the third observation characteristic information corresponding to the road image of each frame to obtain the fused observation characteristic information.
5. The method of claim 1, wherein registering the road vector feature information and the fused observation feature information to obtain registration pose information of the vehicle comprises:
converting the fused observed characteristic information in a vehicle coordinate system into fourth observed characteristic information in a world coordinate system according to the first pose information;
establishing a cost function according to a first reprojection error between the road vector characteristic information and the fourth observation characteristic information;
the registration pose information of the vehicle is determined by minimizing the cost function.
6. The method of claim 5, wherein sending the fused observed feature information to the cloud module when the registration pose information satisfies a preset condition comprises:
converting the fused observed characteristic information in the vehicle coordinate system into fifth observed characteristic information in the world coordinate system according to the registration pose information;
determining a second reprojection error between the road vector characteristic information and the fifth observation characteristic information, and determining the confidence level of the registration pose information according to the second reprojection error;
And when the confidence coefficient is larger than a preset confidence coefficient threshold value and the second reprojection error is larger than a preset error threshold value, the fusion observation feature information is sent to the cloud module.
7. The method of claim 1, wherein uploading the fused observed feature information to the cloud module to update the local high-precision map comprises:
and uploading the fusion observation feature information to the cloud module, wherein the cloud module is used for managing the received multiple groups of fusion observation feature information, counting the occurrence frequency of the same fusion observation feature information, determining the fusion observation feature information with the largest occurrence frequency as target fusion observation feature information, and updating the local high-precision map according to the target fusion observation feature information.
8. The method of claim 1, wherein after obtaining the registration pose information of the vehicle, the method further comprises:
determining target positioning information of the vehicle at the first acquisition time according to the registration pose information; and
displaying the local high-precision map and the target positioning information in a human-machine interaction interface of the vehicle.
9. A high-precision map updating apparatus, characterized by comprising:
a first acquisition module, configured to acquire first pose information of a vehicle at a first acquisition time, and acquire, from a cloud module, a local high-precision map corresponding to the first pose information, wherein the local high-precision map comprises road vector feature information;
a second acquisition module, configured to acquire multi-frame road images and vehicle dead reckoning information collected within a preset time period, and determine fused observation feature information according to the multi-frame road images and the vehicle dead reckoning information;
a registration module, configured to register the road vector feature information with the fused observation feature information to obtain registration pose information of the vehicle; and
an updating module, configured to upload the fused observation feature information to the cloud module when the registration pose information satisfies a preset condition, so as to update the local high-precision map.
10. An in-vehicle apparatus, characterized by comprising: a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute, by means of the computer program, the high-precision map updating method of any one of claims 1 to 8.
CN202310080648.3A 2023-01-16 2023-01-16 High-precision map updating method and device Pending CN116045964A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310080648.3A CN116045964A (en) 2023-01-16 2023-01-16 High-precision map updating method and device

Publications (1)

Publication Number Publication Date
CN116045964A true CN116045964A (en) 2023-05-02

Family

ID=86121831

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310080648.3A Pending CN116045964A (en) 2023-01-16 2023-01-16 High-precision map updating method and device

Country Status (1)

Country Link
CN (1) CN116045964A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116229765A (en) * 2023-05-06 2023-06-06 贵州鹰驾交通科技有限公司 Vehicle-road cooperation method based on digital data processing

Similar Documents

Publication Publication Date Title
JP6812404B2 (en) Methods, devices, computer-readable storage media, and computer programs for fusing point cloud data
CN112116654B (en) Vehicle pose determining method and device and electronic equipment
US8359156B2 (en) Map generation system and map generation method by using GPS tracks
JP6950832B2 (en) Position coordinate estimation device, position coordinate estimation method and program
CN111391823A (en) Multilayer map making method for automatic parking scene
WO2020043081A1 (en) Positioning technique
CN111275960A (en) Traffic road condition analysis method, system and camera
CN113034566B (en) High-precision map construction method and device, electronic equipment and storage medium
CN113406682A (en) Positioning method, positioning device, electronic equipment and storage medium
CN113959457B (en) Positioning method and device for automatic driving vehicle, vehicle and medium
CN115164918A (en) Semantic point cloud map construction method and device and electronic equipment
WO2018131546A1 (en) Information processing device, information processing system, information processing method, and information processing program
CN116045964A (en) High-precision map updating method and device
CN111982132B (en) Data processing method, device and storage medium
CN115344655A (en) Method and device for finding change of feature element, and storage medium
CN111323029B (en) Navigation method and vehicle-mounted terminal
EP3816938A1 (en) Region clipping method and recording medium storing region clipping program
US11815362B2 (en) Map data generation apparatus
CN115112125A (en) Positioning method and device for automatic driving vehicle, electronic equipment and storage medium
CN115841660A (en) Distance prediction method, device, equipment, storage medium and vehicle
JP7429246B2 (en) Methods and systems for identifying objects
CN113034538B (en) Pose tracking method and device of visual inertial navigation equipment and visual inertial navigation equipment
WO2021005073A1 (en) Method for aligning crowd-sourced data to provide an environmental data model of a scene
CN112099481A (en) Method and system for constructing road model
CN111060114A (en) Method and device for generating feature map of high-precision map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination