CN114440860A - Positioning method, positioning device, computer storage medium and processor


Info

Publication number
CN114440860A
CN114440860A
Authority
CN
China
Prior art keywords
semantic information
observation data
pose
registration
vehicle
Prior art date
Legal status
Granted
Application number
CN202210096657.7A
Other languages
Chinese (zh)
Other versions
CN114440860B (en)
Inventor
颜扬治
林宝尉
袁维平
傅文标
Current Assignee
Ecarx Hubei Tech Co Ltd
Original Assignee
Ecarx Hubei Tech Co Ltd
Priority date
Filing date
Publication date
Application filed by Ecarx Hubei Tech Co Ltd
Priority to CN202210096657.7A
Publication of CN114440860A
Application granted
Publication of CN114440860B
Legal status: Active

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/005 - Navigation; navigational instruments with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C 21/1652 - Dead reckoning by integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • G01C 21/1656 - Dead reckoning by integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • G01C 21/20 - Instruments for performing navigational calculations
    • G01C 21/30 - Map- or contour-matching, for navigation in a road network with correlation of data from several navigational instruments
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S 19/47 - Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement, the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial


Abstract

The invention discloses a positioning method, a positioning device, a computer storage medium and a processor. The method comprises the following steps: acquiring pose information of a vehicle; acquiring observation data of the environment where the vehicle is located and extracting first semantic information from the observation data; splicing the first semantic information corresponding to multiple frames of observation data based on the pose information to obtain second semantic information; acquiring a high-precision map, wherein the high-precision map comprises third semantic information; and registering the second semantic information with the third semantic information to obtain a registration result, and positioning the vehicle according to the registration result. The invention solves the technical problem that existing positioning technologies offer low positioning accuracy.

Description

Positioning method, positioning device, computer storage medium and processor
Technical Field
The invention relates to the field of automatic driving, in particular to a positioning method, a positioning device, a computer storage medium and a processor.
Background
Positioning technology is one of the basic, core technologies of robotic applications such as automatic driving: it provides the robot with position and attitude, i.e., pose information. According to the positioning principle, positioning technologies can be divided into geometric positioning, dead reckoning and feature positioning.
Geometric positioning measures distances or angles to reference devices with known positions and then determines the position by geometric calculation. It includes GNSS (Global Navigation Satellite System), UWB (Ultra-Wide Band), Bluetooth, 5G and the like, and provides absolute positioning information. GNSS is the most widely applied of these in intelligent-vehicle applications. GNSS positioning is based on satellite positioning technology and comprises single-point positioning, differential GPS positioning and RTK (Real-Time Kinematic) GPS positioning: single-point positioning provides 3-10 m accuracy, differential GPS provides 0.5-2 m accuracy, and RTK GPS provides centimeter-level accuracy. Its limitation is that it depends on positioning infrastructure and is affected by signal occlusion, reflection and the like, so it fails in scenes such as tunnels and under elevated roads.
Dead reckoning starts from the position at the previous moment and calculates the position at the next moment from the motion data of sensors such as an IMU (Inertial Measurement Unit) and a wheel speed meter; it provides relative positioning information. Its limitation is that the positioning error accumulates as the reckoned distance increases.
Feature positioning first obtains features of the surrounding environment, such as base-station IDs, Wi-Fi fingerprints, images and lidar point clouds, then matches the observed features against a feature map established in advance to determine the position in that map, providing absolute positioning information. The factors that directly affect the accuracy of feature positioning are the number, quality and distinctiveness of the features. Its limitation is that positioning accuracy and stability degrade when the scene or environment affects the feature observation.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides a positioning method, a positioning device, a computer storage medium and a processor, which at least solve the technical problem of low positioning precision of the existing positioning technology.
According to an aspect of an embodiment of the present invention, there is provided a positioning method, including: acquiring pose information of a vehicle; acquiring observation data of an environment where the vehicle is located, and extracting first semantic information from the observation data; based on the pose information, splicing the first semantic information corresponding to multiple frames of observation data to obtain second semantic information; acquiring a high-precision map, wherein the high-precision map comprises third semantic information; and registering the second semantic information and the third semantic information to obtain a registration result, and positioning the vehicle according to the registration result.
Further, splicing the first semantic information corresponding to multiple frames of the observation data based on the pose information to obtain the second semantic information comprises the following steps: determining the pose information corresponding to the multiple frames of observation data; determining a relative pose between first-type observation data and second-type observation data, wherein the first-type and second-type observation data together constitute the multiple frames of observation data and the second-type observation data is the most recently acquired frame among them; converting the first semantic information corresponding to the first-type observation data based on the relative pose to obtain first converted semantic information; and splicing the plurality of first converted semantic information with the first semantic information corresponding to the second-type observation data to obtain the second semantic information.
Further, in the case that the first semantic information corresponding to the same target object appears in multiple frames of the first-type observation data, the method further comprises: acquiring the influence factor corresponding to each piece of first converted semantic information; and calculating the sum of the plurality of first converted semantic information with the influence factors as weights.
Further, in the case that the first semantic information corresponding to the same target object appears in at least one frame of the first-type observation data and in the second-type observation data, the method comprises: acquiring the influence factor corresponding to each piece of first converted semantic information and the influence factor corresponding to the first semantic information of the second-type observation data; and calculating the sum of the at least one piece of first converted semantic information and the first semantic information of the second-type observation data with the influence factors as weights.
Further, registering the second semantic information with the third semantic information to obtain a registration result and positioning the vehicle according to the registration result comprises the following steps: Step 1: performing a first overall semantic registration of the second semantic information and the third semantic information to obtain a first registration result; Step 2: performing local semantic registration based on the first registration result to obtain a second registration result; Step 3: performing a second overall semantic registration based on the second registration result to obtain a third registration result; Step 4: repeatedly executing Step 2 and Step 3 at least once until the cost function representing the third registration result converges, and positioning the vehicle using the third registration result.
Further, performing the first overall semantic registration of the second semantic information and the third semantic information to obtain the first registration result comprises: acquiring first reprojection errors between multiple groups of second semantic information and third semantic information; calculating the error sum of the multiple first reprojection errors with their confidences as weights; and determining the pose corresponding to the smallest error sum as the overall registration pose.
Further, repeatedly executing Step 2 and Step 3 at least once until the cost function representing the third registration result converges, and positioning the vehicle using the third registration result, comprises: acquiring, under the overall registration pose, the second reprojection error between the second semantic information and the third semantic information corresponding to each local object; determining the pose corresponding to the minimum second reprojection error as the local registration pose; determining a pose parameter from the final pose to be solved, the inverse matrix of the overall registration pose and the local registration pose; acquiring the third reprojection error between the second semantic information and the third semantic information corresponding to the pose parameter; and determining the pose corresponding to the minimum third reprojection error as the final pose and positioning the vehicle with the final pose, the third reprojection error being minimum when the cost function converges.
Further, acquiring pose information of the vehicle includes: acquiring pose information of the vehicle by adopting a first type of sensor, wherein the first type of sensor comprises at least one of the following: an inertial measurement unit, a wheel speed meter, the first type of sensor being mounted on the vehicle.
Further, acquiring observation data of the environment where the vehicle is located includes: acquiring observation data of an environment in which the vehicle is located by using a second type of sensor, wherein the second type of sensor comprises at least one of the following: image acquisition equipment, lidar, the second type sensor is installed on the vehicle.
Further, extracting the first semantic information from the observation data includes: preprocessing the observation data to obtain the first semantic information, wherein the preprocessing comprises at least one of the following steps: image recognition processing, image segmentation processing, point cloud processing, and coordinate transformation, wherein the coordinate transformation refers to transformation from a world coordinate system to a vehicle coordinate system.
According to another aspect of the embodiments of the present invention, there is also provided a positioning apparatus, including: a first acquisition unit configured to acquire pose information of a vehicle; the second acquisition unit is used for acquiring observation data of the environment where the vehicle is located and extracting first semantic information from the observation data; the splicing unit is used for splicing the first semantic information corresponding to the observation data of multiple frames based on the pose information to obtain second semantic information; the third acquisition unit is used for acquiring a high-precision map, and the high-precision map comprises third semantic information; and the registration unit is used for registering the second semantic information and the third semantic information to obtain a registration result, and positioning the vehicle according to the registration result.
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, which includes a stored program, wherein when the program runs, the apparatus where the computer-readable storage medium is located is controlled to execute the above positioning method.
According to another aspect of the embodiments of the present invention, there is also provided a processor, configured to execute a program, where the program executes the positioning method described above.
In the embodiment of the invention, the pose information of a vehicle is firstly acquired; acquiring observation data of an environment where the vehicle is located, and extracting first semantic information from the observation data; then, based on the pose information, splicing the first semantic information corresponding to the multi-frame observation data to obtain second semantic information; then, acquiring a high-precision map, wherein the high-precision map comprises third semantic information; and finally, registering the second semantic information and the third semantic information to obtain a registration result, and positioning the vehicle according to the registration result. According to the method and the device, the second semantic information is obtained by splicing the first semantic information corresponding to the multi-frame observation data, and the second semantic information is registered with the third semantic information contained in the high-precision map, so that the aim of positioning the vehicle according to the registration result is fulfilled, the technical effect of accurately positioning the vehicle is achieved, and the technical problem of low positioning precision of the existing positioning technology is solved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
Fig. 1 is a flow chart of a positioning method according to an embodiment of the invention;
Fig. 2 is a schematic diagram of an alternative high-precision map according to an embodiment of the invention;
Fig. 3 is a flow chart of an alternative positioning method according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of an alternative positioning method according to an embodiment of the invention;
Fig. 5 is a schematic view of a positioning device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
In accordance with an embodiment of the present invention, a positioning method is provided. It should be noted that the steps illustrated in the flowchart of the figures may be performed in a computer system, such as one executing a set of computer-executable instructions, and that although a logical order is shown in the flowchart, in some cases the steps may be performed in an order different from the one illustrated or described here.
Fig. 1 is a flow chart of a positioning method according to an embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
and S102, acquiring pose information of the vehicle.
The pose information in the above step may be acquired by an Inertial Measurement Unit (IMU) mounted on the vehicle, or by integrating the data measured by the inertial measurement unit, a wheel speed meter and/or a vehicle speed meter. Of course, the inertial measurement unit, the wheel speed meter and the vehicle speed meter are only examples; those skilled in the art may adopt any suitable sensor for acquiring the pose information.
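For illustration only (not part of the claimed method), a minimal planar dead-reckoning update fusing a wheel-speed measurement and an IMU yaw rate might be sketched in Python as follows; the function name, the Euler-integration model and the 2D simplification are all assumptions:

    import math

    def dr_update(x, y, yaw, v, yaw_rate, dt):
        # Propagate a planar pose one step: v from the wheel speed meter (m/s),
        # yaw_rate from the IMU (rad/s), over a time step dt (s).
        x_new = x + v * math.cos(yaw) * dt
        y_new = y + v * math.sin(yaw) * dt
        yaw_new = yaw + yaw_rate * dt
        return x_new, y_new, yaw_new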
Step S104: acquire observation data of the environment where the vehicle is located, and extract first semantic information from the observation data.
Specifically, the observation data in the above step are 3D observation data.
The observation data may be acquired by a camera, a LiDAR (Light Detection and Ranging) sensor or another sensor mounted on the vehicle, or by a combination of such sensors, which is not specifically limited here. After the observation data of the environment where the vehicle is located are acquired by these sensors, the first semantic information can be extracted by methods such as detection, segmentation and recognition.
In an alternative embodiment, the observation data acquired by the sensors may be converted into the vehicle coordinate system. Specifically, let the observation of sensor A be P_A and the extrinsic parameter of the sensor be T_BA; then the observation in the vehicle coordinate system B is P_B = T_BA * P_A. Written symbolically, the single-frame road observation data are P1, P2, P3, ..., Pm, where each Pi is a semantic element in the single-frame observation. A confidence B1, B2, B3, ..., Bm is assigned to each observed semantic element. The confidence depends on the observation quality of each semantic element in the single-frame observation data. Specifically, within a single-frame observation, the observation quality of a single semantic element Pi is determined by the precision and recall of the semantic detection: assuming the precision and recall of a semantic element are P and R respectively, the weighted harmonic mean (F-measure) is F = ((a^2 + 1) * P * R) / (a^2 * (P + R)), where a is a parameter, and this F value is used as the observation-quality evaluation of the single semantic element, i.e., the confidence Bi.
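A minimal sketch of the two computations above, assuming a homogeneous 4x4 extrinsic matrix T_BA and hypothetical function names:

    import numpy as np

    def to_vehicle_frame(P_A, T_BA):
        # P_B = T_BA * P_A: map Nx3 sensor-frame points into the vehicle
        # frame B using the 4x4 homogeneous extrinsic matrix T_BA.
        P_h = np.hstack([P_A, np.ones((len(P_A), 1))])
        return (T_BA @ P_h.T).T[:, :3]

    def confidence(precision, recall, a=1.0):
        # F = ((a^2 + 1) * P * R) / (a^2 * (P + R)): the weighted harmonic
        # mean used as the observation-quality confidence Bi of one element.
        return ((a * a + 1) * precision * recall) / (a * a * (precision + recall))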
In another alternative example, the interval between adjacent frames is related to parameters such as the sensor's observation distance and sampling density, and in practice it needs to be determined by tuning. For example, the reliable observation distance of lidar can exceed 100 m, but the scan lines become sparser as the observation distance grows. In typical algorithmic practice, therefore, the inter-frame spacing for lidar may be taken between 5 m and 100 m, e.g., 5 m, 10 m, 20 m, 30 m, 40 m, 50 m, 60 m, 70 m, 80 m, 90 m or 100 m.
Step S106: splice the first semantic information corresponding to the multiple frames of observation data based on the pose information to obtain second semantic information.
The second semantic information may be obtained by deriving the relative poses from the pose information corresponding to the multiple frames of observation data, converting the first semantic information according to the relative poses, and accumulating the converted semantic information with the first semantic information corresponding to the latest frame of observation data; the accumulated result is the second semantic information.
Step S108: acquire a high-precision map, wherein the high-precision map comprises third semantic information.
The high-precision map in the above step may be a map built from road information collected by high-precision positioning devices and sensors. It stores the third semantic information as vector information, including but not limited to road-surface objects such as lamp posts, guideboards and road edges, and road-surface markings such as solid lines, dashed lines, arrows, zebra crossings, stop lines and characters. Fig. 2 shows a typical high-precision map.
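For illustration, one vectorized semantic element of such a map could be modeled as below; the field names are assumptions, not the patent's schema:

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class MapElement:
        # One vectorized semantic element of the high-precision map.
        element_id: int
        kind: str            # e.g. "lamp_post", "guideboard", "stop_line"
        points: np.ndarray   # Nx3 vertices of the element in map coordinates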
Step S110: register the second semantic information with the third semantic information to obtain a registration result, and position the vehicle according to the registration result.
The registration result in the above step may be the most accurate result obtained through multiple registrations: for example, a first registration result obtained by overall semantic registration, a second registration result obtained by local semantic registration based on the first, and a third registration result obtained by a further overall semantic registration based on the second. The third registration result may be regarded as the most accurate registration result in the current state, and the vehicle can be positioned based on it.
In the embodiment of the invention, the pose information of a vehicle is firstly acquired; acquiring observation data of an environment where the vehicle is located, and extracting first semantic information from the observation data; then, based on the pose information, splicing the first semantic information corresponding to the multi-frame observation data to obtain second semantic information; then, acquiring a high-precision map, wherein the high-precision map comprises third semantic information; and finally, registering the second semantic information and the third semantic information to obtain a registration result, and positioning the vehicle according to the registration result. According to the method, the second semantic information is obtained by splicing the first semantic information corresponding to the multi-frame observation data, and the second semantic information is registered with the third semantic information contained in the high-precision map, so that the aim of positioning the vehicle according to the registration result is fulfilled, the technical effect of accurately positioning the vehicle is achieved, and the technical problem of low positioning precision of the existing positioning technology is solved.
Through the above steps, a higher-accuracy positioning method can be realized. The method obtains road observation information through sensors and derives road semantic information through algorithmic processing, then registers it against the semantic information stored in advance in a High-precision Map (HD Map) to obtain the positioning pose. In the registration process, elastic registration is realized through overall semantic registration plus local semantic registration, so as to suppress noise in both the observation and the map, achieving high-accuracy and highly robust registration positioning. Because semantic information is used for registration positioning, and semantic information is more stable than generic feature information, the method is less easily disturbed by factors such as scene and environment. Compared with a traditional navigation map, the high-precision map provides higher accuracy and more information while occupying less space, since it directly stores semantic vector information. The elastic registration considers both the overall and the local registration between the current observation and the map information and can effectively suppress various kinds of noise, thereby achieving the technical effect of accurately positioning the vehicle.
Further, splicing the first semantic information corresponding to the multiple frames of observation data based on the pose information to obtain the second semantic information comprises the following steps: determining the pose information corresponding to the multiple frames of observation data; determining the relative pose between the first-type observation data and the second-type observation data, wherein the first-type and second-type observation data constitute the multiple frames of observation data and the second-type observation data is the most recently acquired frame among them; converting the first semantic information corresponding to the first-type observation data based on the relative pose to obtain first converted semantic information; and splicing the plurality of first converted semantic information with the first semantic information corresponding to the second-type observation data to obtain the second semantic information. The most recently acquired frame means the frame obtained last. For example, if 10 frames of observation data are acquired between 2:00 and 2:10, the frame acquired at 2:10 is taken as the second-type observation data.
Determining the pose information corresponding to the multiple frames of observation data in the above step may be done by checking whether the timestamp of each frame equals the timestamp of a piece of pose information; if the timestamps are equal, the correspondence is established. The relative pose in the above step may be obtained by DR (Dead Reckoning). Further, the relative pose here may be the relative pose from a point a to a point b provided by DR, expressed in the DR coordinate system. It should be noted that this coordinate system is defined by DR; generally the pose at which DR acquires the first frame of observation data is taken as the origin. If the pose at point a is Ta and the pose at point b is Tb, then the relative pose between a and b is Tba = Ta^-1 * Tb, where Ta^-1 denotes the inverse matrix of Ta.
In an optional embodiment, the acquired multiple frames of observation data are F1, F2, ..., Fn, with corresponding pose information T1, T2, ..., Tn, where Fn is the latest frame (i.e., the second-type observation data). The relative pose between each earlier frame and the latest frame is computed: for the i-th frame, the relative pose is Tni = Ti^-1 * Tn, and the observation converted into the latest frame is nFi = Tni * Fi. The plurality of first converted semantic information and the first semantic information corresponding to the second-type observation data are then spliced to obtain the second semantic information.
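A sketch of this stitching step under the conventions above (Tba = Ta^-1 * Tb, frame n latest); the names and the point-set representation of semantic information are assumptions:

    import numpy as np

    def stitch_to_latest(frames, poses):
        # frames: semantic point sets F_1..F_n (each Nx3), F_n the latest;
        # poses: corresponding 4x4 DR poses T_1..T_n in the DR coordinate system.
        T_n = poses[-1]
        parts = [frames[-1]]                       # first semantic info of the latest frame
        for F_i, T_i in zip(frames[:-1], poses[:-1]):
            T_ni = np.linalg.inv(T_i) @ T_n        # relative pose Tni = Ti^-1 * Tn
            F_h = np.hstack([F_i, np.ones((len(F_i), 1))])
            parts.append((T_ni @ F_h.T).T[:, :3])  # nFi = Tni * Fi
        return np.vstack(parts)                    # the second semantic information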
In another optional embodiment, take the street lamps photographed while a car drives along a road as an example. If the street-lamp image captured at the current moment is the most recently acquired frame among the multiple frames of observation data, that image is the second-type observation data, and the several earlier frames captured at certain time intervals before it are the first-type observation data. In this case, the multiple pieces of first-type observation data may show the street lamp in different forms during driving. The first semantic information corresponding to the first-type observation data is converted to obtain the first converted semantic information, and the plurality of first converted semantic information are accumulated with the first semantic information corresponding to the second-type observation data; this yields the most comprehensive information about the street lamp, i.e., the second semantic information.
Further, in the case that the first semantic information corresponding to the same target object appears in multiple frames of the first-type observation data, the method further comprises: acquiring the influence factor corresponding to each piece of first converted semantic information; and calculating the sum of the plurality of first converted semantic information with the influence factors as weights. Specifically, when the same target object (for example, a street lamp) appears in multiple frames of first-type observation data, the same street lamp shows different parts and/or forms in the different frames. In this case, a corresponding influence factor is assigned to the first converted semantic information of each frame of first-type observation data, and the sum of the plural first converted semantic information is then computed with the influence factors as weights, so that accurate semantic information of the target object can be obtained.
Further, in the case that the first semantic information corresponding to the same target object appears both in at least one frame of first-type observation data and in the second-type observation data, the influence factor corresponding to each piece of first converted semantic information and the influence factor corresponding to the first semantic information of the second-type observation data are acquired, and the sum of the at least one piece of first converted semantic information and the first semantic information of the second-type observation data is calculated with the influence factors as weights. Specifically, when the same target object (for example, a street lamp) appears in at least one frame of the first-type observation data and in the second-type observation data, the parts and/or forms of the street lamp shown in the two kinds of observation data differ, so the corresponding influence factors differ as well; computing the weighted sum with these influence factors yields accurate semantic information of the target object.
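A minimal sketch of this weighted fusion; that the arrays are equally shaped (corresponding point-wise) and that the weights are normalized are assumptions added here:

    import numpy as np

    def fuse_same_target(converted_sems, influence_factors):
        # converted_sems: equally-shaped arrays describing the same target
        # object in different frames; influence_factors: one weight each.
        w = np.asarray(influence_factors, dtype=float)
        w = w / w.sum()   # normalization is an assumption, not stated above
        return sum(wk * s for wk, s in zip(w, converted_sems))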
Further, registering the second semantic information with the third semantic information to obtain a registration result and positioning the vehicle according to the registration result comprises the following steps: Step 1: performing a first overall semantic registration of the second semantic information and the third semantic information to obtain a first registration result; Step 2: performing local semantic registration based on the first registration result to obtain a second registration result; Step 3: performing a second overall semantic registration based on the second registration result to obtain a third registration result; Step 4: repeatedly executing Step 2 and Step 3 at least once until the cost function representing the third registration result converges, and positioning the vehicle using the third registration result. That is, the first overall semantic registration is performed first, then local semantic registration and a second overall semantic registration; the local and second overall semantic registrations are then repeated, i.e., iterated, until the cost function representing the third registration result converges, and the vehicle is positioned with the third registration result so as to achieve accurate positioning.
Further, performing the first overall semantic registration of the second semantic information and the third semantic information to obtain the first registration result comprises: acquiring first reprojection errors between multiple groups of second semantic information and third semantic information; calculating the error sum of the multiple first reprojection errors with their confidences as weights; and determining the pose corresponding to the minimum error sum as the overall registration pose. That is, the error sum is formed in the spirit of a weighted average, the minimum error sum is found through multiple iterative calculations, and the pose corresponding to that minimum error sum is taken as the overall registration pose.
In an optional embodiment, a first overall semantic registration of the multi-frame road observation data against the high-precision map yields the overall registration pose TWB. Assume the second semantic information corresponding to the multi-frame road observation data is P1, P2, P3, ..., Pn and the third semantic information of the high-precision map is M1, M2, M3, ..., Mn. The registration process solves for an optimal pose TWB such that the distance between the second semantic information in the road observation and the corresponding third semantic information in the map is minimal. Formulated, the cost function is:

F(TWB) = SUM(DIST(TWB * Pi, Mi)),

where DIST(*) denotes the reprojection error between the second semantic information Pi and the third semantic information Mi, i.e., the first reprojection error, and SUM(*) denotes the confidence-weighted sum over the first reprojection errors.

The optimization can be expressed as: TWB = argmin(F(TWB)),

where argmin(*) means solving for the optimal TWB that minimizes the value of the cost function; that is, the minimum error sum is obtained through multiple iterative calculations, and the pose corresponding to the minimum error sum is taken as the overall registration pose.
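One possible realization of F(TWB) = SUM(DIST(TWB * Pi, Mi)) and its argmin is sketched below; the planar (x, y, yaw) pose parametrization, the point-to-point stand-in for DIST, the known correspondences and the use of a generic optimizer are all assumptions, not the patent's prescription:

    import numpy as np
    from scipy.optimize import minimize

    def se2(params):
        # (x, y, yaw) -> 3x3 homogeneous transform; a planar simplification.
        x, y, yaw = params
        c, s = np.cos(yaw), np.sin(yaw)
        return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

    def cost(params, obs, maps, conf):
        # F(TWB) = SUM_i Bi * DIST(TWB * Pi, Mi), with point-to-point
        # distances standing in for the reprojection error DIST.
        T = se2(params)
        total = 0.0
        for P, M, B in zip(obs, maps, conf):     # P, M: matched Nx2 point sets
            P_h = np.hstack([P, np.ones((len(P), 1))])
            P_w = (T @ P_h.T).T[:, :2]
            total += B * np.linalg.norm(P_w - M, axis=1).sum()
        return total

    def register_overall(obs, maps, conf, init=(0.0, 0.0, 0.0)):
        # TWB = argmin(F(TWB)); returns the pose and the attained cost.
        res = minimize(cost, np.asarray(init, dtype=float), args=(obs, maps, conf))
        return se2(res.x), res.fun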
Further, repeatedly executing Step 2 and Step 3 at least once until the cost function representing the third registration result converges, and positioning the vehicle using the third registration result, comprises: acquiring, under the overall registration pose, the second reprojection error between the second semantic information and the third semantic information corresponding to each local object; determining the pose corresponding to the minimum second reprojection error as the local registration pose; determining a pose parameter from the final pose to be solved, the inverse matrix of the overall registration pose and the local registration pose; acquiring the third reprojection error between the second semantic information and the third semantic information corresponding to the pose parameter; and determining the pose corresponding to the minimum third reprojection error as the final pose, positioning the vehicle with the final pose, the third reprojection error being minimum when the cost function converges. That is, for each local object under the overall registration pose, the second reprojection error between the corresponding second and third semantic information is obtained, and iterative computation takes the pose with the minimum second reprojection error as the local registration pose; continued iterative computation then finds the pose with the minimum third reprojection error.
In an optional embodiment, local semantic registration is performed on the basis of the overall registration pose TWB: each local object under the overall registration pose is registered locally. Formulated, the cost function of the local registration is:

F(TWB-i) = DIST(TWB-i * Pi, Mi).

Note that this cost function is defined per local object i.

The optimization can be expressed as: TWB-i = argmin(F(TWB-i)),

where DIST(*) is the second reprojection error described above, and argmin(*) means solving for the optimal TWB-i that minimizes the value of the cost function; that is, the pose corresponding to the minimum second reprojection error is determined as the local registration pose.
In an alternative embodiment, the second overall semantic registration is then performed. Formulated:

F(TWB-final) = SUM(DIST(TWB-final * TWB-inverse * TWB-i * Pi, Mi)),

where TWB-final is the pose parameter to be solved, TWB-inverse is the inverse matrix of the overall registration pose, and TWB-i is the local registration pose.

The optimization can be expressed as:

TWB-final = argmin(F(TWB-final)),

where argmin(*) means solving for the optimal TWB-final that minimizes the value of the cost function; that is, the pose corresponding to the minimum third reprojection error is determined as the final pose.
Further, acquiring the pose information of the vehicle comprises: acquiring the pose information of the vehicle with a first type of sensor mounted on the vehicle, the first type of sensor comprising at least one of an inertial measurement unit and a wheel speed meter. That is, the pose information of the vehicle can be acquired by sensors such as an inertial measurement unit and a wheel speed meter installed on the vehicle.
Further, acquiring the observation data of the environment where the vehicle is located comprises: acquiring the observation data with a second type of sensor mounted on the vehicle, the second type of sensor comprising at least one of an image acquisition device and a lidar. That is, the observation data of the environment can be acquired by sensors such as an image acquisition device and a laser radar.
Further, extracting the first semantic information from the observation data comprises: preprocessing the observation data to obtain the first semantic information, the preprocessing comprising at least one of image recognition, image segmentation, point cloud processing and coordinate transformation, where the coordinate transformation refers to the transformation from the world coordinate system to the vehicle coordinate system. That is, the first semantic information is obtained using image processing, point-cloud processing and coordinate-transformation algorithms. Further, performing the second overall semantic registration based on the second registration result to obtain the third registration result comprises the following steps: acquiring the third reprojection error between the second semantic information and the third semantic information corresponding to a product quantity, wherein the product quantity is the product of the final pose to be solved, the inverse matrix of the overall registration pose and the local registration pose; and determining the pose corresponding to the minimum third reprojection error as the final pose.
This embodiment relates to a specific positioning method, as shown in Fig. 3, which comprises the following steps:
Step S301: localization initialization. An initial predicted pose is provided by another absolute positioning source, such as GNSS, and the vehicle pose information is acquired through elastic registration of the observation data with the high-precision map.
Step S302: acquire the predicted pose of the current frame. The predicted pose for the current observation is obtained from the relative pose combined with the vehicle pose produced by the registration of the previous frame (if the previous frame is the initialization frame, the initialization pose is used).
Step S303: acquire the vehicle positioning pose. Guided by the predicted pose, elastic registration of the 3D observation with the map is performed to acquire the vehicle pose, i.e., the positioning pose to be solved at the current frame time.
Step S304: repeat steps S302 and S303 in a loop to realize continuous positioning.
In the above steps, as shown in Fig. 4, the vehicle is equipped with sensors such as an IMU, a camera, a LiDAR and a wheel speed meter or speedometer. The relative pose of the vehicle is computed by DR, the initial predicted pose is provided by GNSS, and the vehicle pose information is obtained through elastic registration of the observation data with the high-precision map. Finally, guided by the predicted pose, elastic registration of the 3D observation with the map is performed to acquire the vehicle pose and determine the positioning pose at the current frame time, realizing accurate continuous positioning.
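The loop of steps S301-S304 could be organized as below; this is a schematic only, and every object and method name here is a placeholder rather than an interface defined by the patent:

    def localize_continuously(dr, sensors, hd_map, gnss, registrar):
        # S301: initialize from an absolute source (e.g. GNSS) plus one
        # elastic registration against the high-precision map.
        pose = registrar.elastic_register(sensors.observe(), hd_map,
                                          init=gnss.initial_pose())
        while True:
            # S302: predicted pose = previous registered pose composed with
            # the DR relative pose.
            predicted = pose @ dr.relative_pose()
            # S303: elastic registration of the 3D observation with the map,
            # guided by the predicted pose.
            pose = registrar.elastic_register(sensors.observe(), hd_map,
                                              init=predicted)
            # S304: loop for continuous positioning.
            yield pose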
Example 2
According to the embodiment of the present invention, there is also provided a positioning apparatus, which can execute the positioning method in the foregoing embodiment, and the specific implementation manner and the preferred application scenario are the same as those in the foregoing embodiment, and are not described herein again.
Fig. 5 is a schematic view of a positioning apparatus according to an embodiment of the present invention, as shown in fig. 5, the apparatus including:
a first acquisition unit 50 for acquiring pose information of the vehicle;
a second obtaining unit 51, configured to obtain observation data of an environment where the vehicle is located, and extract first semantic information from the observation data;
the splicing unit 52 is configured to splice the first semantic information corresponding to the multiple frames of observation data based on the pose information to obtain second semantic information;
a third obtaining unit 53, configured to obtain a high-precision map, where the high-precision map includes third semantic information;
and the registration unit 54 is configured to register the second semantic information and the third semantic information to obtain a registration result, and position the vehicle according to the registration result.
Further, the splicing unit includes: the first determination module is used for determining the pose information corresponding to multi-frame observation data; the second determination module is used for determining the relative pose between the first type of observation data and the second type of observation data, wherein the first type of observation data and the second type of observation data form multi-frame observation data, and the second type of observation data is a frame of observation data which is obtained latest in the multi-frame observation data; the conversion module is used for converting first semantic information corresponding to the first type of observation data based on the relative pose to obtain first converted semantic information; and the accumulation module is used for accumulating the plurality of first converted semantic information and the first semantic information corresponding to the second type of observation data to obtain second semantic information.
Further, the splicing unit includes: the first acquisition module is used for acquiring the influence factors corresponding to the first conversion semantic information; and the calculation module is used for calculating the sum of the plurality of first conversion semantic information by taking the influence factor as weight.
Further, the splicing unit further comprises: a second obtaining module, configured to obtain an influence factor corresponding to each piece of the first converted semantic information, and an influence factor corresponding to the first semantic information corresponding to the second type of observation data; and the second calculation module is used for calculating the sum of at least one first conversion semantic information and the first semantic information corresponding to the second type of observation data by taking the influence factor as weight.
Further, the registration unit includes: the first registration module is used for carrying out primary integral semantic registration on the second semantic information and the third semantic information to obtain a first registration result; the second registration module is used for carrying out local semantic registration based on the first registration result to obtain a second registration result; the third registration module is used for carrying out second integral semantic registration based on the second registration result to obtain a third registration result; and the positioning module is used for positioning the vehicle by adopting the third registration result until the cost function representing the third registration result is converged.
Further, the first registration module comprises: the first obtaining submodule is used for obtaining a first reprojection error between a plurality of groups of second semantic information and third semantic information; the first calculation submodule is used for calculating the error sum of a plurality of first reprojection errors with confidence degrees as weights; and the first determining submodule is used for determining the minimum error and the corresponding pose as an integral registration pose.
Further, the positioning module comprises: the second acquisition submodule is used for acquiring a second reprojection error between each local object and corresponding second semantic information and third semantic information under the overall registration pose; the second determining submodule is used for determining the pose corresponding to the minimum second reprojection error as a local registration pose; the third obtaining submodule is used for obtaining a third reprojection error between the second semantic information and the third semantic information corresponding to the pose parameter; and the third determining submodule determines the pose corresponding to the minimum third reprojection error as the final pose, positions the vehicle by adopting the final pose, and minimizes the third reprojection error when the cost function is converged.
Further, the first acquiring unit includes a first processing module configured to acquire pose information of the vehicle using a first type of sensor, where the first type of sensor includes at least one of: the inertial measurement unit, the wheel speed meter, the first kind sensor is installed on the vehicle.
Further, the second obtaining unit includes a second processing module, configured to obtain observation data of an environment where the vehicle is located by using a second type of sensor, where the second type of sensor includes at least one of: image acquisition equipment, lidar, second class sensor are installed on the vehicle.
Further, the second obtaining unit further includes a third processing module, configured to perform preprocessing on the observation data to obtain the first semantic information, where the preprocessing includes at least one of: the method comprises the steps of image identification processing, image segmentation processing, point cloud processing and coordinate transformation, wherein the coordinate transformation refers to the transformation from a world coordinate system to a vehicle coordinate system.
In the embodiment of the invention, the first acquisition unit is used for acquiring the pose information of the vehicle; the second acquisition unit is used for acquiring observation data of the environment where the vehicle is located and extracting first semantic information from the observation data; the splicing unit is used for splicing the first semantic information corresponding to the multi-frame observation data based on the pose information to obtain second semantic information; the third acquisition unit is used for acquiring a high-precision map, and the high-precision map comprises third semantic information; the registration unit is used for registering the second semantic information and the third semantic information to obtain a registration result, and positioning the vehicle according to the registration result. According to the method, the second semantic information is obtained by splicing the first semantic information corresponding to the multi-frame observation data, and the second semantic information is registered with the third semantic information contained in the high-precision map, so that the aim of positioning the vehicle according to the registration result is fulfilled, the technical effect of accurately positioning the vehicle is achieved, and the technical problem of low positioning precision of the existing positioning technology is solved.
Example 3
According to an embodiment of the present invention, there is also provided a computer-readable storage medium including a stored program, wherein when the program runs, the device on which the computer-readable storage medium resides is controlled to execute the positioning method of the foregoing embodiment 1.
Example 4
According to an embodiment of the present invention, there is also provided a processor configured to run a program, wherein the program, when running, executes the positioning method of embodiment 1.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be an indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that those skilled in the art can make various modifications and refinements without departing from the principle of the present invention, and such modifications and refinements should also fall within the protection scope of the present invention.

Claims (13)

1. A method of positioning, comprising:
acquiring pose information of a vehicle;
acquiring observation data of an environment where the vehicle is located, and extracting first semantic information from the observation data;
based on the pose information, splicing the first semantic information corresponding to multiple frames of observation data to obtain second semantic information;
acquiring a high-precision map, wherein the high-precision map comprises third semantic information;
and registering the second semantic information and the third semantic information to obtain a registration result, and positioning the vehicle according to the registration result.
2. The method according to claim 1, wherein splicing, based on the pose information, the first semantic information corresponding to multiple frames of the observation data to obtain the second semantic information comprises:
determining the pose information corresponding to a plurality of frames of the observation data;
determining a relative pose between a first type of observation data and a second type of observation data, wherein the first type of observation data and the second type of observation data constitute the multiple frames of observation data, and the second type of observation data is the most recently acquired frame among the multiple frames of observation data;
converting the first semantic information corresponding to the first type of observation data based on the relative pose to obtain first converted semantic information;
and splicing the plurality of first converted semantic information and the first semantic information corresponding to the second type of observation data to obtain the second semantic information.
3. The method according to claim 2, wherein in a case where the first semantic information corresponding to the same target object appears in multiple frames of the first type of observation data, the method further comprises:
acquiring an influence factor corresponding to each piece of first converted semantic information;
and calculating the weighted sum of the plurality of pieces of first converted semantic information, with the influence factors as weights.
4. The method according to claim 2, wherein, in a case where the first semantic information corresponding to the same target object appears both in at least one frame of the first type of observation data and in the second type of observation data, the method further comprises:
obtaining an influence factor corresponding to each piece of first converted semantic information and an influence factor corresponding to the first semantic information corresponding to the second type of observation data;
and calculating the weighted sum of the at least one piece of first converted semantic information and the first semantic information corresponding to the second type of observation data, with the influence factors as weights.
5. The method according to claim 1, wherein registering the second semantic information and the third semantic information to obtain a registration result and positioning the vehicle according to the registration result comprises the following steps:
Step 1: performing a first overall semantic registration on the second semantic information and the third semantic information to obtain a first registration result;
Step 2: performing local semantic registration based on the first registration result to obtain a second registration result;
Step 3: performing a second overall semantic registration based on the second registration result to obtain a third registration result;
Step 4: repeating step 2 and step 3 at least once until the cost function representing the third registration result converges, and positioning the vehicle using the third registration result.
6. The method according to claim 5, wherein performing the first overall semantic registration on the second semantic information and the third semantic information to obtain the first registration result comprises:
acquiring first reprojection errors between multiple groups of the second semantic information and the third semantic information;
calculating the error sum of the multiple first reprojection errors, with confidence values as weights;
and determining the pose corresponding to the minimum error sum as an overall registration pose.
7. The method according to claim 6, wherein repeating step 2 and step 3 at least once until the cost function representing the third registration result converges, and positioning the vehicle using the third registration result, comprises:
acquiring a second reprojection error between the second semantic information and the third semantic information corresponding to each local object under the overall registration pose;
determining the pose corresponding to the minimum second reprojection error as a local registration pose;
determining pose parameters according to the final pose to be solved, the inverse matrix of the overall registration pose and the local registration pose;
acquiring a third reprojection error between the second semantic information and the third semantic information corresponding to the pose parameter;
and determining the pose corresponding to the minimum third reprojection error as the final pose, positioning the vehicle using the final pose, the third reprojection error being minimal when the cost function converges.
8. The method according to any one of claims 1 to 7, wherein acquiring pose information of a vehicle includes:
acquiring the pose information of the vehicle by using a first type of sensor, wherein the first type of sensor comprises at least one of the following: an inertial measurement unit and a wheel speed meter, the first type of sensor being mounted on the vehicle.
9. The method of any one of claims 1 to 7, wherein obtaining observation data of an environment in which the vehicle is located comprises:
acquiring the observation data of the environment in which the vehicle is located by using a second type of sensor, wherein the second type of sensor comprises at least one of the following: an image acquisition device and a lidar, the second type of sensor being mounted on the vehicle.
10. The method according to any one of claims 1 to 7, wherein extracting first semantic information from the observation data comprises:
preprocessing the observation data to obtain the first semantic information, wherein the preprocessing comprises at least one of the following steps: image recognition processing, image segmentation processing, point cloud processing, and coordinate transformation, wherein the coordinate transformation refers to transformation from a world coordinate system to a vehicle coordinate system.
11. A positioning device, comprising:
a first acquisition unit configured to acquire pose information of a vehicle;
a second acquisition unit configured to acquire observation data of an environment where the vehicle is located and extract first semantic information from the observation data;
a splicing unit configured to splice, based on the pose information, the first semantic information corresponding to multiple frames of the observation data to obtain second semantic information;
a third acquisition unit configured to acquire a high-precision map, wherein the high-precision map comprises third semantic information;
and a registration unit configured to register the second semantic information and the third semantic information to obtain a registration result, and to position the vehicle according to the registration result.
12. A computer-readable storage medium, comprising a stored program, wherein the program, when executed, controls an apparatus in which the computer-readable storage medium is located to perform the method of any one of claims 1 to 10.
13. A processor, characterized in that the processor is configured to run a program, wherein the program when running performs the method of any of claims 1 to 10.
CN202210096657.7A 2022-01-26 2022-01-26 Positioning method, positioning device, computer storage medium and processor Active CN114440860B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210096657.7A CN114440860B (en) 2022-01-26 2022-01-26 Positioning method, positioning device, computer storage medium and processor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210096657.7A CN114440860B (en) 2022-01-26 2022-01-26 Positioning method, positioning device, computer storage medium and processor

Publications (2)

Publication Number Publication Date
CN114440860A true CN114440860A (en) 2022-05-06
CN114440860B (en) 2024-07-19

Family

ID=81369703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210096657.7A Active CN114440860B (en) 2022-01-26 2022-01-26 Positioning method, positioning device, computer storage medium and processor

Country Status (1)

Country Link
CN (1) CN114440860B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107144285A (en) * 2017-05-08 2017-09-08 深圳地平线机器人科技有限公司 Posture information determines method, device and movable equipment
CN110232084A (en) * 2019-06-19 2019-09-13 河北工业大学 The approximate pattern matching method integrally constrained with part-
CN110264502A (en) * 2019-05-17 2019-09-20 华为技术有限公司 Point cloud registration method and device
CN111435537A (en) * 2019-01-13 2020-07-21 北京初速度科技有限公司 Model training method and device and pose optimization method and device based on splicing map
CN111524168A (en) * 2020-04-24 2020-08-11 中国科学院深圳先进技术研究院 Point cloud data registration method, system and device and computer storage medium
CN113554698A (en) * 2020-04-23 2021-10-26 杭州海康威视数字技术股份有限公司 Vehicle pose information generation method and device, electronic equipment and storage medium
CN113834492A (en) * 2021-09-22 2021-12-24 广州小鹏自动驾驶科技有限公司 Map matching method, system, device and readable storage medium


Also Published As

Publication number Publication date
CN114440860B (en) 2024-07-19

Similar Documents

Publication Publication Date Title
CN108152831B (en) Laser radar obstacle identification method and system
CN109059906B (en) Vehicle positioning method and device, electronic equipment and storage medium
CN110617821B (en) Positioning method, positioning device and storage medium
CN112116654B (en) Vehicle pose determining method and device and electronic equipment
CN109710724B (en) A kind of method and apparatus of building point cloud map
CN111391823A (en) Multilayer map making method for automatic parking scene
EP3137850A1 (en) Method and system for determining a position relative to a digital map
CN111830953A (en) Vehicle self-positioning method, device and system
CN113933818A (en) Method, device, storage medium and program product for calibrating laser radar external parameter
CN110187375A (en) A kind of method and device improving positioning accuracy based on SLAM positioning result
CN111127584A (en) Method and device for establishing visual map, electronic equipment and storage medium
CN112455502B (en) Train positioning method and device based on laser radar
CN115494533A (en) Vehicle positioning method, device, storage medium and positioning system
CN115900712A (en) Information source reliability evaluation combined positioning method
JP6828448B2 (en) Information processing equipment, information processing systems, information processing methods, and information processing programs
CN113838129B (en) Method, device and system for obtaining pose information
CN112965076A (en) Multi-radar positioning system and method for robot
CN112184906A (en) Method and device for constructing three-dimensional model
KR102130687B1 (en) System for information fusion among multiple sensor platforms
CN116045964A (en) High-precision map updating method and device
CN114440860B (en) Positioning method, positioning device, computer storage medium and processor
CN115127538A (en) Map updating method, computer equipment and storage device
CN114281832A (en) High-precision map data updating method and device based on positioning result and electronic equipment
CN112050830B (en) Motion state estimation method and device
CN114283397A (en) Global relocation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant