CN110967011A - Positioning method, device, equipment and storage medium - Google Patents

Positioning method, device, equipment and storage medium

Info

Publication number
CN110967011A
CN110967011A (application CN201911360256.2A)
Authority
CN
China
Prior art keywords
grid map
matching probability
determining
probability distribution
grid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911360256.2A
Other languages
Chinese (zh)
Other versions
CN110967011B (en)
Inventor
韩升升
王维
邓海林
赵哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhijia Usa
Suzhou Zhijia Technology Co Ltd
Original Assignee
Suzhou Zhijia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Zhijia Technology Co Ltd
Priority to CN201911360256.2A
Publication of CN110967011A
Application granted
Publication of CN110967011B
Legal status: Active (granted)
Anticipated expiration

Classifications

    • G01C21/165 — Navigation by dead reckoning (integrating acceleration or speed, i.e. inertial navigation) combined with non-inertial navigation instruments
    • G01C21/30 — Map- or contour-matching
    • G01S17/88 — Lidar systems specially adapted for specific applications
    • G01S7/4802 — Details of lidar systems using analysis of the echo signal for target characterisation; target signature; target cross-section

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

The application discloses a positioning method, apparatus, device, and storage medium, and belongs to the technical field of positioning. The method comprises the following steps: predicting the pose information of an autonomous vehicle at the current time to obtain predicted pose information; generating, according to the predicted pose information, a first grid map comprising a plurality of grids, where each grid corresponds to an echo reflection intensity mean and an elevation mean; determining a first matching probability distribution between the first grid map and a global offline grid map according to the elevation means of the grids in the first grid map and the global offline grid map; determining a second matching probability distribution between the first grid map and the global offline grid map according to the echo reflection intensity means of the lane-containing grids in the first grid map and the global offline grid map; and determining the position information of the autonomous vehicle in the global offline grid map at the current time based on the first matching probability distribution and the second matching probability distribution. The method improves positioning accuracy.

Description

Positioning method, device, equipment and storage medium
Technical Field
The present application relates to the field of positioning technologies, and in particular, to a positioning method, apparatus, device, and storage medium.
Background
An autonomous vehicle is an intelligent vehicle capable of unmanned driving. During driving, its position in a pre-constructed global offline grid map can be determined in real time so that the driving path can be adjusted and planned in real time.
In the related art, a lidar is generally used to acquire point cloud data at the current time, and a grid map is determined from the acquired point cloud data, where each grid in the map stores the mean elevation of all points falling in that grid. The mean elevation of each grid in this map is matched against the mean elevations of the grids of a pre-constructed global offline grid map, and the position of the autonomous vehicle in the global offline grid map is then determined from the matching result, thereby localizing the vehicle.
However, this method positions the autonomous vehicle using only the mean elevation of each grid. Because the information used is relatively limited, the positioning of the autonomous vehicle may be inaccurate.
Disclosure of Invention
The application provides a positioning method, a positioning device, positioning equipment and a storage medium, which can solve the problem of inaccurate positioning of an automatic driving vehicle in the related art. The technical scheme is as follows:
in one aspect, a positioning method is provided, which is applied to an autonomous vehicle, and the method includes:
predicting the pose information of the automatic driving vehicle at the current moment to obtain predicted pose information;
generating a first grid map according to the predicted pose information, wherein the first grid map comprises a plurality of grids, each grid corresponds to an echo reflection intensity mean value and an elevation mean value, the echo reflection intensity mean value is an average value of echo reflection intensity values of all points in a single grid, and the elevation mean value is an average value of elevation values of all points in the single grid;
determining a first matching probability distribution of the first grid map and a global off-line grid map according to the elevation mean value of grids in the first grid map and the global off-line grid map;
determining a second matching probability distribution of the first grid map and the global off-line grid map according to the echo reflection intensity mean value of grids including lanes in the first grid map and the global off-line grid map;
determining position information of the autonomous vehicle in the global offline grid map at the current moment based on the first matching probability distribution and the second matching probability distribution.
In one possible implementation manner of the present application, the predicting the pose information of the autonomous vehicle at the current time includes:
predicting the pose information of the autonomous vehicle at the current time according to the speed of the autonomous vehicle, IMU (Inertial Measurement Unit) data, the historical pose information at the previous time, and the time difference between the current time and the previous time, wherein the historical pose information comprises historical position information and historical attitude information.
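The dead-reckoning step above can be sketched as follows. This is a minimal planar (x, y, yaw) sketch assuming a constant-speed, constant-yaw-rate motion model over the interval; the patent does not give the exact prediction equations, and the function and variable names are illustrative.

```python
import math

def predict_pose(prev_pose, speed, yaw_rate, dt):
    """Dead-reckon the pose at the current time from the previous pose.

    prev_pose: (x, y, yaw) at the previous time
    speed:     vehicle speed (m/s)
    yaw_rate:  yaw angular velocity from the IMU gyroscope (rad/s)
    dt:        time difference between the current and previous time (s)
    """
    x, y, yaw = prev_pose
    yaw_new = yaw + yaw_rate * dt            # integrate IMU angular velocity
    yaw_mid = yaw + 0.5 * yaw_rate * dt      # average heading over the interval
    x_new = x + speed * dt * math.cos(yaw_mid)
    y_new = y + speed * dt * math.sin(yaw_mid)
    return (x_new, y_new, yaw_new)
```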
In a possible implementation manner of the present application, the generating a first grid map according to the predicted pose information includes:
acquiring first point cloud data at the current moment, wherein the first point cloud data at least comprise detected echo reflection intensity values and elevation values of all points;
converting a first point cloud corresponding to the first point cloud data into a world coordinate system based on the first point cloud data and the predicted pose information to obtain second point cloud data;
and generating the first grid map based on the second point cloud data and first historical point cloud data from a specified time period before the current time, wherein the first historical point cloud data is historical point cloud data in the world coordinate system.
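The two steps above — transforming the first point cloud into the world coordinate system and accumulating per-grid mean elevation and mean echo reflection intensity — can be sketched as follows, assuming a planar pose and a dictionary-of-cells map representation (both simplifications not specified in the patent).

```python
import numpy as np

def to_world(points_local, pose):
    """Transform Nx4 local points (x, y, z, intensity) into the world frame
    using a planar predicted pose (x, y, yaw)."""
    px, py, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    out = points_local.copy()
    out[:, :2] = points_local[:, :2] @ R.T + np.array([px, py])
    return out

def build_grid_map(points_world, cell=0.5):
    """Accumulate the mean elevation (z) and mean echo reflection
    intensity of all points falling in each grid cell."""
    cells = {}
    for x, y, z, inten in points_world:
        key = (int(np.floor(x / cell)), int(np.floor(y / cell)))
        n, z_sum, i_sum = cells.get(key, (0, 0.0, 0.0))
        cells[key] = (n + 1, z_sum + z, i_sum + inten)
    return {k: (z_sum / n, i_sum / n) for k, (n, z_sum, i_sum) in cells.items()}
```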
In a possible implementation manner of the present application, the determining a first matching probability distribution of the first grid map and the global offline grid map according to an elevation mean of grids in the first grid map and the global offline grid map includes:
acquiring a second grid map with a first size by taking a position corresponding to the predicted pose information as a center in the first grid map, and acquiring a third grid map with a second size by taking a position corresponding to the predicted pose information as a center in the global offline grid map, wherein the first size is smaller than the second size;
and determining a first matching probability distribution of the first grid map and the global off-line grid map according to the elevation mean values of the grids in the second grid map and the third grid map.
In a possible implementation manner of the present application, the determining a first matching probability distribution of the first grid map and the global offline grid map according to an elevation mean of grids in the second grid map and the third grid map includes:
moving the second grid map on the third grid map by taking a specified offset as a moving step length so as to traverse the third grid map;
after each designated offset is moved, determining a first matching probability corresponding to a current moving position coordinate based on the elevation mean values of grids in the second grid map and the third grid map, wherein the moving position coordinate is used for indicating the displacement of a first designated point in the second grid map relative to a second designated point in the third grid map after the second grid map is moved at this time;
and determining the first matching probability distribution based on all the mobile position coordinates determined in the traversal process and the first matching probabilities corresponding to all the mobile position coordinates.
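A minimal sketch of the traversal above: the smaller elevation map is slid across the larger one in one-grid steps, each offset is scored, and the scores are normalised into a probability distribution over offsets. The Gaussian-of-mean-squared-error score is an assumption — the patent does not state the matching-probability formula.

```python
import numpy as np

def elevation_match_distribution(small, large, sigma=0.5):
    """Slide the smaller elevation map over the larger one (step = one grid)
    and return a normalised matching-probability distribution over offsets."""
    sh, sw = small.shape
    lh, lw = large.shape
    scores = {}
    for dy in range(lh - sh + 1):
        for dx in range(lw - sw + 1):
            window = large[dy:dy + sh, dx:dx + sw]
            mse = np.mean((window - small) ** 2)     # elevation disagreement
            scores[(dx, dy)] = np.exp(-mse / (2 * sigma ** 2))
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}
```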
In a possible implementation manner of the present application, before determining a second matching probability distribution of the first grid map and the global offline grid map according to an echo reflection intensity mean of grids including lanes in the first grid map and the global offline grid map, the method further includes:
respectively determining grids comprising lanes in the second grid map and the third grid map;
correspondingly, the determining a second matching probability distribution of the first grid map and the global off-line grid map according to the echo reflection intensity mean values of the grids including the lanes in the first grid map and the global off-line grid map includes:
moving the second grid map on the third grid map by taking a specified offset as a moving step length so as to traverse the third grid map;
after the specified offset is moved every time, determining a second matching probability corresponding to a current moving position coordinate based on the echo reflection intensity mean value of grids including lanes in the second grid map and the third grid map, wherein the moving position coordinate is used for indicating the displacement of a first specified point in the second grid map relative to a second specified point in the third grid map after the second grid map is moved this time;
and determining second matching probability distribution based on all the mobile position coordinates determined in the traversal process and second matching probabilities corresponding to all the mobile position coordinates.
In one possible implementation manner of the present application, the determining, based on the first matching probability distribution and the second matching probability distribution, the position information of the autonomous vehicle in the global offline grid map at the current time includes:
determining a fusion matching probability distribution of the first grid map and the global off-line grid map according to the first matching probability distribution and the second matching probability distribution;
and determining the position information of the automatic driving vehicle in the global offline grid map at the current moment based on the fusion matching probability distribution and the predicted pose information.
In a possible implementation manner of the present application, the determining a fusion matching probability distribution of the first grid map and the global offline grid map according to the first matching probability distribution and the second matching probability distribution includes:
determining the variances of the first matching probability distribution in the x direction and the y direction respectively to obtain a first variance and a second variance; determining the variances of the second matching probability distribution in the x direction and the y direction respectively to obtain a third variance and a fourth variance;
determining a first weight of the first matching probability distribution and a second weight of the second matching probability distribution according to the first variance, the second variance, the third variance, and the fourth variance;
determining the fusion matching probability distribution according to the first weight, the second weight, the first matching probability distribution and the second matching probability distribution.
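The fusion step above can be sketched as follows. Weighting each distribution by the inverse of its summed variance is an assumed rule — the patent only states that the two weights are determined from the four variances.

```python
def fuse_distributions(p1, p2, var1x, var1y, var2x, var2y):
    """Fuse the elevation-based (p1) and intensity-based (p2) matching
    probability distributions, weighting each by the inverse of its
    total (x + y) variance, then renormalising."""
    w1 = 1.0 / (var1x + var1y)
    w2 = 1.0 / (var2x + var2y)
    s = w1 + w2
    w1, w2 = w1 / s, w2 / s
    keys = set(p1) | set(p2)
    fused = {k: w1 * p1.get(k, 0.0) + w2 * p2.get(k, 0.0) for k in keys}
    total = sum(fused.values())
    return {k: v / total for k, v in fused.items()}
```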
In one possible implementation manner of the present application, the determining, based on the fusion matching probability distribution and the predicted pose information, the position information of the autonomous vehicle in the global offline grid map at the current time includes:
selecting a plurality of target fusion matching probabilities from the fusion matching probability distribution;
determining a standard deviation corresponding to each target fusion matching probability according to the fusion matching probability distribution;
determining a matching value corresponding to each target fusion matching probability according to each target fusion matching probability and a standard deviation corresponding to each target fusion matching probability;
and determining the position information of the automatic driving vehicle in the global offline grid map at the current moment according to the matching value corresponding to each target fusion matching probability and the predicted pose information.
In a possible implementation manner of the present application, the selecting a plurality of target fusion matching probabilities from the fusion matching probability distribution includes:
determining a maximum fused match probability from the plurality of fused match probabilities;
determining, among the fusion matching probabilities, each fusion matching probability that is greater than N times the maximum fusion matching probability as a target fusion matching probability, wherein N is a number greater than 0 and less than 1.
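The selection and combination steps above can be sketched as follows. The threshold rule (keep probabilities greater than N times the maximum, 0 < N < 1) follows the text; the probability-weighted mean offset is an assumed stand-in for the matching-value computation, whose exact formula the patent does not give.

```python
def select_targets(fused, n=0.8):
    """Keep fusion matching probabilities greater than n times the
    maximum fusion matching probability (0 < n < 1)."""
    p_max = max(fused.values())
    return {k: v for k, v in fused.items() if v > n * p_max}

def estimate_offset(targets):
    """Probability-weighted mean offset over the selected targets --
    an assumed combination rule standing in for the matching values."""
    w = sum(targets.values())
    x = sum(k[0] * v for k, v in targets.items()) / w
    y = sum(k[1] * v for k, v in targets.items()) / w
    return x, y
```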
In another aspect, there is provided a positioning apparatus, the apparatus comprising:
the prediction module is used for predicting the pose information of the automatic driving vehicle at the current moment to obtain predicted pose information;
the generation module is used for generating a first grid map according to the predicted pose information, the first grid map comprises a plurality of grids, each grid corresponds to an echo reflection intensity mean value and an elevation mean value, the echo reflection intensity mean value is an average value of echo reflection intensity values of all points in a single grid, and the elevation mean value is an average value of elevation values of all points in the single grid;
the first determining module is used for determining first matching probability distribution of the first grid map and the global off-line grid map according to the elevation mean value of grids in the first grid map and the global off-line grid map;
the second determining module is used for determining second matching probability distribution of the first grid map and the global off-line grid map according to the echo reflection intensity mean value of grids including lanes in the first grid map and the global off-line grid map;
and the third determination module is used for determining the position information of the automatic driving vehicle in the global offline grid map at the current moment based on the first matching probability distribution and the second matching probability distribution.
In one possible implementation manner of the present application, the prediction module is configured to:
predicting the pose information of the autonomous vehicle at the current time according to the speed of the autonomous vehicle, IMU data, the historical pose information of the autonomous vehicle at the previous time, and the time difference between the current time and the previous time, wherein the historical pose information comprises historical position information and historical attitude information.
In one possible implementation manner of the present application, the generating module is configured to:
acquiring first point cloud data at the current moment, wherein the first point cloud data at least comprise detected echo reflection intensity values and elevation values of all points;
converting a first point cloud corresponding to the first point cloud data into a world coordinate system based on the first point cloud data and the predicted pose information to obtain second point cloud data;
and generating the first grid map based on the second point cloud data and first historical point cloud data from a specified time period before the current time, wherein the first historical point cloud data is historical point cloud data in the world coordinate system.
In one possible implementation manner of the present application, the first determining module is configured to:
acquiring a second grid map with a first size by taking a position corresponding to the predicted pose information as a center in the first grid map, and acquiring a third grid map with a second size by taking a position corresponding to the predicted pose information as a center in the global offline grid map, wherein the first size is smaller than the second size;
and determining a first matching probability distribution of the first grid map and the global off-line grid map according to the elevation mean values of the grids in the second grid map and the third grid map.
In one possible implementation manner of the present application, the first determining module is configured to:
moving the second grid map on the third grid map by taking a specified offset as a moving step length so as to traverse the third grid map;
after the designated offset is moved every time, determining a first matching probability corresponding to a current moving position coordinate based on the elevation mean values of grids in the second grid map and the third grid map, wherein the moving position coordinate is used for indicating the displacement of a first designated point in the second grid map relative to a second designated point in the third grid map after the second grid map is moved this time;
and determining the first matching probability distribution based on all the mobile position coordinates determined in the traversal process and the first matching probabilities corresponding to all the mobile position coordinates.
In one possible implementation manner of the present application, the second determining module is further configured to:
respectively determining grids comprising lanes in the second grid map and the third grid map;
moving the second grid map on the third grid map by taking a specified offset as a moving step length so as to traverse the third grid map;
after the specified offset is moved every time, determining a second matching probability corresponding to a current moving position coordinate based on the echo reflection intensity mean value of grids including lanes in the second grid map and the third grid map, wherein the moving position coordinate is used for indicating the displacement of a first specified point in the second grid map relative to a second specified point in the third grid map after the second grid map is moved this time;
and determining second matching probability distribution based on all the mobile position coordinates determined in the traversal process and second matching probabilities corresponding to all the mobile position coordinates.
In one possible implementation manner of the present application, the third determining module is configured to:
determining a fusion matching probability distribution of the first grid map and the global off-line grid map according to the first matching probability distribution and the second matching probability distribution;
and determining the position information of the automatic driving vehicle in the global offline grid map at the current moment based on the fusion matching probability distribution and the predicted pose information.
In one possible implementation manner of the present application, the third determining module is configured to:
determining the variances of the first matching probability distribution in the x direction and the y direction respectively to obtain a first variance and a second variance; determining the variances of the second matching probability distribution in the x direction and the y direction respectively to obtain a third variance and a fourth variance;
determining a first weight of the first matching probability distribution and a second weight of the second matching probability distribution according to the first variance, the second variance, the third variance, and the fourth variance;
determining the fusion matching probability distribution according to the first weight, the second weight, the first matching probability distribution and the second matching probability distribution.
In one possible implementation manner of the present application, the third determining module is configured to:
selecting a plurality of target fusion matching probabilities from the fusion matching probability distribution;
determining a standard deviation corresponding to each target fusion matching probability according to the fusion matching probability distribution;
determining a matching value corresponding to each target fusion matching probability according to each target fusion matching probability and a standard deviation corresponding to each target fusion matching probability;
and determining the position information of the automatic driving vehicle in the global offline grid map at the current moment according to the matching value corresponding to each target fusion matching probability and the predicted pose information.
In one possible implementation manner of the present application, the third determining module is configured to:
determining a maximum fused match probability from the plurality of fused match probabilities;
determining, among the fusion matching probabilities, each fusion matching probability that is greater than N times the maximum fusion matching probability as a target fusion matching probability, wherein N is a number greater than 0 and less than 1.
In another aspect, an apparatus is provided, which includes a memory for storing a computer program and a processor for executing the computer program stored in the memory to implement the steps of the positioning method described above.
In another aspect, a computer-readable storage medium is provided, in which a computer program is stored which, when executed by a processor, implements the steps of the positioning method described above.
In another aspect, a computer program product comprising instructions is provided, which when run on a computer, causes the computer to perform the steps of the positioning method described above.
The technical scheme provided by the application can at least bring the following beneficial effects:
and predicting the pose information of the automatic driving vehicle at the current moment to obtain predicted pose information. And generating a first grid map comprising a plurality of grids according to the predicted pose information, wherein each grid corresponds to an echo reflection intensity mean value and an elevation mean value, the echo reflection intensity mean value is an average value of echo reflection intensity values of all points in a single grid, and the elevation mean value is an average value of elevation values of all points in the single grid. And determining a first matching probability distribution of the first grid map and the global off-line grid map according to the elevation mean values of the grids in the first grid map and the global off-line grid map. And determining second matching probability distribution of the first grid map and the global off-line grid map according to the echo reflection intensity mean value of the grids of the lanes in the first grid map and the global off-line grid map. And determining the position information of the automatic driving vehicle in the global offline grid map at the current moment based on the first matching probability distribution and the second matching probability distribution. That is to say, the position information of the automatic driving vehicle in the global offline grid map at the current moment is determined according to the elevation average value of the grid and the echo reflection intensity average value of the grid, and compared with the method for positioning the automatic driving vehicle by using a single elevation average value, the method reduces the probability of inaccurate positioning and improves the accuracy rate of positioning the automatic driving vehicle.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a flow chart illustrating a method of positioning according to an exemplary embodiment;
FIG. 2 is a flow chart illustrating a method of positioning according to another exemplary embodiment;
FIG. 3 is a schematic diagram illustrating a second grid map after moving once within a third grid map in accordance with an exemplary embodiment;
FIG. 4 is a schematic diagram illustrating a second grid map after moving twice in a third grid map in accordance with an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating a positioning device according to an exemplary embodiment;
FIG. 6 is a schematic diagram illustrating the structure of an apparatus according to an exemplary embodiment.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Before explaining the positioning method provided by the embodiment of the present application in detail, an implementation environment provided by the embodiment of the present application is introduced.
The positioning method provided by the embodiments of the present application is applied to an autonomous vehicle. A LIDAR (Light Detection and Ranging) system may be installed in the autonomous vehicle. The LIDAR system may include a lidar sensor, a GPS (Global Positioning System) receiver, an IMU, and a device. The lidar, the GPS, and the IMU each establish a communication connection with the device; the connection may be wired or wireless, which is not limited in the embodiments of the present application.
The lidar may include a transmitter, a receiver, and an information processor. The transmitter converts electric pulses into light pulses and emits them; the light pulses strike an object and are reflected back. The receiver receives the reflected light pulses and converts them back into electric pulses, and the information processor processes the electric pulses to obtain point cloud data. The point cloud data may include the three-dimensional coordinates x, y, z of the object in a local coordinate system and the echo reflection intensity value I, where z may also be referred to as the elevation value. The local coordinate system is a coordinate system whose origin is at the lidar.
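The per-point record described above can be captured in a small illustrative structure; the field and type names are not from the patent.

```python
from dataclasses import dataclass

@dataclass
class LidarPoint:
    # Coordinates in the lidar-centred local frame; z is the elevation value.
    x: float
    y: float
    z: float
    intensity: float  # echo reflection intensity value I
```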
Wherein the GPS may be used to coarsely locate the autonomous vehicle and send coarse location information to the device.
The IMU is used for measuring three-axis attitude angles and acceleration of the automatic driving vehicle and calculating attitude information of the automatic driving vehicle according to the three-axis attitude angles and the acceleration. The IMU may include three single-axis accelerometers for detecting acceleration of the autonomous vehicle and three single-axis gyroscopes for detecting angular velocity of the autonomous vehicle.
The device can be used for processing point cloud data output by the laser radar, calculating attitude information of the automatic driving vehicle according to the acceleration and the angular velocity output by the IMU, and determining the position information of the automatic driving vehicle in the global offline grid map at the current moment according to the processed point cloud data, the rough position information and the attitude information.
The device may be any electronic product that can perform human-Computer interaction with a user through one or more modes such as a keyboard, a touch pad, a touch screen, a remote controller, voice interaction, or handwriting equipment, for example, a PC (Personal Computer), a mobile phone, a smart phone, a PDA (Personal Digital Assistant), a pocket PC (pocket PC), a tablet Computer, a smart car, a smart television, and the like.
Those skilled in the art will appreciate that the above-described LIDAR systems are merely exemplary and that other existing or future LIDAR systems, as may be suitable for use in the present application, are intended to be encompassed within the scope of the present application and are hereby incorporated by reference.
After the implementation environment of the embodiment of the present application is introduced, a detailed explanation is next provided for the positioning method provided in the embodiment of the present application.
FIG. 1 is a flow chart illustrating a positioning method according to an exemplary embodiment, as applied to an autonomous vehicle in the implementation environment described above. Referring to fig. 1, the method may include the following steps:
Step 101: predict the pose information of the autonomous vehicle at the current time to obtain predicted pose information.
The pose information comprises position information and attitude information. The position information may include three-dimensional coordinates and the pose information may include at least a yaw angle of the autonomous vehicle.
In implementation, predicting the pose information of the autonomous vehicle at the current time may include: predicting the pose information of the autonomous vehicle at the current time according to the speed of the autonomous vehicle, the IMU data, the historical pose information at the previous time, and the time difference between the current time and the previous time.
Wherein the IMU data may include acceleration and angular velocity of the autonomous vehicle, and the historical pose information includes historical position information and historical attitude information.
The historical pose information at the previous moment is the high-precision pose information of the automatic driving vehicle determined by the technical scheme of the application, and can be obtained by a multi-sensor fusion filter.
As an example, after the pose information at each time is determined by the technical solution of the present application, the pose information at each time may be stored, and for convenience of description, the pose information before the current time may be referred to as history pose information. Therefore, when predicting the predicted pose information at the present time, the historical pose information at the previous time can be directly used.
That is, in implementations, the pose information for the autonomous vehicle at the current time may be predicted based on the speed of the autonomous vehicle, the acceleration of the autonomous vehicle, the angular velocity of the autonomous vehicle, the time difference between the current time and the previous time, and historical pose information for the previous time.
As an example, attitude change information of the autonomous vehicle, which indicates how the attitude of the autonomous vehicle changes from the previous time to the current time, may be obtained from the acceleration and the angular velocity of the autonomous vehicle, and the attitude information at the current time may then be obtained from the attitude change information and the historical attitude information at the previous time. Likewise, the traveling direction of the autonomous vehicle from the previous time to the current time can be determined based on the acceleration and the angular velocity, and the position information of the autonomous vehicle at the current time can then be determined in combination with the speed of the autonomous vehicle and the time difference between the current time and the previous time. The position information and the attitude information of the autonomous vehicle at the current time together constitute the pose information at the current time, that is, the predicted pose information.
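The prediction step above can be sketched as a simple dead-reckoning update. This is a minimal planar (x, y, yaw) illustration, not the embodiment's full three-axis implementation; the function name and signature are assumptions for the example:

```python
import math

def predict_pose(prev_x, prev_y, prev_yaw, speed, yaw_rate, dt):
    """Dead-reckoning prediction of the vehicle pose at the current time.

    prev_x, prev_y, prev_yaw : pose at the previous time (yaw in radians)
    speed    : vehicle speed (m/s)
    yaw_rate : angular velocity about the vertical axis from the IMU (rad/s)
    dt       : time difference between the current and previous time (s)
    """
    # Update the heading with the IMU angular velocity.
    yaw = prev_yaw + yaw_rate * dt
    # Advance the position along the average travel direction over dt.
    x = prev_x + speed * dt * math.cos(prev_yaw + 0.5 * yaw_rate * dt)
    y = prev_y + speed * dt * math.sin(prev_yaw + 0.5 * yaw_rate * dt)
    return x, y, yaw

# Driving straight east at 10 m/s for 1 s from the origin.
pred = predict_pose(0.0, 0.0, 0.0, 10.0, 0.0, 1.0)
```

In the embodiment this prediction may also be refined with GPS position information when it is available at the current time, and may be performed inside a multi-sensor fusion filter.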
Further, in addition to determining the predicted pose information using the above method, if the GPS position information is acquired at the current time, in this case, the predicted pose information at the current time may be determined in combination with the GPS position information acquired at the current time. In implementation, the predicted pose information may be determined based on the IMU data and historical pose information at the previous time, and in combination with GPS location information at the current time.
It should be noted that the present application is described only by taking an example in which the above-described manner of determining the predicted pose information is implemented by an autonomous vehicle. In one possible implementation, determining the predicted pose information may also be performed by a multi-sensor fusion filter.
Step 102: and generating a first grid map according to the predicted pose information.
The first grid map comprises a plurality of grids, each grid corresponds to an echo reflection intensity mean value and an elevation mean value, the echo reflection intensity mean value is an average value of echo reflection intensity values of all points in a single grid, and the elevation mean value is an average value of elevation values of all points in the single grid.
The echo reflection intensity value is the reflection intensity of the light pulse after irradiating the object, and the numerical range of the echo reflection intensity value can be 0-255.
In implementation, referring to fig. 2, generating the first grid map according to the predicted pose information may include: acquiring first point cloud data at the current time, where the first point cloud data includes at least the detected echo reflection intensity value and elevation value of each point; converting the first point cloud corresponding to the first point cloud data into the world coordinate system based on the first point cloud data and the predicted pose information to obtain second point cloud data; and generating the first grid map based on the second point cloud data and the first historical point cloud data within a specified time period before the current time.
The first historical point cloud data is historical point cloud data in a world coordinate system.
That is to say, the first point cloud data at the current time may be acquired, the coordinate system of the first point cloud data may be converted to obtain the second point cloud data in the world coordinate system, and then the first grid map may be generated according to the second point cloud data and the first historical point cloud data in the world coordinate system.
As an example, first point cloud data of the current time may be acquired by a laser radar, the first point cloud data includes three-dimensional coordinates x, y, and z of each detected point in a local coordinate system, and a reflection intensity value of an echo of each point, and z in the three-dimensional coordinates may be taken as an elevation value of each point. For example, the point cloud data of a certain point can be represented as (x, y, z, I), (x, y, z) being three-dimensional coordinates, I being the echo reflection intensity value of the point.
As an example, after first point cloud data at the current time is acquired, coordinate conversion may be performed on the first point cloud data according to the predicted pose information, and point cloud under a local coordinate system is converted into a world coordinate system, so as to obtain second point cloud data. For example, the first point cloud data may be converted by formula (1) to obtain the second point cloud data.
P_i^g = T_i · P_i^l    (1)

where P_i^g represents the point cloud data at time i in the world coordinate system, P_i^l represents the point cloud data at time i in the local coordinate system, and T_i represents the pose information of the autonomous vehicle at time i, with T_i = [T_0, T_1, ... T_n], where n is the number of points in the point cloud at time i.
The first point cloud data in the local coordinate system can be converted into the second point cloud data in the world coordinate system by the above formula (1).
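The transform of formula (1) can be sketched as follows with a 4×4 homogeneous pose matrix; the function name and the dense-array point representation are illustrative assumptions, not the patent's data structures:

```python
import numpy as np

def local_to_world(points_local, pose):
    """Apply formula (1), P^g = T * P^l, to every point in a scan.

    points_local : (n, 3) array of x, y, z in the lidar (local) frame
    pose         : 4x4 homogeneous transform (rotation + translation)
    """
    n = points_local.shape[0]
    homogeneous = np.hstack([points_local, np.ones((n, 1))])  # (n, 4)
    return (pose @ homogeneous.T).T[:, :3]                    # back to (n, 3)

# Example: a pure translation of (1, 2, 0) from the local to the world frame.
T = np.eye(4)
T[:3, 3] = [1.0, 2.0, 0.0]
pts = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
world = local_to_world(pts, T)
```

The echo reflection intensity value I of each point is unaffected by the coordinate change and is simply carried along with the transformed coordinates.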
As an example, first historical point cloud data in a world coordinate system at a plurality of times within a specified time period before the current time may be stored. The specified time period may be set by a user according to actual needs, or may be set by a device, which is not limited in this embodiment of the present application.
Illustratively, the first historical point cloud data may be stored in the form of a buffer queue. The buffer queue may be Q = [F_0, F_1, F_2, ... F_i], where F = (point_cluster, timestamp, position): point_cluster is the first historical point cloud data at a certain time in the world coordinate system, timestamp is the timestamp of that time, and position is the pose information of the autonomous vehicle at that time.
As another example, historical point cloud data under a local coordinate system at a plurality of times within a specified time period before the current time may be stored, and then the historical point cloud data under the local coordinate system may be converted into first historical point cloud data under a world coordinate system according to formula (1) according to the historical point cloud data and historical pose information corresponding thereto.
As an example, after the second point cloud data and the first historical point cloud data are obtained, the second point cloud data and the first historical point cloud data may be superimposed, each point is projected into a two-dimensional plane according to a three-dimensional coordinate of each point after the superimposition, the two-dimensional plane including each point after the superimposition is divided into a plurality of grids having the same size and shape, and the first grid map may be obtained.
Wherein, in the generated first grid map, each grid may include a plurality of points therein, and each grid corresponds to the echo reflection intensity mean value and the elevation mean value. The echo reflection intensity average value corresponding to a single grid may be obtained by summing and averaging echo reflection intensity values of all points included in the grid, and the elevation average value corresponding to a single grid may be obtained by summing and averaging elevation values of all points included in the grid.
As an example, the information corresponding to the grid in the first grid map may be referred to as a grid element, and the grid element may include a position coordinate of the grid in the first grid map, an echo reflection intensity average value corresponding to the grid, and an elevation average value corresponding to the grid.
Illustratively, the grid element set of the first grid map may be written as Localmap = [G_0, G_1, G_2, ..., G_l], where Localmap represents the first grid map and G = (x, y, mean_I, mean_z) may be used to represent a grid element: (x, y) is the position coordinate of the corresponding grid in the first grid map, mean_I represents the echo reflection intensity mean value of the grid element, and mean_z represents the elevation mean value of the grid element.
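The projection-and-averaging step above can be sketched as follows. This is a minimal sketch assuming points are given as (x, y, z, I) tuples already in the world frame; the function name and the dictionary representation of grid elements are illustrative, not the patent's implementation:

```python
import numpy as np
from collections import defaultdict

def build_grid_map(points, cell_size=0.2):
    """Project points (x, y, z, I) into 2-D cells and average per cell.

    Returns {(gx, gy): (mean_I, mean_z)} -- one entry per occupied grid,
    mirroring grid elements of the form G = (x, y, mean_I, mean_z).
    """
    cells = defaultdict(list)
    for x, y, z, intensity in points:
        gx, gy = int(x // cell_size), int(y // cell_size)  # grid indices
        cells[(gx, gy)].append((intensity, z))
    grid = {}
    for key, vals in cells.items():
        vals = np.asarray(vals)
        # Average the echo reflection intensity and elevation of all
        # points that fell into this cell.
        grid[key] = (vals[:, 0].mean(), vals[:, 1].mean())
    return grid

# Two points landing in the same 0.2 m cell are averaged together.
g = build_grid_map([(0.05, 0.05, 1.0, 100.0), (0.1, 0.1, 3.0, 200.0)])
```

The cell size of 0.2 m is an arbitrary assumption for the example; the embodiment leaves the grid size to be configured by the user or the device.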
Step 103: and determining a first matching probability distribution of the first grid map and the global off-line grid map according to the elevation mean values of the grids in the first grid map and the global off-line grid map.
As an example, a global offline grid map is a map that is generated in advance and generally does not change, and the global offline grid map can be used to describe the layout of roads and surrounding environment of a certain area. The global grid map may be generated by the method of generating the first grid map in step 102, or may be generated by other methods, which is not limited in this embodiment.
In some embodiments, a two-dimensional plane of an area in the world coordinate system may be divided into a plurality of grids of the same shape and size. Referring to fig. 2, SLAM (Simultaneous Localization and Mapping) optimization is performed on all point clouds acquired by the laser radar to obtain the pose information of the point cloud at each time; according to the point cloud data and the pose information, the point clouds in the local coordinate system are converted into the world coordinate system through formula (1), and the points in the area are then projected into the grids of the two-dimensional plane to obtain the global offline grid map. Each grid of the global offline grid map corresponds to an echo reflection intensity mean value and an elevation mean value, where the echo reflection intensity mean value is the average of the echo reflection intensity values of all points in a single grid, and the elevation mean value is the average of the elevation values of all points in the single grid.
As an example, information corresponding to a grid in the global offline grid map may be referred to as a grid element, and the grid element may include a position coordinate of the grid in the global offline grid map, an echo reflection intensity average value corresponding to the grid, and an elevation average value corresponding to the grid.
Illustratively, the grid element set of the global offline grid map may be denoted as Basemap = [B_0, B_1, B_2, ..., B_l], where Basemap is the global offline grid map and B = (x, y, mean_I, mean_z) may be used to represent a grid element: (x, y) is the position coordinate of the corresponding grid in the global offline grid map, mean_I represents the echo reflection intensity mean value of the grid element, and mean_z represents the elevation mean value of the grid element.
Because the first grid map and the global offline grid map are both relatively large and contain a large number of grids, matching them directly is computationally expensive and wastes device resources. Moreover, the global offline grid map covers an entire area, while the position of the autonomous vehicle at the current time usually lies within a small range; the parts of the global offline grid map outside that range contribute little to locating the vehicle and may not be needed. Therefore, a partial map can be taken from each of the first grid map and the global offline grid map for matching.
In an implementation, determining the first matching probability distribution of the first grid map and the global offline grid map according to the elevation mean values of the grids may include: in the first grid map, acquiring a second grid map of a first size centered on the position corresponding to the predicted pose information; in the global offline grid map, acquiring a third grid map of a second size centered on the same position; and determining the first matching probability distribution of the first grid map and the global offline grid map according to the elevation mean values of the grids in the second grid map and the third grid map.
The first size is smaller than the second size, and the first size and the second size may be set by a user according to actual needs or may be set by default by a device, which is not limited in the embodiment of the present application.
That is, referring to fig. 2, a second grid map may be acquired in the first grid map with the position corresponding to the predicted pose information as the center, and a third grid map may be acquired in the global offline grid map with the position corresponding to the predicted pose information as the center, where the size of the second grid map is smaller than that of the third grid map. And then determining a first matching probability of the first grid map and the global off-line grid map according to the elevation mean values of the grids in the second grid map and the third grid map.
Because the position corresponding to the predicted pose information may not be very accurate, the third grid map taken from the global offline map according to the predicted pose information is made larger than the second grid map. Thus, even when the predicted pose information deviates noticeably from the true position, the first matching probability distribution of the first grid map and the global offline grid map can still be determined accurately.
As an example, in the first grid map, a grid in which a position corresponding to the predicted pose information is located is determined, with the grid as a center and the first size as a radius, grid elements of all grids located within the radius are acquired, and the second grid map is generated from all grids and the grid elements of each grid. Illustratively, the first size may be 50, that is, the second grid map may include 101 × 101 grids.
Similarly, a third grid map may be obtained from the global offline grid map.
Illustratively, the second grid map may be a square map whose side length is twice the first size, or a circular map whose radius is the first size. When the second grid map is square, the third grid map is a square map whose side length is twice the second size; when the second grid map is circular, the third grid map is a circular map whose radius is the second size. That is, the second grid map and the third grid map have the same shape.
In some embodiments, after obtaining the second grid map and the third grid map, a first matching probability distribution of the first grid map and the global offline grid map may be determined according to an elevation mean value of grids in the second grid map and the third grid map, and the specific implementation may include:
Move the second grid map over the third grid map with the specified offset as the moving step so as to traverse the third grid map. After each movement by the specified offset, determine a first matching probability corresponding to the current moving position coordinate based on the elevation mean values of the grids in the second grid map and the third grid map, where the moving position coordinate indicates the displacement of a first designated point in the second grid map relative to a second designated point in the third grid map after this movement. Then determine the first matching probability distribution based on all the moving position coordinates determined during the traversal and the first matching probabilities corresponding to them.
Wherein the specified offset may be the same as the size of the grid, and the specified offset may represent a movement distance in the x-direction or a movement distance in the y-direction. In implementation, it may be determined whether to move in the x direction or the y direction each time according to actual situations, as long as the third grid map can be traversed.
The first designated point may be any point in the second grid map, and the second designated point may be a point in the third grid map that coincides with the first designated point when the vertex on the upper left corner of the second grid map coincides with the vertex on the upper left corner of the third grid map.
That is, the second grid map may be moved on the third grid map with a specified offset as a movement step, one movement position coordinate per movement, and a first matching probability corresponding to the movement position coordinate may be determined, and the first matching probability distribution may be determined based on the movement position coordinate determined per time and the first matching probability determined per time.
As an example, the top left corner vertex of the second grid map and the top left corner vertex of the third grid map may be coincident, the position coordinate of the coincident point may be determined to be (0,0), and the second grid map may be moved from the top left corner vertex of the third grid map, each time by a specified offset, to traverse the third grid map.
As an example, the top left vertex in the second grid map may be used as the first designated point, the top left vertex in the third grid map may be used as the second designated point, and the mobile position coordinate may be determined according to the mobile direction and the mobile distance each time the mobile designated offset is moved, and then the first matching probability corresponding to the mobile position coordinate at this time may be determined based on the elevation mean value of each grid in the second grid map and the elevation mean value of each grid in the third grid map corresponding to the second grid map in an overlapping manner.
For example, referring to fig. 3, it is assumed that the vertex at the top left corner in the second grid map is the first designated point, the vertex at the top left corner in the third grid map is the second designated point, the second grid map includes 5 × 5 grids, the third grid map includes 10 × 10 grids, the designated offset is 1, and after the second grid map is moved by 1 grid in the x direction for the first time, the displacement of the first designated point relative to the second designated point in fig. 2 is 1 grid along the x direction, so that the current movement position coordinate may be determined to be (1, 0).
Continuing with the above example, referring to fig. 4, assuming that the top left vertex in the second grid map is the first designated point, the top left vertex in the third grid map is the second designated point, the second grid map includes 5 × 5 grids, the third grid map includes 10 × 10 grids, the designated offset is still 1, and after the second grid map is moved by 1 grid in the x direction, the displacement of the first designated point relative to the second designated point in fig. 3 is 2 grids along the x direction, so that the current moving position coordinate may be determined to be (2, 0). By analogy, a mobile location coordinate may be determined after each movement.
Illustratively, the first matching probability corresponding to the single movement position coordinate may be determined according to the following formula (2).
R_Z(x, y) = Σ_{x', y'} [T(x', y') − I(x + x', y + y')]²    (2)

where R_Z(x, y) is the first matching probability corresponding to the moving position coordinate (x, y), T(x', y') is the elevation mean value of the grid with position coordinate (x', y') in the second grid map, and I(x + x', y + y') is the elevation mean value of the grid with position coordinate (x + x', y + y') in the third grid map.
For example, assuming that the moving position coordinate is (0, 1), the second grid map includes 5 × 5 grids, and the third grid map includes 10 × 10 grids, the first matching probability R_Z(0, 1) corresponding to the moving position coordinate (0, 1) may be determined according to the elevation value of each grid in the second grid map and the elevation value of each overlapping grid in the third grid map.
The above formula (2) performs template matching using the SSD (Sum of Squared Differences) algorithm. That is, the first matching probability of the first grid map and the global offline grid map may be determined using SSD template matching.
According to the formula (2), a first matching probability corresponding to the moving position coordinates of each movement in the traversal process can be determined, so that a plurality of first matching probabilities are determined, and a first matching probability distribution can be determined according to all the moving position coordinates and the first matching probabilities corresponding to all the moving position coordinates.
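The traversal and per-offset SSD score of formula (2) can be sketched as follows. The dense 2-D arrays stand in for the elevation mean values of the two grid maps, and `ssd_match_distribution` is an illustrative name; note these are the raw scores (lower means a better match) before the normalization of formula (3):

```python
import numpy as np

def ssd_match_distribution(template, search):
    """Slide `template` (second grid map) over `search` (third grid map)
    and record the SSD score of formula (2) at every offset (x, y)."""
    th, tw = template.shape
    sh, sw = search.shape
    scores = np.empty((sh - th + 1, sw - tw + 1))
    for y in range(sh - th + 1):
        for x in range(sw - tw + 1):
            # Sum of squared differences between the template and the
            # overlapping region of the search map at this offset.
            diff = template - search[y:y + th, x:x + tw]
            scores[y, x] = np.sum(diff ** 2)
    return scores

# The template is an exact sub-region of the search map, so the SSD
# score is 0 at the true offset (1, 1).
search = np.arange(16, dtype=float).reshape(4, 4)
template = search[1:3, 1:3].copy()
scores = ssd_match_distribution(template, search)
```

In practice a library routine such as OpenCV's `matchTemplate` with the squared-difference method computes the same distribution far more efficiently.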
Further, after the first matching probabilities corresponding to all the mobile position coordinates are determined, normalization processing may be performed on the first matching probability corresponding to each mobile position coordinate through the following formula (3), so as to obtain the normalized first matching probability.
R'_Z(x, y) = 1.0 − (R_Z(x, y) − R_Zmin) / (R_Zmax − R_Zmin)    (3)

where R'_Z(x, y) is the normalized first matching probability corresponding to the moving position coordinate (x, y), R_Zmax is the largest first matching probability among the first matching probabilities corresponding to all the moving position coordinates, and R_Zmin is the smallest first matching probability among them.
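Formula (3) can be sketched directly; the inversion maps the smallest (best) SSD score to probability 1.0 and the largest to 0.0. The function name is an illustrative assumption:

```python
import numpy as np

def normalize_scores(scores):
    """Formula (3): map raw SSD scores onto [0, 1] so that the smallest
    (best-matching) score becomes 1.0 and the largest becomes 0.0."""
    s_min, s_max = scores.min(), scores.max()
    return 1.0 - (scores - s_min) / (s_max - s_min)

raw = np.array([[4.0, 2.0],
                [0.0, 8.0]])
norm = normalize_scores(raw)  # best raw score 0.0 -> probability 1.0
```

The same formula, with the second matching probabilities substituted, gives the normalization of formula (5) below.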
Step 104: and determining second matching probability distribution of the first grid map and the global off-line grid map according to the echo reflection intensity mean value of the grids of the lanes in the first grid map and the global off-line grid map.
Before executing this step, a second grid map may be obtained from the first grid map in step 103, and a third grid map may be obtained from the global grid map, where the size of the third grid map is larger than that of the second grid map.
Since the autonomous vehicle generally travels on a lane, in order to reduce the calculation amount of matching, grids including lanes may be determined in the second grid map and the third grid map, respectively.
As an example, referring to fig. 2, lane lines in the second grid map and the third grid map may be extracted through a lane line extraction model using a deep learning method. A grid located between two lane lines is then determined to be a grid including lanes. Further, the grids including lanes may be marked, for example with a lane label.
In an implementation, the second matching probability distribution of the first grid map and the global offline grid map may be determined according to the echo reflection intensity mean values of the grids including lanes in the second grid map and the third grid map. The specific implementation may include:
Move the second grid map over the third grid map with the specified offset as the moving step so as to traverse the third grid map. After each movement by the specified offset, determine a second matching probability corresponding to the current moving position coordinate based on the echo reflection intensity mean values of the grids including lanes in the second grid map and the third grid map, where the moving position coordinate indicates the displacement of a first designated point in the second grid map relative to a second designated point in the third grid map after this movement. Then determine the second matching probability distribution based on all the moving position coordinates determined during the traversal and the second matching probabilities corresponding to them.
That is, the second grid map may be moved on the third grid map with a specified offset as a movement step, one movement position coordinate may be corresponded to each movement, a second matching probability corresponding to the movement position coordinate may be determined according to the echo reflection intensity average values of grids including lanes in the second grid map and the third grid map, and the second matching probability distribution may be determined based on the movement position coordinate determined each time and the second matching probability determined each time.
As an example, an upper left corner vertex in the second grid map may be used as a first designated point, and then an upper left corner vertex in the third grid map is used as a second designated point, and each time the designated offset is moved, the moving position coordinate may be determined according to the moving direction and the moving distance, and then the second matching probability corresponding to the moving position coordinate of this time is determined based on the echo reflection intensity mean value of the grid including the lane in the second grid map and the echo reflection intensity mean value of the grid including the lane in the third grid map, which is overlapped with the second grid map and corresponds to the second grid map.
As an example, the process of moving the second grid map over the third grid map is the same as in step 103; for the specific implementation, refer to the related description of step 103.
As an example, after each movement by the specified offset, the second matching probability corresponding to the current moving position coordinate may be determined based on the echo reflection intensity mean values of the grids including lanes in the second grid map and the third grid map through the following formula (4).
R_I(x, y) = Σ_{x', y'} T'(x', y') · I'(x + x', y + y')    (4)

where R_I(x, y) is the second matching probability corresponding to the moving position coordinate (x, y), T'(x', y') is the echo reflection intensity mean value of the grid including lanes with position coordinate (x', y') in the second grid map, and I'(x + x', y + y') is the echo reflection intensity mean value of the grid including lanes with position coordinate (x + x', y + y') in the third grid map.
In the above formula (4), a term is accumulated only when both the grid with position coordinate (x', y') in the second grid map and the corresponding grid with position coordinate (x + x', y + y') in the third grid map are grids including lanes; otherwise, the value of T'(x', y') · I'(x + x', y + y') is taken as 0.
According to the formula (4), a second matching probability corresponding to the moving position coordinates of each movement in the traversal process can be determined, so that a plurality of second matching probabilities are determined, and a second matching probability distribution can be determined according to all the moving position coordinates and the second matching probabilities corresponding to all the moving position coordinates.
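The lane-masked accumulation described above can be sketched as follows. Boolean masks mark which cells of each map include lanes, so a product term contributes only where both overlapping cells are lane grids; the function name and dense-array representation are illustrative assumptions:

```python
import numpy as np

def lane_masked_match(template, search, t_mask, s_mask):
    """Intensity matching over lane grids only: at each offset, sum
    T'(x', y') * I'(x + x', y + y') over cells where BOTH the template
    cell and the overlapping search cell are marked as lane; all other
    terms contribute 0."""
    th, tw = template.shape
    sh, sw = search.shape
    scores = np.empty((sh - th + 1, sw - tw + 1))
    for y in range(sh - th + 1):
        for x in range(sw - tw + 1):
            both_lane = t_mask & s_mask[y:y + th, x:x + tw]
            scores[y, x] = np.sum(
                template * search[y:y + th, x:x + tw] * both_lane)
    return scores

# Only the top row of the search map is marked as lane, so offsets that
# overlap it accumulate product terms; the others score 0.
search = np.ones((3, 3))
s_mask = np.zeros((3, 3), dtype=bool)
s_mask[0, :] = True
template = np.full((2, 2), 2.0)
t_mask = np.ones((2, 2), dtype=bool)
scores = lane_masked_match(template, search, t_mask, s_mask)
```

Restricting the sum to lane grids is what keeps the intensity matching cheap: cells outside the lanes never enter the computation.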
Further, after the second matching probabilities corresponding to all the mobile position coordinates are determined, normalization processing may be performed on the second matching probability corresponding to each mobile position coordinate through the following formula (5), so as to obtain the normalized second matching probability.
R'_I(x, y) = 1.0 − (R_I(x, y) − R_Imin) / (R_Imax − R_Imin)    (5)

where R'_I(x, y) is the normalized second matching probability corresponding to the moving position coordinate (x, y), R_Imax is the largest second matching probability among the second matching probabilities corresponding to all the moving position coordinates, and R_Imin is the smallest second matching probability among them.
Step 105: and determining the position information of the automatic driving vehicle in the global offline grid map at the current moment based on the first matching probability distribution and the second matching probability distribution.
The first matching probability distribution and the second matching probability distribution are combined to determine the position information of the automatic driving vehicle in the global offline map at the current moment; that is, the elevation values and the echo reflection intensity values of the point cloud data are jointly used to position the automatic driving vehicle, so the positioning result is more accurate.
In an implementation, the specific implementation of determining the position information of the autonomous vehicle in the global offline grid map at the current moment based on the first matching probability distribution and the second matching probability distribution may include: and determining the fusion matching probability distribution of the first grid map and the global off-line grid map according to the first matching probability distribution and the second matching probability distribution. And determining the position information of the automatic driving vehicle in the global offline grid map at the current moment based on the fusion matching probability distribution and the prediction pose information.
That is, the first matching probability distribution and the second matching probability distribution may be combined according to a certain weight to determine a fused matching probability distribution of the first grid map and the global offline grid map.
In some embodiments, determining a fused matching probability distribution of the first grid map and the global offline grid map according to the first matching probability distribution and the second matching probability distribution may include the following steps:
(1) Determining the variances of the first matching probability distribution in the x direction and the y direction respectively to obtain a first variance and a second variance, and determining the variances of the second matching probability distribution in the x direction and the y direction respectively to obtain a third variance and a fourth variance.
As an example, the variance of the first matching probability distribution in the x-direction, i.e. the first variance, and the variance of the second matching probability distribution in the x-direction, i.e. the third variance, may be determined by equation (6).
σ_x² = [ Σ_{x∈X} Σ_{y∈Y} R(x, y)·(x − x̄)² ] / [ Σ_{x∈X} Σ_{y∈Y} R(x, y) ]    (6)
Where (x, y) is the moving position coordinate; X is the value range of x in the moving position coordinates, whose maximum value may be the difference between the number of grids in the x direction of the third grid map and the number of grids in the x direction of the second grid map plus one; Y is the value range of y in the moving position coordinates, whose maximum value may be the difference between the number of grids in the y direction of the third grid map and the number of grids in the y direction of the second grid map plus one; and x is the value in the x direction of the moving position coordinate. When the above equation (6) is used to determine the first variance, R(x, y) is R_Z(x, y) and σ_x² may be written as σ_Zx²; when the above equation (6) is used to determine the third variance, R(x, y) is R_I(x, y) and σ_x² may be written as σ_Ix².
The mean x̄ can be calculated by equation (7):
x̄ = [ Σ_{x∈X} Σ_{y∈Y} R(x, y)·x ] / [ Σ_{x∈X} Σ_{y∈Y} R(x, y) ]    (7)
As an example, the variance of the first matching probability distribution in the y direction, i.e., the second variance, and the variance of the second matching probability distribution in the y direction, i.e., the fourth variance, may be determined by equation (8).
σ_y² = [ Σ_{x∈X} Σ_{y∈Y} R(x, y)·(y − ȳ)² ] / [ Σ_{x∈X} Σ_{y∈Y} R(x, y) ]    (8)
Where (x, y) is the moving position coordinate; X is the value range of x in the moving position coordinates, whose maximum value may be the difference between the number of grids in the x direction of the third grid map and the number of grids in the x direction of the second grid map plus one; Y is the value range of y in the moving position coordinates, whose maximum value may be the difference between the number of grids in the y direction of the third grid map and the number of grids in the y direction of the second grid map plus one; and y is the value in the y direction of the moving position coordinate. When the above equation (8) is used to determine the second variance, R(x, y) is R_Z(x, y) and σ_y² may be written as σ_Zy²; when the above equation (8) is used to determine the fourth variance, R(x, y) is R_I(x, y) and σ_y² may be written as σ_Iy².
The mean ȳ can be calculated by equation (9):
ȳ = [ Σ_{x∈X} Σ_{y∈Y} R(x, y)·y ] / [ Σ_{x∈X} Σ_{y∈Y} R(x, y) ]    (9)
From the above equations (6), (7), (8) and (9), the first variance, the second variance, the third variance and the fourth variance can be determined.
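Reading equations (6)-(9) as probability-weighted variances about the probability-weighted mean (the patent's equation images are not reproduced here, so this standard form is an assumption), the four variances can be computed as:

```python
import numpy as np

def weighted_variances(r):
    """Treat the matching probability distribution r[x, y] as weights and
    compute the weighted mean and variance along x and y (equations (6)-(9))."""
    r = np.asarray(r, dtype=float)
    xs = np.arange(r.shape[0])[:, None]   # x index of each moving position
    ys = np.arange(r.shape[1])[None, :]   # y index of each moving position
    total = r.sum()
    x_mean = (r * xs).sum() / total                  # equation (7)
    y_mean = (r * ys).sum() / total                  # equation (9)
    var_x = (r * (xs - x_mean) ** 2).sum() / total   # equation (6)
    var_y = (r * (ys - y_mean) ** 2).sum() / total   # equation (8)
    return var_x, var_y

# A distribution tightly peaked at one cell has zero variance;
# a flat distribution over indices 0..4 has variance 2.0 in each direction.
peaked = np.zeros((5, 5))
peaked[2, 2] = 1.0
flat = np.ones((5, 5))
vx_p, vy_p = weighted_variances(peaked)
vx_f, vy_f = weighted_variances(flat)
print(vx_p, vx_f)  # 0.0 for the peak, 2.0 for the flat distribution
```

A small variance thus indicates a sharply peaked (confident) matching probability distribution, which is what the weighting in the next step exploits.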
(2) Determining a first weight of the first matching probability distribution and a second weight of the second matching probability distribution according to the first variance, the second variance, the third variance and the fourth variance.
As an example, the second weight of the second match probability distribution may be determined according to equation (10).
γ = (σ_Zx² + σ_Zy²) / (σ_Zx² + σ_Zy² + σ_Ix² + σ_Iy²)    (10)
Where γ is the second weight, σ_Zx² is the first variance, σ_Zy² is the second variance, σ_Ix² is the third variance, and σ_Iy² is the fourth variance.
The first weight of the first matching probability distribution is 1 − γ, where γ is the second weight.
(3) And determining fusion matching probability distribution according to the first weight, the second weight, the first matching probability distribution and the second matching probability distribution.
As an example, the fused match probability may be determined according to equation (11).
P(x, y) = (1 − γ)·R_Z(x, y) + γ·R_I(x, y)    (11)
Where (x, y) is the moving position coordinate, P(x, y) is the fused matching probability corresponding to the moving position coordinate, 1 − γ is the first weight, γ is the second weight, R_Z(x, y) is the first matching probability distribution, and R_I(x, y) is the second matching probability distribution.
According to the three steps, the fusion matching probability distribution of the first grid map and the global off-line grid map can be determined.
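The weighting and fusion of equations (10) and (11) can be sketched as follows. The exact expression for γ is an image in the source, so the form used here (the second weight grows as the first distribution's variances grow relative to the total) is an explicit assumption:

```python
import numpy as np

def fuse(r_z, r_i, var_z, var_i):
    """Combine the elevation-based (r_z) and intensity-based (r_i) matching
    probability distributions.  gamma is the second weight; the assumed form
    gives the intensity distribution more weight when the elevation
    distribution is less peaked (larger variances)."""
    gamma = sum(var_z) / (sum(var_z) + sum(var_i))   # equation (10), assumed form
    fused = (1.0 - gamma) * np.asarray(r_z) + gamma * np.asarray(r_i)  # equation (11)
    return fused, gamma

r_z = np.array([[0.2, 0.8], [0.1, 0.4]])   # first matching probability distribution
r_i = np.array([[0.6, 0.2], [0.3, 0.5]])   # second matching probability distribution
fused, gamma = fuse(r_z, r_i, var_z=(1.0, 1.0), var_i=(1.0, 1.0))
print(gamma)   # 0.5 when both distributions are equally peaked
```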
In some embodiments, after determining the fusion matching probability distribution, the position information of the autonomous vehicle in the global offline grid map at the current time may be determined based on the fusion matching probability distribution and the predicted pose information, and specifically, the method may include the following steps:
(1) a plurality of target fusion matching probabilities are selected from the fusion matching probability distribution.
As an example, the specific implementation of selecting the plurality of target fusion matching probabilities from the fusion matching probability distribution may include: and determining the maximum fusion matching probability from the multiple fusion matching probabilities, and determining the fusion matching probability which is more than N times of the maximum fusion matching probability from the multiple fusion matching probabilities as multiple target fusion matching probabilities.
Where N is a value greater than 0 and less than 1. For example, N may be 0.85.
Illustratively, the set of multiple target fusion matching probabilities may be represented by equation (12).
P = [P0(x0, y0), P1(x1, y1), P2(x2, y2), …, Pn(xn, yn)]    (12)
Where P0(x0, y0) is the maximum fused matching probability, (x0, y0) is the moving position coordinate corresponding to the maximum fused matching probability, and P1(x1, y1), P2(x2, y2), …, Pn(xn, yn) are the fused matching probabilities greater than N times the maximum fused matching probability, i.e. Pn(xn, yn) > N·P0(x0, y0).
Illustratively, assuming that the plurality of fusion matching probabilities are 90, 80, 90, 30, 60, 75, 30, 20, respectively, and N is 0.8, the maximum fusion matching probability can be determined to be 90, and the N times the maximum fusion matching probability is 72, the plurality of target fusion matching probabilities can be determined to be 90, 80, 90, and 75.
As another example, the specific implementation of selecting the multiple target fusion matching probabilities from the fusion matching probability distribution may further include: and acquiring fusion matching probability corresponding to each mobile position coordinate, and arranging the fusion matching probabilities corresponding to all the mobile position coordinates according to the sequence of the fusion matching probabilities from high to low to obtain a plurality of arranged fusion matching probabilities. And then determining the first M fusion matching probabilities from the arranged multiple fusion matching probabilities, and determining the first M fusion matching probabilities as multiple target fusion matching probabilities.
Wherein M is an integer greater than 1. For example, M may be 10.
Continuing the above example, assuming that M is 5, the ranked fused matching probabilities are 90, 90, 80, 75, 60, 30, 30, 20, and the first M fused matching probabilities are 90, 90, 80, 75, and 60, i.e., the target fused matching probabilities are 90, 90, 80, 75, and 60.
It should be noted that both M and N may be set by a user according to actual needs, or may be set by default of a device, which is not limited in this embodiment of the application.
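Both selection strategies can be sketched in a few lines, using the document's own example numbers:

```python
def select_by_threshold(probs, n):
    """Keep every fused matching probability greater than n times the maximum
    (0 < n < 1), preserving the original order."""
    p_max = max(probs)
    return [p for p in probs if p > n * p_max]

def select_top_m(probs, m):
    """Keep the m largest fused matching probabilities."""
    return sorted(probs, reverse=True)[:m]

probs = [90, 80, 90, 30, 60, 75, 30, 20]
print(select_by_threshold(probs, 0.8))  # [90, 80, 90, 75]: threshold is 0.8 * 90 = 72
print(select_top_m(probs, 5))           # [90, 90, 80, 75, 60]
```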
(2) And determining the standard deviation corresponding to each target fusion matching probability according to the fusion matching probability distribution.
When the multiple target fused matching probabilities are close in size, it is difficult to determine the target moving position coordinate from the moving position coordinates corresponding to them; the standard deviation is therefore used to distinguish the candidates.
The target moving position coordinate represents the deviation between the predicted pose information obtained by prediction and the position information of the automatic driving vehicle in the global offline grid map.
As an example, the standard deviation of the single target fusion matching probability in the x direction can be calculated by formula (13).
σ_x = sqrt( [ Σ_{x∈X'} Σ_{y∈Y'} P(x, y)·(x − x0)² ] / [ Σ_{x∈X'} Σ_{y∈Y'} P(x, y) ] )    (13)
Where P(x, y) is the fused matching probability corresponding to each moving position coordinate within the region centered on the moving position coordinate corresponding to the target fused matching probability and with the first threshold as the radius; X' is the set of values in the x direction of all the moving position coordinates within that region; Y' is the set of values in the y direction of all the moving position coordinates within that region; and x0 is the value in the x direction of the moving position coordinate corresponding to the target fused matching probability.
The first threshold may be set by a user according to actual needs, or may be set by default by a device, which is not limited in the embodiment of the present application.
For example, assuming that the moving position coordinate corresponding to the target fused matching probability is (2, 2) and the first threshold is 1, it can be determined that x0 is 2 and that the moving position coordinates (x, y) entering P(x, y) include (1, 2), (1, 1), (3, 2), (2, 3), (3, 1), (1, 3), and the like. These moving position coordinates and their fused matching probabilities, together with the moving position coordinate (2, 2) and the target fused matching probability, can be substituted into formula (13) to obtain the standard deviation in the x direction corresponding to the target fused matching probability at (2, 2).
The corresponding standard deviation of the single target fusion matching probability in the y direction can be calculated by the formula (14).
σ_y = sqrt( [ Σ_{x∈X'} Σ_{y∈Y'} P(x, y)·(y − y0)² ] / [ Σ_{x∈X'} Σ_{y∈Y'} P(x, y) ] )    (14)
Where P(x, y) is the fused matching probability corresponding to each moving position coordinate within the region centered on the moving position coordinate corresponding to the target fused matching probability and with the first threshold as the radius; X' is the set of values in the x direction of all the moving position coordinates within that region; Y' is the set of values in the y direction of all the moving position coordinates within that region; and y0 is the value in the y direction of the moving position coordinate corresponding to the target fused matching probability.
According to the above formula (13) and formula (14), the standard deviation of each target fusion matching probability in the x direction and the y direction can be determined.
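A sketch of equations (13) and (14): a probability-weighted standard deviation about the candidate's coordinate, taken over the moving position coordinates within the first threshold of the candidate (the Euclidean neighborhood test used here is an assumption about how "radius" is applied):

```python
import math

def local_std(fused, x0, y0, radius):
    """Weighted standard deviations of x and y about (x0, y0), using the fused
    matching probabilities P(x, y) of all moving position coordinates within
    `radius` of (x0, y0) as weights (equations (13) and (14))."""
    num_x = num_y = den = 0.0
    for x in range(len(fused)):
        for y in range(len(fused[0])):
            if (x - x0) ** 2 + (y - y0) ** 2 <= radius ** 2:
                p = fused[x][y]
                num_x += p * (x - x0) ** 2
                num_y += p * (y - y0) ** 2
                den += p
    return math.sqrt(num_x / den), math.sqrt(num_y / den)

fused = [[0.0, 0.1, 0.0],
         [0.1, 1.0, 0.1],
         [0.0, 0.1, 0.0]]
sx, sy = local_std(fused, 1, 1, radius=1)
print(sx, sy)  # small values: the probability mass sits at the candidate itself
```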
(3) And determining a matching value corresponding to each target fusion matching probability according to each target fusion matching probability and the standard deviation corresponding to each target fusion matching probability.
When the target fusion matching probability corresponding to the mobile position coordinate is high, and the target fusion matching probability corresponding to other mobile position coordinates in the area taking the mobile position coordinate as the center and the first threshold as the radius is low, the standard deviation corresponding to the mobile position coordinate is low. When the target fusion matching probability corresponding to the mobile position coordinate is high, and the target fusion matching probability corresponding to other mobile position coordinates in the area with the mobile position coordinate as the center and the first threshold as the radius is also high, the standard deviation corresponding to the mobile position coordinate is high. In this step, the matching value of the target fusion matching probability corresponding to the first condition may be increased, so that the subsequent determination of the target mobile position coordinate is more accurate.
For example, assume that the target fused matching probability corresponding to the moving position coordinate (x, y) is 99 and the fused matching probabilities of the other moving position coordinates in the region centered on (x, y) with the first threshold as the radius are 99, 98, 99, 96, and 99, while the target fused matching probability corresponding to (x', y') is 90 and the fused matching probabilities of the other moving position coordinates in the region centered on (x', y') are 60, 30, 20, and 30. Although the target fused matching probability at (x, y) is greater than that at (x', y'), the probability at (x', y') is much greater than those of its neighbors, so the matching value of the target fused matching probability corresponding to (x', y') can be increased, making the subsequently determined target moving position coordinate more accurate.
As an example, the matching value corresponding to the single target fusion matching probability can be determined by formula (15).
P'(x, y) = P(x, y)·(σ_x·σ_y)^β    (15)
Where P'(x, y) is the matching value corresponding to the target fused matching probability P(x, y), σ_x is the standard deviation of the target fused matching probability P(x, y) in the x direction, σ_y is the standard deviation of the target fused matching probability P(x, y) in the y direction, and β is a parameter that can be adjusted according to actual needs, with β < 0.
When P (x, y) is different target fusion matching probabilities, a matching value corresponding to each target fusion matching probability may be determined.
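Equation (15) in code form. With β < 0, a candidate whose neighborhood standard deviations are small (a sharp, isolated peak) gets its matching value boosted relative to a slightly higher probability sitting on a broad plateau (β = −0.5 here is an illustrative choice):

```python
def matching_value(p, sigma_x, sigma_y, beta=-0.5):
    """Equation (15): P'(x, y) = P(x, y) * (sigma_x * sigma_y) ** beta, beta < 0."""
    return p * (sigma_x * sigma_y) ** beta

sharp = matching_value(90.0, 0.2, 0.2)   # isolated peak, small local std dev
broad = matching_value(99.0, 2.0, 2.0)   # higher probability, but on a plateau
print(sharp > broad)  # True: the sharper peak wins despite the lower probability
```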
(4) And determining the position information of the automatic driving vehicle in the global offline grid map at the current moment according to the matching value corresponding to each target fusion matching probability and the predicted pose information.
Because the predicted pose information is predicted and may be inaccurate, it is necessary to determine a deviation between a position corresponding to the predicted pose information and a position corresponding to the position information of the autonomous vehicle, that is, to determine a target moving position coordinate, and then determine the position information of the autonomous vehicle in the global offline grid map at the current time according to the target moving position coordinate and the predicted pose information.
As an example, the target movement position coordinates may be determined according to equation (16).
(x̂, ŷ) = argmax_{x∈X_P, y∈Y_P} P'(x, y)    (16)
Where x̂ is the value of x in the target moving position coordinate, ŷ is the value of y in the target moving position coordinate, P'(x, y) is the matching value of the target fused matching probability corresponding to the moving position coordinate (x, y), X_P is the set of x values of the multiple target fused matching probabilities in equation (12), and Y_P is the set of y values of the multiple target fused matching probabilities in equation (12).
As an example, after the target moving position coordinate is determined, the position coordinate of the grid where the position corresponding to the predicted pose information is located may be determined in the global offline grid map, the determined position coordinate of the grid and the target moving position coordinate may be added to obtain the position coordinate of the grid where the autonomous vehicle is located in the global offline grid map, and the position coordinate of the grid where the autonomous vehicle is located in the global offline grid map may be converted according to the conversion relationship between the position coordinate of the grid and the plane coordinate of the midpoint of the grid, so as to obtain the position information of the autonomous vehicle in the global offline grid map.
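The final step can be sketched as follows. The grid-to-plane conversion here (grid origin plus a half-cell offset times the resolution, i.e. a cell-center convention) is an illustrative assumption; the patent only states that such a conversion relationship exists:

```python
def locate(pred_grid, target_offset, origin, resolution):
    """Add the target moving position coordinate (the matching offset) to the
    grid cell of the predicted pose, then convert the resulting grid cell to
    plane coordinates under an assumed cell-center convention."""
    gx = pred_grid[0] + target_offset[0]
    gy = pred_grid[1] + target_offset[1]
    # plane coordinate of the center of grid cell (gx, gy)
    px = origin[0] + (gx + 0.5) * resolution
    py = origin[1] + (gy + 0.5) * resolution
    return px, py

pos = locate(pred_grid=(100, 200), target_offset=(2, -1),
             origin=(0.0, 0.0), resolution=0.5)
print(pos)  # (51.25, 99.75)
```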
Further, the autonomous vehicle may be equipped with a wheel speed meter that may be used to estimate pose information of the autonomous vehicle. Referring to fig. 2, the autonomous driving vehicle receives wheel speed meter data including pose information sent by a wheel speed meter, and the wheel speed meter data, IMU data, and the determined position information of the autonomous driving vehicle in the global offline grid map may be input into a multi-sensor fusion filter for fusion, so as to obtain more accurate pose information of the autonomous driving vehicle in the global offline grid map, i.e., obtain high-precision pose information of the autonomous driving vehicle.
In the embodiment of the application, the pose information of the automatic driving vehicle at the current moment is predicted to obtain the predicted pose information. And generating a first grid map comprising a plurality of grids according to the predicted pose information, wherein each grid corresponds to an echo reflection intensity mean value and an elevation mean value, the echo reflection intensity mean value is an average value of echo reflection intensity values of all points in a single grid, and the elevation mean value is an average value of elevation values of all points in the single grid. And determining a first matching probability distribution of the first grid map and the global off-line grid map according to the elevation mean values of the grids in the first grid map and the global off-line grid map. And determining second matching probability distribution of the first grid map and the global off-line grid map according to the echo reflection intensity mean value of the grids of the lanes in the first grid map and the global off-line grid map. And determining the position information of the automatic driving vehicle in the global offline grid map at the current moment based on the first matching probability distribution and the second matching probability distribution. That is to say, the position information of the automatic driving vehicle in the global offline grid map at the current moment is determined according to the elevation average value of the grid and the echo reflection intensity average value of the grid, and compared with the method for positioning the automatic driving vehicle by using a single elevation average value, the method reduces the probability of inaccurate positioning and improves the accuracy rate of positioning the automatic driving vehicle.
Fig. 5 is a schematic diagram illustrating a structure of a positioning apparatus according to an exemplary embodiment, which may be implemented by software, hardware or a combination of the two as part or all of a device. Referring to fig. 5, the apparatus includes: a prediction module 501, a generation module 502, a first determination module 503, a second determination module 504 and a third determination module 505.
The prediction module 501 is configured to predict pose information of the autonomous vehicle at the current time to obtain predicted pose information;
a generating module 502, configured to generate a first grid map according to the predicted pose information, where the first grid map includes multiple grids, each grid corresponds to an echo reflection intensity mean value and an elevation mean value, the echo reflection intensity mean value is an average value of echo reflection intensity values of all points in a single grid, and the elevation mean value is an average value of elevation values of all points in the single grid;
a first determining module 503, configured to determine a first matching probability distribution of the first grid map and the global offline grid map according to an elevation mean of grids in the first grid map and the global offline grid map;
a second determining module 504, configured to determine a second matching probability distribution of the first grid map and the global offline grid map according to an echo reflection intensity average of grids including lanes in the first grid map and the global offline grid map;
and a third determining module 505, configured to determine, based on the first matching probability distribution and the second matching probability distribution, position information of the autonomous vehicle in the global offline grid map at the current moment.
In one possible implementation manner of the present application, the prediction module 501 is configured to:
predicting the pose information of the automatic driving vehicle at the current moment according to the speed of the automatic driving vehicle, IMU data, historical pose information of the automatic driving vehicle at the previous moment and the time difference between the current moment and the previous moment, wherein the historical pose information comprises historical position information and historical posture information.
In one possible implementation manner of the present application, the generating module 502 is configured to:
acquiring first point cloud data at the current moment, wherein the first point cloud data at least comprise detected echo reflection intensity values and elevation values of all points;
converting first point clouds corresponding to the first point cloud data into a world coordinate system based on the first point cloud data and the predicted pose information to obtain second point cloud data;
and generating a first raster map based on the second point cloud data and first historical point cloud data in a specified time period before the current time, wherein the first historical point cloud data is historical point cloud data in a world coordinate system.
In one possible implementation manner of the present application, the first determining module 503 is configured to:
acquiring a second grid map with a first size by taking a position corresponding to the predicted pose information as a center in the first grid map, and acquiring a third grid map with a second size by taking the position corresponding to the predicted pose information as a center in the global off-line grid map, wherein the first size is smaller than the second size;
and determining a first matching probability distribution of the first grid map and the global off-line grid map according to the elevation mean values of the grids in the second grid map and the third grid map.
In one possible implementation manner of the present application, the first determining module 503 is configured to:
moving the second grid map on the third grid map by taking the designated offset as a moving step length so as to traverse the third grid map;
after each movement of the designated offset, determining a first matching probability corresponding to a current movement position coordinate based on the elevation mean values of grids in the second grid map and the third grid map, wherein the movement position coordinate is used for indicating the displacement of a first designated point in the second grid map relative to a second designated point in the third grid map after the current movement of the second grid map;
and determining a first matching probability distribution based on all the mobile position coordinates determined in the traversal process and the first matching probabilities corresponding to all the mobile position coordinates.
In one possible implementation manner of the present application, the second determining module 504 is further configured to:
respectively determining grids comprising lanes in the second grid map and the third grid map;
moving the second grid map on the third grid map by taking the designated offset as a moving step length so as to traverse the third grid map;
after each movement of the designated offset, determining a second matching probability corresponding to a current movement position coordinate based on the echo reflection intensity mean value of grids including lanes in the second grid map and the third grid map, wherein the movement position coordinate is used for indicating the displacement of a first designated point in the second grid map relative to a second designated point in the third grid map after the current movement of the second grid map;
and determining second matching probability distribution based on all the mobile position coordinates determined in the traversal process and the second matching probabilities corresponding to all the mobile position coordinates.
In one possible implementation manner of the present application, the third determining module 505 is configured to:
determining fusion matching probability distribution of the first grid map and the global off-line grid map according to the first matching probability distribution and the second matching probability distribution;
and determining the position information of the automatic driving vehicle in the global offline grid map at the current moment based on the fusion matching probability distribution and the prediction pose information.
In one possible implementation manner of the present application, the third determining module 505 is configured to:
determining variances of the first matching probability distribution in the x direction and the y direction respectively to obtain a first variance and a second variance; determining the variances of the second matching probability distribution in the x direction and the y direction respectively to obtain a third variance and a fourth variance;
determining a first weight of the first matching probability distribution and a second weight of the second matching probability distribution according to the first variance, the second variance, the third variance and the fourth variance;
and determining fusion matching probability distribution according to the first weight, the second weight, the first matching probability distribution and the second matching probability distribution.
In one possible implementation manner of the present application, the third determining module 505 is configured to:
selecting a plurality of target fusion matching probabilities from the fusion matching probability distribution;
determining a standard deviation corresponding to each target fusion matching probability according to the fusion matching probability distribution;
determining a matching value corresponding to each target fusion matching probability according to each target fusion matching probability and a standard deviation corresponding to each target fusion matching probability;
and determining the position information of the automatic driving vehicle in the global offline grid map at the current moment according to the matching value corresponding to each target fusion matching probability and the predicted pose information.
In one possible implementation manner of the present application, the third determining module 505 is configured to:
determining a maximum fusion matching probability from the plurality of fusion matching probabilities;
and determining fusion matching probabilities which are greater than N times the maximum fusion matching probability among the fusion matching probabilities as the plurality of target fusion matching probabilities, wherein N is a value greater than 0 and less than 1.
In the embodiment of the application, the pose information of the automatic driving vehicle at the current moment is predicted to obtain the predicted pose information. And generating a first grid map comprising a plurality of grids according to the predicted pose information, wherein each grid corresponds to an echo reflection intensity mean value and an elevation mean value, the echo reflection intensity mean value is an average value of echo reflection intensity values of all points in a single grid, and the elevation mean value is an average value of elevation values of all points in the single grid. And determining a first matching probability distribution of the first grid map and the global off-line grid map according to the elevation mean values of the grids in the first grid map and the global off-line grid map. And determining second matching probability distribution of the first grid map and the global off-line grid map according to the echo reflection intensity mean value of the grids of the lanes in the first grid map and the global off-line grid map. And determining the position information of the automatic driving vehicle in the global offline grid map at the current moment based on the first matching probability distribution and the second matching probability distribution. That is to say, the position information of the automatic driving vehicle in the global offline grid map at the current moment is determined according to the elevation average value of the grid and the echo reflection intensity average value of the grid, and compared with the method for positioning the automatic driving vehicle by using a single elevation average value, the method reduces the probability of inaccurate positioning and improves the accuracy rate of positioning the automatic driving vehicle.
It should be noted that when the positioning device provided in the above embodiment performs positioning, the division into the above functional modules is merely illustrative; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the positioning device and the positioning method provided by the above embodiments belong to the same concept; their specific implementation processes are described in the method embodiments and are not repeated here.
Fig. 6 is a block diagram illustrating the structure of an apparatus 600 according to an example embodiment. The apparatus 600 may be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The apparatus 600 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the apparatus 600 includes: a processor 601 and a memory 602.
The processor 601 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 601 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 601 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, processor 601 may also include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
The memory 602 may include one or more computer-readable storage media, which may be non-transitory. The memory 602 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 602 is used to store at least one instruction for execution by the processor 601 to implement the positioning method provided by the method embodiments of the present application.
In some embodiments, the apparatus 600 may further optionally include: a peripheral interface 603 and at least one peripheral. The processor 601, memory 602, and peripheral interface 603 may be connected by buses or signal lines. Various peripheral devices may be connected to the peripheral interface 603 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 604, a touch screen display 605, a camera 606, an audio circuit 607, a positioning component 608, and a power supply 609.
The peripheral interface 603 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 601 and the memory 602. In some embodiments, the processor 601, memory 602, and peripheral interface 603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 601, the memory 602, and the peripheral interface 603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 604 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 604 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 604 comprises an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 604 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of each generation (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 604 may further include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display 605 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display 605 is a touch display screen, it also has the ability to capture touch signals on or over its surface; the touch signal may be input to the processor 601 as a control signal for processing. At this point, the display 605 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 605, providing the front panel of the device 600; in other embodiments, there may be at least two displays 605, respectively disposed on different surfaces of the device 600 or in a folded design; in still other embodiments, the display 605 may be a flexible display disposed on a curved or folded surface of the device 600. The display 605 may even be arranged in a non-rectangular irregular pattern, i.e., an irregularly shaped screen. The display 605 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 606 is used to capture images or video. Optionally, camera assembly 606 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 606 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 607 may include a microphone and a speaker. The microphone is used for collecting sound waves of the user and the environment, converting the sound waves into electrical signals, and inputting them to the processor 601 for processing or to the radio frequency circuit 604 to realize voice communication. For stereo acquisition or noise reduction, there may be multiple microphones disposed at different locations of the device 600; the microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 601 or the radio frequency circuit 604 into sound waves. The speaker may be a traditional diaphragm speaker or a piezoelectric ceramic speaker; a piezoelectric ceramic speaker can convert an electrical signal into sound waves audible to a human, or into sound waves inaudible to a human for purposes such as distance measurement. In some embodiments, the audio circuit 607 may also include a headphone jack.
The positioning component 608 is used to locate the current geographic position of the device 600 to implement navigation or LBS (Location Based Service). The positioning component 608 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
A power supply 609 is used to provide power to the various components in the device 600. The power supply 609 may be ac, dc, disposable or rechargeable. When the power supply 609 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the device 600 also includes one or more sensors 610. The one or more sensors 610 include, but are not limited to: acceleration sensor 611, gyro sensor 612, pressure sensor 613, fingerprint sensor 614, optical sensor 615, and proximity sensor 616.
The acceleration sensor 611 may detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the apparatus 600. For example, the acceleration sensor 611 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 601 may control the touch screen display 605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 611. The acceleration sensor 611 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 612 may detect a body direction and a rotation angle of the device 600, and the gyro sensor 612 may cooperate with the acceleration sensor 611 to acquire a 3D motion of the user on the device 600. The processor 601 may implement the following functions according to the data collected by the gyro sensor 612: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 613 can be disposed on the side bezel of device 600 and/or underneath touch display screen 605. When the pressure sensor 613 is disposed on the side frame of the device 600, the holding signal of the user to the device 600 can be detected, and the processor 601 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 613. When the pressure sensor 613 is disposed at the lower layer of the touch display screen 605, the processor 601 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 605. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 614 is used for collecting a fingerprint of a user, and the processor 601 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 614, or the fingerprint sensor 614 identifies the identity of the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 601 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 614 may be disposed on the front, back, or side of the device 600. When a physical key or vendor Logo is provided on the device 600, the fingerprint sensor 614 may be integrated with the physical key or vendor Logo.
The optical sensor 615 is used to collect the ambient light intensity. In one embodiment, processor 601 may control the display brightness of touch display 605 based on the ambient light intensity collected by optical sensor 615. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 605 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 605 is turned down. In another embodiment, the processor 601 may also dynamically adjust the shooting parameters of the camera assembly 606 according to the ambient light intensity collected by the optical sensor 615.
The proximity sensor 616, also known as a distance sensor, is typically disposed on the front panel of the device 600 and is used to capture the distance between the user and the front of the device 600. In one embodiment, when the proximity sensor 616 detects that the distance between the user and the front surface of the device 600 is gradually decreasing, the processor 601 controls the touch display 605 to switch from the bright screen state to the dark screen state; when the proximity sensor 616 detects that the distance is gradually increasing, the processor 601 controls the touch display 605 to switch from the dark screen state to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 6 does not constitute a limitation of the device 600, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
In some embodiments, a computer-readable storage medium is also provided, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the positioning method in the above-mentioned embodiments. For example, the computer readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It is noted that the computer-readable storage medium referred to herein may be a non-volatile storage medium, in other words, a non-transitory storage medium.
It should be understood that all or part of the steps for implementing the above embodiments may be implemented by software, hardware, firmware, or any combination thereof. When implemented in software, the embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions, which may be stored in the computer-readable storage medium described above.
That is, in some embodiments, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the steps of the positioning method described above.
The above-mentioned embodiments are provided not to limit the present application, and any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (22)

1. A positioning method for use in an autonomous vehicle, the method comprising:
predicting the pose information of the automatic driving vehicle at the current moment to obtain predicted pose information;
generating a first grid map according to the predicted pose information, wherein the first grid map comprises a plurality of grids, each grid corresponds to an echo reflection intensity mean value and an elevation mean value, the echo reflection intensity mean value is an average value of echo reflection intensity values of all points in a single grid, and the elevation mean value is an average value of elevation values of all points in the single grid;
determining a first matching probability distribution of the first grid map and a global off-line grid map according to the elevation mean value of grids in the first grid map and the global off-line grid map;
determining a second matching probability distribution of the first grid map and the global off-line grid map according to the echo reflection intensity mean value of grids including lanes in the first grid map and the global off-line grid map;
determining position information of the autonomous vehicle in the global offline grid map at the current moment based on the first matching probability distribution and the second matching probability distribution.
2. The method of claim 1, wherein the predicting pose information for the autonomous vehicle at the current time comprises:
and predicting the pose information of the automatic driving vehicle at the current moment according to the speed of the automatic driving vehicle, inertial measurement unit IMU data, historical pose information of the previous moment and the time difference between the current moment and the previous moment, wherein the historical pose information comprises historical position information and historical posture information.
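A minimal dead-reckoning sketch of the prediction step in claim 2, assuming a planar constant-velocity model with an IMU-supplied yaw rate. The claim does not fix the motion model, and all names here are illustrative:

```python
import math

def predict_pose(prev_pose, speed, yaw_rate, dt):
    """Predict the pose at the current moment from the previous pose.

    prev_pose: (x, y, heading in radians) -- the historical pose information;
    speed: vehicle speed; yaw_rate: angular rate from the IMU;
    dt: time difference between the current moment and the previous moment.
    """
    x, y, heading = prev_pose
    heading_new = heading + yaw_rate * dt
    # advance along the average heading over the interval (midpoint model)
    mid = heading + 0.5 * yaw_rate * dt
    return (x + speed * dt * math.cos(mid),
            y + speed * dt * math.sin(mid),
            heading_new)
```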
3. The method of claim 2, wherein the generating a first grid map from the predicted pose information comprises:
acquiring first point cloud data at the current moment, wherein the first point cloud data at least comprise detected echo reflection intensity values and elevation values of all points;
converting a first point cloud corresponding to the first point cloud data into a world coordinate system based on the first point cloud data and the predicted pose information to obtain second point cloud data;
and generating the first raster map based on the second point cloud data and first historical point cloud data in a specified time period before the current time, wherein the first historical point cloud data is historical point cloud data in a world coordinate system.
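The coordinate conversion in claim 3 can be illustrated with a planar rigid transform. The actual system would use the full 6-DoF pose, so this 2-D version and its names are assumptions:

```python
import math

def to_world(points, pose):
    """Transform points from the vehicle frame into the world frame.

    points: iterable of (x, y, z, intensity) in the vehicle frame;
    pose: (x0, y0, heading) -- the predicted pose in the world frame.
    Elevation and echo reflection intensity are carried through unchanged.
    """
    x0, y0, heading = pose
    c, s = math.cos(heading), math.sin(heading)
    return [(x0 + c * x - s * y, y0 + s * x + c * y, z, i)
            for x, y, z, i in points]
```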
4. The method of claim 1, wherein determining a first match probability distribution for the first grid map and a global offline grid map from an elevation mean of a grid of the first grid map and the global offline grid map comprises:
acquiring a second grid map with a first size by taking a position corresponding to the predicted pose information as a center in the first grid map, and acquiring a third grid map with a second size by taking a position corresponding to the predicted pose information as a center in the global offline grid map, wherein the first size is smaller than the second size;
and determining a first matching probability distribution of the first grid map and the global off-line grid map according to the elevation mean values of the grids in the second grid map and the third grid map.
5. The method of claim 4, wherein determining a first match probability distribution of the first grid map and the global offline grid map according to an elevation mean of a grid of the second grid map and the third grid map comprises:
moving the second grid map on the third grid map by taking a specified offset as a moving step length so as to traverse the third grid map;
after each designated offset is moved, determining a first matching probability corresponding to a current moving position coordinate based on the elevation mean values of grids in the second grid map and the third grid map, wherein the moving position coordinate is used for indicating the displacement of a first designated point in the second grid map relative to a second designated point in the third grid map after the second grid map is moved at this time;
and determining the first matching probability distribution based on all the mobile position coordinates determined in the traversal process and the first matching probabilities corresponding to all the mobile position coordinates.
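One plausible realization of the traversal in claim 5, assuming a Gaussian-style score over the mean squared elevation difference. The patent does not specify the probability formula, and the maps are modeled as sparse dicts for brevity:

```python
import math

def match_probability_distribution(local_map, global_map, max_offset=2):
    """Slide the small local map over the larger global map and score offsets.

    Both maps are dicts {(ix, iy): elevation_mean}. Each move position
    coordinate (dx, dy) gets an unnormalised likelihood that decays with
    the mean squared elevation difference over the overlapping grids.
    """
    dist = {}
    for dx in range(-max_offset, max_offset + 1):
        for dy in range(-max_offset, max_offset + 1):
            err, n = 0.0, 0
            for (ix, iy), h in local_map.items():
                g = global_map.get((ix + dx, iy + dy))
                if g is not None:
                    err += (h - g) ** 2
                    n += 1
            if n:
                dist[(dx, dy)] = math.exp(-err / n)
    return dist
```

The second matching probability distribution of claim 6 follows the same pattern, restricted to lane grids and using the echo reflection intensity means instead of the elevation means.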
6. The method of claim 4, wherein prior to determining the second matching probability distribution of the first grid map and the global offline grid map based on the echo reflection intensity averages of grids of lanes included in the first grid map and the global offline grid map, further comprising:
respectively determining grids comprising lanes in the second grid map and the third grid map;
correspondingly, the determining a second matching probability distribution of the first grid map and the global off-line grid map according to the echo reflection intensity mean values of the grids including the lanes in the first grid map and the global off-line grid map includes:
moving the second grid map on the third grid map by taking a specified offset as a moving step length so as to traverse the third grid map;
after the specified offset is moved every time, determining a second matching probability corresponding to a current moving position coordinate based on the elevation mean value of grids including lanes in the second grid map and the third grid map, wherein the moving position coordinate is used for indicating the displacement of a first specified point in the second grid map relative to a second specified point in the third grid map after the second grid map is moved this time;
and determining second matching probability distribution based on all the mobile position coordinates determined in the traversal process and second matching probabilities corresponding to all the mobile position coordinates.
7. The method of claim 1, wherein determining the location information of the autonomous vehicle in the global offline grid map at the current time based on the first and second matching probability distributions comprises:
determining a fusion matching probability distribution of the first grid map and the global off-line grid map according to the first matching probability distribution and the second matching probability distribution;
and determining the position information of the automatic driving vehicle in the global offline grid map at the current moment based on the fusion matching probability distribution and the predicted pose information.
8. The method of claim 7, wherein determining the fused match probability distribution of the first grid map and the global offline grid map from the first match probability distribution and the second match probability distribution comprises:
determining the variances of the first matching probability distribution in the x direction and the y direction respectively to obtain a first variance and a second variance; determining the variances of the second matching probability distribution in the x direction and the y direction respectively to obtain a third variance and a fourth variance;
determining a first weight of the first matching probability distribution and a second weight of the second matching probability distribution according to the first variance, the second variance, the third variance, and the fourth variance;
determining the fusion matching probability distribution according to the first weight, the second weight, the first matching probability distribution and the second matching probability distribution.
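A sketch of the variance-based fusion in claim 8, assuming inverse-variance weighting, which is one common choice; the claim leaves the exact weight computation open:

```python
def fuse_distributions(p1, p2, var1, var2):
    """Fuse two matching-probability distributions with inverse-variance weights.

    p1, p2: dicts {offset: probability}; var1, var2: (var_x, var_y) of the
    first and second distribution. A tighter (lower-variance) distribution
    is more informative and therefore receives the larger weight.
    """
    s1 = var1[0] + var1[1]
    s2 = var2[0] + var2[1]
    w1 = (1.0 / s1) / (1.0 / s1 + 1.0 / s2)
    w2 = 1.0 - w1
    keys = set(p1) | set(p2)
    return {k: w1 * p1.get(k, 0.0) + w2 * p2.get(k, 0.0) for k in keys}
```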
9. The method of claim 7 or 8, wherein the determining the location information of the autonomous vehicle in the global offline grid map at the current time based on the fused match probability distribution and the predicted pose information comprises:
selecting a plurality of target fusion matching probabilities from the fusion matching probability distribution;
determining a standard deviation corresponding to each target fusion matching probability according to the fusion matching probability distribution;
determining a matching value corresponding to each target fusion matching probability according to each target fusion matching probability and a standard deviation corresponding to each target fusion matching probability;
and determining the position information of the automatic driving vehicle in the global offline grid map at the current moment according to the matching value corresponding to each target fusion matching probability and the predicted pose information.
10. The method of claim 9, wherein said selecting a plurality of target fusion match probabilities from said fusion match probability distribution comprises:
determining a maximum fused match probability from the plurality of fused match probabilities;
determining fusion matching probabilities that are greater than N times the maximum fusion matching probability as the target fusion matching probabilities, wherein N is a value greater than 0 and less than 1.
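The threshold selection in claim 10 can be sketched directly, noting that N must be a fraction strictly between 0 and 1 rather than an integer (the function name is an assumption):

```python
def select_targets(fused, n=0.8):
    """Keep every offset whose fused matching probability exceeds n times
    the maximum fused matching probability, with 0 < n < 1."""
    peak = max(fused.values())
    return {k: p for k, p in fused.items() if p > n * peak}
```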
11. A positioning device for use in an autonomous vehicle, said device comprising:
the prediction module is used for predicting the pose information of the automatic driving vehicle at the current moment to obtain predicted pose information;
the generation module is used for generating a first grid map according to the predicted pose information, the first grid map comprises a plurality of grids, each grid corresponds to an echo reflection intensity mean value and an elevation mean value, the echo reflection intensity mean value is an average value of echo reflection intensity values of all points in a single grid, and the elevation mean value is an average value of elevation values of all points in the single grid;
the first determining module is used for determining first matching probability distribution of the first grid map and the global off-line grid map according to the elevation mean value of grids in the first grid map and the global off-line grid map;
the second determining module is used for determining second matching probability distribution of the first grid map and the global off-line grid map according to the echo reflection intensity mean value of grids including lanes in the first grid map and the global off-line grid map;
and the third determination module is used for determining the position information of the automatic driving vehicle in the global offline grid map at the current moment based on the first matching probability distribution and the second matching probability distribution.
12. The apparatus of claim 11, wherein the prediction module is to:
and predicting the pose information of the automatic driving vehicle at the current moment according to the speed of the automatic driving vehicle, inertial measurement unit IMU data, historical pose information of the previous moment and the time difference between the current moment and the previous moment, wherein the historical pose information comprises historical position information and historical posture information.
13. The apparatus of claim 12, wherein the generation module is to:
acquiring first point cloud data at the current moment, wherein the first point cloud data at least comprise detected echo reflection intensity values and elevation values of all points;
converting a first point cloud corresponding to the first point cloud data into a world coordinate system based on the first point cloud data and the predicted pose information to obtain second point cloud data;
and generating the first raster map based on the second point cloud data and first historical point cloud data in a specified time period before the current time, wherein the first historical point cloud data is historical point cloud data in a world coordinate system.
14. The apparatus of claim 11, wherein the first determination module is to:
acquiring a second grid map with a first size by taking a position corresponding to the predicted pose information as a center in the first grid map, and acquiring a third grid map with a second size by taking a position corresponding to the predicted pose information as a center in the global offline grid map, wherein the first size is smaller than the second size;
and determining a first matching probability distribution of the first grid map and the global off-line grid map according to the elevation mean values of the grids in the second grid map and the third grid map.
15. The apparatus of claim 14, wherein the first determination module is to:
moving the second grid map on the third grid map by taking a specified offset as a moving step length so as to traverse the third grid map;
after each designated offset is moved, determining a first matching probability corresponding to a current moving position coordinate based on the elevation mean values of grids in the second grid map and the third grid map, wherein the moving position coordinate is used for indicating the displacement of a first designated point in the second grid map relative to a second designated point in the third grid map after the second grid map is moved at this time;
and determining the first matching probability distribution based on all the mobile position coordinates determined in the traversal process and the first matching probabilities corresponding to all the mobile position coordinates.
16. The apparatus of claim 14, wherein the second determining module is further for:
respectively determining grids comprising lanes in the second grid map and the third grid map;
moving the second grid map on the third grid map by taking a specified offset as a moving step length so as to traverse the third grid map;
after the specified offset is moved every time, determining a second matching probability corresponding to a current moving position coordinate based on the elevation mean value of grids including lanes in the second grid map and the third grid map, wherein the moving position coordinate is used for indicating the displacement of a first specified point in the second grid map relative to a second specified point in the third grid map after the second grid map is moved this time;
and determining second matching probability distribution based on all the mobile position coordinates determined in the traversal process and second matching probabilities corresponding to all the mobile position coordinates.
17. The apparatus of claim 11, wherein the third determination module is to:
determining a fusion matching probability distribution of the first grid map and the global off-line grid map according to the first matching probability distribution and the second matching probability distribution;
and determining the position information of the automatic driving vehicle in the global offline grid map at the current moment based on the fusion matching probability distribution and the predicted pose information.
18. The apparatus of claim 17, wherein the third determination module is to:
determining the variances of the first matching probability distribution in the x direction and the y direction respectively to obtain a first variance and a second variance; determining the variances of the second matching probability distribution in the x direction and the y direction respectively to obtain a third variance and a fourth variance;
determining a first weight of the first matching probability distribution and a second weight of the second matching probability distribution according to the first variance, the second variance, the third variance, and the fourth variance;
determining the fusion matching probability distribution according to the first weight, the second weight, the first matching probability distribution and the second matching probability distribution.
19. The apparatus of claim 17 or 18, wherein the third determining module is to:
selecting a plurality of target fusion matching probabilities from the fusion matching probability distribution;
determining a standard deviation corresponding to each target fusion matching probability according to the fusion matching probability distribution;
determining a matching value corresponding to each target fusion matching probability according to each target fusion matching probability and a standard deviation corresponding to each target fusion matching probability;
and determining the position information of the automatic driving vehicle in the global offline grid map at the current moment according to the matching value corresponding to each target fusion matching probability and the predicted pose information.
20. The apparatus of claim 19, wherein the third determination module is configured to:
determining a maximum fusion matching probability from the fusion matching probabilities;
determining, as the target fusion matching probabilities, each fusion matching probability that is greater than N times the maximum fusion matching probability, wherein N is a value greater than 0 and less than 1.
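The selection rule of claim 20 reduces to a relative threshold against the peak probability. In this sketch the threshold `n = 0.8` is an arbitrary example value, not one taken from the patent.

```python
def select_targets(fused_probs, n=0.8):
    """Keep every fusion matching probability exceeding n times the maximum,
    with 0 < n < 1 (n = 0.8 is an illustrative choice)."""
    peak = max(fused_probs)
    return [p for p in fused_probs if p > n * peak]
```

For example, with probabilities `[0.1, 0.5, 0.45, 0.2]` and `n = 0.8`, only values above `0.4` survive, yielding `[0.5, 0.45]` as the target fusion matching probabilities.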
21. An apparatus comprising a memory for storing a computer program and a processor for executing the computer program stored in the memory to perform the steps of the method of any one of claims 1 to 10.
22. A computer-readable storage medium, characterized in that the storage medium has stored therein a computer program which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 10.
CN201911360256.2A 2019-12-25 2019-12-25 Positioning method, device, equipment and storage medium Active CN110967011B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911360256.2A CN110967011B (en) 2019-12-25 2019-12-25 Positioning method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911360256.2A CN110967011B (en) 2019-12-25 2019-12-25 Positioning method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110967011A true CN110967011A (en) 2020-04-07
CN110967011B CN110967011B (en) 2022-06-28

Family

ID=70036613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911360256.2A Active CN110967011B (en) 2019-12-25 2019-12-25 Positioning method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110967011B (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111596298A (en) * 2020-05-13 2020-08-28 北京百度网讯科技有限公司 Target object positioning method, device, equipment and storage medium
CN111649739A (en) * 2020-06-02 2020-09-11 北京百度网讯科技有限公司 Positioning method and apparatus, autonomous vehicle, electronic device, and storage medium
CN111683203A (en) * 2020-06-12 2020-09-18 达闼机器人有限公司 Grid map generation method and device and computer readable storage medium
CN111694009A (en) * 2020-05-07 2020-09-22 南昌大学 Positioning system, method and device
CN111949816A (en) * 2020-06-22 2020-11-17 北京百度网讯科技有限公司 Positioning processing method and device, electronic equipment and storage medium
CN112083725A (en) * 2020-09-04 2020-12-15 湖南大学 Structure-shared multi-sensor fusion positioning system for automatic driving vehicle
CN112802103A (en) * 2021-02-01 2021-05-14 深圳万拓科技创新有限公司 Pose repositioning method, device, equipment and medium of laser sweeper
CN112902951A (en) * 2021-01-21 2021-06-04 深圳市镭神智能***有限公司 Positioning method, device and equipment of driving equipment and storage medium
CN113031006A (en) * 2021-02-26 2021-06-25 杭州海康机器人技术有限公司 Method, device and equipment for determining positioning information
CN113734167A (en) * 2021-09-10 2021-12-03 苏州智加科技有限公司 Vehicle control method, device, terminal and storage medium
CN113865598A (en) * 2020-06-30 2021-12-31 华为技术有限公司 Positioning map generation method, positioning method and positioning device
CN113916243A (en) * 2020-07-07 2022-01-11 长沙智能驾驶研究院有限公司 Vehicle positioning method, device, equipment and storage medium for target scene area
CN114199245A (en) * 2021-10-28 2022-03-18 北京汽车研究总院有限公司 Positioning method and device for automatic driving vehicle, vehicle and storage medium
CN115220009A (en) * 2021-04-15 2022-10-21 阿里巴巴新加坡控股有限公司 Data processing method and device, electronic equipment and computer storage medium
TWI781829B (en) * 2021-11-22 2022-10-21 財團法人車輛研究測試中心 Fusion vehicle localization method and system
CN115877429A (en) * 2023-02-07 2023-03-31 安徽蔚来智驾科技有限公司 Positioning method and device for automatic driving vehicle, storage medium and vehicle
WO2023060592A1 (en) * 2021-10-15 2023-04-20 华为技术有限公司 Positioning method and device
CN117991259A (en) * 2024-04-07 2024-05-07 陕西欧卡电子智能科技有限公司 Unmanned ship repositioning method and device based on laser radar and millimeter wave radar

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170100615A1 (en) * 2015-10-09 2017-04-13 Leonard E. Doten Wildfire aerial fighting system utilizing lidar
CN106908775A (en) * 2017-03-08 2017-06-30 同济大学 A kind of unmanned vehicle real-time location method based on laser reflection intensity
CN108345008A (en) * 2017-01-23 2018-07-31 郑州宇通客车股份有限公司 A kind of target object detecting method, point cloud data extracting method and device
CN108732582A (en) * 2017-04-20 2018-11-02 百度在线网络技术(北京)有限公司 Vehicle positioning method and device
CN108732603A (en) * 2017-04-17 2018-11-02 百度在线网络技术(北京)有限公司 Method and apparatus for positioning vehicle
CN109725329A (en) * 2019-02-20 2019-05-07 苏州风图智能科技有限公司 A kind of unmanned vehicle localization method and device
CN109781119A (en) * 2017-11-15 2019-05-21 百度在线网络技术(北京)有限公司 A kind of laser point cloud localization method and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109839112B (en) * 2019-03-11 2023-04-07 中南大学 Underground operation equipment positioning method, device and system and storage medium
CN109934868B (en) * 2019-03-18 2021-04-06 北京理工大学 Vehicle positioning method based on matching of three-dimensional point cloud and satellite map


Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111694009A (en) * 2020-05-07 2020-09-22 南昌大学 Positioning system, method and device
CN111596298B (en) * 2020-05-13 2022-10-14 北京百度网讯科技有限公司 Target object positioning method, device, equipment and storage medium
CN111596298A (en) * 2020-05-13 2020-08-28 北京百度网讯科技有限公司 Target object positioning method, device, equipment and storage medium
KR20210042856A (en) * 2020-06-02 2021-04-20 베이징 바이두 넷컴 사이언스 테크놀로지 컴퍼니 리미티드 Positioning method and apparatus, autonomous driving vehicle, electronic device and storage medium
CN111649739B (en) * 2020-06-02 2023-09-01 阿波罗智能技术(北京)有限公司 Positioning method and device, automatic driving vehicle, electronic equipment and storage medium
CN111649739A (en) * 2020-06-02 2020-09-11 北京百度网讯科技有限公司 Positioning method and apparatus, autonomous vehicle, electronic device, and storage medium
US11789455B2 (en) 2020-06-02 2023-10-17 Beijing Baidu Netcom Science Technology Co., Ltd. Control of autonomous vehicle based on fusion of pose information and visual data
KR102570094B1 (en) 2020-06-02 2023-08-23 베이징 바이두 넷컴 사이언스 테크놀로지 컴퍼니 리미티드 Positioning method and apparatus, autonomous driving vehicle, electronic device and storage medium
CN111683203A (en) * 2020-06-12 2020-09-18 达闼机器人有限公司 Grid map generation method and device and computer readable storage medium
US11972523B2 (en) 2020-06-12 2024-04-30 Cloudminds Robotics Co., Ltd. Grid map generation method and device, and computer-readable storage medium
CN111683203B (en) * 2020-06-12 2021-11-09 达闼机器人有限公司 Grid map generation method and device and computer readable storage medium
CN111949816A (en) * 2020-06-22 2020-11-17 北京百度网讯科技有限公司 Positioning processing method and device, electronic equipment and storage medium
CN111949816B (en) * 2020-06-22 2023-09-26 北京百度网讯科技有限公司 Positioning processing method, device, electronic equipment and storage medium
CN113865598A (en) * 2020-06-30 2021-12-31 华为技术有限公司 Positioning map generation method, positioning method and positioning device
WO2022007776A1 (en) * 2020-07-07 2022-01-13 长沙智能驾驶研究院有限公司 Vehicle positioning method and apparatus for target scene region, device and storage medium
CN113916243A (en) * 2020-07-07 2022-01-11 长沙智能驾驶研究院有限公司 Vehicle positioning method, device, equipment and storage medium for target scene area
CN113916243B (en) * 2020-07-07 2022-10-14 长沙智能驾驶研究院有限公司 Vehicle positioning method, device, equipment and storage medium for target scene area
CN112083725A (en) * 2020-09-04 2020-12-15 湖南大学 Structure-shared multi-sensor fusion positioning system for automatic driving vehicle
CN112902951A (en) * 2021-01-21 2021-06-04 深圳市镭神智能***有限公司 Positioning method, device and equipment of driving equipment and storage medium
CN112802103A (en) * 2021-02-01 2021-05-14 深圳万拓科技创新有限公司 Pose repositioning method, device, equipment and medium of laser sweeper
CN112802103B (en) * 2021-02-01 2024-02-09 深圳万拓科技创新有限公司 Pose repositioning method, device, equipment and medium of laser sweeper
CN113031006B (en) * 2021-02-26 2023-06-27 杭州海康机器人股份有限公司 Method, device and equipment for determining positioning information
CN113031006A (en) * 2021-02-26 2021-06-25 杭州海康机器人技术有限公司 Method, device and equipment for determining positioning information
CN115220009A (en) * 2021-04-15 2022-10-21 阿里巴巴新加坡控股有限公司 Data processing method and device, electronic equipment and computer storage medium
CN113734167A (en) * 2021-09-10 2021-12-03 苏州智加科技有限公司 Vehicle control method, device, terminal and storage medium
WO2023060592A1 (en) * 2021-10-15 2023-04-20 华为技术有限公司 Positioning method and device
CN114199245A (en) * 2021-10-28 2022-03-18 北京汽车研究总院有限公司 Positioning method and device for automatic driving vehicle, vehicle and storage medium
TWI781829B (en) * 2021-11-22 2022-10-21 財團法人車輛研究測試中心 Fusion vehicle localization method and system
CN115877429A (en) * 2023-02-07 2023-03-31 安徽蔚来智驾科技有限公司 Positioning method and device for automatic driving vehicle, storage medium and vehicle
CN115877429B (en) * 2023-02-07 2023-07-07 安徽蔚来智驾科技有限公司 Positioning method and device for automatic driving vehicle, storage medium and vehicle
CN117991259A (en) * 2024-04-07 2024-05-07 陕西欧卡电子智能科技有限公司 Unmanned ship repositioning method and device based on laser radar and millimeter wave radar

Also Published As

Publication number Publication date
CN110967011B (en) 2022-06-28

Similar Documents

Publication Publication Date Title
CN110967011B (en) Positioning method, device, equipment and storage medium
WO2021128777A1 (en) Method, apparatus, device, and storage medium for detecting travelable region
CN111126182B (en) Lane line detection method, lane line detection device, electronic device, and storage medium
CN110986930B (en) Equipment positioning method and device, electronic equipment and storage medium
CN110148178B (en) Camera positioning method, device, terminal and storage medium
CN112270718B (en) Camera calibration method, device, system and storage medium
CN110920631B (en) Method and device for controlling vehicle, electronic equipment and readable storage medium
CN112150560B (en) Method, device and computer storage medium for determining vanishing point
CN110570465B (en) Real-time positioning and map construction method and device and computer readable storage medium
CN110633336B (en) Method and device for determining laser data search range and storage medium
CN111928861B (en) Map construction method and device
CN110775056B (en) Vehicle driving method, device, terminal and medium based on radar detection
CN111754564B (en) Video display method, device, equipment and storage medium
CN111538009B (en) Radar point marking method and device
CN113432620B (en) Error estimation method and device, vehicle-mounted terminal and storage medium
CN112734346B (en) Method, device and equipment for determining lane coverage and readable storage medium
CN111664860B (en) Positioning method and device, intelligent equipment and storage medium
CN114623836A (en) Vehicle pose determining method and device and vehicle
CN112835021A (en) Positioning method, device, system and computer readable storage medium
CN114093020A (en) Motion capture method, motion capture device, electronic device and storage medium
CN112241662B (en) Method and device for detecting drivable area
CN113689484B (en) Method and device for determining depth information, terminal and storage medium
CN112050088B (en) Pipeline detection method and device and computer storage medium
CN117372320A (en) Quality detection method, device and equipment for positioning map and readable storage medium
CN117055016A (en) Detection precision testing method and device of sensor and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200612

Address after: 215100 16 / F, Lingyu Business Plaza, 66 qinglonggang Road, high speed rail new town, Xiangcheng District, Suzhou City, Jiangsu Province

Applicant after: SUZHOU ZHIJIA TECHNOLOGY Co.,Ltd.

Applicant after: Zhijia (Cayman) Company

Applicant after: Zhijia (USA)

Address before: 215100 16 / F, Lingyu Business Plaza, 66 qinglonggang Road, high speed rail new town, Xiangcheng District, Suzhou City, Jiangsu Province

Applicant before: SUZHOU ZHIJIA TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20210310

Address after: 16 / F, Lingyu Business Plaza, 66 qinglonggang Road, high speed rail new town, Xiangcheng District, Suzhou City, Jiangsu Province

Applicant after: SUZHOU ZHIJIA TECHNOLOGY Co.,Ltd.

Applicant after: Zhijia (USA)

Address before: 215100 16 / F, Lingyu Business Plaza, 66 qinglonggang Road, high speed rail new town, Xiangcheng District, Suzhou City, Jiangsu Province

Applicant before: SUZHOU ZHIJIA TECHNOLOGY Co.,Ltd.

Applicant before: Zhijia (Cayman) Company

Applicant before: Zhijia (USA)

GR01 Patent grant