CN113776544A - Point cloud map updating method and device, electronic equipment and positioning system - Google Patents


Info

Publication number
CN113776544A
Authority
CN
China
Prior art keywords
point
map
point cloud
laser radar
scanning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010522053.5A
Other languages
Chinese (zh)
Inventor
李宁
吕吉鑫
万富华
孙杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202010522053.5A priority Critical patent/CN113776544A/en
Publication of CN113776544A publication Critical patent/CN113776544A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G01C21/32 Structuring or formatting of map data
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3833 Creation or updating of map data characterised by the source of data
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3859 Differential updating map data
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The embodiment of the application provides a point cloud map updating method, a point cloud map updating device, an electronic device, and a positioning system. A scanned laser radar point cloud is compared with a prestored point cloud map to determine the difference points between the two, and the observation probability of each difference point is determined, where the observation probability represents the probability that a scanning point and a map point are the same point. If a map point in the point cloud map is a difference point relative to the laser radar point cloud and its observation probability is small, the probability that a scanning point associated with that map point exists in the actual scene is low; if the observation probability of the map point is smaller than a preset threshold, the map point is considered not to exist in the actual scene at all, so the map point is deleted from the point cloud map. The point cloud map is thereby updated, and the updated point cloud map conforms more closely to the actual scene and is more accurate.

Description

Point cloud map updating method and device, electronic equipment and positioning system
Technical Field
The present disclosure relates to the field of laser radar positioning technologies, and in particular, to a method and an apparatus for updating a point cloud map, an electronic device, and a positioning system.
Background
Currently, unmanned vehicles, mobile robots, and similar devices have begun to be positioned using laser radar (lidar) positioning technology. The basic principle of lidar positioning is as follows: a laser radar is installed on a mobile device such as an unmanned vehicle or a mobile robot; the scene is scanned with the laser radar to obtain scan data; and the scan data is matched against a point cloud map constructed in advance for the scene, finally yielding the positioning result of the mobile device.
In laser radar positioning technology, the point cloud map is the key component. At present, a point cloud map is constructed from the point clouds scanned by the laser radar. In practical applications, the point cloud map needs to be updated in real time: the laser radar scans the point cloud of the current scene in real time, the scanned laser radar point cloud is compared with the point cloud map, and once differences are found, the point cloud map is updated according to the laser radar point cloud scanned in real time.
However, in practical positioning applications the scanned laser radar point cloud may be inconsistent with the actual scene. For example, if a building is occluded at the moment of scanning, no points corresponding to the building appear in the scanned laser radar point cloud; when the point cloud map is updated, the points corresponding to the building are then deleted, and the updated point cloud map is not accurate enough.
Disclosure of Invention
An object of the embodiments of the present application is to provide a method and an apparatus for updating a point cloud map, an electronic device, and a positioning system, so that the point cloud map is more accurate. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a point cloud map updating method, where the method includes:
the method comprises the steps of obtaining a laser radar point cloud of a current scene scanned by a laser radar and a pre-stored point cloud map of the current scene, wherein the laser radar point cloud comprises a plurality of scanning points, and the point cloud map comprises a plurality of map points;
comparing the laser radar point cloud with the point cloud map, and determining difference points of the laser radar point cloud and the point cloud map;
determining the observation probability of each difference point, wherein the observation probability represents the probability that a scanning point and a map point are the same point, an initial value of the observation probability is set for a map point when it is added to the point cloud map, and the observation probability is reduced each time the laser radar fails to scan a scanning point that is the same point as that map point;
and deleting the difference points of which the observation probability is smaller than a preset threshold value in the point cloud map to obtain an updated point cloud map.
Optionally, after the step of obtaining the lidar point cloud of the current scene scanned by the lidar, the method further includes:
acquiring the point cloud blocks of the laser radar point cloud and current motion data, wherein a point cloud block is the minimum unit of the laser radar point cloud;
calculating the pose transformation relation of each point cloud block relative to the initial point cloud block by using the current motion data;
converting each point cloud block into the coordinate system of the initial point cloud block according to the pose transformation relation;
and splicing the converted point cloud blocks to obtain an updated laser radar point cloud.
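The motion compensation steps above can be sketched as follows. This is a minimal 2-D illustration, not the patent's implementation: a real system would derive full 6-DoF poses from the motion data, and the pose values, point format, and function names here are assumptions.

```python
import math

def transform_block(block, pose):
    """Convert one point cloud block into the initial block's coordinate
    system, given its (x, y, yaw) pose transformation relative to that
    block. (2-D rigid transform for illustration only.)"""
    x0, y0, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    return [(c * px - s * py + x0, s * px + c * py + y0) for px, py in block]

def motion_compensate(blocks, poses):
    """Splice all converted blocks into one updated laser radar point cloud."""
    cloud = []
    for block, pose in zip(blocks, poses):
        cloud.extend(transform_block(block, pose))
    return cloud

# The first block is the initial block (identity pose); the second was
# captured after the sensor moved 0.5 m forward (hypothetical values).
cloud = motion_compensate([[(1.0, 0.0)], [(1.0, 0.0)]],
                          [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0)])
# cloud == [(1.0, 0.0), (1.5, 0.0)]
```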
Optionally, after the step of obtaining the lidar point cloud of the current scene scanned by the lidar, the method further includes:
acquiring point cloud information of laser radar point cloud;
performing semantic segmentation on the laser radar point cloud based on the point cloud information to obtain the semantic type of each scanning point in the laser radar point cloud;
identifying a target scanning point with a designated semantic type, wherein the designated semantic type is a semantic type with motion characteristics;
and deleting the target scanning point from the laser radar point cloud to obtain an updated laser radar point cloud.
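The semantic filtering step can be sketched as below. The label names and point format are hypothetical; the patent fixes neither a label set nor a segmentation model, only that points of semantic types with motion characteristics are removed.

```python
# Hypothetical set of designated semantic types with motion characteristics.
DYNAMIC_TYPES = {"pedestrian", "vehicle", "cyclist"}

def remove_dynamic_points(points, labels, dynamic_types=DYNAMIC_TYPES):
    """Drop target scanning points whose semantic type has motion
    characteristics; the rest form the updated laser radar point cloud."""
    return [p for p, lab in zip(points, labels) if lab not in dynamic_types]

scan = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.5)]
labels = ["building", "vehicle", "tree"]
filtered = remove_dynamic_points(scan, labels)
# filtered == [(0.0, 0.0), (2.0, 0.5)]
```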
Optionally, the step of comparing the laser radar point cloud with the point cloud map to determine the difference point between the laser radar point cloud and the point cloud map includes:
mapping the laser radar point cloud to a coordinate system to which a point cloud map belongs to obtain the coordinates of each scanning point in the laser radar point cloud under the coordinate system;
acquiring coordinates of each map point in the point cloud map;
and if, according to the coordinates of each map point and each scanning point, only a map point or only a scanning point exists at a position under the coordinate system, determining that the point at that position is a difference point.
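Once both clouds share a coordinate system, one way to realize this comparison is a nearest-neighbour radius test: a point with no counterpart within a small radius in the other cloud is a difference point. The radius value and 2-D points below are assumptions for illustration; a production system would use a spatial index such as a k-d tree rather than brute force.

```python
import math

def nearest_distance(p, cloud):
    """Distance from p to its nearest neighbour in cloud (inf if empty)."""
    return min((math.dist(p, q) for q in cloud), default=math.inf)

def find_difference_points(scan, map_points, radius=0.2):
    """A point is a difference point if only a scanning point or only a
    map point exists at its position (no counterpart within `radius`)."""
    scan_only = [p for p in scan if nearest_distance(p, map_points) >= radius]
    map_only = [q for q in map_points if nearest_distance(q, scan) >= radius]
    return scan_only, map_only

scan = [(0.0, 0.0), (3.0, 0.0)]   # scanning points, already in the map frame
map_points = [(0.05, 0.0)]        # existing map points
diff = find_difference_points(scan, map_points)
# diff == ([(3.0, 0.0)], []) : only the scan point at (3, 0) has no counterpart
```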
Optionally, the step of determining the observation probability of each difference point includes:
respectively calculating the distance between each map point and each scanning point under the coordinate system according to the coordinates of each map point and the coordinates of each scanning point;
aiming at any difference point, if the difference point is a map point and no scanning point with the distance from the difference point smaller than a preset distance threshold exists, reducing the observation probability of the difference point according to a preset reduction strategy;
for any difference point, if the difference point is a scanning point and no map point with the distance from the difference point being smaller than a preset distance threshold exists, adding a map point at a position corresponding to the coordinates of the difference point in a point cloud map, and initializing the observation probability of the difference point;
aiming at any difference point, if the difference point is a map point and a scanning point exists, wherein the distance between the difference point and the scanning point is smaller than a preset distance threshold value, the observation probability of the difference point is increased according to a preset increasing strategy.
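The three update rules above can be sketched as follows. The probability step, initial value, and distance threshold are hypothetical, since the patent leaves the concrete reduction and increase strategies unspecified.

```python
import math

def nearest_distance(p, cloud):
    """Distance from p to its nearest neighbour in cloud (inf if empty)."""
    return min((math.dist(p, q) for q in cloud), default=math.inf)

def update_probabilities(map_db, scan, radius=0.2, p_init=0.5, step=0.1):
    """map_db maps each map point's coordinates to its observation probability.
    Rule 1: a map point with no scanning point within `radius` is reduced.
    Rule 3: a map point with a scanning point within `radius` is increased.
    Rule 2: a scanning point with no map point within `radius` is added to
    the map with an initial observation probability."""
    for q in list(map_db):
        if nearest_distance(q, scan) < radius:
            map_db[q] = min(1.0, map_db[q] + step)   # matched: increase
        else:
            map_db[q] = max(0.0, map_db[q] - step)   # unmatched: reduce
    for p in scan:
        if nearest_distance(p, list(map_db)) >= radius:
            map_db[p] = p_init                       # new map point
    return map_db

db = update_probabilities({(0.0, 0.0): 0.5, (5.0, 5.0): 0.5},
                          [(0.05, 0.0), (2.0, 2.0)])
# db[(0.0, 0.0)] rises to ~0.6, db[(5.0, 5.0)] falls to ~0.4,
# and (2.0, 2.0) is added with probability 0.5
```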
Optionally, before the step of mapping the laser radar point cloud to the coordinate system to which the point cloud map belongs to obtain the coordinates of each scanning point in the laser radar point cloud in the coordinate system, the method further includes:
acquiring a predicted pose relation and radar coordinates of each scanning point in laser radar point cloud;
converting each scanning point in the laser radar point cloud to a coordinate system to which a point cloud map belongs according to the predicted pose relation to obtain a predicted coordinate of each scanning point;
aiming at any scanning point, searching a related map point closest to the scanning point from the point cloud map according to the predicted coordinate of the scanning point and the coordinates of each map point;
filtering unstable associated map points with unstable characteristics in the searched associated map points by using a preset filtering method;
calculating the relative position and pose relationship between the laser radar point cloud and the point cloud map according to the coordinates of the rest associated map points and the radar coordinates of each scanning point;
the method comprises the following steps of mapping the laser radar point cloud to a coordinate system to which a point cloud map belongs to obtain the coordinates of each scanning point in the laser radar point cloud under the coordinate system, wherein the steps comprise:
and converting each scanning point in the laser radar point cloud into a coordinate system to which the point cloud map belongs according to the relative pose relationship to obtain the coordinate of each scanning point in the coordinate system.
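Given the solved relative pose relation, converting each scanning point into the map's coordinate system is a rigid-body transform. A minimal 2-D sketch follows; a real system would apply a 4x4 homogeneous matrix to 3-D points, and the pose and point values here are assumptions.

```python
import math

def scan_to_map_frame(scan, pose):
    """Convert scanning points from the lidar frame to the point cloud
    map's coordinate system using a relative pose (x, y, yaw)."""
    x0, y0, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    return [(c * px - s * py + x0, s * px + c * py + y0) for px, py in scan]

# Sensor at (1, 2) in the map, rotated 90 degrees: the lidar-frame point
# (1, 0) lands at approximately (1, 3) in the map frame.
mapped = scan_to_map_frame([(1.0, 0.0)], (1.0, 2.0, math.pi / 2))
```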
Optionally, before the step of comparing the laser radar point cloud with the point cloud map and determining the difference point between the laser radar point cloud and the point cloud map, the method further includes:
converting the laser radar point cloud and the point cloud map into the same coordinate system to obtain the coordinates of each scanning point in the laser radar point cloud and the coordinates of each map point in the point cloud map in the same coordinate system;
acquiring noise information of the laser radar;
calculating registration confidence coefficients of the laser radar point cloud and the point cloud map by utilizing a preset pose solving function according to the coordinates of each scanning point, the coordinates of each map point and noise information;
comparing the laser radar point cloud with the point cloud map, and determining difference points of the laser radar point cloud and the point cloud map, wherein the step comprises the following steps of:
and if the registration confidence coefficient is greater than a preset confidence coefficient threshold value, comparing the laser radar point cloud with the point cloud map, and determining the difference point between the laser radar point cloud and the point cloud map.
Optionally, the laser radar point cloud includes a multi-frame laser radar point cloud;
before the steps of comparing the laser radar point cloud with the point cloud map and determining the difference point between the laser radar point cloud and the point cloud map, the method further comprises the following steps:
and fusing the multi-frame laser radar point cloud to obtain fused laser radar point cloud.
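A minimal sketch of such fusion, assuming the frames are already registered into a common coordinate system; the voxel-grid deduplication and its cell size are assumptions, since the patent does not specify the fusion method.

```python
def fuse_frames(frames, voxel=0.1):
    """Concatenate several lidar frames and keep one point per voxel cell,
    so repeated observations of the same surface are not duplicated."""
    seen, fused = set(), []
    for frame in frames:
        for point in frame:
            key = tuple(round(c / voxel) for c in point)
            if key not in seen:
                seen.add(key)
                fused.append(point)
    return fused

fused = fuse_frames([[(0.0, 0.0)], [(0.02, 0.0), (1.0, 1.0)]])
# (0.02, 0.0) falls in the same 0.1 m cell as (0.0, 0.0) and is dropped
```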
In a second aspect, an embodiment of the present application provides a point cloud map updating apparatus, including:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a laser radar point cloud of a current scene scanned by a laser radar and a pre-stored point cloud map of the current scene, the laser radar point cloud comprises a plurality of scanning points, and the point cloud map comprises a plurality of map points;
the comparison module is used for comparing the laser radar point cloud with the point cloud map and determining the difference point between the laser radar point cloud and the point cloud map;
the calculation module is used for determining the observation probability of each difference point, wherein the observation probability represents the probability that a scanning point and a map point are the same point, an initial value of the observation probability is set for a map point when it is added to the point cloud map, and the observation probability is reduced each time the laser radar fails to scan a scanning point that is the same point as that map point;
and the updating module is used for deleting the difference points of which the observation probability is smaller than a preset threshold value in the point cloud map to obtain an updated point cloud map.
Optionally, the apparatus further comprises: a motion compensation module;
the motion compensation module is used for acquiring the point cloud blocks of the laser radar point cloud and current motion data, wherein a point cloud block is the minimum unit of the laser radar point cloud; calculating the pose transformation relation of each point cloud block relative to the initial point cloud block by using the current motion data; converting each point cloud block into the coordinate system of the initial point cloud block according to the pose transformation relation; and splicing the converted point cloud blocks to obtain an updated laser radar point cloud.
Optionally, the apparatus further comprises: a semantic segmentation module;
the semantic segmentation module is used for acquiring point cloud information of the laser radar point cloud; performing semantic segmentation on the laser radar point cloud based on the point cloud information to obtain the semantic type of each scanning point in the laser radar point cloud; identifying a target scanning point with a designated semantic type, wherein the designated semantic type is a semantic type with motion characteristics; and deleting the target scanning point from the laser radar point cloud to obtain an updated laser radar point cloud.
Optionally, the comparison module is specifically configured to:
mapping the laser radar point cloud to a coordinate system to which a point cloud map belongs to obtain the coordinates of each scanning point in the laser radar point cloud under the coordinate system;
acquiring coordinates of each map point in the point cloud map;
if only one map point or only one scanning point is determined at a position under the coordinate system according to the coordinates of each map point and each scanning point, determining that the point at the position is a difference point.
Optionally, the calculation module is specifically configured to:
respectively calculating the distance between each map point and each scanning point under the coordinate system according to the coordinates of each map point and the coordinates of each scanning point;
aiming at any difference point, if the difference point is a map point and no scanning point with the distance from the difference point smaller than a preset distance threshold exists, reducing the observation probability of the difference point according to a preset reduction strategy;
for any difference point, if the difference point is a scanning point and no map point with the distance from the difference point being smaller than a preset distance threshold exists, adding a map point at a position corresponding to the coordinates of the difference point in a point cloud map, and initializing the observation probability of the difference point;
aiming at any difference point, if the difference point is a map point and a scanning point exists, wherein the distance between the difference point and the scanning point is smaller than a preset distance threshold value, the observation probability of the difference point is increased according to a preset increasing strategy.
Optionally, the apparatus further comprises: a laser positioning module;
the laser positioning module is used for acquiring the predicted pose relationship and the radar coordinates of each scanning point in the laser radar point cloud; converting each scanning point in the laser radar point cloud to a coordinate system to which a point cloud map belongs according to the predicted pose relation to obtain a predicted coordinate of each scanning point; aiming at any scanning point, searching a related map point closest to the scanning point from the point cloud map according to the predicted coordinate of the scanning point and the coordinates of each map point; filtering unstable associated map points with unstable characteristics in the searched associated map points by using a preset filtering method; calculating the relative position and pose relationship between the laser radar point cloud and the point cloud map according to the coordinates of the rest associated map points and the radar coordinates of each scanning point;
and the comparison module is specifically used for converting each scanning point in the laser radar point cloud to a coordinate system to which the point cloud map belongs according to the relative pose relation, so as to obtain the coordinate of each scanning point in the coordinate system.
Optionally, the apparatus further comprises: a confidence calculation module;
the confidence coefficient calculation module is used for converting the laser radar point cloud and the point cloud map into the same coordinate system to obtain the coordinates of each scanning point in the laser radar point cloud and the coordinates of each map point in the point cloud map in the same coordinate system; acquiring noise information of the laser radar; calculating registration confidence coefficients of the laser radar point cloud and the point cloud map by utilizing a preset pose solving function according to the coordinates of each scanning point, the coordinates of each map point and noise information;
and the comparison module is specifically used for comparing the laser radar point cloud with the point cloud map and determining the difference point between the laser radar point cloud and the point cloud map if the registration confidence coefficient is greater than a preset confidence coefficient threshold value.
Optionally, the laser radar point cloud includes a multi-frame laser radar point cloud;
the device also includes: a fusion module;
and the fusion module is used for fusing the multi-frame laser radar point cloud to obtain fused laser radar point cloud.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory, where the memory stores machine executable instructions that can be executed by the processor, and the machine executable instructions are loaded and executed by the processor, so as to implement the method provided in the first aspect of the embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides a machine-readable storage medium, in which machine-executable instructions are stored, and when the machine-executable instructions are loaded and executed by a processor, the method provided in the first aspect of the embodiment of the present application is implemented.
In a fifth aspect, an embodiment of the present application provides a positioning system, where the system includes an electronic device and a laser radar;
the laser radar is used for scanning the laser radar point cloud of the current scene and sending the laser radar point cloud to the electronic equipment;
the electronic device is used for receiving the laser radar point cloud sent by the laser radar and acquiring a pre-stored point cloud map of the current scene, wherein the laser radar point cloud comprises a plurality of scanning points and the point cloud map comprises a plurality of map points; comparing the laser radar point cloud with the point cloud map, and determining the difference points between the two; determining the observation probability of each difference point, wherein the observation probability represents the probability that a scanning point and a map point are the same point, an initial value of the observation probability is set for a map point when it is added to the point cloud map, and the observation probability is reduced each time the laser radar fails to scan a scanning point that is the same point as that map point; deleting the difference points whose observation probability is smaller than a preset threshold from the point cloud map to obtain an updated point cloud map; and carrying out target positioning by using the updated point cloud map.
The embodiment of the application provides a point cloud map updating method, a point cloud map updating device, electronic equipment and a positioning system, wherein the point cloud map updating method comprises the following steps: the method comprises the steps of obtaining a laser radar point cloud of a current scene scanned by a laser radar and a pre-stored point cloud map of the current scene, comparing the laser radar point cloud with the point cloud map, determining difference points of the laser radar point cloud and the point cloud map, determining observation probability of each difference point, deleting the difference points of which the observation probability is smaller than a preset threshold value in the point cloud map, and obtaining an updated point cloud map.
By applying the embodiment of the application, the laser radar point cloud of the current scene obtained by scanning is compared with the prestored point cloud map of the current scene to determine their difference points, and the observation probability of each difference point is determined. The observation probability represents the probability that a scanning point and a map point are the same point; its initial value is set when a map point is added to the point cloud map, and it is reduced each time the laser radar fails to scan a scanning point that is the same point as that map point. Therefore, if a map point in the point cloud map is a difference point relative to the laser radar point cloud and its observation probability is small, the probability that a scanning point associated with that map point exists in the actual scene is low; if the observation probability of the map point is smaller than the preset threshold, the map point is considered not to exist in the actual scene at all, so the map point is deleted from the point cloud map. The point cloud map is thereby updated, and the updated point cloud map conforms more closely to the actual scene and is more accurate.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart illustrating a point cloud map updating method according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart illustrating a point cloud map updating method according to another embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating a method of motion compensation according to an embodiment of the present application;
fig. 4 is a schematic flow chart illustrating a point cloud map updating method according to still another embodiment of the present application;
FIG. 5 is a schematic flow chart illustrating a point cloud map updating method according to another embodiment of the present disclosure;
FIG. 6 is a schematic flow chart illustrating a method of laser positioning according to an embodiment of the present disclosure;
FIG. 7 is a schematic flowchart illustrating a point cloud map updating method according to another embodiment of the present disclosure;
FIG. 8(a) is a mapping before laser positioning according to an embodiment of the present application;
FIG. 8(b) is a mapping after laser positioning according to an embodiment of the present application;
FIG. 9(a) is a point cloud map before updating according to an embodiment of the present application;
FIG. 9(b) is an updated point cloud map according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a point cloud map updating apparatus according to an embodiment of the present disclosure;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a positioning system according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In order to obtain a more accurate point cloud map, the embodiment of the application provides a point cloud map updating method and device, electronic equipment and a positioning system. Next, a point cloud map updating method provided in the embodiment of the present application is introduced first.
The execution body of the point cloud map updating method provided by the embodiments of the present application may be a control device on a mobile device such as an unmanned vehicle or a mobile robot, where the control device implements control such as automatic driving of the vehicle or robot; it may also be a background server that controls such automatic driving and provides functions such as point cloud map updating, driving path planning, and driving path issuing. The execution body is not specifically limited here and is hereinafter referred to as an electronic device; the point cloud map updating method may be implemented by at least one of software, a hardware circuit, and a logic circuit provided in the electronic device.
As shown in fig. 1, a point cloud map updating method provided in an embodiment of the present application may include the following steps.
S101, laser radar point cloud of a current scene scanned by a laser radar and a pre-stored point cloud map of the current scene are obtained, wherein the laser radar point cloud comprises a plurality of scanning points, and the point cloud map comprises a plurality of map points.
S102, comparing the laser radar point cloud with the point cloud map, and determining difference points of the laser radar point cloud and the point cloud map.
S103, determining the observation probability of each difference point, wherein the observation probability represents the probability that a scanning point and a map point are the same point; the initial value of the observation probability is set for a map point when the map point is added to the point cloud map, and the observation probability is reduced each time the laser radar fails to scan a scanning point that is the same point as the map point.
And S104, deleting the difference points with the observation probability smaller than a preset threshold value in the point cloud map to obtain an updated point cloud map.
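The steps S101–S104 above can be sketched as follows. This is a minimal, hedged illustration, not the patented implementation: points are represented as coordinate tuples, the observation probabilities are kept in a dictionary, and the parameter values (threshold, initial probability, decay, and boost increments) are all illustrative.

```python
# Hedged sketch of the S101-S104 update loop (all names and values illustrative).
# Points are 3-tuples of coordinates; probabilities live in a dict keyed by point.

def update_map(scan_points, map_probs, threshold=0.5, init_prob=0.7,
               decay=0.1, boost=0.1):
    """scan_points: set of scanned coordinates; map_probs: {map_point: probability}."""
    scan = set(scan_points)
    updated = dict(map_probs)
    # S102/S103: map points not re-observed lose probability; observed ones gain.
    for p in map_probs:
        if p in scan:
            updated[p] = min(1.0, updated[p] + boost)
        else:
            updated[p] = updated[p] - decay
    # Scanned points absent from the map are added with an initial probability.
    for p in scan - set(map_probs):
        updated[p] = init_prob
    # S104: delete map points whose probability fell below the threshold.
    return {p: v for p, v in updated.items() if v >= threshold}
```

Repeated calls with the same scan drive unobserved map points below the threshold, at which point they are deleted, while newly scanned points enter the map.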
By applying the embodiment of the present application, the scanned lidar point cloud of the current scene is compared with the pre-stored point cloud map of the current scene to determine the difference points between them, and the observation probability of each difference point is determined. The observation probability represents the probability that a scanning point and a map point are the same point; its initial value is set for a map point when the map point is added to the point cloud map, and the probability is reduced each time the laser radar fails to scan a scanning point that is the same point as the map point. That is, if a map point in the point cloud map is a difference point with respect to the lidar point cloud and its observation probability is small, the probability that a scanning point associated with the map point exists in the actual scene is low. If the observation probability of the map point is smaller than the preset threshold, it can be considered that the map point does not exist in the actual scene at all, so the map point is deleted from the point cloud map and the point cloud map is updated. The updated point cloud map conforms better to the actual scene and is more accurate; furthermore, because the updated point cloud map is more accurate, positioning accuracy can be improved when it is used for positioning.
The point cloud map of the current scene is a map established from point cloud information of the current scene obtained by a preset positioning and mapping method. The positioning and mapping method mentioned here may be a method such as SLAM (Simultaneous Localization and Mapping) or L-SLAM (laser SLAM). The current scene is the scene where the mobile device to be positioned is currently located, such as a certain park, parking lot, factory, or production line. The point cloud map comprises a plurality of map points, and one map point or a combination of map points represents a target object in the actual scene, such as a building or an obstacle. The lidar point cloud is the point cloud data scanned by the laser radar in the current scene. In order to ensure real-time positioning, the obtained lidar point cloud is generally the current-frame lidar point cloud scanned in real time.
After the lidar point cloud and the point cloud map are obtained, they need to be compared to determine their difference points. The comparison process judges whether a point exists only in the lidar point cloud or only in the point cloud map; if so, the point is a difference point between the lidar point cloud and the point cloud map. Specifically, the comparison may identify, by type identification, whether the same type of point exists in both the lidar point cloud and the point cloud map; if a type of point exists only in the lidar point cloud or only in the point cloud map, that point is a difference point. Alternatively, the comparison may map the lidar point cloud onto the point cloud map and judge, at each mapped position, whether a scanning point and a map point coincide; if they do not coincide at a position, that is, only a scanning point or only a map point exists at that position, the point at that position is a difference point.
Optionally, S102 may be specifically implemented by the following method: mapping the laser radar point cloud to a coordinate system to which a point cloud map belongs to obtain the coordinates of each scanning point in the laser radar point cloud under the coordinate system; acquiring coordinates of each map point in the point cloud map; if only one map point or only one scanning point is determined at a position under the coordinate system according to the coordinates of each map point and each scanning point, determining that the point at the position is a difference point.
The scanning points in the lidar point cloud can be mapped into the coordinate system to which the point cloud map belongs, so that the scanning points and map points have unified coordinates. The specific mapping may be direct coordinate mapping or conversion through a certain pose conversion relationship; the specific mapping process will be detailed in subsequent embodiments and is omitted here. Through this mapping, the coordinates of each scanning point in the lidar point cloud under that coordinate system are obtained; the coordinates of each map point can be read directly from the point cloud map, so the coordinates of the scanning points and the map points belong to the same coordinate system.
After the coordinates of each scanning point and each map point are obtained, whether a scanning point and a map point coincide can be judged from those coordinates. If the coordinates of a scanning point are the same as the coordinates of a map point, the two points coincide. If only one map point or only one scanning point exists at a position, the point there is a difference point: if the coordinates of a scanning point are not the same as the coordinates of any map point, that scanning point is a difference point, and if the coordinates of a map point are not the same as the coordinates of any scanning point, that map point is a difference point.
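The coordinate comparison above can be sketched as a set difference. The sketch below is illustrative: it assumes near-coincident points should be treated as the same position, so coordinates are quantized to a grid cell (the cell size is an assumed parameter, not from the original).

```python
# Hedged sketch of difference-point determination after mapping into one
# coordinate system. Coordinates are quantized so near-coincident points match.

def difference_points(scan_pts, map_pts, cell=0.1):
    """Return (points only in the scan, points only in the map)."""
    def key(p):
        # Quantize each coordinate to a grid cell of size `cell`.
        return tuple(round(c / cell) for c in p)
    scan_keys = {key(p) for p in scan_pts}
    map_keys = {key(p) for p in map_pts}
    only_scan = [p for p in scan_pts if key(p) not in map_keys]
    only_map = [p for p in map_pts if key(p) not in scan_keys]
    return only_scan, only_map
```

Points present in both clouds (within one grid cell) are excluded; everything else is a difference point.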
By determining the observation probability of each difference point, the probability associated with the scanning point or map point at that difference point can be known; the smaller the probability, the more likely the difference point appears in only one of the lidar point cloud and the point cloud map. The initial value of the observation probability is set for a map point when the map point is added to the point cloud map, and for any map point it represents the probability that a scanning point associated with the map point exists in the lidar point cloud. Map points are added to the point cloud map in the following situations: if a scanning point is scanned in the lidar point cloud and no map point identical to the scanning point exists in the point cloud map, a map point is added at the position corresponding to the scanning point coordinates; a technician may also manually add a map point to the point cloud map according to the actual situation.
Optionally, S103 may be specifically implemented by the following method: respectively calculating the distance between each map point and each scanning point in a coordinate system according to the coordinates of each map point and the coordinates of each scanning point; aiming at any difference point, if the difference point is a map point and no scanning point with the distance from the difference point smaller than a preset distance threshold exists, reducing the observation probability of the difference point according to a preset reduction strategy; for any difference point, if the difference point is a scanning point and no map point with the distance from the difference point being smaller than a preset distance threshold exists, adding a map point at a position corresponding to the coordinates of the difference point in a point cloud map, and initializing the observation probability of the difference point; aiming at any difference point, if the difference point is a map point and a scanning point exists, wherein the distance between the difference point and the scanning point is smaller than a preset distance threshold value, the observation probability of the difference point is increased according to a preset increasing strategy.
The observation probability of each difference point can be calculated after the lidar point cloud is mapped onto the point cloud map, specifically based on the distance between the mapped scanning points and the map points. For any map point, if there is no scanning point whose distance from the map point is less than the preset distance threshold, the point appears only in the point cloud map and was not scanned in the current scene. This indicates that the map point may have been sampled by mistake before, or that the environment has changed: there used to be a building at the position of the map point, but there is not currently. However, to avoid a missed sample in the current scan, when it is determined that no scanning point is within the preset distance threshold of the map point, the map point is not deleted from the point cloud map directly; instead, its observation probability is reduced, specifically using formula (1).
p(t) = p(t-1) + odd(pc)    (1)
where pc is a set parameter.
For any scanning point, if there is no map point whose distance from the scanning point is less than the preset distance threshold, the scanning point was scanned in the current scene but does not appear in the point cloud map. In order to ensure the integrity of the point cloud map and avoid missed detection, a map point may be added at the position corresponding to the scanning point coordinates, and its observation probability set to the initialized observation probability of the difference point. The initialized observation probability may be a preset fixed value, generally greater than or equal to the preset threshold.
For any map point, if there is a scanning point whose distance from the map point is less than the preset distance threshold, the scanning point and the map point are the same point, and the observation probability of the map point can be increased. Furthermore, the scanning point and the map point can be fused: the fusion may keep the map point in the point cloud map as-is, or compute a new map point coordinate by weighting, averaging, or similar operations on the scanning point coordinates and the map point coordinates, add a map point at the new coordinate, and delete the original map point. The observation probability of the map point may be increased using equation (2).
[Equation (2) appears only as an image in the original document; it updates the observation probability as a function of the distance d and the parameters pa, pb, and σd.]    (2)
where p(t) is the current-frame observation probability of the map point, p(t-1) is the previous-frame observation probability of the map point, odd(p) = log(p(1-p)^-1), pa, pb, and σd are set parameters, and d is the distance between the scanning point and the map point, calculated from the scanning point coordinates and the map point coordinates.
For map points outside the observation range of the current frame, the original observation probability of the points can be maintained unchanged. In order to ensure the accuracy of laser radar positioning, only map points with observation probability greater than a certain threshold value can be reserved in the point cloud map, and the map points have associated scanning points in an actual scene, so that a vehicle to be positioned can be accurately positioned.
Therefore, for a map point, each time the laser radar fails to scan a scanning point that is the same as the map point, the observation probability becomes smaller, until it falls below the preset threshold, indicating that the map point does not exist in the actual scene; the map point is then deleted from the point cloud map, and the point cloud map is updated. Vehicle positioning is carried out based on the updated point cloud map. Because falsely detected map points have been deleted, the point cloud map conforms better to the actual scene, so the positioning result is more accurate and more robust.
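Equations (1) and (2) update the observation probability through the log-odds function odd(p) = log(p/(1-p)). A minimal sketch of the decrease step of equation (1), under the assumption (common in occupancy-grid mapping, not stated explicitly in the original) that the probability is maintained in log-odds form; the value pc = 0.4 is illustrative:

```python
import math

def odd(p):
    # odd(p) = log(p / (1 - p)), the log-odds function of equations (1) and (2).
    return math.log(p / (1.0 - p))

def inv_odd(l):
    # Inverse of odd(): recover a probability from a log-odds value.
    return 1.0 / (1.0 + math.exp(-l))

def decrease_probability(p_prev, p_c=0.4):
    """Equation (1) read as a log-odds update: the stored value is moved by
    odd(p_c). With p_c < 0.5 the increment is negative, so each missed
    observation drives the probability toward zero."""
    return inv_odd(odd(p_prev) + odd(p_c))
```

Repeated misses shrink the probability monotonically, so it eventually drops below the deletion threshold of S104.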
Optionally, before S102, the method may further include: converting the laser radar point cloud and the point cloud map into the same coordinate system to obtain the coordinates of each scanning point in the laser radar point cloud and the coordinates of each map point in the point cloud map in the same coordinate system; acquiring noise information of the laser radar; and calculating the registration confidence coefficient of the laser radar point cloud and the point cloud map by utilizing a preset pose solving function according to the coordinates of each scanning point, the coordinates of each map point and the noise information. If the registration confidence is greater than the preset confidence threshold, S102 is performed.
After the coordinates of each scanning point and each map point are obtained, the registration confidence of the lidar point cloud and the point cloud map can be calculated from them. The registration confidence represents the degree of coincidence between the lidar point cloud and the point cloud map: the more scanning points coincide with map points, the higher the degree of coincidence and the higher the registration confidence. Of course, besides the number of coinciding scanning points and map points, the registration confidence can also be determined by parameters such as normal vectors. A lower registration confidence means a lower matching degree between the lidar point cloud and the point cloud map, in which case it is unnecessary to update the point cloud map based on the lidar point cloud; the point cloud map is updated based on the lidar point cloud only when the registration confidence is greater than the preset confidence threshold. When the lidar point cloud and the point cloud map are converted into the same coordinate system, they can be converted into a polar coordinate system.
The registration confidence may specifically be calculated as follows:
First, the transformation matrix to be calculated is computed according to formula (3).
T = argmin{ Σi ||ni(Tbi - ai)|| }    (3)
where T is the transformation matrix to be calculated, ai is the coordinate of the i-th scanning point, bi is the coordinate of the i-th map point, and ni is the normal vector of ai.
The optimal solution of equation (3) is solved, and the registration confidence is calculated as in equation (4).
x = argmin J(z, x)    (4)
where x is the pose variable to be solved, J is the preset pose-solving function, and z is the observation.
Formulas (3) and (4) correspond: x is T, and z is the points ai and bi. Equation (5) is then calculated using the covariance.
[Equation (5) appears only as an image in the original document; it propagates the lidar noise cov(z) through the pose-solving function to obtain cov(x).]    (5)
where cov(z) is the noise information of the laser radar and cov(x) is the registration confidence.
In this way, the noise of the laser radar serves as the noise information for pose solving; for example, the ranging accuracy of the laser radar has an error of about 10 cm, and the angle has an error of about 0.5 degrees. From these data cov(z) can be obtained, and cov(x) can then be solved.
Optionally, the lidar point cloud may comprise multiple frames of lidar point clouds, namely the lidar point clouds scanned in the current frame and several preceding frames. In this case, before S102, the method may further include: fusing the multi-frame lidar point clouds to obtain a fused lidar point cloud.
When multiple frames of lidar point clouds are scanned, they are first fused. The fusion process integrates the coordinates of the same scanning point across the multiple frames; specifically, operations such as weighting and averaging may be performed on the coordinates of the same scanning point in different frames to obtain the fused lidar point cloud. The fusion method may also simply keep a certain frame of the lidar point cloud. After the fusion, the fused lidar point cloud is compared with the point cloud map.
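The averaging variant of the fusion described above can be sketched as follows. This assumes, for illustration only, that corresponding scanning points across frames can be matched by an identifier; the original does not specify how correspondence is established.

```python
from collections import defaultdict

def fuse_frames(frames):
    """frames: list of {point_id: (x, y, z)} dicts, one per lidar frame.
    Coordinates of the same scanning point are averaged across frames."""
    acc = defaultdict(lambda: [0.0, 0.0, 0.0, 0])  # sums of x, y, z and a count
    for frame in frames:
        for pid, (x, y, z) in frame.items():
            a = acc[pid]
            a[0] += x; a[1] += y; a[2] += z; a[3] += 1
    # Divide each accumulated sum by its count to get the fused coordinate.
    return {pid: (a[0] / a[3], a[1] / a[3], a[2] / a[3]) for pid, a in acc.items()}
```

A weighted variant would multiply each frame's contribution by a per-frame weight before accumulating.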
Based on the embodiment shown in fig. 1, an embodiment of the present application further provides a point cloud map updating method, as shown in fig. 2, which may include the following steps.
S201, laser radar point cloud of a current scene scanned by a laser radar, a pre-stored point cloud map of the current scene and current motion data are obtained, wherein the laser radar point cloud comprises a plurality of scanning points, and the point cloud map comprises a plurality of map points.
S202, reading each point cloud block from the laser radar point cloud, wherein the point cloud block is the minimum unit of the laser radar point cloud.
And S203, calculating the pose transformation relationship of each point cloud block relative to the initial point cloud block by using the current motion data.
And S204, converting each point cloud block into the coordinate system of the initial point cloud block according to the pose transformation relationship.
And S205, splicing the converted point cloud blocks to obtain an updated laser radar point cloud.
And S206, comparing the updated laser radar point cloud with the point cloud map, and determining the difference point between the laser radar point cloud and the point cloud map.
And S207, determining the observation probability of each difference point, wherein the observation probability represents the probability that a scanning point and a map point are the same point; the initial value of the observation probability is set for a map point when the map point is added to the point cloud map, and the observation probability is reduced each time the laser radar fails to scan a scanning point that is the same point as the map point.
And S208, deleting the difference points with the observation probability smaller than a preset threshold value in the point cloud map to obtain an updated point cloud map.
Because the scanned lidar point cloud may be distorted by vehicle motion, motion compensation needs to be performed on the lidar point cloud to eliminate the distortion. The process of motion compensation is shown in fig. 3. First, each point cloud block in the lidar scan (a point cloud block is the point cloud set within a fixed included angle and is the minimum unit of data scanned by the laser radar) and the current motion data (data generated by the mobile device to be positioned during motion, such as IMU (Inertial Measurement Unit) data and wheel speed) are acquired. After each point cloud block and the current motion data are acquired, motion updating is performed: the pose transformation relationship of each point cloud block relative to the initial point cloud block is calculated using the current motion data. Motion compensation is then performed based on the result of the motion update. Each frame of point cloud data corresponds to an initial point cloud block, and the calculated pose transformation relationship represents the degree of distortion of the lidar point cloud data relative to the initial point cloud data. Each point cloud block is then converted into the coordinate system of the initial point cloud block according to the pose transformation relationship; the converted point cloud blocks meet the coordinate requirements of the initial point cloud block, so the distortion is eliminated, and the converted point cloud blocks are spliced to obtain the lidar point cloud. Because the laser radar has a certain scanning range, all point cloud blocks within the scanning range need to be spliced; for example, if the scanning range is 360 degrees, splicing is complete when a full circle of point cloud blocks within the 360-degree range has been spliced, yielding the updated lidar point cloud.
And subsequently updating the point cloud map based on the updated laser radar point cloud.
The calculation process of the pose transformation relationship is as follows: suppose the pose of the previous frame is Tn-1, with rotation matrix Rn-1 and translation vector tn-1, and the pose of the current frame is Tn, with rotation matrix Rn and translation vector tn. If the velocity between the two frames is v, the angular velocity is w (comprising the rotation axis and rotation angle), and the time interval is Δt, then the pose variation is:
t=vΔt (6)
R=wΔt (7)
In formula (7), w is the angular velocity; multiplied by time it gives the rotation angle, and the axis-angle representation then needs to be converted into a rotation matrix.
The current pose is calculated as follows:
tn = R·tn-1 + t    (8)
Rn = R·Rn-1    (9)
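Equations (6)–(9) can be sketched in the planar (yaw-only) case, where the axis-angle w·Δt of equation (7) reduces to a single rotation angle. This is a simplification of the 3-D case described above, for illustration only:

```python
import math

def pose_update(R_prev, t_prev, v, w, dt):
    """Planar sketch of equations (6)-(9): the rotation angle w*dt is turned
    into a 2x2 rotation matrix R, then tn = R*tn-1 + t and Rn = R*Rn-1."""
    theta = w * dt                       # equation (7), as a single yaw angle
    c, s = math.cos(theta), math.sin(theta)
    R = [[c, -s], [s, c]]                # 2x2 rotation matrix for theta
    t = [v[0] * dt, v[1] * dt]           # equation (6): t = v * dt
    t_n = [R[0][0] * t_prev[0] + R[0][1] * t_prev[1] + t[0],
           R[1][0] * t_prev[0] + R[1][1] * t_prev[1] + t[1]]   # equation (8)
    R_n = [[sum(R[i][k] * R_prev[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]                                   # equation (9)
    return R_n, t_n
```

In the full 3-D case, R would instead be built from the axis-angle vector w·Δt (e.g. via the Rodrigues formula), but the propagation of equations (8) and (9) is unchanged.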
based on the embodiment shown in fig. 1, the embodiment of the present application further provides a point cloud map updating method, as shown in fig. 4, which may include the following steps.
S401, laser radar point cloud of a current scene scanned by a laser radar and a pre-stored point cloud map of the current scene are obtained, wherein the laser radar point cloud comprises a plurality of scanning points, and the point cloud map comprises a plurality of map points.
S402, point cloud information of the laser radar point cloud is obtained, semantic segmentation is carried out on the laser radar point cloud based on the point cloud information, and the semantic type of each scanning point in the laser radar point cloud is obtained.
And S403, identifying the target scanning point with the designated semantic type, wherein the designated semantic type is the semantic type with the motion characteristic.
And S404, deleting the target scanning point from the laser radar point cloud to obtain an updated laser radar point cloud.
S405, the updated laser radar point cloud is compared with the point cloud map, and difference points of the laser radar point cloud and the point cloud map are determined.
And S406, determining the observation probability of each difference point, wherein the observation probability represents the probability that a scanning point and a map point are the same point; the initial value of the observation probability is set for a map point when the map point is added to the point cloud map, and the observation probability is reduced each time the laser radar fails to scan a scanning point that is the same point as the map point.
And S407, deleting the difference points of which the observation probability is smaller than a preset threshold value in the point cloud map to obtain an updated point cloud map.
In an actual scene, due to the existence of continuously moving objects such as pedestrians, non-motor vehicles, and automobiles, the motion characteristics of these objects make the corresponding scanning points extremely unstable, and the unstable scanning points interfere with the positioning accuracy and robustness of the point cloud map. After the lidar point cloud is obtained, semantic segmentation can be performed on it; a specific semantic segmentation method can be implemented with a conventional neural network, which is not repeated here. Through semantic segmentation, the semantic type of each scanning point in the lidar point cloud is obtained; the semantic types include driving road, pedestrian crosswalk, parking lot, other roads, building, car, truck, bicycle, motorcycle, vegetation, trunk, zone, pedestrian, bicyclist, motorcyclist, fence, pole, traffic light, and the like. Based on the semantic segmentation result, target scanning points with specified semantic types are identified, where the specified semantic types are those with motion characteristics, such as pedestrian, bicyclist, motorcyclist, car, truck, bicycle, and motorcycle, which are extremely unstable. The target scanning points are therefore deleted from the lidar point cloud, keeping only static and stable scanning points representing buildings, trees, and the like; the lidar point cloud is thus updated, and the point cloud map is then updated based on the updated lidar point cloud.
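The filtering of steps S402–S404 can be sketched as follows. The label strings are illustrative stand-ins for the semantic types listed above, not the output of any particular segmentation network:

```python
# Hedged sketch of S402-S404: drop scanning points whose semantic label has
# motion characteristics. Label names are illustrative.
DYNAMIC_TYPES = {"pedestrian", "car", "truck", "bicycle", "motorcycle",
                 "bicyclist", "motorcyclist"}

def filter_dynamic(points_with_labels):
    """points_with_labels: iterable of ((x, y, z), semantic_type) pairs.
    Returns only the coordinates of points with static semantic types."""
    return [p for p, label in points_with_labels if label not in DYNAMIC_TYPES]
```

The surviving points (buildings, vegetation, poles, and so on) then feed the comparison of S405.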
Based on the embodiment shown in fig. 1, an embodiment of the present application further provides a point cloud map updating method, as shown in fig. 5, which may include the following steps.
S501, acquiring a laser radar point cloud of a current scene scanned by a laser radar, a pre-stored point cloud map of the current scene, coordinates of map points in the point cloud map, a predicted pose relationship and radar coordinates of scanning points in the laser radar point cloud, wherein the laser radar point cloud comprises a plurality of scanning points, and the point cloud map comprises a plurality of map points.
And S502, converting each scanning point in the laser radar point cloud into a coordinate system to which a point cloud map belongs according to the predicted pose relation, and obtaining the predicted coordinate of each scanning point.
S503, aiming at any scanning point, searching the associated map point closest to the scanning point from the point cloud map according to the predicted coordinate of the scanning point and the coordinates of each map point.
S504, filtering out the unstable associated map points with unstable characteristics in the searched associated map points by using a preset filtering method.
And S505, calculating the relative position and pose relationship between the laser radar point cloud and the point cloud map according to the remaining associated map point coordinates and the radar coordinates of each scanning point.
S506, converting each scanning point in the laser radar point cloud into a coordinate system to which the point cloud map belongs according to the relative pose relation, and obtaining the coordinate of each scanning point in the coordinate system.
And S507, if only one map point or only one scanning point is determined at a position under the coordinate system according to the coordinates of each map point and each scanning point, determining that the point at the position is a difference point.
And S508, determining the observation probability of each difference point, wherein the observation probability represents the probability that a scanning point and a map point are the same point; the initial value of the observation probability is set for a map point when the map point is added to the point cloud map, and the observation probability is reduced each time the laser radar fails to scan a scanning point that is the same point as the map point.
And S509, deleting the difference points with the observation probability smaller than a preset threshold value in the point cloud map to obtain an updated point cloud map.
In an implementation of the embodiment of the present application, the mapping of the lidar point cloud to the point cloud map is achieved through conversion by a certain pose conversion relationship, and the process of determining that relationship may be called laser positioning. Specifically, laser positioning algorithms include ICP (Iterative Closest Point) and NDT (Normal Distributions Transform), both of which can reach centimeter-level positioning accuracy. The laser positioning process is introduced here mainly using the ICP algorithm as an example, as shown in fig. 6. The inputs are the radar coordinates of each scanning point in the lidar point cloud, the coordinates of each map point in the point cloud map, and the predicted pose relationship, which comprises a rotation matrix and a translation vector. Coordinate transformation is carried out: each scanning point in the lidar point cloud is converted into the coordinate system of the point cloud map according to the predicted pose relationship, yielding the predicted coordinates of each scanning point. Feature association is then performed: for any scanning point, the associated map point closest to it is searched from the point cloud map according to the predicted coordinates of the scanning point and the coordinates of each map point. Specifically, the search can be accelerated using structures such as a kd-tree (k-dimensional tree, a data structure for partitioning a k-dimensional data space).
Next, unstable features are filtered out: unstable associated map points among those found in the search are removed using a preset filtering method. The main filtering methods include normal-vector filtering and distance filtering; that is, if the included angle between the normal vector of the current scanning point and that of its associated map point is greater than a certain threshold, or the distance between the current scanning point and its associated map point is greater than a certain threshold, the associated map point is rejected from the pose solving. After the unstable features are filtered out, pose solving is carried out: the relative pose relationship of the lidar point cloud and the point cloud map is solved from the remaining associated map point coordinates and the radar coordinates of each scanning point using a nonlinear least-squares method such as LM (Levenberg-Marquardt); specifically, formula (10) may be used.
$$(R, T) = \mathop{\arg\min}_{R, T} \sum_{i} \big( (R\,m_i + T - b_i) \cdot n_i \big)^2, \qquad (R, T)\ \text{initialized to}\ (R_0, T_0) \tag{10}$$
Wherein R is the rotation matrix, T is the translation vector, and R0 and T0 are the predicted rotation matrix and predicted translation vector, which serve as the initial value of (R, T); bi is the associated map point in the point cloud map of the i-th scanning point mi in the current-frame laser radar point cloud, and ni is the normal vector of mi. Formula (10) minimizes the distance between the scanning points and their associated map points to solve for the rotation matrix R and the translation vector T. In this process, coordinate transformation is performed with the solved relative pose relationship, that is, each scanning point in the laser radar point cloud is converted into the coordinate system to which the point cloud map belongs to obtain its coordinate in that coordinate system. Whether convergence is reached after the coordinate transformation is then judged; if not, the process returns to the feature association step, and if so, the solved relative pose relationship can be determined to be the optimal relative pose relationship.
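The feature-association / pose-solving loop can be illustrated with a simplified 2-D ICP sketch. Note the simplifications relative to the text above: it uses the point-to-point metric with a closed-form solution instead of the point-to-plane formula (10) solved by LM, brute-force nearest-neighbour association instead of a kd tree, and a fixed iteration count instead of the convergence test; all names are illustrative:

```python
import math

def solve_rigid_2d(scan, assoc):
    """Closed-form least-squares 2-D rigid transform (rotation angle plus
    translation) mapping each scan point onto its associated map point."""
    n = len(scan)
    csx = sum(p[0] for p in scan) / n
    csy = sum(p[1] for p in scan) / n
    cmx = sum(p[0] for p in assoc) / n
    cmy = sum(p[1] for p in assoc) / n
    sxx = sxy = syx = syy = 0.0
    for (px, py), (qx, qy) in zip(scan, assoc):
        ax, ay = px - csx, py - csy           # centered scan point
        bx, by = qx - cmx, qy - cmy           # centered map point
        sxx += ax * bx; sxy += ax * by
        syx += ay * bx; syy += ay * by
    theta = math.atan2(sxy - syx, sxx + syy)  # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cmx - (c * csx - s * csy)            # t = map centroid - R * scan centroid
    ty = cmy - (s * csx + c * csy)
    return theta, tx, ty

def icp(scan, map_pts, iters=20):
    """Alternate nearest-neighbour association and closed-form pose solving."""
    theta, tx, ty = 0.0, 0.0, 0.0
    for _ in range(iters):
        c, s = math.cos(theta), math.sin(theta)
        moved = [(c * x - s * y + tx, s * x + c * y + ty) for x, y in scan]
        assoc = [min(map_pts,
                     key=lambda m, p=p: (m[0] - p[0]) ** 2 + (m[1] - p[1]) ** 2)
                 for p in moved]
        theta, tx, ty = solve_rigid_2d(scan, assoc)
    return theta, tx, ty
```

A production loop would stop when the pose change falls below a tolerance, which corresponds to the convergence judgment described above.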
Combining the above embodiments, the point cloud map updating method provided by the embodiment of the present application, as shown in fig. 7, mainly includes the steps of motion compensation, semantic segmentation, laser positioning, and point cloud fusion. First, a point cloud map of a target scene is established with a SLAM algorithm. When a mobile device such as an unmanned vehicle or an autonomous robot runs in the target scene, the point cloud map is loaded and the mobile device enters the positioning process. After the laser radar acquires a frame of laser radar point cloud, motion compensation is performed on the laser radar point cloud according to the motion data of the mobile device; semantic segmentation is then performed on the motion-compensated laser radar point cloud, scanning points with semantic types such as "person" and "vehicle" are removed according to the semantic segmentation result, and the static, stable scanning points in the actual scene are retained for positioning. The laser radar point cloud and the point cloud map are then registered with a laser positioning algorithm to acquire the pose of the laser radar point cloud in the point cloud map, where point cloud fusion of the laser radar point cloud and the point cloud map is performed only when the confidence of the positioning registration is greater than a certain threshold. The point cloud fusion mentioned here is not point cloud fusion in the traditional sense, but the process of determining the observation probability of each map point based on the scanning point coordinates and the map point coordinates, and then deciding whether to add or delete map points according to the observation probability, which has been described in detail in the foregoing embodiments and is not repeated here.
Based on the above method embodiment, the point cloud map updating method of the embodiment of the present application is explained below with reference to a 2D laser radar. Fig. 8(a) is a schematic diagram before laser positioning, and fig. 8(b) is a schematic diagram after laser positioning. The laser positioning process traverses the current-frame laser radar point cloud, searches for the nearest neighbor of each scanning point in the point cloud map, such as the distance D shown in fig. 8(a), and minimizes that distance to solve the pose of the current-frame laser radar point cloud relative to the point cloud map. After laser positioning is finished, the current-frame laser radar point cloud is converted into the coordinate system to which the point cloud map belongs according to its relative pose in the point cloud map, and the current-frame laser radar point cloud and the point cloud map are then fused, that is, the nearest neighbor of each scanning point is searched for in the point cloud map. The scanning points in area A of figs. 8(a) and 8(b) cannot find map points belonging to the same points in the point cloud map, so these points are initialized in the point cloud map: this area was an open parking lot during map building and is a building during positioning, and after the vehicle drives through the area many times, the observation probability of the map points in this area keeps increasing. For the map points in area B of figs. 8(a) and 8(b), no scanning points belonging to the same points as the map points are found, so the observation probability of these map points is reduced: during map building, an obstacle sealing the road was placed there for construction, and by the time of positioning the construction is completed and the obstacle removed, so after the vehicle drives through the area multiple times, the observation probability of the map points in this area decreases until the map points are deleted from the point cloud map. For the other scanning points, associated map points exist in the point cloud map, and the observation probability of those map points is increased.
Fig. 9(a) is the point cloud map before updating. After the autonomous vehicle travels through the scene several times, the point cloud map is updated as shown in fig. 9(b): the new building in the original parking-lot area is added to the point cloud map, and the points of the obstacle that enclosed the road for construction are deleted from the point cloud map, so that the point cloud map better conforms to the current actual scene.
Based on the foregoing method embodiment, an embodiment of the present application provides a point cloud map updating apparatus, as shown in fig. 10, the apparatus may include:
the acquisition module 1010 is configured to acquire a laser radar point cloud of a current scene scanned by a laser radar and a pre-stored point cloud map of the current scene, where the laser radar point cloud includes a plurality of scanning points and the point cloud map includes a plurality of map points;
a comparison module 1020, configured to compare the laser radar point cloud with the point cloud map, and determine a difference point between the laser radar point cloud and the point cloud map;
the calculating module 1030 is configured to determine the observation probability of each difference point, where the observation probability represents the probability that a scanning point and a map point are the same point, an initial value of the observation probability is set for a map point when the map point is added to the point cloud map, and the observation probability is reduced each time the laser radar fails to scan a scanning point that is the same point as the map point;
the updating module 1040 is configured to delete the difference points in the point cloud map, where the observation probability is smaller than a preset threshold, to obtain an updated point cloud map.
Optionally, the apparatus may further include: a motion compensation module;
the motion compensation module is used for acquiring each point cloud block in the laser radar point cloud and current motion data, wherein a point cloud block is the minimum unit of the laser radar point cloud; calculating the pose transformation relation of each point cloud block relative to the initial point cloud block by using the current motion data; converting each point cloud block into the coordinate system of the initial point cloud block according to the pose transformation relation; and splicing the converted point cloud blocks to obtain an updated laser radar point cloud.
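The motion-compensation steps above can be sketched as follows, assuming a 2-D pose (rotation angle plus translation) per point cloud block relative to the initial block has already been derived from the motion data; the function name and the 2-D simplification are illustrative:

```python
import math

def compensate(blocks, poses):
    """Merge point cloud blocks into the initial block's coordinate system.
    `blocks[i]` is a list of (x, y) points; `poses[i]` = (theta, tx, ty) is
    block i's pose relative to block 0, assumed computed from motion data."""
    merged = []
    for block, (theta, tx, ty) in zip(blocks, poses):
        c, s = math.cos(theta), math.sin(theta)
        for x, y in block:
            # Rigid transform of each point into the initial block's frame.
            merged.append((c * x - s * y + tx, s * x + c * y + ty))
    return merged
```

Splicing the transformed blocks this way removes the distortion caused by the device moving during a single scan.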
Optionally, the apparatus may further include: a semantic segmentation module;
the semantic segmentation module is used for acquiring point cloud information of the laser radar point cloud; performing semantic segmentation on the laser radar point cloud based on the point cloud information to obtain the semantic type of each scanning point in the laser radar point cloud; identifying a target scanning point with a designated semantic type, wherein the designated semantic type is a semantic type with motion characteristics; and deleting the target scanning point from the laser radar point cloud to obtain an updated laser radar point cloud.
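The deletion of target scanning points reduces to discarding points whose semantic type is in the designated set; the label names here are illustrative assumptions, since the patent only requires that designated types carry motion characteristics:

```python
# Hypothetical label set for semantic types with motion characteristics.
DYNAMIC_TYPES = {"person", "vehicle"}

def filter_dynamic(points):
    """Keep only static scanning points; each point is (x, y, z, semantic_type)."""
    return [p for p in points if p[3] not in DYNAMIC_TYPES]
```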
Optionally, the alignment module 1020 may be specifically configured to:
mapping the laser radar point cloud to a coordinate system to which a point cloud map belongs to obtain the coordinates of each scanning point in the laser radar point cloud under the coordinate system;
acquiring coordinates of each map point in the point cloud map;
if only one map point or only one scanning point is determined at a position under the coordinate system according to the coordinates of each map point and each scanning point, determining that the point at the position is a difference point.
Optionally, the calculating module 1030 may be specifically configured to:
respectively calculating the distance between each map point and each scanning point under the coordinate system according to the coordinates of each map point and the coordinates of each scanning point;
aiming at any difference point, if the difference point is a map point and no scanning point with the distance from the difference point smaller than a preset distance threshold exists, reducing the observation probability of the difference point according to a preset reduction strategy;
for any difference point, if the difference point is a scanning point and no map point with the distance from the difference point being smaller than a preset distance threshold exists, adding a map point at a position corresponding to the coordinates of the difference point in a point cloud map, and initializing the observation probability of the difference point;
aiming at any difference point, if the difference point is a map point and a scanning point exists, wherein the distance between the difference point and the scanning point is smaller than a preset distance threshold value, the observation probability of the difference point is increased according to a preset increasing strategy.
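The three update rules above can be sketched together as one fusion round over a toy 2-D map. The probability values, step sizes, and thresholds here are illustrative assumptions, not values from the patent:

```python
INIT_PROB = 0.5     # assumed initial observation probability for a new map point
STEP = 0.1          # assumed increase/decrease step
DELETE_BELOW = 0.2  # assumed deletion threshold on observation probability
DIST_THRESH = 0.3   # assumed association distance threshold

def update_map(map_pts, scan_pts):
    """One fusion round: raise the probability of observed map points, lower
    unobserved ones (deleting those below threshold), and initialize map
    points for unmatched scan points. `map_pts` is {(x, y): probability};
    `scan_pts` are already converted into the map's coordinate system."""
    def near(p, qs):
        return any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 < DIST_THRESH ** 2
                   for q in qs)

    updated = {}
    for m, prob in map_pts.items():
        prob = min(1.0, prob + STEP) if near(m, scan_pts) else prob - STEP
        if prob >= DELETE_BELOW:          # drop points no longer observed
            updated[m] = prob
    for s in scan_pts:
        if not near(s, map_pts) and s not in updated:
            updated[s] = INIT_PROB        # initialize a newly seen point
    return updated
```

Repeated passes through the scene drive persistent points toward probability 1 and vanished points down to deletion, matching the parking-lot / road-obstacle example above.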
Optionally, the apparatus may further include: a laser positioning module;
the laser positioning module is used for acquiring the predicted pose relationship and the radar coordinates of each scanning point in the laser radar point cloud; converting each scanning point in the laser radar point cloud to the coordinate system to which the point cloud map belongs according to the predicted pose relationship to obtain the predicted coordinate of each scanning point; for any scanning point, searching for the associated map point closest to the scanning point in the point cloud map according to the predicted coordinate of the scanning point and the coordinates of the map points; filtering out associated map points with unstable features from the searched associated map points by using a preset filtering method; and calculating the relative pose relationship between the laser radar point cloud and the point cloud map according to the coordinates of the remaining associated map points and the radar coordinates of each scanning point;
the comparison module 1020 may be specifically configured to convert each scanning point in the laser radar point cloud to a coordinate system to which the point cloud map belongs according to the relative pose relationship, so as to obtain coordinates of each scanning point in the coordinate system.
Optionally, the apparatus may further include: a confidence calculation module;
the confidence coefficient calculation module is used for converting the laser radar point cloud and the point cloud map into the same coordinate system to obtain the coordinates of each scanning point in the laser radar point cloud and the coordinates of each map point in the point cloud map in the same coordinate system; acquiring noise information of the laser radar; calculating registration confidence coefficients of the laser radar point cloud and the point cloud map by utilizing a preset pose solving function according to the coordinates of each scanning point, the coordinates of each map point and noise information;
the comparison module 1020 may be specifically configured to compare the lidar point cloud with the point cloud map and determine a difference point between the lidar point cloud and the point cloud map if the registration confidence is greater than a preset confidence threshold.
Optionally, the laser radar point cloud includes a multi-frame laser radar point cloud;
the apparatus may further include: a fusion module;
and the fusion module is used for fusing the multi-frame laser radar point cloud to obtain fused laser radar point cloud.
By applying the embodiment of the application, the laser radar point cloud of the current scene obtained by scanning is compared with the pre-stored point cloud map of the current scene to determine the difference points between the laser radar point cloud and the point cloud map, and the observation probability of each difference point is determined. The observation probability represents the probability that a scanning point and a map point are the same point; its initial value is set for a map point when the map point is added to the point cloud map, and it is reduced each time the laser radar fails to scan a scanning point that is the same point as the map point. That is, if a map point in the point cloud map is a difference point relative to the laser radar point cloud and its observation probability is small, the probability that a scanning point associated with the map point exists in the actual scene is low; if the observation probability of the map point is smaller than the preset threshold, the map point can be considered absent from the actual scene. Therefore, the map point is deleted from the point cloud map and the point cloud map is updated, so that the updated point cloud map conforms better to the actual scene and is more accurate.
The embodiment of the present application provides an electronic device, as shown in fig. 11, including a processor 1101 and a memory 1102, where the memory 1102 stores machine executable instructions that can be executed by the processor 1101, and the machine executable instructions are loaded and executed by the processor 1101 to implement the point cloud map updating method provided by the embodiment of the present application.
The memory may include a RAM (Random Access Memory) or an NVM (Non-Volatile Memory), such as at least one disk storage. Optionally, the memory may also be at least one storage device located remotely from the processor.

The processor may be a general-purpose processor, including a CPU, an NP (Network Processor), and the like; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The memory 1102 and the processor 1101 may be connected in a wired or wireless manner, and the electronic device may communicate with other devices through a wired or wireless communication interface. Fig. 11 shows data transmission via a bus as an example; the connection is not limited to a specific manner.
In the embodiment of the present application, by loading and executing the machine executable instructions, the processor can realize the following: the laser radar point cloud of the current scene obtained by scanning is compared with the pre-stored point cloud map of the current scene to determine the difference points between the laser radar point cloud and the point cloud map, and the observation probability of each difference point is determined. The observation probability represents the probability that a scanning point and a map point are the same point; its initial value is set for a map point when the map point is added to the point cloud map, and it is reduced each time the laser radar fails to scan a scanning point that is the same point as the map point. That is, if a map point in the point cloud map is a difference point relative to the laser radar point cloud and its observation probability is small, the probability that a scanning point associated with the map point exists in the actual scene is low; if the observation probability of the map point is smaller than the preset threshold, the map point can be considered absent from the actual scene. Therefore, the map point is deleted from the point cloud map and the point cloud map is updated, so that the updated point cloud map conforms better to the actual scene and is more accurate.
In addition, a machine-readable storage medium is provided, where machine-executable instructions are stored, and when the machine-executable instructions are loaded and executed by a processor, the method for updating a point cloud map provided in an embodiment of the present application is implemented.
In the embodiment of the present application, the machine-readable storage medium stores machine executable instructions that, when run, execute the point cloud map updating method provided in the embodiment of the present application, so that the following can be realized: the laser radar point cloud of the current scene obtained by scanning is compared with the pre-stored point cloud map of the current scene to determine the difference points between the laser radar point cloud and the point cloud map, and the observation probability of each difference point is determined. The observation probability represents the probability that a scanning point and a map point are the same point; its initial value is set for a map point when the map point is added to the point cloud map, and it is reduced each time the laser radar fails to scan a scanning point that is the same point as the map point. That is, if a map point in the point cloud map is a difference point relative to the laser radar point cloud and its observation probability is small, the probability that a scanning point associated with the map point exists in the actual scene is low; if the observation probability of the map point is smaller than the preset threshold, the map point can be considered absent from the actual scene. Therefore, the map point is deleted from the point cloud map and the point cloud map is updated, so that the updated point cloud map conforms better to the actual scene and is more accurate.
In yet another embodiment provided by the present application, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the point cloud map updating methods of the above embodiments.
An embodiment of the present application further provides a positioning system, as shown in fig. 12, the system includes an electronic device 1210 and a laser radar 1220;
the laser radar 1220 is used for scanning a laser radar point cloud of a current scene and sending the laser radar point cloud to the electronic device 1210;
the electronic device 1210 is configured to receive the laser radar point cloud sent by the laser radar 1220 and acquire a pre-stored point cloud map of the current scene, where the laser radar point cloud includes a plurality of scanning points and the point cloud map includes a plurality of map points; compare the laser radar point cloud with the point cloud map and determine the difference points between the laser radar point cloud and the point cloud map; determine the observation probability of each difference point, where the observation probability represents the probability that a scanning point and a map point are the same point, an initial value of the observation probability is set for a map point when the map point is added to the point cloud map, and the observation probability is reduced each time the laser radar fails to scan a scanning point that is the same point as the map point; delete the difference points whose observation probability is smaller than a preset threshold from the point cloud map to obtain an updated point cloud map; and perform target positioning by using the updated point cloud map.
By applying the embodiment of the application, the laser radar point cloud of the current scene obtained by scanning is compared with the pre-stored point cloud map of the current scene to determine the difference points between the laser radar point cloud and the point cloud map, and the observation probability of each difference point is determined. The observation probability represents the probability that a scanning point and a map point are the same point; its initial value is set for a map point when the map point is added to the point cloud map, and it is reduced each time the laser radar fails to scan a scanning point that is the same point as the map point. That is, if a map point in the point cloud map is a difference point relative to the laser radar point cloud and its observation probability is small, the probability that a scanning point associated with the map point exists in the actual scene is low; if the observation probability of the map point is smaller than the preset threshold, the map point can be considered absent from the actual scene. Therefore, the map point is deleted from the point cloud map and the point cloud map is updated, so that the updated point cloud map conforms better to the actual scene and is more accurate; further, the positioning accuracy can be improved when the updated point cloud map is used for positioning.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the application to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wire (e.g., coaxial cable, fiber, DSL (Digital Subscriber Line)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., a floppy Disk, a hard Disk, a magnetic tape), an optical medium (e.g., a DVD (Digital Versatile Disk)), or a semiconductor medium (e.g., a SSD (Solid State Disk)), etc.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus, the electronic device, the machine-readable storage medium, the computer program product, and the positioning system embodiment, since they are substantially similar to the method embodiment, the description is relatively simple, and it is sufficient to refer to the partial description of the method embodiment for relevant points.
The above description is only for the preferred embodiment of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application are included in the protection scope of the present application.

Claims (10)

1. A point cloud map updating method is characterized by comprising the following steps:
the method comprises the steps of obtaining a laser radar point cloud of a current scene scanned by a laser radar and a pre-stored point cloud map of the current scene, wherein the laser radar point cloud comprises a plurality of scanning points, and the point cloud map comprises a plurality of map points;
comparing the laser radar point cloud with the point cloud map, and determining difference points of the laser radar point cloud and the point cloud map;
determining the observation probability of each difference point, wherein the observation probability represents the probability that one scanning point and one map point are the same point, an initial value of the observation probability is set for a map point when the map point is added to the point cloud map, and the observation probability is reduced each time the laser radar fails to scan a scanning point that is the same point as the map point;
and deleting the difference points with the observation probability smaller than a preset threshold value in the point cloud map to obtain the updated point cloud map.
2. The method of claim 1, wherein after the acquiring of the laser radar point cloud of the current scene scanned by the laser radar, the method further comprises:
acquiring each point cloud block in the laser radar point cloud and current motion data, wherein a point cloud block is the minimum unit of the laser radar point cloud;

calculating the pose transformation relation of each point cloud block relative to the initial point cloud block by using the current motion data;

converting each point cloud block into the coordinate system of the initial point cloud block according to the pose transformation relation;

and splicing the converted point cloud blocks to obtain the updated laser radar point cloud.
3. The method of claim 1, wherein after the acquiring of the laser radar point cloud of the current scene scanned by the laser radar, the method further comprises:
acquiring point cloud information of the laser radar point cloud;
performing semantic segmentation on the laser radar point cloud based on the point cloud information to obtain the semantic type of each scanning point in the laser radar point cloud;
identifying a target scanning point with a specified semantic type, wherein the specified semantic type is a semantic type with a motion characteristic;
and deleting the target scanning point from the laser radar point cloud to obtain the updated laser radar point cloud.
4. The method of claim 1, wherein comparing the lidar point cloud to the point cloud map to determine difference points of the lidar point cloud to the point cloud map comprises:
mapping the laser radar point cloud to a coordinate system to which the point cloud map belongs to obtain the coordinate of each scanning point in the laser radar point cloud under the coordinate system;
acquiring coordinates of each map point in the point cloud map;
and if only one map point or only one scanning point is determined at a position under the coordinate system according to the coordinates of the map points and the coordinates of the scanning points, determining that the points at the position are difference points.
5. The method of claim 4, wherein determining the observation probability of each difference point comprises:
respectively calculating the distance between each map point and each scanning point in the coordinate system according to the coordinates of each map point and the coordinates of each scanning point;
aiming at any difference point, if the difference point is a map point and no scanning point with the distance from the difference point smaller than a preset distance threshold exists, reducing the observation probability of the difference point according to a preset reduction strategy;
for any difference point, if the difference point is a scanning point and no map point with the distance from the difference point smaller than the preset distance threshold exists, adding a map point at the corresponding position of the coordinate of the difference point in the point cloud map and initializing the observation probability of the difference point;
and aiming at any difference point, if the difference point is a map point and a scanning point exists, wherein the distance between the difference point and the scanning point is less than the preset distance threshold, the observation probability of the difference point is increased according to a preset increasing strategy.
6. The method of claim 4, wherein before the mapping the lidar point cloud to the coordinate system to which the point cloud map belongs to obtain coordinates of each scanning point in the lidar point cloud in the coordinate system, the method further comprises:
acquiring a predicted pose relation and radar coordinates of each scanning point in the laser radar point cloud;
converting each scanning point in the laser radar point cloud to a coordinate system to which the point cloud map belongs according to the predicted pose relation to obtain a predicted coordinate of each scanning point;
aiming at any scanning point, searching a related map point closest to the scanning point from the point cloud map according to the predicted coordinate of the scanning point and the coordinates of each map point;
filtering unstable associated map points with unstable characteristics in the searched associated map points by using a preset filtering method;
calculating the relative pose relationship between the laser radar point cloud and the point cloud map according to the coordinates of the remaining associated map points and the radar coordinates of each scanning point;
the step of mapping the laser radar point cloud to a coordinate system to which the point cloud map belongs to obtain the coordinates of each scanning point in the laser radar point cloud under the coordinate system comprises the following steps:
and converting each scanning point in the laser radar point cloud to a coordinate system to which the point cloud map belongs according to the relative pose relation to obtain the coordinate of each scanning point in the coordinate system.
7. The method of claim 1, wherein before comparing the lidar point cloud with the point cloud map to determine the difference points between them, the method further comprises:
converting the lidar point cloud and the point cloud map into the same coordinate system, to obtain the coordinates of each scanning point in the lidar point cloud and of each map point in the point cloud map under that coordinate system;
acquiring noise information of the lidar;
calculating a registration confidence of the lidar point cloud against the point cloud map by a preset pose-solving function, according to the coordinates of the scanning points, the coordinates of the map points, and the noise information;
wherein the step of comparing the lidar point cloud with the point cloud map to determine the difference points comprises:
comparing the lidar point cloud with the point cloud map to determine the difference points only if the registration confidence is greater than a preset confidence threshold.
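The claim does not disclose the actual pose-solving function, so the following is only a hypothetical proxy for a registration confidence: score the registration by the fraction of scan points whose residual to the nearest map point falls within a bound derived from the sensor noise (`k * noise_sigma` is an assumed gating rule):

```python
import numpy as np

def registration_confidence(scan_in_map, map_points, noise_sigma, k=3.0):
    """Toy registration-confidence score in [0, 1] (illustrative only;
    the patent's pose-solving function is not disclosed).

    A scan point counts as an inlier if its nearest map point lies within
    k * noise_sigma; the confidence is the inlier fraction.
    """
    inliers = 0
    for p in scan_in_map:
        d = np.min(np.linalg.norm(map_points - p, axis=1))
        if d <= k * noise_sigma:
            inliers += 1
    return inliers / len(scan_in_map)
```

A downstream gate such as `if registration_confidence(...) > threshold:` would then decide whether the difference-point comparison runs at all, matching the conditional structure of the claim.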
8. The method of claim 1, wherein the lidar point cloud comprises multiple frames of lidar point clouds;
before comparing the lidar point cloud with the point cloud map to determine the difference points, the method further comprises:
fusing the multiple frames of lidar point clouds to obtain a fused lidar point cloud.
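The claim leaves the fusion method open; a minimal sketch, assuming each frame comes with a known pose, is to transform every frame into a common coordinate system and concatenate the points:

```python
import numpy as np

def fuse_frames(frames, poses):
    """Fuse multiple lidar frames into one point cloud (minimal sketch).

    frames: list of (Ni, 3) arrays, each in its own lidar frame.
    poses:  list of 4x4 homogeneous transforms taking each frame
            into a common coordinate system.
    Returns the concatenated (sum(Ni), 3) fused cloud.
    """
    fused = []
    for pts, T in zip(frames, poses):
        homo = np.hstack([pts, np.ones((len(pts), 1))])
        fused.append((T @ homo.T).T[:, :3])  # transform into the common frame
    return np.vstack(fused)
```

A real pipeline would typically also voxel-downsample the fused cloud to bound its size, but that step is not part of the claim.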
9. A point cloud map updating apparatus, the apparatus comprising:
an acquisition module, configured to acquire a lidar point cloud of a current scene scanned by a lidar and a pre-stored point cloud map of the current scene, wherein the lidar point cloud comprises a plurality of scanning points and the point cloud map comprises a plurality of map points;
a comparison module, configured to compare the lidar point cloud with the point cloud map and determine the difference points between them;
a calculation module, configured to determine an observation probability for each difference point, wherein the observation probability represents the probability that a scanning point and a map point are the same point, its initial value is set for a map point when that map point is added to the point cloud map, and it is reduced each time the lidar fails to scan a point that is the same point as that map point;
an updating module, configured to delete from the point cloud map the difference points whose observation probability is smaller than a preset threshold, to obtain an updated point cloud map.
10. A positioning system, comprising an electronic device and a lidar;
wherein the lidar is configured to scan a lidar point cloud of a current scene and send the lidar point cloud to the electronic device;
and the electronic device is configured to: receive the lidar point cloud sent by the lidar and acquire a pre-stored point cloud map of the current scene, wherein the lidar point cloud comprises a plurality of scanning points and the point cloud map comprises a plurality of map points; compare the lidar point cloud with the point cloud map and determine the difference points between them; determine an observation probability for each difference point, wherein the observation probability represents the probability that a scanning point and a map point are the same point, its initial value is set for a map point when that map point is added to the point cloud map, and it is reduced each time the lidar fails to scan a point that is the same point as that map point; delete from the point cloud map the difference points whose observation probability is smaller than a preset threshold, to obtain an updated point cloud map; and position a target using the point cloud map.
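The observation-probability bookkeeping that claims 9 and 10 describe can be sketched as follows; the initial value, the multiplicative decay, and the deletion threshold are illustrative numbers chosen here, since the patent leaves them as design parameters:

```python
class MapPoint:
    """Map point carrying the observation probability described in the claims.

    INITIAL_PROB and DECAY are illustrative; the patent does not fix
    concrete values.
    """
    INITIAL_PROB = 1.0   # set when the point is added to the map
    DECAY = 0.8          # applied each time the point is not re-observed

    def __init__(self):
        self.prob = self.INITIAL_PROB

    def missed(self):
        # The lidar failed to scan a point matching this map point,
        # so reduce the observation probability.
        self.prob *= self.DECAY


def update_map(points, threshold=0.3):
    """Delete difference points whose probability fell below the threshold,
    yielding the updated point cloud map."""
    return [p for p in points if p.prob >= threshold]
```

With these numbers, a point that goes unobserved for six consecutive scans (0.8^6 ≈ 0.26 < 0.3) is removed, so transient objects fade out of the map while stable structure persists.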
CN202010522053.5A 2020-06-10 2020-06-10 Point cloud map updating method and device, electronic equipment and positioning system Pending CN113776544A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010522053.5A CN113776544A (en) 2020-06-10 2020-06-10 Point cloud map updating method and device, electronic equipment and positioning system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010522053.5A CN113776544A (en) 2020-06-10 2020-06-10 Point cloud map updating method and device, electronic equipment and positioning system

Publications (1)

Publication Number Publication Date
CN113776544A true CN113776544A (en) 2021-12-10

Family

ID=78834630

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010522053.5A Pending CN113776544A (en) 2020-06-10 2020-06-10 Point cloud map updating method and device, electronic equipment and positioning system

Country Status (1)

Country Link
CN (1) CN113776544A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010160777A (en) * 2008-12-12 2010-07-22 Toyota Central R&D Labs Inc Map generation device, track estimation device, movable area estimation device and program
US20180148047A1 (en) * 2016-11-29 2018-05-31 Ford Global Technologies, Llc Multi-sensor probabilistic object detection and automated braking
CN108732584A (en) * 2017-04-17 2018-11-02 百度在线网络技术(北京)有限公司 Method and apparatus for updating map
WO2019073038A1 (en) * 2017-10-13 2019-04-18 Connaught Electronics Ltd. Automatically parking a vehicle in a parking slot
CN109781119A (en) * 2017-11-15 2019-05-21 百度在线网络技术(北京)有限公司 A kind of laser point cloud localization method and system
CN110533055A (en) * 2018-05-25 2019-12-03 北京京东尚科信息技术有限公司 A kind for the treatment of method and apparatus of point cloud data
CN110596683A (en) * 2019-10-25 2019-12-20 中山大学 Multi-group laser radar external parameter calibration system and method thereof
CN110763243A (en) * 2018-07-27 2020-02-07 杭州海康威视数字技术股份有限公司 Sliding map updating method and device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113362445A (en) * 2021-05-25 2021-09-07 上海奥视达智能科技有限公司 Method and device for reconstructing object based on point cloud data
WO2023231212A1 (en) * 2022-06-02 2023-12-07 合众新能源汽车股份有限公司 Prediction model training method and apparatus, and map prediction method and apparatus
CN116148878A (en) * 2023-04-18 2023-05-23 浙江华是科技股份有限公司 Ship starboard height identification method and system
CN116148878B (en) * 2023-04-18 2023-07-07 浙江华是科技股份有限公司 Ship starboard height identification method and system
CN117128985A (en) * 2023-04-27 2023-11-28 荣耀终端有限公司 Point cloud map updating method and equipment
CN117128985B (en) * 2023-04-27 2024-05-31 荣耀终端有限公司 Point cloud map updating method and equipment

Similar Documents

Publication Publication Date Title
CN113776544A (en) Point cloud map updating method and device, electronic equipment and positioning system
CN111413721B (en) Vehicle positioning method, device, controller, intelligent vehicle and system
CN111709975B (en) Multi-target tracking method, device, electronic equipment and storage medium
CN109212532B (en) Method and apparatus for detecting obstacles
WO2018142900A1 (en) Information processing device, data management device, data management system, method, and program
CN112014857A (en) Three-dimensional laser radar positioning and navigation method for intelligent inspection and inspection robot
CN110749901B (en) Autonomous mobile robot, map splicing method and device thereof, and readable storage medium
US11977159B2 (en) Method for determining a position of a vehicle
EP3851870A1 (en) Method for determining position data and/or motion data of a vehicle
CN114111774B (en) Vehicle positioning method, system, equipment and computer readable storage medium
CN113593017A (en) Method, device and equipment for constructing surface three-dimensional model of strip mine and storage medium
CN111937036A (en) Method, apparatus, and computer-readable storage medium having instructions for processing sensor data
CN112036359B (en) Method for obtaining topological information of lane line, electronic device and storage medium
CN114119659A (en) Multi-sensor fusion target tracking method
WO2021057324A1 (en) Data processing method and apparatus, chip system, and medium
CN116449392B (en) Map construction method, device, computer equipment and storage medium
CN113325389A (en) Unmanned vehicle laser radar positioning method, system and storage medium
CN115494533A (en) Vehicle positioning method, device, storage medium and positioning system
CN117218350A (en) SLAM implementation method and system based on solid-state radar
CN116958266B (en) Closed loop detection method, device, communication equipment and storage medium
CN111324686B (en) Target measurement track acquisition method and device, storage medium and electronic device
CN117570968A (en) Map construction and maintenance method and device based on visual road sign and storage medium
CN112614162B (en) Indoor vision rapid matching and positioning method and system based on space optimization strategy
CN115656991A (en) Vehicle external parameter calibration method, device, equipment and storage medium
Li et al. Lane marking verification for high definition map maintenance using crowdsourced images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination