WO2021254019A1 - Method, device and system for collaboratively constructing a point cloud map - Google Patents

Method, device and system for collaboratively constructing a point cloud map

Info

Publication number
WO2021254019A1
WO2021254019A1 · PCT/CN2021/092280
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
road section
cloud data
data
road
Application number
PCT/CN2021/092280
Other languages
English (en)
French (fr)
Inventor
孔旗
张金凤
Original Assignee
北京京东乾石科技有限公司
Application filed by 北京京东乾石科技有限公司
Priority to EP21824786.4A (published as EP4170581A1)
Priority to US18/002,092 (published as US20230351686A1)
Publication of WO2021254019A1

Classifications

    • G06T17/05: Geographic models (3D modelling)
    • G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T7/33: Determination of transform parameters for image registration using feature-based methods
    • G01C21/3841: Map data obtained from two or more sources, e.g. probe vehicles
    • G01C21/3848: Map data obtained from both position sensors and additional sensors
    • G06F18/2321: Non-hierarchical clustering using statistics or function optimisation, e.g. modelling of probability density functions
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06V10/803: Fusion of input or preprocessed data
    • G06V20/56: Context of the image exterior to a vehicle, using sensors mounted on the vehicle
    • G06T2200/04: Involving 3D image data
    • G06T2200/08: Involving all processing steps from image acquisition to 3D model generation
    • G06T2200/32: Involving image mosaicing
    • G06T2207/10016: Video; image sequence
    • G06T2207/10028: Range image; depth image; 3D point clouds
    • G06T2207/30184: Infrastructure
    • G06T2207/30236: Traffic on road, railway or crossing
    • G06T2207/30252: Vehicle exterior; vicinity of vehicle
    • G06T2210/56: Particle system, point-based geometry or rendering

Definitions

  • the present disclosure relates to the field of point cloud map construction, and in particular to a method, device and system for collaboratively constructing a point cloud map.
  • Point cloud maps used in the field of autonomous driving are currently built by dedicated map data collection vehicles equipped with expensive, high-precision equipment, such as high-precision integrated inertial navigation units and high-precision multi-line lidar. These vehicles collect data along the route, and a point cloud map is constructed after back-end equipment processes the collected data offline.
  • the embodiment of the present disclosure proposes a scheme for collaboratively constructing a point cloud map by multiple vehicles.
  • Different vehicles collect point cloud data for different road sections of a preset route. The point cloud data of the overlapping area of two adjacent road sections is used to determine a transformation matrix between the point cloud data of the two sections, and the transformed point cloud data is used to splice the point cloud data of the different road sections into a point cloud map of the entire route.
  • This improves the efficiency of constructing the point cloud map of the entire route, and suits business scenarios where the point cloud map is constructed online.
  • Each vehicle collects point cloud data for only one road section rather than the entire route, which lowers the accuracy requirements on the map data collection equipment each vehicle carries.
  • Some embodiments of the present disclosure propose a method for collaboratively constructing a point cloud map, including:
  • the point cloud data of the first road section and the transformed point cloud data of the second road section are spliced together to construct a point cloud map of the preset route.
  • The point cloud data of each road section includes each single-frame point cloud of the section and the marking data corresponding to each single-frame point cloud. Determining the transformation matrix that transforms the point cloud data of the second road section to the point cloud data of the first road section includes: registering the single-frame point clouds of the overlapping area in the first road section against the single-frame point clouds of the second road section that have the same marking data, thereby determining the transformation matrix.
  • determining the transformation matrix for transforming the point cloud data of the second road section to the point cloud data of the first road section includes:
  • If the error is greater than the preset error, the above steps continue iteratively; if the error is not greater than the preset error, the iteration stops, and the rotation-translation matrix corresponding to that error is determined to be the transformation matrix from the point cloud data of the second road section to the point cloud data of the first road section.
  • The marking data corresponding to each single-frame point cloud of a road section includes at least one of the following: the current global position and heading information of the vehicle on the road section when the single-frame point cloud was collected, and the environment image captured by the vehicle when the single-frame point cloud was collected.
  • When the similarity of the feature vectors of the environment images corresponding to different single-frame point clouds is greater than a preset value, the point clouds are judged to be different single-frame point clouds with the same marking data; likewise, when the global position and heading information of the vehicles corresponding to different single-frame point clouds are the same, they are judged to be different single-frame point clouds with the same marking data.
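As an illustration of the feature-vector comparison above (not from the patent itself; the feature extractor is unspecified, and the 0.9 threshold here is an assumed stand-in for the "preset value"), two environment-image feature vectors can be compared by cosine similarity:

```python
import numpy as np

def same_mark(feat_a, feat_b, threshold=0.9):
    """Judge two single-frame point clouds as having the same marking data
    when the cosine similarity of their environment-image feature vectors
    exceeds a preset threshold (0.9 is an illustrative value)."""
    sim = float(np.dot(feat_a, feat_b)
                / (np.linalg.norm(feat_a) * np.linalg.norm(feat_b)))
    return sim > threshold
```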
  • Each single-frame point cloud of the second road section includes the point cloud data of the frame and the local pose information corresponding to the frame. The point cloud data of a single-frame point cloud of the second road section consists of the position of each point constituting the frame, expressed in a local coordinate system whose origin is the vehicle on the second road section. The local pose information corresponding to a single-frame point cloud of the second road section is the pose of the vehicle at the moment the frame was collected, expressed relative to the vehicle's starting pose when the first single-frame point cloud of the section was collected. Using the transformation matrix to transform the point cloud data of the second road section includes: multiplying the local pose information corresponding to each single-frame point cloud by the transformation matrix first, and then multiplying the result by the point cloud data of that frame.
  • The method further includes: clustering the points of each road section according to the road section's point cloud data, and removing from the point cloud data any clustered point set that matches the shape of a preset moving object.
  • The point cloud data of each road section may be point cloud data from which the point sets of preset moving objects have already been removed. The removal operation includes: the vehicle on each road section clusters the points of the section based on the section's point cloud data, and removes from the section's point cloud data the clustered point sets that match the shape of a preset moving object.
  • Constructing the point cloud map of the preset route includes: using LeGO-LOAM to construct the point cloud map of the preset route from the point cloud data of the first road section and the transformed point cloud data of the second road section.
  • it further includes:
  • the point cloud data of the fourth road section and the transformed point cloud data of the third road section are spliced together to construct a point cloud map of the preset route.
  • Some embodiments of the present disclosure propose a system for collaboratively constructing a point cloud map, including:
  • Each vehicle is configured to collect the point cloud data of its corresponding road section while moving on that section and to send the collected point cloud data to the device that collaboratively builds the point cloud map. Different vehicles correspond to different road sections of the preset route.
  • the device for collaboratively constructing a point cloud map is configured to execute the method for collaboratively constructing a point cloud map described in any of the embodiments.
  • the device for collaboratively constructing a point cloud map is one of a plurality of vehicles.
  • Some embodiments of the present disclosure propose a device for collaboratively constructing a point cloud map, including:
  • the communication device is configured to acquire the point cloud data of the corresponding road sections separately collected when multiple vehicles are moving on different road sections;
  • a processor coupled to the memory, and the processor is configured to execute the method for collaboratively constructing a point cloud map described in any one of the embodiments based on instructions stored in the memory.
  • the device for collaboratively constructing a point cloud map is one of multiple vehicles; the device for collaboratively constructing a point cloud map further includes:
  • the point cloud data collection device is configured to collect each single frame point cloud of the corresponding road section when moving on the corresponding road section;
  • the pose detection device is configured to detect the relative pose information of the current pose corresponding to each single frame of point cloud with respect to the starting pose corresponding to the initial single frame of point cloud;
  • the marking data collecting device is configured to collect marking data corresponding to each single frame of point cloud at the same time that the single frame of point cloud is collected.
  • the marking data collection device includes: a global positioning and navigation device; or, includes: a camera device.
  • Some embodiments of the present disclosure propose a non-transitory computer-readable storage medium on which a computer program is stored.
  • When the program is executed by a processor, the steps of the method for collaboratively constructing a point cloud map described in any of the embodiments are implemented.
  • FIG. 1 shows a schematic flowchart of a method for collaboratively constructing a point cloud map according to some embodiments of the present disclosure.
  • Fig. 2 shows a schematic diagram of an application scenario for collaboratively constructing a point cloud map according to some embodiments of the present disclosure.
  • FIG. 3 shows a schematic flowchart of a method for collaboratively constructing a point cloud map according to other embodiments of the present disclosure.
  • Fig. 4 shows a schematic diagram of a system for collaboratively constructing a point cloud map according to some embodiments of the present disclosure.
  • FIG. 5 shows a schematic diagram of a device for collaboratively constructing a point cloud map according to some embodiments of the present disclosure.
  • FIG. 1 shows a schematic flowchart of a method for collaboratively constructing a point cloud map according to some embodiments of the present disclosure.
  • the method of this embodiment includes: steps 110-140.
  • In step 110, the device for collaboratively constructing a point cloud map acquires the point cloud data of the corresponding road sections collected while multiple vehicles move on different road sections. Different vehicles correspond to different road sections of the preset route, and any adjacent first and second road sections have an overlapping area.
  • the device for collaboratively constructing a point cloud map can be one of multiple vehicles, or other devices.
  • the vehicle that collects the point cloud data may be, for example, an autonomous vehicle.
  • the self-driving vehicle can collect point cloud data by using its own sensors, without the need for additional sensors.
  • the collected point cloud data of each road section includes: each single frame point cloud of the road section and the corresponding mark data of each single frame point cloud.
  • the single-frame point cloud of the road section includes: the point cloud data of the single-frame point cloud and the corresponding local pose information of the single-frame point cloud.
  • The point cloud data (or first point cloud data) of a single-frame point cloud of each road section includes the position of each point constituting the frame, expressed in a local coordinate system whose origin is the vehicle on that road section.
  • the first point cloud data can be acquired by, for example, a point cloud data acquisition device, such as a lidar device.
  • The local pose information corresponding to each single-frame point cloud of a road section is the pose of the vehicle at the moment the frame was collected, expressed relative to the vehicle's starting pose at the beginning of the section's point cloud collection.
  • the local pose information corresponding to a single frame of point cloud may be acquired by a pose detection device, such as an inertial navigation device, for example.
  • Based on the first point cloud data and the corresponding local pose information, the second point cloud data of the single-frame point cloud can be determined.
  • the second point cloud data of the single frame point cloud of each road segment includes: the position information of each point constituting the single frame point cloud in the local coordinate system with the starting pose of the vehicle on the road segment as the origin.
  • The local pose matrix corresponding to a single-frame point cloud is multiplied by the first point cloud data of the frame to obtain the second point cloud data of the frame:

    (x', y', z')ᵀ = R₃ₓ₃ · (x, y, z)ᵀ + P₃ₓ₁

    where R₃ₓ₃ is the pose matrix with 3 rows and 3 columns, P₃ₓ₁ is the position matrix with 3 rows and 1 column, (x, y, z) is the position of a point in the first point cloud data, and (x', y', z') is the position of that point in the second point cloud data.
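A minimal numpy sketch of this per-frame pose transform, p' = R·p + P (an illustration, not the patent's implementation):

```python
import numpy as np

def to_start_frame(points, R, P):
    """Compute second point cloud data from first point cloud data:
    p' = R @ p + P, where R is the 3x3 pose (rotation) matrix and P the
    3x1 position of the frame relative to the vehicle's starting pose.
    points: (N, 3) array of point positions in the current local frame."""
    return points @ R.T + P
```

Applying the identity pose leaves the points unchanged; a pure translation shifts every point by P.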
  • Based on the second point cloud data, a local point cloud map of the road section can be constructed.
  • The marking data corresponding to a single-frame point cloud of each road section includes, for example, at least one of: the current global position and heading information of the vehicle when the frame was collected, and the environment image captured when the frame was collected.
  • the marking data is acquired by the marking data acquisition device.
  • global position information and heading information can be acquired by a global positioning and navigation device mounted on the vehicle, and environmental images can be acquired by a camera device.
  • The device for collaboratively constructing a point cloud map clusters the points of each road section according to the road section's point cloud data, for example using the second point cloud data of each single-frame point cloud (that is, the position of each point constituting the frame in the local coordinate system whose origin is the vehicle's starting pose on the section). Then, from the point cloud data of each road section, the clustered point sets that match the shape of a preset moving object are removed.
  • In this way, noise formed by moving objects such as pedestrians and moving vehicles is removed.
  • the point cloud data of each road section sent by each vehicle to the device for collaboratively constructing the point cloud map may be the point cloud data after removing the preset point set of the moving object.
  • The removal operation includes: the vehicle on each road section clusters the points of the section based on the section's point cloud data, for example using the second point cloud data of each single-frame point cloud (that is, the position of each point constituting the frame in the local coordinate system whose origin is the vehicle's starting pose on the section), and then removes from the section's point cloud data the clustered point sets that match the shape of a preset moving object.
  • In this way, noise formed by moving objects such as pedestrians and moving vehicles is removed, and the amount of data transmitted between each vehicle and the device that collaboratively builds the point cloud map is reduced.
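A rough sketch of this filtering step. The patent does not specify the clustering algorithm or the preset moving-object shapes, so this uses a naive single-linkage clustering and an assumed 1 m bounding-box gate as an illustrative stand-in:

```python
import numpy as np
from collections import deque

def cluster(points, eps=0.5):
    """Naive single-linkage clustering: a point within eps of any cluster
    member joins that cluster. O(N^2), for illustration only."""
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        labels[i] = current
        queue = deque([i])
        while queue:
            j = queue.popleft()
            dists = np.linalg.norm(points - points[j], axis=1)
            for k in np.nonzero((dists <= eps) & (labels == -1))[0]:
                labels[k] = current
                queue.append(k)
        current += 1
    return labels

def drop_moving_objects(points, labels, max_extent=1.0):
    """Remove clusters whose bounding-box extent in every axis is small
    enough to match a moving object; the 1 m gate is an illustrative
    stand-in for comparing against preset moving-object shapes."""
    keep = np.ones(len(points), dtype=bool)
    for c in np.unique(labels):
        mask = labels == c
        extent = points[mask].max(axis=0) - points[mask].min(axis=0)
        if np.all(extent <= max_extent):
            keep[mask] = False
    return points[keep]
```

A small compact cluster (a pedestrian-sized blob) is dropped, while an extended structure (e.g. a wall) survives.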
  • In step 120, the device for collaboratively constructing a point cloud map uses the point cloud data of the overlapping area to determine a transformation matrix for transforming the point cloud data of the second road section to the point cloud data of the first road section.
  • the Iterative Closest Point (ICP) algorithm is used to perform registration between different point clouds.
  • The single-frame point clouds of the overlapping area in the first road section and the single-frame point clouds of the overlapping area in the second road section with the same marking data are used as the target point cloud and the source point cloud, respectively. Matching point pairs between the source and target point clouds are obtained; a rotation-translation matrix is constructed from the matching pairs; the rotation-translation matrix is used to transform the source point cloud; and the error between the transformed source point cloud and the target point cloud is calculated. If the error is greater than the preset error, the above steps are performed iteratively; if the error is not greater than the preset error, the iteration stops, and the rotation-translation matrix corresponding to that error is determined to be the transformation matrix from the point cloud data of the second road section to the point cloud data of the first road section.
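The iterate-until-the-error-is-small loop above is the classic ICP scheme. A minimal sketch, using nearest-neighbor matching and the SVD (Kabsch) solution for the rotation-translation matrix (an illustration of the general algorithm, not the patent's exact procedure):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch/SVD solution for the rotation-translation matrix aligning
    matched point pairs src[i] -> dst[i]."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(source, target, max_iter=50, tol=1e-6):
    """Minimal ICP: match each source point to its nearest target point,
    solve for the rotation-translation matrix, apply it, and iterate until
    the error is not greater than the preset error (tol)."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(max_iter):
        dists = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[dists.argmin(axis=1)]
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        if np.linalg.norm(src - matched, axis=1).mean() <= tol:
            break
    return R_total, t_total
```

With a source cloud that is a small rigid shift of the target, the loop recovers that shift.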
  • In step 130, the device for collaboratively constructing a point cloud map uses the transformation matrix to transform the point cloud data of the second road section.
  • The transformation method includes, for example: the local pose information corresponding to each single-frame point cloud of the second road section is first multiplied by the transformation matrix, and the result is then multiplied by the point cloud data (first point cloud data) of that frame.
  • In this way, the point cloud data of the second road section is transformed into the coordinate system of the first road section.
  • Here, N is an intermediate product of the calculation and has no practical meaning; the other symbol represents the transformation matrix.
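In homogeneous coordinates, "multiply the local pose by the transformation matrix first, then by the point cloud data" can be sketched as follows (an illustration with hypothetical function names, not the patent's code):

```python
import numpy as np

def homogeneous(R, t):
    """Pack a 3x3 rotation R and translation t into a 4x4 matrix."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def transform_frame(points, T_local, T_align):
    """Lift first point cloud data of one frame of the second road section
    into the first section's coordinate system: the section-alignment
    matrix T_align is multiplied by the frame's local pose T_local first
    (the intermediate product), and that product is applied to the points."""
    hom = np.hstack([points, np.ones((len(points), 1))])
    return (hom @ (T_align @ T_local).T)[:, :3]
```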
  • In step 140, the device for collaboratively constructing a point cloud map splices the point cloud data of the first road section and the transformed point cloud data of the second road section to construct a point cloud map of the preset route.
  • The point cloud data of the second road section has been transformed into the coordinate system of the first road section; therefore, after splicing the second point cloud data of each single-frame point cloud of the first road section with the transformed point cloud data of the second road section, a point cloud map of the first and second road sections can be constructed.
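Once both sections share one coordinate system, the splicing step itself can be as simple as concatenating the point sets (a sketch; deduplication of the overlap region is omitted):

```python
import numpy as np

def splice(first_section, second_section_transformed):
    """Concatenate two point sets that already share one coordinate
    system (handling of duplicate points in the overlap is omitted)."""
    return np.vstack([first_section, second_section_transformed])
```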
  • the method for constructing the point cloud map of other adjacent road sections is the same as the method for constructing the point cloud map of the first road section and the second road section, and will not be repeated here.
  • The LeGO-LOAM tool can be used to construct the point cloud map of the preset route; that is, by inputting the point cloud data of each road section into the LeGO-LOAM tool, a point cloud map of the entire route can be constructed.
  • In the above scheme for collaboratively constructing a point cloud map with multiple vehicles, different vehicles collect point cloud data for different road sections of a preset route; the point cloud data of the overlapping area of two adjacent road sections is used to determine the transformation matrix between the point cloud data of the two sections; and, based on the transformed point cloud data, the point cloud data of the different road sections are spliced to construct a point cloud map of the entire route.
  • the efficiency of constructing the point cloud map of the entire route is improved, and it is applicable to the business scenario of constructing the point cloud map online.
  • each vehicle collects point cloud data for a section of the road without collecting point cloud data for the entire route, thus reducing the accuracy requirements of the map data collection equipment carried by each vehicle.
  • Fig. 2 shows a schematic diagram of an application scenario for collaboratively constructing a point cloud map according to some embodiments of the present disclosure.
  • Each vehicle is responsible for collecting the point cloud data of one of the road sections.
  • the adjacent road sections are all provided with overlapping areas.
  • One of the four vehicles is used as a device for collaboratively constructing a point cloud map and is called the master vehicle.
  • the other vehicles are set as car No. 1, car No. 2, and car No. 3 respectively.
  • the four vehicles move on the road section they are responsible for in the same running direction, and collect the point cloud data of the road section they are responsible for at the same time.
  • Cars No. 1, No. 2, and No. 3 transmit the point cloud data they collect for their road sections to the master vehicle.
  • Using the point cloud data of the corresponding road sections collected by cars No. 1, No. 2, and No. 3, together with the point cloud data of its own road section, the master vehicle can construct the point cloud map of the entire rectangular route according to the method of the embodiment shown in Fig. 1.
  • The road section of the master vehicle and the road section of car No. 1 are adjacent road sections.
  • The point cloud data of car No. 1's road section is transformed into the coordinate system of the master vehicle's road section; splicing the point cloud data of the master vehicle's road section with the transformed point cloud data of car No. 1's road section constructs the point cloud map of the master vehicle's road section and car No. 1's road section.
  • The road section of the master vehicle and the road section of car No. 3 are also adjacent road sections.
  • Likewise, splicing with the transformed point cloud data of car No. 3's road section constructs the point cloud map of the master vehicle's road section and car No. 3's road section. Since the point cloud data of the master vehicle's road section and car No. 1's road section have been unified in the same coordinate system, those two sections are regarded as one merged road section, and the merged road section is adjacent to car No. 2's road section.
  • The point cloud data of car No. 2's road section is transformed into the coordinate system of the merged road section (that is, of the master vehicle's road section); splicing the point cloud data of the merged road section with the transformed point cloud data of car No. 2's road section constructs the point cloud map of the merged road section and car No. 2's road section. By similar processing, the point cloud map of the entire rectangular route is finally constructed.
  • FIG. 3 shows a schematic flowchart of a method for collaboratively constructing a point cloud map according to other embodiments of the present disclosure.
  • the method of this embodiment is based on the embodiment shown in FIG. 1, and further includes steps 350-380.
  • In step 350, when a road section in the preset route (the third road section) changes, any vehicle collects the point cloud data of the changed third road section and sends it to the device for collaboratively constructing a point cloud map.
  • The device for collaboratively constructing a point cloud map obtains the point cloud data of the changed third road section collected by any vehicle; the third road section and an unchanged road section of the preset route (the fourth road section) have an overlapping area.
  • In step 360, the device for collaboratively constructing a point cloud map uses the point cloud data of the overlapping area to determine a transformation matrix for transforming the point cloud data of the third road section to the point cloud data of the fourth road section.
  • the transformation matrix for transforming the point cloud data of the third road section into the point cloud data of the fourth road section is determined in the same way as the transformation matrix for transforming the point cloud data of the second road section into the point cloud data of the first road section; refer to the foregoing for the specific method.
  • In step 370, the device for collaboratively constructing a point cloud map transforms the point cloud data of the third road section with the transformation matrix.
  • the point cloud data of the third road section is transformed with the transformation matrix in the same way as the point cloud data of the second road section; refer to the foregoing for the specific method.
  • In step 380, the device for collaboratively constructing a point cloud map splices the point cloud data of the fourth road section with the transformed point cloud data of the third road section to construct a point cloud map of the preset route.
  • Fig. 4 shows a schematic diagram of a system for collaboratively constructing a point cloud map according to some embodiments of the present disclosure.
  • the system of this embodiment includes a plurality of vehicles 410 and a device 420 for collaboratively constructing a point cloud map.
  • the device for collaboratively constructing a point cloud map may be, for example, one of the multiple vehicles, called the main vehicle, or another device.
  • each vehicle is configured to collect the point cloud data of its corresponding road section while moving on that road section and to send the collected point cloud data to the device for collaboratively constructing a point cloud map; different vehicles correspond to different road sections of the preset route.
  • the device 420 for collaboratively constructing a point cloud map is configured to execute the method for collaboratively constructing a point cloud map of any of the embodiments.
  • For example, the device for collaboratively constructing a point cloud map acquires the point cloud data of the corresponding road sections respectively collected by multiple vehicles while moving on different road sections. Different vehicles correspond to different road sections of the preset route, and any adjacent first road section and second road section are provided with an overlapping area. The point cloud data of the overlapping area is used to determine the transformation matrix for transforming the point cloud data of the second road section into the point cloud data of the first road section; the point cloud data of the second road section is transformed with the transformation matrix; and the point cloud data of the first road section and the transformed point cloud data of the second road section are spliced to construct a point cloud map of the preset route.
  • As another example, the device for collaboratively constructing a point cloud map acquires point cloud data of a third road section of the preset route that has changed, collected by any vehicle; the third road section and a fourth road section of the preset route that has not changed are provided with an overlapping area. The point cloud data of the overlapping area is used to determine the transformation matrix for transforming the point cloud data of the third road section into the point cloud data of the fourth road section; the point cloud data of the third road section is transformed with the transformation matrix; and the point cloud data of the fourth road section and the transformed point cloud data of the third road section are spliced to construct a point cloud map of the preset route.
  • FIG. 5 shows a schematic diagram of a device for collaboratively constructing a point cloud map according to some embodiments of the present disclosure.
  • the device 420 for collaboratively constructing a point cloud map of this embodiment includes:
  • the communication device 421, configured to acquire the point cloud data of the corresponding road sections respectively collected by multiple vehicles while moving on different road sections;
  • the memory 422; and
  • the processor 423, coupled to the memory and configured to execute, based on instructions stored in the memory, the method for collaboratively constructing a point cloud map of any of the embodiments.
  • the communication device 421 may be, for example, a wireless communication device, such as a wireless local area network communication device, or a mobile network communication device, such as a fifth-generation or fourth-generation mobile network communication device.
  • the memory 422 may include, for example, a system memory, a fixed non-volatile storage medium, and the like.
  • the system memory for example, stores an operating system, an application program, a boot loader (Boot Loader), and other programs.
  • the device 420 for collaboratively constructing a point cloud map may be, for example, one of the multiple vehicles, in which case the device 420 further includes:
  • the point cloud data collection device 424, configured to collect each single-frame point cloud of the corresponding road section while moving on that road section, for example a lidar device, where the point cloud data collected by the lidar device includes not only the position information of each point but also information such as reflected signal strength;
  • the pose detection device 425, configured to detect the relative pose information of the current pose corresponding to each single-frame point cloud with respect to the starting pose corresponding to the initial single-frame point cloud, for example an inertial navigation device; and
  • the marker data collection device 426, configured to collect the marker data corresponding to each single-frame point cloud at the same time the single-frame point cloud is collected, for example a global positioning and navigation device or a camera device.
  • the aforementioned devices 421-426 may be connected via a bus 427, for example.
  • the present disclosure also proposes a non-transitory computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the steps of the method for collaboratively constructing a point cloud map of any one of the embodiments are realized.
  • the embodiments of the present disclosure may be provided as a method, a system, or a computer program product. Therefore, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present disclosure may take the form of a computer program product implemented on one or more non-transitory computer-readable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer program code.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing equipment to operate in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing equipment, so that a series of operational steps are executed on the computer or other programmable equipment to produce computer-implemented processing, such that the instructions executed on the computer or other programmable equipment provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.


Abstract

A method, device and system for collaboratively constructing a point cloud map, relating to the field of point cloud map construction. The method includes: acquiring point cloud data of corresponding road sections respectively collected by multiple vehicles while moving on different road sections, where different vehicles correspond to different road sections of a preset route and any adjacent first road section and second road section are provided with an overlapping area (110); using the point cloud data of the overlapping area to determine a transformation matrix for transforming the point cloud data of the second road section into the point cloud data of the first road section (120); transforming the point cloud data of the second road section with the transformation matrix (130); and splicing the point cloud data of the first road section with the transformed point cloud data of the second road section to construct a point cloud map of the preset route (140). This improves the efficiency of constructing a point cloud map of the entire route and is applicable to business scenarios in which point cloud maps are constructed online.

Description

Method, Device and System for Collaboratively Constructing a Point Cloud Map
Cross-Reference to Related Applications
This application is based on and claims priority to CN application No. 202010553190.5, filed on June 17, 2020, the disclosure of which is incorporated herein by reference in its entirety.
Technical Field
The present disclosure relates to the field of point cloud map construction, and in particular to a method, device and system for collaboratively constructing a point cloud map.
Background
Point cloud maps used in the field of autonomous driving are currently built by dedicated map data collection vehicles equipped with expensive, high-precision map data collection equipment, such as high-precision integrated inertial navigation devices and high-precision multi-line lidar devices. These vehicles collect data along a preset route, and back-end equipment processes the collected data offline to construct the point cloud map.
Summary
The embodiments of the present disclosure propose a scheme in which multiple vehicles collaboratively construct a point cloud map. Different vehicles collect the point cloud data of different road sections of a preset route; the point cloud data of the overlapping area of two adjacent road sections is used to determine a transformation matrix between the point cloud data of the two adjacent road sections; and the point cloud data of the different road sections is spliced based on the transformed point cloud data to construct a point cloud map of the entire route. This improves the efficiency of constructing the point cloud map of the entire route and is applicable to business scenarios in which point cloud maps are constructed online. In addition, each vehicle collects the point cloud data of only one road section rather than the entire route, which reduces the precision requirements on the map data collection equipment carried by each vehicle.
Some embodiments of the present disclosure propose a method for collaboratively constructing a point cloud map, comprising:
acquiring point cloud data of corresponding road sections respectively collected by multiple vehicles while moving on different road sections, where different vehicles correspond to different road sections of a preset route, and any adjacent first road section and second road section are provided with an overlapping area;
using the point cloud data of the overlapping area to determine a transformation matrix for transforming the point cloud data of the second road section into the point cloud data of the first road section;
transforming the point cloud data of the second road section with the transformation matrix; and
splicing the point cloud data of the first road section with the transformed point cloud data of the second road section to construct a point cloud map of the preset route.
In some embodiments, the point cloud data of each road section includes: the single-frame point clouds of the road section and the marker data corresponding to each single-frame point cloud; wherein determining the transformation matrix for transforming the point cloud data of the second road section into the point cloud data of the first road section includes: determining the transformation matrix by registering a single-frame point cloud of the overlapping area in the first road section and a single-frame point cloud of the overlapping area in the second road section that have the same marker data.
In some embodiments, determining the transformation matrix for transforming the point cloud data of the second road section into the point cloud data of the first road section includes:
taking the single-frame point cloud of the overlapping area in the first road section and the single-frame point cloud of the overlapping area in the second road section that have the same marker data as a target point cloud and a source point cloud respectively, and obtaining matching point pairs between the source point cloud and the target point cloud;
constructing a rotation-translation matrix based on the matching point pairs;
transforming the source point cloud with the rotation-translation matrix;
computing the error between the transformed source point cloud and the target point cloud;
if the error is greater than a preset error, continuing to perform the above steps iteratively; if the error is not greater than the preset error, stopping the iteration and determining the rotation-translation matrix corresponding to the error as the transformation matrix for transforming the point cloud data of the second road section into the point cloud data of the first road section.
In some embodiments, the marker data corresponding to each single-frame point cloud of each road section includes at least one of the following: the current global position information and heading information of the vehicle of the road section when collecting the single-frame point cloud; an environment image captured by the vehicle of the road section when collecting the single-frame point cloud.
In some embodiments, different single-frame point clouds are determined to have the same marker data when the similarity of the feature vectors of their corresponding environment images is greater than a preset value, or when the global position information and heading information of the vehicles corresponding to the different single-frame point clouds are the same.
In some embodiments, each single-frame point cloud of the second road section includes: the point cloud data of the single-frame point cloud and the local pose information corresponding to it, where the point cloud data of a single-frame point cloud of the second road section includes the position information of each point constituting the single-frame point cloud in a local coordinate system with the vehicle of the second road section as the origin, and the local pose information corresponding to a single-frame point cloud of the second road section is the relative pose information of the current pose of the vehicle of the second road section when collecting the single-frame point cloud with respect to the starting pose when collecting the initial single-frame point cloud of the second road section; transforming the point cloud data of the second road section with the transformation matrix includes: the local pose information corresponding to each single-frame point cloud of the second road section is first multiplied by the transformation matrix, and the result is then multiplied by the point cloud data of the single-frame point cloud.
In some embodiments, the method further includes: clustering the points of each road section according to the point cloud data of the road section; and removing, from the point cloud data of each road section, the clustered point sets that match the shape of a preset moving object.
In some embodiments, the point cloud data of each road section is point cloud data from which the point sets of preset moving objects have been removed, where the removal operation includes: the vehicle of each road section clusters the points of the road section based on the point cloud data of the road section, and removes, from the point cloud data of the road section, the clustered point sets that match the shape of a preset moving object.
In some embodiments, constructing the point cloud map of the preset route includes: constructing the point cloud map of the preset route with LeGO-LOAM, based on the point cloud data of the first road section and the transformed point cloud data of the second road section.
In some embodiments, the method further includes:
acquiring point cloud data of a third road section of the preset route that has changed, collected by any vehicle, where the third road section and a fourth road section of the preset route that has not changed are provided with an overlapping area;
using the point cloud data of the overlapping area to determine a transformation matrix for transforming the point cloud data of the third road section into the point cloud data of the fourth road section;
transforming the point cloud data of the third road section with the transformation matrix; and
splicing the point cloud data of the fourth road section with the transformed point cloud data of the third road section to construct the point cloud map of the preset route.
Some embodiments of the present disclosure propose a system for collaboratively constructing a point cloud map, comprising:
multiple vehicles, each vehicle being configured to collect the point cloud data of its corresponding road section while moving on that road section and to send the collected point cloud data to a device for collaboratively constructing a point cloud map, where different vehicles correspond to different road sections of a preset route; and
the device for collaboratively constructing a point cloud map, configured to perform the method for collaboratively constructing a point cloud map of any of the embodiments.
In some embodiments, the device for collaboratively constructing a point cloud map is one of the multiple vehicles.
Some embodiments of the present disclosure propose a device for collaboratively constructing a point cloud map, comprising:
a communication device, configured to acquire the point cloud data of corresponding road sections respectively collected by multiple vehicles while moving on different road sections;
a memory; and
a processor coupled to the memory, the processor being configured to perform, based on instructions stored in the memory, the method for collaboratively constructing a point cloud map of any of the embodiments.
In some embodiments, the device for collaboratively constructing a point cloud map is one of the multiple vehicles, and the device further comprises:
a point cloud data collection device, configured to collect each single-frame point cloud of the corresponding road section while moving on that road section;
a pose detection device, configured to detect the relative pose information of the current pose corresponding to each single-frame point cloud with respect to the starting pose corresponding to the initial single-frame point cloud; and
a marker data collection device, configured to collect the marker data corresponding to each single-frame point cloud at the same time the single-frame point cloud is collected.
In some embodiments, the marker data collection device includes a global positioning and navigation device, or a camera device.
Some embodiments of the present disclosure propose a non-transitory computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the steps of the method for collaboratively constructing a point cloud map of any of the embodiments.
Brief Description of the Drawings
The drawings needed in the description of the embodiments or the related art are briefly introduced below. The present disclosure can be understood more clearly from the following detailed description taken with reference to the drawings.
Obviously, the drawings described below are only some embodiments of the present disclosure; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 shows a schematic flowchart of a method for collaboratively constructing a point cloud map according to some embodiments of the present disclosure.
FIG. 2 shows a schematic diagram of an application scenario of collaboratively constructing a point cloud map according to some embodiments of the present disclosure.
FIG. 3 shows a schematic flowchart of a method for collaboratively constructing a point cloud map according to other embodiments of the present disclosure.
FIG. 4 shows a schematic diagram of a system for collaboratively constructing a point cloud map according to some embodiments of the present disclosure.
FIG. 5 shows a schematic diagram of a device for collaboratively constructing a point cloud map according to some embodiments of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below in conjunction with the accompanying drawings of the embodiments of the present disclosure.
Unless otherwise specified, descriptions such as "first" and "second" in the present disclosure are used to distinguish different objects, and do not indicate size, order, or the like.
FIG. 1 shows a schematic flowchart of a method for collaboratively constructing a point cloud map according to some embodiments of the present disclosure.
As shown in FIG. 1, the method of this embodiment includes steps 110-140.
In step 110, the device for collaboratively constructing a point cloud map acquires the point cloud data of the corresponding road sections respectively collected by multiple vehicles while moving on different road sections; different vehicles correspond to different road sections of a preset route, and any adjacent first road section and second road section are provided with an overlapping area.
The device for collaboratively constructing a point cloud map may be one of the multiple vehicles, or another device. A vehicle that collects point cloud data may be, for example, an autonomous vehicle, which can collect point cloud data with its own on-board sensors without additional sensors.
The collected point cloud data of each road section includes: the single-frame point clouds of the road section and the marker data corresponding to each single-frame point cloud. A single-frame point cloud of the road section includes: the point cloud data of the single-frame point cloud and the local pose information corresponding to it.
The point cloud data of a single-frame point cloud of each road section (also called the first point cloud data) includes: the position information of each point constituting the single-frame point cloud in a local coordinate system with the vehicle of the road section as the origin. The first point cloud data may be acquired by a point cloud data collection device, for example a lidar device. The local pose information corresponding to a single-frame point cloud of each road section is: the relative pose information of the current pose of the vehicle of the road section when collecting the single-frame point cloud with respect to the starting pose when collecting the initial single-frame point cloud of the road section. The local pose information corresponding to a single-frame point cloud may be acquired by a pose detection device, for example an inertial navigation device.
The second point cloud data of a single-frame point cloud can be determined from its first point cloud data and the corresponding local pose information. The second point cloud data of a single-frame point cloud of each road section includes: the position information of each point constituting the single-frame point cloud in a local coordinate system whose origin is the starting pose of the vehicle of the road section. For example, the local pose matrix corresponding to a single-frame point cloud is multiplied by the first point cloud data of the single-frame point cloud to obtain its second point cloud data, expressed as:

$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} R_{3\times 3} & P_{3\times 1} \\ 0_{1\times 3} & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$

where $\begin{bmatrix} R_{3\times 3} & P_{3\times 1} \\ 0_{1\times 3} & 1 \end{bmatrix}$ denotes the local pose matrix, $R_{3\times 3}$ denotes the 3×3 attitude (rotation) matrix, $P_{3\times 1}$ denotes the 3×1 position matrix, $(x, y, z)$ denotes the position information of a point in the first point cloud data, and $(x', y', z')$ denotes the position information of the corresponding point in the second point cloud data.
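As a hedged sketch of the pose formula above, the 4×4 local pose matrix can be applied to every point of a single frame at once by homogenizing the N×3 first point cloud data (NumPy version; the rotation angle and translation are illustrative values, not from the patent):

```python
import numpy as np

# Local pose of the frame: rotation R (here 30 degrees about z) and translation P.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
P = np.array([2.0, 0.5, 0.0])

pose = np.eye(4)          # assemble the 4x4 local pose matrix
pose[:3, :3] = R
pose[:3, 3] = P

# First point cloud data: N x 3 points in the vehicle-centred frame.
first = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.2]])

# Homogenize, left-multiply by the pose matrix, drop the trailing 1.
homo = np.hstack([first, np.ones((len(first), 1))])
second = (pose @ homo.T).T[:, :3]   # second point cloud data, start-pose frame
```

Each row of `second` is the same physical point expressed in the start-pose coordinate system.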
Based on the first point cloud data of the single-frame point clouds of a road section and the corresponding local pose information, or based on the second point cloud data of the single-frame point clouds of the road section, a local point cloud map of the road section can be constructed.
The marker data corresponding to a single-frame point cloud of each road section includes, for example, at least one of: the current global position information and heading information of the vehicle of the road section when collecting the single-frame point cloud, or an environment image captured by the vehicle when collecting the single-frame point cloud. The marker data is acquired by a marker data collection device; for example, the global position information and heading information may be acquired by a global positioning and navigation device carried on the vehicle, and the environment image by a camera device.
In some embodiments, after acquiring the point cloud data of the road sections, the device for collaboratively constructing a point cloud map clusters the points of each road section according to the point cloud data of the road section, for example using the second point cloud data of the single-frame point clouds of the road section (that is, the position information of each point constituting the single-frame point cloud in the local coordinate system whose origin is the starting pose of the vehicle of the road section), and then removes, from the point cloud data of the road section, the clustered point sets that match the shape of a preset moving object. Noise formed by moving objects such as pedestrians and moving vehicles is thus removed from the point cloud data.
Alternatively, the point cloud data of each road section sent by each vehicle to the device for collaboratively constructing a point cloud map may be point cloud data from which the point sets of preset moving objects have already been removed. The removal operation includes: the vehicle of each road section clusters the points of the road section based on the point cloud data of the road section, for example using the second point cloud data of the single-frame point clouds of the road section, and then removes, from the point cloud data of the road section, the clustered point sets that match the shape of a preset moving object. This removes the noise formed by moving objects such as pedestrians and moving vehicles from the point cloud data, and also reduces the amount of information transmitted between the vehicles and the device for collaboratively constructing a point cloud map.
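The moving-object filtering described above might be sketched as follows, assuming a naive O(n²) distance-threshold clustering and a hypothetical pedestrian-sized bounding-box test; the clustering radius and size thresholds are illustrative, not from the patent:

```python
import numpy as np

def euclidean_clusters(points, radius=0.5):
    """Naive O(n^2) single-linkage clustering by distance threshold."""
    n = len(points)
    labels = -np.ones(n, dtype=int)
    current = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        stack = [i]
        labels[i] = current
        while stack:                      # flood-fill the connected component
            j = stack.pop()
            near = np.where((np.linalg.norm(points - points[j], axis=1) < radius)
                            & (labels == -1))[0]
            labels[near] = current
            stack.extend(near.tolist())
        current += 1
    return labels

def looks_like_pedestrian(cluster):
    """Hypothetical shape test: bounding box roughly person-sized."""
    extent = cluster.max(axis=0) - cluster.min(axis=0)
    return extent[0] < 1.0 and extent[1] < 1.0 and 1.2 < extent[2] < 2.2

def remove_moving_objects(points):
    """Drop every cluster whose shape matches the preset moving object."""
    labels = euclidean_clusters(points)
    keep = np.ones(len(points), dtype=bool)
    for lbl in np.unique(labels):
        mask = labels == lbl
        if looks_like_pedestrian(points[mask]):
            keep[mask] = False
    return points[keep]

wall = np.array([[0.4 * i, 0.0, 0.0] for i in range(10)])     # static structure
person = np.array([[20.0, 20.0, 0.4 * k] for k in range(5)])  # pedestrian-sized
filtered = remove_moving_objects(np.vstack([wall, person]))
```

A production system would replace the bounding-box heuristic with a learned or model-based shape matcher, but the cluster-then-filter flow is the same.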
In step 120, the device for collaboratively constructing a point cloud map uses the point cloud data of the overlapping area to determine a transformation matrix for transforming the point cloud data of the second road section into the point cloud data of the first road section.
The transformation matrix is determined by registering a single-frame point cloud of the overlapping area in the first road section and a single-frame point cloud of the overlapping area in the second road section that have the same marker data.
For example, the Iterative Closest Point (ICP) algorithm is used for registration between different point clouds. Specifically, the single-frame point cloud of the overlapping area in the first road section and the single-frame point cloud of the overlapping area in the second road section that have the same marker data are taken as the target point cloud and the source point cloud respectively; matching point pairs between the source point cloud and the target point cloud are obtained; a rotation-translation matrix is constructed based on the matching point pairs; the source point cloud is transformed with the rotation-translation matrix; the error between the transformed source point cloud and the target point cloud is computed; if the error is greater than a preset error, the above steps are performed iteratively; if the error is not greater than the preset error, the iteration stops, and the rotation-translation matrix corresponding to that error is determined as the transformation matrix for transforming the point cloud data of the second road section into the point cloud data of the first road section.
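The ICP loop described above — match, construct a rotation-translation matrix, transform, check the error — can be sketched in NumPy as follows. This is a simplified version with brute-force nearest-neighbour matching; a production implementation would use a KD-tree, outlier rejection, and the marker data to select which frames to register:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation + translation mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, max_iters=50, tol=1e-6):
    """Register source onto target; returns (4x4 transform, final mean error)."""
    src = source.copy()
    T = np.eye(4)
    prev_err = np.inf
    for _ in range(max_iters):
        # 1. match: brute-force nearest target point for every source point
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[d.argmin(axis=1)]
        # 2. construct and apply the rotation-translation for these pairs
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
        # 3. compute the error and stop once it no longer shrinks past tol
        err = float(np.mean(np.linalg.norm(src - matched, axis=1)))
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return T, err

# Demo: a cloud offset by a known translation is registered back onto itself.
target = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [2, 2, 0],
                   [3, 0, 2], [0, 3, 1], [2, 0, 3], [1, 3, 3], [3, 3, 3]], float)
offset = np.array([0.05, -0.03, 0.02])
T, err = icp(target + offset, target)
```

The recovered `T` is the transformation matrix of step 120: it maps the source (second road section) cloud into the target (first road section) coordinate system.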
Different single-frame point clouds are determined to have the same marker data when the global position information and heading information of the corresponding vehicles are the same; or when the similarity of the feature vectors of the corresponding environment images is greater than a preset value; or when both conditions hold.
In step 130, the device for collaboratively constructing a point cloud map transforms the point cloud data of the second road section with the transformation matrix.
The transformation method includes, for example: the local pose information corresponding to each single-frame point cloud of the second road section is first multiplied by the transformation matrix, and the result is then multiplied by the point cloud data (the first point cloud data) of the single-frame point cloud, thereby transforming the point cloud data of the second road section into the coordinate system of the first road section:

$$\begin{bmatrix} x'' \\ y'' \\ z'' \\ N \end{bmatrix} = T_{4\times 4} \begin{bmatrix} R_{3\times 3} & P_{3\times 1} \\ 0_{1\times 3} & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$

where $\begin{bmatrix} R_{3\times 3} & P_{3\times 1} \\ 0_{1\times 3} & 1 \end{bmatrix}$ denotes the local pose information corresponding to the single-frame point cloud, $T_{4\times 4}$ denotes the transformation matrix, $(x, y, z)$ denotes the position information of a point in the first point cloud data of the single-frame point cloud, $(x'', y'', z'')$ denotes the position information of the corresponding point in the transformed point cloud data, and $N$ is an intermediate product of the computation with no practical meaning.
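A hedged sketch of this step (illustrative matrices, assuming NumPy): for each frame of the second road section, the transformation matrix `T` is composed with that frame's local pose before being applied to the frame's first point cloud data:

```python
import numpy as np

def transform_section(T, frame_poses, frame_clouds):
    """Map every frame of a section into the first section's coordinate system.

    T            : 4x4 matrix registering section 2 onto section 1
    frame_poses  : list of 4x4 local pose matrices, one per frame
    frame_clouds : list of (N_i, 3) arrays of first point cloud data
    """
    out = []
    for pose, cloud in zip(frame_poses, frame_clouds):
        M = T @ pose                                   # compose once per frame
        homo = np.hstack([cloud, np.ones((len(cloud), 1))])
        out.append((M @ homo.T).T[:, :3])
    return out

# Illustrative data: identity pose for frame 0, a 1 m forward shift for frame 1,
# and T translating the whole section 100 m along x.
T = np.eye(4); T[0, 3] = 100.0
pose0 = np.eye(4)
pose1 = np.eye(4); pose1[0, 3] = 1.0
clouds = [np.array([[0.0, 0.0, 0.0]]), np.array([[0.0, 2.0, 0.0]])]
result = transform_section(T, [pose0, pose1], clouds)
```

Composing `T @ pose` once per frame and applying the product to the raw points keeps the per-point work to a single matrix-vector multiply.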
In step 140, the device for collaboratively constructing a point cloud map splices the point cloud data of the first road section with the transformed point cloud data of the second road section to construct a point cloud map of the preset route.
As described above, the point cloud data of the second road section has been transformed into the coordinate system of the first road section; therefore, splicing the second point cloud data of the single-frame point clouds of the first road section with the transformed point cloud data of the single-frame point clouds of the second road section constructs a point cloud map of the first and second road sections.
If the preset route is divided into more than two road sections, the point cloud maps of the other adjacent road sections are constructed in the same way as those of the first and second road sections, which is not repeated here.
For any adjacent first and second road sections of the preset route, a point cloud map of the preset route can be constructed with the LeGO-LOAM tool based on the point cloud data of the first road section and the transformed point cloud data of the second road section. That is, feeding the point cloud data of the road sections into the LeGO-LOAM tool constructs a point cloud map of the entire route.
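Once both sections are in the same coordinate system, the splicing step itself reduces to concatenation, typically followed by a voxel downsample to thin duplicated points in the overlapping area. A minimal sketch (LeGO-LOAM itself does far more, including feature extraction and loop closure; the voxel size here is an illustrative assumption):

```python
import numpy as np

def voxel_downsample(points, voxel=0.2):
    """Keep one representative point (the centroid) per occupied voxel."""
    keys = np.floor(points / voxel).astype(np.int64)
    merged = {}
    for key, p in zip(map(tuple, keys), points):
        merged.setdefault(key, []).append(p)
    return np.array([np.mean(ps, axis=0) for ps in merged.values()])

def splice_sections(section1, section2_transformed, voxel=0.2):
    """Concatenate two already-aligned sections and thin the overlap."""
    return voxel_downsample(np.vstack([section1, section2_transformed]), voxel)

s1 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
s2 = np.array([[0.01, 0.0, 0.0], [5.0, 5.0, 5.0]])  # already in section 1's frame
merged = splice_sections(s1, s2)
```

The two near-duplicate points from the overlap fall into the same voxel and are merged into one, while distinct points are kept.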
In the above scheme in which multiple vehicles collaboratively construct a point cloud map, different vehicles collect the point cloud data of different road sections of a preset route; the point cloud data of the overlapping area of two adjacent road sections is used to determine the transformation matrix between the point cloud data of the two adjacent road sections; and the point cloud data of the different road sections is spliced based on the transformed point cloud data to construct a point cloud map of the entire route. This improves the efficiency of constructing the point cloud map of the entire route and is applicable to business scenarios in which point cloud maps are constructed online. In addition, each vehicle collects the point cloud data of only one road section rather than the entire route, which reduces the precision requirements on the map data collection equipment carried by each vehicle.
FIG. 2 shows a schematic diagram of an application scenario of collaboratively constructing a point cloud map according to some embodiments of the present disclosure.
Four vehicles collaboratively construct the point cloud map of the rectangular route shown in FIG. 2. Each vehicle is responsible for collecting the point cloud data of one of the road sections, and adjacent road sections are all provided with overlapping areas. One of the four vehicles serves as the device for collaboratively constructing a point cloud map and is called the main vehicle; the other vehicles are designated car No. 1, car No. 2 and car No. 3. The four vehicles move in the same running direction on their respective road sections while collecting the point cloud data of those sections. Cars No. 1, No. 2 and No. 3 transmit the collected point cloud data of their road sections to the main vehicle. Based on that data and the point cloud data of its own road section, the main vehicle can construct a point cloud map of the entire rectangular route according to the method of the embodiment shown in FIG. 1.
For example, the road section of the main vehicle and the road section of car No. 1 are adjacent. The point cloud data of car No. 1's road section is transformed into the coordinates of the main vehicle's road section, and splicing the point cloud data of the main vehicle's road section with the transformed point cloud data of car No. 1's road section constructs a point cloud map of those two road sections. The road section of the main vehicle and the road section of car No. 3 are likewise adjacent; transforming the point cloud data of car No. 3's road section into the coordinates of the main vehicle's road section and splicing accordingly constructs a point cloud map of the main vehicle's road section and car No. 3's road section. Since the point cloud data of the main vehicle's road section and car No. 1's road section have been unified under the same coordinate system, these two road sections are treated as one merged road section, which is adjacent to the road section of car No. 2. The point cloud data of car No. 2's road section is transformed into the coordinates of the merged road section (that is, the main vehicle's road section), and splicing the point cloud data of the merged road section with the transformed point cloud data of car No. 2's road section constructs a point cloud map of the merged road section and car No. 2's road section. By similar processing, a point cloud map of the entire rectangular route is finally constructed.
FIG. 3 shows a schematic flowchart of a method for collaboratively constructing a point cloud map according to other embodiments of the present disclosure.
As shown in FIG. 3, the method of this embodiment further includes steps 350-380 on the basis of the embodiment shown in FIG. 1.
In step 350, assuming that a road section (the third road section) of the preset route has changed, any vehicle collects the point cloud data of the changed third road section and sends it to the device for collaboratively constructing a point cloud map. The device acquires the point cloud data of the changed third road section collected by the vehicle; the third road section and an unchanged road section of the preset route (the fourth road section) are provided with an overlapping area.
In step 360, the device for collaboratively constructing a point cloud map uses the point cloud data of the overlapping area to determine a transformation matrix for transforming the point cloud data of the third road section into the point cloud data of the fourth road section.
The transformation matrix for transforming the point cloud data of the third road section into the point cloud data of the fourth road section is determined in the same way as the transformation matrix for transforming the point cloud data of the second road section into the point cloud data of the first road section; refer to the foregoing for the specific method.
In step 370, the device for collaboratively constructing a point cloud map transforms the point cloud data of the third road section with the transformation matrix. The point cloud data of the third road section is transformed with the transformation matrix in the same way as the point cloud data of the second road section; refer to the foregoing for the specific method.
In step 380, the device for collaboratively constructing a point cloud map splices the point cloud data of the fourth road section with the transformed point cloud data of the third road section to construct a point cloud map of the preset route; refer to the foregoing for the splicing method.
Thus, when a local point cloud map changes, only the point cloud map of that local road section needs to be rebuilt, which improves the update efficiency of the point cloud map of the entire route.
FIG. 4 shows a schematic diagram of a system for collaboratively constructing a point cloud map according to some embodiments of the present disclosure.
As shown in FIG. 4, the system of this embodiment includes multiple vehicles 410 and a device 420 for collaboratively constructing a point cloud map. The device for collaboratively constructing a point cloud map may be, for example, one of the multiple vehicles, called the main vehicle, or another device.
Each of the multiple vehicles 410 is configured to collect the point cloud data of its corresponding road section while moving on that road section and to send the collected point cloud data to the device for collaboratively constructing a point cloud map; different vehicles correspond to different road sections of the preset route.
The device 420 for collaboratively constructing a point cloud map is configured to perform the method for collaboratively constructing a point cloud map of any of the embodiments.
For example, the device for collaboratively constructing a point cloud map acquires the point cloud data of the corresponding road sections respectively collected by multiple vehicles while moving on different road sections; different vehicles correspond to different road sections of the preset route, and any adjacent first road section and second road section are provided with an overlapping area. The point cloud data of the overlapping area is used to determine the transformation matrix for transforming the point cloud data of the second road section into the point cloud data of the first road section; the point cloud data of the second road section is transformed with the transformation matrix; and the point cloud data of the first road section and the transformed point cloud data of the second road section are spliced to construct a point cloud map of the preset route.
As another example, the device for collaboratively constructing a point cloud map acquires point cloud data of a third road section of the preset route that has changed, collected by any vehicle; the third road section and a fourth road section of the preset route that has not changed are provided with an overlapping area. The point cloud data of the overlapping area is used to determine the transformation matrix for transforming the point cloud data of the third road section into the point cloud data of the fourth road section; the point cloud data of the third road section is transformed with the transformation matrix; and the point cloud data of the fourth road section and the transformed point cloud data of the third road section are spliced to construct a point cloud map of the preset route.
FIG. 5 shows a schematic diagram of a device for collaboratively constructing a point cloud map according to some embodiments of the present disclosure.
As shown in FIG. 5, the device 420 for collaboratively constructing a point cloud map of this embodiment includes:
a communication device 421, configured to acquire the point cloud data of the corresponding road sections respectively collected by multiple vehicles while moving on different road sections;
a memory 422; and
a processor 423 coupled to the memory, the processor being configured to perform, based on instructions stored in the memory, the method for collaboratively constructing a point cloud map of any of the embodiments; refer to the foregoing for details, which are not repeated here.
The communication device 421 may be, for example, a wireless communication device, such as a wireless local area network communication device, or a mobile network communication device, such as a fifth-generation or fourth-generation mobile network communication device.
The memory 422 may include, for example, a system memory, a fixed non-volatile storage medium, and the like. The system memory stores, for example, an operating system, application programs, a boot loader (Boot Loader), and other programs.
In some embodiments, the device 420 for collaboratively constructing a point cloud map may be, for example, one of the multiple vehicles, in which case the device 420 further includes:
a point cloud data collection device 424, configured to collect each single-frame point cloud of the corresponding road section while moving on that road section, for example a lidar device, where the point cloud data collected by the lidar device includes not only the position information of each point but also information such as reflected signal strength;
a pose detection device 425, configured to detect the relative pose information of the current pose corresponding to each single-frame point cloud with respect to the starting pose corresponding to the initial single-frame point cloud, for example an inertial navigation device; and
a marker data collection device 426, configured to collect the marker data corresponding to each single-frame point cloud at the same time the single-frame point cloud is collected, for example a global positioning and navigation device, a camera device, and the like.
The above devices 421-426 may be connected, for example, via a bus 427.
The present disclosure also proposes a non-transitory computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the steps of the method for collaboratively constructing a point cloud map of any of the embodiments.
Those skilled in the art should understand that the embodiments of the present disclosure may be provided as a method, a system, or a computer program product. Therefore, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present disclosure may take the form of a computer program product implemented on one or more non-transitory computer-readable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer program code.
The present disclosure is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present disclosure. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing equipment to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing equipment produce a device for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing equipment to operate in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing equipment, so that a series of operational steps are executed on the computer or other programmable equipment to produce computer-implemented processing, such that the instructions executed on the computer or other programmable equipment provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
The above are only preferred embodiments of the present disclosure and are not intended to limit it. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present disclosure shall fall within the protection scope of the present disclosure.

Claims (16)

  1. A method for collaboratively constructing a point cloud map, comprising:
    acquiring point cloud data of corresponding road sections respectively collected by multiple vehicles while moving on different road sections, wherein different vehicles correspond to different road sections of a preset route, and any adjacent first road section and second road section are provided with an overlapping area;
    using the point cloud data of the overlapping area to determine a transformation matrix for transforming the point cloud data of the second road section into the point cloud data of the first road section;
    transforming the point cloud data of the second road section with the transformation matrix; and
    splicing the point cloud data of the first road section with the transformed point cloud data of the second road section to construct a point cloud map of the preset route.
  2. The method according to claim 1, wherein
    the point cloud data of each road section comprises: the single-frame point clouds of the road section and marker data corresponding to each single-frame point cloud;
    wherein determining the transformation matrix for transforming the point cloud data of the second road section into the point cloud data of the first road section comprises:
    determining the transformation matrix by registering a single-frame point cloud of the overlapping area in the first road section and a single-frame point cloud of the overlapping area in the second road section that have the same marker data.
  3. The method according to claim 2, wherein determining the transformation matrix for transforming the point cloud data of the second road section into the point cloud data of the first road section comprises:
    taking the single-frame point cloud of the overlapping area in the first road section and the single-frame point cloud of the overlapping area in the second road section that have the same marker data as a target point cloud and a source point cloud respectively, and obtaining matching point pairs between the source point cloud and the target point cloud;
    constructing a rotation-translation matrix based on the matching point pairs;
    transforming the source point cloud with the rotation-translation matrix;
    computing an error between the transformed source point cloud and the target point cloud;
    if the error is greater than a preset error, continuing to perform the above steps iteratively; if the error is not greater than the preset error, stopping the iteration and determining the rotation-translation matrix corresponding to the error at the time the iteration stops as the transformation matrix for transforming the point cloud data of the second road section into the point cloud data of the first road section.
  4. The method according to claim 2, wherein the marker data corresponding to each single-frame point cloud of each road section comprises at least one of the following:
    current global position information and heading information of the vehicle of the road section when collecting the single-frame point cloud of the road section;
    an environment image captured by the vehicle of the road section when collecting the single-frame point cloud of the road section.
  5. The method according to claim 4, wherein
    different single-frame point clouds are determined to have the same marker data when the similarity of the feature vectors of their corresponding environment images is greater than a preset value; or,
    different single-frame point clouds are determined to have the same marker data when the global position information and heading information of the vehicles corresponding to the different single-frame point clouds are the same.
  6. The method according to claim 2, wherein
    each single-frame point cloud of the second road section comprises: the point cloud data of the single-frame point cloud and local pose information corresponding to the single-frame point cloud, wherein the point cloud data of a single-frame point cloud of the second road section comprises: position information of each point constituting the single-frame point cloud in a local coordinate system with the vehicle of the second road section as the origin, and the local pose information corresponding to a single-frame point cloud of the second road section is: relative pose information of the current pose of the vehicle of the second road section when collecting the single-frame point cloud with respect to the starting pose when collecting the initial single-frame point cloud of the second road section;
    transforming the point cloud data of the second road section with the transformation matrix comprises: multiplying the local pose information corresponding to each single-frame point cloud of the second road section first by the transformation matrix, and then by the point cloud data of the single-frame point cloud.
  7. The method according to claim 1, further comprising:
    clustering the points of each road section according to the point cloud data of the road section;
    removing, from the point cloud data of each road section, the clustered point sets that match the shape of a preset moving object.
  8. The method according to claim 1, wherein
    the point cloud data of each road section is point cloud data from which the point sets of preset moving objects have been removed, wherein the removal operation comprises: the vehicle of each road section clusters the points of the road section based on the point cloud data of the road section, and removes, from the point cloud data of the road section, the clustered point sets that match the shape of a preset moving object.
  9. The method according to claim 1, wherein constructing the point cloud map of the preset route comprises:
    constructing the point cloud map of the preset route with LeGO-LOAM, based on the point cloud data of the first road section and the transformed point cloud data of the second road section.
  10. The method according to claim 1, further comprising:
    acquiring point cloud data of a third road section of the preset route that has changed, collected by any vehicle, wherein the third road section and a fourth road section of the preset route that has not changed are provided with an overlapping area;
    using the point cloud data of the overlapping area to determine a transformation matrix for transforming the point cloud data of the third road section into the point cloud data of the fourth road section;
    transforming the point cloud data of the third road section with the transformation matrix;
    splicing the point cloud data of the fourth road section with the transformed point cloud data of the third road section to construct the point cloud map of the preset route.
  11. A system for collaboratively constructing a point cloud map, comprising:
    multiple vehicles, each vehicle being configured to collect the point cloud data of its corresponding road section while moving on that road section and to send the collected point cloud data to a device for collaboratively constructing a point cloud map, wherein different vehicles correspond to different road sections of a preset route;
    the device for collaboratively constructing a point cloud map, configured to perform the method for collaboratively constructing a point cloud map according to any one of claims 1-10.
  12. The system according to claim 11, wherein
    the device for collaboratively constructing a point cloud map is one of the multiple vehicles.
  13. A device for collaboratively constructing a point cloud map, comprising:
    a communication device, configured to acquire point cloud data of corresponding road sections respectively collected by multiple vehicles while moving on different road sections;
    a memory; and
    a processor coupled to the memory, the processor being configured to perform, based on instructions stored in the memory, the method for collaboratively constructing a point cloud map according to any one of claims 1-10.
  14. The device according to claim 13, wherein
    the device for collaboratively constructing a point cloud map is one of the multiple vehicles;
    the device for collaboratively constructing a point cloud map further comprises:
    a point cloud data collection device, configured to collect each single-frame point cloud of the corresponding road section while moving on that road section;
    a pose detection device, configured to detect relative pose information of the current pose corresponding to each single-frame point cloud with respect to the starting pose corresponding to the initial single-frame point cloud;
    a marker data collection device, configured to collect the marker data corresponding to each single-frame point cloud at the same time the single-frame point cloud is collected.
  15. The device according to claim 14, wherein
    the marker data collection device comprises: a global positioning and navigation device; or comprises: a camera device.
  16. A non-transitory computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method for collaboratively constructing a point cloud map according to any one of claims 1-10.
PCT/CN2021/092280 2020-06-17 2021-05-08 Method, device and system for collaboratively constructing a point cloud map WO2021254019A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21824786.4A EP4170581A1 (en) 2020-06-17 2021-05-08 Method, device and system for cooperatively constructing point cloud map
US18/002,092 US20230351686A1 (en) 2020-06-17 2021-05-08 Method, device and system for cooperatively constructing point cloud map

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010553190.5A CN111681172A (zh) 2020-06-17 2020-06-17 Method, device and system for collaboratively constructing a point cloud map
CN202010553190.5 2020-06-17

Publications (1)

Publication Number Publication Date
WO2021254019A1 true WO2021254019A1 (zh) 2021-12-23

Family

ID=72435948

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/092280 WO2021254019A1 (zh) 2020-06-17 2021-05-08 Method, device and system for collaboratively constructing a point cloud map

Country Status (4)

Country Link
US (1) US20230351686A1 (zh)
EP (1) EP4170581A1 (zh)
CN (1) CN111681172A (zh)
WO (1) WO2021254019A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115049946A (zh) * 2022-06-10 2022-09-13 安徽农业大学 Method and device for discriminating wheat field growth state based on point cloud transformation

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111681172A (zh) * 2020-06-17 2020-09-18 北京京东乾石科技有限公司 Method, device and system for collaboratively constructing a point cloud map
US20220179424A1 (en) * 2020-12-09 2022-06-09 Regents Of The University Of Minnesota Systems and methods for autonomous navigation on sidewalks in various conditions
US20230213633A1 (en) * 2022-01-06 2023-07-06 GM Global Technology Operations LLC Aggregation-based lidar data alignment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160163114A1 (en) * 2014-12-05 2016-06-09 Stmicroelectronics S.R.L. Absolute rotation estimation including outlier detection via low-rank and sparse matrix decomposition
CN109781129A (zh) * 2019-01-28 2019-05-21 重庆邮电大学 Road surface safety detection system and method based on inter-vehicle communication
CN110795523A (zh) * 2020-01-06 2020-02-14 中智行科技有限公司 Vehicle positioning method and apparatus, and intelligent vehicle
CN111208492A (zh) * 2018-11-21 2020-05-29 长沙智能驾驶研究院有限公司 Method and apparatus for calibrating extrinsic parameters of a vehicle-mounted lidar, computer device and storage medium
CN111681172A (zh) * 2020-06-17 2020-09-18 北京京东乾石科技有限公司 Method, device and system for collaboratively constructing a point cloud map

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103455144B (zh) * 2013-08-22 2017-04-12 深圳先进技术研究院 In-vehicle human-computer interaction system and method


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115049946A (zh) * 2022-06-10 2022-09-13 安徽农业大学 Method and device for discriminating wheat field growth state based on point cloud transformation
CN115049946B (zh) * 2022-06-10 2023-09-26 安徽农业大学 Method and device for discriminating wheat field growth state based on point cloud transformation

Also Published As

Publication number Publication date
EP4170581A1 (en) 2023-04-26
CN111681172A (zh) 2020-09-18
US20230351686A1 (en) 2023-11-02

Similar Documents

Publication Publication Date Title
WO2021254019A1 (zh) Method, device and system for collaboratively constructing a point cloud map
JP6862409B2 (ja) Method and apparatus for map generation and positioning of a moving body
CN108765487B (zh) Method, apparatus, device and computer-readable storage medium for reconstructing a three-dimensional scene
EP3505869B1 (en) Method, apparatus, and computer readable storage medium for updating electronic map
US11176701B2 (en) Position estimation system and position estimation method
JP6557973B2 (ja) Map generation device, map generation method and program
EP3671623B1 (en) Method, apparatus, and computer program product for generating an overhead view of an environment from a perspective image
JP2015148601A (ja) Systems and methods for mapping, localization, and pose correction
JP2016157197A (ja) Self-position estimation device, self-position estimation method and program
CN114332360A (zh) Collaborative three-dimensional mapping method and system
US11699234B2 (en) Semantic segmentation ground truth correction with spatial transformer networks
CN114459471A (zh) Method and apparatus for determining positioning information, electronic device and storage medium
CN113838129A (zh) Method, apparatus and system for obtaining pose information
WO2020118623A1 (en) Method and system for generating an environment model for positioning
CN117635721A (zh) Target positioning method, related system and storage medium
US11514588B1 (en) Object localization for mapping applications using geometric computer vision techniques
CN115345944A (zh) Method and apparatus for determining extrinsic calibration parameters, computer device and storage medium
CN112184906B (zh) Method and apparatus for constructing a three-dimensional model
CN111784798B (zh) Map generation method and apparatus, electronic device and storage medium
JP7229111B2 (ja) Map update data generation device and map update data generation method
CN111860084B (zh) Image feature matching and positioning method and apparatus, and positioning system
Wong et al. Position interpolation using feature point scale for decimeter visual localization
CN111461982B (zh) Method and apparatus for splicing point clouds
CN117730239A (zh) Device and method for navigation
KR20240081654A (ko) Apparatus and method for selective data transmission for constructing facility information

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21824786

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021824786

Country of ref document: EP

Effective date: 20230117