CN113639745B - Point cloud map construction method, device and storage medium - Google Patents

Point cloud map construction method, device and storage medium

Info

Publication number
CN113639745B
CN113639745B
Authority
CN
China
Prior art keywords
point cloud
image
information
map
environment
Prior art date
Legal status
Active
Application number
CN202110887413.6A
Other languages
Chinese (zh)
Other versions
CN113639745A (en)
Inventor
吴新开
刘云浩
霍向
宋涛
何山
马亚龙
Current Assignee
Beijing Lobby Technology Co ltd
Beihang University
Original Assignee
Beijing Lobby Technology Co ltd
Beihang University
Priority date
Filing date
Publication date
Application filed by Beijing Lobby Technology Co ltd, Beihang University filed Critical Beijing Lobby Technology Co ltd
Priority to CN202110887413.6A priority Critical patent/CN113639745B/en
Publication of CN113639745A publication Critical patent/CN113639745A/en
Application granted granted Critical
Publication of CN113639745B publication Critical patent/CN113639745B/en
Legal status: Active

Links

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/10 - Navigation by using measurements of speed or acceleration
    • G01C21/12 - Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 - Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 - Inertial navigation combined with non-inertial navigation instruments
    • G01C21/1652 - Inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a point cloud map construction method, device, and storage medium for improving the accuracy of point cloud map construction. The method comprises: collecting first information at a first position; after the first information has been collected, moving to a second position and collecting second information at the second position; locally matching the second information with the first information to obtain a local point cloud map; and deleting the dynamic target point clouds from the local point cloud map to obtain a laser point cloud map. The first information comprises a first environment point cloud image and a first environment image; the second information comprises a second environment point cloud image and a second environment image. The application also provides a corresponding point cloud map construction device and storage medium.

Description

Point cloud map construction method, device and storage medium
Technical Field
The present application relates to the field of unmanned vehicles, and in particular, to a method and apparatus for constructing a point cloud map, and a storage medium.
Background
Mapping is attracting increasing attention as an important technology in the field of unmanned vehicles. Existing mapping techniques, however, do not consider removing dynamic targets from the point cloud map, so the mapping results are not accurate enough.
Disclosure of Invention
To address this technical problem, embodiments of the present application provide a point cloud map construction method, device, and storage medium that sense dynamic targets in the environment and delete their point clouds from the point cloud map, so as to obtain a more accurate environment point cloud map.
In a first aspect, a method for constructing a point cloud map provided by an embodiment of the present application includes:
collecting first information of a first position;
after the first information is acquired, moving to a second position, and acquiring second information of the second position;
locally matching the second information with the first information to obtain a local point cloud map;
deleting the dynamic target point cloud in the local point cloud map to obtain a laser point cloud map;
the first information comprises a first environment point cloud image and a first environment image;
the second information includes a second ambient point cloud image and a second ambient image.
Further, when the point cloud map construction is started, the first position is a starting position;
When acquisition at the next position starts after the first position has been acquired, the second position is the current acquisition position and the first position is the previous acquisition position.
Preferably, the acquisition device is turned on, the acquisition device comprising one or a combination of the following:
a laser radar;
binocular cameras;
a depth camera;
an inertial measurement unit IMU;
wheel speed odometer.
Preferably, after the whole mapping area has been covered, acquisition ends, and the laser point cloud maps of all collection points are spliced to obtain an environment point cloud map of the whole mapping area.
Preferably, acquiring the first information of the first position includes:
establishing a first environment point cloud image according to laser radar data acquired by the laser radar;
and establishing a first environment image according to the image data acquired by the binocular camera and the depth camera.
Acquiring the second information of the second position includes:
establishing a second environment point cloud image according to the laser radar data acquired by the laser radar;
and establishing a second environment image according to the image data acquired by the binocular camera and the depth camera.
Further, the method further comprises the following steps:
uploading the laser radar data acquired by the laser radar to a host computer system;
and uploading the image data acquired by the binocular camera and the depth camera to the host computer system.
Preferably, the deleting the dynamic target point cloud in the local point cloud map includes:
matching the image of the first position or the second position with the point cloud;
and deleting the point cloud corresponding to the dynamic target identified in the image from the point cloud map.
Preferably, the point clouds characterizing dynamic targets include:
point clouds characterizing pedestrians, non-motor vehicles, or motor vehicles.
Preferably, the matching the image of the first location or the second location with the point cloud includes:
and matching the image information with the point cloud information according to the conversion matrix.
With the point cloud map construction method provided by the application, local matching is performed on the environment image information and environment point cloud image information of two consecutive acquisition points, the dynamic targets are identified, and their point clouds are deleted from the local point cloud map, which improves mapping accuracy. A map containing dynamic targets seriously interferes with subsequent positioning and navigation based on that map; a map built by this method contains no dynamic targets and is therefore more accurate.
In a second aspect, an embodiment of the present application further provides a device for constructing a point cloud map, including:
the acquisition module is configured to acquire first information of a first position, move to a second position after the first information acquisition is completed, and acquire second information of the second position;
the local matching module is configured to locally match the second information with the first information to obtain a local point cloud map;
the dynamic target deleting module is configured to delete the dynamic target point cloud in the local point cloud map to obtain a laser point cloud map;
the first information comprises a first environment point cloud image and a first environment image;
the second information includes a second ambient point cloud image and a second ambient image.
In a third aspect, an embodiment of the present application further provides a device for constructing a point cloud map, including: a memory, a processor, and a user interface;
the memory is used for storing a computer program;
the user interface is used for realizing interaction with a user;
the processor is used for reading the computer program in the memory; when the processor executes the computer program, the point cloud map construction method described above is implemented.
In a fourth aspect, an embodiment of the present application further provides a processor readable storage medium, where a computer program is stored in the processor readable storage medium, and when the processor executes the computer program, the method for constructing a point cloud map provided by the present application is implemented.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of point cloud map construction provided in this embodiment;
fig. 2 is a schematic diagram of environmental image and environmental point cloud image acquisition provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of first and second position information acquisition according to an embodiment of the present application;
FIG. 4 is a schematic diagram of dynamic object recognition according to an embodiment of the present application;
fig. 5 is a schematic diagram of a deleted dynamic target point cloud according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a device for constructing a point cloud map according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of another device for constructing a point cloud map according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Some words appearing hereinafter are explained:
1. In the embodiments of the present application, the term "and/or" describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean that A exists alone, that A and B both exist, or that B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
2. The term "plurality" in embodiments of the present application means two or more, and other adjectives are similar.
It should be noted that, the display sequence of the embodiments of the present application only represents the sequence of the embodiments, and does not represent the advantages or disadvantages of the technical solutions provided by the embodiments.
Example 1
Referring to Fig. 1, which is a schematic diagram of the point cloud map construction method provided by an embodiment of the present application, the method includes steps S101 to S104:
s101, acquiring first information of a first position;
s102, after the first information acquisition is completed, moving to a second position, and acquiring second information of the second position;
s103, carrying out local matching on the second information and the first information to obtain a local point cloud map;
s104, deleting the dynamic target point cloud in the local point cloud map to obtain a laser point cloud map;
the first information comprises a first environment point cloud image and a first environment image;
the second information includes a second ambient point cloud image and a second ambient image.
Specifically, the first environment point cloud image and the second environment point cloud image are established from the laser radar data acquired by the laser radar, and the first environment image and the second environment image are established from the image data acquired by the binocular camera and the depth camera.
It should be noted that the method is applied to an unmanned vehicle for constructing a point cloud map, and the unmanned vehicle carries at least one of a laser radar, a binocular camera, a depth camera, an Inertial Measurement Unit (IMU) and a wheel speed odometer. The laser radar is used for collecting laser radar data, the binocular camera and the depth camera are used for collecting image data, the IMU is used for collecting inertial measurement data, and the wheel speed odometer is used for collecting mileage data, as shown in fig. 2.
When point cloud map construction starts, the unmanned vehicle is located at a starting position, and the starting position is the first position. After the first information of the first position has been acquired, the vehicle moves to the second position and acquires the second information of the second position. That is, when point cloud map construction starts, the first position is the starting position; when acquisition at the next position starts after the first position has been acquired, the second position is the current acquisition position and the first position is the previous acquisition position.
As a preferred example, the laser radar, binocular camera, depth camera, Inertial Measurement Unit (IMU) and wheel speed odometer carried on the unmanned vehicle are turned on before mapping starts.
It should be noted that the second position in this embodiment is the next acquisition position after the first position has been acquired, and the acquisition position moves through the whole mapping area. After the whole mapping area has been covered, acquisition ends, and the laser point cloud maps of all acquisition points are spliced to obtain the environment point cloud map of the whole mapping area, as sketched below.
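The following minimal Python sketch illustrates this splicing step. The use of the Open3D library, the file names, and the voxel size are assumptions for illustration (the patent does not name a library); each per-position laser point cloud map is assumed to have already been transformed into one global frame by the local matching.

```python
# Minimal sketch of map splicing, assuming each collection point's laser
# point cloud map is already expressed in a common global frame.
# Open3D, the file names and the voxel size are illustrative assumptions.
import open3d as o3d

def splice_maps(map_files, voxel_size=0.05):
    merged = o3d.geometry.PointCloud()
    for path in map_files:
        merged += o3d.io.read_point_cloud(path)  # accumulate per-position maps
    # Voxel downsampling removes duplicated points where maps overlap.
    return merged.voxel_down_sample(voxel_size)

environment_map = splice_maps(["pos_000.pcd", "pos_001.pcd", "pos_002.pcd"])
o3d.io.write_point_cloud("environment_map.pcd", environment_map)
```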
As a preferred example, movement to each new position in the whole mapping area may be performed manually, or the unmanned vehicle may move automatically according to a preset rule; this embodiment does not limit the manner of movement.
As a preferred example, in step S101 of the present embodiment, acquiring the first information of the first position includes:
establishing a first environment point cloud image according to laser radar data acquired by the laser radar;
and establishing a first environment image according to the image data acquired by the binocular camera and the depth camera.
As a preferred example, in step S102 of the present embodiment, acquiring the second information of the second position includes:
establishing a second environment point cloud image according to the laser radar data acquired by the laser radar;
and establishing a second environment image according to the image data acquired by the binocular camera and the depth camera.
As a preferred example, after the unmanned vehicle collects the first information or the second information, the laser radar data collected by the laser radar is uploaded to a host computer system, the image data acquired by the binocular camera and the depth camera is uploaded to the host computer system, and the data acquired by the other devices is uploaded to the host computer system as well.
As a preferred example, in step S103 of the present embodiment, locally matching the second information with the first information to obtain the local point cloud map includes:
the local matching process is to register the point clouds acquired by the laser radars at the front and rear positions and transform the point clouds at different positions to the same position through the information of the overlapped part. As a preferable example, two sets of point clouds are unified under a unified coordinate system by utilizing an ICP algorithm (Iterative Closest Point, nearest neighbor iterative algorithm), registration of the point clouds is performed, and a point cloud map obtained after registration becomes an environment local map.
As a preferred example, in step S104 of the present embodiment, deleting the dynamic target point clouds from the local point cloud map to obtain the laser point cloud map includes:
matching the image of the first position or the second position with the point cloud;
and deleting the point cloud corresponding to the dynamic target identified in the image from the point cloud map. As shown in fig. 4, two dynamic objects are identified, and as shown in fig. 5, the point cloud map after the dynamic objects identified in fig. 4 are deleted.
As a preferred example, matching the image of the first position or the second position with the point cloud comprises: matching the image information with the point cloud information according to the conversion matrix. That is, the first environment image is matched to the first environment point cloud image according to the conversion matrix, or the second environment image is matched to the second environment point cloud image according to the conversion matrix; a minimal sketch of this matching and of the deletion step is given below.
Preferably, the conversion matrix is determined by the following steps:
A1: Calibration.
The sensors are calibrated before the point cloud map is constructed, that is, the camera image information is calibrated against the radar point cloud information: in an environment with well-defined feature points, the camera is turned on to collect image information and the radar is turned on to collect point cloud information; a worker then selects feature positions on the image and marks the corresponding positions in the point cloud information. As a preferred example, the feature points on the image may be extracted with the SIFT image feature point extraction algorithm or a similar method.
A2: a transformation matrix is determined.
The conversion relation between the marked point cloud positions and the selected image positions is the conversion matrix, which represents a direct mapping between the three-dimensional point cloud and the pixels of the visual image; that is, through the conversion matrix, a position in the image can be mapped to the corresponding point cloud position in the point cloud information. One way to compute this matrix is sketched below.
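As one way to realize steps A1 and A2, the hand-picked 3D point cloud positions and the corresponding 2D image feature positions can be fed to a PnP solver. The sketch below uses OpenCV's solvePnP; the camera intrinsic matrix K is an assumed, separately calibrated input, and the patent does not prescribe this particular solver.

```python
# Minimal sketch of determining the conversion matrix from the picked
# correspondences of steps A1/A2. OpenCV's solvePnP and the availability of
# the camera intrinsics K are assumptions; >= 4 correspondences are needed.
import cv2
import numpy as np

def determine_conversion_matrix(pts_3d, pts_2d, K):
    """pts_3d: (N, 3) picked point cloud positions; pts_2d: (N, 2) pixels."""
    ok, rvec, tvec = cv2.solvePnP(pts_3d.astype(np.float64),
                                  pts_2d.astype(np.float64), K, None)
    if not ok:
        raise RuntimeError("PnP failed; check the picked correspondences")
    R, _ = cv2.Rodrigues(rvec)        # rotation vector -> 3x3 rotation matrix
    extrinsic = np.hstack([R, tvec])  # 3x4 lidar-to-camera transform
    return K @ extrinsic              # 3x4 conversion (projection) matrix
```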
Preferably, the point clouds characterizing dynamic targets include:
a point cloud characterizing a pedestrian or a non-motor vehicle or a motor vehicle. I.e. the dynamic object comprises one or a combination of the following: pedestrians, motor vehicles, and non-motor vehicles. Other objects moving within the mapping area may also be included, such as ground reptiles, flying objects in the air, and other moving objects. As shown in fig. 3, the dynamic target is a vehicle.
A specific example of the present embodiment is given below:
s1, starting a laser radar, a binocular camera, a depth camera, an Inertial Measurement Unit (IMU) and a wheel speed odometer carried on an unmanned vehicle at a starting position, and starting to acquire laser radar data, image data, inertial Measurement Unit (IMU) data and wheel speed odometer data;
s2, uploading laser radar data acquired by a laser radar to an upper computer system by an unmanned vehicle, establishing an environment point cloud image, and uploading image data acquired by a binocular camera and a depth camera to the upper computer system by the unmanned vehicle, and establishing an environment image;
s3, after the establishment of the environment point cloud image and the environment image of the unmanned aerial vehicle at the initial position is completed, a worker observes the surrounding environment through a camera carried by the unmanned aerial vehicle to remotely control the unmanned aerial vehicle to move to the next position;
s4, continuously collecting laser radar data, image data, inertial Measurement Unit (IMU) data and wheel speed odometer data by using a laser radar, a binocular camera, a depth camera, an Inertial Measurement Unit (IMU) and a wheel speed odometer carried on the unmanned vehicle, and uploading the laser radar data, the image data, the Inertial Measurement Unit (IMU) data and the wheel speed odometer data to an upper computer;
s5, the upper computer establishes an environment point cloud image of a new position according to the newly acquired laser radar data, and establishes an environment image of the new position according to the newly acquired image data;
s6, locally matching the environmental point cloud image and the environmental image of the new position with the environmental point cloud image and the environmental image of the previous position to form a local point cloud map;
s7, deleting the point clouds represented by the dynamic targets such as pedestrians, non-motor vehicles, motor vehicles and the like on the local point cloud map which are subjected to the local matching by utilizing the image information, and deleting the dynamic point clouds by utilizing the point cloud information of the new position and the last position;
s8, storing the laser point cloud map after the dynamic point cloud is deleted;
According to the method, local matching is performed using the environment image information and environment point cloud image information of consecutive acquisition points, the dynamic targets are then identified, and their point clouds are deleted from the local point cloud map, which improves mapping accuracy. A map containing dynamic targets seriously interferes with subsequent positioning and navigation based on that map; a map built by this method contains no dynamic targets and is therefore more accurate.
Example two
Based on the same inventive concept, the embodiment of the application also provides a device for constructing a point cloud map, as shown in fig. 6, the device comprises:
the acquisition module 601 is configured to acquire first information of a first position, move to a second position after the first information acquisition is completed, and acquire second information of the second position;
the local matching module 602 is configured to locally match the second information with the first information, so as to obtain a local point cloud map;
a dynamic target deleting module 603 configured to delete a dynamic target point cloud in the local point cloud map to obtain a laser point cloud map;
the first information comprises a first environment point cloud image and a first environment image;
the second information includes a second ambient point cloud image and a second ambient image.
In this embodiment, when the point cloud map construction is started, the first position is a starting position;
and when the next position starts to be acquired after the first position is acquired, the second position is the current acquisition position, and the first position is the last acquisition position.
As a preferred example, the acquisition module 601 is further configured to turn on an acquisition device before the point cloud mapping starts, the acquisition device comprising one or a combination of the following:
a laser radar;
binocular cameras;
a depth camera;
an inertial measurement unit IMU;
wheel speed odometer.
As a preferred example, the acquisition module 601 is further configured to establish a first environment point cloud image according to the laser radar data acquired by the laser radar, establish a first environment image according to the image data acquired by the binocular camera and the depth camera, establish a second environment point cloud image according to the laser radar data acquired by the laser radar, and establish a second environment image according to the image data acquired by the binocular camera and the depth camera.
As a preferred example, the acquisition module 601 is further configured to upload the laser radar data acquired by the laser radar to a host computer system, and to upload the image data acquired by the binocular camera and the depth camera to the host computer system.
As a preferred example, the local matching module 602 is further configured to locally match the second information with the first information according to the following method:
and (3) carrying out registration processing on point clouds acquired by the laser radars at the front position and the rear position (namely the first position and the second position), and transforming the point clouds at different positions to the same position through information of the overlapped part. As a preferable example, two groups or point clouds are unified under a unified coordinate system by utilizing an ICP algorithm (Iterative Closest Point, nearest neighbor iterative algorithm), registration of the point clouds is performed, and a point cloud map obtained after registration becomes an environment local map.
As a preferred example, the dynamic target deletion module 603 is further configured to:
matching the image of the first position or the second position with the point cloud;
and deleting the point cloud corresponding to the dynamic target identified in the image from the point cloud map.
Wherein matching the image of the first position or the second position with the point cloud comprises:
matching the image information with the point cloud information according to the conversion matrix. That is, the first environment image is matched to the first environment point cloud image according to the conversion matrix, or the second environment image is matched to the second environment point cloud image according to the conversion matrix.
Preferably, the dynamic target deletion module 603 is further configured to determine the conversion matrix according to the following steps:
A1: Calibration.
The sensors are calibrated before the point cloud map is constructed, that is, the camera image information is calibrated against the radar point cloud information: in an environment with well-defined feature points, the camera is turned on to collect image information and the radar is turned on to collect point cloud information; a worker then selects feature positions on the image and marks the corresponding positions in the point cloud information. As a preferred example, the feature points on the image may be extracted with the SIFT image feature point extraction algorithm or a similar method.
A2: a transformation matrix is determined.
The conversion relation between the marked point cloud positions and the selected image positions is the conversion matrix, which represents a direct mapping between the three-dimensional point cloud and the pixels of the visual image; that is, through the conversion matrix, a position in the image can be mapped to the corresponding point cloud position in the point cloud information.
Wherein the point clouds characterizing dynamic targets include point clouds characterizing pedestrians, non-motor vehicles, or motor vehicles.
As a preferred example, the dynamic target deletion module 603 is further configured to:
after the whole map building area is collected, the collection is finished, and the laser point cloud maps of all the collection points are subjected to map splicing to obtain an environment point cloud map of the whole map building area.
It should be noted that the acquisition module 601 provided in this embodiment implements all the functions included in steps S101 and S102 of the first embodiment, solves the same technical problems, and achieves the same technical effects, and is not described here again.
It should be noted that the local matching module 602 provided in this embodiment implements all the functions included in step S103 of the first embodiment, solves the same technical problems, and achieves the same technical effects, and is not described here again.
It should be noted that the dynamic target deletion module 603 provided in this embodiment implements all the functions included in step S104 of the first embodiment, solves the same technical problems, and achieves the same technical effects, and is not described here again.
It should be noted that the device provided in the second embodiment and the method provided in the first embodiment belong to the same inventive concept, solve the same technical problems, and achieve the same technical effects; the device provided in the second embodiment can implement all the methods of the first embodiment, and the common points are not repeated.
Example III
Based on the same inventive concept, an embodiment of the present application further provides a device for constructing a point cloud map; as shown in Fig. 7, the device includes a memory 702, a processor 701, and a user interface 703;
the memory 702 is used for storing a computer program;
the user interface 703 is configured to interact with a user;
the processor 701 is configured to read the computer program in the memory 702; when executing the computer program, the processor 701 implements:
collecting first information of a first position;
after the first information is acquired, moving to a second position, and acquiring second information of the second position;
locally matching the second information with the first information to obtain a local point cloud map;
deleting the dynamic target point cloud in the local point cloud map to obtain a laser point cloud map;
the first information comprises a first environment point cloud image and a first environment image;
the second information includes a second ambient point cloud image and a second ambient image.
Preferably, when the point cloud map construction is started, the first position is a starting position;
and when the next position starts to be acquired after the first position is acquired, the second position is the current acquisition position, and the first position is the last acquisition position.
As a preferred example, the processor 701 implements:
starting acquisition equipment before starting point cloud map construction, wherein the acquisition equipment comprises one or a combination of the following components:
a laser radar;
binocular cameras;
a depth camera;
an inertial measurement unit IMU;
wheel speed odometer.
As a preferred example, the processor 701 implements:
after the whole map building area is collected, the collection is finished, and the laser point cloud maps of all the collection points are subjected to map splicing to obtain an environment point cloud map of the whole map building area.
As a preferred example, the processor 701 implements:
establishing a first environment point cloud image according to the laser radar data acquired by the laser radar, and establishing a first environment image according to the image data acquired by the binocular camera and the depth camera; establishing a second environment point cloud image according to the laser radar data acquired by the laser radar, and establishing a second environment image according to the image data acquired by the binocular camera and the depth camera.
Further, the laser radar data acquired by the laser radar is uploaded to a host computer system, and the image data acquired by the binocular camera and the depth camera is uploaded to the host computer system.
As a preferred example, the processor 701 implements:
identifying a point cloud representing a dynamic target on the local point cloud map according to the second environment image and the first environment image; and deleting the point cloud representing the dynamic target in the local point cloud map according to the second environment point cloud map and the first environment point cloud map.
Wherein the point clouds characterizing dynamic targets include point clouds characterizing pedestrians, non-motor vehicles, or motor vehicles.
As a preferred example, the processor 701 implements:
locally matching the second information with the first information according to the following method:
and (3) carrying out registration processing on point clouds acquired by the laser radars at the front position and the rear position (namely the first position and the second position), and transforming the point clouds at different positions to the same position through information of the overlapped part. As a preferable example, two groups or point clouds are unified under a unified coordinate system by utilizing an ICP algorithm (Iterative Closest Point, nearest neighbor iterative algorithm), registration of the point clouds is performed, and a point cloud map obtained after registration becomes an environment local map.
As a preferred example, the processor 701 implements: matching the image of the first position or the second position with the point cloud;
and deleting the point cloud corresponding to the dynamic target identified in the image from the point cloud map.
Wherein matching the image of the first position or the second position with the point cloud comprises:
matching the image information with the point cloud information according to the conversion matrix. That is, the first environment image is matched to the first environment point cloud image according to the conversion matrix, or the second environment image is matched to the second environment point cloud image according to the conversion matrix.
When executing the computer program, the processor 701 determines the conversion matrix according to the following steps:
A1: Calibration.
The sensors are calibrated before the point cloud map is constructed, that is, the camera image information is calibrated against the radar point cloud information: in an environment with well-defined feature points, the camera is turned on to collect image information and the radar is turned on to collect point cloud information; a worker then selects feature positions on the image and marks the corresponding positions in the point cloud information. As a preferred example, the feature points on the image may be extracted with the SIFT image feature point extraction algorithm or a similar method.
A2: a transformation matrix is determined.
The conversion relation between the marked point cloud positions and the selected image positions is the conversion matrix, which represents a direct mapping between the three-dimensional point cloud and the pixels of the visual image; that is, through the conversion matrix, a position in the image can be mapped to the corresponding point cloud position in the point cloud information.
In Fig. 7, the bus architecture may comprise any number of interconnected buses and bridges, linking together one or more processors represented by the processor 701 and various circuits of the memory represented by the memory 702. The bus architecture may also link together various other circuits, such as peripheral devices, voltage regulators, and power management circuits, which are well known in the art and are therefore not described further herein. The bus interface provides an interface. The processor 701 is responsible for managing the bus architecture and general processing, and the memory 702 may store data used by the processor 701 when performing operations.
The processor 701 may be a CPU, an ASIC, an FPGA, or a CPLD, and may also adopt a multi-core architecture.
When the processor 701 executes the computer program stored in the memory 702, any of the point cloud map construction methods of the first embodiment is implemented.
It should be noted that the device provided in the third embodiment and the method provided in the first embodiment belong to the same inventive concept, solve the same technical problems, and achieve the same technical effects; the device provided in the third embodiment can implement all the methods of the first embodiment, and the common points are not repeated.
The application also provides a processor-readable storage medium storing a computer program; when a processor executes the computer program, any of the point cloud map construction methods of the first embodiment is implemented.
It should be noted that, in the embodiment of the present application, the division of the units is schematic, which is merely a logic function division, and other division manners may be implemented in actual practice. In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (12)

1. The method for constructing the point cloud map is characterized by comprising the following steps of:
collecting first information of a first position;
after the first information is acquired, moving to a second position, and acquiring second information of the second position;
locally matching the second information with the first information to obtain a local point cloud map;
deleting the dynamic target point cloud in the local point cloud map to obtain a laser point cloud map;
the first information comprises a first environment point cloud image and a first environment image;
the second information comprises a second environment point cloud image and a second environment image;
the deleting the dynamic target point cloud in the local point cloud map comprises the following steps:
matching the image of the first position or the second position with the point cloud;
and deleting the point cloud corresponding to the dynamic target identified in the image from the point cloud map.
2. The method according to claim 1, characterized in that it comprises:
when the point cloud map construction is started, the first position is a starting position;
and when the next position starts to be acquired after the first position is acquired, the second position is the current acquisition position, and the first position is the last acquisition position.
3. The method of claim 2, wherein before the point cloud mapping is started, further comprising:
turning on an acquisition device comprising one or a combination of:
a laser radar;
binocular cameras;
a depth camera;
an inertial measurement unit IMU;
wheel speed odometer.
4. The method as recited in claim 2, further comprising:
after the whole map building area is collected, the collection is finished, and the laser point cloud maps of all the collection points are subjected to map splicing to obtain an environment point cloud map of the whole map building area.
5. A method according to claim 3, wherein said acquiring first information of the first location comprises:
establishing a first environment point cloud image according to laser radar data acquired by the laser radar;
and establishing a first environment image according to the image data acquired by the binocular camera and the depth camera.
6. A method according to claim 3, wherein said acquiring second information of the second location comprises:
establishing a second environment point cloud image according to the laser radar data acquired by the laser radar;
and establishing a second environment image according to the image data acquired by the binocular camera and the depth camera.
7. The method according to claim 5 or 6, further comprising:
uploading the laser radar data acquired by the laser radar to an upper computer system;
and uploading the image data acquired by the binocular camera and the depth camera to an upper computer system.
8. The method of claim 1, wherein the characterizing the point cloud of the dynamic object comprises:
a point cloud characterizing a pedestrian or a non-motor vehicle or a motor vehicle.
9. The method of claim 1, wherein matching the image of the first location or the second location to the point cloud comprises:
and matching the image information with the point cloud information according to the conversion matrix.
10. The device for constructing the point cloud map is characterized by comprising the following components:
the acquisition module is configured to acquire first information of a first position, move to a second position after the first information acquisition is completed, and acquire second information of the second position;
the local matching module is configured to locally match the second information with the first information to obtain a local point cloud map;
the dynamic target deleting module is configured to delete the dynamic target point cloud in the local point cloud map to obtain a laser point cloud map;
the first information comprises a first environment point cloud image and a first environment image;
the second information comprises a second environment point cloud image and a second environment image;
the deleting the dynamic target point cloud in the local point cloud map comprises the following steps:
matching the image of the first position or the second position with the point cloud;
and deleting the point cloud corresponding to the dynamic target identified in the image from the point cloud map.
11. The device for constructing the point cloud map is characterized by comprising a memory, a processor and a user interface;
the memory is used for storing a computer program;
the user interface is used for realizing interaction with a user;
the processor is configured to read a computer program in the memory, and when the processor executes the computer program, implement the method for constructing a point cloud map according to one of claims 1 to 9.
12. A processor-readable storage medium, characterized in that the processor-readable storage medium stores a computer program, which when executed by the processor implements the method of constructing a point cloud map according to one of claims 1 to 9.
CN202110887413.6A 2021-08-03 2021-08-03 Point cloud map construction method, device and storage medium Active CN113639745B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110887413.6A CN113639745B (en) 2021-08-03 2021-08-03 Point cloud map construction method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110887413.6A CN113639745B (en) 2021-08-03 2021-08-03 Point cloud map construction method, device and storage medium

Publications (2)

Publication Number Publication Date
CN113639745A CN113639745A (en) 2021-11-12
CN113639745B (en) 2023-10-20

Family

ID=78419450

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110887413.6A Active CN113639745B (en) 2021-08-03 2021-08-03 Point cloud map construction method, device and storage medium

Country Status (1)

Country Link
CN (1) CN113639745B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114646936A (en) * 2022-03-30 2022-06-21 北京洛必德科技有限公司 Point cloud map construction method and device and electronic equipment
CN115170630B (en) * 2022-06-30 2023-11-21 小米汽车科技有限公司 Map generation method, map generation device, electronic equipment, vehicle and storage medium
CN116385622B (en) * 2023-05-26 2023-09-12 腾讯科技(深圳)有限公司 Cloud image processing method, cloud image processing device, computer and readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109282822A (en) * 2018-08-31 2019-01-29 北京航空航天大学 Construct storage medium, the method and apparatus of navigation map
CN111508072A (en) * 2020-04-23 2020-08-07 东软睿驰汽车技术(上海)有限公司 Map construction method and device, electronic equipment and storage medium
CN111681163A (en) * 2020-04-23 2020-09-18 北京三快在线科技有限公司 Method and device for constructing point cloud map, electronic equipment and storage medium
CN111915657A (en) * 2020-07-08 2020-11-10 浙江大华技术股份有限公司 Point cloud registration method and device, electronic equipment and storage medium
CN111929699A (en) * 2020-07-21 2020-11-13 北京建筑大学 Laser radar inertial navigation odometer considering dynamic obstacles and mapping method and system
CN112652062A (en) * 2019-10-10 2021-04-13 北京京东乾石科技有限公司 Point cloud map construction method, device, equipment and storage medium
WO2021128297A1 (en) * 2019-12-27 2021-07-01 深圳市大疆创新科技有限公司 Method, system and device for constructing three-dimensional point cloud map
CN113168717A (en) * 2021-03-11 2021-07-23 华为技术有限公司 Point cloud matching method and device, navigation method and equipment, positioning method and laser radar

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108732584B (en) * 2017-04-17 2020-06-30 百度在线网络技术(北京)有限公司 Method and device for updating map
US20210180958A1 (en) * 2019-12-16 2021-06-17 Automotive Research & Testing Center Graphic information positioning system for recognizing roadside features and method using the same

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109282822A (en) * 2018-08-31 2019-01-29 北京航空航天大学 Construct storage medium, the method and apparatus of navigation map
CN112652062A (en) * 2019-10-10 2021-04-13 北京京东乾石科技有限公司 Point cloud map construction method, device, equipment and storage medium
WO2021128297A1 (en) * 2019-12-27 2021-07-01 深圳市大疆创新科技有限公司 Method, system and device for constructing three-dimensional point cloud map
CN111508072A (en) * 2020-04-23 2020-08-07 东软睿驰汽车技术(上海)有限公司 Map construction method and device, electronic equipment and storage medium
CN111681163A (en) * 2020-04-23 2020-09-18 北京三快在线科技有限公司 Method and device for constructing point cloud map, electronic equipment and storage medium
CN111915657A (en) * 2020-07-08 2020-11-10 浙江大华技术股份有限公司 Point cloud registration method and device, electronic equipment and storage medium
CN111929699A (en) * 2020-07-21 2020-11-13 北京建筑大学 Laser radar inertial navigation odometer considering dynamic obstacles and mapping method and system
CN113168717A (en) * 2021-03-11 2021-07-23 华为技术有限公司 Point cloud matching method and device, navigation method and equipment, positioning method and laser radar

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Lidar-based three-dimensional mapping and localization for mobile robots; 殷江; 林建德; 孔令华; 邹诚; 游通飞; 易定容; Journal of Fujian University of Technology (No. 04); 67-71 *

Also Published As

Publication number Publication date
CN113639745A (en) 2021-11-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant