CN115311337A - Point cloud registration method, device, equipment and storage medium - Google Patents

Point cloud registration method, device, equipment and storage medium

Info

Publication number
CN115311337A
CN115311337A
Authority
CN
China
Prior art keywords
point cloud
cloud data
dimensional
data set
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210977460.4A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Chengshi Wanglin Information Technology Co Ltd
Original Assignee
Beijing Chengshi Wanglin Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Chengshi Wanglin Information Technology Co Ltd filed Critical Beijing Chengshi Wanglin Information Technology Co Ltd
Priority to CN202210977460.4A
Publication of CN115311337A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/32: Determination of transform parameters for the alignment of images, i.e. image registration using correlation-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The embodiment of the application provides a point cloud registration method, apparatus, device, and storage medium. In the embodiment of the application, pose estimation for the three-dimensional point cloud data sets of the acquisition points is performed on the basis of the door body information in each space object, combined with the door body connection information between the three-dimensional point cloud data sets. Specifically, two-dimensional door point information is detected in the two-dimensional live-action image and converted into three-dimensional door point information; the relative pose information between the three-dimensional point cloud data sets is then estimated from the three-dimensional door point information together with the door body connection information. The whole process does not require a sufficient number of feature matching pairs: point cloud registration is performed from the three-dimensional door point information corresponding to the door body information, which improves the accuracy of the determined relative pose information.

Description

Point cloud registration method, device, equipment and storage medium
Technical Field
The present application relates to the field of three-dimensional reconstruction technologies, and in particular, to a point cloud registration method, apparatus, device, and storage medium.
Background
Point cloud registration is the process of computing the relative pose (a rigid or Euclidean transformation) between two point clouds and transforming the source point cloud (source cloud) into the same coordinate system as the target point cloud (target cloud). At present, to obtain an ideal point cloud registration result, a sufficient number of feature matching pairs can be obtained through feature descriptors such as the Scale-Invariant Feature Transform (SIFT), Oriented FAST and Rotated BRIEF (ORB), or the Signature of Histograms of OrienTations (SHOT), and point cloud registration is then performed on these matching pairs to determine the relative pose between the two point clouds. However, solving the relative pose by feature matching yields low accuracy when good matches are scarce, which affects the final point cloud registration result.
Disclosure of Invention
Aspects of the present disclosure provide a point cloud registration method, apparatus, device, and storage medium, so as to improve the accuracy of point cloud registration.
The embodiment of the application provides a point cloud registration method, which comprises the following steps: acquiring a three-dimensional point cloud data set and a two-dimensional live-action image collected at each acquisition point in a plurality of space objects, each three-dimensional point cloud data set and each two-dimensional live-action image containing at least one piece of door body information of the space object to which it belongs, wherein the plurality of space objects belong to a target physical space and one or more acquisition points are arranged in each space object; converting the two-dimensional door point information in each two-dimensional live-action image into the three-dimensional point cloud data set corresponding to that image, according to the conversion relation between the radar coordinate system and the camera coordinate system, to obtain the three-dimensional door point information of the data set, the two-dimensional door point information being the intersection-point information of the corner points in the door body information with the ground; and determining first relative pose information between the three-dimensional point cloud data sets of the acquisition points according to the three-dimensional door point information of each data set and the door body connection information between the data sets, so as to realize point cloud registration between the three-dimensional point cloud data sets of the acquisition points.
The embodiment of the present application further provides a point cloud registration apparatus, comprising an acquisition module, a conversion module, and a determination module. The acquisition module is configured to acquire a three-dimensional point cloud data set and a two-dimensional live-action image collected at each acquisition point in a plurality of space objects, each three-dimensional point cloud data set and each two-dimensional live-action image containing at least one piece of door body information of the space object to which it belongs, wherein the plurality of space objects belong to a target physical space and one or more acquisition points are arranged in each space object. The conversion module is configured to convert the two-dimensional door point information in each two-dimensional live-action image into the three-dimensional point cloud data set corresponding to that image, according to the conversion relation between the radar coordinate system and the camera coordinate system, to obtain the three-dimensional door point information of the data set, the two-dimensional door point information being the intersection-point information of the corner points in the door body information with the ground. The determination module is configured to determine first relative pose information between the three-dimensional point cloud data sets of the acquisition points according to the three-dimensional door point information of each data set and the door body connection information between the data sets, so as to realize point cloud registration between the three-dimensional point cloud data sets of the acquisition points.
The embodiment of the present application further provides a point cloud registration device, comprising a memory and a processor. The memory is configured to store a computer program; the processor, coupled to the memory, is configured to execute the computer program to implement the steps of the point cloud registration method provided in the embodiments of the present application.
Embodiments of the present application further provide a computer-readable storage medium storing a computer program, which, when executed by a processor, causes the processor to implement the steps in the point cloud registration method provided by the embodiments of the present application.
In the embodiment of the application, pose estimation for the three-dimensional point cloud data sets of the acquisition points is performed on the basis of the door body information in each space object, combined with the door body connection information between the three-dimensional point cloud data sets. Specifically, two-dimensional door point information is detected in the two-dimensional live-action image and converted into three-dimensional door point information; the relative pose information between the three-dimensional point cloud data sets is then estimated from the three-dimensional door point information together with the door body connection information. The whole process does not require a sufficient number of feature matching pairs: point cloud registration is performed from the three-dimensional door point information corresponding to the door body information, which improves the accuracy of the determined relative pose information.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flow chart of a point cloud registration method provided in an exemplary embodiment of the present application;
fig. 2 is a schematic structural diagram of a point cloud registration apparatus according to an exemplary embodiment of the present application;
fig. 3 is a schematic structural diagram of a point cloud registration apparatus according to an exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only a few embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
To address the low accuracy of point cloud registration in the prior art, the embodiment of the application estimates the pose of the three-dimensional point cloud data set of each acquisition point on the basis of the door body information in the space objects, combined with the door body connection information between the three-dimensional point cloud data sets. Specifically, two-dimensional door point information is detected in the two-dimensional live-action image and converted into three-dimensional door point information, and the relative pose information between the three-dimensional point cloud data sets is estimated from the three-dimensional door point information together with the door body connection information.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a point cloud registration method according to an exemplary embodiment of the present disclosure. As shown in fig. 1, the method includes:
101. acquiring a three-dimensional point cloud data set and a two-dimensional live-action image collected at each acquisition point in a plurality of space objects, each three-dimensional point cloud data set and each two-dimensional live-action image containing at least one piece of door body information of the space object to which it belongs, wherein the plurality of space objects belong to a target physical space and one or more acquisition points are arranged in each space object;
102. converting two-dimensional door point information in each two-dimensional live-action image into a three-dimensional point cloud data set corresponding to the two-dimensional live-action image according to a conversion relation between a radar coordinate system and a camera coordinate system to obtain three-dimensional door point information of the three-dimensional point cloud data set, wherein the two-dimensional door point information is intersection point information of corner points and the ground in the door body information;
103. and determining first relative pose information between the three-dimensional point cloud data sets of the acquisition points according to the three-dimensional door point information of each three-dimensional point cloud data set and the door body connection information between the three-dimensional point cloud data sets of the acquisition points, so as to realize point cloud registration between the three-dimensional point cloud data sets of the acquisition points.
In the present embodiment, the target physical space refers to a specific spatial region that comprises a plurality of space objects; in other words, the plurality of space objects constitute the target physical space. For example, if the target physical space is a house, the space objects included in it may be a kitchen, a bedroom, a living room, a bathroom, and so on. One or more acquisition points can be set in each space object, and the specific number of acquisition points can be determined according to the size and shape of the space object or the placement of objects in the physical space.
In this embodiment, a three-dimensional point cloud data set may be collected at each acquisition point using a laser radar (lidar), which captures the space object to which the acquisition point belongs. A lidar is a system that detects the spatial structure of the target physical space by emitting laser beams. Its working principle is to transmit detection signals (laser beams) at each acquisition point toward objects in the target physical space (such as walls, doors, or windows), then compare the received signals (echoes) reflected from the objects with the transmitted signals to obtain information about the objects, such as distance, direction, height, speed, posture, and shape. When a laser beam strikes the surface of an object, the reflected beam carries direction and distance information. When the beam is scanned along a trajectory and the reflected laser-point information is recorded during scanning, an extremely fine scan yields a large number of laser points, which together form a three-dimensional point cloud data set.
A camera may be used to capture the two-dimensional live-action image. The form of the image depends on the camera: with a panoramic camera the two-dimensional live-action image is a panorama, and with a fisheye camera it is a fisheye image.
The installation positions of the camera and the laser radar are not limited. For example, there may be a certain angle between them in the horizontal direction, such as 90, 180, or 270 degrees, and a certain distance between them in the vertical direction, such as 0 cm, 1 cm, or 5 cm. The camera and the laser radar can also be fixed on a pan-tilt head mounted on a bracket and rotate with it; during the rotation, the laser radar acquires the three-dimensional point cloud data set of the space object at the acquisition point, and the camera captures the corresponding two-dimensional live-action image. The conversion relation between the radar coordinate system and the camera coordinate system can be obtained from the installation positions of the camera and the laser radar.
In the present embodiment, each space object includes at least one piece of door body information. For example, a living room may include three pieces of door body information: one shared with the master bedroom, one shared with the secondary bedroom, and one shared with the bathroom; the master bedroom, secondary bedroom, and bathroom each include one piece of door body information. Accordingly, the three-dimensional point cloud data set acquired by the laser radar and the two-dimensional live-action image captured by the camera both include at least one piece of door body information of the space object.
The two-dimensional live-action image contains two-dimensional door point information, namely the intersection points of the corner points in the door body information with the ground. Normally a door body has four corner points, two of which meet the ground; that is, the two-dimensional door point information corresponds to two corner points of the door body information.
In this embodiment, door and window detection may be performed on each two-dimensional live-action image to obtain the two-dimensional door point information it contains, for example by a target detection algorithm such as, but not limited to, the Faster Region-based Convolutional Neural Network (Faster R-CNN), the You Only Look Once (YOLO) model, or the Single Shot MultiBox Detector (SSD).
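Concretely, once a detector returns a door bounding box, the two bottom corners of the box can serve as the two-dimensional door points (the corner-ground intersections described above). A minimal sketch, where the helper name and the axis-aligned box format are our assumptions, not part of the patent:

```python
def door_points_from_box(box):
    """box: (x_min, y_min, x_max, y_max) door bounding box in pixel
    coordinates, as produced by a detector such as YOLO or SSD.
    The two bottom corners approximate where the door frame meets
    the floor, i.e. the two two-dimensional door points."""
    x_min, y_min, x_max, y_max = box
    return [(x_min, y_max), (x_max, y_max)]
```

This treats the detector's box bottom edge as lying on the floor line, which is a reasonable first approximation for an upright door.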
After the two-dimensional door point information in the two-dimensional live-action image is detected, it can be converted from the camera coordinate system into the radar coordinate system to obtain the three-dimensional door point information in the three-dimensional point cloud data set corresponding to that image (that is, the data set collected at the same acquisition point as the image). Specifically, according to the conversion relation between the radar coordinate system and the camera coordinate system, the two-dimensional door point information in each two-dimensional live-action image is converted into the corresponding three-dimensional point cloud data set to obtain its three-dimensional door point information.
For example, when the two-dimensional live-action image is a panorama, let the coordinates of a two-dimensional door point in the image coordinate system be (c, r). According to the conversion relation between the image coordinate system and the spherical coordinate system, the point is converted into the spherical coordinate system, giving a unit direction Pb = (xb, yb, zb). Assuming the camera height is hc, the point is converted from the spherical coordinate system into the camera coordinate system by scaling the direction so that it reaches the ground plane: Pc = (hc / yb) * Pb = (hc / yb) * (xb, yb, zb). Assuming the calibration (extrinsic) transform between the camera coordinate system and the radar coordinate system is Tm, with rotation matrix Rm and translation vector tm, the coordinates of the three-dimensional door point in the radar coordinate system are Pl = Rm * Pc + tm = (hc / yb) * Rm * Pb + tm.
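The chain above can be sketched as follows for an equirectangular panorama. The pixel-to-sphere mapping and the axis convention (y pointing down toward the floor) are our assumptions, since the patent does not fix them:

```python
import numpy as np

def panorama_pixel_to_lidar(c, r, width, height, hc, Rm, tm):
    """Convert a 2D door point (c, r) in an equirectangular panorama into
    lidar coordinates: pixel -> unit-sphere direction Pb -> camera frame
    (scaled so the ray hits the floor plane at camera height hc) ->
    lidar frame via the calibration rotation Rm and translation tm."""
    lon = (c / width) * 2.0 * np.pi - np.pi       # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (r / height) * np.pi      # latitude in [-pi/2, pi/2]
    # unit bearing Pb = (xb, yb, zb), with y pointing down toward the floor
    Pb = np.array([np.cos(lat) * np.sin(lon),
                   -np.sin(lat),
                   np.cos(lat) * np.cos(lon)])
    Pc = (hc / Pb[1]) * Pb                        # floor point: Pc = hc/yb * Pb
    return Rm @ Pc + tm                           # Pl = Rm * Pc + tm
```

For instance, with identity extrinsics and hc = 1.5 m, a pixel three quarters of the way down the image column facing forward maps to a floor point 1.5 m below and 1.5 m ahead of the camera.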
In this embodiment, it is considered that point cloud registration between two three-dimensional point cloud data sets involves a rotation or translation transformation, so the registration problem is in fact a nonlinear, non-convex optimization with a large number of local extrema. To obtain an ideal point cloud registration result, a sufficient number of feature matching pairs can be obtained through feature descriptors such as SIFT, ORB, and SHOT, and the relative pose between the two data sets is then solved from them. However, this approach may be limited by the small number of available feature matching pairs, so the accuracy of the solved relative pose information is low.
A target physical space includes a plurality of door bodies, and each door body connects two space objects. Once every door body of a space object has established a connection with another space object, that space object no longer needs further connections; that is, its three-dimensional point cloud data set does not need to be registered against the other three-dimensional point cloud data sets again. On this basis, in this embodiment the relative pose between three-dimensional point cloud data sets is estimated from the three-dimensional door point information corresponding to the door body information, combined with the door body connection relations between the data sets; this is not limited by the number of feature matching pairs and improves the accuracy of pose information estimation. Specifically, the first relative pose information between the three-dimensional point cloud data sets of the acquisition points is determined according to the three-dimensional door point information of each three-dimensional point cloud data set and the door body connection information between the data sets.
The door body connection information embodies the connection relations between the door bodies of one space object and other space objects, each space object including one or more pieces of door body information. For example, suppose the target physical space includes a living room, a master bedroom, a secondary bedroom, and a bathroom; the living room contains three door bodies A1, A2, and A3, the master bedroom contains door body A1, the secondary bedroom contains door body A2, and the bathroom contains door body A3. The door body connection information can then state that the living room is connected with the master bedroom through door body A1, with the secondary bedroom through door body A2, and with the bathroom through door body A3. Equivalently, it can be expressed at the point cloud level: the three-dimensional point cloud data set E1 corresponding to the living room is connected with the data set E2 corresponding to the master bedroom through door body A1, with the data set E3 corresponding to the secondary bedroom through door body A2, and with the data set E4 corresponding to the bathroom through door body A3.
The door body connection information may be preset; for example, it may include the connection relation of every piece of door body information in the target physical space. Alternatively, it may be established while point cloud registration is performed: after each registration of two three-dimensional point cloud data sets, the door body connection information between them is established, and once all door bodies of a three-dimensional point cloud data set M1 have established connection information with door bodies in other data sets, M1 no longer participates in the subsequent estimation of relative pose information, which reduces the computation of pose estimation. In other words, the three-dimensional point cloud data sets that can still participate in pose estimation are determined from the door body connection information between the data sets, and the first relative pose information between them is determined from the three-dimensional door point information of those data sets.
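The bookkeeping described here amounts to a simple filter over the door body connection information. The data layout below (a mapping from each data set to its door identifiers, plus the set of doors already connected) is an assumed illustration:

```python
def eligible_candidates(doors_of, connected):
    """doors_of: mapping of each three-dimensional point cloud data set to the
    set of door identifiers it contains. connected: door identifiers for which
    door body connection information has already been established.
    A data set keeps participating in pose estimation only while at least one
    of its doors is still unconnected."""
    return sorted(s for s, doors in doors_of.items() if doors - connected)
```

With doors_of = {"M1": {"A1"}, "G1": {"A1", "A2"}} and connected = {"A1"}, the set M1 drops out (all its doors are connected) while G1 remains a candidate.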
For example, if the door body connection information between the three-dimensional point cloud data sets indicates that space object F1 and space object F2 share the same door body information, the method may take the two-dimensional live-action image B1 corresponding to F1 and the two-dimensional live-action image B2 corresponding to F2, where door X in image B1 corresponds to door Y in image B2; obtain the two-dimensional door point information in B1 and B2; convert it into three-dimensional door point information in the three-dimensional point cloud data sets C1 and C2, respectively; and determine the first relative pose information between C1 and C2 from the three-dimensional door point information in C1 and C2.
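Since a shared door contributes two floor door points in each scan, and indoor scans are typically gravity-aligned, the relative pose between two data sets sharing a door can be estimated as a planar rotation plus translation from that single point pair. A minimal sketch; the gravity-aligned assumption and the 2D reduction are ours, not stated by the patent:

```python
import numpy as np

def pose_from_door_points(src, dst):
    """src, dst: (2, 2) arrays holding the two floor door points (x, z) of the
    same physical door as seen from two scans. Returns (R, t) such that
    dst ~= R @ p + t for each corresponding point p in src, assuming both
    scans share the gravity direction so only yaw and translation remain."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    v_src = src[1] - src[0]                      # door edge direction, scan 1
    v_dst = dst[1] - dst[0]                      # door edge direction, scan 2
    theta = np.arctan2(v_dst[1], v_dst[0]) - np.arctan2(v_src[1], v_src[0])
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = dst.mean(axis=0) - R @ src.mean(axis=0)  # align the door midpoints
    return R, t
```

Note that a door seen from opposite sides has its left and right door points swapped, so in practice both orderings would be tried and the one with the lower residual kept.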
As another example, suppose the target physical space involves three-dimensional point cloud data sets G1, G2, and G3, and the door body connection information indicates that G1 and G2 share the same door body information, G1 and G3 also share the same door body information, and all door bodies of G2 have already established connection relations with door bodies in the other data sets. Then the first relative pose information between G1 and G3 can be determined from the three-dimensional door point information of G1 and the three-dimensional door point information of G3.
Optionally, after the first relative pose information between the three-dimensional point cloud data sets of the acquisition points is determined, point cloud fusion may be performed on the data sets according to that first relative pose information to obtain the three-dimensional point cloud data set corresponding to the target physical space. Alternatively, the first relative pose information may be used as the initial relative pose between the data sets; the data sets of the acquisition points are then precisely registered using an Iterative Closest Point (ICP) algorithm or a Normal Distributions Transform (NDT) algorithm, and fused on the basis of the pose information obtained from the precise registration to obtain the three-dimensional point cloud data set corresponding to the target physical space.
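The refinement step can be sketched as a bare-bones point-to-point ICP seeded with the coarse door-point pose. This is a simplification (brute-force nearest neighbours, no outlier rejection, fixed iteration count) of what a production ICP or NDT implementation would do:

```python
import numpy as np

def icp_refine(src, dst, R0, t0, iters=20):
    """Refine an initial rigid pose (R0, t0) aligning src onto dst by
    iterating nearest-neighbour matching and a closed-form (Kabsch/SVD)
    rigid update. src: (N, 3) and dst: (M, 3) point arrays."""
    R, t = R0.copy(), t0.copy()
    for _ in range(iters):
        moved = src @ R.T + t
        # brute-force nearest neighbour in dst for every moved source point
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # closed-form rigid transform from moved -> matched
        mu_s, mu_d = moved.mean(0), matched.mean(0)
        H = (moved - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ S @ U.T
        t_step = mu_d - R_step @ mu_s
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```

Seeding with the door-point pose matters: starting close to the optimum keeps the nearest-neighbour matching correct and lets ICP converge instead of falling into one of the local extrema mentioned above.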
In the embodiment of the application, pose estimation for the three-dimensional point cloud data sets of the acquisition points is performed on the basis of the door body information in each space object, combined with the door body connection information between the three-dimensional point cloud data sets. Specifically, two-dimensional door point information is detected in the two-dimensional live-action image and converted into three-dimensional door point information; the relative pose information between the three-dimensional point cloud data sets is then estimated from the three-dimensional door point information together with the door body connection information. The whole process does not require a sufficient number of feature matching pairs: point cloud registration is performed from the three-dimensional door point information corresponding to the door body information, which improves the accuracy of the determined relative pose information.
In an optional embodiment, an implementation of determining the first relative pose information between the three-dimensional point cloud data sets of the acquisition point locations according to the three-dimensional gate point information of each three-dimensional point cloud data set in combination with the door body connection information between the three-dimensional point cloud data sets includes: sequentially determining a target point cloud data set according to a set point cloud registration order, where the set point cloud registration order may be the order in which the three-dimensional point cloud data sets were collected, or may be a point cloud registration order between the three-dimensional point cloud data sets determined according to the relative positional relationship of the plurality of space objects; and determining at least one candidate point cloud data set according to the door body connection information between the three-dimensional point cloud data sets of the acquisition point locations. Each three-dimensional point cloud data set may include one or more pieces of door body information; the door body connection information maintained for a piece of door body information represents that it has established a connection relationship with door body information in another three-dimensional point cloud data set, and the at least one candidate point cloud data set consists of three-dimensional point cloud data sets whose door body information has not established such a connection relationship with door body information in other three-dimensional point cloud data sets.
Estimating second relative pose information corresponding to each candidate point cloud data set according to the target point cloud data set and the three-dimensional gate point information of each candidate point cloud data set; selecting a first candidate point cloud data set from at least one candidate point cloud data set as a source point cloud data set according to second relative pose information corresponding to each candidate point cloud data set; and taking the second relative pose information corresponding to the first candidate point cloud data set as the first relative pose information between the source point cloud data set and the target point cloud data set. And the second relative pose information corresponding to the first candidate point cloud data set is the second relative pose information which enables the point cloud error between the candidate point cloud data set and the target point cloud data set to be minimum.
Optionally, an embodiment of selecting a first candidate point cloud data set from at least one candidate point cloud data set as a source point cloud data set according to the second relative pose information corresponding to each candidate point cloud data set includes: performing pose conversion on each candidate point cloud data set according to the second relative pose information corresponding to each candidate point cloud data set, and calculating first distance information between each pose-transformed candidate point cloud data set and the target point cloud data set. The manner of calculating the first distance information between each pose-transformed candidate point cloud data set and the target point cloud data set is not limited; for example, for each three-dimensional point p_i of a pose-transformed candidate point cloud data set, the nearest neighbor three-dimensional point q_i in the target point cloud data set may be obtained, the normal vector of the three-dimensional point q_i being n_i, so that the point-to-plane distance between the three-dimensional point p_i and the plane through the three-dimensional point q_i can be calculated as:
d = |n_i · (p_i − q_i)|
Considering that a large number of outliers often exist between the registered candidate point cloud data set and the target point cloud data set, in order to enhance robustness to outliers, a third distance threshold d_m is set; if the point-to-plane distance d exceeds the third distance threshold, the third distance threshold d_m is used in place of the point-to-plane distance d. The point-to-plane distances between the three-dimensional points p_i and the three-dimensional points q_i are then calculated and averaged to serve as the first distance information between the candidate point cloud data set and the target point cloud data set, and the first distance information can serve as a registration quality score (score) of the second relative pose information corresponding to the candidate point cloud data set, that is:
score = (1/m_i) · Σ_{i=1}^{m_i} min(d_i, d_m)
wherein m is i Representing a candidate point cloud dataset and a candidate point cloud data setThe number of three-dimensional points of the point cloud is registered in the target point cloud data set, and the score reflects the fit degree of the candidate point cloud data set and the target point cloud data set.
After the first distance information between each pose-transformed candidate point cloud data set and the target point cloud data set is calculated, the first candidate point cloud data set can be selected from the at least one candidate point cloud data set as the source point cloud data set according to the first distance information. For example, the candidate point cloud data set corresponding to the minimum first distance information may be used as the first candidate point cloud data set, and the first candidate point cloud data set may be used as the source point cloud data set. For another example, a plurality of candidate point cloud data sets whose first distance information exceeds a set first distance threshold may be determined, and the first candidate point cloud data set may be selected from the plurality of candidate point cloud data sets as the source point cloud data set according to the relative positional relationship of the space objects; the relative positional relationship of the space objects is obtained by other sensors, for example, a GPS positioning module, a WiFi positioning module, or a Simultaneous Localization And Mapping (SLAM) module.
In an optional embodiment, after point cloud registration is performed on the three-dimensional point cloud data set, door body connection information between the registered three-dimensional point clouds may be established, specifically, second distance information between three-dimensional door point information in the source point cloud data set and three-dimensional door point information in the destination point cloud data set is calculated according to first relative pose information between the source point cloud data set and the destination point cloud data set, for example, pose transformation is performed on the three-dimensional door point information in the source point cloud data set according to the first relative pose information, and second distance information between the three-dimensional door point information after the pose transformation and the three-dimensional door point information in the destination point cloud data set is calculated; if the second distance information is smaller than the set second distance threshold, the source point cloud data set and the three-dimensional door point information in the target point cloud data set are considered to be successfully matched, and door body connection information between the three-dimensional door point information in the source point cloud data set and the three-dimensional door point information in the target point cloud data set can be established.
Optionally, each three-dimensional gate point information comprises: the two pieces of three-dimensional corner point information can be used for respectively calculating the center point information of the two pieces of three-dimensional corner point information in each piece of three-dimensional gate point information aiming at the source point cloud data set and the target point cloud data set to respectively obtain source center point information and target center point information; the source point cloud data set comprises one or more pieces of three-dimensional gate point information, and for each piece of three-dimensional gate point information, center point information of two pieces of three-dimensional corner point information in the three-dimensional gate point information is calculated and called as source center point information; similarly, the target point cloud data set may also include one or more pieces of three-dimensional gate point information, and for each piece of three-dimensional gate point information, center point information of two pieces of three-dimensional corner point information in the three-dimensional gate point information is calculated and called target center point information; calculating third distance information between the source central point information and the target central point information according to the first relative pose information between the source point cloud data set and the target point cloud data set; for example, pose transformation is performed on the source central point information through the first relative pose information, and third distance information between the source central point information and the destination central point information after the pose transformation is calculated; and taking the third distance information as second distance information between the three-dimensional gate point information in the source point cloud data set and the target point cloud data set.
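The center-point matching described above can be illustrated with a short sketch; the function name, the (2, 3) corner-point layout, and the threshold value are hypothetical, and the returned index pairs stand for the doors for which door body connection information would be established.

```python
import numpy as np

def match_doors(src_doors, dst_doors, T, thresh=0.3):
    """Match doors between a source and a destination point cloud data set.

    src_doors/dst_doors: lists of (2, 3) arrays, the two 3-D corner points
    of each door; T: 4x4 first relative pose mapping source into the
    destination frame; thresh: second distance threshold (metres, assumed).
    Returns index pairs (i, j) of doors whose centre points, after pose
    transformation of the source centre, lie within the threshold.
    """
    matches = []
    for i, s in enumerate(src_doors):
        c = s.mean(axis=0)                 # source centre point
        c = (T[:3, :3] @ c) + T[:3, 3]     # pose-transformed centre
        for j, d in enumerate(dst_doors):
            if np.linalg.norm(c - d.mean(axis=0)) < thresh:
                matches.append((i, j))
    return matches
```
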
Optionally, if the second distance information is greater than or equal to a set second distance threshold, it is considered that the three-dimensional gate point information in the source point cloud data set and the destination point cloud data set is not successfully matched, indicating that the accuracy of point cloud registration between the destination point cloud data set and the source point cloud data set is low, and acquiring pose information of the destination point cloud data set and at least one candidate point cloud data set provided by other sensors; other sensors include at least: a WIFI sensor, a GPS sensor or a SLAM module; selecting a source point cloud data set corresponding to the target point cloud data from at least one candidate point cloud data set according to the relative position relation of the plurality of space objects; and determining first relative pose information between the target point cloud data set and the source point cloud data set according to the pose information of the target point cloud data set and the source point cloud data set provided by other sensors.
In an alternative embodiment, an implementation of determining at least one candidate point cloud data set according to door body connection information between the three-dimensional point cloud data sets includes: in the case that point cloud registration is performed on the three-dimensional point cloud data sets in the target physical space for the first time, no connection relationship has yet been established between the three-dimensional door point information of the three-dimensional point cloud data sets; at this time, the three-dimensional point cloud data sets corresponding to the acquisition point locations, except the target point cloud data set, may be used as the at least one candidate point cloud data set. In the case that point cloud registration is not performed on the three-dimensional point cloud data sets in the target physical space for the first time, connection relationships between the three-dimensional door point information participating in point cloud registration have already been established in the previous point cloud registration process, that is, door body connection information between the three-dimensional door point information participating in point cloud registration can be acquired; if the door body connection information indicates that the three-dimensional door point information contained in a first three-dimensional point cloud data set has already established connection relationships with the other three-dimensional point cloud data sets, the first three-dimensional point cloud data set no longer needs to perform point cloud registration with other three-dimensional point cloud data sets, and the three-dimensional point cloud data sets corresponding to the acquisition point locations, except the target point cloud data set and the first three-dimensional point cloud data set, may be used as the at least one candidate point cloud data set.
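A minimal sketch of the candidate-set determination, under the assumption that door connection state is tracked as a set of already-connected data set ids; the names and data layout are illustrative, not from the patent.

```python
def candidate_sets(all_ids, target_id, connected, first_time):
    """Determine candidate point cloud data sets for registration.

    all_ids:    ids of the three-dimensional point cloud data sets
    target_id:  id of the current target point cloud data set
    connected:  set of ids whose door point information has already
                established connection relationships in earlier rounds
    first_time: True on the first registration in the target physical space
    Returns the ids usable as candidate point cloud data sets.
    """
    if first_time:
        # no connections exist yet: everything except the target is a candidate
        return [i for i in all_ids if i != target_id]
    # exclude sets whose doors are already fully connected elsewhere
    return [i for i in all_ids if i != target_id and i not in connected]
```
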
In an alternative embodiment, an implementation of estimating second relative pose information between a destination point cloud dataset and each candidate point cloud dataset from three-dimensional gate point information of the destination point cloud dataset and each candidate point cloud dataset includes: matching at least two gate point pair information according to the three-dimensional gate point information of the target point cloud data set and each candidate point cloud data set; and estimating second relative pose information of the target point cloud data set and each candidate three-dimensional point cloud data set according to the at least two gate point pair information.
For example, the destination point cloud data set may include one or more three-dimensional gate point information, each of which includes two three-dimensional corner point information, and similarly, the candidate point cloud data set may also include one or more three-dimensional gate point information; for example, one piece of three-dimensional gate point information H0 in the target point cloud dataset includes three-dimensional corner point information H1 and three-dimensional corner point information H2, one piece of three-dimensional gate point information J0 in the candidate point cloud dataset includes three-dimensional corner point information J1 and three-dimensional corner point information J2, and the three-dimensional gate point information H0 is matched with the three-dimensional gate point information J0, so that two pieces of gate point pair information can be obtained, which are: gate-pair information K1 (H1, J1) and gate-pair information K2 (H2, J2); if the target point cloud data set or the candidate point cloud data set contains a plurality of three-dimensional gate point information, a plurality of (even number) gate point pair information can be obtained.
Considering that the laser radar may be fixed on a rotating shaft of a support or on a pan-tilt (gimbal) device and rotates along with the rotating shaft or pan-tilt device so as to collect a three-dimensional point cloud data set at an acquisition point location, the rotation between three-dimensional point cloud data sets is performed around the vertical axis and the translation is performed in the horizontal plane of the acquisition point locations; the relative pose information between the three-dimensional point cloud data sets therefore belongs to a two-dimensional rigid body transformation with three degrees of freedom (the degree of freedom about the y axis for rotation, and the degrees of freedom along the x axis and z axis for translation). Each gate point pair obtained through matching can provide 2 linear equations, and second relative pose information corresponding to a candidate point cloud data set can be estimated based on the gate point pairs; that is to say, for each piece of three-dimensional gate point information in the target point cloud data set and the candidate point cloud data set, two gate point pair information can be obtained through matching, so that two second relative pose information corresponding to the candidate point cloud data set can be obtained.
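For illustration, one matched door (two gate point pairs) fully determines the three-degree-of-freedom transform described above. Below is a closed-form sketch in the horizontal plane; the function name and the (2, 2) array layout are hypothetical assumptions.

```python
import numpy as np

def estimate_2d_rigid(src_pair, dst_pair):
    """Estimate the 3-DoF rigid transform (rotation about the vertical axis
    plus translation in the horizontal plane) from one matched door, i.e.
    two gate point pairs.

    src_pair, dst_pair: (2, 2) arrays holding the door's two corner points
    projected onto the horizontal (x, z) plane. Returns (theta, t) such
    that dst = R(theta) @ src + t.
    """
    ds = src_pair[1] - src_pair[0]          # door direction in the source
    dd = dst_pair[1] - dst_pair[0]          # door direction in the target
    theta = np.arctan2(dd[1], dd[0]) - np.arctan2(ds[1], ds[0])
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = dst_pair[0] - R @ src_pair[0]       # translation from one pair
    return theta, t
```

Each corner correspondence contributes one x and one z equation, so two pairs over-determine the three unknowns; the sketch resolves them in closed form via the door direction.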
Correspondingly, for the two second relative pose information corresponding to each candidate point cloud data set, pose transformation is respectively performed on the candidate point cloud data set, and first distance information between each pose-transformed candidate point cloud data set and the target point cloud data set is calculated; the candidate point cloud data set corresponding to the minimum first distance information serves as the source point cloud data set, and the second relative pose information used when calculating the minimum first distance information serves as the first relative pose information between the source point cloud data set and the target point cloud data set.
In an alternative embodiment, there may be outliers in a three-dimensional point cloud data set. For example, when three-dimensional point cloud data sets are collected in a living room and a dining room respectively, the three-dimensional points observed from the two acquisition point locations lie mainly within the living room and dining room spaces, and the three-dimensional points outside those spaces, for example three-dimensional points beyond the door of the living room or the dining room, are outliers. Based on this, the three-dimensional points beyond the door body corresponding to the three-dimensional door point information can be filtered according to the three-dimensional door point information in the three-dimensional point cloud data set. Before estimating the first relative pose information between the three-dimensional point cloud data sets according to the respective three-dimensional door point information of each three-dimensional point cloud data set and the door body connection information between the three-dimensional point cloud data sets, the plane where the door body corresponding to the three-dimensional door point information is located is determined according to the three-dimensional door point information of each three-dimensional point cloud data set; and three-dimensional points outside the space object corresponding to the three-dimensional point cloud data set are filtered according to the positional relationship between the three-dimensional points in the three-dimensional point cloud data set and the plane where the door body is located. For example, the plane where the door body is located can be estimated from the three-dimensional door point information in the three-dimensional point cloud data set and the vertical direction; the estimated plane where the door body is located is assumed to be:
n · x + ρ = 0
where n is the normal vector of the plane where the door body is located, ρ is a constant representing the position of the plane where the door body is located, and x is a three-dimensional point in the three-dimensional point cloud data set. If it satisfies
n · x + ρ > 0
the three-dimensional point is considered to be outside the door and needs to be filtered; if it satisfies
n · x + ρ ≤ 0
the three-dimensional point is on the plane where the door body is located, or inside the door, and does not need to be filtered.
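The plane-side test above can be sketched as follows. The use of an interior reference point to orient the normal is an assumption added for self-containedness (the patent does not specify how the sign of n is chosen), and the function name and argument layout are illustrative.

```python
import numpy as np

def filter_outside_door(points, door_pts_3d, inside_ref):
    """Filter three-dimensional points lying beyond the plane of a door,
    keeping the points inside the space object.

    points:      (N, 3) three-dimensional point cloud data
    door_pts_3d: (2, 3) the door's two three-dimensional corner points on
                 the ground; the door plane is vertical, so its normal lies
                 in the horizontal plane
    inside_ref:  a point known to be inside the space object (assumed
                 available, e.g. the acquisition point location), used to
                 orient the normal
    Returns the points with n . x + rho <= 0, i.e. on or inside the plane.
    """
    d = door_pts_3d[1] - door_pts_3d[0]
    up = np.array([0.0, 1.0, 0.0])            # vertical direction
    n = np.cross(d, up)                       # horizontal plane normal
    n /= np.linalg.norm(n)
    rho = -n @ door_pts_3d[0]
    if n @ inside_ref + rho > 0:              # flip so inside is <= 0
        n, rho = -n, -rho
    side = points @ n + rho
    return points[side <= 0]
```
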
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods. For example, the execution subjects of steps 101 to 103 may be device a; for another example, the execution subject of steps 101 and 102 may be device a, and the execution subject of step 103 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 101, 102, etc., are merely used for distinguishing different operations, and the sequence numbers do not represent any execution order per se. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
Fig. 2 is a schematic structural diagram of a point cloud registration apparatus according to an exemplary embodiment of the present application, and the point cloud registration apparatus shown in fig. 2 includes: an acquisition module 21, a conversion module 22 and a determination module 23.
The acquisition module 21 is configured to acquire a three-dimensional point cloud data set and a two-dimensional live-action image collected at each acquisition point location in a plurality of space objects, where each three-dimensional point cloud data set and each two-dimensional live-action image include at least one piece of door body information in the space object to which they belong; the plurality of space objects belong to a target physical space, and one or more acquisition point locations are set in each space object;
the conversion module 22 is configured to convert the two-dimensional gate point information in each two-dimensional live-action image into a three-dimensional point cloud dataset corresponding to the two-dimensional live-action image according to a conversion relationship between a radar coordinate system and a camera coordinate system, so as to obtain three-dimensional gate point information of the three-dimensional point cloud dataset, where the two-dimensional gate point information is intersection point information of a corner point in the door body information and the ground;
the determining module 23 is configured to determine, according to the three-dimensional gate point information of each three-dimensional point cloud data set, first relative pose information between the three-dimensional point cloud data sets of each acquisition point location by combining gate body connection information between the three-dimensional point cloud data sets of each acquisition point location, so as to implement point cloud registration between the three-dimensional point cloud data sets of each acquisition point location.
In an optional embodiment, the determining module is specifically configured to: sequentially determining a target point cloud data set according to a set point cloud registration sequence; determining at least one candidate point cloud data set according to door body connection information among the three-dimensional point cloud data sets of the acquisition point locations; estimating second relative pose information corresponding to each candidate point cloud data set according to the target point cloud data set and the three-dimensional gate point information of each candidate point cloud data set; selecting a first candidate point cloud data set from at least one candidate point cloud data set as a source point cloud data set according to second relative pose information corresponding to each candidate point cloud data set; and taking the second relative pose information corresponding to the first candidate point cloud data set as the first relative pose information between the source point cloud data set and the target point cloud data set.
In an optional embodiment, the determining module is specifically configured to: performing pose conversion on each candidate point cloud data set according to second relative pose information corresponding to each candidate point cloud data set, and calculating first distance information between each candidate point cloud data set and a target point cloud data set after the pose conversion; and selecting a first candidate point cloud data set from at least one candidate point cloud data set as a source point cloud data set according to first distance information between each candidate point cloud data set after pose conversion and a target point cloud data set.
In an optional embodiment, the determining module is specifically configured to: under the condition of carrying out point cloud registration on a three-dimensional point cloud data set in a target physical space for the first time, taking the three-dimensional point cloud data set except for a target point cloud data set as at least one candidate point cloud data set; under the condition that point cloud registration is not performed on a three-dimensional point cloud data set in a target physical space for the first time, if door body connection information among the three-dimensional point cloud data sets participating in the point cloud registration indicates that connection relations between three-dimensional door point information contained in a first three-dimensional point cloud data set and other three-dimensional point cloud data sets are established, the three-dimensional point cloud data sets except a target point cloud data set and the first three-dimensional point cloud data set are used as at least one candidate point cloud data set.
In an optional embodiment, the determining module is specifically configured to: matching at least two gate point pair information according to the three-dimensional gate point information of the target point cloud data set and each candidate point cloud data set; and estimating second relative pose information corresponding to each candidate three-dimensional point cloud data set according to the at least two gate point pair information.
In an optional embodiment, the point cloud registration apparatus further comprises: a calculation module and an establishment module; the computing module is used for computing second distance information between three-dimensional gate point information in the source point cloud data set and the target point cloud data set according to first relative pose information between the source point cloud data set and the target point cloud data set; and the establishing module is used for establishing door body connection information between the three-dimensional door point information in the source point cloud data set and the target point cloud data set if the second distance information is smaller than a set second distance threshold value.
In an alternative embodiment, the three-dimensional door point information includes: two three-dimensional corner point information; the calculation module is specifically configured to: respectively calculating the center point information of two three-dimensional corner point information in each three-dimensional gate point information aiming at the source point cloud data set and the target point cloud data set to respectively obtain source center point information and target center point information; calculating third distance information between the source central point information and the target central point information according to the first relative pose information between the source point cloud data set and the target point cloud data set; and taking the third distance information as second distance information between the three-dimensional gate point information in the source point cloud data set and the target point cloud data set.
In an optional embodiment, the point cloud registration apparatus further comprises: a selection module; the acquisition module is further configured to: if the second distance information is larger than or equal to a set second distance threshold, acquiring pose information of a target point cloud data set and at least one candidate point cloud data set provided by other sensors; other sensors include at least: a wireless communication sensor, a positioning sensor or an instant positioning and map building module; the selection module is used for selecting a source point cloud data set corresponding to the target point cloud data from at least one candidate point cloud data set according to the relative position relation of the plurality of space objects; the determination module is further configured to: and determining first relative pose information between the target point cloud data set and the source point cloud data set according to the pose information of the target point cloud data set and the source point cloud data set provided by other sensors.
In an optional embodiment, the point cloud registration apparatus further comprises: a filtering module; before estimating first relative pose information between the three-dimensional point cloud data sets according to respective three-dimensional door point information of each three-dimensional point cloud data set and door body connection information between the three-dimensional point cloud data sets, the determining module is further configured to: determining a plane where a door body corresponding to the three-dimensional door point information is located according to the three-dimensional door point information of each three-dimensional point cloud data set; the filtering module is used for: and filtering three-dimensional points outside the space object corresponding to the three-dimensional point cloud data set according to the position relation between the three-dimensional points in the three-dimensional point cloud data set and the plane where the door body is located.
For details of the implementation of the point cloud registration apparatus, reference may be made to the foregoing embodiments, and details are not repeated herein.
The point cloud registration device provided by the embodiment of the application estimates the position and orientation of the three-dimensional point cloud data set of each acquisition point location based on the door body information in the space object and by combining the door body connection information between the three-dimensional point cloud data sets, specifically detects the two-dimensional door point information in the two-dimensional live-action image, converts the two-dimensional door point information into the three-dimensional door point information, estimates the relative position and orientation information between the three-dimensional point cloud data sets based on the three-dimensional door point information of the three-dimensional point cloud data sets and by combining the door body connection information, does not need enough feature matching pairs in the whole process, performs point cloud registration according to the three-dimensional door point information corresponding to the door body information, and improves the accuracy of determining the relative position and orientation information.
Fig. 3 is a schematic structural diagram of a point cloud registration apparatus according to an exemplary embodiment of the present application. As shown in fig. 3, the apparatus includes: a memory 34 and a processor 35.
The memory 34 is used for storing a computer program and may be configured to store other various data to support operations on the point cloud registration device. Examples of such data include instructions for any application or method operating on the point cloud registration device.
The memory 34 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The processor 35, coupled to the memory 34, is configured to execute the computer program in the memory 34 to: acquire a three-dimensional point cloud data set and a two-dimensional live-action image collected at each acquisition point location in a plurality of space objects, wherein each three-dimensional point cloud data set and each two-dimensional live-action image contain at least one piece of door body information in the space object to which they belong; the plurality of space objects belong to a target physical space, and one or more acquisition point locations are provided in each space object; convert the two-dimensional door point information in each two-dimensional live-action image into the three-dimensional point cloud data set corresponding to the two-dimensional live-action image according to the conversion relation between a radar coordinate system and a camera coordinate system, to obtain the three-dimensional door point information of the three-dimensional point cloud data set, wherein the two-dimensional door point information is the intersection point information of the corner points and the ground in the door body information; and determine first relative pose information between the three-dimensional point cloud data sets of the acquisition point locations according to the three-dimensional door point information of each three-dimensional point cloud data set, combined with the door body connection information between the three-dimensional point cloud data sets of the acquisition point locations, so as to realize point cloud registration between the three-dimensional point cloud data sets of the acquisition point locations.
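The two-dimensional-to-three-dimensional door point conversion described above can be illustrated with a minimal Python sketch. It assumes a pinhole camera with a known intrinsic matrix `K`, a known depth for each door point (for example, sampled from the registered point cloud), and a camera-to-lidar extrinsic matrix `T_lidar_cam`; all of these names and the depth-based back-projection are illustrative assumptions, not part of the embodiments.

```python
import numpy as np

def pixel_door_points_to_lidar(uv_points, depths, K, T_lidar_cam):
    """Back-project 2-D door points (pixel coordinates) into the camera frame
    using a per-point depth, then move them into the lidar (radar) frame via
    the camera-to-lidar extrinsic transform.

    uv_points   : (N, 2) pixel coordinates of the corner/ground intersections
    depths      : (N,)   depth of each point along the camera z-axis
    K           : (3, 3) camera intrinsic matrix (pinhole model assumed)
    T_lidar_cam : (4, 4) homogeneous transform from camera to lidar frame
    """
    n = len(uv_points)
    uv1 = np.hstack([uv_points, np.ones((n, 1))])        # homogeneous pixels
    rays = (np.linalg.inv(K) @ uv1.T).T                  # normalized camera rays
    pts_cam = rays * depths[:, None]                     # scale by depth
    pts_cam_h = np.hstack([pts_cam, np.ones((n, 1))])
    return (T_lidar_cam @ pts_cam_h.T).T[:, :3]          # lidar-frame 3-D points
```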
In an optional embodiment, when determining the first relative pose information between the three-dimensional point cloud data sets of the acquisition point locations according to the three-dimensional door point information of each three-dimensional point cloud data set, combined with the door body connection information between the three-dimensional point cloud data sets of the acquisition point locations, the processor 35 is specifically configured to: sequentially determine a target point cloud data set according to a set point cloud registration order; determine at least one candidate point cloud data set according to the door body connection information between the three-dimensional point cloud data sets of the acquisition point locations; estimate second relative pose information corresponding to each candidate point cloud data set according to the three-dimensional door point information of the target point cloud data set and each candidate point cloud data set; select a first candidate point cloud data set from the at least one candidate point cloud data set as a source point cloud data set according to the second relative pose information corresponding to each candidate point cloud data set; and take the second relative pose information corresponding to the first candidate point cloud data set as the first relative pose information between the source point cloud data set and the target point cloud data set.
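The selection logic above (walk the registration order, score every candidate against the current target, keep the closest candidate as the source) can be sketched as follows. The callables `estimate_pose` and `registration_distance` are hypothetical placeholders for the pose-estimation and distance steps described in this and the following embodiments, and the candidate set here is simply "every already-placed dataset"; the embodiments would further prune it using door body connection information.

```python
def register_in_order(datasets, order, estimate_pose, registration_distance):
    """Greedy selection sketch: treat each dataset in the configured order as
    the target, score every already-placed candidate, and keep the candidate
    whose cloud lies closest after its estimated pose is applied."""
    poses = {}
    placed = [order[0]]                        # the first dataset anchors the map
    for tgt in order[1:]:
        scored = []
        for cand in placed:                    # candidates; door connectivity could prune this
            pose = estimate_pose(datasets[tgt], datasets[cand])
            dist = registration_distance(datasets[tgt], datasets[cand], pose)
            scored.append((dist, cand, pose))
        dist, src, pose = min(scored)          # first candidate = smallest distance
        poses[tgt] = (src, pose)               # first relative pose w.r.t. its source
        placed.append(tgt)
    return poses
```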
In an optional embodiment, when selecting the first candidate point cloud data set from the at least one candidate point cloud data set as the source point cloud data set according to the second relative pose information corresponding to each candidate point cloud data set, the processor 35 is specifically configured to: perform pose conversion on each candidate point cloud data set according to the second relative pose information corresponding to the candidate point cloud data set, and calculate first distance information between each pose-converted candidate point cloud data set and the target point cloud data set; and select the first candidate point cloud data set from the at least one candidate point cloud data set as the source point cloud data set according to the first distance information between each pose-converted candidate point cloud data set and the target point cloud data set.
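One plausible form of the first distance information is sketched below: the candidate cloud is moved by its estimated pose, and the mean nearest-neighbor distance to the target cloud is taken as the score. The embodiments do not fix a particular metric; this is an assumed instantiation, and the brute-force nearest-neighbor search is used only for clarity (a KD-tree would replace it in practice).

```python
import numpy as np

def mean_nn_distance(target_pts, cand_pts, pose):
    """First-distance sketch: apply the candidate's estimated 4x4 pose to its
    points and take the mean nearest-neighbor distance to the target cloud."""
    cand_h = np.hstack([cand_pts, np.ones((len(cand_pts), 1))])
    moved = (pose @ cand_h.T).T[:, :3]                   # pose-converted candidate
    diff = moved[:, None, :] - target_pts[None, :, :]    # (n_cand, n_target, 3)
    dists = np.linalg.norm(diff, axis=2)
    return float(dists.min(axis=1).mean())
```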
In an optional embodiment, when determining at least one candidate point cloud data set according to the door body connection information between the three-dimensional point cloud data sets of the respective acquisition point locations, the processor 35 is specifically configured to: in the case of performing point cloud registration on the three-dimensional point cloud data sets in the target physical space for the first time, take the three-dimensional point cloud data sets other than the target point cloud data set as the at least one candidate point cloud data set; in the case of not performing point cloud registration on the three-dimensional point cloud data sets in the target physical space for the first time, if the door body connection information between the three-dimensional point cloud data sets participating in the point cloud registration indicates that connection relations have been established between the three-dimensional door point information contained in a first three-dimensional point cloud data set and the other three-dimensional point cloud data sets, take the three-dimensional point cloud data sets other than the target point cloud data set and the first three-dimensional point cloud data set as the at least one candidate point cloud data set.
In an optional embodiment, when estimating the second relative pose information corresponding to each candidate point cloud data set according to the three-dimensional door point information of the target point cloud data set and each candidate point cloud data set, the processor 35 is specifically configured to: match at least two door point pairs according to the three-dimensional door point information of the target point cloud data set and each candidate point cloud data set; and estimate the second relative pose information corresponding to each candidate three-dimensional point cloud data set according to the at least two door point pairs.
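A standard way to estimate a rigid pose from at least two matched door point pairs is the Kabsch/SVD solution sketched below. The embodiments do not prescribe a particular solver, so this is only an assumed instantiation; it works for 2-D (ground-plane) or 3-D point pairs, provided the pairs are not all coincident.

```python
import numpy as np

def pose_from_point_pairs(src_pts, dst_pts):
    """Estimate the rigid transform (R, t) mapping src_pts onto dst_pts from
    matched door point pairs via the Kabsch/SVD solution."""
    src_c, dst_c = src_pts.mean(axis=0), dst_pts.mean(axis=0)
    H = (src_pts - src_c).T @ (dst_pts - dst_c)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(src_pts.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t
```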
In an alternative embodiment, the processor 35 is further configured to: calculate second distance information between the three-dimensional door point information in the source point cloud data set and the target point cloud data set according to the first relative pose information between the source point cloud data set and the target point cloud data set; and if the second distance information is smaller than a set second distance threshold, establish door body connection information between the three-dimensional door point information in the source point cloud data set and the three-dimensional door point information in the target point cloud data set.
In an alternative embodiment, the three-dimensional door point information includes two pieces of three-dimensional corner point information. When calculating the second distance information between the three-dimensional door point information in the source point cloud data set and the target point cloud data set according to the first relative pose information between the source point cloud data set and the target point cloud data set, the processor 35 is specifically configured to: calculate, for the source point cloud data set and the target point cloud data set respectively, the center point of the two pieces of three-dimensional corner point information in each piece of three-dimensional door point information, to obtain source center point information and target center point information; calculate third distance information between the source center point information and the target center point information according to the first relative pose information between the source point cloud data set and the target point cloud data set; and take the third distance information as the second distance information between the three-dimensional door point information in the source point cloud data set and the target point cloud data set.
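The center-point form of the second distance described above can be written directly. This sketch assumes each door is given by its two three-dimensional corner points and that the first relative pose is expressed as a rotation `R` and translation `t`; the names are illustrative.

```python
import numpy as np

def door_center_distance(src_door, dst_door, R, t):
    """Second-distance sketch: move the midpoint of the source door's two
    corner points by the first relative pose (R, t) and compare it with the
    midpoint of the target door."""
    src_center = src_door.mean(axis=0)      # source center point information
    dst_center = dst_door.mean(axis=0)      # target center point information
    moved = R @ src_center + t              # bring the source center into the target frame
    return float(np.linalg.norm(moved - dst_center))   # third distance, used as second distance
```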
In an alternative embodiment, the processor 35 is further configured to: if the second distance information is greater than or equal to the set second distance threshold, acquire pose information of the target point cloud data set and the at least one candidate point cloud data set provided by other sensors, where the other sensors include at least a wireless communication sensor, a positioning sensor, or a simultaneous localization and mapping (SLAM) module; select a source point cloud data set corresponding to the target point cloud data set from the at least one candidate point cloud data set according to the relative position relation of the plurality of space objects; and determine first relative pose information between the target point cloud data set and the source point cloud data set according to the pose information of the target point cloud data set and the source point cloud data set provided by the other sensors.
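When the door-based estimate is rejected, the fallback relative pose can be derived from the absolute poses reported by the other sensors. A minimal sketch, assuming both poses are available as 4x4 homogeneous matrices in a common world frame (an assumption not stated in the embodiments):

```python
import numpy as np

def relative_pose_from_sensor_poses(T_world_target, T_world_source):
    """Fallback sketch: derive the first relative pose from absolute sensor
    poses: T_target_source = inv(T_world_target) @ T_world_source."""
    return np.linalg.inv(T_world_target) @ T_world_source
```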
In an optional embodiment, before estimating the first relative pose information between the three-dimensional point cloud data sets according to the respective three-dimensional door point information of each three-dimensional point cloud data set and the door body connection information between the three-dimensional point cloud data sets, the processor 35 is further configured to: determine, according to the three-dimensional door point information of each three-dimensional point cloud data set, the plane where the door body corresponding to the three-dimensional door point information is located; and filter out three-dimensional points outside the space object corresponding to the three-dimensional point cloud data set according to the positional relationship between the three-dimensional points in the three-dimensional point cloud data set and the plane where the door body is located.
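The door-plane filtering step can be sketched as follows, assuming the z-axis is the up direction, the door plane is vertical through the two ground corner points, and the scan origin lies inside the space object; all three are illustrative assumptions.

```python
import numpy as np

def filter_points_beyond_door(points, corner_a, corner_b, origin):
    """Filtering sketch: build the vertical plane through the door's two ground
    corner points (normal lies in the ground plane, perpendicular to the door
    edge), then keep only the points on the same side of that plane as the
    scan origin; far-side points are assumed to belong to a neighboring room."""
    edge = corner_b - corner_a
    normal = np.cross(edge, np.array([0.0, 0.0, 1.0]))   # in-ground-plane normal
    normal /= np.linalg.norm(normal)
    origin_side = np.sign(np.dot(origin - corner_a, normal))
    point_side = np.sign((points - corner_a) @ normal)
    return points[point_side == origin_side]             # drop points beyond the door plane
```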
For details of implementation of the point cloud registration device, reference may be made to the foregoing embodiments, which are not described herein again.
The point cloud registration device provided by the embodiments of the present application estimates the pose of the three-dimensional point cloud data set of each acquisition point location based on the door body information in the space objects, combined with the door body connection information between the three-dimensional point cloud data sets. Specifically, two-dimensional door point information is detected in the two-dimensional live-action image and converted into three-dimensional door point information, and the relative pose information between the three-dimensional point cloud data sets is estimated from the three-dimensional door point information of the three-dimensional point cloud data sets combined with the door body connection information. The whole process does not require a sufficient number of feature matching pairs; point cloud registration is performed according to the three-dimensional door point information corresponding to the door body information, which improves the accuracy of the determined relative pose information.
Further, as shown in fig. 3, the point cloud registration device further includes: a communication component 36, a display 37, a power supply component 38, an audio component 39, and the like. Only some of the components are schematically shown in fig. 3, which does not mean that the point cloud registration device includes only the components shown in fig. 3. It should be noted that the components within the dashed box in fig. 3 are optional rather than necessary components, and may be determined according to the product form of the point cloud registration device.
Accordingly, the present application also provides a computer readable storage medium storing a computer program, which, when executed by a processor, causes the processor to implement the steps of the method shown in fig. 1 provided by the present application.
The communication component in fig. 3 described above is configured to facilitate wired or wireless communication between the device in which the communication component is located and other devices. The device in which the communication component is located can access a wireless network based on a communication standard, such as WiFi, a 2G, 3G, 4G/LTE or 5G mobile communication network, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The display in fig. 3 described above includes a screen, which may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The power supply assembly of fig. 3 described above provides power to the various components of the device in which the power supply assembly is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
The audio component of fig. 3 described above may be configured to output and/or input an audio signal. For example, the audio component includes a Microphone (MIC) configured to receive an external audio signal when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in a memory or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, a computer-readable medium does not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (12)

1. A point cloud registration method, comprising:
acquiring a three-dimensional point cloud data set and a two-dimensional live-action image which are acquired at each acquisition point position in a plurality of space objects, wherein each three-dimensional point cloud data set and each two-dimensional live-action image comprise at least one door body information in the space object to which the three-dimensional point cloud data set belongs; the space objects belong to a target physical space, and one or more acquisition point positions are arranged in each space object;
converting the two-dimensional door point information in each two-dimensional live-action image into a three-dimensional point cloud data set corresponding to the two-dimensional live-action image according to the conversion relation between a radar coordinate system and a camera coordinate system to obtain the three-dimensional door point information of the three-dimensional point cloud data set, wherein the two-dimensional door point information is the intersection point information of corner points and the ground in the door body information;
and determining first relative pose information between the three-dimensional point cloud data sets of the acquisition point locations according to the three-dimensional door point information of each three-dimensional point cloud data set and the door body connection information between the three-dimensional point cloud data sets of the acquisition point locations, so as to realize point cloud registration between the three-dimensional point cloud data sets of the acquisition point locations.
2. The method of claim 1, wherein determining first relative pose information between the three-dimensional point cloud data sets of the acquisition point locations according to the three-dimensional door point information of each three-dimensional point cloud data set in combination with the door body connection information between the three-dimensional point cloud data sets of the acquisition point locations comprises:
sequentially determining a target point cloud data set according to a set point cloud registration sequence;
determining at least one candidate point cloud data set according to door body connection information among the three-dimensional point cloud data sets of the acquisition point locations;
estimating second relative pose information corresponding to each candidate point cloud data set according to the three-dimensional door point information of the target point cloud data set and each candidate point cloud data set;
selecting a first candidate point cloud data set from at least one candidate point cloud data set as a source point cloud data set according to the second relative pose information corresponding to each candidate point cloud data set;
and taking the second relative pose information corresponding to the first candidate point cloud data set as the first relative pose information between the source point cloud data set and the target point cloud data set.
3. The method of claim 2, wherein selecting a first candidate point cloud dataset from at least one candidate point cloud dataset as a source point cloud dataset according to the second relative pose information corresponding to each candidate point cloud dataset comprises:
performing pose conversion on each candidate point cloud data set according to second relative pose information corresponding to each candidate point cloud data set, and calculating first distance information between each candidate point cloud data set and a target point cloud data set after the pose conversion;
and selecting a first candidate point cloud data set from at least one candidate point cloud data set as a source point cloud data set according to first distance information between each candidate point cloud data set after pose conversion and a target point cloud data set.
4. The method of claim 2, wherein determining at least one candidate point cloud data set according to the door body connection information between the three-dimensional point cloud data sets of each acquisition point location comprises:
under the condition of carrying out point cloud registration on a three-dimensional point cloud data set in a target physical space for the first time, taking the three-dimensional point cloud data set except the target point cloud data set as at least one candidate point cloud data set;
under the condition that point cloud registration is not performed on a three-dimensional point cloud data set in a target physical space for the first time, if door body connection information among the three-dimensional point cloud data sets participating in the point cloud registration indicates that connection relations between three-dimensional door point information contained in a first three-dimensional point cloud data set and other three-dimensional point cloud data sets are established, the three-dimensional point cloud data sets except the target point cloud data set and the first three-dimensional point cloud data set are used as at least one candidate point cloud data set.
5. The method of claim 2, wherein estimating second relative pose information corresponding to each candidate point cloud data set according to the three-dimensional door point information of the target point cloud data set and each candidate point cloud data set comprises:
matching at least two door point pairs according to the three-dimensional door point information of the target point cloud data set and each candidate point cloud data set;
and estimating the second relative pose information corresponding to each candidate three-dimensional point cloud data set according to the at least two door point pairs.
6. The method of claim 2, further comprising:
calculating second distance information between three-dimensional door point information in the source point cloud data set and the target point cloud data set according to first relative pose information between the source point cloud data set and the target point cloud data set;
and if the second distance information is smaller than a set second distance threshold, establishing door body connection information between the three-dimensional door point information in the source point cloud data set and the three-dimensional door point information in the target point cloud data set.
7. The method of claim 6, wherein the three-dimensional door point information comprises: two pieces of three-dimensional corner point information; and calculating second distance information between the three-dimensional door point information in the source point cloud data set and the target point cloud data set according to the first relative pose information between the source point cloud data set and the target point cloud data set comprises:
respectively calculating the center point information of two three-dimensional corner point information in each three-dimensional door point information aiming at the source point cloud data set and the target point cloud data set to respectively obtain source center point information and target center point information;
calculating third distance information between the source central point information and the target central point information according to first relative pose information between the source point cloud data set and the target point cloud data set;
and taking the third distance information as the second distance information between the three-dimensional door point information in the source point cloud data set and the target point cloud data set.
8. The method of claim 6, further comprising:
if the second distance information is greater than or equal to the set second distance threshold, acquiring pose information of the target point cloud data set and the at least one candidate point cloud data set provided by other sensors; the other sensors include at least: a wireless communication sensor, a positioning sensor, or a simultaneous localization and mapping (SLAM) module;
selecting a source point cloud data set corresponding to the target point cloud data set from the at least one candidate point cloud data set according to the relative position relation of the plurality of space objects;
and determining first relative pose information between the target point cloud data set and the source point cloud data set according to the pose information of the target point cloud data set and the source point cloud data set provided by the other sensors.
9. The method of claim 1, further comprising, prior to estimating first relative pose information between the three-dimensional point cloud data sets from respective three-dimensional door point information for each three-dimensional point cloud data set in conjunction with door body connection information between the three-dimensional point cloud data sets: determining a plane where a door body corresponding to the three-dimensional door point information is located according to the three-dimensional door point information of each three-dimensional point cloud data set;
and filtering three-dimensional points out of the space object corresponding to the three-dimensional point cloud data set according to the position relation between the three-dimensional points in the three-dimensional point cloud data set and the plane where the door body is located.
10. A point cloud registration apparatus, comprising: the device comprises an acquisition module, a conversion module and a determination module;
the acquisition module is used for acquiring a three-dimensional point cloud data set and a two-dimensional live-action image which are acquired at each acquisition point in a plurality of space objects, and each three-dimensional point cloud data set and each two-dimensional live-action image comprise at least one door body information in the space object to which the three-dimensional point cloud data set belongs; the space objects belong to a target physical space, and one or more acquisition point positions are arranged in each space object;
the conversion module is used for converting the two-dimensional door point information in each two-dimensional live-action image into the three-dimensional point cloud data set corresponding to the two-dimensional live-action image according to the conversion relation between a radar coordinate system and a camera coordinate system, to obtain the three-dimensional door point information of the three-dimensional point cloud data set, wherein the two-dimensional door point information is the intersection point information of a corner point and the ground in the door body information;
the determining module is used for determining first relative pose information between the three-dimensional point cloud data sets of the acquisition point locations according to the three-dimensional door point information of each three-dimensional point cloud data set and the door body connection information between the three-dimensional point cloud data sets of the acquisition point locations, so as to realize point cloud registration between the three-dimensional point cloud data sets of the acquisition point locations.
11. A point cloud registration apparatus, comprising: a memory and a processor; the memory for storing a computer program; the processor, coupled to the memory, is configured to execute the computer program to implement the steps of the method of any of claims 1-9.
12. A computer-readable storage medium having a computer program stored thereon, which, when executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 9.


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination