CN114820955A - Symmetric plane completion method, device, equipment and storage medium - Google Patents


Info

Publication number
CN114820955A
Authority
CN
China
Prior art keywords
point cloud
transformation
feature
characteristic
symmetric
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210755141.9A
Other languages
Chinese (zh)
Other versions
CN114820955B (en)
Inventor
胡兰
王一夫
张如高
虞正华
Current Assignee
Suzhou Moshi Intelligent Technology Co ltd
Original Assignee
Suzhou Moshi Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Moshi Intelligent Technology Co ltd filed Critical Suzhou Moshi Intelligent Technology Co ltd
Priority to CN202210755141.9A priority Critical patent/CN114820955B/en
Publication of CN114820955A publication Critical patent/CN114820955A/en
Application granted granted Critical
Publication of CN114820955B publication Critical patent/CN114820955B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/20 Indexing scheme for editing of 3D models
    • G06T 2219/2016 Rotation, translation, scaling

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a symmetry plane completion method, device, equipment and storage medium, in the technical field of computer vision. The method comprises the following steps: acquiring a first feature point cloud, a second feature point cloud, a first transformed point cloud, a second transformed point cloud, a first symmetric point cloud and a second symmetric point cloud; calculating a first residual between the second transformed point cloud and the set formed by the first symmetric point cloud and the first feature point cloud; calculating a second residual between the first symmetric point cloud and the set formed by the first feature point cloud and the second transformed point cloud; calculating a third residual between the second symmetric point cloud and the set formed by the first transformed point cloud and the second feature point cloud; and iteratively updating the pose transformation parameters and the symmetry transformation parameters with minimization of the weighted sum of the first, second and third residuals as the target condition. Because the scheme accounts for the registration of point clouds observed from different angles while solving for the symmetry transformation parameters, the obtained parameters are more accurate and the accuracy of object completion is improved.

Description

Symmetric plane completion method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of computer vision, and in particular to a symmetry plane completion method, device, equipment and storage medium.
Background
In an autonomous driving scenario, front-end perception is one of the main functional modules; it collects information from the environment and outputs it to subsequent functions such as localization and decision making.
Most autonomous driving perception modules today use deep-learning-based detection, segmentation and similar methods, which typically require large amounts of accurate training data. A large-scale, high-quality three-dimensional autonomous driving dataset is therefore of great significance to the field. However, during data acquisition, observed objects are usually incomplete due to occlusion, limited observation angles and similar constraints (for example, vehicles parked at the roadside), so completing the incomplete objects in a scene can greatly improve the quality of the object models in the dataset and, in turn, the performance of the trained deep learning network. At present, two main classes of methods are used to complete incomplete objects. The first is deep-learning-based object generation, in which a network acquires the ability to complete objects from a large amount of training data; however, such networks generally must be trained for a particular object category, otherwise accuracy cannot be guaranteed. The second is completion based on a symmetry plane: because most man-made objects are symmetric, self-completion of an object can be achieved to some extent using symmetry plane information (i.e., symmetry parameters comprising the normal of the symmetry plane and the plane depth).
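As a concrete illustration of the symmetry parameters just mentioned (plane normal n and plane depth d), the reflection of a point p across the plane n·x = d is p − 2(n·p − d)·n for a unit normal. The following NumPy sketch is our own; the function name and array layout are not taken from the patent:

```python
import numpy as np

def reflect_points(points, normal, depth):
    """Reflect an (N, 3) point cloud across the plane normal . x = depth.

    Assumes `normal` is a unit vector; `depth` is the plane's signed
    distance from the origin along the normal.
    """
    points = np.asarray(points, dtype=float)
    normal = np.asarray(normal, dtype=float)
    signed_dist = points @ normal - depth          # signed distance of each point to the plane
    return points - 2.0 * np.outer(signed_dist, normal)
```

Points lying on the plane map to themselves, and applying the reflection twice recovers the original cloud, which is a quick sanity check for any implementation.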
In symmetry-plane-based completion, it is difficult for a computer to accurately recover the symmetry plane information of an occluded object from an image, so the accuracy of object completion is low.
Disclosure of Invention
The application provides a symmetry plane completion method and device, a computer device, and a storage medium, which improve the accuracy of object completion.
In one aspect, a symmetry plane completion method is provided, the method including:
acquiring a first feature point cloud and a second feature point cloud, the two being feature point clouds collected from different directions on a target object;
performing pose transformation on the first feature point cloud and the second feature point cloud respectively, based on pose transformation parameters, to obtain a first transformed point cloud and a second transformed point cloud correspondingly;
performing symmetric transformation on the first feature point cloud and the second feature point cloud respectively, based on symmetry transformation parameters, to obtain a first symmetric point cloud and a second symmetric point cloud correspondingly;
calculating a first residual between the second transformed point cloud and the set formed by the first symmetric point cloud and the first feature point cloud;
calculating a second residual between the first symmetric point cloud and the set formed by the first feature point cloud and the second transformed point cloud;
calculating a third residual between the second symmetric point cloud and the set formed by the first transformed point cloud and the second feature point cloud;
and iteratively updating the pose transformation parameters and the symmetry transformation parameters with minimization of the weighted sum of the first, second and third residuals as the target condition, so as to perform a completion operation on the target object according to the updated symmetry transformation parameters.
In yet another aspect, a symmetry plane completion apparatus is provided, the apparatus including:
a feature point cloud acquisition module, configured to acquire a first feature point cloud and a second feature point cloud, the two being feature point clouds collected from different directions on a target object;
a pose transformation module, configured to perform pose transformation on the first feature point cloud and the second feature point cloud respectively, based on pose transformation parameters, to obtain a first transformed point cloud and a second transformed point cloud correspondingly;
a symmetric transformation module, configured to perform symmetric transformation on the first feature point cloud and the second feature point cloud respectively, based on symmetry transformation parameters, to obtain a first symmetric point cloud and a second symmetric point cloud correspondingly;
a first residual calculation module, configured to calculate a first residual between the second transformed point cloud and the set formed by the first symmetric point cloud and the first feature point cloud;
a second residual calculation module, configured to calculate a second residual between the first symmetric point cloud and the set formed by the first feature point cloud and the second transformed point cloud;
a third residual calculation module, configured to calculate a third residual between the second symmetric point cloud and the set formed by the first transformed point cloud and the second feature point cloud;
and an object completion module, configured to iteratively update the pose transformation parameters and the symmetry transformation parameters with minimization of the weighted sum of the first, second and third residuals as the target condition, so as to perform a completion operation on the target object according to the updated symmetry transformation parameters.
In one possible implementation, the pose transformation parameters include a pose transformation matrix, and the pose transformation module is further configured to:
perform coordinate transformation on each coordinate point in the first feature point cloud through the pose transformation matrix to obtain the first transformed point cloud; and
perform coordinate transformation on each coordinate point in the second feature point cloud through the inverse of the pose transformation matrix to obtain the second transformed point cloud.
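Under the common convention of 4×4 homogeneous transforms, the two module behaviours above (one cloud through the matrix, the other through its inverse) can be sketched as follows. This is an illustration under our own conventions; the patent does not fix a matrix representation:

```python
import numpy as np

def apply_pose(points, T):
    """Apply a 4x4 homogeneous pose transform T to an (N, 3) point cloud."""
    points = np.asarray(points, dtype=float)
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coordinates
    return (pts_h @ T.T)[:, :3]

def transform_pair(first_cloud, second_cloud, T):
    """First cloud goes through T, second through T's inverse,
    mirroring the module's use of the matrix and its inverse."""
    return apply_pose(first_cloud, T), apply_pose(second_cloud, np.linalg.inv(T))
```

With a pure translation T, the first cloud moves forward and the second moves backward by the same amount, which matches the intuition that each cloud is being carried into the other's frame.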
In a possible implementation, the first residual calculation module is further configured to:
select, from the first symmetric point cloud and the first feature point cloud, the feature association point nearest to each feature point in the second transformed point cloud, forming a first neighbor point set; and
calculate the sum of squared residuals between the feature association points in the first neighbor point set and the corresponding feature points in the second transformed point cloud as the first residual.
In a possible implementation, the first residual calculation module is further configured to:
for each transformed feature point in the second transformed point cloud, select the point in the first feature point cloud with the minimum Euclidean distance to the transformed feature point as a candidate association point;
when that distance is less than or equal to a target residual threshold, determine the candidate association point in the first feature point cloud as the feature association point nearest to the transformed feature point; and
when that distance is greater than the target residual threshold, select the point in the first symmetric point cloud with the minimum Euclidean distance to the transformed feature point as the feature association point nearest to it.
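The thresholded association just described can be sketched as a nearest-neighbour query with a fallback set. Here `primary` plays the role of the first feature point cloud and `fallback` the role of the symmetric cloud; the names and the brute-force search are our own illustration, not the patent's implementation:

```python
import numpy as np

def associate(query_pts, primary, fallback, residual_threshold):
    """For each query point, take its nearest neighbour in `primary`;
    if that distance exceeds the threshold, fall back to the nearest
    neighbour in `fallback` instead. Returns the (N, 3) association set."""
    query_pts = np.asarray(query_pts, dtype=float)
    assoc = np.empty_like(query_pts)
    for i, q in enumerate(query_pts):
        d_primary = np.linalg.norm(primary - q, axis=1)
        j = np.argmin(d_primary)
        if d_primary[j] <= residual_threshold:
            assoc[i] = primary[j]
        else:
            d_fallback = np.linalg.norm(fallback - q, axis=1)
            assoc[i] = fallback[np.argmin(d_fallback)]
    return assoc
```

A production version would replace the linear scans with a k-d tree, but the threshold-then-fallback logic stays the same.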
In a possible implementation, the second residual calculation module is further configured to:
select, from the first feature point cloud and the second transformed point cloud, the feature association point nearest to each feature point in the first symmetric point cloud, forming a second neighbor point set; and
calculate the sum of squared residuals between the feature association points in the second neighbor point set and the corresponding feature points in the first symmetric point cloud as the second residual.
in a possible implementation manner, the third residual calculation module is further configured to,
selecting feature association points which are most adjacent to each feature point in the second symmetrical point cloud from the first transformation point cloud and the second feature point cloud to form a third adjacent point set;
and calculating the square sum of the residual errors between each feature associated point in the third adjacent point set and each feature point in the second symmetrical point cloud to serve as the third residual error.
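Each of the three residuals above reduces to a nearest-neighbour sum of squared distances between a query cloud and the union of two reference clouds. A brute-force sketch of that common pattern (illustrative only; a real implementation would use a spatial index):

```python
import numpy as np

def cloud_residual(query_pts, cloud_a, cloud_b):
    """Sum of squared nearest-neighbour distances from each query point
    to the union of two reference clouds (all arrays are (N, 3))."""
    merged = np.vstack([cloud_a, cloud_b])
    total = 0.0
    for q in np.asarray(query_pts, dtype=float):
        d2 = np.sum((merged - q) ** 2, axis=1)  # squared distances to every reference point
        total += float(d2.min())
    return total
```

The residual is zero exactly when every query point coincides with some point of the merged reference set.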
In a possible implementation, during the iterative update, the initial value of the pose transformation parameters may be the relative pose from the second feature point cloud to the first feature point cloud as measured by the information acquisition device.
In one possible implementation, the object completion module is further configured to:
perform symmetric transformation on the first feature point cloud according to the updated symmetry transformation parameters to obtain a third feature point cloud; and
fuse the feature points of the first feature point cloud, the second transformed point cloud and the third feature point cloud to obtain a target feature point cloud indicating the completed state of the target object.
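The patent leaves the fusion step unspecified; one simple possibility is concatenation followed by voxel-grid de-duplication. The voxel size and snapping scheme below are our assumptions, not part of the disclosure:

```python
import numpy as np

def fuse_clouds(*clouds, voxel=0.05):
    """Concatenate point clouds and drop near-duplicate points by snapping
    each point to a voxel grid and keeping one point per occupied voxel."""
    merged = np.vstack([np.asarray(c, dtype=float) for c in clouds])
    keys = np.round(merged / voxel).astype(np.int64)       # voxel index of each point
    _, idx = np.unique(keys, axis=0, return_index=True)    # first point per voxel
    return merged[np.sort(idx)]
```

Duplicated observations of the same surface patch collapse to a single point, while genuinely distinct points survive.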
In yet another aspect, a computer device is provided, which includes a processor and a memory, where at least one instruction is stored in the memory, and the at least one instruction is loaded and executed by the processor to implement the above-mentioned symmetry plane completion method.
In yet another aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the above-mentioned symmetry plane completion method.
In yet another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and executes the computer instructions, so that the computer device executes the symmetry plane completion method.
The technical scheme provided by the application can comprise the following beneficial effects:
When object completion needs to be performed on a target object, a first feature point cloud and a second feature point cloud collected from the target object from different directions can be acquired, and each is then subjected to symmetric transformation and pose transformation. The first symmetric point cloud is introduced into the point-set alignment of the first feature point cloud and the second transformed point cloud; the second transformed point cloud is introduced into the symmetry detection of the first feature point cloud and the first symmetric point cloud; and the first transformed point cloud is introduced into the symmetry detection of the second feature point cloud and the second symmetric point cloud. Iterative optimization is then carried out on the residuals obtained from these detection processes. In this way, the registration of point clouds from different angles is taken into account while the symmetry transformation parameters are solved, so that the obtained parameters are more accurate and the accuracy of object completion is improved.
Drawings
In order to illustrate the embodiments of the present application or the prior art more clearly, the drawings needed in the description are briefly introduced below. It is obvious that the drawings in the following description show some embodiments of the present application, and that other drawings can be derived from them by those skilled in the art without creative effort.
Fig. 1 is a schematic configuration diagram showing a vehicle control system according to an exemplary embodiment.
FIG. 2 is a flow chart illustrating a method of symmetry plane completion in accordance with an exemplary embodiment.
FIG. 3 is a flow chart illustrating a method of symmetry plane completion in accordance with an exemplary embodiment.
Fig. 4 is a block diagram illustrating a structure of a symmetry plane completion apparatus according to an exemplary embodiment.
Fig. 5 shows a block diagram of a computer device according to an exemplary embodiment of the present application.
Detailed Description
The technical solutions of the present application will be described clearly and completely with reference to the accompanying drawings, and it is to be understood that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be understood that the "indication" mentioned in the embodiments of the present application may be a direct indication, an indirect indication, or the indication of an association relationship. For example, "A indicates B" may mean that A directly indicates B (e.g., B can be obtained from A); that A indirectly indicates B (e.g., A indicates C, and B can be obtained from C); or that there is an association relationship between A and B.
In the description of the embodiments of the present application, the term "correspond" may indicate a direct or indirect correspondence between two items, an association between them, or relationships such as indicating and being indicated, or configuring and being configured.
In the embodiments of the present application, "predefining" may be implemented by storing, in advance, corresponding code, tables, or other information usable to indicate related information in a device (for example, a terminal device or a network device); the application does not limit the specific implementation.
Fig. 1 is a schematic configuration diagram showing a vehicle control system according to an exemplary embodiment. The vehicle control system includes a server 110 and a target vehicle 120. The target vehicle 120 may include modules such as a data processing device, an information collecting device, and a data storage module.
Optionally, the target vehicle 120 includes an information acquisition device and a data storage module, and the information acquisition device may acquire information of an environment around the target vehicle during an operation process of the target vehicle, and store the acquired feature points in the data storage module in the target vehicle.
Optionally, the target vehicle 120 is communicatively connected to the server 110 through a transmission network (e.g., a wireless communication network), and the target vehicle 120 may upload each data (e.g., the collected feature points) stored in the data storage module to the server 110 through the wireless communication network, so that the server 110 processes the collected feature points.
Optionally, the target vehicle further includes a data processing device, and the data processing device may identify an object corresponding to the feature point when the information acquisition device of the target vehicle 120 acquires the feature point, and provide functions such as subsequent positioning and decision-making according to an object existing in the environment.
Optionally, a machine learning model is loaded in the target vehicle, and the machine learning model is configured to process the feature points acquired by the information acquisition device, and output a corresponding prediction result to instruct the target vehicle to perform a decision (such as braking, turning, and the like).
Optionally, the information acquisition device on the target vehicle collects information about the surrounding environment during operation. Due to occlusion, limited observation angles and other constraints during data collection, an observed object is usually incomplete, such as a vehicle parked at the roadside. Since most man-made objects are symmetric, a processor (or server) on the target vehicle can complete the object information in the collected feature points through a symmetry plane before those feature points are used as training data for the machine learning model.
Optionally, the server may be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server providing cloud computing services such as cloud services, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, middleware services, a domain name service, a security service, a CDN, and a big data and artificial intelligence platform.
Optionally, the system may further include a management device configured to manage the system (e.g., to manage the connection states between the modules and the server). The management device is connected to the server through a communication network. Optionally, the communication network is a wired or wireless network.
Optionally, the wireless or wired network described above uses standard communication technologies and/or protocols. The network is typically the Internet, but may be any other network, including but not limited to a local area network, a metropolitan area network, a wide area network, a mobile, wired or wireless network, a private network, or any combination of virtual private networks. In some embodiments, data exchanged over the network is represented using technologies and/or formats such as hypertext markup language and extensible markup language. All or some of the links may also be encrypted using conventional encryption techniques such as secure sockets layer, transport layer security, virtual private networks, or internet protocol security. In other embodiments, custom and/or dedicated data communication technologies may also be used in place of, or in addition to, those described above.
FIG. 2 is a flowchart illustrating a symmetry plane completion method according to an exemplary embodiment. The method is performed by a computer device, which may be a data processing device in a target vehicle as shown in fig. 1. As shown in fig. 2, the symmetry plane completion method may include the following steps:
step 201, a first characteristic point cloud and a second characteristic point cloud are obtained.
The first characteristic point cloud and the second characteristic point cloud are characteristic point clouds acquired from different directions on a target object.
Optionally, in the application scenario of automatic driving, i.e., while the target vehicle is driving, the information acquisition device on the target vehicle may collect feature point information of the environment in a specified direction. The first feature point cloud may be the feature points of the target object in the image collected by the device at a first time, and the second feature point cloud may be the feature points of the target object in the image collected at a second time.
That is, during driving, the feature points collected by the information acquisition device at different times may correspond to the same object. Because the relative pose between the target vehicle and the target object changes while the vehicle moves, the first feature point cloud and the second feature point cloud are in fact collected from the target object from different directions; the feature points in the two clouds are therefore observations of the same target object from different viewpoints.
Step 202: based on the pose transformation parameters, perform pose transformation on the first feature point cloud and the second feature point cloud respectively, obtaining a first transformed point cloud and a second transformed point cloud correspondingly.
While the target vehicle is driving, the feature points of the target object collected by the information acquisition device at two different moments may still overlap in part. Since the first feature point cloud and the second feature point cloud are collected from the target object at different angles and share some common feature points, the computer device can convert the first feature point cloud into a first transformed point cloud through suitable pose transformation parameters, so that it fits the second feature point cloud as closely as possible; similarly, the computer device can convert the second feature point cloud into a second transformed point cloud according to the pose transformation parameters, so that it fits the first feature point cloud as closely as possible.
Step 203: based on the symmetry transformation parameters, perform symmetric transformation on the first feature point cloud and the second feature point cloud respectively, obtaining a first symmetric point cloud and a second symmetric point cloud correspondingly.
Most man-made objects are symmetric, so when a suitable symmetry plane is selected, the feature point cloud can be symmetrically transformed to realize self-completion of the object.
However, the symmetry transformation parameters and pose transformation parameters are difficult to choose, and the first transformed point cloud, second transformed point cloud, first symmetric point cloud and second symmetric point cloud in steps 202 and 203 all carry some error. The computer device therefore needs to iteratively update the pose transformation parameters and the symmetry transformation parameters according to the symmetric transformation errors and pose transformation errors, for example through an ICP (Iterative Closest Point) style algorithm, so as to obtain parameters that match the actual situation as closely as possible.
Step 204: calculate a first residual between the second transformed point cloud and the set formed by the first symmetric point cloud and the first feature point cloud.
The second transformed point cloud is the second feature point cloud after pose transformation according to the pose transformation parameters, and is therefore actually close to the first feature point cloud. The computer device can search the first symmetric point cloud and the first feature point cloud for the feature point nearest to each feature point in the second transformed point cloud, and obtain a first residual that serves as the residual of registering the second feature point cloud to the first feature point cloud, so that the pose transformation parameters and symmetry transformation parameters can be updated subsequently.
Step 205: calculate a second residual between the first symmetric point cloud and the set formed by the first feature point cloud and the second transformed point cloud.
The first symmetric point cloud is obtained by symmetrically transforming the first feature point cloud according to the symmetry transformation parameters. In theory, when the first feature point cloud is reflected with the most appropriate symmetry transformation parameters, the sum of the residuals between each feature point in the first symmetric point cloud and its nearest feature point in the first feature point cloud and the second transformed point cloud is minimal. The computer device can therefore calculate this sum of residuals and generate a second residual, so that the pose transformation parameters and symmetry transformation parameters can be updated subsequently.
Step 206: calculate a third residual between the second symmetric point cloud and the set formed by the first transformed point cloud and the second feature point cloud.
The second symmetric point cloud is obtained by symmetrically transforming the second feature point cloud according to the symmetry transformation parameters. In theory, when the second feature point cloud is reflected with the most appropriate symmetry transformation parameters, the sum of the residuals between each feature point in the second symmetric point cloud and its nearest feature point in the second feature point cloud and the first transformed point cloud is minimal. The computer device can therefore calculate this sum of residuals and generate a third residual, so that the pose transformation parameters and symmetry transformation parameters can be updated subsequently.
Step 207, iteratively updating the pose transformation parameter and the symmetric transformation parameter by using the minimum weighted sum of the first residual, the second residual and the third residual as a target condition, so as to perform a completion operation on the target object according to the updated symmetric transformation parameter.
After the computer device calculates the first, second, and third residuals, symmetry-plane detection and object registration can be performed jointly through an iterative update algorithm (for example, a global ICP algorithm) that minimizes the weighted sum of the three residuals. When this weighted sum is minimal, the pose transformation parameters align the first feature point cloud accurately with the second feature point cloud through the pose transformation; the mutually aligned first feature point cloud and second transformed point cloud have good symmetry with the first symmetric point cloud obtained through the symmetric transformation parameters (that is, the sum of residuals between corresponding feature point pairs is minimal); and the mutually aligned second feature point cloud and first transformed point cloud likewise have good symmetry with the second symmetric point cloud obtained through the symmetric transformation parameters.
This scheme therefore couples the solution of the symmetric transformation parameters with that of the pose transformation parameters, achieving object registration and global optimization of the symmetry plane at the same time. It improves both the registration accuracy and the detection accuracy of the symmetry plane of the incomplete object, and thereby improves the accuracy of object completion.
In summary, when a target object needs to be completed, a first feature point cloud and a second feature point cloud acquired from different orientations of the target object may be obtained, and symmetric transformation and pose transformation are then applied to each of them. The first symmetric point cloud is introduced into the point-set alignment of the first feature point cloud and the second transformed point cloud; the second transformed point cloud is introduced into the symmetry detection between the first feature point cloud and the first symmetric point cloud; and the first transformed point cloud is introduced into the symmetry detection between the second feature point cloud and the second symmetric point cloud. Iterative optimization is then performed on the residuals obtained from these detection processes, so that the registration of point clouds from different angles is taken into account while solving for the symmetric transformation parameters. The resulting symmetric transformation parameters are therefore more accurate, which improves the accuracy of object completion.
FIG. 3 is a flow chart illustrating a method of symmetry plane completion in accordance with an exemplary embodiment. The method is performed by a computer device, which may be a data processing device in a target vehicle as shown in fig. 1. As shown in fig. 3, the symmetry plane completion method may include the following steps:
step 301, a first feature point cloud and a second feature point cloud are obtained.
Optionally, the first feature point cloud and the second feature point cloud are feature point clouds acquired from different orientations for the target object.
When the computer device is a data processing device in the target vehicle, the computer device controls an information acquisition device in the target vehicle to acquire feature points of the environment in a specified orientation while the target vehicle is travelling, so that first feature points are acquired at a first moment and second feature points are acquired at a second moment.
In one possible implementation, the computer device performs object labeling on the first feature points; for example, when the detection target is a vehicle, the computer device may label the vehicle feature points among the first feature points (for example, with a common 3D point cloud labeling tool). Similarly, the computer device may perform object labeling on the second feature points, that is, label the vehicle feature points among the second feature points.
In one possible implementation, the interval between the acquisition times of the first feature points and the second feature points is less than a threshold; since this interval is short, the objects labeled by the 3D point cloud labeling tool in the two acquisitions may be considered to be the same object.
In one possible implementation, after the computer device performs object labeling on the first feature points to obtain a first feature map and on the second feature points to obtain a second feature map, it may compare the feature points in the two feature maps (for example, compare the distances between feature points, or compare the numbers of feature points), so as to determine that the point clouds in the first feature map and the second feature map are the first feature point cloud and the second feature point cloud acquired from the same object at different angles.
For example, a computer device in the target vehicle may record two frames of point clouds scanned by a laser radar (i.e., an information acquisition device) at times t1 and t2, label each frame of point cloud using a 3D point cloud labeling tool, and segment out local incomplete point clouds X and Y (i.e., a first feature point cloud and a second feature point cloud) corresponding to the same target object.
And 302, respectively carrying out pose transformation on the first characteristic point cloud and the second characteristic point cloud based on pose transformation parameters to correspondingly obtain a first transformation point cloud and a second transformation point cloud.
In one possible implementation, the pose transformation parameters include a pose transformation matrix; the computer equipment carries out coordinate transformation on each coordinate point in the first characteristic point cloud through the pose transformation matrix to correspondingly obtain a first transformation point cloud; and the computer equipment performs coordinate transformation on each coordinate point in the second characteristic point cloud through an inverse matrix of the pose transformation matrix to correspondingly obtain a second transformation point cloud.
For example, X = {x_i | i = 1, …, N} and Y = {y_j | j = 1, …, M} may be used to represent the incomplete observations of the object to be registered at different times; that is, X is the first feature point cloud to be registered, x_i is a feature point in the first feature point cloud, Y is the second feature point cloud to be registered, and y_j is a feature point in the second feature point cloud.
The relative pose of the first feature point cloud X and the second feature point cloud Y (i.e., the pose transformation parameters) is defined by a transformation consisting of a rotation matrix R and a translation vector t, and transforming the second feature point cloud Y by the pose transformation parameters aligns it with the first feature point cloud X. The first transformed point cloud X_r = {R·x_i + t} and the second transformed point cloud Y_r = {R⁻¹·(y_j − t)} are further defined as the point clouds after the pose transformation in each direction.
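Under the definitions above, the two pose transformations can be sketched with numpy as follows (a minimal illustration, not the patent's implementation; the example pose R, t and the toy point cloud are hypothetical):

```python
import numpy as np

def transform(points, R, t):
    # First transformed point cloud: x_r = R @ x + t for each row of `points`.
    return points @ R.T + t

def inverse_transform(points, R, t):
    # Second transformed point cloud: y_r = R^{-1} @ (y - t).
    # For a rotation matrix, R^{-1} = R.T, and (R.T @ v).T == v.T @ R.
    return (points - t) @ R

# Illustrative pose: 90-degree rotation about the z-axis plus a translation.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 2.0, 0.0])

X = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])        # a toy "first feature point cloud"
X_r = transform(X, R, t)               # X mapped by the pose transformation
X_back = inverse_transform(X_r, R, t)  # applying the inverse recovers X
```

Applying `transform` and then `inverse_transform` returns the original cloud, which mirrors the use of the pose matrix and its inverse in step 302.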
Step 303, based on the symmetric transformation parameters, respectively performing symmetric transformation on the first feature point cloud and the second feature point cloud to obtain a first symmetric point cloud and a second symmetric point cloud correspondingly.
In a possible implementation manner of the embodiment of the present application, in the iterative update process, the initial symmetric transformation parameter is obtained by performing global search according to the first feature point cloud and the second feature point cloud.
In the embodiment of the application, the computer device can estimate a normal n and a plane depth d from the search interval. The symmetric operation is a transformation across the symmetry plane: by the geometric relation, the symmetric transformation maps a point x in the first feature point cloud X to the corresponding reflection point x^s = x − 2·(nᵀx − d)·n. X_S = {x_i^s} is defined as the symmetric point set corresponding to the first feature point cloud X.
For example, when the symmetric transformation parameters include a symmetry normal and a depth, the computer device needs to perform a global search to obtain them. The point cloud is generally normalized into [0, 1], so that the range of the normal angles is [−π/2, π/2] and the range of the depth is [−1, 1]. The computer device can then select a symmetry normal and a depth within these ranges, symmetrically transform the first feature point cloud and the second feature point cloud to obtain the first symmetric point cloud and the second symmetric point cloud, and continue searching over the normal and depth according to the error in the subsequent process until suitable symmetric transformation parameters are found.
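The reflection x^s = x − 2·(nᵀx − d)·n used above can be sketched as follows (an illustration only; the example plane and toy cloud are hypothetical, and the normal n is assumed to be unit-length):

```python
import numpy as np

def reflect(points, n, d):
    # Reflect each point across the plane {x : n . x = d}; n must be a unit vector.
    n = np.asarray(n, dtype=float)
    signed = points @ n - d                   # signed distance of each point to the plane
    return points - 2.0 * signed[:, None] * n

# Reflect a toy cloud across the plane x = 0 (n = (1, 0, 0), d = 0).
X = np.array([[1.0, 2.0, 3.0],
              [-0.5, 0.0, 1.0]])
X_s = reflect(X, [1.0, 0.0, 0.0], 0.0)        # the symmetric point set of X
```

Reflecting twice across the same plane returns the original cloud, which is a quick sanity check on the formula.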
Step 304, calculate a first residual between the first symmetric point cloud and the first feature point cloud, and the second transformed point cloud.
In a possible implementation manner, in the first symmetric point cloud and the first feature point cloud, feature association points which are most adjacent to each feature point in the second transformed point cloud are selected to form a first adjacent point set;
and calculating the square sum of the residuals between each feature point in the first adjacent point set and each feature point in the second transformation point cloud to serve as the first residual.
In one possible implementation, for each transformed feature point in the second transformed point cloud, a candidate associated point with the smallest Euclidean distance to the transformed feature point is selected from the first feature point cloud;
when this target Euclidean distance is less than or equal to a target residual threshold, the candidate associated point in the first feature point cloud is determined as the feature association point nearest to the transformed feature point;
and when the target Euclidean distance is greater than the target residual threshold, the point with the smallest Euclidean distance to the transformed feature point is instead selected from the first symmetric point cloud as the feature association point nearest to the transformed feature point.
That is, the computer device may calculate the residual for point cloud registration. The standard solution to the point cloud registration problem is given by the iterative closest point (ICP) algorithm, which mainly minimizes the alignment error
E_reg(R, t) = Σ_j ‖y_j^r − z_j‖²,
where y_j^r is a feature point in the second transformed point cloud Y_r and z_j is the corresponding nearest-neighbor feature point selected from the union of the first feature point cloud X and the first symmetric point cloud X_S. The algorithm gives priority to points in X as associated point pairs for registration; if the residual corresponding to the point selected from X is greater than a selected parameter c, it switches to searching for the nearest neighbor in X_S. Given an initial transformation R and t, the ICP algorithm constructs new point pairings by alternately searching for nearest neighbors, and alternates between selecting nearest neighbors and updating the optimized residual until the optimal registration pose is found.
In a possible implementation manner, after the feature point closest to the transformed feature point is determined, if a residual error between the two is greater than a truncation threshold, a matching point pair formed by the two is ignored.
That is, for each feature point in the second transformed point cloud, when the residual between the nearest neighboring feature point found in the first feature point cloud and the first symmetric point cloud and the feature point in the second transformed point cloud is greater than the truncation threshold, then there is no need to merge the residual value into the first residual, thereby filtering matching point pairs for which the residual is too large.
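The association rule above — prefer the first feature point cloud, fall back to the first symmetric point cloud, and drop over-large residuals — can be sketched with a brute-force nearest-neighbor search (the values of the switch-over parameter `c`, the truncation threshold `trunc`, and the toy data are illustrative assumptions):

```python
import numpy as np

def registration_residual(Y_r, X, X_s, c=0.5, trunc=2.0):
    # For each transformed point y in Y_r: prefer the nearest neighbor in X;
    # if that residual exceeds the switch-over parameter c, fall back to X_s;
    # pairs whose final residual exceeds the truncation threshold are dropped.
    total = 0.0
    for y in Y_r:
        d = np.min(np.linalg.norm(X - y, axis=1))
        if d > c:
            d = np.min(np.linalg.norm(X_s - y, axis=1))
        if d <= trunc:
            total += d ** 2
    return total

# Toy example: X has one point at the origin, its mirror X_s sits at (1, 0, 0).
X = np.array([[0.0, 0.0, 0.0]])
X_s = np.array([[1.0, 0.0, 0.0]])
Y_r = np.array([[0.1, 0.0, 0.0],    # matched to X
                [1.05, 0.0, 0.0],   # falls back to X_s
                [10.0, 0.0, 0.0]])  # truncated away
res = registration_residual(Y_r, X, X_s)
```

A KD-tree would replace the brute-force `np.min` search in any realistic implementation; the loop here only illustrates the pairing logic.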
Step 305, calculating a second residual error between the first feature point cloud and the second transformed point cloud and the first symmetric point cloud.
In a possible implementation manner, in the first feature point cloud and the second transformed point cloud, feature associated points most adjacent to each feature point in the first symmetric point cloud are selected to form a second adjacent point set;
and calculating the square sum of the residual errors between each feature associated point in the second adjacent point set and each feature point in the first symmetrical point cloud to serve as the second residual error.
That is, the computer device can also calculate the residual for symmetry-plane detection. The symmetry-plane detection problem can be expressed as minimizing the symmetric distance error ‖x_i^s − p_i‖², the point-pair residual about the point p_i, where p_i is the feature point nearest to x_i^s in the union of the point cloud X and the second transformed point cloud Y_r, so that p_i = argmin over p in X ∪ Y_r of ‖x_i^s − p‖. The objective function of the symmetry-plane fit about X is therefore
E_sym,X(n, d) = Σ_i ‖x_i^s − p_i‖².
In one possible implementation, for each feature point in the first symmetric point cloud, when a residual between a nearest neighboring feature point found in the second neighboring point set and the feature point in the first symmetric point cloud is greater than a truncation threshold, the residual value does not need to be merged into the second residual, so as to filter matching point pairs with too large residual.
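The second residual can be sketched in the same spirit (a hedged illustration; the truncation value and the toy data are assumptions):

```python
import numpy as np

def symmetry_residual(X_s, X, Y_r, trunc=2.0):
    # For each reflected point x^s in X_s, take the nearest neighbor in the
    # union of X and Y_r; squared residuals above the truncation threshold
    # are dropped, filtering matching pairs whose residual is too large.
    pool = np.vstack([X, Y_r])
    total = 0.0
    for xs in X_s:
        d = np.min(np.linalg.norm(pool - xs, axis=1))
        if d <= trunc:
            total += d ** 2
    return total

# Toy example: the second reflected point is too far away and is truncated.
X = np.array([[0.0, 0.0, 0.0]])
Y_r = np.array([[1.0, 0.0, 0.0]])
X_s = np.array([[0.1, 0.0, 0.0],
                [5.0, 0.0, 0.0]])
res = symmetry_residual(X_s, X, Y_r)
```

The third residual of step 306 has the same form with the roles of the two clouds swapped (Y_S against X_r ∪ Y).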
Step 306, calculate a third residual error between the first transformed point cloud and the second feature point cloud, and the second symmetric point cloud.
In a possible implementation manner, in the first transformed point cloud and the second feature point cloud, feature points most adjacent to each feature point in the second symmetric point cloud are selected to form a third adjacent point set;
and calculating the square sum of the residual errors between each characteristic point in the third adjacent point set and each characteristic point in the second symmetrical point cloud to serve as the third residual error.
Here ‖y_j^s − q_j‖² is the point-pair residual about the reflected point y_j^s over the union of the first transformed point cloud X_r and the second feature point cloud Y. The objective function of the symmetry-plane detection about Y is
E_sym,Y(n_r, d_r) = Σ_j ‖y_j^s − q_j‖²,
where q_j is the closest point selected from the set X_r ∪ Y, and n_r and d_r are the symmetry-plane parameters in the frame of Y, obtained from n and d through the geometric (pose) transformation. A new point pairing relation is constructed by alternately searching for nearest neighbors, and nearest-neighbor selection and residual updating alternate until the optimal normal n and plane depth d are found.
In one possible implementation, for each feature point in the second symmetric point cloud, when a residual between a nearest neighboring feature point found in the third neighboring point set and the feature point in the second symmetric point cloud is greater than a truncation threshold, there is no need to merge the residual value into the third residual, so as to filter matching point pairs with too large residual.
Step 307, iteratively updating the pose transformation parameters and the symmetric transformation parameters with the minimum weighted sum of the first residual, the second residual, and the third residual as the target condition.
Optionally, the target condition, i.e., the overall objective function of the iterative update, is
E(R, t, n, d) = E_reg + λ·(E_sym,X + E_sym,Y),
where λ is a balance parameter between registration and symmetry-plane detection. The computer device may utilize an ICP algorithm to achieve joint symmetry-plane detection and object registration.
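Assuming the three residuals E_reg, E_sym,X and E_sym,Y have already been computed, the weighted target condition can be sketched as (the balance value `lam` is illustrative):

```python
def joint_objective(e_reg, e_sym_x, e_sym_y, lam=1.0):
    # Overall objective minimized by the joint iterative update:
    #   E(R, t, n, d) = E_reg + lam * (E_sym_X + E_sym_Y)
    # where lam balances registration against symmetry-plane detection.
    return e_reg + lam * (e_sym_x + e_sym_y)

value = joint_objective(1.0, 2.0, 3.0, lam=0.5)
```

Each iteration re-evaluates this scalar after re-pairing nearest neighbors, and the update of (R, t) and (n, d) is accepted when it decreases.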
Optionally, in this embodiment of the application, the initial values of the pose transformation parameter and the symmetric transformation parameter may be predetermined.
In one possible implementation, the rotation R0 and translation t0 of the target vehicle between the acquisitions of the first feature point cloud and the second feature point cloud (i.e., the transformation pose from the second feature point cloud to the first feature point cloud, acquired by the information acquisition device) are used as the initial R and t (i.e., the pose transformation parameters) of the minimized objective function, which can greatly accelerate execution. To obtain a globally optimal solution, a global search is performed over all possible normal vectors around the initial value of the rotation transformation.
And 308, performing symmetric transformation on the first characteristic point cloud according to the updated symmetric transformation parameters to obtain a third characteristic point cloud.
After obtaining the updated symmetric transformation parameters, the computer device may symmetrically transform the first feature point cloud according to them to obtain a third feature point cloud, which more accurately represents the feature points of the object on the other side of the symmetry plane.
Step 309, fusing the feature points among the first feature point cloud, the second transformed point cloud and the third feature point cloud to obtain a target feature point cloud so as to indicate the completion state of the target object.
Having obtained the third feature point cloud by symmetrically transforming the first feature point cloud across the symmetry plane (i.e., by the symmetric transformation parameters), the computer device can directly fuse the feature points of the first feature point cloud, the second transformed point cloud, and the third feature point cloud, and then perform down-sampling, thereby obtaining the completed state of the target object.
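A minimal sketch of the fusion and down-sampling (the grid-rounding voxel filter and the voxel size are assumptions standing in for whatever down-sampling filter an implementation actually uses):

```python
import numpy as np

def fuse_and_downsample(X, Y_r, X_mirror, voxel=0.1):
    # Concatenate the three point sets, then keep one point per voxel by
    # snapping coordinates to a grid of the given size (a simple stand-in
    # for a real voxel-grid down-sampling filter).
    fused = np.vstack([X, Y_r, X_mirror])
    keys = np.round(fused / voxel).astype(int)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return fused[np.sort(idx)]

# Toy example: the first two points fall into the same voxel and are merged.
out = fuse_and_downsample(np.array([[0.0, 0.0, 0.0]]),
                          np.array([[0.01, 0.0, 0.0]]),
                          np.array([[1.0, 0.0, 0.0]]))
```

Keeping the first point of each voxel (rather than its centroid) is a deliberate simplification for this sketch.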
In summary, when a target object needs to be completed, a first feature point cloud and a second feature point cloud acquired from different orientations of the target object may be obtained, and symmetric transformation and pose transformation are then applied to each of them. The first symmetric point cloud is introduced into the point-set alignment of the first feature point cloud and the second transformed point cloud; the second transformed point cloud is introduced into the symmetry detection between the first feature point cloud and the first symmetric point cloud; and the first transformed point cloud is introduced into the symmetry detection between the second feature point cloud and the second symmetric point cloud. Iterative optimization is then performed on the residuals obtained from these detection processes, so that the registration of point clouds from different angles is taken into account while solving for the symmetric transformation parameters. The resulting symmetric transformation parameters are therefore more accurate, which improves the accuracy of object completion.
Fig. 4 is a block diagram illustrating a structure of a symmetry plane completion apparatus according to an exemplary embodiment. This symmetry plane completion device includes:
a feature point cloud obtaining module 401, configured to obtain a first feature point cloud and a second feature point cloud; the first characteristic point cloud and the second characteristic point cloud are characteristic point clouds acquired from different directions on a target object;
a pose transformation module 402, configured to perform pose transformation on the first feature point cloud and the second feature point cloud respectively based on a pose transformation parameter, and obtain a first transformation point cloud and a second transformation point cloud correspondingly;
a symmetric transformation module 403, configured to perform symmetric transformation on the first feature point cloud and the second feature point cloud respectively based on symmetric transformation parameters, so as to obtain a first symmetric point cloud and a second symmetric point cloud correspondingly;
a first residual calculation module 404, configured to calculate a first residual between the first symmetric point cloud and the first feature point cloud, and the second transformed point cloud;
a second residual calculation module 405, configured to calculate a second residual between the first feature point cloud and the second transformed point cloud, and the first symmetric point cloud;
a third residual calculation module 406, configured to calculate a third residual between the first transformed point cloud and the second feature point cloud, and the second symmetric point cloud;
an object completion module 407, configured to iteratively update the pose transformation parameter and the symmetric transformation parameter with a minimum weighted sum of the first residual, the second residual, and the third residual as a target condition, so as to perform a completion operation on the target object according to the updated symmetric transformation parameter.
In one possible implementation, the pose transformation parameters include a pose transformation matrix;
the pose transformation module is also used for,
coordinate transformation is carried out on each coordinate point in the first characteristic point cloud through a pose transformation matrix, and a first transformation point cloud is correspondingly obtained;
and carrying out coordinate transformation on each coordinate point in the second characteristic point cloud through an inverse matrix of the pose transformation matrix to correspondingly obtain a second transformation point cloud.
In a possible implementation manner, the first residual calculation module is further configured to,
selecting feature association points which are most adjacent to each feature point in the second transformation point cloud from the first symmetrical point cloud and the first feature point cloud to form a first adjacent point set;
and calculating the square sum of residuals between each characteristic point in the first adjacent point set and each characteristic point in the second transformation point cloud to be used as the first residual.
In a possible implementation manner, the first residual calculation module is further configured to,
selecting, for each transformed feature point in the second transformed point cloud, the point in the first feature point cloud with the smallest Euclidean distance to the transformed feature point as a candidate associated point;
determining, when the target Euclidean distance is less than or equal to a target residual threshold, the candidate associated point in the first feature point cloud as the feature association point nearest to the transformed feature point;
and selecting, when the target Euclidean distance is greater than the target residual threshold, the point in the first symmetric point cloud with the smallest Euclidean distance to the transformed feature point as the feature association point nearest to the transformed feature point.
In a possible implementation manner, the second residual calculation module is further configured to,
selecting feature association points which are most adjacent to each feature point in the first symmetrical point cloud from the first feature point cloud and the second transformation point cloud to form a second adjacent point set;
calculating the square sum of residual errors between each feature associated point in the second adjacent point set and each feature point in the first symmetrical point cloud to serve as the second residual error;
in a possible implementation manner, the third residual calculation module is further configured to,
selecting feature points which are most adjacent to the feature points in the second symmetrical point cloud from the first transformation point cloud and the second feature point cloud to form a third adjacent point set;
and calculating the square sum of the residual errors between each characteristic point in the third adjacent point set and each characteristic point in the second symmetrical point cloud to be used as the third residual error.
In a possible implementation manner, in the iterative updating process, the initial value of the pose transformation parameter may be a transformation pose from the second feature point cloud to the first feature point cloud, which is acquired by the information acquisition device.
In one possible implementation, the object completion module is further configured to,
according to the updated symmetric transformation parameters, performing symmetric transformation on the first characteristic point cloud to obtain a third characteristic point cloud;
and fusing the feature points among the first feature point cloud, the second transformed point cloud and the third feature point cloud to obtain a target feature point cloud so as to indicate the completion state of the target object.
In summary, when a target object needs to be completed, a first feature point cloud and a second feature point cloud acquired from different orientations of the target object may be obtained, and symmetric transformation and pose transformation are then applied to each of them. The first symmetric point cloud is introduced into the point-set alignment of the first feature point cloud and the second transformed point cloud; the second transformed point cloud is introduced into the symmetry detection between the first feature point cloud and the first symmetric point cloud; and the first transformed point cloud is introduced into the symmetry detection between the second feature point cloud and the second symmetric point cloud. Iterative optimization is then performed on the residuals obtained from these detection processes, so that the registration of point clouds from different angles is taken into account while solving for the symmetric transformation parameters. The resulting symmetric transformation parameters are therefore more accurate, which improves the accuracy of object completion.
Fig. 5 shows a block diagram of a computer device 500 according to an exemplary embodiment of the present application. The computer device may be implemented as a server in the above-mentioned aspects of the present application. The computer apparatus 500 includes a Central Processing Unit (CPU) 501, a system Memory 504 including a Random Access Memory (RAM) 502 and a Read-Only Memory (ROM) 503, and a system bus 505 connecting the system Memory 504 and the CPU 501. The computer device 500 also includes a mass storage device 506 for storing an operating system 509, application programs 510, and other program modules 511.
The mass storage device 506 is connected to the central processing unit 501 through a mass storage controller (not shown) connected to the system bus 505. The mass storage device 506 and its associated computer-readable media provide non-volatile storage for the computer device 500. That is, the mass storage device 506 may include a computer-readable medium (not shown) such as a hard disk or Compact Disc-Only Memory (CD-ROM) drive.
Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash Memory or other solid state Memory technology, CD-ROM, Digital Versatile Disks (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 504 and mass storage device 506 described above may be collectively referred to as memory.
The computer device 500 may also operate by being connected, through a network such as the Internet, to a remote computer on the network. That is, the computer device 500 may be connected to a network 508 through a network interface unit 507 connected to the system bus 505, or may be connected to another type of network or a remote computer system (not shown) using the network interface unit 507.
The memory further includes at least one computer program, the at least one computer program is stored in the memory, and the central processing unit 501 executes the at least one computer program to implement all or part of the steps of the methods shown in the above embodiments.
In an exemplary embodiment, a computer readable storage medium is also provided for storing at least one computer program, which is loaded and executed by a processor to implement all or part of the steps of the above method. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product or a computer program is also provided, which comprises computer instructions, which are stored in a computer readable storage medium. The computer instructions are read by a processor of the computer device from a computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform all or part of the steps of the method shown in any one of the embodiments of fig. 2 or fig. 3.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A method of symmetry plane completion, the method comprising:
acquiring a first feature point cloud and a second feature point cloud, the first feature point cloud and the second feature point cloud being feature point clouds acquired from different directions of a target object;
performing pose transformation on the first feature point cloud and the second feature point cloud, respectively, based on pose transformation parameters, to correspondingly obtain a first transformed point cloud and a second transformed point cloud;
performing symmetric transformation on the first feature point cloud and the second feature point cloud, respectively, based on symmetric transformation parameters, to correspondingly obtain a first symmetric point cloud and a second symmetric point cloud;
calculating a first residual between the first symmetric point cloud together with the first feature point cloud, and the second transformed point cloud;
calculating a second residual between the first feature point cloud together with the second transformed point cloud, and the first symmetric point cloud;
calculating a third residual between the first transformed point cloud together with the second feature point cloud, and the second symmetric point cloud; and
iteratively updating the pose transformation parameters and the symmetric transformation parameters with minimization of a weighted sum of the first residual, the second residual, and the third residual as a target condition, so as to perform a completion operation on the target object according to the updated symmetric transformation parameters.
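The iterative update of claim 1 can be illustrated with a small numerical sketch. The block below is a minimal, hypothetical rendering rather than the patented implementation: it assumes a one-parameter z-axis rotation for the pose, a symmetry plane with a fixed x-axis normal, unit residual weights, and KD-tree nearest-neighbour residuals, and it hands the weighted residual sum to a generic Nelder-Mead optimizer (all names, e.g. `cost`, `nn_sq`, are illustrative).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

def rot_z(a):
    # Rotation about z by angle a: a 1-DoF stand-in for the full 6-DoF pose.
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def reflect_x(P, d):
    # Reflect points across the plane x = d (plane normal fixed to the x-axis).
    Q = P.copy()
    Q[:, 0] = 2.0 * d - Q[:, 0]
    return Q

def nn_sq(query, ref):
    # Sum of squared nearest-neighbour distances: the claims' residual form.
    dist, _ = cKDTree(ref).query(query)
    return float(np.sum(dist ** 2))

# Synthetic target: one half of an object symmetric about x = 0, seen twice;
# the second view shows the mirrored half after an unknown rigid motion.
half = rng.uniform([-1.0, -1.0, -1.0], [0.0, 1.0, 1.0], size=(200, 3))
P1 = half                                             # first feature point cloud
a_true, t_true = 0.3, np.array([0.2, -0.1, 0.05])
P2 = reflect_x(half, 0.0) @ rot_z(a_true).T + t_true  # second feature point cloud

def cost(params):
    a, tx, ty, tz, d = params
    R, t = rot_z(a), np.array([tx, ty, tz])
    T1 = P1 @ R.T + t                # first transformed cloud (forward pose)
    T2 = (P2 - t) @ R                # second transformed cloud (inverse pose)
    S1 = reflect_x(P1, d)            # first symmetric cloud
    S2 = reflect_x(T2, d) @ R.T + t  # second symmetric cloud (a modelling choice)
    r1 = nn_sq(T2, np.vstack([S1, P1]))  # first residual
    r2 = nn_sq(S1, np.vstack([P1, T2]))  # second residual
    r3 = nn_sq(S2, np.vstack([T1, P2]))  # third residual
    return r1 + r2 + r3                  # unit weights for the weighted sum

x0 = np.array([0.2, 0.1, 0.0, 0.0, 0.1])  # rough initial pose/plane guess
res = minimize(cost, x0, method="Nelder-Mead", options={"maxiter": 300})
```

At the ground-truth parameters all three residuals vanish by construction, so the optimizer's only job is to drive the weighted sum back toward zero from a perturbed initial guess.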
2. The method of claim 1, wherein the pose transformation parameters comprise a pose transformation matrix; and
wherein the performing pose transformation on the first feature point cloud and the second feature point cloud, respectively, based on the pose transformation parameters, to correspondingly obtain a first transformed point cloud and a second transformed point cloud, comprises:
performing coordinate transformation on each coordinate point in the first feature point cloud by using the pose transformation matrix, to obtain the first transformed point cloud; and
performing coordinate transformation on each coordinate point in the second feature point cloud by using an inverse matrix of the pose transformation matrix, to obtain the second transformed point cloud.
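Claim 2's forward and inverse coordinate transformations can be sketched with homogeneous 4x4 matrices under the row-vector point convention; the helper names `make_pose` and `apply_pose` are illustrative, not from the patent.

```python
import numpy as np

def make_pose(R, t):
    # Assemble a 4x4 homogeneous pose transformation matrix from R and t.
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def apply_pose(T, points):
    # Apply a 4x4 pose matrix to an (N, 3) point cloud.
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ T.T)[:, :3]

# A rotation about z plus a translation, and its inverse (claim 2's two branches).
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
T = make_pose(R, np.array([1.0, 2.0, 3.0]))

cloud = np.random.default_rng(1).normal(size=(50, 3))
first_transformed = apply_pose(T, cloud)             # first cloud: forward pose
recovered = apply_pose(np.linalg.inv(T), first_transformed)  # inverse matrix branch
```

Applying the inverse matrix to a forward-transformed cloud recovers the original coordinates, which is exactly why the second feature point cloud is mapped with the inverse in the claim.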
3. The method of claim 2, wherein the calculating a first residual between the first symmetric point cloud together with the first feature point cloud, and the second transformed point cloud, comprises:
selecting, from the first symmetric point cloud and the first feature point cloud, a feature association point nearest to each feature point in the second transformed point cloud, to form a first neighbor point set; and
calculating a sum of squared residuals between the feature association points in the first neighbor point set and the corresponding feature points in the second transformed point cloud as the first residual.
4. The method of claim 3, wherein the selecting, from the first symmetric point cloud and the first feature point cloud, a feature association point nearest to each feature point in the second transformed point cloud comprises:
for each transformed feature point in the second transformed point cloud, selecting the point in the first feature point cloud with the smallest Euclidean distance to the transformed feature point as a candidate association point;
when the Euclidean distance of the candidate association point is less than or equal to a target residual threshold, determining the candidate association point in the first feature point cloud as the feature association point nearest to the transformed feature point; and
when the Euclidean distance of the candidate association point is greater than the target residual threshold, selecting the point in the first symmetric point cloud with the smallest Euclidean distance to the transformed feature point as the feature association point nearest to the transformed feature point.
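The threshold-gated association rule of claim 4 can be sketched as follows. The function name `associate` and the toy clouds are illustrative only; the fallback set is here taken to be the first symmetric point cloud, consistent with the candidate set of claim 3.

```python
import numpy as np
from scipy.spatial import cKDTree

def associate(query, primary, fallback, thresh):
    # For each query point take the nearest point of `primary`; if its
    # Euclidean distance exceeds `thresh`, fall back to the nearest
    # point of `fallback` instead (the claim-4 selection rule, sketched).
    d_p, i_p = cKDTree(primary).query(query)
    _, i_f = cKDTree(fallback).query(query)
    use_primary = d_p <= thresh
    return np.where(use_primary[:, None], primary[i_p], fallback[i_f])

primary = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 5.0]])   # e.g. the first feature cloud
fallback = np.array([[1.0, 0.0, 0.0]])                   # e.g. the first symmetric cloud
query = np.array([[0.1, 0.0, 0.0], [10.0, 10.0, 10.0]])  # e.g. the second transformed cloud
assoc = associate(query, primary, fallback, thresh=1.0)
```

Here the first query point finds a primary neighbour within the threshold, while the second is too far from every primary point and so is matched against the fallback cloud.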
5. The method of claim 1, wherein the calculating a second residual between the first feature point cloud together with the second transformed point cloud, and the first symmetric point cloud, comprises:
selecting, from the first feature point cloud and the second transformed point cloud, a feature association point nearest to each feature point in the first symmetric point cloud, to form a second neighbor point set; and
calculating a sum of squared residuals between the feature association points in the second neighbor point set and the corresponding feature points in the first symmetric point cloud as the second residual; and
wherein the calculating a third residual between the first transformed point cloud together with the second feature point cloud, and the second symmetric point cloud, comprises:
selecting, from the first transformed point cloud and the second feature point cloud, a feature association point nearest to each feature point in the second symmetric point cloud, to form a third neighbor point set; and
calculating a sum of squared residuals between the feature association points in the third neighbor point set and the corresponding feature points in the second symmetric point cloud as the third residual.
6. The method of claim 1, wherein, in the iterative updating process, an initial value of the pose transformation parameters may be a transformation pose, acquired by an information acquisition device, from the second feature point cloud to the first feature point cloud.
7. The method of claim 1, wherein the performing a completion operation on the target object according to the updated symmetric transformation parameters comprises:
performing symmetric transformation on the first feature point cloud according to the updated symmetric transformation parameters, to obtain a third feature point cloud; and
fusing the feature points among the first feature point cloud, the second transformed point cloud, and the third feature point cloud, to obtain a target feature point cloud indicating the completion state of the target object.
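One plausible reading of claim 7, mirroring the first cloud with the final symmetry parameters and then fusing, is sketched below. The voxel-grid deduplication in `fuse` is an assumption on our part; the claim does not specify how the feature points are merged.

```python
import numpy as np

def reflect_plane(P, n, d):
    # Reflect points across the plane n . x = d (n need not be unit length).
    n = n / np.linalg.norm(n)
    return P - 2.0 * ((P @ n) - d)[:, None] * n

def fuse(clouds, voxel=0.05):
    # Concatenate clouds and keep one point per occupied voxel: one
    # plausible reading of "fusing the feature points" in claim 7.
    pts = np.vstack(clouds)
    keys = np.floor(pts / voxel).astype(np.int64)
    _, keep = np.unique(keys, axis=0, return_index=True)
    return pts[np.sort(keep)]

# Half an object on the negative-x side, mirrored across x = 0 and fused,
# yields the completed target feature point cloud.
rng = np.random.default_rng(2)
P1 = rng.uniform([-1.0, -1.0, -1.0], [0.0, 1.0, 1.0], size=(100, 3))
P3 = reflect_plane(P1, np.array([1.0, 0.0, 0.0]), 0.0)  # third feature cloud
target = fuse([P1, P3])                                 # completed target cloud
```

In a full pipeline the second transformed point cloud would be included in the `fuse` call as well, as the claim lists all three clouds.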
8. A symmetry plane completion apparatus, comprising:
a feature point cloud acquisition module, configured to acquire a first feature point cloud and a second feature point cloud, the first feature point cloud and the second feature point cloud being feature point clouds acquired from different directions of a target object;
a pose transformation module, configured to perform pose transformation on the first feature point cloud and the second feature point cloud, respectively, based on pose transformation parameters, to correspondingly obtain a first transformed point cloud and a second transformed point cloud;
a symmetric transformation module, configured to perform symmetric transformation on the first feature point cloud and the second feature point cloud, respectively, based on symmetric transformation parameters, to correspondingly obtain a first symmetric point cloud and a second symmetric point cloud;
a first residual calculation module, configured to calculate a first residual between the first symmetric point cloud together with the first feature point cloud, and the second transformed point cloud;
a second residual calculation module, configured to calculate a second residual between the first feature point cloud together with the second transformed point cloud, and the first symmetric point cloud;
a third residual calculation module, configured to calculate a third residual between the first transformed point cloud together with the second feature point cloud, and the second symmetric point cloud; and
an object completion module, configured to iteratively update the pose transformation parameters and the symmetric transformation parameters with minimization of a weighted sum of the first residual, the second residual, and the third residual as a target condition, so as to perform a completion operation on the target object according to the updated symmetric transformation parameters.
9. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction that is loaded and executed by the processor to implement the symmetry-plane completion method of any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon at least one instruction which is loaded and executed by a processor to implement the method of symmetry plane completion as claimed in any one of claims 1 to 7.
CN202210755141.9A 2022-06-30 2022-06-30 Symmetric plane completion method, device, equipment and storage medium Active CN114820955B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210755141.9A CN114820955B (en) 2022-06-30 2022-06-30 Symmetric plane completion method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN114820955A true CN114820955A (en) 2022-07-29
CN114820955B CN114820955B (en) 2022-11-18

Family

ID=82522576


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109872354A (en) * 2019-01-28 2019-06-11 深圳市易尚展示股份有限公司 Multi-angle of view point cloud registration method and system based on nonlinear optimization
CN111325663A (en) * 2020-02-21 2020-06-23 深圳市易尚展示股份有限公司 Three-dimensional point cloud matching method and device based on parallel architecture and computer equipment
CN114127785A (en) * 2021-04-15 2022-03-01 商汤国际私人有限公司 Point cloud completion method, network training method, device, equipment and storage medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YAN LU ET AL.: "LiDAR 3D Point Cloud Target Completion Algorithm", ELECTRIC POWER AND ELECTRONIC TECHNOLOGY *
ZHAO XINCAN ET AL.: "3D Point Cloud Shape Completion GAN", COMPUTER SCIENCE *
ZHAO YIQIANG ET AL.: "Deep Learning Point Cloud Completion Algorithm Based on Feature Fusion", JOURNAL OF TIANJIN UNIVERSITY (SCIENCE AND TECHNOLOGY) *

Also Published As

Publication number Publication date
CN114820955B (en) 2022-11-18

Similar Documents

Publication Publication Date Title
US11094198B2 (en) Lane determination method, device and storage medium
CN107481292B (en) Attitude error estimation method and device for vehicle-mounted camera
CN108871353B (en) Road network map generation method, system, equipment and storage medium
CN112179330B (en) Pose determination method and device of mobile equipment
CN111192331B (en) External parameter calibration method and device for laser radar and camera
WO2021143286A1 (en) Method and apparatus for vehicle positioning, controller, smart car and system
CN109407073B (en) Reflection value map construction method and device
CN113240734B (en) Vehicle cross-position judging method, device, equipment and medium based on aerial view
CN114972490B (en) Automatic data labeling method, device, equipment and storage medium
CN115436920A (en) Laser radar calibration method and related equipment
CN115376109A (en) Obstacle detection method, obstacle detection device, and storage medium
CN113189989A (en) Vehicle intention prediction method, device, equipment and storage medium
CN115494533A (en) Vehicle positioning method, device, storage medium and positioning system
CN114001706B (en) Course angle estimation method and device, electronic equipment and storage medium
CN114897988A (en) Multi-camera positioning method, device and equipment in hinge type vehicle
CN115097419A (en) External parameter calibration method and device for laser radar IMU
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
CN113450389B (en) Target tracking method and device and electronic equipment
CN112733971B (en) Pose determination method, device and equipment of scanning equipment and storage medium
CN114820955B (en) Symmetric plane completion method, device, equipment and storage medium
CN112632415A (en) Web map real-time generation method and image processing server
CN110148205B (en) Three-dimensional reconstruction method and device based on crowdsourcing image
CN115937449A (en) High-precision map generation method and device, electronic equipment and storage medium
CN115965923A (en) Bird's-eye view target detection method and device based on depth estimation and electronic equipment
CN113763481B (en) Multi-camera visual three-dimensional map construction and self-calibration method in mobile scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant