CN111009029B - Data processing method and device for three-dimensional reconstruction, electronic equipment and storage medium - Google Patents

Data processing method and device for three-dimensional reconstruction, electronic equipment and storage medium Download PDF

Info

Publication number
CN111009029B
Authority
CN
China
Prior art keywords
pose
target
information
determining
transformation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911177578.3A
Other languages
Chinese (zh)
Other versions
CN111009029A (en)
Inventor
曾灿灿
张小军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shichen Information Technology Shanghai Co ltd
Original Assignee
Shichen Information Technology Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shichen Information Technology Shanghai Co ltd filed Critical Shichen Information Technology Shanghai Co ltd
Priority to CN201911177578.3A
Publication of CN111009029A
Application granted
Publication of CN111009029B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/38 Registration of image sequences
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a data processing method and apparatus for three-dimensional reconstruction, an electronic device and a storage medium, wherein the method comprises the following steps: acquiring first point cloud information, a first picture set, second point cloud information and a second picture set, wherein the first point cloud information is obtained when a three-dimensional scene is reconstructed online, and the first picture set is a set of pictures used when the three-dimensional scene is reconstructed online; the second point cloud information is obtained when the three-dimensional scene is reconstructed offline, and the second picture set is a set of pictures used when the three-dimensional scene is reconstructed offline; determining K first target pictures in the first picture set and K second target pictures in the second picture set; determining first pose information corresponding to the first target pictures and second pose information corresponding to the second target pictures; and determining target transformation parameters of the first point cloud information and the second point cloud information according to the K pieces of first pose information and the K pieces of second pose information.

Description

Data processing method and device for three-dimensional reconstruction, electronic equipment and storage medium
Technical Field
The present invention relates to the field of three-dimensional reconstruction, and in particular, to a data processing method and apparatus for three-dimensional reconstruction, an electronic device, and a storage medium.
Background
Three-dimensional reconstruction techniques are widely studied in robotics, computer vision and computer graphics, and are now widely applied in Augmented Reality (AR), Virtual Reality (VR), mobile robotics and autonomous driving.
High-precision three-dimensional reconstruction involves a huge amount of computation and is therefore difficult to complete online, yet online reconstruction has important practical value in fields such as augmented reality and mobile robotics. In part of the prior art, a low-precision reconstruction is therefore performed on a mobile device to obtain an online reconstruction result that satisfies some requirements (such as preview or navigation), while, simultaneously or afterwards, a high-precision offline reconstruction is performed to obtain a high-precision offline reconstruction result. The online reconstruction result can then be matched against the offline reconstruction result to obtain the finally required three-dimensional reconstruction result.
However, there may be differences (e.g., in position, orientation and scale) between the offline reconstruction result and the online reconstruction result, which makes it difficult to match the two accurately.
Disclosure of Invention
The invention provides a data processing method and device for three-dimensional reconstruction, electronic equipment and a storage medium, and aims to solve the problem that an offline reconstruction result is difficult to accurately match with an online reconstruction result.
According to a first aspect of the present invention, there is provided a data processing method for three-dimensional reconstruction, comprising:
acquiring first point cloud information, a first picture set, second point cloud information and a second picture set, wherein the first point cloud information is obtained when a three-dimensional scene is reconstructed on line, and the first picture set is a set of pictures used when the three-dimensional scene is reconstructed on line; the second point cloud information is obtained when the three-dimensional scene is reconstructed offline, and the second picture set is a set of pictures used when the three-dimensional scene is reconstructed offline;
determining K first target pictures in the first picture set and K second target pictures in the second picture set, the K first target pictures being respectively matched with the K second target pictures; wherein K is a positive integer greater than or equal to 3;
determining first pose information corresponding to the first target pictures and second pose information corresponding to the second target pictures;
and determining target transformation parameters of the first point cloud information and the second point cloud information according to the K pieces of first pose information and the K pieces of second pose information.
Optionally, determining target transformation parameters of the first point cloud information and the second point cloud information according to the K pieces of first pose information and the K pieces of second pose information includes:
determining initial transformation parameters between the K first target pictures and the K second target pictures according to the first pose information and the second pose information;
and determining the target transformation parameters according to the initial transformation parameters.
Optionally, determining initial transformation parameters between the K first target pictures and the K second target pictures according to the first pose information and the second pose information includes:
randomly determining a current pose pair set for N times, wherein the pose pair set comprises L pairs of pose pairs, and each pair of pose pairs comprises paired first pose information and paired second pose information;
calculating matching transformation parameters of the current pose pair set each time when the current pose pair set is determined, and determining a transformation effect evaluation result when the current pose pair set is transformed by adopting the matching transformation parameters;
determining a pose pair set with the best transformation effect evaluation result as a target pose pair set;
determining the initial transformation parameters according to at least part of the pose pairs of the target pose pair set.
Optionally, calculating matching transformation parameters of the current pose pair set, and determining a transformation effect evaluation result of the current pose pair set when transformed by using the matching transformation parameters, includes:
substituting the first pose information and the second pose information of the pose pairs in the current pose pair set into a preset first objective function E(t, R, c), and determining the transformation parameters (t_0, R_0, c_0) that minimize the function value of the first objective function as the matching transformation parameters;
wherein:
the first objective function is E(t, R, c) = Σ_{i=1}^{L} ||y_i − (c·R·x_i + t)||²;
the second objective function is e(t, R, c) = ||y_i − (c·R·x_i + t)||²;
t represents the translation parameter among the transformation parameters;
R represents the rotation parameter among the transformation parameters;
c represents the scale parameter among the transformation parameters;
x_i represents the translation part of the first pose information in the i-th pose pair;
y_i represents the translation part of the second pose information in the i-th pose pair;
for each pose pair, substituting the matching transformation parameters and the two translation parts of the pose pair into the second objective function to obtain the corresponding function value of the second objective function;
if the obtained function value of the second objective function is smaller than a preset threshold value, recording the pose pair as an interior point (inlier) element in a corresponding interior point set, so that the transformation effect evaluation result can be characterized by the number of interior point elements in the interior point set;
and the pose pair set with the best transformation effect evaluation result is the pose pair set corresponding to the interior point set with the largest number of interior point elements.
Optionally, determining the initial transformation parameters according to at least part of pose pairs of the target pose pair set includes:
and substituting the first pose information and the second pose information of each pose pair in the interior point set corresponding to the target pose pair set into the first objective function, and determining the transformation parameters that minimize the function value of the first objective function as the initial transformation parameters.
Optionally, determining the target transformation parameter according to the initial transformation parameter includes:
and refining the initial transformation parameters by using an iteration closest point mode to obtain the target transformation parameters.
Optionally, the target transformation parameters include a target translation parameter and a target rotation parameter.
Optionally, the target transformation parameters further include a target scale parameter.
According to a second aspect of the present invention, there is provided a data processing apparatus for three-dimensional reconstruction, comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring first point cloud information, a first picture set, second point cloud information and a second picture set, the first point cloud information is obtained when a three-dimensional scene is reconstructed on line, and the first picture set is a set of pictures used when the three-dimensional scene is reconstructed on line; the second point cloud information is obtained when the three-dimensional scene is reconstructed offline, and the second picture set is a set of pictures used when the three-dimensional scene is reconstructed offline;
a positioning module, configured to determine K first target pictures in the first picture set and K second target pictures in the second picture set; the K first target pictures are respectively matched with K second target pictures in the second picture set; wherein K is a positive integer greater than or equal to 3;
the pose determining module is used for determining first pose information corresponding to the first target picture and second pose information corresponding to the second target picture;
and the target transformation parameter determining module is used for determining target transformation parameters of the first point cloud information and the second point cloud information according to the K pieces of first pose information and the K pieces of second pose information.
Optionally, the target transformation parameter determining module includes:
an initial transformation parameter determining submodule, configured to determine initial transformation parameters between the K first target pictures and the K second target pictures according to the first pose information and the second pose information;
and the target transformation parameter determining submodule is used for determining the target transformation parameters according to the initial transformation parameters.
Optionally, the initial transformation parameter determining sub-module includes:
the random unit is used for randomly determining a current pose pair set for N times, wherein the pose pair set comprises L pairs of pose pairs, and each pair of pose pairs comprises paired first pose information and paired second pose information;
the calculation evaluation unit is used for calculating the matching transformation parameters of the current pose pair set every time when the current pose pair set is determined, and determining the transformation effect evaluation result when the current pose pair set is transformed by adopting the matching transformation parameters;
the target pose pair set determining unit is used for determining a pose pair set with the best transformation effect evaluation result as a target pose pair set;
and the initial transformation parameter determining unit is used for determining the initial transformation parameters according to at least part of the pose pairs in the target pose pair set.
Optionally, the calculation and evaluation unit includes:
a first substituting subunit for substituting the first pose information and the second pose information of the pose pairs in the current pose pair set into a preset first objective function E(t, R, c) and determining the transformation parameters (t_0, R_0, c_0) that minimize the function value of the first objective function as the matching transformation parameters;
wherein:
the first objective function is E(t, R, c) = Σ_{i=1}^{L} ||y_i − (c·R·x_i + t)||²;
the second objective function is e(t, R, c) = ||y_i − (c·R·x_i + t)||²;
t represents the translation parameter among the transformation parameters;
R represents the rotation parameter among the transformation parameters;
c represents the scale parameter among the transformation parameters;
x_i represents the translation part of the first pose information in the i-th pose pair;
y_i represents the translation part of the second pose information in the i-th pose pair;
a second substituting subunit, configured to substitute, for each pose pair, the matching transformation parameters and the two translation parts of the pose pair into the second objective function, so as to obtain the corresponding function value of the second objective function;
an interior point set updating subunit, configured to record, if the obtained function value of the second objective function is smaller than a preset threshold, the pose pair as an interior point element in a corresponding interior point set, so that the transformation effect evaluation result can be characterized by the number of interior point elements in the interior point set;
and the pose pair set with the best transformation effect evaluation result is the pose pair set corresponding to the interior point set with the largest number of interior point elements.
Optionally, the initial transformation parameter determining unit is specifically configured to:
and substituting the first pose information and the second pose information of each pose pair recorded in the interior point set corresponding to the target pose pair set into the first objective function, and determining the transformation parameters that minimize the function value of the first objective function as the initial transformation parameters.
Optionally, the target transformation parameter determining submodule is specifically configured to:
and refining the initial transformation parameters by using an iteration closest point mode to obtain the target transformation parameters.
Optionally, the target transformation parameters include a target translation parameter and a target rotation parameter.
Optionally, the target transformation parameters further include a target scale parameter.
According to a third aspect of the invention, there is provided an electronic device comprising a memory and a processor, wherein:
the memory is used for storing codes;
the processor is adapted to execute code in the memory for implementing the method steps of the first aspect and its alternatives.
According to a fourth aspect of the present invention, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the method of the first aspect and its alternatives.
According to the data processing method and apparatus for three-dimensional reconstruction, the electronic device and the storage medium of the invention, pictures used in both the online reconstruction and the offline reconstruction can be found through the matched first and second target pictures. Since the poses of these pictures in their respective coordinate systems are known, they provide basis and constraint for determining the target transformation parameters between the first point cloud information and the second point cloud information, thereby facilitating registration between the offline and online reconstruction results.
In a specific scheme, a rough registration result (i.e., the initial transformation parameters) is obtained indirectly by registering the poses of the target pictures, and the final registration result (i.e., the target transformation parameters) is then obtained by refinement with a common point cloud registration method such as iterative closest point.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic flow chart of a data processing method for three-dimensional reconstruction according to an embodiment of the present invention;
FIG. 2 is a first flowchart illustrating the step S14 according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating step S141 according to an embodiment of the present invention;
FIG. 4 is a second flowchart illustrating the step S14 according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of program modules of a data processing apparatus for three-dimensional reconstruction according to an embodiment of the present invention;
FIG. 6 is a block diagram of the program sub-modules of the object transformation parameter determination module in accordance with an embodiment of the present invention;
FIG. 7 is a block diagram of an initial transformation parameter determination submodule in accordance with an embodiment of the present invention;
FIG. 8 is a diagram of program sub-elements of a computational evaluation unit in accordance with an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 1 is a schematic flow chart of a data processing method for three-dimensional reconstruction according to an embodiment of the present invention.
Referring to fig. 1, a data processing method for three-dimensional reconstruction includes:
s11: and acquiring the first point cloud information, the first picture set, the second point cloud information and the second picture set.
The first point cloud information, which is obtained when the three-dimensional scene is reconstructed online, may be understood as any information capable of describing a point cloud in a result of the online reconstruction, and specifically may include, for example, coordinate information and pose information of each point.
The second point cloud information, which is obtained when the three-dimensional scene is reconstructed offline, may be understood as any information capable of describing the point cloud in the result of offline reconstruction, and specifically may include, for example, coordinate information and pose information of each point.
The first picture set may be understood as the set of pictures used in the online reconstruction of the three-dimensional scene. Correspondingly, the second picture set may be understood as the set of pictures used when the three-dimensional scene is reconstructed offline. The two sets share some of the same pictures; these common pictures can therefore be located, and the transformation parameters determined from them.
In an implementation, {I_1, I_2, …, I_n | C_p} can be used to characterize the first picture set and the first point cloud information, where I_1, I_2, …, I_n are the pictures used for online reconstruction, i.e., the pictures in the first picture set, and C_p is the point cloud obtained by online reconstruction, i.e., the point cloud described by the first point cloud information. Likewise, {J_1, J_2, …, J_m | C_q} can be used to characterize the second picture set and the second point cloud information, where J_1, J_2, …, J_m are the pictures used for offline reconstruction, i.e., the pictures in the second picture set, and C_q is the point cloud obtained by offline reconstruction, i.e., the point cloud described by the second point cloud information.
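For illustration only (the patent itself contains no code), the inputs of step S11 can be sketched as a small Python data structure; the class and field names below are assumptions, not terms from the patent:

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class ReconstructionResult:
    # Mirrors {I_1, ..., I_n | C_p} (online) and {J_1, ..., J_m | C_q}
    # (offline): the pictures used by a reconstruction plus its point cloud.
    pictures: List[np.ndarray]   # picture set, each picture as an image array
    point_cloud: np.ndarray      # (P, 3) array of 3D points (C_p or C_q)

# hypothetical acquisition for step S11
online = ReconstructionResult(pictures=[], point_cloud=np.empty((0, 3)))
offline = ReconstructionResult(pictures=[], point_cloud=np.empty((0, 3)))
```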
After step S11, the method may include:
s12: determining K first target pictures in the first picture set and K second target pictures in the second picture set; (ii) a
S13: and determining first position and attitude information corresponding to the first target picture and second position and attitude information corresponding to the second target picture.
Here, that the K first target pictures are respectively matched with the K second target pictures in the second picture set can be understood as: each of the K first target pictures is the same picture as one corresponding second target picture. This embodiment therefore locates corresponding first and second target pictures by picture matching; K is a positive integer greater than or equal to 3.
In one embodiment, step S12 may locate the K first target pictures in the first picture set based on, for example, the second target pictures in the second picture set, where the locating may be performed by feature point matching or by a vocabulary tree. Specifically, each of the aforementioned pictures I_1, I_2, …, I_n may be located in {J_1, J_2, …, J_m | C_q}; the successfully located pictures can be denoted, for example, I_{s1}, I_{s2}, …, I_{sk}, a total of k pictures. Further, the poses of these pictures in the online reconstruction can be denoted P_1, P_2, …, P_k respectively, and their poses in the offline reconstruction can be denoted Q_1, Q_2, …, Q_k respectively.
In addition, although step S12 as described above may locate the first target pictures in the first picture set based on the second target pictures in the second picture set, in other alternatives the second target pictures may instead be located in the second picture set based on the first target pictures in the first picture set. This embodiment also does not exclude first determining the corresponding pictures and then locating the first and second target pictures in the two picture sets respectively. Whatever the manner, as long as the first target pictures and the second target pictures (i.e., the same pictures) are located as a result, it does not depart from the description of this embodiment.
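As a hedged sketch of the feature-point route for step S12: the patent allows feature point matching or a vocabulary tree but prescribes neither a detector nor a matcher, so the ORB descriptors, brute-force Hamming matching and thresholds below are assumptions:

```python
import cv2

def locate_matching_pictures(first_set, second_set, min_good=50):
    """Pair pictures of the first (online) set with pictures of the
    second (offline) set; returns (i, j) index pairs meaning
    first_set[i] matches second_set[j]."""
    orb = cv2.ORB_create(nfeatures=2000)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    second_desc = [orb.detectAndCompute(img, None)[1] for img in second_set]
    pairs = []
    for i, img in enumerate(first_set):
        des1 = orb.detectAndCompute(img, None)[1]
        if des1 is None:
            continue
        best_j, best_n = None, 0
        for j, des2 in enumerate(second_desc):
            if des2 is None:
                continue
            # count matches whose descriptor distance is small enough
            good = [m for m in bf.match(des1, des2) if m.distance < 40]
            if len(good) > best_n:
                best_j, best_n = j, len(good)
        if best_j is not None and best_n >= min_good:
            pairs.append((i, best_j))
    return pairs  # the method requires at least K >= 3 such pairs
```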
After step S13, the method may include:
s14: and determining target transformation parameters of the first point cloud information and the second point cloud information according to the K pieces of first position information and the K pieces of second position information.
The target transformation parameters are understood to be parameters suitable for achieving a transformation between an off-line reconstructed point cloud and an on-line reconstructed point cloud.
In one embodiment, the target transformation parameters include a target translation parameter and a target rotation parameter. In a specific implementation process, the target transformation parameter further comprises a target scale parameter.
Correspondingly, in the specific implementation process, the other transformation parameters, such as the matching transformation parameters and the initial transformation parameters, which are referred to later, may include a translation parameter, a rotation parameter, and a scale parameter.
In contrast, registration between two groups of point clouds in the related art does not take the scale difference into account: existing point cloud registration methods generally assume that the two groups of point clouds have the same scale, and therefore cannot register two groups of point clouds with a large scale difference. The above embodiment, using the first and second objective functions introduced below, fully considers the scale difference, thereby effectively improving the accuracy of registration.
The translation parameter can be characterized by t, the rotation parameter by R, and the scale parameter by c.
Let the translation, rotation and scale differences between the two groups of point clouds be t, R and c, respectively; the main task in registering the point clouds is then to solve for t, R and c.
Because the transformation relation between the first pose information and the second pose information of the same picture reflects the transformation relation between part of the online-reconstructed point cloud and the offline-reconstructed point cloud, this embodiment uses it as basis and constraint, which can reflect the transformation between the point clouds to a certain extent.
Correspondingly, any target transformation parameters determined based on and constrained by the pose information of the commonly used pictures do not depart from the description of the embodiment regardless of the way of calculating and using the pose information of the pictures.
Furthermore, through the matched first target picture and second target picture, pictures used for both online reconstruction and offline reconstruction can be found, the poses of the pictures in respective coordinate systems are known, and basis and constraint can be provided for determining target transformation parameters between the first point cloud information and the second point cloud information, so that the registration between an offline reconstruction result and an online reconstruction result is facilitated.
Specifically, in the subsequent scheme, a rough registration result (i.e., an initial transformation parameter) can be obtained indirectly by registering the pose of the target picture, and then a final registration result (i.e., a target transformation parameter) can be obtained by performing refinement in a common point cloud registration manner, such as an iterative closest point manner.
Fig. 2 is a first flowchart illustrating the step S14 according to an embodiment of the present invention.
Referring to fig. 2, step S14 may include:
s141: determining initial transformation parameters between the K first target pictures and the K second target pictures according to the first position information and the second position information.
The initial transformation parameters may be understood as parameters describing the transformation between the first target pictures and the second target pictures. As mentioned above, they may include, for example, an initial translation parameter, an initial rotation parameter and an initial scale parameter.
Fig. 3 is a flowchart illustrating step S141 according to an embodiment of the present invention.
In one embodiment, referring to fig. 3, step S141 may include:
s1411: and randomly determining a current pose pair set.
The pose pair set comprises L pairs of pose pairs, each pair comprising paired first pose information and second pose information; paired pose information can be understood as the first pose information and the second pose information, in the two different coordinate systems, of the same (i.e., matched) picture.
The pose information of a picture (i.e., the first pose information and the second pose information) may include two parts, translation and rotation; in the specific example, the above-mentioned poses P_1, …, P_k and Q_1, …, Q_k can accordingly be characterized as P_i = (R_i^P, x_i) and Q_i = (R_i^Q, y_i), where x_i and y_i denote the translation parts.
after step S1411, the method may further include:
s1412: and calculating matching transformation parameters of the current pose pair set, and determining a transformation effect evaluation result when the matching transformation parameters are adopted to transform the current pose pair set.
The matching transformation parameters may be understood as the transformation parameters computed over all pose pairs in the current pose pair set; equivalently, for the current pose pair set, they are the transformation parameters that minimize the error between each piece of first pose information and the corresponding second pose information after transformation.
The result of the evaluation of the transformation effect can be understood as any quantifiable data that can evaluate the transformation effect.
After step S1412, the method may further include:
s1413: whether the maximum number of repetitions is reached.
The maximum number of repetitions may be any preset number, for example 100; it may be adapted according to the number of target pictures.
If the determination result in step S1413 is yes, the following steps may be performed:
s1414: determining a pose pair set with the best transformation effect evaluation result as a target pose pair set;
s1415: determining the initial transformation parameters according to at least part of the pose pairs of the target pose pair set.
Wherein, if the maximum number of repetitions is N, the above process can also be described as:
randomly determining a current pose pair set for N times, wherein the pose pair set comprises L pairs of pose pairs, and each pair of pose pairs comprises paired first pose information and paired second pose information;
calculating matching transformation parameters of the current pose pair set each time when the current pose pair set is determined, and determining a transformation effect evaluation result when the current pose pair set is transformed by adopting the matching transformation parameters;
determining a pose pair set with the best transformation effect evaluation result as a target pose pair set;
determining the initial transformation parameters according to at least part of the pose pairs of the target pose pair set.
The number of iterations N may be determined, according to the probability of obtaining a correct result, by the formula N = log(1 − p) / log(1 − w^L), where:
p is the desired probability of obtaining a correct result, which may be set to 99%, i.e., 0.99;
w is the probability that each picture is successfully located, which depends on the positioning algorithm and may be taken as, e.g., 0.5;
L, i.e., the number of pose pairs referred to earlier, may take its minimum value of 3 so as to reduce the number of iterations.
In addition, the number of iterations N may also be determined empirically, for example as a number between 100 and 1000.
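A minimal computation of N under the example values above (p = 0.99, w = 0.5, L = 3):

```python
import math

def ransac_iterations(p=0.99, w=0.5, L=3):
    """N = log(1 - p) / log(1 - w**L): number of random draws needed so
    that at least one all-inlier sample is hit with probability p."""
    return math.ceil(math.log(1.0 - p) / math.log(1.0 - w ** L))

print(ransac_iterations())  # 35 for the example values
```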
In a specific implementation process, step S1412 may include:
substituting the first pose information and the second pose information of the pose pairs in the current pose pair set into a preset first objective function E(t, R, c), and determining the transformation parameters (t_0, R_0, c_0) that minimize the function value of the first objective function as the matching transformation parameters;
wherein:
the first objective function is E(t, R, c) = Σ_{i=1}^{L} ||y_i − (c·R·x_i + t)||²;
the second objective function is e(t, R, c) = ||y_i − (c·R·x_i + t)||²;
t represents the translation parameter among the transformation parameters;
R represents the rotation parameter among the transformation parameters;
c represents the scale parameter among the transformation parameters;
x_i represents the translation part of the first pose information in the i-th pose pair;
y_i represents the translation part of the second pose information in the i-th pose pair;
for each pose pair, the matching transformation parameters and the two translation parts of the pose pair are substituted into the second objective function to obtain the corresponding function value of the second objective function;
if the obtained function value of the second objective function is smaller than a preset threshold value, the pose pair is recorded as an interior point element in the corresponding interior point set, so that the transformation effect evaluation result can be characterized by the number of interior point elements in the interior point set;
and the pose pair set with the best transformation effect evaluation result is the pose pair set corresponding to the interior point set with the largest number of interior point elements.
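Minimizing the first objective function over (t, R, c) is the classical similarity-transform least-squares problem. The patent asks only for the minimizer; the closed-form Umeyama-style solution below is one standard way of obtaining it and is offered as a sketch, not as the patent's prescribed solver:

```python
import numpy as np

def fit_similarity(x, y):
    """Closed-form least-squares minimizer of
    E(t, R, c) = sum_i ||y_i - (c * R @ x_i + t)||^2.

    x, y: (L, 3) arrays holding the translation parts of the paired
    first/second pose information. Returns (t, R, c).
    """
    mu_x, mu_y = x.mean(axis=0), y.mean(axis=0)
    xc, yc = x - mu_x, y - mu_y
    cov = yc.T @ xc / len(x)                  # 3x3 cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                        # guard against a reflection
    R = U @ S @ Vt
    var_x = (xc ** 2).sum() / len(x)          # variance of the x_i
    c = np.trace(np.diag(D) @ S) / var_x      # optimal scale
    t = mu_y - c * R @ mu_x                   # optimal translation
    return t, R, c
```

The reflection guard keeps R a proper rotation even for near-degenerate samples, which matters when only L = 3 pose pairs are drawn.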
In one embodiment, in step S1415, the method may specifically include:
and substituting the first pose information and the second pose information of each pose pair recorded in the interior point set corresponding to the target pose pair set into the first objective function, and determining the transformation parameters that minimize the function value of the first objective function as the initial transformation parameters.
In order to find the interior point set with the largest number of interior point elements, in one example, an optimal solution of the transformation parameters and its corresponding interior point set may be maintained. Then, after step S1412, if the interior point set corresponding to the matching transformation parameters (t_0, R_0, c_0) of the current pose pair set contains more interior point elements than the interior point set of the current optimal solution, the matching transformation parameters are taken as the new optimal solution and their interior point set becomes the interior point set of the optimal solution, after which step S1413 can proceed; otherwise the optimal solution and its interior point set remain unchanged. Determining the target pose pair set is then equivalent to determining its corresponding interior point set.
In a specific implementation process, the optimal solution may be initialized to t = (0, 0, 0), R = the identity matrix and c = 1, with the interior point set empty, that is, the number of interior point elements is initially 0.
Meanwhile, the embodiment does not exclude that the matching transformation parameters and the transformation effect evaluation results of each pose pair set are respectively calculated and cached, and after all the repeated processes are completed, the pose pair set with the best transformation effect evaluation result is determined according to the cached data.
In one example, the initial transformation parameters may be determined by:
the optimal solution is initialized to t ═ 0, 0, 0, R is the 3-dimensional identity matrix and c ═ 1. With the inner set of points Z being empty, the inner elements beingNumber Nmax=0;
Randomly selecting L pairs of pose pairs as a pose pair set, and recording translation parts of the first pose information and the second pose information as
Figure GDA0002967281300000121
And
Figure GDA0002967281300000122
this process may be understood as a process of randomly selecting a current pose pair set in step S1411;
substituting the L pairs of pose pairs into the first objective function related to the previous step to solve a least square solution, and recording the result of the solution as the current solution, namely the matching transformation parameter (t)0,R0,c0) (ii) a This process may correspond to the process of calculating the matching transformation parameters in step S1412;
then, the optimal solution can be updated based on the current L alignment posture pair; specifically, the inlier set Z of the optimal solution and the inlier set Z of the current solution can be comparediNumber of middle element, if ZiIf the number of the medium elements is more, the (t, R, c) is updated to (t)0,R0,c0) Updating the inner point set Z of the optimal solution to be Zi
Repeating the processes of randomly selecting pose pairs, calculating matching transformation parameters and updating the optimal solution, iteratively updating the optimal solution and the interior point set thereof, wherein each pose pair set can correspondingly determine an interior point set, so that the interior point set of the optimal solution is determined, namely the target pose pair set is determined; the process may correspondingly achieve the purpose of step S1414;
finally, all poses in the inner point set Z of the optimal solution can be substituted into the first objective function to obtain the final optimal solution, namely: initial transformation parameters.
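The loop just described can be sketched as follows, reusing the fit_similarity helper above; the threshold value is illustrative and stands in for the preset bound on the second objective function:

```python
import numpy as np

def ransac_similarity(x, y, n_iters=100, L=3, threshold=0.1, rng=None):
    """RANSAC over pose pairs, following the example flow above.

    x, y: (K, 3) translation parts of the K matched first/second poses.
    """
    rng = rng or np.random.default_rng()
    best = (np.zeros(3), np.eye(3), 1.0)          # t = (0,0,0), R = I, c = 1
    best_inliers = np.zeros(len(x), dtype=bool)   # Z empty, N_max = 0
    for _ in range(n_iters):
        idx = rng.choice(len(x), size=L, replace=False)
        t0, R0, c0 = fit_similarity(x[idx], y[idx])
        # second objective e_i = ||y_i - (c R x_i + t)||^2 for every pair
        e = ((y - (c0 * x @ R0.T + t0)) ** 2).sum(axis=1)
        inliers = e < threshold
        if inliers.sum() > best_inliers.sum():    # keep the larger inlier set
            best, best_inliers = (t0, R0, c0), inliers
    if best_inliers.any():                        # refit on the best inlier set
        best = fit_similarity(x[best_inliers], y[best_inliers])
    return best, best_inliers
```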
As can be seen, in the above embodiment, when there are multiple pairs of corresponding poses, an optimal set of transformation parameters can be selected by the Random Sample Consensus (RANSAC) method. Because the locating of pictures may fail or contain errors, the above embodiment adopts random sample consensus to obtain an optimal registration result, and can therefore estimate a good result even for two groups of point clouds with noise or structural differences.
After step S141, the method may further include:
s142: and determining the target transformation parameters according to the initial transformation parameters.
In one embodiment, if no common point cloud registration method is additionally applied, the initial transformation parameters may also be used directly as the target transformation parameters.
Fig. 4 is a flowchart illustrating a second step S14 according to an embodiment of the present invention.
In another embodiment, referring to fig. 4, step S142 may specifically include:
s1421: and refining the initial transformation parameters by using an iteration closest point mode to obtain the target transformation parameters.
The way in which the closest point is iterated can be understood as: ICP, IteratedLosest Points.
Its processing logic can be understood as: in step S141, a rough difference between the off-line reconstructed point cloud and the on-line reconstructed point cloud can be obtained, and in step S1421, the difference can be further refined and determined.
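A sketch of the refinement step, assuming the Open3D library (the patent names no library) and folding the coarse similarity into a 4x4 initial transform:

```python
import numpy as np
import open3d as o3d   # an assumption: the patent does not prescribe Open3D

def refine_with_icp(src_points, dst_points, t, R, c, max_dist=0.05):
    """Refine the coarse similarity (t, R, c) by iterative closest point.

    max_dist is an illustrative correspondence threshold.
    """
    init = np.eye(4)
    init[:3, :3] = c * R          # scale folded into the rotation block
    init[:3, 3] = t
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(src_points))
    dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(dst_points))
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(
            with_scaling=True))   # keep estimating scale while refining
    return result.transformation  # final 4x4 target transformation
```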
Further, the above embodiment can obtain robust and highly accurate target transformation parameters. It retains the advantages of a generic flow, successful matching and high precision even when there are large rotations and translations and an obvious scale difference between the reconstruction results.
In summary, according to the data processing method for three-dimensional reconstruction provided by this embodiment, pictures used in both the online reconstruction and the offline reconstruction can be found through the matched first and second target pictures. Since the poses of these pictures in their respective coordinate systems are known, they provide basis and constraint for determining the target transformation parameters between the first point cloud information and the second point cloud information, thereby facilitating registration between the offline and online reconstruction results.
In a specific scheme, a rough registration result (i.e., the initial transformation parameters) is obtained indirectly by registering the poses of the target pictures, and the final registration result (i.e., the target transformation parameters) is then obtained by refinement with a common point cloud registration method such as iterative closest point.
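Putting the pieces together, the whole flow of FIG. 1 can be sketched end to end; every helper name comes from the illustrative snippets above, not from the patent, and each pose is assumed to be a (rotation, translation) tuple:

```python
import numpy as np

def register_reconstructions(online, offline, poses_online, poses_offline):
    # S12: locate the K matched pictures (K >= 3 required by the method)
    pairs = locate_matching_pictures(online.pictures, offline.pictures)
    assert len(pairs) >= 3, "need at least K = 3 matched pictures"
    # S13: collect the translation parts of the matched poses
    x = np.array([poses_online[i][1] for i, _ in pairs])
    y = np.array([poses_offline[j][1] for _, j in pairs])
    # S141: coarse registration via RANSAC over pose pairs
    (t, R, c), _ = ransac_similarity(x, y, n_iters=ransac_iterations())
    # S1421: ICP refinement yields the target transformation parameters
    return refine_with_icp(online.point_cloud, offline.point_cloud, t, R, c)
```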
Fig. 5 is a schematic diagram of program modules of a data processing apparatus for three-dimensional reconstruction according to an embodiment of the present invention.
Referring to fig. 5, the data processing apparatus 2 for three-dimensional reconstruction includes:
the acquiring module 21 is configured to acquire first point cloud information, a first picture set, second point cloud information, and a second picture set, where the first point cloud information is obtained when a three-dimensional scene is reconstructed online, and the first picture set is a set of pictures used when the three-dimensional scene is reconstructed online; the second point cloud information is obtained when the three-dimensional scene is reconstructed offline, and the second picture set is a set of pictures used when the three-dimensional scene is reconstructed offline;
a positioning module 22, configured to determine K first target pictures in the first picture set and K second target pictures in the second picture set; the K first target pictures are respectively matched with K second target pictures in the second picture set;
a pose determining module 23, configured to determine first pose information corresponding to the first target picture and second pose information corresponding to the second target picture;
and the target transformation parameter determining module 24 is configured to determine target transformation parameters of the first point cloud information and the second point cloud information according to the K pieces of first pose information and the K pieces of second pose information.
FIG. 6 is a block diagram of the program sub-modules of the object transformation parameter determination module according to an embodiment of the present invention.
Referring to fig. 6, the target transformation parameter determining module 24 includes:
an initial transformation parameter determining submodule 241, configured to determine initial transformation parameters between the K first target pictures and the K second target pictures according to the first pose information and the second pose information;
and a target transformation parameter determining sub-module 242, configured to determine the target transformation parameter according to the initial transformation parameter.
Fig. 7 is a schematic diagram of the program elements of the initial transformation parameter determination submodule in an embodiment of the present invention.
Referring to fig. 7, the initial transformation parameter determining sub-module 241 includes:
a random unit 2411, configured to randomly determine a current pose pair set N times, where the pose pair set includes L pairs of pose pairs, each pair including paired first pose information and second pose information;
a calculation and evaluation unit 2412, configured to calculate a matching transformation parameter of the current pose pair set each time a current pose pair set is determined, and determine a transformation effect evaluation result when the current pose pair set is transformed by using the matching transformation parameter;
a target pose pair set determining unit 2413, configured to determine a pose pair set with an optimal transformation effect evaluation result as a target pose pair set;
an initial transformation parameter determining unit 2414, configured to determine the initial transformation parameters according to at least some pose pairs in the set of target pose pairs.
FIG. 8 is a diagram of program sub-elements of a computational evaluation unit in accordance with an embodiment of the present invention;
referring to fig. 8, the calculation evaluation unit 2412 includes:
a first substituting subunit 24121, configured to substitute the first pose information and the second pose information of the pose pairs in the current pose pair set into a preset first objective function E(t, R, c), and determine the transformation parameters (t_0, R_0, c_0) that minimize the function value of the first objective function as the matching transformation parameters;
wherein:
the first objective function is E(t, R, c) = Σ_{i=1}^{L} ||y_i − (c·R·x_i + t)||²;
the second objective function is e(t, R, c) = ||y_i − (c·R·x_i + t)||²;
t represents the translation parameter among the transformation parameters;
R represents the rotation parameter among the transformation parameters;
c represents the scale parameter among the transformation parameters;
x_i represents the translation part of the first pose information in the i-th pose pair;
y_i represents the translation part of the second pose information in the i-th pose pair;
a second substituting subunit 24122, configured to substitute, for each pose pair, the matching transformation parameters and the two translation parts of the pose pair into the second objective function, so as to obtain the corresponding function value of the second objective function;
an interior point set updating subunit 24123, configured to record, if the obtained function value of the second objective function is smaller than a preset threshold, the pose pair as an interior point element in a corresponding interior point set, so that the transformation effect evaluation result can be characterized by the number of interior point elements in the interior point set;
and the pose pair set with the best transformation effect evaluation result is the pose pair set corresponding to the interior point set with the largest number of interior point elements.
Optionally, the initial transformation parameter determining unit 2414 is specifically configured to:
and substituting the first pose information and the second pose information of each pose pair recorded in the interior point set corresponding to the target pose pair set into the first objective function, and determining the transformation parameters that minimize the function value of the first objective function as the initial transformation parameters.
Optionally, the target transformation parameter determining sub-module 242 is specifically configured to:
and refining the initial transformation parameters by using an iteration closest point mode to obtain the target transformation parameters.
Optionally, the target transformation parameters include a target translation parameter and a target rotation parameter.
Optionally, the target transformation parameters further include a target scale parameter.
In summary, with the data processing apparatus for three-dimensional reconstruction provided by this embodiment, pictures used in both the online reconstruction and the offline reconstruction can be found through the matched first and second target pictures. Since the poses of these pictures in their respective coordinate systems are known, they provide basis and constraint for determining the target transformation parameters between the first point cloud information and the second point cloud information, thereby facilitating registration between the offline and online reconstruction results.
In a specific scheme, a rough registration result (i.e., the initial transformation parameters) is obtained indirectly by registering the poses of the target pictures, and the final registration result (i.e., the target transformation parameters) is then obtained by refinement with a common point cloud registration method such as iterative closest point.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the invention.
Referring to fig. 9, an electronic device is provided, which includes:
a processor 31; and the number of the first and second groups,
a memory 32 for storing executable instructions of the processor;
wherein the processor 31 is configured to perform the above-mentioned method via execution of the executable instructions.
The processor 31 is capable of communicating with the memory 32 via a bus 33.
The present embodiment also provides a computer-readable storage medium on which a computer program is stored; the program, when executed by a processor, implements the above-mentioned method.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (14)

1. A data processing method for three-dimensional reconstruction, comprising:
acquiring first point cloud information, a first picture set, second point cloud information and a second picture set, wherein the first point cloud information is obtained when a three-dimensional scene is reconstructed on line, and the first picture set is a set of pictures used when the three-dimensional scene is reconstructed on line; the second point cloud information is obtained when the three-dimensional scene is reconstructed offline, and the second picture set is a set of pictures used when the three-dimensional scene is reconstructed offline;
determining K first target pictures in the first picture set and K second target pictures in the second picture set; the K first target pictures are respectively matched with K second target pictures in the second picture set; wherein K is a positive integer greater than or equal to 3;
determining first pose information corresponding to the first target pictures and second pose information corresponding to the second target pictures;
determining target transformation parameters of the first point cloud information and the second point cloud information according to K pieces of first pose information and K pieces of second pose information;
wherein determining target transformation parameters of the first point cloud information and the second point cloud information according to the K pieces of first pose information and the K pieces of second pose information comprises:
determining initial transformation parameters between the K first target pictures and the K second target pictures according to the first pose information and the second pose information;
determining the target transformation parameters according to the initial transformation parameters;
wherein determining the initial transformation parameters between the K first target pictures and the K second target pictures according to the first pose information and the second pose information comprises:
randomly determining a current pose pair set N times, wherein the pose pair set comprises L pairs of pose pairs, and each pose pair comprises paired first pose information and second pose information;
each time a current pose pair set is determined, calculating matching transformation parameters of the current pose pair set, and determining a transformation effect evaluation result obtained when the current pose pair set is transformed using the matching transformation parameters;
determining the pose pair set with the best transformation effect evaluation result as a target pose pair set; and
determining the initial transformation parameters according to at least some of the pose pairs in the target pose pair set.
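For orientation outside the claim language: the selection procedure of claim 1 is, in effect, a RANSAC-style consensus search over pose pairs. The following is a minimal Python sketch, not the patented implementation; it assumes the translation parts of the paired poses are stacked as (K, 3) NumPy arrays x and y, relies on a fit_similarity helper (a closed-form similarity fit, sketched after claim 2 below), and uses illustrative values for the trial count, set size, and inlier threshold.

```python
import numpy as np

def select_target_pose_set(x, y, n_trials, set_size, thresh, rng=None):
    """RANSAC-style search: draw a candidate pose-pair set n_trials times,
    fit matching transformation parameters to each draw, and keep the draw
    whose transform explains the most pairs (largest interior point set)."""
    rng = rng or np.random.default_rng()
    best_inlier_idx = np.array([], dtype=int)
    for _ in range(n_trials):
        # randomly determine a current pose pair set of L = set_size pairs
        idx = rng.choice(len(x), size=set_size, replace=False)
        c, R, t = fit_similarity(x[idx], y[idx])   # matching transformation parameters
        # transformation effect evaluation: squared residual of every pair under (c, R, t)
        res = np.sum((y - (c * x @ R.T + t)) ** 2, axis=1)
        inlier_idx = np.flatnonzero(res < thresh)
        if len(inlier_idx) > len(best_inlier_idx):  # best evaluation result so far
            best_inlier_idx = inlier_idx
    return best_inlier_idx  # interior point set of the target pose pair set
```

Scoring every candidate transform by its inlier count, rather than its fit error on the sampled pairs alone, is what makes the selection robust to mismatched picture pairs.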
2. The method of claim 1, wherein calculating the matching transformation parameters of the current pose pair set and determining the transformation effect evaluation result obtained when the current pose pair set is transformed using the matching transformation parameters comprises:
substituting the first pose information and the second pose information of the pose pairs in the current pose pair set into a preset first objective function E(t, R, c), and determining the transformation parameters (t0, R0, c0) that minimize the function value of the first objective function as the matching transformation parameters;
wherein:
the first objective function is E(t, R, c) = Σ_{i=1}^{L} ||y_i - (cRx_i + t)||^2, the sum running over the L pose pairs in the current pose pair set;
the second objective function is e(t, R, c) = ||y_i - (cRx_i + t)||^2;
t is used for representing the translation parameter in the transformation parameters;
R is used for representing the rotation parameter in the transformation parameters;
c is used for representing the scale parameter in the transformation parameters;
x_i is used for representing the translation part of the first pose information in the ith pose pair;
y_i is used for representing the translation part of the second pose information in the ith pose pair;
for each pose pair, substituting the matching transformation parameters and the two translation parts of the pose pair into the second objective function to obtain the corresponding function value of the second objective function;
if the obtained function value of the second objective function is smaller than a preset threshold, recording the pose pair as an interior point element in a corresponding interior point set, so that the transformation effect evaluation result can be characterized by the number of interior point elements in the interior point set;
and the pose pair set with the best transformation effect evaluation result is the pose pair set corresponding to the interior point set with the largest number of interior point elements.
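The minimizer of the first objective function admits a closed-form solution, commonly attributed to Umeyama (1991). The sketch below, under the same array conventions as the previous sketch, supplies the fit_similarity helper assumed earlier, together with the per-pair second objective used for the interior point test; it is an illustrative sketch, not the claimed implementation.

```python
import numpy as np

def fit_similarity(x, y):
    """Closed-form minimizer of E(t, R, c) = sum_i ||y_i - (c R x_i + t)||^2
    over scale c, rotation R, and translation t (Umeyama-style alignment).
    x, y: (n, 3) arrays holding the paired translation parts."""
    mx, my = x.mean(axis=0), y.mean(axis=0)
    xc, yc = x - mx, y - my                       # centred coordinates
    cov = yc.T @ xc / len(x)                      # cross-covariance of the pairs
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                            # keep R a proper rotation (det = +1)
    R = U @ S @ Vt
    c = np.trace(np.diag(D) @ S) / ((xc ** 2).sum() / len(x))  # optimal scale
    t = my - c * R @ mx                           # optimal translation
    return c, R, t

def second_objective(c, R, t, xi, yi):
    """Per-pair cost e(t, R, c) = ||y_i - (c R x_i + t)||^2."""
    return float(np.sum((yi - (c * R @ xi + t)) ** 2))
```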
3. The method of claim 2, wherein determining the initial transformation parameters according to at least some of the pose pairs in the target pose pair set comprises:
substituting the first pose information and the second pose information of each pose pair recorded in the interior point set corresponding to the target pose pair set into the first objective function, and determining the transformation parameters that minimize the function value of the first objective function as the initial transformation parameters.
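Continuing the same illustrative sketch, the refit on the recorded interior point set described in claim 3 reduces to two lines (all numeric arguments hypothetical):

```python
# Hypothetical usage, reusing the helpers sketched above.
inlier_idx = select_target_pose_set(x, y, n_trials=200, set_size=4, thresh=0.05)
c0, R0, t0 = fit_similarity(x[inlier_idx], y[inlier_idx])  # initial (t0, R0, c0)
```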
4. The method according to any one of claims 1 to 3, wherein determining the target transformation parameters according to the initial transformation parameters comprises:
refining the initial transformation parameters using an iterative closest point (ICP) method to obtain the target transformation parameters.
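As a rough illustration of the refinement in claim 4, a minimal point-to-point ICP loop is sketched below, seeded with the initial parameters and reusing fit_similarity from the sketch after claim 2; src and dst stand for the online and offline point clouds as (n, 3) arrays. This is not the claimed implementation, and a practical version would add convergence tests and distance gating of correspondences.

```python
import numpy as np
from scipy.spatial import cKDTree

def refine_icp(src, dst, c, R, t, n_iters=30):
    """Minimal iterative-closest-point refinement: alternately match each
    transformed source point to its nearest target point, then refit the
    similarity transform on those correspondences."""
    tree = cKDTree(dst)                           # target (offline) cloud
    for _ in range(n_iters):
        moved = c * src @ R.T + t                 # apply the current transform
        _, nn = tree.query(moved)                 # nearest target point indices
        c, R, t = fit_similarity(src, dst[nn])    # refit on the correspondences
    return c, R, t                                # target transformation parameters
```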
5. The method of claim 1, wherein the target transformation parameters comprise a target translation parameter and a target rotation parameter.
6. The method of claim 5, wherein the target transformation parameter further has a target scale parameter.
7. A data processing apparatus for three-dimensional reconstruction, comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring first point cloud information, a first picture set, second point cloud information and a second picture set, the first point cloud information is obtained when a three-dimensional scene is reconstructed on line, and the first picture set is a set of pictures used when the three-dimensional scene is reconstructed on line; the second point cloud information is obtained when the three-dimensional scene is reconstructed offline, and the second picture set is a set of pictures used when the three-dimensional scene is reconstructed offline;
a positioning module, configured to determine K first target pictures in the first picture set and K second target pictures in the second picture set; the K first target pictures are respectively matched with K second target pictures in the second picture set; wherein K is a positive integer greater than or equal to 3;
a pose determining module, configured to determine first pose information corresponding to the first target picture and second pose information corresponding to the second target picture;
a target transformation parameter determining module, configured to determine target transformation parameters of the first point cloud information and the second point cloud information according to the K pieces of first pose information and the K pieces of second pose information;
wherein the target transformation parameter determining module comprises:
an initial transformation parameter determining submodule, configured to determine initial transformation parameters between the K first target pictures and the K second target pictures according to the first pose information and the second pose information;
a target transformation parameter determining submodule, configured to determine the target transformation parameters according to the initial transformation parameters;
the initial transformation parameter determination submodule includes:
a randomizing unit, configured to randomly determine a current pose pair set N times, wherein the pose pair set comprises L pairs of pose pairs, and each pose pair comprises paired first pose information and second pose information;
a calculation and evaluation unit, configured to, each time a current pose pair set is determined, calculate matching transformation parameters of the current pose pair set and determine a transformation effect evaluation result obtained when the current pose pair set is transformed using the matching transformation parameters;
a target pose pair set determining unit, configured to determine the pose pair set with the best transformation effect evaluation result as a target pose pair set;
and an initial transformation parameter determining unit, configured to determine the initial transformation parameters according to at least some of the pose pairs in the target pose pair set.
8. The apparatus of claim 7, wherein the calculation and evaluation unit comprises:
a first substitution subunit, configured to substitute the first pose information and the second pose information of the pose pairs in the current pose pair set into a preset first objective function E(t, R, c), and to determine the transformation parameters (t0, R0, c0) that minimize the function value of the first objective function as the matching transformation parameters;
wherein:
the first objective function is E(t, R, c) = Σ_{i=1}^{L} ||y_i - (cRx_i + t)||^2, the sum running over the L pose pairs in the current pose pair set;
the second objective function is e(t, R, c) = ||y_i - (cRx_i + t)||^2;
t is used for representing the translation parameter in the transformation parameters;
R is used for representing the rotation parameter in the transformation parameters;
c is used for representing the scale parameter in the transformation parameters;
x_i is used for representing the translation part of the first pose information in the ith pose pair;
y_i is used for representing the translation part of the second pose information in the ith pose pair;
a second substitution subunit, configured to substitute, for each pose pair, the matching transformation parameters and the two translation parts of the pose pair into the second objective function to obtain the corresponding function value of the second objective function;
an interior point set updating subunit, configured to record, if the obtained function value of the second objective function is smaller than a preset threshold, the pose pair as an interior point element in a corresponding interior point set, so that the transformation effect evaluation result can be characterized by the number of interior point elements in the interior point set;
wherein the pose pair set with the best transformation effect evaluation result is the pose pair set corresponding to the interior point set with the largest number of interior point elements.
9. The apparatus according to claim 8, wherein the initial transformation parameter determining unit is specifically configured to:
substitute the first pose information and the second pose information of each pose pair recorded in the interior point set corresponding to the target pose pair set into the first objective function, and determine the transformation parameters that minimize the function value of the first objective function as the initial transformation parameters.
10. The apparatus according to any one of claims 7 to 9, wherein the target transformation parameter determining submodule is specifically configured to:
refine the initial transformation parameters using an iterative closest point (ICP) method to obtain the target transformation parameters.
11. The apparatus of claim 7, wherein the target transformation parameters comprise a target translation parameter and a target rotation parameter.
12. The apparatus of claim 11, wherein the target transformation parameters further comprise a target scale parameter.
13. An electronic device comprising a memory and a processor, wherein:
the memory is configured to store code;
the processor is configured to execute the code in the memory to implement the method steps of any one of claims 1 to 6.
14. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method of any one of claims 1 to 6.
CN201911177578.3A 2019-11-25 2019-11-25 Data processing method and device for three-dimensional reconstruction, electronic equipment and storage medium Active CN111009029B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911177578.3A CN111009029B (en) 2019-11-25 2019-11-25 Data processing method and device for three-dimensional reconstruction, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111009029A (en) 2020-04-14
CN111009029B (en) 2021-05-11

Family

ID=70112026

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911177578.3A Active CN111009029B (en) 2019-11-25 2019-11-25 Data processing method and device for three-dimensional reconstruction, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111009029B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112697146B (en) * 2020-11-19 2022-11-22 北京电子工程总体研究所 Steady regression-based track prediction method

Citations (3)

Publication number Priority date Publication date Assignee Title
CN109493375A (en) * 2018-10-24 2019-03-19 深圳市易尚展示股份有限公司 The Data Matching and merging method of three-dimensional point cloud, device, readable medium
CN109801316A (en) * 2018-12-19 2019-05-24 中国农业大学 A kind of top fruit sprayer three-dimensional point cloud automation method for registering and reconstructing method
CN110264502A (en) * 2019-05-17 2019-09-20 华为技术有限公司 Point cloud registration method and device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN104063894B (en) * 2014-06-13 2017-02-22 中国科学院深圳先进技术研究院 Point cloud three-dimensional model reestablishing method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant