CN109544599B - Three-dimensional point cloud registration method based on camera pose estimation - Google Patents

Three-dimensional point cloud registration method based on camera pose estimation

Info

Publication number
CN109544599B
Authority
CN
China
Prior art keywords
dimensional
point cloud
pts
dimensional point
relative
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201811400144.0A
Other languages
Chinese (zh)
Other versions
CN109544599A (en)
Inventor
荆海龙
朱江平
周佩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN201811400144.0A priority Critical patent/CN109544599B/en
Publication of CN109544599A publication Critical patent/CN109544599A/en
Application granted granted Critical
Publication of CN109544599B publication Critical patent/CN109544599B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a three-dimensional point cloud registration method based on camera pose estimation. The method organically combines the three-dimensional modeling process with the automatic registration of three-dimensional point clouds and optimizes the workflow of multi-view three-dimensional modeling, so that the automatic point cloud registration problem is converted into the problem of estimating the relative pose of the camera at adjacent view angles. Compared with traditional point cloud registration methods, the internal parameters of the camera are fully exploited to simplify the acquisition of the corresponding point set and to speed it up; at the same time, the corresponding point set is screened according to the reliability information of the three-dimensional points produced during three-dimensional modeling, which effectively avoids the influence of outliers on the matching result, accelerates the convergence of the matching process, and improves the matching accuracy.

Description

Three-dimensional point cloud registration method based on camera pose estimation
Technical Field
The invention relates to the field of reverse engineering, in particular to a three-dimensional point cloud registration method based on camera pose estimation.
Background
With the advance of "Made in China 2025" and the "Industry 4.0" plans of countries around the world, three-dimensional modeling technology has developed rapidly and is penetrating many manufacturing fields. With the launch of the Microsoft Kinect depth sensor, Intel RealSense, and the Apple iPhone X, three-dimensional modeling technology is also spreading quickly into consumer electronics, and it is now widely applied across many industries and fields.
Acquiring the three-dimensional information of an object by measurement is the most accurate way to obtain its spatial geometric information in the field of three-dimensional modeling. The collection of three-dimensional points obtained by three-dimensional measurement is commonly referred to as a point cloud. Owing to shadows and occlusion, most three-dimensional measuring instruments cannot capture the complete shape of an object in a single measurement; one measurement yields only the three-dimensional data of part of the object surface at a certain view angle. To measure the whole surface of an object, the three-dimensional measurement results obtained at different view angles must be fused. Since the data acquired in different measurements are not in the same coordinate system, the three-dimensional point clouds obtained at the various view angles must first be unified into a common coordinate system; this process is called point cloud registration. Multi-view point cloud registration is the basis of the subsequent stitching and fusion of the three-dimensional point cloud data, and its speed and accuracy directly affect the quality of three-dimensional modeling and the usefulness of the point cloud data in practical applications.
In the field of point cloud registration, the most classical and most widely used method is the iterative closest point (ICP) algorithm proposed by Besl et al. (Besl P J, McKay N D. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence). Newton iteration or a search method is used to find the closest point pairs between two point clouds, and the Euclidean distance between them is used as the objective function of an iterative optimization that yields the three-dimensional rigid-body transformation. Because of its high accuracy, ICP quickly became the mainstream algorithm for multi-view point cloud registration, and with its wide application researchers have studied it in great detail; most methods available today are developments and improvements of it.
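For concreteness, the following is a minimal sketch (in Python with NumPy, not part of the original disclosure) of a point-to-point ICP loop of the kind described above. The function names, the brute-force closest-point search, and the convergence tolerance are illustrative assumptions; practical implementations use a k-d tree for the search.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (SVD / Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(source, target, iters=50, tol=1e-6):
    """Point-to-point ICP: returns R, t aligning source to target."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = source.copy()
    prev_err = np.inf
    for _ in range(iters):
        # brute-force closest-point search between the two clouds
        d2 = ((cur[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        nearest = target[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, nearest)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = np.sqrt(d2.min(axis=1)).mean()
        if abs(prev_err - err) < tol:  # converge on the mean closest-point distance
            break
        prev_err = err
    return R_total, t_total
```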
The normal distributions transform proposed by Martin Magnusson ("The Three-Dimensional Normal-Distributions Transform: an Efficient Representation for Registration, Surface Analysis, and Loop Detection") applies a statistical model to the three-dimensional points and uses standard optimization techniques to determine the optimal match between two point clouds. Because it does not compute and match features for every corresponding point during registration, its processing time is shorter than that of the iterative closest point method.
However, both of these existing methods treat the point cloud registration problem purely from the three-dimensional point cloud data and do not make effective use of the information available during single-view three-dimensional data acquisition; at present there is no reported method that considers the complete surface modeling of the whole object by integrating three-dimensional data acquisition with point cloud stitching.
Disclosure of Invention
The invention aims to provide a new three-dimensional point cloud registration method that can effectively utilize information from the single-view three-dimensional data acquisition process, so as to improve the accuracy and speed of three-dimensional point cloud registration.
In order to achieve the above purpose, the invention provides the following technical scheme:
a three-dimensional point cloud registration method based on camera pose estimation comprises the following steps:
step S01: reading a new set of three-dimensional modeling images, and performing three-dimensional modeling based on the read three-dimensional modeling images;
step S02: extracting a reliable region of the three-dimensional modeling image to obtain a reliable image;
step S03: according to the reliability map and the calibrated parameter information of the measuring system, calculating the three-dimensional coordinate Pts_{u,v} corresponding to each pixel (u, v) in the reliable region, and forming the three-dimensional point cloud {Pts_cur} of the current view angle from all the calculated three-dimensional coordinates Pts_{u,v};
step S04: performing depth-information feature extraction and color-information feature extraction on the three-dimensional point cloud {Pts_cur} of the current view angle to obtain the multivariate features {Features_cur} formed by spatial-domain features and color-domain features;
step S05: expressing the relative movement of the measured target and the camera by the transformation of the camera position and posture, and initializing the relative pose RT_relative between the camera at the previous view angle and the camera at the current view angle, wherein the displacement parameter T is 0 and the attitude parameter R is an identity matrix;
step S06: transforming the previous view angle to the current view angle: according to the internal parameters of the camera and the relative pose, calculating the pixel coordinate (u', v') in the current three-dimensional modeling image corresponding to each three-dimensional data point of the three-dimensional point cloud {Pts_last} of the previous view angle, and finding, in the reliable region of the current three-dimensional modeling image, the pixel coordinate (u, v)_nearest with the minimum Euclidean distance; the three-dimensional point of {Pts_last} corresponding to the pixel coordinate (u', v') and the three-dimensional point of {Pts_cur} corresponding to the pixel coordinate (u, v)_nearest form a corresponding point pair Pair(u, v), and all the corresponding point pairs form the corresponding point set {Pairs};
step S07: based on a similarity measure, screening the corresponding point set {Pairs} to obtain the credible corresponding point set {Pairs_reliability};
step S08: based on the credible corresponding point set {Pairs_reliability}, calculating the rotation-translation matrix RT_accmu between the corresponding points, applying RT_accmu to the relative pose RT_relative between the previous-view and current-view cameras through matrix operation, and updating the relative pose RT_relative with the result of the calculation;
Step S09: judging whether an iteration termination condition is met; if yes, go to step 10, otherwise, go to step S06 again;
step S10: according to the specific reason of the iteration termination of the step S09, the three-dimensional point cloud { Pts) of the current view angle is judgedcurJudging whether the frame is a key frame, if so, entering the step 11, otherwise, entering the step S16;
wherein, the three-dimensional point cloud { Pts of the current view angle is judgedcurThe way of judging whether the key frame is as follows:
(1) if the three-dimensional measurement of the current visual angle is the first measurement, the default of the three-dimensional point cloud of the current visual angle is the key frame;
(2) if the three-dimensional point cloud { Pts of the current view anglecurAnd the three-dimensional point cloud of the previous view { Pts }lastRegistering successfully, and the three-dimensional measurement of the current visual angle is separated from the three-dimensional measurement of the previous key frame by NKeyFrameIntervalRelative pose RT of secondary three-dimensional measurement or three-dimensional measurement of current view anglerelativeIf the relative pose of the three-dimensional point cloud of the previous key frame exceeds a certain threshold, the three-dimensional point cloud { Pts) of the current view anglecurThe key frames are of a first type;
(3) if the three-dimensional point cloud { Pts of the current view anglecurAnd the three-dimensional point cloud of the previous view { Pts }lastIf the registration is unsuccessful, the current view three-dimensional point cloud is a second type key frame;
step S11: judging the type of the current key frame; if the current key frame is a key frame of the first type, go to step S12; if it is a key frame of the second type, go to step S13;
step S12: taking all the registered three-dimensional point clouds {Pts_all} as the three-dimensional point cloud {Pts_last} of the previous view angle, and taking the relative pose RT_relative updated through step S08 as the initial value of the relative pose, executing steps S05 to S09; after the registration of the three-dimensional point clouds is finished, substituting the updated relative pose RT_relative into step S16;
step S13: using an optimal corresponding point set search method based on affine invariance theory, searching for the optimal corresponding point set between the multivariate features {Features_cur} of the three-dimensional point cloud {Pts_cur} of the current view angle and the multivariate features {Features_all} corresponding to all the registered three-dimensional point clouds {Pts_all}, to obtain the optimal corresponding point set {KeyPairs_feature};
step S14: according to the optimal corresponding point set {KeyPairs_feature}, calculating the relative pose RT_relative between the three-dimensional point cloud {Pts_cur} of the current view angle and all the registered three-dimensional point clouds {Pts_all}, and taking the calculated relative pose RT_relative as the result of coarse registration;
step S15: judging whether the point cloud registration of the steps S12 to S14 is successful; if the registration is successful, the step S16 is entered, otherwise, the step S17 is entered;
step S16: applying the relative pose RT_relative calculated in step S14 to all the registered three-dimensional point clouds {Pts_all}, then adding the three-dimensional point cloud {Pts_cur} of the current view angle to all the registered three-dimensional point clouds {Pts_all}, and updating the relative pose of the last key frame with RT_relative;
step S17: discarding the data of the three-dimensional point cloud {Pts_cur} of the current view angle;
step S18: judging whether three-dimensional data acquisition is finished or not; if not, the process proceeds to step S01, and registration of the three-dimensional point cloud data of the next view angle is performed.
According to a specific embodiment, in the three-dimensional point cloud registration method based on camera pose estimation of the present invention, whether the registration is successful is judged as follows: judging whether the mean value and the variance of the distances between all corresponding points in the corresponding point set meet the requirements; or judging whether the feature-similarity statistics of the corresponding points in the corresponding point set meet the requirements.
According to a specific embodiment, in the three-dimensional point cloud registration method based on camera pose estimation of the present invention, the three-dimensional modeling image is a single-frame or multi-frame randomly coded image or a fringe structured-light image.
According to a specific embodiment, in the three-dimensional point cloud registration method based on camera pose estimation of the present invention, when the three-dimensional modeling image is a randomly coded image, the average sub-region gray-level fluctuation is used as the reliability evaluation index to obtain the reliability map; wherein,
the average sub-region gray-level fluctuation is:
[equation image in the original: the average of the gray-level fluctuations S_p(μ, ν) of the sub-neighborhoods contained in the H×V neighborhood N(u, v, H, V) around the pixel (u, v)]
where N(u, v, H, V) represents an H×V neighborhood around the pixel (u, v), and S_p(μ, ν) represents the gray-level fluctuation of the p-th sub-neighborhood; the sub-region gray-level fluctuation S(μ, ν) of the 3×3 neighborhood of the pixel (u, v) is:
[equation image in the original: the fluctuation of the gray levels of the 3×3 neighborhood of the pixel (u, v) across the N three-dimensional modeling images I_i, measured against the mean Ī(u, v)]
where I_i is the i-th three-dimensional modeling image, N is the number of three-dimensional modeling images taken by a single camera to complete one three-dimensional modeling, and Ī(u, v) is the gray-level mean of the 3×3 neighborhoods of all the three-dimensional modeling images corresponding to the pixel (u, v).
According to a specific embodiment, in the three-dimensional point cloud registration method based on camera pose estimation of the present invention, when the three-dimensional modeling is fringe structured-light three-dimensional modeling, the modulation degree of the fringes is used as the reliability evaluation index to obtain the reliability map; the modulation degree obtained from the N phase-shifted fringes is:
M(x, y) = √{[Σ_{n=1}^{N} I_n(x, y)sin(2πn/N)]² + [Σ_{n=1}^{N} I_n(x, y)cos(2πn/N)]²} = (N/2)·I_0(x, y)·C_0(x, y)
wherein I_0(x, y) is the background light intensity, C_0(x, y) is the contrast of the fringes, I_n(x, y) is the intensity of the n-th phase-shifted fringe, and N represents the number of phase-shift steps of the fringe structured light.
According to a specific implementation mode, in the three-dimensional point cloud registration method based on camera pose estimation, the three-dimensional modeling result is as follows:
Pts3D=Calc3D(ImgVec,R,SysInfo)
wherein Calc3D is a modeling function, ImgVec is a modeling image, R is the reliability map, and SysInfo is a system parameter.
According to a specific embodiment, in the three-dimensional point cloud registration method based on camera pose estimation of the present invention, in step S04, the obtained multivariate features are:
Pts3D_feature = feature(Pts3D, ImgVec, Radius, FeaID)
wherein Radius is the size of the selected surrounding neighborhood when calculating the characteristics of the pixel (u, v), FeaID is the combination form of the characteristics, and Pts3D is the three-dimensional modeling result; ImgVec is the modeled image.
According to a specific embodiment, in the three-dimensional point cloud registration method based on camera pose estimation of the present invention, in step S07, based on the similarity measure, the corresponding point set {Pairs} is screened as follows: corresponding point pairs whose similarity measure exceeds a certain threshold are retained, and the remaining corresponding point pairs are deleted; wherein the similarity measure is:
Similarity_Pair = w_1·Similarity_Color(Pair) + w_2·Similarity_Geometry(Pair)
wherein Similarity_Color(Pair) is the color-feature similarity measure of the corresponding point pair Pair(u, v), Similarity_Geometry(Pair) is the geometric-feature similarity measure of the corresponding point pair Pair(u, v), and w_1, w_2 are the weights of the color feature and the geometric feature, respectively.
Further, the similarity measure of the color features is the covariance of the color features of the corresponding point pair Pair(u, v) in the color domain; the similarity measure of the geometric features is the covariance of the geometric features of the corresponding point pair Pair(u, v) in the spatial domain.
Compared with the prior art, the invention has the following beneficial effects. The three-dimensional point cloud registration method based on camera pose estimation organically combines the three-dimensional modeling process with the automatic registration of three-dimensional point clouds and optimizes the workflow of multi-view three-dimensional modeling, so that the automatic point cloud registration problem is converted into the problem of estimating the relative pose of the camera at adjacent view angles. Compared with traditional point cloud registration methods, the internal parameters of the camera are fully exploited to simplify the acquisition of the corresponding point set and to speed it up; at the same time, the corresponding point set is screened according to the reliability information of the three-dimensional points produced during three-dimensional modeling, which effectively avoids the influence of outliers on the matching result, accelerates the convergence of the matching process, and improves the matching accuracy. In particular, the method is well suited to acquiring multi-view three-dimensional point clouds through relative motion between the target object and the three-dimensional data acquisition device, and it adapts well to cases in which the difference between adjacent view angles is so large that classical point cloud registration methods fail.
Description of the drawings:
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
The present invention will be described in further detail with reference to test examples and specific embodiments. It should be understood that the scope of the above-described subject matter is not limited to the following examples, and any techniques implemented based on the disclosure of the present invention are within the scope of the present invention.
As shown in fig. 1, the three-dimensional point cloud registration method based on camera pose estimation of the present invention includes the following steps:
step S01: reading a new set of three-dimensional modeling images, and performing three-dimensional modeling based on the read three-dimensional modeling images; specifically, in the implementation of the present invention, the read three-dimensional modeling image is a single-frame or multi-frame randomly coded image, or a stripe-structured light image. Of course, the three-dimensional modeling image read by the invention can also be a coded image in other forms as long as the three-dimensional modeling can be realized under the support of a certain algorithm.
Step S02: and extracting a reliable region of the three-dimensional modeling image to obtain a reliable image.
Specifically, when the three-dimensional modeling image is a randomly coded image, the average sub-region gray-level fluctuation is used as the reliability evaluation index to obtain the reliability map; wherein,
the average sub-region gray-level fluctuation is:
[equation image in the original: the average of the gray-level fluctuations S_p(μ, ν) of the sub-neighborhoods contained in the H×V neighborhood N(u, v, H, V) around the pixel (u, v)]
where N(u, v, H, V) represents an H×V neighborhood around the pixel (u, v), and S_p(μ, ν) represents the gray-level fluctuation of the p-th sub-neighborhood; the sub-region gray-level fluctuation S(μ, ν) of the 3×3 neighborhood of the pixel (u, v) is:
[equation image in the original: the fluctuation of the gray levels of the 3×3 neighborhood of the pixel (u, v) across the N three-dimensional modeling images I_i, measured against the mean Ī(u, v)]
where I_i is the i-th three-dimensional modeling image, N is the number of three-dimensional modeling images taken by a single camera to complete one three-dimensional modeling, and Ī(u, v) is the gray-level mean of the 3×3 neighborhoods of all the three-dimensional modeling images corresponding to the pixel (u, v). Of course, neighborhoods of other sizes may be selected instead of the 3×3 neighborhood.
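As an illustration only (the exact fluctuation formulas are given as equation images in the original publication), the following Python/NumPy sketch assumes that the gray-level fluctuation is the mean absolute deviation of each pixel from its across-image mean, pooled over the 3×3 neighborhood, and that the reliability map thresholds this fluctuation averaged over an H×V window; the threshold value and the function names are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def reliability_from_gray_fluctuation(images, H=7, V=7, threshold=5.0):
    """Illustrative reliability map from gray-level fluctuation across N coded images.

    images: (N, rows, cols) array of the N randomly coded three-dimensional modeling images.
    Returns a boolean map marking pixels whose average fluctuation exceeds the threshold.
    """
    imgs = np.asarray(images, dtype=np.float64)
    mean_img = imgs.mean(axis=0)                        # per-pixel mean over the N images
    fluct = np.abs(imgs - mean_img).mean(axis=0)        # per-pixel gray-level fluctuation
    fluct_3x3 = uniform_filter(fluct, size=3)           # pooled over the 3x3 neighborhood: S(u, v)
    avg_fluct = uniform_filter(fluct_3x3, size=(V, H))  # averaged over the H x V neighborhood
    return avg_fluct > threshold                        # reliable-region mask
```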
Specifically, when fringe structured light is used for the three-dimensional modeling, the modulation degree of the fringes is used as the reliability evaluation index to obtain the reliability map. Taking N-step phase shifting of the fringe structured light as an example, a set of phase-shifted fringes is projected onto the surface of the measured object, and the intensity distribution of the n-th phase-shifted fringe on the object surface can be expressed as:
I_n(x, y) = I_0(x, y){1 + C_0(x, y)cos[Φ(x, y) + 2πn/N]}
wherein I_0(x, y) is the background light intensity, C_0(x, y) is the contrast of the fringes, and Φ(x, y) is the phase distribution of the fringes; the modulation degree of the fringes is then expressed as
M(x, y) = √{[Σ_{n=1}^{N} I_n(x, y)sin(2πn/N)]² + [Σ_{n=1}^{N} I_n(x, y)cos(2πn/N)]²}
which simplifies to
M(x, y) = (N/2)·I_0(x, y)·C_0(x, y)
It can be seen that the modulation degree M(x, y) is directly proportional to I_0(x, y)C_0(x, y), the two differing only by a scaling factor, so the modulation degree M(x, y) can be used as the reliability evaluation criterion of the three-dimensional modeling image.
In a specific implementation, when the reliability index of a pixel (u, v) is greater than a certain threshold, the three-dimensional modeling result corresponding to that pixel is judged to be credible; otherwise it is regarded as not credible and is set to 0. Setting a threshold in this way eliminates the unreliable regions of the three-dimensional modeling image, so that the credible, valid pixels form an effective region in which three-dimensional modeling can be carried out effectively.
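For illustration (not part of the original disclosure), a minimal Python/NumPy sketch of the fringe-modulation reliability map under the N-step phase-shifting model above is given below; the function name and the threshold value are assumptions.

```python
import numpy as np

def modulation_reliability(fringes, threshold=10.0):
    """Reliability map from fringe modulation for N-step phase-shifted images.

    fringes: (N, rows, cols) array of the N phase-shifted fringe images I_n.
    Returns (modulation M, boolean reliable-region mask).
    """
    I = np.asarray(fringes, dtype=np.float64)
    N = I.shape[0]
    n = np.arange(1, N + 1).reshape(-1, 1, 1)
    s = (I * np.sin(2 * np.pi * n / N)).sum(axis=0)
    c = (I * np.cos(2 * np.pi * n / N)).sum(axis=0)
    M = np.sqrt(s**2 + c**2)            # proportional to I_0(x, y) * C_0(x, y)
    return M, M > threshold             # pixels above the threshold form the reliable region
```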
Step S03: according to the reliability map and the calibrated parameter information of the measuring system, calculate the three-dimensional coordinate Pts_{u,v} corresponding to each pixel (u, v) in the reliable region, and form the three-dimensional point cloud {Pts_cur} of the current view angle from all the calculated three-dimensional coordinates Pts_{u,v}.
Specifically, in order to speed up the calculation through data parallelism in the subsequent three-dimensional point cloud registration, the three-dimensional point cloud data can be organized like a color image, as a rectangular array with the same rows and columns as the input three-dimensional modeling image, each data element being a three-dimensional spatial coordinate formed by the three values x, y and z.
The three-dimensional modeling result is as follows:
Pts3D=Calc3D(ImgVec,R,SysInfo)
wherein Calc3D is a modeling function, ImgVec is a modeling image, R is a reliability map, and SysInfo is a system parameter.
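The modeling function Calc3D depends on the coding scheme and the calibrated system parameters. As an illustration only, the following sketch assumes a pinhole camera with calibrated intrinsics and a per-pixel depth map produced by the modeling step, and back-projects it into an organized point cloud with the same rows and columns as the modeling image; the function and parameter names are assumptions, not the patent's own interfaces.

```python
import numpy as np

def organized_point_cloud(depth, K, reliable):
    """Back-project a per-pixel depth map into an organized (rows, cols, 3) point cloud.

    depth:    (rows, cols) depth along the optical axis from the modeling step.
    K:        3x3 camera intrinsic matrix from the system calibration (part of SysInfo).
    reliable: boolean reliability map; unreliable pixels are set to NaN.
    """
    rows, cols = depth.shape
    u, v = np.meshgrid(np.arange(cols), np.arange(rows))
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    pts = np.dstack([x, y, depth]).astype(np.float64)   # Pts_{u,v} at every pixel
    pts[~reliable] = np.nan                              # keep the image-like layout
    return pts
```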
Step S04: perform depth-information feature extraction and color-information feature extraction on the three-dimensional point cloud {Pts_cur} of the current view angle to obtain the multivariate features {Features_cur} formed by spatial-domain features and color-domain features.
In implementation, in order to allow parallel acceleration, various features can be selected and combined according to the actual measurement conditions. For example, the depth-information feature in the spatial domain may consist of the normal, the statistical distribution of normals in the spatial neighborhood, the topological structure of the points in the spatial neighborhood, the statistical distribution of distances of the points in the spatial neighborhood, the statistical distribution of chromaticity of the points in the spatial neighborhood, and so on, or it may be a weighted combination of several of these geometric features. The points participating in the feature statistics may be all points in the spatial neighborhood or only some of them, and the pattern used to select points inside the spatial neighborhood can be chosen in various ways, for example a cross pattern, an asterisk (米-shaped) pattern, or a random selection.
When the color feature is extracted, the image carrying the color information is converted into a YUV or HSV color space, and the brightness information is not considered in the color feature, so that the brightness change factor caused by the shooting visual angle and the surface shape of the object is eliminated.
Specifically, the multivariate characteristics are:
Pts3D_feature = feature(Pts3D, ImgVec, Radius, FeaID)
wherein Radius is the size of the selected surrounding neighborhood when calculating the characteristics of the pixel (u, v), FeaID is the combination form of the characteristics, and Pts3D is the three-dimensional modeling result; ImgVec is the modeled image. The combination of features is in the form of a single color information feature or depth information feature, or a color information feature and a depth information feature.
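As an illustration only, the following sketch computes one possible multivariate feature for a single point: a surface normal estimated from the spatial neighborhood, combined with hue and saturation (the luminance channel is discarded, as described above). Which features are actually combined is controlled by FeaID; the choices, names, and neighborhood handling below are assumptions.

```python
import numpy as np
import colorsys

def point_feature(pts, colors, idx, neighbor_idx):
    """One illustrative multivariate feature for the point `idx`.

    pts:          (M, 3) point coordinates; colors: (M, 3) RGB values in [0, 1].
    neighbor_idx: indices of the spatial neighborhood of the point (the Radius selection).
    Returns the concatenation [normal (3), hue, saturation]: spatial + color domain feature.
    """
    nb = pts[neighbor_idx] - pts[neighbor_idx].mean(axis=0)
    # the normal is the eigenvector of the neighborhood covariance with the smallest eigenvalue
    _, _, Vt = np.linalg.svd(nb, full_matrices=False)
    normal = Vt[-1]
    h, s, _v = colorsys.rgb_to_hsv(*colors[idx])    # drop the luminance (value) channel
    return np.concatenate([normal, [h, s]])
```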
Step S05: express the relative movement of the measured target and the camera by the transformation of the camera position and posture, and initialize the relative pose RT_relative between the camera at the previous view angle and the camera at the current view angle, wherein the displacement parameter T is 0 and the attitude parameter R is an identity matrix.
Step S06: transform the previous view angle to the current view angle: according to the internal parameters of the camera and the relative pose, calculate the pixel coordinate (u', v') in the current three-dimensional modeling image corresponding to each three-dimensional data point of the three-dimensional point cloud {Pts_last} of the previous view angle, and find, in the reliable region of the current three-dimensional modeling image, the pixel coordinate (u, v)_nearest with the minimum Euclidean distance; the three-dimensional point of {Pts_last} corresponding to the pixel coordinate (u', v') and the three-dimensional point of {Pts_cur} corresponding to the pixel coordinate (u, v)_nearest form a corresponding point pair Pair(u, v), and all the corresponding point pairs form the corresponding point set {Pairs}.
Specifically, according to the current external parameters of the camera and the internal parameters of the camera calibrated in advance, each valid point of the three-dimensional point cloud {Pts_last} measured at the previous view angle is projected, according to the camera model, onto the image of the current view angle, so as to obtain the imaging pixel coordinate (u', v') of each valid point at the current view angle. The current-view three-dimensional point cloud obtained in step S03 is organized in matrix form.
The camera model, and the perspective projection of three-dimensional points onto the pixel coordinate system according to the camera model, are basic knowledge in the field and are not described in detail in this patent.
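For illustration only, a minimal Python/NumPy sketch of this projective search for corresponding points is given below: the previous-view cloud is transformed by the current estimate of RT_relative, projected through the camera intrinsics, and paired with the nearest reliable pixel of the current view. The small search window, the names, and the structure are assumptions rather than the patent's exact procedure.

```python
import numpy as np

def project_and_pair(pts_last, R, t, K, reliable, pts_cur, window=2):
    """Build corresponding point pairs by projecting the previous-view cloud into the current view.

    pts_last: (M, 3) previous-view points; R, t: current estimate of RT_relative.
    K: 3x3 intrinsics; reliable: (rows, cols) bool map; pts_cur: (rows, cols, 3) organized cloud.
    Returns a list of (3D point from pts_last, 3D point from pts_cur) pairs.
    """
    rows, cols = reliable.shape
    pairs = []
    for p in pts_last:
        q = R @ p + t                                   # previous view -> current view
        if q[2] <= 0:
            continue
        u = K[0, 0] * q[0] / q[2] + K[0, 2]             # perspective projection (u', v')
        v = K[1, 1] * q[1] / q[2] + K[1, 2]
        ui, vi = int(round(u)), int(round(v))
        if not (0 <= ui < cols and 0 <= vi < rows):
            continue
        # nearest reliable pixel (u, v)_nearest inside a small window around (u', v')
        best, best_d = None, np.inf
        for dv in range(-window, window + 1):
            for du in range(-window, window + 1):
                uu, vv = ui + du, vi + dv
                if 0 <= uu < cols and 0 <= vv < rows and reliable[vv, uu]:
                    d = (uu - u) ** 2 + (vv - v) ** 2
                    if d < best_d:
                        best, best_d = (vv, uu), d
        if best is not None:
            pairs.append((p, pts_cur[best]))
    return pairs
```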
Step S07: based on a similarity measure, screen the corresponding point set {Pairs} to obtain the credible corresponding point set {Pairs_reliability}. Shadows and occlusions exist during the measurement process, and the corresponding point pairs formed in those regions are unreliable. Therefore, the corresponding point set {Pairs} is screened as follows: corresponding point pairs whose similarity measure exceeds a certain threshold are retained, and the remaining corresponding point pairs are deleted; wherein the similarity measure is:
Similarity_Pair = w_1·Similarity_Color(Pair) + w_2·Similarity_Geometry(Pair)
wherein Similarity_Color(Pair) is the color-feature similarity measure of the corresponding point pair Pair(u, v), Similarity_Geometry(Pair) is the geometric-feature similarity measure of the corresponding point pair Pair(u, v), and w_1, w_2 are the weights of the color feature and the geometric feature, respectively.
Further, the similarity measure of the color features is the covariance of the color features of the corresponding point pair Pair(u, v) in the color domain; the similarity measure of the geometric features is the covariance of the geometric features of the corresponding point pair Pair(u, v) in the spatial domain.
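A minimal sketch of the weighted similarity screening follows, assuming the color-feature and geometric-feature similarities have already been computed per pair; the weights and the threshold are illustrative values only.

```python
import numpy as np

def screen_pairs(pairs, sim_color, sim_geom, w1=0.4, w2=0.6, threshold=0.5):
    """Keep only the corresponding point pairs whose weighted similarity exceeds the threshold.

    pairs:     list of corresponding point pairs from the projective search.
    sim_color: per-pair color-feature similarity (e.g. covariance in the color domain).
    sim_geom:  per-pair geometric-feature similarity (e.g. covariance in the spatial domain).
    """
    sim = w1 * np.asarray(sim_color) + w2 * np.asarray(sim_geom)
    keep = sim > threshold
    return [pair for pair, ok in zip(pairs, keep) if ok]
```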
Step S08: point set { Pairs) based on credible corresponding pointsrelabilityAnd calculating a rotation translation matrix RT between corresponding pointsaccmuThe RT isaccmuActing on relative pose RT between previous visual angle and current visual angle camera through matrix operationrelativeAnd updating the relative pose RT using the calculation resultrelative
Step S09: judge whether an iteration termination condition is met; if yes, go to step S10, otherwise return to step S06. Specifically, the termination conditions include the number of iterations, a judgment of the convergence state of the iterative process, and so on, where the convergence state of the iterative process refers to the difference between the matrices obtained in two successive iterations; the setting of iteration termination conditions belongs to the prior art and is not described in detail here.
Step S10: according to the specific reason for the iteration termination in step S09, judge whether the three-dimensional point cloud {Pts_cur} of the current view angle is a key frame; if yes, go to step S11, otherwise go to step S16;
wherein, whether the three-dimensional point cloud {Pts_cur} of the current view angle is a key frame is judged as follows:
(1) if the three-dimensional measurement of the current view angle is the first measurement, the three-dimensional point cloud of the current view angle is by default a key frame;
(2) if the three-dimensional point cloud {Pts_cur} of the current view angle and the three-dimensional point cloud {Pts_last} of the previous view angle are registered successfully, and the three-dimensional measurement of the current view angle is N_KeyFrameInterval measurements away from the three-dimensional measurement of the previous key frame, or the relative pose RT_relative of the three-dimensional measurement of the current view angle with respect to the three-dimensional point cloud of the previous key frame exceeds a certain threshold, the three-dimensional point cloud {Pts_cur} of the current view angle is a key frame of the first type;
(3) if the three-dimensional point cloud {Pts_cur} of the current view angle and the three-dimensional point cloud {Pts_last} of the previous view angle are not registered successfully, the current-view three-dimensional point cloud is a key frame of the second type.
Step S11: judge the type of the current key frame; if the current key frame is a key frame of the first type, go to step S12; if it is a key frame of the second type, go to step S13.
The type of the current key frame is judged as follows: if the registration of the current-view three-dimensional point cloud with the previous-view three-dimensional point cloud is successful, and the current key frame is at least N_KeyFrameInterval frames away from the previous key frame, or the relative position T_KeyFrame or attitude N_KeyFrameRotate between the current view angle and the view angle of the previous key frame exceeds a certain threshold, the current key frame is a key frame of the first type; if the registration of the current-view three-dimensional point cloud with the previous-view three-dimensional point cloud is unsuccessful, the current key frame is a key frame of the second type.
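A minimal sketch of this key-frame classification is given below for illustration; the interval, translation, and rotation thresholds are placeholder values, not values disclosed in the patent.

```python
import numpy as np

def keyframe_type(registered_ok, frames_since_keyframe, R_rel, t_rel,
                  key_frame_interval=10, t_thresh=0.05, rot_thresh_deg=15.0):
    """Classify the current frame: None (ordinary), 1 (first-type key frame) or 2 (second-type)."""
    if not registered_ok:
        return 2                                       # registration failed -> second type
    # rotation angle of R_rel, recovered from its trace
    angle = np.degrees(np.arccos(np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)))
    if (frames_since_keyframe >= key_frame_interval
            or np.linalg.norm(t_rel) > t_thresh
            or angle > rot_thresh_deg):
        return 1                                       # far enough from the last key frame -> first type
    return None                                        # not a key frame
```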
Step S12: take all the registered three-dimensional point clouds {Pts_all} as the three-dimensional point cloud {Pts_last} of the previous view angle, and, taking the relative pose RT_relative updated through step S08 as the initial value of the relative pose, execute steps S05 to S09; after the registration of the three-dimensional point clouds is finished, substitute the updated relative pose RT_relative into step S16.
Step S13: using an optimal corresponding point set search method based on affine invariance theory, search for the optimal corresponding point set between the multivariate features {Features_cur} of the three-dimensional point cloud {Pts_cur} of the current view angle and the multivariate features {Features_all} corresponding to all the registered three-dimensional point clouds {Pts_all}, to obtain the optimal corresponding point set {KeyPairs_feature}.
Step S14: according to the optimal corresponding point set {KeyPairs_feature}, calculate the relative pose RT_relative between the three-dimensional point cloud {Pts_cur} of the current view angle and all the registered three-dimensional point clouds {Pts_all}, and take the calculated relative pose RT_relative as the result of coarse registration.
Step S15: judging whether the point cloud registration of the steps S12 to S14 is successful; if the registration is successful, the process proceeds to step S16, otherwise, the process proceeds to step S17.
Step S16: apply the relative pose RT_relative calculated in step S14 to all the registered three-dimensional point clouds {Pts_all}, then add the three-dimensional point cloud {Pts_cur} of the current view angle to all the registered three-dimensional point clouds {Pts_all}, and update the relative pose of the last key frame with RT_relative; this relative pose is the pose of the last key frame relative to the newly added three-dimensional point cloud {Pts_cur} of the current view angle.
Step S17: discard the data of the three-dimensional point cloud {Pts_cur} of the current view angle.
Step S18: judging whether three-dimensional data acquisition is finished or not; if not, the process proceeds to step S01, and registration of the three-dimensional point cloud data of the next view angle is performed.
In the invention, whether registration is successful is judged as follows: judge whether the mean and the variance of the distances between the corresponding points of the corresponding point set meet the requirements; or judge whether the feature-similarity statistics of the corresponding points of the corresponding point set meet the requirements.
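For illustration, the distance-statistics form of this success check can be sketched as follows; the acceptance thresholds are placeholder values.

```python
import numpy as np

def registration_succeeded(pairs, mean_thresh=0.002, var_thresh=1e-6):
    """Judge success from the mean and variance of the corresponding-point distances."""
    d = np.array([np.linalg.norm(p - q) for p, q in pairs])
    return bool(d.mean() < mean_thresh and d.var() < var_thresh)
```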
The three-dimensional point cloud registration method based on camera pose estimation of the invention organically combines the three-dimensional modeling process with the automatic registration of three-dimensional point clouds and optimizes the workflow of multi-view three-dimensional modeling, so that the automatic point cloud registration problem is converted into the problem of estimating the relative pose of the camera at adjacent view angles. Compared with traditional point cloud registration methods, the internal parameters of the camera are fully exploited to simplify the acquisition of the corresponding point set and to speed it up; at the same time, the corresponding point set is screened according to the reliability information of the three-dimensional points produced during three-dimensional modeling, which effectively avoids the influence of outliers on the matching result, accelerates the convergence of the matching process, and improves the matching accuracy. In particular, the method is well suited to acquiring multi-view three-dimensional point clouds through relative motion between the target object and the three-dimensional data acquisition device, and it adapts well to cases in which the difference between adjacent view angles is so large that classical point cloud registration methods fail.

Claims (9)

1. A three-dimensional point cloud registration method based on camera pose estimation is characterized by comprising the following steps:
step S01: reading a new set of three-dimensional modeling images, and performing three-dimensional modeling based on the read three-dimensional modeling images;
step S02: extracting a reliable region of the three-dimensional modeling image to obtain a reliable image;
step S03: according to the reliability map and the calibrated parameter information of the measuring system, calculating the three-dimensional coordinate Pts_{u,v} corresponding to each pixel (u, v) in the reliable region, and forming the three-dimensional point cloud {Pts_cur} of the current view angle from all said calculated three-dimensional coordinates Pts_{u,v};
step S04: performing depth-information feature extraction and color-information feature extraction on the three-dimensional point cloud {Pts_cur} of the current view angle to obtain the multivariate features {Features_cur} formed by spatial-domain features and color-domain features;
step S05: expressing the relative movement of the measured target and the camera by the transformation of the camera position and posture, and initializing the relative pose RT_relative between the camera at the previous view angle and the camera at the current view angle, wherein the displacement parameter T is 0 and the attitude parameter R is an identity matrix;
step S06: transforming the previous view angle to the current view angle: according to the internal parameters of the camera and the relative pose, calculating the pixel coordinate (u', v') in the current three-dimensional modeling image corresponding to each three-dimensional data point of the three-dimensional point cloud {Pts_last} of the previous view angle, and finding, in the reliable region of the current three-dimensional modeling image, the pixel coordinate (u, v)_nearest with the minimum Euclidean distance; the three-dimensional point of {Pts_last} corresponding to the pixel coordinate (u', v') and the three-dimensional point of {Pts_cur} corresponding to the pixel coordinate (u, v)_nearest form a corresponding point pair Pair(u, v), and all the corresponding point pairs form the corresponding point set {Pairs};
step S07: based on a similarity measure, screening the corresponding point set {Pairs} to obtain the credible corresponding point set {Pairs_reliability};
step S08: based on the credible corresponding point set {Pairs_reliability}, calculating the rotation-translation matrix RT_accmu between the corresponding points, applying RT_accmu to the relative pose RT_relative between the previous-view and current-view cameras through matrix operation, and updating the relative pose RT_relative with the result of the calculation;
step S09: judging whether an iteration termination condition is met; if yes, go to step S10, otherwise return to step S06;
step S10: according to the specific reason for the iteration termination in step S09, judging whether the three-dimensional point cloud {Pts_cur} of the current view angle is a key frame; if yes, go to step S11, otherwise go to step S16;
wherein, whether the three-dimensional point cloud {Pts_cur} of the current view angle is a key frame is judged as follows:
(1) if the three-dimensional measurement of the current view angle is the first measurement, the three-dimensional point cloud of the current view angle is by default a key frame;
(2) if the three-dimensional point cloud {Pts_cur} of the current view angle and the three-dimensional point cloud {Pts_last} of the previous view angle are registered successfully, and the three-dimensional measurement of the current view angle is N_KeyFrameInterval measurements away from the three-dimensional measurement of the previous key frame, or the relative pose RT_relative of the three-dimensional measurement of the current view angle with respect to the three-dimensional point cloud of the previous key frame exceeds a certain threshold, the three-dimensional point cloud {Pts_cur} of the current view angle is a key frame of the first type;
(3) if the three-dimensional point cloud {Pts_cur} of the current view angle and the three-dimensional point cloud {Pts_last} of the previous view angle are not registered successfully, the current-view three-dimensional point cloud is a key frame of the second type;
step S11: judging the type of the current key frame; if the current key frame is a key frame of the first type, go to step S12; if it is a key frame of the second type, go to step S13;
step S12: taking all the registered three-dimensional point clouds {Pts_all} as the three-dimensional point cloud {Pts_last} of the previous view angle, and taking the relative pose RT_relative updated through step S08 as the initial value of the relative pose, executing steps S05 to S09; after the registration of the three-dimensional point clouds is finished, substituting the updated relative pose RT_relative into step S16;
step S13: using an optimal corresponding point set search method based on affine invariance theory, searching for the optimal corresponding point set between the multivariate features {Features_cur} of the three-dimensional point cloud {Pts_cur} of the current view angle and the multivariate features {Features_all} corresponding to all the registered three-dimensional point clouds {Pts_all}, to obtain the optimal corresponding point set {KeyPairs_feature};
step S14: according to the optimal corresponding point set {KeyPairs_feature}, calculating the relative pose RT_relative between the three-dimensional point cloud {Pts_cur} of the current view angle and all the registered three-dimensional point clouds {Pts_all}, and taking the calculated relative pose RT_relative as the result of coarse registration;
step S15: judging whether the point cloud registration of the steps S12 to S14 is successful; if the registration is successful, the step S16 is entered, otherwise, the step S17 is entered;
step S16: applying the relative pose RT_relative calculated in step S14 to all the registered three-dimensional point clouds {Pts_all}, then adding the three-dimensional point cloud {Pts_cur} of the current view angle to all the registered three-dimensional point clouds {Pts_all}, and updating the relative pose of the last key frame with RT_relative;
step S17: discarding the data of the three-dimensional point cloud {Pts_cur} of the current view angle;
step S18: judging whether three-dimensional data acquisition is finished or not; if not, the process proceeds to step S01, and registration of the three-dimensional point cloud data of the next view angle is performed.
2. The three-dimensional point cloud registration method based on camera pose estimation according to claim 1, wherein whether the registration is successful is judged as follows: judging whether the mean value and the variance of the distances between all corresponding points in the corresponding point set meet the requirements; or judging whether the feature-similarity statistics of the corresponding points in the corresponding point set meet the requirements.
3. The camera pose estimation-based three-dimensional point cloud registration method according to claim 1, wherein the three-dimensional modeling image is a single-frame or multi-frame randomly coded image or a striped structured light image.
4. The three-dimensional point cloud registration method based on camera pose estimation according to claim 3, wherein, when the three-dimensional modeling image is a randomly coded image, the reliability map is obtained by taking the average sub-region gray-level fluctuation as the reliability evaluation index; wherein,
the average sub-region gray-level fluctuation is:
[equation image in the original: the average of the gray-level fluctuations S_p(μ, ν) of the sub-neighborhoods contained in the H×V neighborhood N(u, v, H, V) around the pixel (u, v)]
where N(u, v, H, V) represents an H×V neighborhood around the pixel (u, v), and S_p(μ, ν) represents the gray-level fluctuation of the p-th sub-neighborhood; the sub-region gray-level fluctuation S(μ, ν) of the 3×3 neighborhood of the pixel (u, v) is:
[equation image in the original: the fluctuation of the gray levels of the 3×3 neighborhood of the pixel (u, v) across the N three-dimensional modeling images I_i, measured against the mean Ī(u, v)]
where I_i is the i-th three-dimensional modeling image, N is the number of three-dimensional modeling images taken by a single camera to complete one three-dimensional modeling, and Ī(u, v) is the gray-level mean of the 3×3 neighborhoods of all the three-dimensional modeling images corresponding to the pixel (u, v).
5. The three-dimensional point cloud registration method based on camera pose estimation according to claim 3, wherein, when the three-dimensional modeling image is a fringe structured-light image, the modulation degree of the fringes is used as the reliability evaluation index to obtain the reliability map; the modulation degree obtained from the N phase-shifted fringes is:
M(x, y) = √{[Σ_{n=1}^{N} I_n(x, y)sin(2πn/N)]² + [Σ_{n=1}^{N} I_n(x, y)cos(2πn/N)]²} = (N/2)·I_0(x, y)·C_0(x, y)
wherein I_0(x, y) is the background light intensity, C_0(x, y) is the contrast of the fringes, I_n(x, y) is the intensity of the n-th phase-shifted fringe, and N represents the number of phase-shift steps of the fringe structured light.
6. The camera pose estimation-based three-dimensional point cloud registration method according to claim 4 or 5, wherein the three-dimensional modeling result is:
Pts3D=Calc3D(ImgVec,R,SysInfo)
wherein Calc3D is a modeling function, ImgVec is a modeling image, R is the reliability map, and SysInfo is a system parameter.
7. The three-dimensional point cloud registration method based on camera pose estimation according to claim 4 or 5, wherein in the step S04, the obtained multivariate features are:
Pts3D_feature = feature(Pts3D, ImgVec, Radius, FeaID)
wherein Radius is the size of the selected surrounding neighborhood when calculating the characteristics of the pixel (u, v), FeaID is the combination form of the characteristics, and Pts3D is the three-dimensional modeling result; ImgVec is the modeled image.
8. The three-dimensional point cloud registration method based on camera pose estimation according to claim 1, wherein, in the step S07, based on the similarity measure, the corresponding point set {Pairs} is screened as follows: corresponding point pairs whose similarity measure exceeds a certain threshold are retained, and the remaining corresponding point pairs are deleted; wherein the similarity measure is:
Similarity_Pair = w_1·Similarity_Color(Pair) + w_2·Similarity_Geometry(Pair)
wherein Similarity_Color(Pair) is the color-feature similarity measure of the corresponding point pair Pair(u, v), Similarity_Geometry(Pair) is the geometric-feature similarity measure of the corresponding point pair Pair(u, v), and w_1, w_2 are the weights of the color feature and the geometric feature, respectively.
9. The camera pose estimation based three-dimensional point cloud registration method of claim 8, wherein the similarity measure of the color features is the covariance of the color features of the corresponding point pair Pair(u, v) in the color domain; the similarity measure of the geometric features is the covariance of the geometric features of the corresponding point pair Pair(u, v) in the spatial domain.
CN201811400144.0A 2018-11-22 2018-11-22 Three-dimensional point cloud registration method based on camera pose estimation Expired - Fee Related CN109544599B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811400144.0A CN109544599B (en) 2018-11-22 2018-11-22 Three-dimensional point cloud registration method based on camera pose estimation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811400144.0A CN109544599B (en) 2018-11-22 2018-11-22 Three-dimensional point cloud registration method based on camera pose estimation

Publications (2)

Publication Number Publication Date
CN109544599A CN109544599A (en) 2019-03-29
CN109544599B true CN109544599B (en) 2020-06-23

Family

ID=65850002

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811400144.0A Expired - Fee Related CN109544599B (en) 2018-11-22 2018-11-22 Three-dimensional point cloud registration method based on camera pose estimation

Country Status (1)

Country Link
CN (1) CN109544599B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116188583B (en) * 2023-04-23 2023-07-14 禾多科技(北京)有限公司 Method, device, equipment and computer readable medium for generating camera pose information

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110310331B (en) * 2019-06-18 2023-04-14 哈尔滨工程大学 Pose estimation method based on combination of linear features and point cloud features
CN110675440B (en) * 2019-09-27 2022-07-12 深圳市易尚展示股份有限公司 Confidence evaluation method and device for three-dimensional depth data and computer equipment
CN111476841B (en) * 2020-03-04 2020-12-29 哈尔滨工业大学 Point cloud and image-based identification and positioning method and system
CN112710318B (en) * 2020-12-14 2024-05-17 深圳市商汤科技有限公司 Map generation method, path planning method, electronic device, and storage medium
CN112880562A (en) * 2021-01-19 2021-06-01 佛山职业技术学院 Method and system for measuring pose error of tail end of mechanical arm
CN113112532B (en) * 2021-04-13 2023-04-21 中山大学 Real-time registration method for multi-TOF camera system
CN113409367B (en) * 2021-07-08 2023-08-18 西安交通大学 Stripe projection measurement point cloud point-by-point weighting registration method, equipment and medium
CN113974834B (en) * 2021-09-14 2023-10-31 武汉联影智融医疗科技有限公司 Method and device for determining sleeve pose of surgical robot system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780576A (en) * 2016-11-23 2017-05-31 北京航空航天大学 A kind of camera position and orientation estimation method towards RGBD data flows
JP2018063236A (en) * 2016-10-13 2018-04-19 バイドゥ ネットコム サイエンス アンド テクノロジー(ペキン) カンパニー リミテッド Method and apparatus for annotating point cloud data
CN108648240A (en) * 2018-05-11 2018-10-12 东南大学 Based on a non-overlapping visual field camera posture scaling method for cloud characteristics map registration

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105976353B (en) * 2016-04-14 2020-01-24 南京理工大学 Spatial non-cooperative target pose estimation method based on model and point cloud global matching

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018063236A (en) * 2016-10-13 2018-04-19 バイドゥ ネットコム サイエンス アンド テクノロジー(ペキン) カンパニー リミテッド Method and apparatus for annotating point cloud data
CN106780576A (en) * 2016-11-23 2017-05-31 北京航空航天大学 A kind of camera position and orientation estimation method towards RGBD data flows
CN108648240A (en) * 2018-05-11 2018-10-12 东南大学 Based on a non-overlapping visual field camera posture scaling method for cloud characteristics map registration

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A fast and robust local descriptor for 3D point cloud registration; Jiaqi Yang et al.; Journal of LATEX Templates; 2017-09-12; pp. 1-21 *
Sparse point cloud registration method for pose estimation of large workpieces; Jiang Detao et al.; Journal of Beijing Information Science and Technology University; 2017-02-28; Vol. 27, No. 1; pp. 89-94 *
Initial registration of point clouds using camera pose estimation; Guo Qingda et al.; Optics and Precision Engineering; 2017-06-30; Vol. 25, No. 6; pp. 1635-1644 *

Also Published As

Publication number Publication date
CN109544599A (en) 2019-03-29

Similar Documents

Publication Publication Date Title
CN109544599B (en) Three-dimensional point cloud registration method based on camera pose estimation
US7616807B2 (en) System and method for using texture landmarks for improved markerless tracking in augmented reality applications
CN108921895B (en) Sensor relative pose estimation method
US9525862B2 (en) Method for estimating a camera motion and for determining a three-dimensional model of a real environment
US9679384B2 (en) Method of detecting and describing features from an intensity image
Fang et al. Light filed image quality assessment by local and global features of epipolar plane image
Takimoto et al. 3D reconstruction and multiple point cloud registration using a low precision RGB-D sensor
Sheng et al. Geometric occlusion analysis in depth estimation using integral guided filter for light-field image
CN108921864A (en) A kind of Light stripes center extraction method and device
US10607350B2 (en) Method of detecting and describing features from an intensity image
KR20210005621A (en) Method and system for use in coloring point clouds
AliAkbarpour et al. Fast structure from motion for sequential and wide area motion imagery
Yuan et al. Combining maps and street level images for building height and facade estimation
Kagarlitsky et al. Piecewise-consistent color mappings of images acquired under various conditions
CN111563952A (en) Method and system for realizing stereo matching based on phase information and spatial texture characteristics
CN108447092B (en) Method and device for visually positioning marker
KR20110021500A (en) Method for real-time moving object tracking and distance measurement and apparatus thereof
CN110443228B (en) Pedestrian matching method and device, electronic equipment and storage medium
CN117870659A (en) Visual inertial integrated navigation algorithm based on dotted line characteristics
CN116935013B (en) Circuit board point cloud large-scale splicing method and system based on three-dimensional reconstruction
Teixeira et al. Epipolar based light field key-location detector
JP6754717B2 (en) Object candidate area estimation device, object candidate area estimation method, and object candidate area estimation program
Liu et al. Match selection and refinement for highly accurate two-view structure from motion
Avidar et al. Local-to-global point cloud registration using a dictionary of viewpoint descriptors
JP2018195070A (en) Information processing apparatus, information processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200623

Termination date: 20201122

CF01 Termination of patent right due to non-payment of annual fee