CN112907669A - Camera pose measuring method and device based on coplanar feature points - Google Patents


Info

Publication number
CN112907669A
Authority
CN
China
Prior art keywords: pose data, camera, local, pose, coordinates
Prior art date
Legal status: Pending (an assumption, not a legal conclusion)
Application number
CN202110223982.0A
Other languages
Chinese (zh)
Inventors: 尹首一 (Yin Shouyi), 周凯 (Zhou Kai), 韩慧明 (Han Huiming), 刘雷波 (Liu Leibo), 魏少军 (Wei Shaojun)
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN202110223982.0A
Publication of CN112907669A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/60 Rotation of whole images or parts thereof
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a method and a device for measuring camera pose based on coplanar feature points, relating to the technical field of computer vision, comprising the following steps: acquiring normalized coordinates of the coplanar feature points and first coordinates of the coplanar feature points on a target plane of a world coordinate system; calculating a homography matrix from the normalized coordinates and the first coordinates; calculating first initial pose data of the camera using the homography matrix; determining first local pose data and second local pose data from the first initial pose data; and determining a camera pose measurement result from the first local pose data and the second local pose data. The invention addresses the ambiguity of pose measurement when the feature points are coplanar: by generating both the first and the second local pose data and optimizing the camera pose measurement result based on the two, the final camera pose measurement result is more stable and accurate.

Description

Camera pose measuring method and device based on coplanar feature points
Technical Field
The invention relates to the technical field of computer vision, in particular to a method and a device for measuring a camera pose based on coplanar feature points.
Background
Camera pose measurement is widely used in augmented reality and in autonomous robot positioning and navigation. Given the distribution of several feature points in three-dimensional space and the projection positions of those feature points on the camera image, the technique computes the position and attitude of the camera in three-dimensional space. In particular, when all feature points lie on the same plane, the problem becomes a pose measurement problem based on coplanar feature points. Existing methods fall broadly into analytic methods and iterative methods: analytic methods are fast but less accurate, while iterative methods are generally more accurate but, when the feature points are coplanar, face an ambiguity in the pose solution that makes the computed result unstable.
Disclosure of Invention
The invention provides a camera pose measurement method and device based on coplanar feature points, which mitigate the ambiguity of pose measurement when the feature points are coplanar and make the final camera pose measurement result more stable and accurate.
In a first aspect, an embodiment of the present invention provides a method for measuring a pose of a camera based on coplanar feature points, where the method includes: acquiring a normalized coordinate of a coplanar feature point and a first coordinate of the coplanar feature point on a target plane of a world coordinate system; the plane where the coplanar feature points are located is coincident with the target plane; calculating a homography matrix according to the normalized coordinates and the first coordinates; calculating first initial pose data of the camera by using the homography matrix; determining first local pose data and second local pose data according to the first initial pose data; determining a camera pose measurement result according to the first local pose data and the second local pose data.
In a second aspect, an embodiment of the present invention further provides a coplanar feature point-based camera pose measurement apparatus, where the apparatus includes: the acquisition module is used for acquiring the normalized coordinates of the coplanar feature points and the first coordinates of the coplanar feature points on a target plane of a world coordinate system; the plane where the coplanar feature points are located is coincident with the target plane; the matrix module is used for calculating a homography matrix according to the normalized coordinates and the first coordinates; a pose module for calculating first initial pose data of the camera using the homography matrix; the iteration module is used for determining first local pose data and second local pose data according to the first initial pose data; and the measurement module is used for determining a camera pose measurement result according to the first local pose data and the second local pose data.
In a third aspect, an embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the computer program, implements the above-mentioned coplanar feature point-based camera pose measurement method.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program for executing the above coplanar feature point-based camera pose measurement method.
The embodiments of the invention have the following beneficial effects. An embodiment of the invention provides a camera pose measurement scheme based on coplanar feature points, comprising: acquiring normalized coordinates of the coplanar feature points and first coordinates of the coplanar feature points on a target plane of a world coordinate system, the plane of the coplanar feature points coinciding with the target plane; calculating a homography matrix from the normalized coordinates and the first coordinates; calculating first initial pose data of the camera using the homography matrix; determining first local pose data and second local pose data from the first initial pose data; and determining a camera pose measurement result from the first local pose data and the second local pose data. The scheme accounts for the ambiguity of pose measurement when the feature points are coplanar; by generating the first and second local pose data and optimizing the camera pose measurement result based on both, the final camera pose measurement result is more stable and accurate.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flowchart of a method for measuring pose of a camera based on coplanar feature points according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a relationship between a world coordinate system and a plane where feature points are located according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a mirror inversion provided in an embodiment of the present invention;
FIG. 4 is a structural block diagram of a coplanar feature point-based camera pose measurement apparatus according to an embodiment of the present invention;
FIG. 5 is a block diagram of another coplanar feature point-based camera pose measurement apparatus according to an embodiment of the present invention;
FIG. 6 is a structural block diagram of a pose module according to an embodiment of the present invention;
FIG. 7 is a structural block diagram of an iteration module according to an embodiment of the present invention;
FIG. 8 is a structural block diagram of a computer device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides a coplanar feature point-based camera pose measuring method and device, which are applied to the technical field of image processing and computer vision, and improve the stability of a calculation result while ensuring higher precision.
In order to facilitate understanding of the embodiment, a detailed description is first given of a coplanar feature point-based camera pose measurement method disclosed in the embodiment of the present invention.
The embodiment of the invention provides a coplanar feature point-based camera pose measuring method, which is shown in a flow chart of a coplanar feature point-based camera pose measuring method shown in figure 1 and comprises the following steps:
step S102, acquiring the normalized coordinate of the coplanar feature point and a first coordinate of the coplanar feature point on a target plane of a world coordinate system.
In an embodiment of the invention, the normalized coordinates are the coordinates of the coplanar feature points in the camera coordinate system. The plane of the coplanar feature points in physical space coincides with a certain plane of the world coordinate system, and that plane is taken as the target plane. The first coordinates are the coordinates of the coplanar feature points in the world coordinate system.
And step S104, calculating a homography matrix according to the normalized coordinates and the first coordinates.
In the embodiment of the present invention, referring to the relationship diagram of the world coordinate system and the plane where the feature points are located shown in fig. 2, the homography matrix from the plane where the feature points are located in the physical space to the image plane of the camera is calculated by using the normalized coordinates and the first coordinates.
And S106, calculating first initial pose data of the camera by using the homography matrix.
In the embodiment of the invention, after the homography matrix is obtained, the homography matrix is decomposed to obtain first initial pose data.
And S108, determining the first local pose data and the second local pose data according to the first initial pose data.
In the embodiment of the invention, iterative processing is carried out based on the first initial pose data to obtain first local pose data, and second local pose data is obtained by utilizing the first local pose data through calculation and iteration. According to the embodiment of the invention, the iterative algorithm is adopted to obtain the first local pose data and the second local pose data, so that the precision of the final measurement result can be improved.
And step S110, determining a camera pose measurement result according to the first local pose data and the second local pose data.
In the embodiment of the invention, the first local pose data and the second local pose data are compared, and the more accurate of the two, the globally optimal pose, is taken as the camera pose measurement result.
An embodiment of the invention provides a camera pose measurement scheme based on coplanar feature points, comprising: acquiring normalized coordinates of the coplanar feature points and first coordinates of the coplanar feature points on a target plane of a world coordinate system, the plane of the coplanar feature points coinciding with the target plane; calculating a homography matrix from the normalized coordinates and the first coordinates; calculating first initial pose data of the camera using the homography matrix; determining first local pose data and second local pose data from the first initial pose data; and determining a camera pose measurement result from the first local pose data and the second local pose data. The scheme accounts for the ambiguity of pose measurement when the feature points are coplanar; by generating the first and second local pose data and optimizing the camera pose measurement result based on both, the final camera pose measurement result is more stable and accurate.
In one embodiment, before obtaining the normalized coordinates of the coplanar feature points and the first coordinates of the coplanar feature points on the target plane of the world coordinate system, the following steps may be further performed:
normalizing the pixel coordinates of the coplanar feature points on an image captured by the camera to obtain the normalized coordinates; and establishing a world coordinate system such that the plane of the coplanar feature points coincides with a plane of the world coordinate system.
In the embodiment of the invention, an image containing a plurality of coplanar feature points is captured by the camera, and the pixel coordinates of the coplanar feature points on the image are normalized, converting pixel coordinates in the image coordinate system into normalized coordinates in the camera coordinate system. To facilitate data processing, a world coordinate system is established such that the plane of the feature points coincides with a certain plane of the world coordinate system.
In one embodiment, the pixel coordinates of the coplanar feature points on the image are normalized to obtain the normalized coordinates according to:

u_i = (x_i - c_x) / f_x, v_i = (y_i - c_y) / f_y, i = 1, ..., N

where (x_i, y_i)^T are the pixel coordinates of a feature point, (u_i, v_i)^T are the normalized coordinates of that feature point, f_x and f_y are the focal lengths of the camera, c_x and c_y are the principal point coordinates of the camera, and N is the number of coplanar feature points.
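As an illustrative sketch (not part of the patent text), the normalization formula above can be written directly in NumPy; the function name and array layout here are my own choices:

```python
import numpy as np

def normalize_points(pixel_pts, fx, fy, cx, cy):
    """Convert pixel coordinates (x_i, y_i) to normalized camera-plane
    coordinates (u_i, v_i) by removing the camera intrinsics."""
    pts = np.asarray(pixel_pts, dtype=float)
    u = (pts[:, 0] - cx) / fx
    v = (pts[:, 1] - cy) / fy
    return np.column_stack([u, v])
```

For example, a point at the principal point maps to (0, 0), and a point f_x pixels to its right maps to (1, 0).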
In one embodiment, calculating the homography matrix from the normalized coordinates and the first coordinates may be performed as follows.
Calculate the homography matrix from the feature point plane to the image plane using the direct linear transformation method.
In the embodiment of the invention, the direct linear transformation method yields a homography matrix of the form H = [h1 h2 h3], where h1, h2 and h3 are the three column vectors of H.
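The patent does not spell out the DLT construction, so the following is a standard textbook implementation offered as a hedged sketch: each correspondence between a plane point (X, Y) and a normalized image point (u, v) contributes two rows to a homogeneous system A h = 0, solved by SVD:

```python
import numpy as np

def homography_dlt(plane_pts, norm_pts):
    """Estimate the homography H mapping plane coordinates (X, Y, 1) to
    normalized image coordinates (u, v, 1) via direct linear transformation.
    Requires at least 4 non-degenerate point correspondences."""
    A = []
    for (X, Y), (u, v) in zip(plane_pts, norm_pts):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    # The null vector of A (smallest singular vector) holds the 9 entries of H.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the scale so the bottom-right entry is 1
```

With exact, noise-free correspondences this recovers H up to the fixed scale.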
In one embodiment, calculating the first initial pose data of the camera using the homography matrix may be performed as follows:
determining a first matrix and a second matrix according to the homography matrix; performing singular value decomposition on the first matrix to obtain a singular value decomposition result; determining a rotation matrix of the camera by using a singular value decomposition result; determining a translation vector of the camera by using the singular value decomposition result and the second matrix; and taking the rotation matrix and the translation vector as first initial pose data of the camera.
In an embodiment of the invention, the first matrix comprises a part of the elements of the homography matrix. The second matrix includes elements of the homography matrix other than the elements included in the first matrix.
In one embodiment, the first matrix and the second matrix are determined from the homography matrix as follows:

H = [h1 h2 h3], H_{1:2} = [h1 h2]

where H is the homography matrix, H_{1:2} (its first two columns) is the first matrix, and h3 (its third column) is the second matrix.
In one embodiment, singular value decomposition of the first matrix is performed as follows:

H_{1:2} = U S V^T (thin SVD), S = diag(s1, s2)

where U, S and V together form the singular value decomposition result.
In one embodiment, the rotation matrix of the camera is determined from the singular value decomposition result as follows:

[m1 m2] = U V^T
R = [m1 m2 m1×m2]

where U and V come from the singular value decomposition and R is the rotation matrix of the camera.
In one embodiment, the translation vector of the camera is determined from the singular value decomposition result and the second matrix as follows:

t = 2 h3 / (s1 + s2)

where h3 is the second matrix, s1 and s2 are the singular values of the first matrix, and t is the translation vector of the camera.
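A sketch of the first-initial-pose computation, under the assumption (consistent with the surrounding description) that the rotation is the nearest rotation built from [h1 h2] and the translation is t = 2 h3 / (s1 + s2); the function name is my own:

```python
import numpy as np

def pose_from_homography(H):
    """First initial pose: split H into H_12 = [h1 h2] and h3, take the
    best orthonormal approximation of [h1 h2] via SVD, and scale h3."""
    H12, h3 = H[:, :2], H[:, 2]
    U, S, Vt = np.linalg.svd(H12, full_matrices=False)  # thin SVD, S = (s1, s2)
    M = U @ Vt  # closest matrix with orthonormal columns to H12
    R = np.column_stack([M[:, 0], M[:, 1], np.cross(M[:, 0], M[:, 1])])
    t = 2.0 * h3 / (S[0] + S[1])
    return R, t
```

If H is exactly [r1 r2 t] for a rotation R = [r1 r2 r3], both singular values equal 1 and the original pose is recovered exactly.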
in one embodiment, determining the first and second local pose data from the first initial pose data may be performed as follows:
taking the first initial pose data as the iteration starting point and calculating the first local pose data using a preset iterative algorithm; mirror-flipping the first local pose data to obtain second initial pose data; and taking the second initial pose data as the iteration starting point and calculating the second local pose data using the preset iterative algorithm.
In the embodiment of the present invention, the same iterative algorithm may be used for generating the first local pose data and the second local pose data, and the iteration starting points of the two iterations are the first initial pose data and the second initial pose data, respectively.
In one embodiment, the first local pose data is mirror-flipped to obtain the second initial pose data according to:

R_op = [r1_op r2_op r3_op]
r_k' = r_k_op - 2 (r_k_op · t_op / |t_op|^2) t_op, k = 1, 2
R' = [r1' r2' r1'×r2']
t' = t_op

where [R_op, t_op] is the first local pose data, [R', t'] is the second initial pose data, and r1_op, r2_op, r3_op are the three column vectors of R_op.
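A sketch of the mirror flip, assuming (as the later detailed description states) that the flip inverts the component of the first two rotation columns along the direction of t_op while keeping the other components unchanged:

```python
import numpy as np

def mirror_flip(R_op, t_op):
    """Second initial pose: reflect the first two columns of R_op across the
    plane orthogonal to t_op; the translation is kept as-is."""
    n = t_op / np.linalg.norm(t_op)  # unit vector along t_op
    r1 = R_op[:, 0] - 2.0 * (R_op[:, 0] @ n) * n
    r2 = R_op[:, 1] - 2.0 * (R_op[:, 1] @ n) * n
    R2 = np.column_stack([r1, r2, np.cross(r1, r2)])
    return R2, t_op.copy()
```

Because a reflection preserves inner products, r1' and r2' remain orthonormal, so R2 is again a proper rotation; flipping twice returns the original pose.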
In one embodiment, the preset iterative algorithm is the lie algebra perturbation model iterative method or an orthogonal iteration algorithm.
In the embodiment of the present invention, an actually used iterative algorithm may be set according to an actual requirement, which is not specifically limited in the embodiment of the present invention.
In one embodiment, the lie algebra perturbation model iterative method computes the local pose data according to:

e_i = (x̂_i - x_i, ŷ_i - y_i)^T
x̂_i = f_x X_i^c / Z_i^c + c_x, ŷ_i = f_y Y_i^c / Z_i^c + c_y, (X_i^c Y_i^c Z_i^c)^T = R_t P_i + t_t
Δξ = -(Σ_{i=1}^{N} J_i^T J_i)^{-1} Σ_{i=1}^{N} J_i^T e_i
T(R_{t+1}, t_{t+1}) = exp(Δξ^) T(R_t, t_t)

where [R_t, t_t] is the pose before the update, [R_{t+1}, t_{t+1}] is the updated pose, Δξ is the pose increment, N is the number of feature points, J_i is the Jacobian matrix corresponding to the i-th feature point, e_i is the reprojection error vector corresponding to the i-th feature point, (x_i, y_i)^T are the pixel coordinates of the feature point, P_i is the three-dimensional coordinate of the feature point in the world coordinate system, f_x and f_y are the focal lengths of the camera, and c_x and c_y are the principal point coordinates of the camera.

In the embodiment of the invention, (x̂_i, ŷ_i)^T are the pixel coordinates obtained by reprojecting the three-dimensional coordinates of the feature point in the world coordinate system through the camera pose, and are functions of R and t. It should be noted that Δξ = (Δρ Δφ)^T = (Δρ θa)^T = (Δρ_1 Δρ_2 Δρ_3 θa_1 θa_2 θa_3)^T is the lie algebra representation of the pose increment, where Δρ represents the change of the camera position, Δφ represents the change of the camera attitude, θ is the modulus of Δφ, a = (a_1, a_2, a_3)^T is a unit vector representing the direction of Δφ, and Δρ_1, Δρ_2, Δρ_3 are the components of Δρ; Δξ describes the camera pose change. Δξ^ denotes the mapping of Δξ from a vector to a matrix:

Δξ^ = [[Δφ^, Δρ], [0^T, 0]]

where Δφ^ is the skew-symmetric matrix of Δφ.
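A runnable sketch of the lie algebra perturbation iteration (my own minimal Gauss-Newton implementation; the analytic 2x6 Jacobian used below is the standard one for a pinhole projection under a left perturbation, assumed rather than quoted from the patent):

```python
import numpy as np

def so3_exp(phi):
    """Rodrigues formula: exp of a rotation vector phi = theta * a."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    a = phi / theta
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.cos(theta) * np.eye(3) + (1 - np.cos(theta)) * np.outer(a, a) + np.sin(theta) * K

def se3_exp(xi):
    """exp of xi = (rho, phi): returns (R, t) of the resulting transform."""
    rho, phi = xi[:3], xi[3:]
    theta = np.linalg.norm(phi)
    R = so3_exp(phi)
    if theta < 1e-12:
        V = np.eye(3)
    else:
        a = phi / theta
        K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
        V = (np.sin(theta) / theta) * np.eye(3) \
            + (1 - np.sin(theta) / theta) * np.outer(a, a) \
            + ((1 - np.cos(theta)) / theta) * K
    return R, V @ rho

def refine_pose(R, t, pts3d, pix, fx, fy, cx, cy, iters=20, eps=1e-10):
    """Gauss-Newton on the left perturbation Delta_xi, minimizing the
    pixel reprojection error; stops when |Delta_xi| < eps."""
    for _ in range(iters):
        A = np.zeros((6, 6))
        g = np.zeros(6)
        for P, z in zip(pts3d, pix):
            X, Y, Z = R @ P + t  # point in the camera frame
            e = np.array([fx * X / Z + cx - z[0], fy * Y / Z + cy - z[1]])
            J = np.array([
                [fx / Z, 0, -fx * X / Z**2, -fx * X * Y / Z**2, fx + fx * X**2 / Z**2, -fx * Y / Z],
                [0, fy / Z, -fy * Y / Z**2, -fy - fy * Y**2 / Z**2, fy * X * Y / Z**2, fy * X / Z]])
            A += J.T @ J
            g += J.T @ e
        dxi = -np.linalg.solve(A, g)
        dR, dt = se3_exp(dxi)
        R, t = dR @ R, dR @ t + dt  # T <- exp(dxi^) T
        if np.linalg.norm(dxi) < eps:
            break
    return R, t
```

Started near the true pose with exact measurements, the iteration drives the reprojection error to zero and recovers the ground-truth pose.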
in one embodiment, determining camera pose measurements from the first and second local pose data may be performed as follows:
calculating a first reprojection error from the first local pose data; calculating a second reprojection error from the second local pose data; and determining the camera pose measurement result from the first reprojection error and the second reprojection error.
In the embodiment of the invention, the first and second reprojection errors are compared, and the local pose data with the smaller error is selected as the camera pose measurement result.
In one embodiment, the reprojection error is calculated from the local pose data according to:

E = Σ_{i=1}^{N} [(x̂_i - x_i)^2 + (ŷ_i - y_i)^2]
x̂_i = f_x X_i^c / Z_i^c + c_x, ŷ_i = f_y Y_i^c / Z_i^c + c_y
(X_i^c Y_i^c Z_i^c)^T = R_t P_i + t_t
P_i = (X_i Y_i Z_i)^T

where E is the reprojection error, N is the number of feature points, (x_i, y_i)^T are the pixel coordinates of the feature point, P_i is the three-dimensional coordinate of the feature point in the world coordinate system, [R_t, t_t] is the local pose data, f_x and f_y are the focal lengths of the camera, and c_x and c_y are the principal point coordinates of the camera.
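The reprojection error E can be sketched directly from its definition; the function name and argument layout are my own:

```python
import numpy as np

def reprojection_error(R, t, pts3d, pix, fx, fy, cx, cy):
    """Sum of squared pixel distances between observed points (x_i, y_i)
    and the reprojections of the world points P_i under pose [R, t]."""
    E = 0.0
    for P, (x, y) in zip(pts3d, pix):
        X, Y, Z = R @ np.asarray(P, dtype=float) + t
        E += (fx * X / Z + cx - x) ** 2 + (fy * Y / Z + cy - y) ** 2
    return E
```

For instance, with the identity pose, a point at (0, 0, 2) reprojects to the principal point, so an observation offset by (3, -4) pixels contributes 9 + 16 = 25 to E.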
The implementation of the method is described below in a specific embodiment.
1. Convert the pixel coordinates of the N feature points on the image into normalized coordinates using:

u_i = (x_i - c_x) / f_x, v_i = (y_i - c_y) / f_y, i = 1, ..., N

where (x_i, y_i)^T are the pixel coordinates of a feature point, (u_i, v_i)^T are its normalized coordinates, f_x and f_y are the focal lengths of the camera, and c_x and c_y are the principal point coordinates of the camera.
2. Establish a world coordinate system so that the plane of the feature points coincides with the XOY plane of the world coordinate system, as shown in fig. 2. Using the coordinates (X_i, Y_i)^T, i = 1, ..., N, of the N feature points on the XOY plane and the normalized coordinates (u_i, v_i)^T, i = 1, ..., N, of the N feature points, calculate the homography matrix H from the feature point plane to the image plane. The calculation uses DLT (direct linear transformation). The resulting H can be represented as H = [h1 h2 h3].
3. Take the first two columns of H to form the matrix H_{1:2} = [h1 h2] and perform singular value decomposition on H_{1:2}:

H_{1:2} = U S V^T, S = diag(s1, s2)

Let:

[m1 m2] = U V^T

Calculate the first initial pose solution [R, t] of the camera using:

R = [m1 m2 m1×m2]
t = 2 h3 / (s1 + s2)

where R is the rotation matrix of the camera and t is the translation vector of the camera.
4. Take the first initial pose solution [R, t] as the iteration starting point and obtain the first local optimal solution [R_op, t_op] using the lie algebra perturbation model iterative method or an orthogonal iteration algorithm.
Specifically, in the lie algebra perturbation model iterative method, the reprojection error is defined as:

E = Σ_{i=1}^{N} [(x̂_i - x_i)^2 + (ŷ_i - y_i)^2]

where N is the number of feature points and (x_i, y_i)^T are the pixel coordinates of the i-th feature point. (x̂_i, ŷ_i)^T are the pixel coordinates obtained by reprojecting the three-dimensional coordinates of the feature point in the world coordinate system through the camera pose, namely:

x̂_i = f_x X_i^c / Z_i^c + c_x, ŷ_i = f_y Y_i^c / Z_i^c + c_y

where (X_i^c Y_i^c Z_i^c)^T = R_t P_i + t_t, P_i = (X_i Y_i Z_i)^T is the three-dimensional coordinate of the feature point in the world coordinate system, and [R_t, t_t] is the pose of the current iteration. f_x and f_y are the focal lengths of the camera; c_x and c_y are the principal point coordinates of the camera.
The lie algebra perturbation model iterative method takes the initial pose solution [R, t] as the starting point and iteratively updates the pose so that the reprojection error E reaches a local minimum. The pose increment in each iteration is computed as:

Δξ = -(Σ_{i=1}^{N} J_i^T J_i)^{-1} Σ_{i=1}^{N} J_i^T e_i

where Δξ = (Δρ Δφ)^T = (Δρ θa)^T = (Δρ_1 Δρ_2 Δρ_3 θa_1 θa_2 θa_3)^T is the lie algebra representation of the pose increment, J_i = ∂e_i/∂Δξ is the Jacobian matrix corresponding to the i-th feature point, and e_i = (x̂_i - x_i, ŷ_i - y_i)^T is the reprojection error vector corresponding to the i-th feature point.

After the pose increment Δξ is obtained, the pose is updated by:

T(R_{t+1}, t_{t+1}) = exp(Δξ^) T(R_t, t_t)

where [R_{t+1}, t_{t+1}] is the updated pose, used as the current pose of the next iteration, T(R, t) = [[R, t], [0^T, 1]] is the homogeneous transform of the pose, and Δξ^ = [[Δφ^, Δρ], [0^T, 0]] maps Δξ from a vector to a matrix, with Δφ^ the skew-symmetric matrix of Δφ.

The iterative process is repeated until the modulus of Δξ is smaller than a given threshold, and the pose of the current iteration is taken as the first local optimal solution [R_op, t_op].
5. Mirror-flip the first local optimal solution and take the flipped solution as the second initial pose solution. Let R_op = [r1_op r2_op r3_op], where r1_op, r2_op, r3_op are the three column vectors of R_op. The mirror flip inverts, for r1_op and r2_op respectively, the component along the direction of t_op while keeping the other components unchanged, giving r1' and r2', as shown in fig. 3. In formula form:

r_k' = r_k_op - 2 (r_k_op · t_op / |t_op|^2) t_op, k = 1, 2

The solution after the mirror flip is R' = [r1' r2' r1'×r2'], t' = t_op. Take [R', t'] as the second initial pose solution.
6. Take the second initial pose solution [R', t'] as the iteration starting point and obtain the second local optimal solution [R'_op, t'_op] using the lie algebra perturbation model iterative method or an orthogonal iteration algorithm. Specifically, the lie algebra perturbation model iteration is the same as in step 4.
7. Compare the first local optimal solution [R_op, t_op] and the second local optimal solution [R'_op, t'_op], and take the one with the smaller reprojection error E as the final pose measurement result. Specifically, substitute [R_op, t_op] and [R'_op, t'_op] in turn for [R_t, t_t] in the formula for the reprojection error E in step 4:

E = Σ_{i=1}^{N} [(x̂_i - x_i)^2 + (ŷ_i - y_i)^2]

obtain the corresponding reprojection errors, and compare them to select the final measurement result.
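The selection in step 7 can be sketched as a self-contained helper that evaluates E for each candidate pose and keeps the best one; the function name and signature are my own:

```python
import numpy as np

def pick_global_optimum(poses, pts3d, pix, fx, fy, cx, cy):
    """Evaluate the reprojection error E for each candidate pose (R, t)
    and return the candidate with the smallest E (step 7)."""
    def err(R, t):
        E = 0.0
        for P, (x, y) in zip(pts3d, pix):
            X, Y, Z = R @ np.asarray(P, dtype=float) + t
            E += (fx * X / Z + cx - x) ** 2 + (fy * Y / Z + cy - y) ** 2
        return E
    return min(poses, key=lambda Rt: err(*Rt))
```

With exact data, the candidate that generated the observations has E = 0 and is always selected over the mirror-ambiguous alternative.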
The invention provides a method and a device for measuring the camera pose based on coplanar feature points. A homography matrix is first computed from the coplanar feature points and decomposed to obtain a first initial pose solution, from which an iterative method yields a first local optimal pose. The first local optimal pose is then mirror-flipped to obtain a second initial pose solution, and the iterative method, started from this second initial pose solution, yields a second local optimal pose. Finally, of the two local optimal poses, the one with the smaller reprojection error is selected as the global optimal pose. The method accounts for the ambiguity of pose measurement when the feature points are coplanar; guiding the iterative process with two initial solutions makes the final camera pose measurement result more stable and accurate.
An embodiment of the invention further provides a camera pose measurement apparatus based on coplanar feature points, described in the following embodiments. Since the principle by which the apparatus solves the problem is similar to that of the coplanar-feature-point-based camera pose measurement method, the implementation of the apparatus may refer to the implementation of the method, and repeated description is omitted. Referring to the structural block diagram of a coplanar-feature-point-based camera pose measurement apparatus shown in fig. 4, the apparatus includes:
an obtaining module 41, configured to obtain normalized coordinates of the coplanar feature points and first coordinates of the coplanar feature points on a target plane of a world coordinate system, the plane of the coplanar feature points coinciding with the target plane; a matrix module 42, configured to calculate a homography matrix from the normalized coordinates and the first coordinates; a pose module 43, configured to calculate first initial pose data of the camera using the homography matrix; an iteration module 44, configured to determine first local pose data and second local pose data from the first initial pose data; and a measurement module 45, configured to determine a camera pose measurement result from the first local pose data and the second local pose data.
In one embodiment, referring to a structural block diagram of another coplanar feature point-based camera pose measurement apparatus shown in fig. 5, the apparatus further includes: a pre-processing module 46 for: normalizing the pixel coordinates of the coplanar feature points on the image to obtain normalized coordinates; the image is shot by a camera; establishing a world coordinate system; the plane of the coplanar characteristic points is coincident with a plane of the world coordinate system.
In one embodiment, the preprocessing module is specifically configured to normalize the pixel coordinates of the coplanar feature points on the image according to the following formula to obtain the normalized coordinates:

u_i = (x_i - c_x) / f_x,  v_i = (y_i - c_y) / f_y,  i = 1, …, N;

wherein (x_i, y_i)^T are the pixel coordinates of the i-th feature point, (u_i, v_i)^T are its normalized coordinates, f_x and f_y are the focal lengths of the camera, c_x and c_y are the principal point coordinates of the camera, and N is the number of coplanar feature points.
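As an illustration, this normalization step can be sketched in NumPy (the function name and array layout are assumptions, not part of the patent):

```python
import numpy as np

def normalize_points(pix, fx, fy, cx, cy):
    """Map pixel coordinates (x_i, y_i) to normalized image coordinates
    (u_i, v_i) = ((x_i - c_x)/f_x, (y_i - c_y)/f_y)."""
    pix = np.asarray(pix, dtype=float)
    u = (pix[:, 0] - cx) / fx
    v = (pix[:, 1] - cy) / fy
    return np.column_stack([u, v])
```

Each input row is one pixel coordinate; the resulting normalized coordinates feed directly into the homography estimation.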
In one embodiment, the matrix module is specifically configured to calculate the homography matrix from the feature point plane to the image plane by using a direct linear transformation method.
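A minimal sketch of the direct linear transformation under its usual formulation (the helper name and the choice to fix the projective scale by H[2,2] are assumptions; the patent does not spell out its DLT details):

```python
import numpy as np

def homography_dlt(plane_pts, norm_pts):
    """Estimate the homography mapping plane points (X_i, Y_i) on the
    target plane to normalized image points (u_i, v_i); needs N >= 4
    correspondences, no three of which are collinear."""
    rows = []
    for (X, Y), (u, v) in zip(plane_pts, norm_pts):
        # each correspondence contributes two linear equations in the
        # stacked homography vector h
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)      # null vector = smallest right singular vector
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]               # fix the projective scale (assumption)
```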
In one embodiment, referring to the structural block diagram of the pose module shown in fig. 6, the pose module includes: a data unit 61 for determining a first matrix and a second matrix from the homography matrix; a decomposition unit 62 for performing singular value decomposition on the first matrix to obtain a singular value decomposition result; a first determination unit 63 for determining a rotation matrix of the camera using the singular value decomposition result; a second determination unit 64 for determining a translation vector of the camera using the singular value decomposition result and the second matrix; and a pose unit 65 for taking the rotation matrix and the translation vector as the first initial pose data of the camera.
In one embodiment, the data unit is specifically configured to determine the first matrix and the second matrix from the homography matrix according to the following formulas:

H = [h1 h2 h3],  H_{1:2} = [h1 h2];

wherein H is the homography matrix with column vectors h1, h2, h3, H_{1:2} is the first matrix, and h3 is the second matrix.
In one embodiment, the decomposition unit is specifically configured to perform singular value decomposition on the first matrix according to the following formula to obtain the singular value decomposition result:

H_{1:2} = U S V^T;

wherein H_{1:2} is the first matrix and U, S, V constitute the singular value decomposition result.
In an embodiment, the first determining unit is specifically configured to determine the rotation matrix of the camera from the singular value decomposition result according to the following formulas:

[m1 m2] = U E V^T,  E = [[1, 0], [0, 1], [0, 0]]

R = [m1 m2 m1×m2];

wherein U and V are from the singular value decomposition result, m1 and m2 are the first two columns of the rotation matrix, and R is the rotation matrix of the camera.
In an embodiment, the second determining unit is specifically configured to determine the translation vector of the camera from the singular value decomposition result and the second matrix according to the following formula:

t = 2 h3 / (s1 + s2);

wherein h3 is the second matrix, s1 and s2 are the singular values from the singular value decomposition result, and t is the translation vector of the camera.
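Putting the data, decomposition, and determination units together, the initial-pose computation can be sketched as follows (a hedged reading of the formulas above; it assumes the homography is scaled with a positive sign, i.e. the target plane lies in front of the camera):

```python
import numpy as np

def pose_from_homography(H):
    """Recover a first initial pose [R | t] from a homography
    H = [h1 h2 h3]: SVD of the first matrix H_{1:2} = [h1 h2] gives the
    rotation, and the singular values fix the scale of the translation."""
    H12 = H[:, :2]                        # first matrix [h1 h2]
    h3 = H[:, 2]                          # second matrix
    U, S, Vt = np.linalg.svd(H12)         # H12 = U diag(S) Vt, U is 3x3
    E = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
    M = U @ E @ Vt                        # nearest matrix with orthonormal columns
    m1, m2 = M[:, 0], M[:, 1]
    R = np.column_stack([m1, m2, np.cross(m1, m2)])
    t = 2.0 * h3 / (S[0] + S[1])          # average singular value removes the scale
    return R, t
```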
In one embodiment, referring to the structural block diagram of the iteration module shown in fig. 7, the iteration module includes: a first iteration unit 71, configured to calculate the first local pose data by using a preset iterative algorithm, with the first initial pose data as the iteration starting point; a mirror image flipping unit 72, configured to perform mirror image flipping on the first local pose data to obtain second initial pose data; and a second iteration unit 73, configured to calculate the second local pose data by using the preset iterative algorithm, with the second initial pose data as the iteration starting point.
In one embodiment, the mirror image flipping unit is specifically configured to perform mirror image flipping on the first local pose data according to the following formulas to obtain the second initial pose data:

R_op = [r1_op r2_op r3_op]

r1' and r2' are obtained from r1_op and r2_op by reflecting them through the target plane (negating their components along the plane normal) and re-orthonormalizing the pair;

R' = [r1' r2' r1'×r2'];  t' = t_op;

wherein [R_op, t_op] is the first local pose data, [R', t'] is the second initial pose data, and r1_op, r2_op, r3_op are the three column vectors of R_op.
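The patent's exact mirror-flip formula is embedded in an equation image and is not recoverable here; one plausible reading (reflect the tilt of the first two rotation columns through the target plane, then re-orthonormalize) can be sketched as:

```python
import numpy as np

def mirror_pose(R_op, t_op):
    """Mirror-flip the first local pose [R_op | t_op] to seed the second
    iteration run: negate the z-components (plane-normal components) of the
    first two rotation columns, re-orthonormalize, and keep the translation.
    This reflection is an assumed reading of the patent's image formula."""
    flip = np.array([1.0, 1.0, -1.0])
    r1 = R_op[:, 0] * flip
    r2 = R_op[:, 1] * flip
    r1 = r1 / np.linalg.norm(r1)
    r2 = r2 - np.dot(r1, r2) * r1          # Gram-Schmidt against r1
    r2 = r2 / np.linalg.norm(r2)
    R_prime = np.column_stack([r1, r2, np.cross(r1, r2)])
    return R_prime, np.array(t_op, dtype=float)
```

The returned pose is only an initial guess for the second iteration run, so the exact form of the reflection matters less than landing in the basin of the second local minimum.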
In one embodiment, the preset iterative algorithm is a Lie algebra perturbation model iterative method or an orthogonal iteration algorithm.
In one embodiment, the Lie algebra perturbation model iterative method calculates local pose data according to the following formulas:

e_i = (x_i, y_i)^T - π(R_t P_i + t_t),  where π(X_c, Y_c, Z_c) = (f_x X_c / Z_c + c_x, f_y Y_c / Z_c + c_y)^T

Δξ = -( Σ_{i=1..N} J_i^T J_i )^{-1} ( Σ_{i=1..N} J_i^T e_i )

[R_{t+1}, t_{t+1}] = exp(Δξ^) [R_t, t_t]

wherein [R_t, t_t] is the pose data before the update, [R_{t+1}, t_{t+1}] is the updated pose data, Δξ is the pose increment (exp(Δξ^) is its SE(3) exponential), N is the number of feature points, J_i is the Jacobian matrix corresponding to the i-th feature point, e_i is the reprojection error vector corresponding to the i-th feature point, (x_i, y_i)^T are the pixel coordinates of the feature point, P_i are the three-dimensional coordinates of the feature point in the world coordinate system, f_x and f_y are the focal lengths of the camera, and c_x and c_y are the principal point coordinates of the camera.
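A sketch of one such iteration under the standard left-perturbation Gauss-Newton formulation (function names are illustrative, and the SE(3) update is applied to first order; the patent's image equations presumably encode an equivalent scheme):

```python
import numpy as np

def skew(v):
    """Cross-product (hat) matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def so3_exp(phi):
    """Rodrigues formula: exponential map from so(3) to SO(3)."""
    th = np.linalg.norm(phi)
    K = skew(phi)
    if th < 1e-12:
        return np.eye(3) + K
    return (np.eye(3) + np.sin(th) / th * K
            + (1.0 - np.cos(th)) / th**2 * (K @ K))

def gn_step(R, t, pts3d, pix, fx, fy, cx, cy):
    """One Gauss-Newton update with a left se(3) perturbation, matching the
    increment formula above (residual e_i = observed - projected)."""
    JTJ = np.zeros((6, 6))
    JTe = np.zeros(6)
    for P, (x, y) in zip(pts3d, pix):
        Xc, Yc, Zc = R @ P + t
        e = np.array([x - (fx * Xc / Zc + cx),
                      y - (fy * Yc / Zc + cy)])
        # d(projection)/d(camera point), chained through d(point)/d(dxi) = [I | -p^]
        dproj = np.array([[fx / Zc, 0.0, -fx * Xc / Zc**2],
                          [0.0, fy / Zc, -fy * Yc / Zc**2]])
        J = -dproj @ np.hstack([np.eye(3), -skew(np.array([Xc, Yc, Zc]))])
        JTJ += J.T @ J
        JTe += J.T @ e
    dxi = -np.linalg.solve(JTJ, JTe)       # pose increment (rho, phi)
    drho, dphi = dxi[:3], dxi[3:]
    dR = so3_exp(dphi)
    return dR @ R, dR @ t + drho           # first-order SE(3) update
```

Repeating `gn_step` until the increment is negligible gives a local optimal pose for whichever initial solution it was started from.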
In one embodiment, the measurement module is configured to: calculate a first reprojection error from the first local pose data; calculate a second reprojection error from the second local pose data; and determine the camera pose measurement result from the first reprojection error and the second reprojection error.
In one embodiment, the measurement module is specifically configured to calculate the reprojection error from the local pose data according to the following formulas:

E = Σ_{i=1..N} || (x_i, y_i)^T - (u_i', v_i')^T ||^2

(X_c, Y_c, Z_c)^T = R_t P_i + t_t,  P_i = (X_i Y_i Z_i)^T

u_i' = f_x X_c / Z_c + c_x,  v_i' = f_y Y_c / Z_c + c_y;

wherein E is the reprojection error, N is the number of feature points, (x_i, y_i)^T are the pixel coordinates of the feature point, (u_i', v_i')^T is its reprojection under the candidate pose, P_i are the three-dimensional coordinates of the feature point in the world coordinate system, [R_t, t_t] is the local pose data, f_x and f_y are the focal lengths of the camera, and c_x and c_y are the principal point coordinates of the camera.
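A direct transcription of the error evaluation (summation without averaging is an assumption; any positive scaling leaves the comparison between the two candidate poses unchanged):

```python
import numpy as np

def reprojection_error(R, t, pts3d, pix, fx, fy, cx, cy):
    """Sum of squared reprojection errors E for a candidate pose [R | t]."""
    E = 0.0
    for P, (x, y) in zip(pts3d, pix):
        Xc, Yc, Zc = R @ P + t        # feature point in the camera frame
        u = fx * Xc / Zc + cx         # reprojected pixel coordinates
        v = fy * Yc / Zc + cy
        E += (x - u) ** 2 + (y - v) ** 2
    return E
```

The measurement module then returns whichever of the two local optimal poses gives the smaller E as the global optimal pose.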
The embodiment of the present invention further provides a computer device. Referring to the schematic structural block diagram of the computer device shown in fig. 8, the computer device includes a memory 81, a processor 82, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of any of the above-mentioned coplanar feature point-based camera pose measurement methods when executing the computer program.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the computer device described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program for executing any one of the above methods for measuring a pose of a camera based on coplanar feature points is stored in the computer-readable storage medium.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present invention, intended to illustrate rather than limit its technical solutions, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may still modify the technical solutions described in the foregoing embodiments, or easily conceive of changes or equivalent substitutions of some technical features within the technical scope of the present disclosure; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present invention and should be covered thereby. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A coplanar feature point-based camera pose measurement method is characterized by comprising the following steps:
acquiring a normalized coordinate of a coplanar feature point and a first coordinate of the coplanar feature point on a target plane of a world coordinate system; the plane where the coplanar feature points are located is coincident with the target plane;
calculating a homography matrix according to the normalized coordinates and the first coordinates;
calculating first initial pose data of the camera by using the homography matrix;
determining first local pose data and second local pose data according to the first initial pose data;
determining a camera pose measurement result according to the first local pose data and the second local pose data.
2. The method of claim 1, wherein before acquiring the normalized coordinates of the coplanar feature points and the first coordinates of the coplanar feature points on the target plane of the world coordinate system, the method further comprises:
normalizing the pixel coordinates of the coplanar feature points on the image to obtain normalized coordinates; the image is obtained by shooting with the camera;
establishing a world coordinate system; the plane of the coplanar characteristic points is coincident with a plane of the world coordinate system.
3. The method of claim 1, wherein computing a homography matrix from the normalized coordinates and the first coordinates comprises:
calculating a homography matrix from the feature point plane to the image plane by using a direct linear transformation method.
4. The method according to claim 1, wherein calculating first initial pose data of the camera using the homography matrix comprises:
determining a first matrix and a second matrix according to the homography matrix;
performing singular value decomposition on the first matrix to obtain a singular value decomposition result;
determining a rotation matrix of the camera by using the singular value decomposition result;
determining a translation vector of the camera by using the singular value decomposition result and the second matrix;
and taking the rotation matrix and the translation vector as first initial pose data of the camera.
5. The method of claim 1, wherein determining first and second local pose data from the first initial pose data comprises:
taking the first initial pose data as an iteration initial point, and calculating first local pose data by using a preset iteration algorithm;
carrying out mirror image turning processing on the first local pose data to obtain second initial pose data;
and taking the second initial pose data as an iteration starting point, and calculating second local pose data by using a preset iteration algorithm.
6. The method of claim 5, wherein the mirror image flipping processing is performed on the first local pose data according to the following formulas to obtain the second initial pose data:

R_op = [r1_op r2_op r3_op]

r1' and r2' are obtained from r1_op and r2_op by reflecting them through the target plane (negating their components along the plane normal) and re-orthonormalizing the pair;

R' = [r1' r2' r1'×r2']

t' = t_op

wherein [R_op, t_op] is the first local pose data, [R', t'] is the second initial pose data, and r1_op, r2_op, r3_op are the three column vectors of R_op.
7. The method of claim 1, wherein determining camera pose measurements from the first and second local pose data comprises:
calculating a first reprojection error according to the first local pose data;
calculating a second reprojection error according to the second local pose data;
and determining a camera pose measurement result according to the first reprojection error and the second reprojection error.
8. A camera pose measuring device based on coplanar feature points is characterized by comprising:
the acquisition module is used for acquiring the normalized coordinates of the coplanar feature points and the first coordinates of the coplanar feature points on a target plane of a world coordinate system; the plane where the coplanar feature points are located is coincident with the target plane;
the matrix module is used for calculating a homography matrix according to the normalized coordinates and the first coordinates;
a pose module for calculating first initial pose data of the camera using the homography matrix;
the iteration module is used for determining first local pose data and second local pose data according to the first initial pose data;
and the measurement module is used for determining a camera pose measurement result according to the first local pose data and the second local pose data.
9. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the coplanar feature point-based camera pose measurement method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium characterized by storing a computer program for executing the coplanar feature point-based camera pose measurement method according to any one of claims 1 to 7.
CN202110223982.0A 2021-03-01 2021-03-01 Camera pose measuring method and device based on coplanar feature points Pending CN112907669A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110223982.0A CN112907669A (en) 2021-03-01 2021-03-01 Camera pose measuring method and device based on coplanar feature points

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110223982.0A CN112907669A (en) 2021-03-01 2021-03-01 Camera pose measuring method and device based on coplanar feature points

Publications (1)

Publication Number Publication Date
CN112907669A true CN112907669A (en) 2021-06-04

Family

ID=76108138

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110223982.0A Pending CN112907669A (en) 2021-03-01 2021-03-01 Camera pose measuring method and device based on coplanar feature points

Country Status (1)

Country Link
CN (1) CN112907669A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658248A (en) * 2021-08-09 2021-11-16 煤炭科学研究总院 Attitude monitoring method and device for self-moving tail and electronic equipment


Similar Documents

Publication Publication Date Title
CN104537709B (en) It is a kind of that method is determined based on the real-time three-dimensional reconstruction key frame that pose changes
CN112669359B (en) Three-dimensional point cloud registration method, device, equipment and storage medium
CN112053447B (en) Augmented reality three-dimensional registration method and device
CN111168719B (en) Robot calibration method and system based on positioning tool
US11568601B2 (en) Real-time hand modeling and tracking using convolution models
WO2024037658A1 (en) Method and apparatus for controlling pointing action of robot, and electronic device and storage medium
CN114494150A (en) Design method of monocular vision odometer based on semi-direct method
CN112967340A (en) Simultaneous positioning and map construction method and device, electronic equipment and storage medium
JP2019109747A (en) Position attitude estimation apparatus, position attitude estimation method, and program
CN112907669A (en) Camera pose measuring method and device based on coplanar feature points
CN113420590B (en) Robot positioning method, device, equipment and medium in weak texture environment
CN113313200B (en) Point cloud precision matching method based on normal constraint
CN117340879A (en) Industrial machine ginseng number identification method and system based on graph optimization model
CN111553954B (en) Online luminosity calibration method based on direct method monocular SLAM
Tahri et al. Efficient iterative pose estimation using an invariant to rotations
JP2002046087A (en) Three-dimensional position measuring method and apparatus, and robot controller
CN116363205A (en) Space target pose resolving method based on deep learning and computer program product
CN115511935A (en) Normal distribution transformation point cloud registration method based on iterative discretization and linear interpolation
CN113379840B (en) Monocular vision pose estimation method based on coplanar target
CN114821113A (en) Monocular vision inertia SLAM method and system based on adaptive robust kernel
CN111932628A (en) Pose determination method and device, electronic equipment and storage medium
Hwang et al. Primitive object grasping for finger motion synthesis
CN113592907A (en) Visual servo tracking method and device based on optical flow
Comport et al. Efficient model-based tracking for robot vision
Ming et al. A real-time monocular visual SLAM based on the bundle adjustment with adaptive robust kernel

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination