CN114066977B - Incremental camera rotation estimation method


Info

Publication number
CN114066977B
Authority
CN
China
Prior art keywords
rotation
absolute rotation
absolute
image
representing
Legal status
Active
Application number
CN202010777773.6A
Other languages
Chinese (zh)
Other versions
CN114066977A (en)
Inventor
高翔
陈震
解则晓
Current Assignee
Ocean University of China
Original Assignee
Ocean University of China
Priority date
2020-08-05
Filing date
2020-08-05
Publication date
2024-05-10
Application filed by Ocean University of China
Priority to CN202010777773.6A
Publication of CN114066977A
Application granted
Publication of CN114066977B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/60: Rotation of whole images or parts thereof
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30244: Camera pose


Abstract

The invention provides an incremental camera rotation estimation method comprising the following steps. S1: selecting an initial image triplet from the epipolar-geometry graph as a seed view and estimating the absolute rotations of the seed view; S2: selecting the next optimal view for absolute pose estimation according to the currently estimated absolute rotations; S3: estimating the absolute rotation of the next optimal view, optimizing the absolute rotation estimate, and taking the optimized absolute rotation as the camera rotation estimate; S4: repeating S2 and S3 until the absolute rotation estimates of all cameras are obtained. The method realizes absolute rotation estimation in an incremental manner and estimates the rotations based on optimally selected views, improving the robustness and accuracy of rotation estimation.

Description

Incremental camera rotation estimation method
Technical Field
The invention relates to the technical field of image processing, in particular to an incremental camera rotation estimation method.
Background
Structure from motion is a key step in image-based three-dimensional modeling of large-scale scenes and has developed rapidly in recent years. Its input is image feature matches and its output is the absolute camera poses and the scene structure, so the real motion trajectory of the camera can be estimated from the motion of the image content. It is widely applied in fields such as robot SLAM and autonomous driving.
According to how the camera poses are initialized, structure-from-motion methods can be roughly divided into incremental and global ones. Incremental methods initialize the camera poses by iteratively performing camera pose estimation and scene structure expansion; random sample consensus (RANSAC) and bundle adjustment techniques are introduced in the iterative process to cope with the unavoidable feature matching outliers. Global methods initialize the camera poses using motion averaging techniques, comprising rotation averaging and translation averaging. Owing to the frequent calls to RANSAC-based model estimation and bundle-adjustment-based parameter optimization, incremental structure-from-motion methods are generally more accurate and robust than global ones.
Rotation averaging refers to estimating the absolute rotations of the cameras given relative rotation measurements, which are typically obtained by estimating and decomposing the essential matrix. Most global structure-from-motion methods adopt the strategy of performing rotation averaging and translation averaging sequentially; to simplify the problem, the absolute rotations are fixed and introduced as known quantities when the absolute translations are estimated. Rotation averaging is therefore critical in global structure-from-motion methods.
However, the rotation averaging problem remains largely unsolved, owing to the unavoidable measurement outliers among the relative rotations on the epipolar-geometry graph caused by feature mismatches. This phenomenon is particularly evident for image collections downloaded from the Internet. To address this problem, existing approaches either design robust loss functions to make the optimization process more robust, or develop outlier filtering strategies to clean the epipolar-geometry graph contaminated by outliers. Although proven effective and integrated into some global structure-from-motion pipelines, these approaches still suffer from drawbacks in accuracy and robustness.
Disclosure of Invention
The invention aims to solve the above technical problems by providing an incremental camera rotation estimation method with high accuracy and high robustness.
In order to achieve the above purpose, the invention adopts the following technical scheme:
an incremental camera rotation estimation method, comprising the following steps:
S1: selecting an initial image triplet from the epipolar-geometry graph as a seed view, and estimating the absolute rotations of the seed view;
S2: selecting the next optimal view for absolute pose estimation according to the currently estimated absolute rotations;
S3: estimating the absolute rotation of the next optimal view, optimizing the absolute rotation estimate, and taking the optimized absolute rotation as the camera rotation estimate;
S4: repeating S2 and S3 until the absolute rotation estimates of all cameras are obtained.
In some embodiments of the present invention, after the initial triplet is selected, it is further optimized, and the optimized image triplet is taken as the seed view; the method for optimizing the initial triplet is as follows:

selecting the images numbered i, j and k to form an initial triplet, denoted $t_{ijk}$;

acquiring the absolute rotation of each image in the local coordinate system of the triplet:

$$\{\hat{R}_i,\hat{R}_j,\hat{R}_k\}=\mathop{\arg\min}_{\{R_i,R_j,R_k\}}\; n_{ij}\,d(R_{ij},R_jR_i^{\top})+n_{ik}\,d(R_{ik},R_kR_i^{\top})+n_{jk}\,d(R_{jk},R_kR_j^{\top})$$

wherein $\hat{R}_i$ represents the absolute rotation of the optimized image i, $\hat{R}_j$ represents the absolute rotation of the optimized image j, and $\hat{R}_k$ represents the absolute rotation of the optimized image k; $n_{ij}$, $n_{ik}$ and $n_{jk}$ denote the numbers of feature matches between the image pairs (i, j), (i, k) and (j, k); $d(R_{ij},R_jR_i^{\top})$ denotes the angular distance between the measured value $R_{ij}$ and the estimated value $R_jR_i^{\top}$ of the relative rotation of the image pair (i, j), and similarly for the image pairs (i, k) and (j, k); the angular distance $d(R_a,R_b)=\|\log(R_aR_b^{\top})^{\vee}\|_2$ is the 2-norm of the rotation vector of the relative rotation between its two arguments;

based on the selection cost $c_{ijk}$ of the triplet and the initial triplet $t_{ijk}$, determining the final optimized triplet as the seed view, specifically:

$$\{i^*,j^*,k^*\}=\mathop{\arg\min}_{t_{ijk}}\; c_{ijk}$$

wherein $i^*, j^*, k^*$ represent the optimized image numbers of the initial triplet, and the images are selected from the epipolar-geometry graph according to these numbers to construct the image triplet; $\hat{R}_{i^*}$ is the optimized absolute rotation of image $i^*$, $\hat{R}_{j^*}$ is the optimized absolute rotation of image $j^*$, and $\hat{R}_{k^*}$ is the optimized absolute rotation of image $k^*$.
In some embodiments of the present invention, the first N edges of the epipolar-geometry graph with the greatest numbers of feature matches are selected to form a triplet set, and the image triplets are selected from this triplet set as initial triplets.
In some embodiments of the present invention, the method for selecting the next optimal view is:

the vertices $v_{i1}$ of the images whose absolute rotations have currently been estimated, starting from the initial image triplet, form the set $\mathcal{V}_1$;

the vertices $v_{i2}$ of the images outside the initial triplet whose absolute rotations have not yet been estimated form the set $\mathcal{V}_2$;

in the set $\mathcal{V}_2$, the vertices sharing edges with vertices in $\mathcal{V}_1$ are denoted $v_m$, and their set is denoted $\mathcal{V}_{1m}$;

acquiring the shared edges $e_{im}$ between vertex $v_m$ and the vertex set $\mathcal{V}_1$ to construct the shared edge set $\varepsilon_{1m}$, denoted $e_{im}\in\varepsilon_{1m}$;

computing the absolute rotation of vertex $v_m$ corresponding to edge $e_{im}$:

$$\hat{R}_m^i=R_{im}R_i$$

wherein $\hat{R}_m^i$ represents the absolute rotation of the camera corresponding to vertex $v_m$ computed from edge $e_{im}$, $R_{im}$ represents the relative rotation between the two cameras connected by edge $e_{im}$, and $R_i$ represents the already estimated absolute rotation of vertex $v_{i1}$;

calculating the selection cost $c_m^i$ of each absolute rotation $\hat{R}_m^i$ in $\{\hat{R}_m^i\}$:

$$c_m^i=\sum_{e_{jm}\in\varepsilon_{1m},\,j\neq i}n_{jm}\,d(\hat{R}_m^i,\hat{R}_m^j)$$

wherein $e_{jm}$ represents any edge in $\varepsilon_{1m}$ other than $e_{im}$, $n_{jm}$ represents the number of feature matches between the image pair connected by edge $e_{jm}$, and $\hat{R}_m^j$ represents the absolute rotation of the camera corresponding to vertex $v_m$ computed from edge $e_{jm}$;

based on the selection costs $c_m^i$ and the edges $e_{jm}$, determining the optimized value of the index i, denoted $i^*$:

$$i^*=\mathop{\arg\min}_{e_{im}\in\varepsilon_{1m}}c_m^i$$

$i^*$ identifies the representative absolute rotation $\hat{R}_m^{i^*}$ of the set $\{\hat{R}_m^i\}$;

calculating the corresponding selection cost $c_m^{i^*}$ based on the index $i^*$;

the next optimal view is selected as follows:

$$m^*=\mathop{\arg\min}_{v_m\in\mathcal{V}_{1m}}c_m^{i^*}$$

wherein $m^*$ represents the selection index of the next optimal view, and $c_m^{i^*}$ represents the selection cost of the representative absolute rotation of the set $\{\hat{R}_m^i\}$; based on the index $m^*$, the vertex $v_{m^*}$ is selected from the vertex set $\mathcal{V}_{1m}$ to construct the optimal view.
In some embodiments of the invention, in the set $\mathcal{V}_2$, the first n vertices with the greatest numbers of edges shared with $\mathcal{V}_1$ are selected to construct $\mathcal{V}_{1m}$.
In some embodiments of the present invention, the method for optimizing the absolute rotation estimate includes a local optimization method, the local optimization method being:

the vertex of the next optimal view is denoted $v_{m^*}$, and its most recently estimated absolute rotation is $\hat{R}_{m^*}$;

acquiring the inlier edge set:

$$\hat{\varepsilon}_{1m^*}=\{e_{im^*}\ |\ e_{im^*}\in\varepsilon_{1m^*},\ d(R_{im^*},\hat{R}_{m^*}\hat{R}_i^{\top})<\theta_{th}\}$$

wherein $\varepsilon_{1m^*}$ represents the edge set between the vertex set $\mathcal{V}_1$ and the vertex $v_{m^*}$, $e_{im^*}$ represents any edge in $\varepsilon_{1m^*}$, $\hat{\varepsilon}_{1m^*}$ represents the inlier edge set of $\varepsilon_{1m^*}$, $R_{im^*}$ represents the relative rotation between the two cameras connected by edge $e_{im^*}$, $\hat{R}_i$ represents the current estimate of the absolute rotation corresponding to vertex $v_i$, and $\theta_{th}$ represents the error threshold on the angular distance between two rotations;

optimizing $\hat{R}_{m^*}$ by:

$$\hat{R}_{m^*}^{\mathrm{opt}}=\mathop{\arg\min}_{R_{m^*}}\sum_{e_{im^*}\in\hat{\varepsilon}_{1m^*}}n_{im^*}\,d(R_{im^*},R_{m^*}\hat{R}_i^{\top})$$

wherein $\hat{R}_{m^*}^{\mathrm{opt}}$ represents the weighted local optimization result of the absolute rotation $\hat{R}_{m^*}$, $e_{im^*}$ represents any edge in $\hat{\varepsilon}_{1m^*}$, and $n_{im^*}$ and $R_{im^*}$ respectively represent the number of feature matches and the relative rotation between the image pair connected by edge $e_{im^*}$.
In some embodiments of the present invention, the method for optimizing the absolute rotation estimate further includes a global optimization method, the global optimization method being:

obtaining the inlier edge set $\hat{\varepsilon}_1$ from the edge set $\varepsilon_1$ of all currently estimated absolute rotations:

$$\hat{\varepsilon}_1=\{e_{ij}\ |\ e_{ij}\in\varepsilon_1,\ d(R_{ij},\hat{R}_j\hat{R}_i^{\top})<\theta_{th}\}$$

wherein $\varepsilon_1$ represents the edge set between all vertices in $\mathcal{V}_1$, $e_{ij}$ represents any edge in $\varepsilon_1$, $R_{ij}$ represents the relative rotation between the two cameras connected by edge $e_{ij}$, and $\hat{R}_j$ represents the current estimate of the absolute rotation corresponding to vertex $v_j$;

globally optimizing all absolute rotations in $\mathcal{V}_1$ by:

$$\{\hat{R}_i^{\mathrm{opt}}\}=\mathop{\arg\min}_{\{R_i\}}\sum_{e_{ij}\in\hat{\varepsilon}_1}n_{ij}\,d(R_{ij},R_jR_i^{\top})$$

wherein $\{\hat{R}_i^{\mathrm{opt}}\}$ represents the globally optimized set of absolute rotations, $e_{ij}$ represents any edge in $\hat{\varepsilon}_1$, and $n_{ij}$ and $R_{ij}$ respectively represent the number of feature matches and the relative rotation between the image pair connected by edge $e_{ij}$.
In some embodiments of the invention, global optimization of the rotation estimates is performed after the growth ratio of the number of currently estimated absolute rotations reaches a threshold.
Compared with the prior art, the camera rotation estimation method of the invention has the following beneficial effects:

unlike traditional rotation averaging methods, which estimate all absolute rotations simultaneously, the method realizes absolute rotation estimation in an incremental manner and estimates the rotations based on optimally selected views, which improves the robustness and accuracy of the rotation estimation.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of an incremental camera rotation estimation method.
Detailed Description
In order to make the technical problems to be solved, the technical solutions and the beneficial effects clearer, the invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.

The invention provides an incremental camera rotation estimation method for estimating the absolute camera rotations in a large-scale scene modeling system.
The data source of the incremental camera rotation estimation method is an epipolar-geometry graph, denoted $\mathcal{G}=\{\mathcal{V},\varepsilon\}$, where $\mathcal{V}$ is the vertex set of the epipolar-geometry graph and $\varepsilon$ is its edge set.

The input of the incremental rotation estimation algorithm comprises the relative rotations between the matched image pairs and the numbers of feature matches, denoted $\{(R_{ij},n_{ij})\ |\ e_{ij}\in\varepsilon\}$, where $e_{ij}$ denotes any edge, $R_{ij}$ denotes the relative rotation between the image pair connected by that edge, and $n_{ij}$ denotes the number of feature matches between that image pair. The output of the incremental rotation estimation algorithm is the optimized absolute rotation of each camera, denoted $\{\hat{R}_i\ |\ v_i\in\mathcal{V}\}$, where $v_i$ denotes any vertex of $\mathcal{V}$ and $\hat{R}_i$ denotes the optimized absolute rotation of the image corresponding to $v_i$.
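For illustration only, the input and output above map naturally onto a dictionary keyed by ordered vertex pairs. The following Python sketch is not part of the patented method; the function name `build_graph` and the composition convention $R_j=R_{ij}R_i$ are assumptions of the sketch:

```python
import numpy as np

def build_graph(pairwise):
    """Epipolar-geometry graph as a dict mapping (i, j) -> (R_ij, n_ij).

    `pairwise` is an iterable of (i, j, R_ij, n_ij), where R_ij is the 3x3
    relative rotation taking the frame of camera i to that of camera j
    (so that R_j = R_ij @ R_i) and n_ij is the number of feature matches
    of the image pair.  Each edge is stored in both directions, with
    R_ji = R_ij^T, so later steps can look an edge up from either endpoint.
    """
    graph = {}
    for i, j, R_ij, n_ij in pairwise:
        R_ij = np.asarray(R_ij, dtype=float)
        graph[(i, j)] = (R_ij, n_ij)
        graph[(j, i)] = (R_ij.T, n_ij)
    return graph
```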
The incremental rotation estimation algorithm is as follows.

S1: selecting an image triplet from the epipolar-geometry graph as a seed view and estimating the absolute rotations of the seed view. Specifically, an initial triplet selection strategy based on local optimization is proposed to realize the seed view selection and construction.
The method of selecting a seed view based on the triplet is as follows.
Before the initial triplet selection, it should be noted that the epipolar-geometry graph typically contains a large number of triplets, especially when the number of vertices is large. To make the triplet selection process more efficient, the invention selects triplets based on their numbers of feature matches.
The first N edges of the epipolar-geometry graph with the greatest numbers of feature matches are selected to form triplets, and the triplet set is denoted $\mathcal{T}$, where $t_{ijk}$ represents any triplet in it. N can be chosen according to the requirements of the algorithm, and its value determines the size of the triplet view sample set; in this embodiment, N is set to 100.
For each triplet in $\mathcal{T}$, the invention acquires the absolute rotation of each image in the local coordinate system of the triplet by the following triplet-based optimization:

$$\{\hat{R}_i,\hat{R}_j,\hat{R}_k\}=\mathop{\arg\min}_{\{R_i,R_j,R_k\}}\; n_{ij}\,d(R_{ij},R_jR_i^{\top})+n_{ik}\,d(R_{ik},R_kR_i^{\top})+n_{jk}\,d(R_{jk},R_kR_j^{\top})$$

wherein $\hat{R}_i$, $\hat{R}_j$ and $\hat{R}_k$ represent the absolute rotations of the optimized images i, j and k; $n_{ij}$, $n_{ik}$ and $n_{jk}$ denote the numbers of feature matches between the image pairs (i, j), (i, k) and (j, k); $d(R_{ij},R_jR_i^{\top})$ denotes the angular distance between the measured value $R_{ij}$ and the estimated value $R_jR_i^{\top}$ of the relative rotation of the image pair (i, j), and similarly for the image pairs (i, k) and (j, k); the angular distance $d(R_a,R_b)=\|\log(R_aR_b^{\top})^{\vee}\|_2$ is the 2-norm of the rotation vector of the relative rotation between its two arguments.

The selection cost of the image triplet over images i, j and k is denoted $c_{ijk}$ for triplet $t_{ijk}$, wherein:

$$c_{ijk}=c_{ij}+c_{ik}+c_{jk}$$

with $c_{ij}=d(R_{ij},\hat{R}_j\hat{R}_i^{\top})$, $c_{ik}=d(R_{ik},\hat{R}_k\hat{R}_i^{\top})$ and $c_{jk}=d(R_{jk},\hat{R}_k\hat{R}_j^{\top})$ the angular-distance residuals evaluated at the optimized rotations.

Based on the selection cost $c_{ijk}$ of the triplet and the initial triplet $t_{ijk}$, the final optimized triplet is determined as the seed view. Specifically:

$$\{i^*,j^*,k^*\}=\mathop{\arg\min}_{t_{ijk}\in\mathcal{T}}\; c_{ijk}$$

wherein $i^*, j^*, k^*$ denote the image numbers of the selected initial triplet, and $\hat{R}_{i^*}$, $\hat{R}_{j^*}$ and $\hat{R}_{k^*}$ are the optimized absolute rotations of images $i^*$, $j^*$ and $k^*$.
The three triplet-based optimized absolute rotations form the seed view construction of the invention. In the epipolar-geometry graph, the images are selected according to the image numbers $i^*, j^*, k^*$ to construct the initial triplet.
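To make the seed step concrete, the sketch below locally optimizes one candidate triplet and returns its selection cost $c_{ijk}$. It is a minimal sketch under stated assumptions: $R_i$ is fixed to the identity to remove the gauge freedom, the angular distance is the 2-norm of the relative rotation vector as defined above, and scipy's generic Nelder-Mead minimizer stands in for a dedicated rotation solver; the function names `angular_distance` and `optimize_triplet` are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def angular_distance(Ra, Rb):
    # Angle between the two rotations: 2-norm of the rotation vector of Ra @ Rb^T.
    return np.linalg.norm(Rotation.from_matrix(Ra @ Rb.T).as_rotvec())

def optimize_triplet(R_ij, R_ik, R_jk, n_ij, n_ik, n_jk):
    """Triplet-based local optimization; returns (R_i, R_j, R_k) and c_ijk."""
    def cost(x):
        R_j = Rotation.from_rotvec(x[:3]).as_matrix()
        R_k = Rotation.from_rotvec(x[3:]).as_matrix()
        # R_i is fixed to the identity, so R_j @ R_i^T reduces to R_j, etc.
        return (n_ij * angular_distance(R_ij, R_j)
                + n_ik * angular_distance(R_ik, R_k)
                + n_jk * angular_distance(R_jk, R_k @ R_j.T))

    # Initialize the two free rotations by chaining the measurements from image i.
    x0 = np.concatenate([Rotation.from_matrix(R_ij).as_rotvec(),
                         Rotation.from_matrix(R_ik).as_rotvec()])
    res = minimize(cost, x0, method="Nelder-Mead")
    R_j = Rotation.from_rotvec(res.x[:3]).as_matrix()
    R_k = Rotation.from_rotvec(res.x[3:]).as_matrix()
    # Unweighted angular residuals give the selection cost c_ijk.
    c_ijk = (angular_distance(R_ij, R_j) + angular_distance(R_ik, R_k)
             + angular_distance(R_jk, R_k @ R_j.T))
    return (np.eye(3), R_j, R_k), c_ijk
```

The seed view would then be the candidate triplet in $\mathcal{T}$ with the smallest returned $c_{ijk}$.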
In addition to the above method, the seed view may also be selected by the following embodiment. In the prior art, the image pair with the largest number of feature matches can be selected from the epipolar-geometry graph as the initial seed. However, repeated textures, symmetric structures and similar problems sometimes make the matching of local image features unreliable, so in pursuit of robustness and processing accuracy this method is not adopted here.

S2: selecting the next optimal view for absolute pose estimation according to the currently estimated absolute rotations.

The next optimal view could simply be chosen as the camera with the largest number of edges connecting it to the cameras whose absolute rotations have already been estimated. However, since different edges in $\varepsilon$ have different measurement errors, all edges should not be treated equally during the next optimal view selection. To improve the robustness of the method, the invention proposes the following optimal view selection strategy based on a weighted support set.
The vertices $v_{i1}$ of the images whose absolute rotations have currently been estimated, starting from the initial image triplet, form the set $\mathcal{V}_1$; the vertices $v_{i2}$ of the images outside the initial triplet whose absolute rotations have not yet been estimated form the set $\mathcal{V}_2$; here $\mathcal{V}=\mathcal{V}_1\cup\mathcal{V}_2$ is the vertex set of the epipolar-geometry graph.

In the set $\mathcal{V}_2$, vertices are selected; specifically, the vertices sharing edges with vertices in $\mathcal{V}_1$ are selected, each such vertex is denoted $v_m$, and their set is denoted $\mathcal{V}_{1m}$. To save computing resources, $\mathcal{V}_{1m}$ can be constructed from the first n vertices of $\mathcal{V}_2$ with the greatest numbers of edges shared with $\mathcal{V}_1$; in this embodiment n is 10, i.e. only the top 10 vertices of $\mathcal{V}_2$ with the largest numbers of edges shared with $\mathcal{V}_1$ are considered.

The shared edges between vertex $v_m$ and the vertex set $\mathcal{V}_1$ are acquired, and the shared edge set is denoted $\varepsilon_{1m}$, with $e_{im}\in\varepsilon_{1m}$ representing any edge in it.
For each edge $e_{im}$ in $\varepsilon_{1m}$, the absolute rotation of $v_m$ can be computed:

$$\hat{R}_m^i=R_{im}R_i$$

wherein $\hat{R}_m^i$ represents the absolute rotation of the camera corresponding to vertex $v_m$ computed from edge $e_{im}$, $R_{im}$ represents the relative rotation between the two cameras connected by edge $e_{im}$, and $R_i$ represents the absolute rotation of the corresponding vertex $v_{i1}$, which has already been estimated.
Ideally, the elements of the rotation set $\{\hat{R}_m^i\}$ should be consistent with one another. In practice, however, this does not hold, owing to the relative rotation measurement errors.
The selection cost $c_m^i$ of each absolute rotation $\hat{R}_m^i$ in $\{\hat{R}_m^i\}$ is calculated as:

$$c_m^i=\sum_{e_{jm}\in\varepsilon_{1m},\,j\neq i}n_{jm}\,d(\hat{R}_m^i,\hat{R}_m^j)$$

wherein $c_m^i$ represents the selection cost of the absolute rotation $\hat{R}_m^i$, $e_{jm}$ represents any edge in $\varepsilon_{1m}$ other than $e_{im}$, $n_{jm}$ represents the number of feature matches between the image pair connected by edge $e_{jm}$, and $\hat{R}_m^j$ represents the absolute rotation of the camera corresponding to vertex $v_m$ computed from edge $e_{jm}$.

Based on the selection costs $c_m^i$ and the edges $e_{jm}$, the optimized value of the index i, denoted $i^*$, is determined:

$$i^*=\mathop{\arg\min}_{e_{im}\in\varepsilon_{1m}}c_m^i$$

Based on the index $i^*$, the optimized absolute rotation $\hat{R}_m^{i^*}$ is selected from the absolute rotation set $\{\hat{R}_m^i\}$, and the corresponding selection cost is $c_m^{i^*}$.
The next optimal view is selected as follows:

$$m^*=\mathop{\arg\min}_{v_m\in\mathcal{V}_{1m}}c_m^{i^*}$$

wherein $m^*$ represents the selection index of the next optimal view, and $c_m^{i^*}$ represents the selection cost of the representative absolute rotation of the set $\{\hat{R}_m^i\}$. Based on the index $m^*$, the vertex $v_{m^*}$ is selected from the vertex set $\mathcal{V}_{1m}$ to construct the optimal view.
The next optimal view selection strategy proposed by the invention is based on a support set weighted by the numbers of feature matches and the relative rotation errors, and is therefore less sensitive to relative rotation measurement outliers.
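The selection strategy above can be sketched as follows, reusing `angular_distance` and the bidirectional graph dictionary from the earlier sketches. The requirement of at least two shared edges, needed for the support cost to be meaningful, is an assumption of this sketch:

```python
def select_next_best_view(estimated, candidates, graph):
    """Weighted-support-set selection of the next optimal view (a sketch).

    estimated:  dict vertex -> current 3x3 absolute rotation estimate (V1)
    candidates: vertices not yet estimated (a subset of V2)
    graph:      dict (i, m) -> (R_im, n_im), stored in both directions

    Returns (m*, representative rotation for m*, its selection cost), or
    None if no candidate shares at least two edges with the estimated set.
    """
    best = None
    for m in candidates:
        # Shared edges e_im between v_m and the estimated vertex set V1.
        shared = [(i,) + graph[(i, m)] for i in estimated if (i, m) in graph]
        if len(shared) < 2:
            continue
        # Each shared edge proposes an absolute rotation R_m^i = R_im @ R_i.
        proposals = {i: R_im @ estimated[i] for i, R_im, _ in shared}
        # Cost of proposal i: feature-match-weighted disagreement with the others.
        costs = {i: sum(n_jm * angular_distance(proposals[i], proposals[j])
                        for j, _, n_jm in shared if j != i)
                 for i in proposals}
        i_star = min(costs, key=costs.get)
        if best is None or costs[i_star] < best[2]:
            best = (m, proposals[i_star], costs[i_star])
    return best
```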
S3: and estimating the absolute rotation of the next optimal view, and optimizing the absolute rotation estimated value, wherein the absolute rotation after optimization is the camera rotation estimated value.
The vertex of the next optimal view is denoted $v_{m^*}$ and its most recently estimated absolute rotation is denoted $\hat{R}_{m^*}$. To obtain a more accurate and robust estimate, the invention optimizes the currently estimated absolute rotations, either locally or globally.

Local optimization means that the most recently estimated absolute rotation $\hat{R}_{m^*}$ is optimized while the other absolute rotations are kept fixed. Global optimization means that all currently estimated absolute rotations $\{\hat{R}_i\}$ are optimized simultaneously.

To ensure processing efficiency, the invention normally applies local optimization and performs global optimization only intermittently, i.e. a global optimization is carried out only after the number of currently estimated absolute rotations has grown to a certain degree; in this implementation, a global optimization is performed every time the number of absolute rotations has increased by 40%. Here 40% is the set ratio threshold, and in some embodiments other ratio thresholds may be set according to the requirements. As for the initial triplet and the next optimal view selection, the local and global optimization here is also performed in a weighted manner. In addition, after each global optimization, a re-rotation averaging is performed on the local epipolar-geometry graph.
The method of local optimization is as follows.
For the weighted local optimization, based on the selection and initialization of the next optimal view, the inlier edge set is first obtained by:

$$\hat{\varepsilon}_{1m^*}=\{e_{im^*}\ |\ e_{im^*}\in\varepsilon_{1m^*},\ d(R_{im^*},\hat{R}_{m^*}\hat{R}_i^{\top})<\theta_{th}\}$$

wherein $\varepsilon_{1m^*}$ represents the edge set between the vertex set $\mathcal{V}_1$ and the vertex $v_{m^*}$, $e_{im^*}$ represents any edge in $\varepsilon_{1m^*}$, $\hat{\varepsilon}_{1m^*}$ represents the inlier edge set of $\varepsilon_{1m^*}$, $R_{im^*}$ represents the relative rotation between the two cameras connected by edge $e_{im^*}$, $\hat{R}_i$ represents the current estimate of the absolute rotation corresponding to vertex $v_i$, and $\theta_{th}$ represents the error threshold on the angular distance between two rotations.

The absolute rotation $\hat{R}_{m^*}$ is then optimized by the following formula:

$$\hat{R}_{m^*}^{\mathrm{opt}}=\mathop{\arg\min}_{R_{m^*}}\sum_{e_{im^*}\in\hat{\varepsilon}_{1m^*}}n_{im^*}\,d(R_{im^*},R_{m^*}\hat{R}_i^{\top})$$

wherein $\hat{R}_{m^*}^{\mathrm{opt}}$ represents the weighted local optimization result of the absolute rotation $\hat{R}_{m^*}$, $e_{im^*}$ represents any edge in $\hat{\varepsilon}_{1m^*}$, and $n_{im^*}$ and $R_{im^*}$ respectively represent the number of feature matches and the relative rotation between the image pair connected by edge $e_{im^*}$.
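A sketch of this weighted local optimization follows, reusing `angular_distance` and the imports of the triplet sketch. The 5-degree inlier threshold is a hypothetical placeholder, since the patent leaves $\theta_{th}$ as a parameter:

```python
def local_optimize(R_m_init, neighbors, theta_th=np.deg2rad(5.0)):
    """Refine one newly added absolute rotation against its inlier edges.

    R_m_init:  initial 3x3 estimate of the new view's absolute rotation
    neighbors: list of (R_i, R_im, n_im) over edges to already-estimated views
    """
    # Keep only edges consistent with the initial estimate (inlier edge set).
    inliers = [(R_i, R_im, n) for R_i, R_im, n in neighbors
               if angular_distance(R_im, R_m_init @ R_i.T) < theta_th]
    if not inliers:
        return R_m_init

    def cost(x):
        R_m = Rotation.from_rotvec(x).as_matrix()
        return sum(n * angular_distance(R_im, R_m @ R_i.T)
                   for R_i, R_im, n in inliers)

    x0 = Rotation.from_matrix(R_m_init).as_rotvec()
    res = minimize(cost, x0, method="Nelder-Mead")
    return Rotation.from_rotvec(res.x).as_matrix()
```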
The method of global optimization is as follows.
For the weighted global optimization, similarly to the weighted local optimization, the invention first obtains the inlier edge set $\hat{\varepsilon}_1$ from the edge set $\varepsilon_1$ of all currently estimated absolute rotations using the following formula:

$$\hat{\varepsilon}_1=\{e_{ij}\ |\ e_{ij}\in\varepsilon_1,\ d(R_{ij},\hat{R}_j\hat{R}_i^{\top})<\theta_{th}\}$$

wherein $\varepsilon_1$ represents the edge set between all vertices in $\mathcal{V}_1$, $e_{ij}$ represents any edge in $\varepsilon_1$, $R_{ij}$ represents the relative rotation between the two cameras connected by edge $e_{ij}$, and $\hat{R}_j$ represents the current estimate of the absolute rotation corresponding to vertex $v_j$. All absolute rotations in $\mathcal{V}_1$ are then globally optimized by:

$$\{\hat{R}_i^{\mathrm{opt}}\}=\mathop{\arg\min}_{\{R_i\}}\sum_{e_{ij}\in\hat{\varepsilon}_1}n_{ij}\,d(R_{ij},R_jR_i^{\top})$$

wherein $\{\hat{R}_i^{\mathrm{opt}}\}$ represents the globally optimized set of absolute rotations, $e_{ij}$ represents any edge in $\hat{\varepsilon}_1$, and $n_{ij}$ and $R_{ij}$ respectively represent the number of feature matches and the relative rotation between the image pair connected by edge $e_{ij}$.
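The corresponding global step can be sketched in the same style, jointly refining all rotation vectors with the first vertex held fixed as the gauge. A generic minimizer again stands in for a dedicated rotation-averaging solver, so this is a readable sketch rather than a scalable implementation:

```python
def global_optimize(rotations, graph, theta_th=np.deg2rad(5.0)):
    """Jointly refine all currently estimated absolute rotations.

    rotations: dict vertex -> current 3x3 estimate
    graph:     dict (i, j) -> (R_ij, n_ij), stored in both directions
    """
    verts = sorted(rotations)  # vertices assumed orderable (image numbers)
    # Inlier edge set w.r.t. the current estimates; keep one direction only.
    inliers = [(i, j, R_ij, n) for (i, j), (R_ij, n) in graph.items()
               if i < j and i in rotations and j in rotations
               and angular_distance(R_ij, rotations[j] @ rotations[i].T) < theta_th]
    if len(verts) < 2 or not inliers:
        return dict(rotations)

    def unpack(x):
        R = {verts[0]: rotations[verts[0]]}  # gauge: first rotation fixed
        for a, v in enumerate(verts[1:]):
            R[v] = Rotation.from_rotvec(x[3 * a:3 * a + 3]).as_matrix()
        return R

    def cost(x):
        R = unpack(x)
        return sum(n * angular_distance(R_ij, R[j] @ R[i].T)
                   for i, j, R_ij, n in inliers)

    x0 = np.concatenate([Rotation.from_matrix(rotations[v]).as_rotvec()
                         for v in verts[1:]])
    res = minimize(cost, x0, method="Nelder-Mead")
    return unpack(res.x)
```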
In some embodiments of the invention, to obtain a more robust absolute rotation estimation result, a re-rotation averaging is further performed after the global optimization of the absolute rotations.

For the re-rotation averaging, based on the globally optimized absolute rotation set $\{\hat{R}_i^{\mathrm{opt}}\}$, the invention re-acquires the inlier edge set and re-optimizes the currently estimated absolute rotations in the same manner as the weighted global optimization.
S4: repeating S2 and S3 until the absolute rotation estimates of all cameras are obtained.
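Putting the steps together, the incremental loop can be sketched on top of the functions above; `seed_rotations` is assumed to come from the S1 triplet step, the 40% growth trigger follows the embodiment described above, and the re-rotation averaging after each global pass is omitted for brevity:

```python
def incremental_rotation_estimation(graph, seed_rotations, growth_ratio=0.4):
    """Incremental rotation estimation: repeat S2 and S3 until done (S4)."""
    rotations = dict(seed_rotations)
    last_global = len(rotations)
    all_views = {v for (v, _) in graph}
    while len(rotations) < len(all_views):
        remaining = all_views - rotations.keys()
        picked = select_next_best_view(rotations, remaining, graph)    # S2
        if picked is None:
            break  # no remaining view shares enough edges with the estimated set
        m, R_m, _ = picked
        neighbors = [(rotations[i],) + graph[(i, m)]
                     for i in rotations if (i, m) in graph]
        rotations[m] = local_optimize(R_m, neighbors)                  # S3, local
        # Intermittent weighted global optimization after 40% growth.
        if len(rotations) >= (1 + growth_ratio) * last_global:
            rotations = global_optimize(rotations, graph)              # S3, global
            last_global = len(rotations)
    return rotations                                                   # S4
```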
To test the effect of the invention, experiments were performed on the 1DSfM dataset. During testing, the absolute camera rotations obtained by Bundler calibration were taken as the ground truth, and the median rotation-angle error was taken as the evaluation metric.
Table 1 ablation experimental results
To verify the effectiveness of the key techniques proposed in the invention, several ablation experiments were performed: without the initial triplet selection based on local optimization (ablation one), without the next optimal view selection based on the weighted support set (ablation two), without weighting (ablation three), without re-rotation averaging (ablation four), without weighted global optimization (ablation five), and without weighted local optimization (ablation six). These six cases are briefly described below.

1) Without the initial triplet selection based on local optimization, the rotation averaging is initialized by selecting the image pair with the largest number of feature matches; 2) without the next optimal view selection based on the weighted support set, the next optimal view is taken as the camera with the largest (weighted) number of edges connecting it to the cameras whose absolute rotations have already been estimated; 3) without weighting, the numbers of feature matches are not considered in the optimization process, and all relative rotation measurements are treated uniformly; 4) without re-rotation averaging, no re-rotation averaging is performed after each weighted global optimization; 5) without weighted global optimization, neither weighted global optimization nor re-rotation averaging is performed during the incremental absolute rotation computation; 6) without weighted local optimization, no optimization operations are performed during the incremental absolute rotation computation.
The results of the ablation experiments are shown in Table 1, from which it is clear that, relative to the full method of the invention, the accuracy of the rotation averaging results is significantly reduced in most ablation settings on most of the test data. The key techniques proposed by the invention therefore all contribute to the accuracy and robustness of the method.
In a comparative experiment, the method of the invention was compared with six other methods, each disclosed in the following contrast documents:

Contrast document 1: R. Hartley, J. Trumpf, Y. Dai, and H. Li, "Rotation averaging," International Journal of Computer Vision, vol. 103, pp. 267–305, 2013.

Contrast document 2: D. Crandall, A. Owens, N. Snavely, and D. Huttenlocher, "SfM with MRFs: Discrete-continuous optimization for large-scale structure from motion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 12, pp. 2841–2853, 2013.

Contrast document 3: A. Chatterjee and V. M. Govindu, "Efficient and robust large-scale rotation averaging," in IEEE International Conference on Computer Vision (ICCV), 2013, pp. 521–528.

Contrast document 4: A. Chatterjee and V. M. Govindu, "Robust relative rotation averaging," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 4, pp. 958–972, 2018.

Contrast document 5: V. M. Govindu, "Robustness in motion averaging," in Asian Conference on Computer Vision, 2006, pp. 457–466.

Contrast document 6: H. Cui, S. Shen, W. Gao, H. Liu, and Z. Wang, "Efficient and robust large-scale structure-from-motion via track selection and camera prioritization," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 156, pp. 202–214, 2019.
The results of the comparative experiments are shown in Table 2, from which it is clear that among all the compared methods, the method of the invention achieves the best overall results, verifying the effectiveness of the proposed incremental rotation averaging method and its robustness to relative rotation measurement outliers.
Table 2 comparative experimental results
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (4)

1. An incremental camera rotation estimation method, comprising the following steps:

S1: selecting an initial image triplet from the epipolar-geometry graph, optimizing the initial triplet to obtain an optimized image triplet as a seed view, and estimating the absolute rotations of the seed view; the method for optimizing the initial triplet comprises the following steps:

selecting the images numbered i, j and k to form an initial triplet, denoted $t_{ijk}$;

acquiring the absolute rotation of each image in the local coordinate system of the triplet:

$$\{\hat{R}_i,\hat{R}_j,\hat{R}_k\}=\mathop{\arg\min}_{\{R_i,R_j,R_k\}}\; n_{ij}\,d(R_{ij},R_jR_i^{\top})+n_{ik}\,d(R_{ik},R_kR_i^{\top})+n_{jk}\,d(R_{jk},R_kR_j^{\top})$$

wherein $\hat{R}_i$ represents the absolute rotation of the optimized image i, $\hat{R}_j$ represents the absolute rotation of the optimized image j, and $\hat{R}_k$ represents the absolute rotation of the optimized image k; $n_{ij}$ denotes the number of feature matches between the image pair (i, j), $n_{ik}$ denotes the number of feature matches between the image pair (i, k), and $n_{jk}$ denotes the number of feature matches between the image pair (j, k); $d(R_{ij},R_jR_i^{\top})$ denotes the angular distance between the measured value $R_{ij}$ and the estimated value $R_jR_i^{\top}$ of the relative rotation of the image pair (i, j), and similarly for the image pairs (i, k) and (j, k); the angular distance between two rotations is the 2-norm of the rotation vector of their relative rotation;

based on the selection cost $c_{ijk}=c_{ij}+c_{ik}+c_{jk}$ of the triplet and the initial triplet $t_{ijk}$, wherein $c_{ij}$, $c_{ik}$ and $c_{jk}$ are the angular-distance residuals evaluated at the optimized rotations, determining the final optimized triplet as the seed view, specifically:

$$\{i^*,j^*,k^*\}=\mathop{\arg\min}_{t_{ijk}}\; c_{ijk}$$

wherein $i^*, j^*, k^*$ represent the optimized image numbers of the initial triplet, and the images are selected from the epipolar-geometry graph according to these numbers to construct the image triplet; $\hat{R}_{i^*}$ is the optimized absolute rotation of image $i^*$, $\hat{R}_{j^*}$ is the optimized absolute rotation of image $j^*$, and $\hat{R}_{k^*}$ is the optimized absolute rotation of image $k^*$;

S2: selecting the next optimal view for absolute pose estimation according to the currently estimated absolute rotations; the method for selecting the next optimal view comprises the following steps:

the vertices $v_{i1}$ of the images whose absolute rotations have currently been estimated, starting from the initial image triplet, form the set $\mathcal{V}_1$;

the vertices $v_{i2}$ of the images outside the initial triplet whose absolute rotations have not yet been estimated form the set $\mathcal{V}_2$;

in the set $\mathcal{V}_2$, the vertices sharing edges with vertices in $\mathcal{V}_1$ are denoted $v_m$, and their set is denoted $\mathcal{V}_{1m}$;

acquiring the shared edges $e_{im}$ between vertex $v_m$ and the vertex set $\mathcal{V}_1$ to construct the shared edge set $\varepsilon_{1m}$, denoted $e_{im}\in\varepsilon_{1m}$;

computing the absolute rotation of vertex $v_m$ corresponding to edge $e_{im}$:

$$\hat{R}_m^i=R_{im}R_i$$

wherein $\hat{R}_m^i$ represents the absolute rotation of the camera corresponding to vertex $v_m$ computed from edge $e_{im}$, $R_{im}$ represents the relative rotation between the two cameras connected by edge $e_{im}$, and $R_i$ represents the already estimated absolute rotation of vertex $v_{i1}$;

calculating the selection cost $c_m^i$ of each absolute rotation $\hat{R}_m^i$ in $\{\hat{R}_m^i\}$:

$$c_m^i=\sum_{e_{jm}\in\varepsilon_{1m},\,j\neq i}n_{jm}\,d(\hat{R}_m^i,\hat{R}_m^j)$$

wherein $e_{jm}$ represents any edge in $\varepsilon_{1m}$ other than $e_{im}$, $n_{jm}$ represents the number of feature matches between the image pair connected by edge $e_{jm}$, and $\hat{R}_m^j$ represents the absolute rotation of the camera corresponding to vertex $v_m$ computed from edge $e_{jm}$;

based on the selection costs $c_m^i$ and the edges $e_{jm}$, determining the optimized value of the index i, denoted $i^*$:

$$i^*=\mathop{\arg\min}_{e_{im}\in\varepsilon_{1m}}c_m^i$$

$i^*$ identifies the representative absolute rotation of the set $\{\hat{R}_m^i\}$;

calculating the corresponding selection cost $c_m^{i^*}$ based on the index $i^*$;

the next optimal view is selected as follows:

$$m^*=\mathop{\arg\min}_{v_m\in\mathcal{V}_{1m}}c_m^{i^*}$$

wherein $m^*$ represents the selection index of the next optimal view, and $c_m^{i^*}$ represents the selection cost of the representative absolute rotation of the set $\{\hat{R}_m^i\}$; based on the index $m^*$, the vertex $v_{m^*}$ is selected from the vertex set $\mathcal{V}_{1m}$ to construct the optimal view;

S3: estimating the absolute rotation of the next optimal view, optimizing the absolute rotation estimate, and taking the optimized absolute rotation as the camera rotation estimate; the method for optimizing the absolute rotation estimate comprises a local optimization method, the local optimization method being:

the vertex of the next optimal view is denoted $v_{m^*}$, and its most recently estimated absolute rotation is $\hat{R}_{m^*}$;

acquiring the inlier edge set:

$$\hat{\varepsilon}_{1m^*}=\{e_{im^*}\ |\ e_{im^*}\in\varepsilon_{1m^*},\ d(R_{im^*},\hat{R}_{m^*}\hat{R}_i^{\top})<\theta_{th}\}$$

wherein $\varepsilon_{1m^*}$ represents the edge set between the vertex set $\mathcal{V}_1$ and the vertex $v_{m^*}$, $e_{im^*}$ represents any edge in $\varepsilon_{1m^*}$, $\hat{\varepsilon}_{1m^*}$ represents the inlier edge set of $\varepsilon_{1m^*}$, $R_{im^*}$ represents the relative rotation between the two cameras connected by edge $e_{im^*}$, $\hat{R}_i$ represents the current estimate of the absolute rotation corresponding to vertex $v_i$, and $\theta_{th}$ represents the error threshold on the angular distance between two rotations;

optimizing $\hat{R}_{m^*}$ by:

$$\hat{R}_{m^*}^{\mathrm{opt}}=\mathop{\arg\min}_{R_{m^*}}\sum_{e_{im^*}\in\hat{\varepsilon}_{1m^*}}n_{im^*}\,d(R_{im^*},R_{m^*}\hat{R}_i^{\top})$$

wherein $\hat{R}_{m^*}^{\mathrm{opt}}$ represents the weighted local optimization result of the absolute rotation $\hat{R}_{m^*}$, $e_{im^*}$ represents any edge in $\hat{\varepsilon}_{1m^*}$, and $n_{im^*}$ and $R_{im^*}$ respectively represent the number of feature matches and the relative rotation between the image pair connected by edge $e_{im^*}$;

the method for optimizing the absolute rotation estimate further comprises a global optimization method, the global optimization method being:

obtaining the inlier edge set $\hat{\varepsilon}_1$ from the edge set $\varepsilon_1$ of all currently estimated absolute rotations:

$$\hat{\varepsilon}_1=\{e_{ij}\ |\ e_{ij}\in\varepsilon_1,\ d(R_{ij},\hat{R}_j\hat{R}_i^{\top})<\theta_{th}\}$$

wherein $\varepsilon_1$ represents the edge set between all vertices in $\mathcal{V}_1$, $e_{ij}$ represents any edge in $\varepsilon_1$, $R_{ij}$ represents the relative rotation between the two cameras connected by edge $e_{ij}$, and $\hat{R}_j$ represents the current estimate of the absolute rotation corresponding to vertex $v_j$;

globally optimizing all absolute rotations in $\mathcal{V}_1$ by:

$$\{\hat{R}_i^{\mathrm{opt}}\}=\mathop{\arg\min}_{\{R_i\}}\sum_{e_{ij}\in\hat{\varepsilon}_1}n_{ij}\,d(R_{ij},R_jR_i^{\top})$$

wherein $\{\hat{R}_i^{\mathrm{opt}}\}$ represents the globally optimized set of absolute rotations, $e_{ij}$ represents any edge in $\hat{\varepsilon}_1$, and $n_{ij}$ and $R_{ij}$ respectively represent the number of feature matches and the relative rotation between the image pair connected by edge $e_{ij}$;

S4: repeating S2 and S3 until the absolute rotation estimates of all cameras are obtained.
2. The incremental camera rotation estimation method of claim 1, wherein the first N edges of the epipolar-geometry graph with the greatest numbers of feature matches are selected to form a triplet set, and the image triplets are selected from the triplet set as initial triplets.
3. The incremental camera rotation estimation method of claim 1, wherein, in the set $\mathcal{V}_2$, the first n vertices with the greatest numbers of edges shared with $\mathcal{V}_1$ are selected to construct $\mathcal{V}_{1m}$.
4. The incremental camera rotation estimation method of claim 1, wherein the global optimization of the rotation estimates is performed after the growth ratio of the number of currently estimated absolute rotations reaches a threshold.
CN202010777773.6A 2020-08-05 2020-08-05 Incremental camera rotation estimation method Active CN114066977B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010777773.6A CN114066977B (en) 2020-08-05 2020-08-05 Incremental camera rotation estimation method


Publications (2)

Publication Number Publication Date
CN114066977A (en) 2022-02-18
CN114066977B (en) 2024-05-10

Family

ID=80232215


Country Status (1)

Country Link
CN (1) CN114066977B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007088042A1 (en) * 2006-02-02 2007-08-09 Northrop Grumman Litef Gmbh Method for determining loads on/damage to a mechanical structure
WO2018049581A1 (en) * 2016-09-14 2018-03-22 浙江大学 Method for simultaneous localization and mapping
CN109166171A (en) * 2018-08-09 2019-01-08 西北工业大学 Motion recovery structure three-dimensional reconstruction method based on global and incremental estimation
CN109658445A (en) * 2018-12-14 2019-04-19 北京旷视科技有限公司 Network training method, incremental mapping method, localization method, device and equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Jia Di, Zhu Ningdan, Yang Ninghua, Wu Si, Li Yuxiu, Zhao Mingyuan, "A survey of image matching methods," Journal of Image and Graphics, no. 5, 2019. *
Xie Lixiang, Wan Gang, Cao Xuefeng, Wang Qinghe, Wang Long, "An improved camera global position estimation method based on convex optimization," Acta Automatica Sinica, no. 3, 2018. *

Also Published As

Publication number Publication date
CN114066977A (en) 2022-02-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant