CN107680159B - Space non-cooperative target three-dimensional reconstruction method based on projection matrix - Google Patents
- Publication number: CN107680159B (application CN201710957020.1A)
- Authority
- CN
- China
- Legal status: Expired - Fee Related
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Image registration using feature-based methods
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10028—Range image; Depth image; 3D point clouds
Abstract
The invention provides a projection-matrix-based three-dimensional reconstruction method for space non-cooperative targets. First, feature points are extracted from the images captured by the left and right cameras, and corresponding feature points are matched according to an image matching principle. Next, the rotation matrix R and the translation matrix T are solved according to the epipolar geometric constraint. Finally, the coordinates of the three-dimensional space points are computed from the projection relationship between the three-dimensional points and the image plane. Provided that the image feature points are correctly matched, the internal and external camera parameters can be solved through the epipolar constraint alone, so the tedious camera calibration step is avoided, the amount of computation is reduced, the reconstruction time is shortened, and the real-time requirements of spacecraft operations can be met.
Description
Technical Field
The invention relates to a three-dimensional reconstruction method for a space non-cooperative target, and belongs to the field of three-dimensional reconstruction.
Background
With the development of aerospace technology, human exploration and exploitation of outer-space resources grow ever more intensive. A spacecraft that malfunctions, fails, or completes its mission is abandoned and floats freely in space, becoming space debris. Capture technology for space non-cooperative targets, aimed at on-orbit servicing of conventional spacecraft, failed-satellite removal, space-debris cleanup, space attack and defense, and so on, has therefore become a new development direction in the field of space robotics. Accurate position information about the target is the prerequisite for operations such as detection, approach, rendezvous and docking, and maintenance. Three-dimensional reconstruction of space non-cooperative targets is an effective technique for acquiring such target information and has accordingly become a research hotspot.
At present, scholars at home and abroad have carried out related research on three-dimensional reconstruction of space non-cooperative targets. Tomasi proposed a factorization-based reconstruction method whose core idea is to recover the geometric structure of a scene and the motion information of the camera from images using matrix factorization. Faugeras proposed using different geometric constraint information to upgrade a projective reconstruction to a Euclidean metric reconstruction, but the method applies only to objects satisfying various geometric constraints and requires a projective reconstruction to exist. Pollefeys gave a more general metric reconstruction approach using a self-calibration method within a parameter range under varying camera focal length. Longuet-Higgins proposed recovering three-dimensional structure from the motion of a camera (Structure From Motion, SFM); the main idea is to compute the camera's internal parameters and its orientation and position using key techniques such as feature point detection and geometric constraint relations, and then reconstruct a three-dimensional structural model of the scene. Yasutaka Furukawa and Jean Ponce proposed a Patch-based Multi-View Stereo algorithm (PMVS), whose idea divides into three parts: initial feature-point matching, patch generation and expansion, and patch filtering. Most of these methods solve the three-dimensional space point coordinates by triangulation, and the computation in the solving process is time-consuming, so the real-time requirements of space operations cannot be met.
Disclosure of Invention
In order to overcome the shortcomings of the prior art, the invention provides a projection-matrix-based three-dimensional reconstruction method for space non-cooperative targets. Given the pixel coordinates of points on the image plane, the three-dimensional space point coordinates can be solved from the projection matrix together with the internal and external camera parameters, enabling three-dimensional reconstruction of the space non-cooperative target.
The technical scheme adopted by the invention for solving the technical problem comprises the following steps:
(1) Project the three-dimensional space points onto the camera pixel plane with a pinhole model. Let O-x-y-z be the camera coordinate system, with the z axis pointing ahead of the camera, x to the right, y downward, and O the camera optical center. A point P in three-dimensional space, after projection through the optical center O, falls on the camera image plane O'-x'-y' at the imaging point P'. Let the coordinates of P be [X, Y, Z]^T, those of P' be [X', Y', Z']^T, and let f be the distance from the image plane to the optical center; then X' = fX/Z, Y' = fY/Z.
(2) Fix a pixel plane o-u-v on the image plane, with the origin o of the pixel coordinate system at the upper-left corner of the image plane, the u axis parallel to the x axis (pointing right) and the v axis parallel to the y axis (pointing down). The pixel coordinate system differs from the imaging plane by a scaling and a translation of the origin: the pixel coordinates are scaled by α along u and by β along v, while the origin is translated by [c_x, c_y]^T. Then u = αX' + c_x = f_x X/Z + c_x and v = βY' + c_y = f_y Y/Z + c_y, i.e. Z p' = K P, where p' = [u, v, 1]^T is the pixel coordinate, P = [X, Y, Z]^T the space point, and K the camera intrinsic parameter matrix.
(3) Using the space-point-to-pixel conversion of step (2), take a pair of registered feature points p'_1, p'_2 from the two images; then Z p'_1 = K P and Z p'_2 = K(RP + T), with R, T the rotation and translation matrices. Setting x_1 = K^{-1} p'_1 and x_2 = K^{-1} p'_2 gives x_2 = R x_1 + T. Left-multiplying both sides by T^ (the skew-symmetric matrix of T) and then by x_2^T yields the epipolar geometric constraint x_2^T T^ R x_1 = 0, from which R and T are determined by the pixel positions of the registered points.
(4) Suppose the i-th image has projection matrix N_i, camera intrinsic matrix K_i, rotation matrix R_i and translation vector T_i, and the (i+1)-th image has projection matrix N_{i+1}, intrinsic matrix K_{i+1}, rotation matrix R_{i+1} and translation vector T_{i+1}. Then N_i = K_i (R_i | T_i) and N_{i+1} = K_{i+1} (R_{i+1} | T_{i+1}). From the coordinate conversion of step (3), Z p'_i = N_i·P and Z p'_{i+1} = N_{i+1}·P. Substituting and eliminating Z yields a linear system A [X_w, Y_w, Z_w]^T = B, whose solution gives the space point coordinates [X_w, Y_w, Z_w]^T = A^{-1} B (in the least-squares sense, since A is 4×3).
The beneficial effects of the invention are: provided that the image feature points are correctly matched, the internal and external camera parameters can be solved through the epipolar geometric constraint alone, so the tedious camera calibration step is avoided, and the three-dimensional state of the unknown object is finally recovered through the projection matrix relationship. The amount of computation is reduced, the reconstruction time is shortened, and the real-time requirements of spacecraft operations can be met, which benefits operations such as detection, approach, rendezvous and docking, and maintenance by satellites, space robots, and the like.
Drawings
FIG. 1 is a schematic diagram of a pinhole camera model;
FIG. 2 is a schematic view of similar triangles;
FIG. 3 is a schematic diagram of an image coordinate system and a pixel coordinate system;
FIG. 4 is a schematic diagram of the epipolar geometric constraint;
FIG. 5 is a schematic view of a binocular stereo camera;
FIG. 6 shows the binocular stereo input images;
FIG. 7 is a three-dimensional reconstruction point cloud result under different viewing angles;
FIG. 8 shows the three-dimensional reconstruction result with texture added.
Detailed Description
The present invention will be further described with reference to the following drawings and examples, which include, but are not limited to, the following examples.
The idea of the invention is as follows: firstly, extracting characteristic points of images shot by a left camera and a right camera, and matching the corresponding characteristic points according to an image matching principle; then, according to the epipolar geometric constraint principle, a rotation matrix R and a translation matrix T are solved; and finally, calculating the coordinates of the three-dimensional space points through the projection matrix relationship between the three-dimensional space points and the image plane.
The specific solving method is as follows:
(1) Three-dimensional space points are first projected onto the camera pixel plane; this projection is modeled with a pinhole model. Let O-x-y-z be the camera coordinate system, with the z axis pointing ahead of the camera, x to the right, y downward, and O the camera optical center. A point P in three-dimensional space, after projection through the optical center O, falls on the camera image plane O'-x'-y' at the imaging point P'. Let the coordinates of P be [X, Y, Z]^T, those of P' be [X', Y', Z']^T, and let f (the focal length) be the distance from the image plane to the optical center. By similar triangles, Z/f = X/X' = Y/Y', and therefore X' = fX/Z, Y' = fY/Z.
(2) The pinhole model of step (1) describes the spatial relationship between the point P and its image. In the camera, however, one ultimately obtains discrete pixels, so the image on the imaging plane must also be sampled and quantized. To describe how the sensor converts the perceived light into image pixels, a pixel plane o-u-v is fixed on the image plane. The pixel coordinate system is usually defined as follows: the origin o lies at the upper-left corner of the image plane, the u axis is parallel to the x axis (pointing right), and the v axis is parallel to the y axis (pointing down). The pixel coordinate system differs from the imaging plane by a scaling and a translation of the origin: the pixel coordinates are scaled by α along u and by β along v, while the origin is translated by [c_x, c_y]^T. Then u = αX' + c_x and v = βY' + c_y, i.e. Z p' = K P, where p' = [u, v, 1]^T is the pixel coordinate, P = [X, Y, Z]^T the space point, and K the intrinsic parameter matrix.
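The space-point-to-pixel conversion of steps (1)-(2) can be sketched as follows; the intrinsic values f_x, f_y, c_x, c_y below are illustrative assumptions, not parameters from the patent:

```python
import numpy as np

# Illustrative intrinsics: f_x = alpha*f, f_y = beta*f, principal point (c_x, c_y).
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def project(P, K):
    """Pinhole projection Z * p' = K * P, returning pixel coordinates [u, v]."""
    uvw = K @ P              # = [fx*X + cx*Z, fy*Y + cy*Z, Z]
    return uvw[:2] / uvw[2]  # divide by the depth Z

P = np.array([0.5, -0.25, 2.0])   # space point [X, Y, Z] in the camera frame
u, v = project(P, K)
# u = fx*X/Z + cx = 200 + 320 = 520, v = fy*Y/Z + cy = -100 + 240 = 140
```

The same function applies unchanged to any camera once its K is known; only the depth division makes the mapping non-linear.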
(3) Using the space-point-to-pixel conversion of step (2), obtain a pair of registered feature points from the two images, with pixel coordinates p'_1 and p'_2. Then Z p'_1 = K P and Z p'_2 = K(RP + T), where R and T are the rotation and translation matrices. Setting x_1 = K^{-1} p'_1 and x_2 = K^{-1} p'_2 gives x_2 = R x_1 + T. Left-multiplying both sides by T^ (the skew-symmetric matrix of T) gives T^ x_2 = T^ R x_1; left-multiplying again by x_2^T gives x_2^T T^ R x_1 = 0, which is the epipolar geometric constraint. The constraint concisely captures the spatial relationship between the two registered points; R and T only need to be solved from the pixel positions of the registered points.
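The claim that R and T can be recovered from the pixel positions of the registered points can be illustrated by linearly estimating the essential matrix E = T^ R (from which R and T are subsequently factored). The relative pose and space points below are synthetic assumptions used only to generate matches, not values from the patent:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix T^ such that skew(t) @ x == np.cross(t, x)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Assumed ground-truth relative pose, used only to synthesize correspondences.
theta = 0.2
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
T = np.array([1.0, 0.2, 0.1])

# Synthetic space points in front of both cameras, projected onto the
# normalized plane of each view (x = K^{-1} p').
rng = np.random.default_rng(0)
P = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(20, 3))
x1 = P / P[:, 2:3]
P2 = P @ R.T + T
x2 = P2 / P2[:, 2:3]

# Each match gives one linear equation x2^T E x1 = 0 in the nine entries
# of E; the essential matrix spans the null space of the stacked system.
A = np.stack([np.outer(b, a).ravel() for a, b in zip(x1, x2)])
E_est = np.linalg.svd(A)[2][-1].reshape(3, 3)

E_true = skew(T) @ R
# E is defined only up to scale and sign: normalize before comparing.
E_est *= np.linalg.norm(E_true) / np.linalg.norm(E_est)
if E_est.ravel() @ E_true.ravel() < 0:
    E_est = -E_est
```

With noise-free matches the estimate coincides with T^ R up to scale; in practice the decomposition of E into R and T (and the choice among its four solutions) is done by checking that triangulated points have positive depth.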
(4) With R and T found in step (3), suppose the i-th image has projection matrix N_i, intrinsic matrix K_i, rotation matrix R_i and translation vector T_i, and the (i+1)-th image has projection matrix N_{i+1}, intrinsic matrix K_{i+1}, rotation matrix R_{i+1} and translation vector T_{i+1}. Then N_i = K_i (R_i | T_i) and N_{i+1} = K_{i+1} (R_{i+1} | T_{i+1}). By the coordinate conversion of step (3), Z p'_i = N_i·P and Z p'_{i+1} = N_{i+1}·P. Substituting and eliminating Z yields a linear system A [X_w, Y_w, Z_w]^T = B, and therefore [X_w, Y_w, Z_w]^T = A^{-1} B (in the least-squares sense, since A is 4×3).
An embodiment of the invention comprises the following steps:
In the first step, three-dimensional space points are projected onto the camera plane; the projection is modeled with a pinhole model. FIG. 1 is a schematic diagram of the pinhole camera model. The camera coordinate system is O-x-y-z, and the three-dimensional space point is P = (X, Y, Z)^T; the image coordinate system is O'-x'-y', and the imaging point is P' = (X', Y', Z')^T.
In the second step, as can be seen from FIG. 1, the three-dimensional space point P and the imaging point P' obey the similar-triangle relationship of FIG. 2:
Z/f = X/X' = Y/Y'   (1)
Rearranging gives:
X' = fX/Z, Y' = fY/Z   (2)
In the third step, a pixel plane o-u-v is fixed on the imaging plane, as shown in FIG. 3. Let the pixel coordinates be scaled by α along the u axis and by β along v, while the origin is shifted by [c_x, c_y]^T. The coordinates of P' are then related to the pixel coordinates [u, v]^T by:
u = αX' + c_x, v = βY' + c_y   (3)
Substituting formula (2) into formula (3), and combining αf into f_x and βf into f_y, gives:
u = f_x X/Z + c_x, v = f_y Y/Z + c_y   (4)
Here f is in meters and α, β are in pixels/meter, so f_x, f_y are in pixels. Writing the equation in matrix form:
Z [u, v, 1]^T = K [X, Y, Z]^T   (5)
where K is the camera intrinsic parameter matrix. The left camera coordinate system is taken as the world coordinate system, and the right camera coordinate system is related to it by a rotation and a translation.
In the fourth step, as shown in FIG. 4, a pair of matched feature points p'_1, p'_2 is found in the two images; the right image is related to the left image by a rotation matrix R and a translation matrix T. The camera centers are O_1 and O_2, and the plane determined by the three points O_1, O_2 and P is the epipolar plane. The line O_1O_2 is called the baseline; its intersections e_1, e_2 with the image planes I_1, I_2 are the epipoles. The intersection lines l_1, l_2 of the epipolar plane with the two image planes I_1, I_2 are the epipolar lines. From the pinhole model:
Z p'_1 = K P, Z p'_2 = K(RP + T)   (6)
If homogeneous coordinates are used, the above relation can also be written as an equality holding up to a non-zero scale factor:
p'_1 = K P, p'_2 = K(RP + T)   (7)
Taking:
x_1 = K^{-1} p'_1, x_2 = K^{-1} p'_2   (8)
x_1, x_2 are the coordinates of the two pixels on the normalized plane. Substituting into formula (7) gives:
x_2 = R x_1 + T   (9)
Left-multiplying both sides by T^, which is equivalent to taking the cross product with T on both sides:
T^ x_2 = T^ R x_1   (10)
Then left-multiplying both sides by x_2^T:
x_2^T T^ x_2 = x_2^T T^ R x_1   (11)
On the left-hand side, T^ x_2 is a vector perpendicular to both T and x_2, so its inner product with x_2 is zero. Hence:
x_2^T T^ R x_1 = 0   (12)
Substituting p'_1, p'_2 back in gives:
p'_2^T K^{-T} T^ R K^{-1} p'_1 = 0   (13)
Both equations are referred to as the epipolar geometric constraint. Geometrically, the constraint states that O_1, O_2 and P are coplanar, and it encodes the translation and the rotation simultaneously. R and T can therefore be solved from the pixel positions of the matched points.
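The epipolar geometric constraint above can be checked numerically. In the sketch below, the relative pose and the space point are arbitrary assumptions; the skew-symmetric matrix T^ is built explicitly and the residual of the constraint is evaluated for normalized coordinates of a common point:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix T^ such that skew(t) @ x == np.cross(t, x)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Arbitrary relative pose: a small rotation about z plus a baseline translation.
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
T = np.array([0.3, 0.05, 0.01])

P = np.array([0.4, -0.2, 3.0])   # space point in the camera-1 frame
x1 = P / P[2]                    # normalized coordinates in view 1
P2 = R @ P + T                   # the same point in the camera-2 frame
x2 = P2 / P2[2]                  # normalized coordinates in view 2

residual = x2 @ skew(T) @ R @ x1   # epipolar constraint of formula (12)
# residual vanishes up to floating-point error for a true correspondence
```

Because the constraint is homogeneous, it holds equally for the normalized coordinates x_i = P_i / Z_i and for the 3D points themselves; any common scaling of x_1 or x_2 leaves the zero unchanged.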
In the fifth step, according to the relationship between the camera coordinate system and the pixel coordinate system, let the projection matrices be N_1 and N_2. Taking the left camera coordinate system as the world coordinate system, the projection relationships of the left and right images are:
N_1 = K_1 (I | 0), N_2 = K_2 (R | T)   (14)
where I is the 3×3 identity matrix, R is the rotation matrix, T is the translation matrix, and K_1, K_2 are the left and right camera intrinsic matrices, respectively.
A point in the world coordinate system projects into the pixel coordinate system through:
Z p' = N · P_w   (15)
where P_w = [X_w, Y_w, Z_w, 1]^T are the homogeneous world coordinates.
The algorithm is applied to the i-th and (i+1)-th images. Suppose the i-th image has projection matrix N_i, intrinsic matrix K_i, rotation matrix R_i and translation vector T_i, and the (i+1)-th image has projection matrix N_{i+1}, intrinsic matrix K_{i+1}, rotation matrix R_{i+1} and translation vector T_{i+1}. Substituting into formula (14):
N_i = K_i (R_i | T_i), N_{i+1} = K_{i+1} (R_{i+1} | T_{i+1})   (16)
From formula (16), N_i and N_{i+1} are obtained; substituting into formula (15):
Z p'_i = N_i P_w, Z p'_{i+1} = N_{i+1} P_w   (17)
Writing the rows of N_i as n^i_1, n^i_2, n^i_3 and expanding the projection equation Z p'_i = N_i P_w for the homogeneous pixel coordinates p'_i = [u_i, v_i, 1]^T gives:
u_i (n^i_3 · P_w) = n^i_1 · P_w, v_i (n^i_3 · P_w) = n^i_2 · P_w   (18)
Eliminating Z, and stacking the corresponding pair of equations from view i+1, gives a linear system in the three unknowns:
A [X_w, Y_w, Z_w]^T = B   (19)
where each row of the 4×3 matrix A collects the coefficients of [X_w, Y_w, Z_w] in (18) and each entry of B collects the corresponding constant terms. Therefore, the three-dimensional space point coordinates are:
[X_w, Y_w, Z_w]^T = A^{-1} B
solved in the least-squares sense, since A is 4×3.
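The triangulation above (projection matrices N_1, N_2; eliminating Z; solving for [X_w, Y_w, Z_w]) can be sketched as follows. The intrinsics and the baseline are illustrative assumptions, and a least-squares solve stands in for the matrix inverse because the stacked system is 4×3:

```python
import numpy as np

# Assumed intrinsics, shared by both cameras for simplicity.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                    # left camera frame taken as the world frame
T = np.array([-0.1, 0.0, 0.0])   # assumed stereo baseline of the right camera

N1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # N1 = K1(I|0)
N2 = K @ np.hstack([R, T.reshape(3, 1)])            # N2 = K2(R|T)

def pixel(N, Pw):
    """Formula (15): Z * p' = N * Pw with homogeneous world coordinates."""
    uvw = N @ np.append(Pw, 1.0)
    return uvw[:2] / uvw[2]

def triangulate(N1, N2, p1, p2):
    """Eliminate Z, stack the 4x3 system A [Xw,Yw,Zw]^T = B, solve it."""
    rows, rhs = [], []
    for N, (u, v) in ((N1, p1), (N2, p2)):
        M, t = N[:, :3], N[:, 3]            # N = (M | t)
        rows.append(u * M[2] - M[0]); rhs.append(t[0] - u * t[2])
        rows.append(v * M[2] - M[1]); rhs.append(t[1] - v * t[2])
    A, B = np.array(rows), np.array(rhs)
    return np.linalg.lstsq(A, B, rcond=None)[0]     # [Xw, Yw, Zw]

Pw = np.array([0.2, -0.1, 2.5])           # a world point to reconstruct
p1, p2 = pixel(N1, Pw), pixel(N2, Pw)     # its observed pixel coordinates
Pw_rec = triangulate(N1, N2, p1, p2)      # recovers Pw up to numerical error
```

For noise-free pixels the 4×3 system is consistent and the least-squares solution is exact; with real, noisy matches the same code returns the point minimizing the algebraic residual.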
Finally, a simple simulation experiment is given.
The satellite model is placed on a precision controllable turntable whose rotation speed is set to 3°/s. A binocular camera then images the satellite model; the first frames of the left and right cameras (shown in FIG. 6) are selected as the input of the three-dimensional reconstruction system, and the reconstruction result is the output.
After image feature point extraction, feature point matching, and depth information acquisition, the three-dimensional space point coordinates of the satellite images are computed with the algorithm provided by the invention. The point cloud displayed in Matlab contains 458048 points, and the reconstruction takes 15 s. Two views of the three-dimensional point cloud from different viewing angles are shown in FIG. 7.
Finally, texture information is added to the three-dimensional point cloud; the final reconstruction result is shown in FIG. 8.
Claims (1)
1. A space non-cooperative target three-dimensional reconstruction method based on a projection matrix is characterized by comprising the following steps:
(1) projecting the three-dimensional space points onto the camera pixel plane with a pinhole model: let O-x-y-z be the camera coordinate system, with the z axis pointing ahead of the camera, x to the right, y downward, and O the camera optical center; a point P in three-dimensional space, after projection through the optical center O, falls on the camera image plane O'-x'-y' at the imaging point P'; let the coordinates of P be [X, Y, Z]^T, those of P' be [X', Y', Z']^T, and let f be the distance from the image plane to the optical center; then X' = fX/Z, Y' = fY/Z;
(2) fixing a pixel plane o-u-v on the image plane, with the origin o of the pixel coordinate system at the upper-left corner of the image plane, the u axis parallel to the x axis (pointing right) and the v axis parallel to the y axis (pointing down); the pixel coordinate system differs from the imaging plane by a scaling and a translation of the origin: the pixel coordinates are scaled by α along u and by β along v, while the origin is translated by [c_x, c_y]^T; then u = αX' + c_x = f_x X/Z + c_x and v = βY' + c_y = f_y Y/Z + c_y, i.e. Z p' = K P, where p' = [u, v, 1]^T is the pixel coordinate, P = [X, Y, Z]^T the space point, and K the camera intrinsic parameter matrix;
(3) according to the space-point-to-pixel conversion of step (2), obtaining a pair of registered feature points p'_1, p'_2 from the two images; then Z p'_1 = K P and Z p'_2 = K(RP + T), with R, T the rotation and translation matrices; setting x_1 = K^{-1} p'_1 and x_2 = K^{-1} p'_2 gives x_2 = R x_1 + T; left-multiplying both sides by T^ (the skew-symmetric matrix of T, equivalent to taking the cross product with T on both sides) and then by x_2^T yields the epipolar geometric constraint x_2^T T^ R x_1 = 0, from which R and T are determined by the pixel positions of the registered points;
(4) supposing the i-th image has projection matrix N_i, camera intrinsic matrix K_i, rotation matrix R_i and translation vector T_i, and the (i+1)-th image has projection matrix N_{i+1}, intrinsic matrix K_{i+1}, rotation matrix R_{i+1} and translation vector T_{i+1}; then N_i = K_i (R_i | T_i) and N_{i+1} = K_{i+1} (R_{i+1} | T_{i+1}); from the coordinate conversion of step (3), Z p'_i = N_i·P and Z p'_{i+1} = N_{i+1}·P; substituting and eliminating Z yields a linear system A [X_w, Y_w, Z_w]^T = B, whose solution gives the space point coordinates [X_w, Y_w, Z_w]^T = A^{-1} B (in the least-squares sense, since A is 4×3).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710957020.1A CN107680159B (en) | 2017-10-16 | 2017-10-16 | Space non-cooperative target three-dimensional reconstruction method based on projection matrix |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107680159A CN107680159A (en) | 2018-02-09 |
CN107680159B true CN107680159B (en) | 2020-12-08 |
Family
ID=61140923
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710957020.1A Expired - Fee Related CN107680159B (en) | 2017-10-16 | 2017-10-16 | Space non-cooperative target three-dimensional reconstruction method based on projection matrix |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107680159B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109658450B (en) * | 2018-12-17 | 2020-10-13 | 武汉天乾科技有限责任公司 | Rapid orthoimage generation method based on unmanned aerial vehicle |
CN111582293B (en) * | 2019-02-19 | 2023-03-24 | 曜科智能科技(上海)有限公司 | Plane geometry consistency detection method, computer device and storage medium |
CN109827547B (en) * | 2019-03-27 | 2021-05-04 | 中国人民解放军战略支援部队航天工程大学 | Distributed multi-sensor space target synchronous correlation method |
CN110059691B (en) * | 2019-03-29 | 2022-10-14 | 南京邮电大学 | Multi-view distorted document image geometric correction method based on mobile terminal |
CN110933391B (en) * | 2019-12-20 | 2021-11-09 | 成都极米科技股份有限公司 | Calibration parameter compensation method and device for projection system and readable storage medium |
CN111462331B (en) * | 2020-03-31 | 2023-06-27 | 四川大学 | Lookup table method for expanding epipolar geometry and calculating three-dimensional point cloud in real time |
CN111568456B (en) * | 2020-04-24 | 2023-07-14 | 长春理工大学 | Knee joint posture measurement method based on three-dimensional reconstruction of feature points |
CN111504276B (en) * | 2020-04-30 | 2022-04-19 | 哈尔滨博觉科技有限公司 | Visual projection scale factor set-based joint target function multi-propeller attitude angle acquisition method |
CN111815765B (en) * | 2020-07-21 | 2022-07-05 | 西北工业大学 | Heterogeneous data fusion-based image three-dimensional reconstruction method |
CN113643328B (en) * | 2021-08-31 | 2022-09-09 | 北京柏惠维康科技股份有限公司 | Calibration object reconstruction method and device, electronic equipment and computer readable medium |
CN113888695A (en) * | 2021-09-21 | 2022-01-04 | 西北工业大学 | Non-cooperative target three-dimensional reconstruction method based on branch reconstruction registration |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101033972A (en) * | 2007-02-06 | 2007-09-12 | 华中科技大学 | Method for obtaining three-dimensional information of space non-cooperative object |
CN103472668A (en) * | 2013-09-24 | 2013-12-25 | 东北大学 | Stereo imaging device and method |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103914874B (en) * | 2014-04-08 | 2017-02-01 | 中山大学 | Compact SFM three-dimensional reconstruction method without feature extraction |
US20150302575A1 (en) * | 2014-04-17 | 2015-10-22 | Siemens Aktiengesellschaft | Sun location prediction in image space with astronomical almanac-based calibration using ground based camera |
CN106204717B (en) * | 2015-05-28 | 2019-07-16 | 长沙维纳斯克信息技术有限公司 | A kind of stereo-picture quick three-dimensional reconstructing method and device |
CN106204656A (en) * | 2016-07-21 | 2016-12-07 | 中国科学院遥感与数字地球研究所 | Target based on video and three-dimensional spatial information location and tracking system and method |
CN106600645B (en) * | 2016-11-24 | 2019-04-09 | 大连理工大学 | A kind of video camera space multistory calibration rapid extracting method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107680159B (en) | Space non-cooperative target three-dimensional reconstruction method based on projection matrix | |
CN105976353B (en) | Spatial non-cooperative target pose estimation method based on model and point cloud global matching | |
CN103247075B (en) | Based on the indoor environment three-dimensional rebuilding method of variation mechanism | |
CN107833270A (en) | Real-time object dimensional method for reconstructing based on depth camera | |
CN112902953A (en) | Autonomous pose measurement method based on SLAM technology | |
CN106920276B (en) | A kind of three-dimensional rebuilding method and system | |
CN111028155B (en) | Parallax image splicing method based on multiple pairs of binocular cameras | |
CN110288712B (en) | Sparse multi-view three-dimensional reconstruction method for indoor scene | |
CN109115184B (en) | Collaborative measurement method and system based on non-cooperative target | |
CN110853151A (en) | Three-dimensional point set recovery method based on video | |
WO2024045632A1 (en) | Binocular vision and imu-based underwater scene three-dimensional reconstruction method, and device | |
CN111415375B (en) | SLAM method based on multi-fisheye camera and double-pinhole projection model | |
Banno et al. | Omnidirectional texturing based on robust 3D registration through Euclidean reconstruction from two spherical images | |
CN114608561A (en) | Positioning and mapping method and system based on multi-sensor fusion | |
CN112150518B (en) | Attention mechanism-based image stereo matching method and binocular device | |
Wendel et al. | Automatic alignment of 3D reconstructions using a digital surface model | |
Haala et al. | High density aerial image matching: State-of-the-art and future prospects | |
JP2023505891A (en) | Methods for measuring environmental topography | |
Jang et al. | Egocentric scene reconstruction from an omnidirectional video | |
Liu et al. | Dense stereo matching strategy for oblique images that considers the plane directions in urban areas | |
CN117115271A (en) | Binocular camera external parameter self-calibration method and system in unmanned aerial vehicle flight process | |
CN112102504A (en) | Three-dimensional scene and two-dimensional image mixing method based on mixed reality | |
Wang et al. | Automated mosaicking of UAV images based on SFM method | |
CN107806861B (en) | Inclined image relative orientation method based on essential matrix decomposition | |
CN113409442A (en) | Method for fusing multi-panorama and reconstructing three-dimensional image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20201208 Termination date: 20211016 |