CN113888695A - Non-cooperative target three-dimensional reconstruction method based on branch reconstruction registration - Google Patents
Non-cooperative target three-dimensional reconstruction method based on branch reconstruction registration
- Publication number
- CN113888695A (application CN202111117965.5A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- point
- dimensional
- reconstruction
- registration
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
- Image Processing (AREA)
Abstract
The invention relates to a non-cooperative target three-dimensional reconstruction method based on branch reconstruction registration, belonging to the field of computer vision. Given a multi-angle picture sequence, the method first classifies the sequence and obtains the three-dimensional point cloud data corresponding to each class of pictures with a structure-from-motion algorithm. The point cloud data of the different classes are then unified in scale, and a point cloud registration algorithm registers and reconstructs the clouds, realizing three-dimensional reconstruction of the spatial non-cooperative target. Because the picture sequences are classified and then reconstructed in parallel, the time consumed by reconstruction is reduced and efficiency is improved, meeting the real-time requirements of spacecraft operations. Registering the point clouds reconstructed from the different classes also yields a denser final point cloud, improving reconstruction accuracy.
Description
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a non-cooperative target three-dimensional reconstruction method based on branch reconstruction registration.
Background
In recent years, with the development of aerospace technology and humanity's deepening exploitation of outer-space resources, spacecraft have gradually been applied in many social, military, and economic fields. However, because the Earth's gravitational pull on a spacecraft at orbital altitude is weak, a spacecraft that fails or completes its mission does not fall into the atmosphere and burn up; it remains in orbit, drifting freely, and becomes space debris. Constrained by the orbital positions involved, such debris in the orbital ring cannot be cleared promptly. Moreover, as the technology advances, spacecraft structures have become more precise and complex, and manufacturing costs have risen accordingly. To save manufacturing cost, extend service life, and improve working capability, spacecraft therefore need an on-orbit maintenance capability. Obtaining accurate position information of a target is the precondition of space rendezvous and docking and of on-orbit service tasks such as repairing failed satellites and removing space debris. As an effective technical means of acquiring target position information, three-dimensional reconstruction of spatial non-cooperative targets (spacecraft that cannot provide effective cooperative information and have lost their cooperative markers) is therefore gradually becoming a research hotspot.
With the development of space technology, capture technology for cooperative space targets has matured: given prior knowledge of the target, information such as its attitude and velocity can be solved, and the related techniques have been successfully applied in on-orbit spacecraft tasks such as rendezvous and docking, refuelling, and cargo transport.
However, actual space missions more often involve non-cooperative targets, whose motion and whose position and attitude parameters in orbit are not known in advance. At present, the main technical solutions for three-dimensional reconstruction of spatial non-cooperative targets fall into two categories: methods based on laser scanning (scanning lidar, TOF flash lidar) and methods based on camera projection geometry (structured-light cameras, stereo-vision cameras, etc.). In the laser-scanning methods, a laser range finder emits a beam at the object surface, determines the distance to the object from the time difference between the transmitted and received signals, and thereby determines the object's size and shape. These methods produce highly accurate models, but the acquired point cloud data are huge and the clouds from multiple viewpoints must subsequently be registered, so they are time-consuming and cannot meet the real-time requirements of space operations. A typical representative of the projection-geometry methods is structure from motion (SfM). It captures multiple images from multiple viewpoints, detects key points in the images, obtains pixel correspondences between images with a feature-matching algorithm, recovers the three-dimensional coordinates of spatial points by combining matching constraints with the triangulation principle, and finally reconstructs the object's three-dimensional information. SfM has low requirements on the images and high robustness and practical value, but the reconstructed point cloud is sparse and the reconstruction takes long, so the real-time requirement is still not met.
Disclosure of Invention
Technical problem to be solved
In order to avoid the defects of the prior art, the invention provides a non-cooperative target three-dimensional reconstruction method based on branch reconstruction registration.
Technical scheme
A non-cooperative target three-dimensional reconstruction method based on branch reconstruction registration is characterized by comprising the following steps:
s1: classifying the input multi-angle picture sequence;
s2: for each category of picture sequence, simultaneously obtaining corresponding three-dimensional point cloud data by using an SfM technology;
s3: carrying out scale unification on point cloud data with different scales;
s4: and using a pairwise registration algorithm to pairwise register the three-dimensional point cloud data acquired by the adjacent category picture sequences so as to acquire reconstructed point clouds.
The further technical scheme of the invention is as follows: the step of classifying the multi-angle picture sequence according to time sequence in S1 includes:
s11: numbering the acquired picture sequence from 1 to N in chronological order;
s12: setting the category size K and the inter-category overlap rate r, and calculating the number of categories:
the further technical scheme of the invention is as follows: the step of simultaneously obtaining the three-dimensional point cloud data corresponding to each category of picture sequence by using the SfM technology in S2 includes:
s21: extracting feature points and describing features of the pictures, and establishing a corresponding relation between adjacent pictures;
s22: solving a basic matrix F by using an eight-point method according to the corresponding relation;
s23: estimation of a camera matrix M using a basis matrix Fi:
S24: solving three-dimensional points X using triangularizationjCoordinates of (2)
Wherein, XjFor the n three-dimensional points, j is 1, …, n, xijThe pixel coordinates of the three-dimensional point corresponding to M images are shown, i is 1, …, M, MiRepresents a camera projection matrix corresponding to the ith image, an
xij=MiXj(i=1,…,m,j=1,…,n) (3)。
The further technical scheme of the invention is as follows: the step of unifying the scales of various point cloud data in the step S3 is as follows:
s31: denoting two adjacent point clouds as the source point cloud P_s and the target point cloud P_t, and calculating the resolution of each point cloud,
where n_s and n_t are the numbers of points in the source and target point clouds, respectively, and dis_i denotes the Euclidean distance between the i-th point and its nearest neighbour in the point cloud;
s32: unifying the scale by multiplying the coordinates of every point in the source point cloud P_s by the resulting scale factor.
The further technical scheme of the invention is as follows: the step of pairwise registration of the point clouds obtained from the various picture sequences in S4 is as follows:
s41: preprocessing:
adjacent point clouds are registered following the algorithmic idea of random sample consensus; the two point clouds are first preprocessed to obtain an initial match set C = {c_i}, where each match c_i pairs mutually matched key points p_i^s and p_i^t of the source and target point clouds, respectively, and this initial match set serves as the input of the algorithm;
s42: calculating a compatibility value s(c_i, c_j) between matches,
where t_cons is a constant and D(c_i, c_j) denotes the distance between matches, defined as follows;
s43: building the graph and sorting the triples according to the compatibility values; each match is abstracted as a node, and an edge is connected between two nodes if their compatibility value s(c_i, c_j) is greater than a preset threshold t_comp, so that the discrete nodes finally form a graph; the compatibility score of each triangle in the graph is calculated as
Comp(COT) = l(e(i,j)) + l(e(i,k)) + l(e(j,k)) (8)
where l(e(i,j)) = s(c_i, c_j), and the triangles are then sorted according to their compatibility scores;
s44: sampling the triples; three-point sampling is performed according to the sorting result of s43, and the pose is calculated from the three sampled matches,
where R_i and t_i denote the rotation matrix and the translation vector between the scene point cloud and the target point cloud at the i-th iteration;
s45: evaluating the pose quality with the mean absolute error (MAE) function,
where one term denotes the rotation error and a constant threshold is used to judge whether a match c_j is an inlier;
s46: repeating steps s44 and s45 and selecting the highest-scoring pose R_i, t_i as the final output pose.
The further technical scheme of the invention is as follows: the preprocessing in step S41 includes point cloud down-sampling, key point and descriptor calculation, and feature matching.
Advantageous effects
The invention provides a non-cooperative target three-dimensional reconstruction method based on branch reconstruction registration. Given a multi-angle picture sequence, the method first classifies the sequence and obtains the three-dimensional point cloud data of each class with a structure-from-motion algorithm; the point clouds of the different classes are then unified in scale and registered and reconstructed with a point cloud registration algorithm, realizing three-dimensional reconstruction of the spatial non-cooperative target.
Compared with the prior art, the method has the beneficial effects that:
1) The picture sequences are classified and then reconstructed in parallel, which reduces the time consumed by reconstruction, improves reconstruction efficiency, and can meet the real-time requirements of spacecraft operations.
2) The point clouds reconstructed from the different classes are registered, so the final reconstructed point cloud is denser and the reconstruction accuracy is improved.
Drawings
The drawings are only for purposes of illustrating particular embodiments and are not to be construed as limiting the invention, wherein like reference numerals are used to designate like parts throughout.
FIG. 1 is a schematic flow chart of an embodiment of the present invention;
FIG. 2 is a schematic of parallel reconstruction;
FIG. 3 is a flow chart of SFM three-dimensional reconstruction;
FIG. 4 is a RANSAC-based point cloud registration algorithm flow chart;
FIG. 5 is a diagram of reconstruction results of a conventional SFM method and a proposed method under different data;
table 1 shows the results of quantitative analysis of the reconstruction results of the proposed method and the conventional SFM method under different data.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The flow of the non-cooperative target three-dimensional reconstruction method based on branch reconstruction registration provided by this embodiment is shown in fig. 1 and consists of four steps: classification, parallel three-dimensional reconstruction, scale unification, and three-dimensional registration reconstruction. In contrast to traditional reconstruction based on structure from motion, the initial picture sequence is first classified in chronological order; each class is then reconstructed in parallel; the reconstructed point cloud data are next unified in scale, and three-dimensional registration reconstruction is performed with a pairwise registration algorithm, finally yielding the reconstructed target point cloud data.
The specific solving method is as follows:
(1) Classifying the input multi-angle picture sequence. Candidate classification criteria include viewing angle, grey value, and scale. Because subsequent point cloud registration requires the point clouds to overlap, the picture sequences of adjacent categories must share an overlapping portion, so this embodiment adopts the time-series-based classification shown in fig. 2. The initial pictures are numbered from 1 to N in chronological order (i.e., by camera acquisition time), the category size K (the number of pictures contained in each category) and the inter-category overlap rate r (the overlap between two adjacent categories, 50% in the figure) are set, and the number of categories is calculated.
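Since the category-count formula itself is not reproduced in this text, the partitioning can only be sketched under an assumption: classes of size K advance by a stride of K(1-r) images. The function name `partition_sequence` and this stride rule are illustrative, not the patent's exact expression.

```python
def partition_sequence(n_images, k, r):
    """Partition image indices 1..n_images (chronological) into classes
    of size k whose neighbours overlap by ratio r (assumes n_images >= k).
    The stride k*(1-r) between class starts is an assumed reading of the
    overlap-rate definition; the patent's formula is not reproduced here."""
    step = max(1, int(round(k * (1 - r))))          # stride between class starts
    starts = list(range(0, n_images - k + 1, step))
    if starts[-1] + k < n_images:                   # cover any trailing images
        starts.append(n_images - k)
    return [list(range(s + 1, s + k + 1)) for s in starts]

# 12 images, classes of 4, 50% overlap -> adjacent classes share 2 images
classes = partition_sequence(12, 4, 0.5)
```

With these numbers the sketch yields five classes, each sharing half of its images with its neighbour, which is the overlap the registration step relies on.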
(2) For each class of picture sequence, the corresponding three-dimensional point cloud data are obtained in parallel with the SfM technique. Before reconstruction, the pixel coordinates x_ij of the n three-dimensional points X_j (j = 1, …, n) in the m images and the camera intrinsic matrices K_i (i = 1, …, m) of the m images are known, with
x_ij = M_i X_j = K_i [R_i T_i] X_j, i = 1, …, m, j = 1, …, n (2)
where m is the number of images, n is the number of three-dimensional points, and M_i, K_i, and [R_i T_i] denote the projection, intrinsic, and extrinsic matrices of the i-th image, respectively. As shown in fig. 3, SfM reconstruction consists of four steps: establishing feature matching, solving the fundamental matrix, solving the essential matrix, and triangulating the three-dimensional point coordinates.
(2.1) Establishing feature matching. Feature points are extracted and described for each picture, and correspondences are established between adjacent pictures. Specifically, this embodiment performs SIFT feature extraction on adjacent pictures, describes each feature point, and establishes the correspondences between matching feature points.
(2.2) Solving the fundamental matrix. The correspondences obtained in the previous step may contain false matches, so the random sample consensus algorithm is used to remove them and raise the inlier rate. The fundamental matrix F is then solved from the correspondences with the eight-point method.
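Step (2.2) can be illustrated with a minimal NumPy eight-point solver. The function `eight_point` is a sketch, not the embodiment's code; for real pixel data, Hartley normalization (omitted here for brevity) is advisable before building the linear system.

```python
import numpy as np

def eight_point(x1, x2):
    """Estimate the fundamental matrix F from n >= 8 matched points.
    x1, x2: (n, 2) image coordinates in the two views, satisfying
    x2_h^T F x1_h = 0 in homogeneous coordinates."""
    rows = []
    for (u1, v1), (u2, v2) in zip(x1, x2):
        # one row of the linear system A f = 0, f = vec(F) row-major
        rows.append([u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0])
    A = np.asarray(rows)
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)              # right null vector of A
    U, S, Vt = np.linalg.svd(F)           # enforce the rank-2 constraint
    return U @ np.diag([S[0], S[1], 0.0]) @ Vt
```

The rank-2 projection in the last two lines is what distinguishes a fundamental matrix from an arbitrary least-squares solution: all epipolar lines must pass through a common epipole.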
(2.3) Solving the essential matrix. The camera matrices M_i are estimated using the fundamental matrix F.
(2.4) Calculating the three-dimensional point coordinates. The coordinates of the three-dimensional points X_j are finally solved by triangulation,
where X_j (j = 1, …, n) are the n three-dimensional points sought, x_ij (i = 1, …, m) are their pixel coordinates in the m images, M_i denotes the camera projection matrix corresponding to the i-th image, and
x_ij = M_i X_j (i = 1, …, m; j = 1, …, n) (3)
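The triangulation of equation (3) can be illustrated with the standard linear (DLT) solver; `triangulate` below is a sketch of that standard technique, not the embodiment's implementation.

```python
import numpy as np

def triangulate(Ms, xs):
    """Linear (DLT) triangulation of one 3D point X from its pixel
    coordinates x_ij in m images, solving x_ij ~ M_i X in the
    least-squares sense.
    Ms: list of (3, 4) projection matrices M_i; xs: (m, 2) pixel coords."""
    rows = []
    for M, (u, v) in zip(Ms, xs):
        # cross-product form of x ~ M X gives two linear equations per view
        rows.append(u * M[2] - M[0])
        rows.append(v * M[2] - M[1])
    A = np.vstack(rows)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                   # dehomogenize
```

With two or more views and exact projections, the null vector of A recovers the point to machine precision; with noisy matches it gives the algebraic least-squares estimate.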
(3) Unifying the scale of the multi-class three-dimensional point clouds reconstructed in step (2) so that subsequent point cloud registration can be performed; the specific operations are as follows.
(3.1) Denote two adjacent point clouds as the source point cloud P_s and the target point cloud P_t, and calculate the resolution of each point cloud,
where n_s and n_t are the numbers of points in the source and target point clouds, respectively, and dis_i denotes the Euclidean distance between the i-th point and its nearest neighbour in the point cloud.
(3.2) Unify the scale by multiplying the coordinates of every point in the source point cloud P_s by the resulting scale factor.
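Steps (3.1)-(3.2) can be sketched as follows. The exact scale-factor expression is not reproduced in this text, so the factor res(P_t)/res(P_s) is an assumption, and the brute-force nearest-neighbour computation is illustrative (a KD-tree would be preferable for large clouds).

```python
import numpy as np

def cloud_resolution(P):
    """Mean nearest-neighbour Euclidean distance of a point cloud P (n, 3),
    used as its 'resolution'. Brute force O(n^2), fine for small clouds."""
    d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # exclude each point itself
    return d.min(axis=1).mean()

def unify_scale(P_s, P_t):
    """Rescale the source cloud so that both clouds share one scale,
    assuming the factor is res(P_t) / res(P_s)."""
    return P_s * (cloud_resolution(P_t) / cloud_resolution(P_s))
```

Under this assumption, a source cloud that is a uniformly scaled copy of the target is mapped exactly onto the target's scale.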
(4) Registering and reconstructing the scale-unified three-dimensional point cloud data of step (3) to obtain complete, dense point cloud data. This embodiment registers with a triple-sampling consensus-guided algorithm; compared with the traditional random sample consensus algorithm, it can sample correct matches already in the early iterations, improving registration accuracy and efficiency. The algorithm flow, shown in fig. 4, comprises six steps: preprocessing, compatibility computation, graph construction, sampling, hypothesis generation, and hypothesis evaluation. The specific operations are as follows.
(4.1) Adjacent point clouds are registered following the algorithmic idea of random sample consensus. Denote the two adjacent point clouds as the source point cloud P_s and the target point cloud P_t. The two clouds are first preprocessed (including point cloud down-sampling, key point and descriptor calculation, and feature matching) to obtain an initial match set C = {c_i}, where each match c_i pairs mutually matched key points p_i^s and p_i^t of the source and target point clouds, respectively. This initial match set serves as the input of the algorithm.
(4.2) Calculating the compatibility value s(c_i, c_j) between matches,
where t_cons is a constant and D(c_i, c_j) denotes the distance between matches, defined as follows.
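Formulas (6)-(7) are not reproduced in this text, so a common rigidity-based choice is sketched below as an assumption: D is the difference of the intra-cloud distances of the two matches, and s is a Gaussian kernel of D with constant t_cons.

```python
import numpy as np

def compatibility(ci, cj, t_cons=0.1):
    """Assumed compatibility value s(c_i, c_j) between two matches.
    Each match is a (source_point, target_point) pair of 3D arrays.
    D measures how much the two matches violate rigidity: the distance
    between the source points should equal the distance between the
    corresponding target points under any rigid transform."""
    (ps_i, pt_i) = ci
    (ps_j, pt_j) = cj
    D = abs(np.linalg.norm(ps_i - ps_j) - np.linalg.norm(pt_i - pt_j))
    return np.exp(-D**2 / (2 * t_cons**2))
```

Two matches consistent with one rigid motion score close to 1; an outlier pair scores near 0, which is what the graph construction in the next step exploits.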
(4.3) Building the graph and sorting the triangles according to the compatibility values. Each match is abstracted as a node, and an edge is connected between two nodes if their compatibility value s(c_i, c_j) is greater than a preset threshold t_comp, so that the discrete nodes finally form a graph. The compatibility score of each triangle in the graph is calculated as
Comp(COT) = l(e(i,j)) + l(e(i,k)) + l(e(j,k)) (8)
where l(e(i,j)) = s(c_i, c_j), and the triangles are then sorted according to their compatibility scores.
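Step (4.3) can be sketched as follows, taking the pairwise compatibility values as a matrix S; the function name `score_triples` and the example threshold are illustrative.

```python
import itertools
import numpy as np

def score_triples(S, t_comp=0.5):
    """Build the compatibility graph from the pairwise score matrix S
    (an edge exists iff s(c_i, c_j) > t_comp) and rank every triangle
    (i, j, k) by Comp = l(e(i,j)) + l(e(i,k)) + l(e(j,k)), per eq. (8)."""
    n = S.shape[0]
    edge = S > t_comp
    triples = []
    for i, j, k in itertools.combinations(range(n), 3):
        if edge[i, j] and edge[i, k] and edge[j, k]:   # a triangle in the graph
            triples.append(((i, j, k), S[i, j] + S[i, k] + S[j, k]))
    triples.sort(key=lambda item: item[1], reverse=True)  # best triangles first
    return triples
```

Sorting the triangles before sampling is the point of the method: the highest-scoring triples are the most mutually consistent matches, so correct hypotheses tend to be generated in the earliest iterations.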
(4.4) Sampling the triples. Three-point sampling is performed according to the sorting result, and the pose is calculated from the three sampled matches,
where R_i and t_i denote the rotation matrix and the translation vector between the scene point cloud and the target point cloud at the i-th iteration.
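Formula (9) is not reproduced in this text; the standard closed-form SVD (Kabsch) solution for the rigid pose from sampled matches is sketched below as an assumption of what the pose computation looks like.

```python
import numpy as np

def pose_from_matches(src, tgt):
    """Closed-form rigid pose (R, t) minimizing sum ||R p^s + t - p^t||^2
    over 3 or more matched points, via the SVD/Kabsch method.
    src, tgt: (n, 3) arrays of matched source/target points."""
    cs, ct = src.mean(axis=0), tgt.mean(axis=0)
    H = (src - cs).T @ (tgt - ct)              # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ct - R @ cs
    return R, t
```

Three non-collinear matches determine the pose uniquely, which is why triple sampling suffices for hypothesis generation.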
(4.5) Evaluating the pose quality with the MAE function,
where one term denotes the rotation error and a constant threshold is used to judge whether a match c_j is an inlier. S_mae(T_i) scores the current pose T_i: the smaller the error, the higher the confidence in the current pose.
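Formula (10) is likewise not reproduced in this text, so an MAE-style inlier scoring consistent with the description (a constant threshold decides inliers; smaller error means higher confidence) is sketched below as an assumption; `tau` and the 1 - residual/tau weighting are illustrative.

```python
import numpy as np

def evaluate_pose(R, t, matches, tau=0.05):
    """Hypothesis-evaluation sketch: a match c_j = (p^s, p^t) is an
    inlier if its residual ||R p^s + t - p^t|| is below the constant
    tau; inliers contribute 1 - residual/tau, so low-residual inliers
    are weighted more heavily. Higher score = more trusted pose."""
    score = 0.0
    for ps, pt in matches:
        r = np.linalg.norm(R @ ps + t - pt)
        if r < tau:
            score += 1.0 - r / tau
    return score
```

In the RANSAC loop, each candidate pose from step (4.4) is scored this way and the best-scoring hypothesis survives.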
(4.6) Repeat the iteration of (4.4) and (4.5) until the specified number of iterations is reached, terminate the algorithm, and select the highest-scoring pose R_i, t_i as the final output pose; the three-dimensional registration reconstruction is completed with this final pose.
The effect of applying the algorithm of the invention to actual three-dimensional reconstruction of non-cooperative targets is shown in fig. 5 and table 1: across several groups of spatial non-cooperative targets, the algorithm outperforms the traditional SFM algorithm in both reconstruction speed and point cloud density.
Table 1 presents the quantitative analysis of the reconstruction results of the proposed method and the traditional SFM method under different data.
While the invention has been described with reference to specific embodiments, it is not limited thereto, and those skilled in the art can readily make various equivalent modifications or substitutions within the technical scope of the present disclosure.
Claims (6)
1. A non-cooperative target three-dimensional reconstruction method based on branch reconstruction registration is characterized by comprising the following steps:
s1: classifying the input multi-angle picture sequence;
s2: for each category of picture sequence, simultaneously obtaining corresponding three-dimensional point cloud data by using an SfM technology;
s3: carrying out scale unification on point cloud data with different scales;
s4: and using a pairwise registration algorithm to pairwise register the three-dimensional point cloud data acquired by the adjacent category picture sequences so as to acquire reconstructed point clouds.
2. The non-cooperative target three-dimensional reconstruction method based on branch reconstruction registration according to claim 1, characterized in that: the step of classifying the multi-angle picture sequence according to time sequence in S1 includes:
s11: numbering the acquired picture sequence from 1 to N in chronological order;
s12: setting the category size K and the inter-category overlap rate r, and calculating the number of categories:
3. the non-cooperative target three-dimensional reconstruction method based on branch reconstruction registration according to claim 1, characterized in that: the step of simultaneously obtaining the three-dimensional point cloud data corresponding to each category of picture sequence by using the SfM technology in S2 includes:
s21: extracting feature points and describing features of the pictures, and establishing a corresponding relation between adjacent pictures;
s22: solving a basic matrix F by using an eight-point method according to the corresponding relation;
s23: estimating the camera matrices M_i from the fundamental matrix F;
s24: solving the coordinates of the three-dimensional points X_j by triangulation,
where X_j (j = 1, …, n) are the n three-dimensional points sought, x_ij (i = 1, …, m) are their pixel coordinates in the m images, M_i denotes the camera projection matrix corresponding to the i-th image, and
x_ij = M_i X_j (i = 1, …, m; j = 1, …, n) (3).
4. The non-cooperative target three-dimensional reconstruction method based on branch reconstruction registration according to claim 1, characterized in that: the step of unifying the scales of various point cloud data in the step S3 is as follows:
s31: denoting two adjacent point clouds as the source point cloud P_s and the target point cloud P_t, and calculating the resolution of each point cloud,
where n_s and n_t are the numbers of points in the source and target point clouds, respectively, and dis_i denotes the Euclidean distance between the i-th point and its nearest neighbour in the point cloud;
5. The non-cooperative target three-dimensional reconstruction method based on branch reconstruction registration according to claim 1, characterized in that: the step of pairwise registration of the point clouds obtained from the various picture sequences in S4 is as follows:
s41: preprocessing:
adjacent point clouds are registered following the algorithmic idea of random sample consensus; the two point clouds are first preprocessed to obtain an initial match set C = {c_i}, where each match c_i pairs mutually matched key points p_i^s and p_i^t of the source and target point clouds, respectively, and this initial match set serves as the input of the algorithm;
s42: calculating a compatibility value s(c_i, c_j) between matches,
where t_cons is a constant and D(c_i, c_j) denotes the distance between matches, defined as follows;
s43: building the graph and sorting the triples according to the compatibility values; each match is abstracted as a node, and an edge is connected between two nodes if their compatibility value s(c_i, c_j) is greater than a preset threshold t_comp, so that the discrete nodes finally form a graph; the compatibility score of each triangle in the graph is calculated as
Comp(COT) = l(e(i,j)) + l(e(i,k)) + l(e(j,k)) (8)
where l(e(i,j)) = s(c_i, c_j), and the triangles are then sorted according to their compatibility scores;
s44: sampling the triples; three-point sampling is performed according to the sorting result of s43, and the pose is calculated from the three sampled matches,
where R_i and t_i denote the rotation matrix and the translation vector between the scene point cloud and the target point cloud at the i-th iteration;
s45: evaluating the pose quality with the mean absolute error (MAE) function,
where one term denotes the rotation error and a constant threshold is used to judge whether a match c_j is an inlier;
s46: repeating steps s44 and s45 and selecting the highest-scoring pose R_i, t_i as the final output pose.
6. The non-cooperative target three-dimensional reconstruction method based on branch reconstruction registration as claimed in claim 5, wherein: the preprocessing in step S41 includes point cloud down-sampling, key point and descriptor calculation, and feature matching.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111117965.5A CN113888695A (en) | 2021-09-21 | 2021-09-21 | Non-cooperative target three-dimensional reconstruction method based on branch reconstruction registration |
PCT/CN2022/101095 WO2023045455A1 (en) | 2021-09-21 | 2022-06-24 | Non-cooperative target three-dimensional reconstruction method based on branch reconstruction registration |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111117965.5A CN113888695A (en) | 2021-09-21 | 2021-09-21 | Non-cooperative target three-dimensional reconstruction method based on branch reconstruction registration |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113888695A true CN113888695A (en) | 2022-01-04 |
Family
ID=79010481
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111117965.5A Pending CN113888695A (en) | 2021-09-21 | 2021-09-21 | Non-cooperative target three-dimensional reconstruction method based on branch reconstruction registration |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113888695A (en) |
WO (1) | WO2023045455A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115191949A (en) * | 2022-07-28 | 2022-10-18 | 哈尔滨工业大学 | Dental disease diagnosis method and diagnosis instrument |
WO2023045455A1 (en) * | 2021-09-21 | 2023-03-30 | 西北工业大学 | Non-cooperative target three-dimensional reconstruction method based on branch reconstruction registration |
WO2024103846A1 (en) * | 2023-07-03 | 2024-05-23 | 西北工业大学 | Three-dimensional registration reconstruction method based on multi-domain multi-dimensional feature map |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116883471B (en) * | 2023-08-04 | 2024-03-15 | 天津大学 | Line structured light contact-point-free cloud registration method for chest and abdomen percutaneous puncture |
CN117291930A (en) * | 2023-08-25 | 2023-12-26 | 中建三局第三建设工程有限责任公司 | Three-dimensional reconstruction method and system based on target object segmentation in picture sequence |
CN116934822B (en) * | 2023-09-15 | 2023-12-05 | 众芯汉创(江苏)科技有限公司 | System for autonomously registering and converting refined model based on three-dimensional point cloud data |
CN116942317B (en) * | 2023-09-21 | 2023-12-26 | 中南大学 | Surgical navigation positioning system |
CN117315146B (en) * | 2023-09-22 | 2024-04-05 | 武汉大学 | Reconstruction method and storage method of three-dimensional model based on trans-scale multi-source data |
CN116993947B (en) * | 2023-09-26 | 2023-12-12 | 光谷技术有限公司 | Visual display method and system for three-dimensional scene |
CN117649409B (en) * | 2024-01-29 | 2024-04-02 | 深圳市锐健电子有限公司 | Automatic limiting system, method, device and medium for sliding table based on machine vision |
CN117934725A (en) * | 2024-02-01 | 2024-04-26 | 西安中核核仪器股份有限公司 | Simulation method for testing registration accuracy of indoor three-dimensional point cloud of building |
CN118097432A (en) * | 2024-04-18 | 2024-05-28 | 厦门理工学院 | Remote sensing image model estimation method based on second-order space consistency constraint |
CN118154677A (en) * | 2024-05-11 | 2024-06-07 | 常熟理工学院 | Non-cooperative target pose estimation method, system and storage medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107680159B (en) * | 2017-10-16 | 2020-12-08 | 西北工业大学 | Space non-cooperative target three-dimensional reconstruction method based on projection matrix |
WO2019230813A1 (en) * | 2018-05-30 | 2019-12-05 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | Three-dimensional reconstruction method and three-dimensional reconstruction device |
CN111383333B (en) * | 2020-04-02 | 2024-02-20 | 西安因诺航空科技有限公司 | Sectional SFM three-dimensional reconstruction method |
CN112085844B (en) * | 2020-09-11 | 2021-03-05 | 中国人民解放军军事科学院国防科技创新研究院 | Unmanned aerial vehicle image rapid three-dimensional reconstruction method for field unknown environment |
CN112508999B (en) * | 2020-11-20 | 2024-02-13 | 西北工业大学深圳研究院 | Space target motion state identification method based on collaborative observation image sequence |
CN113888695A (en) * | 2021-09-21 | 2022-01-04 | 西北工业大学 | Non-cooperative target three-dimensional reconstruction method based on branch reconstruction registration |
Also Published As
Publication number | Publication date |
---|---|
WO2023045455A1 (en) | 2023-03-30 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |