CN110363806B - Method for three-dimensional space modeling by using invisible light projection characteristics - Google Patents

Method for three-dimensional space modeling by using invisible light projection characteristics

Info

Publication number
CN110363806B
CN110363806B (application CN201910456110.1A)
Authority
CN
China
Prior art keywords
invisible light
pictures
dimensional
light projection
modeling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910456110.1A
Other languages
Chinese (zh)
Other versions
CN110363806A (en)
Inventor
崔岩
刘强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Germany Zhuhai Artificial Intelligence Institute Co ltd
4Dage Co Ltd
Original Assignee
China Germany Zhuhai Artificial Intelligence Institute Co ltd
4Dage Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Germany Zhuhai Artificial Intelligence Institute Co ltd, 4Dage Co Ltd filed Critical China Germany Zhuhai Artificial Intelligence Institute Co ltd
Priority to CN201910456110.1A priority Critical patent/CN110363806B/en
Publication of CN110363806A publication Critical patent/CN110363806A/en
Application granted granted Critical
Publication of CN110363806B publication Critical patent/CN110363806B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/55 - Depth or shape recovery from multiple images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for three-dimensional space modeling using invisible light projection features, and relates to the technical field of three-dimensional imaging and digital modeling. The method comprises the following steps: shooting a group of position pictures at each of two different positions in a space; extracting feature points from the position pictures; fusing the feature points of the pictures in the same group to obtain the feature points of that position; matching the feature points of the different positions with the SIFT algorithm; calculating initial camera positions for the different groups of position pictures with a SLAM algorithm; refining the camera positions and computing a sparse point cloud with an SFM algorithm; performing three-dimensional structural modeling based on the camera positions and the sparse point cloud; and finally performing three-dimensional scene mapping. By fusing the feature points of the invisible light projection picture and the RGB picture so that they complement each other, the invention solves the problems that, when modeling a single-color space with SIFT alone, few feature points can be extracted, three-dimensional model reconstruction is difficult, and the result is poor.

Description

Method for three-dimensional space modeling by using invisible light projection characteristics
Technical Field
The invention relates to the technical field of three-dimensional imaging digital modeling, in particular to a method for performing three-dimensional space modeling by using invisible light projection characteristics.
Background
Traditional three-dimensional space modeling is mainly aimed at scenes with rich and varied color information. Because a spatial scene with obvious color changes contains many feature points, a large amount of feature information can be extracted, so the reconstructed three-dimensional model is more accurate, that is, closer to the real scene. However, in a spatial scene with a single color (such as a construction site, a white wall, or glass), color transitions are weak or absent, sufficient feature information is hard to obtain, and the quality of the reconstructed three-dimensional model drops sharply. The cause of this problem is that traditional three-dimensional reconstruction extracts feature points by Scale-invariant feature transform (SIFT); in a single-color space few feature points are extracted, so reconstruction of the three-dimensional model is difficult and the result is poor.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides a method for three-dimensional space modeling using invisible light projection features, which comprises the following steps:
s1, shooting a group of position pictures at the same position in the space, wherein the position pictures comprise an RGB picture and an invisible light projection picture;
s2, another group of position pictures are taken at least one other position in the same space;
s3, extracting feature points of all the RGB pictures and the invisible light projection pictures by using an SIFT algorithm;
s4, carrying out feature point fusion on the RGB pictures and the invisible light projection pictures of the same group of position pictures to obtain position feature points of the position;
s5, carrying out matching calculation by using an SIFT algorithm according to the feature points of different positions;
s6, calculating initial camera positions of different groups of position pictures during shooting by utilizing a SLAM algorithm;
s7, calculating an accurate camera position and a sparse point cloud by utilizing an SFM algorithm;
s8, carrying out three-dimensional structural modeling based on the camera position and the sparse point cloud;
and S9, carrying out three-dimensional scene mapping.
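For illustration only, steps S3 and S4 might be sketched as follows, assuming OpenCV 4.4 or later (the file names and the simple pooling of keypoints are assumptions made for the sketch, not limitations of the claimed method):
// Illustrative sketch of steps S3-S4: extract SIFT features from the RGB picture
// and the invisible light projection picture of one position, then pool them into
// a single set of position feature points. File names are hypothetical.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat rgb = cv::imread("position1_rgb.jpg", cv::IMREAD_GRAYSCALE);
    cv::Mat ir  = cv::imread("position1_ir_projection.jpg", cv::IMREAD_GRAYSCALE);

    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> kpRgb, kpIr;
    cv::Mat descRgb, descIr;
    sift->detectAndCompute(rgb, cv::noArray(), kpRgb, descRgb);
    sift->detectAndCompute(ir,  cv::noArray(), kpIr,  descIr);

    // Fuse: both pictures are taken from the same position, so their keypoints
    // can simply be pooled into one set of "position feature points".
    std::vector<cv::KeyPoint> kpFused(kpRgb);
    kpFused.insert(kpFused.end(), kpIr.begin(), kpIr.end());
    cv::Mat descFused;
    cv::vconcat(descRgb, descIr, descFused);
    return 0;
}
Pooling the two descriptor sets in this way means the matching in step S5 treats projected-pattern features and natural RGB features identically, which is the complementarity described below.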
Preferably, the invisible light projection picture is formed by projecting a sample pattern by an invisible light projector and capturing the sample pattern by an invisible light sensor.
Preferably, the invisible light projector is an infrared projector, and the invisible light sensor is an infrared sensor.
Preferably, the RGB picture is an RGB picture photographed after filtering out invisible light by an optical filter.
Preferably, the sample pattern contains a specific feature point set in advance.
Preferably, the specific feature points include highlighted points or points with special shapes.
Preferably, the RGB picture is a dome photo, and the three-dimensional scene mapping is mapping by using the dome photo.
Preferably, step S8 further comprises performing closed-loop detection according to the camera position.
Preferably, the closed-loop detection is: comparing the currently calculated camera position with past camera positions to detect the distance difference; if the distance difference between the current camera position and a past camera position is within a certain threshold range, the camera is considered to have returned to that past position, and loop closure is triggered.
Preferably, the three-dimensional structured modeling specifically comprises the steps of:
s8.1, preliminarily calculating the position of the camera to obtain a part of sparse point cloud with noise points, and filtering the noise points in a distance and reprojection mode;
s8.2, marking the sparse point cloud, and carrying out corresponding marking;
s8.3, taking each sparse point cloud as a starting point, taking a corresponding spherical screen camera as a virtual straight line, and interweaving spaces through which a plurality of virtual straight lines pass to form a visual space;
s8.4, separating the space surrounded by the rays;
and S8.5, making a closed space based on the shortest path mode of graph theory.
Compared with the prior art, the invention has the beneficial effects that:
the invention avoids the calibration error when the data of a plurality of groups of cameras are collected by the infrared sensor and the optical filter, and solves the difficult problem that the three-dimensional reconstruction is difficult to be stably carried out in a monotonous space scene. The characteristic points of the invisible light projection picture and the RGB picture are fused with each other to form the mutual complement of the characteristic points. When processing RGB pictures of a scene such as a pure white wall or glass, the characteristic points are less, and stable characteristic points with a large number can be obtained by emitting invisible light patterns. The range of the characteristic points of the invisible light projection picture is limited to the area covered by the projected invisible light pattern, the whole scene cannot be covered, and the characteristic points of the RGB image can be effectively supplemented.
Drawings
FIG. 1 is a flow chart of a method for three-dimensional space modeling using invisible light projection characteristics
FIG. 2 is a flow chart of the three-dimensional structured modeling
Detailed Description
For a further understanding of the invention, the invention is described below with reference to specific embodiments, which are given by way of illustration only:
the invention provides a method for carrying out three-dimensional space modeling by using invisible light projection characteristics, which comprises the following steps:
s1, respectively shooting a group of position pictures at two different positions in the same space, wherein the position pictures comprise an RGB picture and an invisible light projection picture;
s2, respectively extracting characteristic points of the shot RGB pictures and the invisible light projection pictures, and fusing the characteristic points of the RGB pictures and the invisible light projection pictures of the same group of position pictures to obtain characteristic points in the shot scene at the position;
s3, carrying out matching calculation on feature points in the scene shot at different positions and initial camera positions when two groups of position pictures are shot;
s4, calculating an accurate camera position and a sparse point cloud, and performing three-dimensional structural modeling;
and S5, carrying out three-dimensional scene mapping.
It should be noted that SIFT descriptors are used to extract the feature points of the RGB picture and the invisible light projection picture in each position picture; at the same time the neighborhood of each feature point is analyzed and the feature points are controlled according to their neighborhoods. The initial camera positions at which the different groups of position pictures were shot are calculated with a SLAM algorithm, and the accurate camera positions and the sparse point cloud are calculated with an SFM algorithm. Refining the camera positions with the SFM algorithm makes the three-dimensional model generated in step S4 more accurate. A camera coordinate system is established with the camera position as the origin, and the intrinsic matrix of the camera is obtained with an existing camera calibration procedure or algorithm. The feature points are SIFT features, and the matching results often contain many mismatches. To eliminate these errors, this embodiment uses existing methods such as the ratio test with a KNN search: for each feature the 2 best matching features are retrieved, and the match is accepted only if the ratio of the first match distance to the second match distance is below a certain threshold; otherwise it is treated as a mismatch. After the matching points are obtained, an essential matrix can be computed with the function findEssentialMat() introduced in OpenCV 3.0; three-dimensional reconstruction then recovers the spatial coordinates of the matching points from the known information. The findEssentialMat function can be expressed as follows:
Mat findEssentialMat(InputArray points1, InputArray points2,
                     InputArray cameraMatrix, int method = RANSAC,
                     double prob = 0.999, double threshold = 1.0,
                     OutputArray mask = noArray());
the findEsentialMat function mainly has the function of calculating a basic matrix from corresponding points in the two images, wherein points1 represent N two-dimensional pixel points of the first image, and the point coordinates are floating points with single precision or double precision; points2 represent two-dimensional pixels of the second picture, which are the same size and type as points 1; cameraMatrix is a camera matrix, and it is assumed that points1 and points2 are feature points of cameras having the same camera matrix; method is a method for calculating the characteristic matrix, and the RANSAC algorithm is adopted in the embodiment; prob is expressed as probability and is used for parameters of a characteristic matrix calculation method, and the correct reliability of the matrix is mainly estimated; threshold is used as a parameter of RANSAC and represents the maximum distance (in pixels) from a point to an epipolar line, and when the maximum distance exceeds the point, the point is regarded as an abnormal value and is not used for calculating a final basic matrix, and the maximum distance can be set according to the difference of point positioning precision, image resolution and image noise; mask is an array that outputs N elements, where each element is set to 0 for outliers and 1 for other points. The function return is the calculated local feature matrix, which may be further passed to decomposesesentitalmat or recoverPose to recover the relative position between the cameras.
The SFM algorithm is an off-line algorithm for three-dimensional reconstruction from a collection of unordered pictures. Before the core structure-from-motion computation is run, some preparation is needed to pick out suitable pictures. First, focal length information is read from each picture (it is needed later to initialize bundle adjustment); then image features are extracted with a feature extraction algorithm such as SIFT, and the Euclidean distances between the feature points of two pictures are computed with a kd-tree model to match the feature points, so as to find image pairs whose number of feature point matches meets the requirement. For each matched image pair, the epipolar geometry is computed, the F matrix is estimated, and the matching pairs are refined through RANSAC optimization. In this way, if a feature point can be passed on through such matching pairs in a chain and is detected throughout, it forms a track.
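The image-pair preparation described above can be sketched as follows, assuming OpenCV 4.4 or later with SIFT in the main module; the 0.7 ratio-test threshold and the minimum match counts of 30 and 20 are illustrative values, not taken from the patent:
// Illustrative image-pair selection for SFM: SIFT features, kd-tree (FLANN)
// matching with the ratio test, then an epipolar consistency check via a
// RANSAC-estimated fundamental matrix.
#include <opencv2/opencv.hpp>
#include <vector>

bool isUsableImagePair(const cv::Mat& img1, const cv::Mat& img2) {
    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat d1, d2;
    sift->detectAndCompute(img1, cv::noArray(), kp1, d1);
    sift->detectAndCompute(img2, cv::noArray(), kp2, d2);

    // kd-tree matcher, 2 nearest neighbours per descriptor for the ratio test.
    cv::FlannBasedMatcher matcher;
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(d1, d2, knn, 2);

    std::vector<cv::Point2f> p1, p2;
    for (const auto& m : knn) {
        if (m.size() == 2 && m[0].distance < 0.7f * m[1].distance) {  // ratio test
            p1.push_back(kp1[m[0].queryIdx].pt);
            p2.push_back(kp2[m[0].trainIdx].pt);
        }
    }
    if (p1.size() < 30) return false;  // too few tentative matches

    // Estimate F with RANSAC and count the surviving inliers.
    std::vector<uchar> inliers;
    cv::findFundamentalMat(p1, p2, cv::FM_RANSAC, 3.0, 0.99, inliers);
    return cv::countNonZero(inliers) >= 20;
}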
In reverse engineering, a point cloud is the set of points sampled from the surface of an object by a measuring instrument. The points obtained with a three-dimensional coordinate measuring machine are few and widely spaced, and are called a sparse point cloud; the points obtained with a three-dimensional laser scanner or a photographic scanner are more numerous and denser, and are called a dense point cloud.
The position information located by the VSLAM algorithm is the position information of the dome camera obtained by localizing the dome camera. It should be further explained that feature points are extracted by the VSLAM algorithm from the two-dimensional panoramic pictures taken by the dome camera, and the three-dimensional spatial position of the dome camera is recovered by triangulating these feature points.
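A simplified sketch of the triangulation step is given below; it assumes a pinhole projection model with known 3x4 projection matrices rather than the panoramic dome-camera model, so it only illustrates the principle:
// Illustrative triangulation of matched feature points from two views into 3D
// space points. P1 and P2 are 3x4 projection matrices (K * [R|t]) of the two poses.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Point3f> triangulate(const cv::Mat& P1, const cv::Mat& P2,
                                     const std::vector<cv::Point2f>& pts1,
                                     const std::vector<cv::Point2f>& pts2) {
    cv::Mat pts4d;  // 4xN homogeneous coordinates
    cv::triangulatePoints(P1, P2, pts1, pts2, pts4d);
    pts4d.convertTo(pts4d, CV_32F);  // make the element type explicit

    std::vector<cv::Point3f> out;
    for (int i = 0; i < pts4d.cols; ++i) {
        float w = pts4d.at<float>(3, i);  // dehomogenize
        out.emplace_back(pts4d.at<float>(0, i) / w,
                         pts4d.at<float>(1, i) / w,
                         pts4d.at<float>(2, i) / w);
    }
    return out;
}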
Step S3 further includes closed-loop detection performed according to the camera position. The closed-loop detection is: comparing the currently calculated camera position with past camera positions to detect the distance difference; if the distance difference between the current camera position and a past camera position is within a certain threshold range, the camera is considered to have returned to that past position, and loop closure is triggered.
It should be further noted that the closed-loop detection of the present invention is based on spatial information rather than on a time series.
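A minimal sketch of such a spatial loop check might look as follows; the 0.3 m threshold is an assumed value, not one specified by the patent:
// Illustrative spatial loop check: a loop is declared when the newly computed
// camera position lies within a distance threshold of a previously visited position.
#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

bool detectLoop(const cv::Point3f& current,
                const std::vector<cv::Point3f>& pastPositions,
                float threshold = 0.3f) {
    for (const auto& past : pastPositions) {
        float dx = current.x - past.x;
        float dy = current.y - past.y;
        float dz = current.z - past.z;
        if (std::sqrt(dx * dx + dy * dy + dz * dz) < threshold)
            return true;  // returned to a previously visited position
    }
    return false;
}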
Further, in a preferred embodiment of the present invention, the invisible light projection picture is formed by projecting a sample pattern with an invisible light projector and capturing it with an invisible light sensor. The invisible light projector is an infrared projector and the invisible light sensor is an infrared sensor. The sample pattern contains preset specific feature points, which include highlighted points or points with special shapes. For example, the specific feature points of the invisible light emitted by the projector may be preset in the shape of a pattern such as "stars" or a "dragon", and the points in the pattern have a specific distribution; for instance a highlighted point may have a point to its left, a point to its right, or a point above it, so the positions of the feature points in the captured scene can be found by following the specific feature points defined when the pattern was preset. Moreover, since the pattern is preset, the number of feature points it contains is known, so a minimum number of matched feature points can be set: for example, if 100 feature points are preset, it can be required that at least 70 or 80 of them are found during image recognition to ensure stable image capture, while still searching for as many feature points as possible.
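For illustration, a check of this kind could be sketched as follows; the function patternStable and the 0.7 ratio (mirroring the "at least 70 of 100" example above) are assumptions made for the sketch:
// Illustrative stability check: enough of the preset pattern feature points must
// be re-found in the captured invisible light picture. patternDesc holds the SIFT
// descriptors of the known pattern, capturedDesc those of the captured picture.
#include <opencv2/opencv.hpp>
#include <vector>

bool patternStable(const cv::Mat& patternDesc, const cv::Mat& capturedDesc,
                   double minRatio = 0.7) {
    cv::BFMatcher matcher(cv::NORM_L2, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(patternDesc, capturedDesc, matches);
    return static_cast<double>(matches.size()) >= minRatio * patternDesc.rows;
}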
In addition, in a further implementation, different sample patterns may be projected at the same position in space using invisible light of different wavelengths; optical filters for the different wavelengths then separate the sample patterns, giving two or even more groups of sample patterns at the same position. The computation is performed on the feature extraction results of the different groups of sample patterns, and the results are compared with each other to obtain a more accurate result.
Alternatively, the above method may be applied at different positions in space: for example, several sets of sample patterns are preset, a first sample pattern is projected with invisible light at a first position in space, a second sample pattern at a second position, and so on; the sample patterns may be used alternately or cyclically, and the computation is then performed on the feature point extraction results obtained from the different sample patterns. Because the feature points are laid out differently in each set of sample patterns, this effectively avoids the computation errors that can arise from repeatedly using the same set of sample patterns when its feature points are not distinctive.
To ensure that the RGB picture and the invisible light projection picture in a group of position pictures taken at the same position are shot from the same orientation (azimuth), a removable optical filter is arranged between the image sensor and the lens of the camera used to shoot the position pictures, and the RGB picture is shot after the invisible light has been filtered out by the optical filter so as to avoid interference.
It should be noted that the three-dimensional structural modeling in step S4 specifically includes the steps of:
S4.1, preliminarily calculating the camera position to obtain a partial sparse point cloud containing noise points, and filtering out the noise points by distance and reprojection (a sketch of this filtering step follows the list);
S4.2, labeling the sparse point cloud with corresponding marks;
S4.3, taking each sparse point-cloud point as a starting point and drawing a virtual straight line to the corresponding dome camera, the spaces traversed by the many virtual straight lines interweaving to form the visible space;
S4.4, carving out the space enclosed by these rays;
and S4.5, constructing a closed space based on the shortest-path method of graph theory.
Specifically, the RGB picture is a dome photo, that is, it is shot with a dome camera, and the three-dimensional scene mapping is performed using the dome photo.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are merely illustrative of the principles of the invention, but that various changes and modifications may be made without departing from the spirit and scope of the invention, which is defined by the appended claims and their equivalents.

Claims (8)

1. A method for three-dimensional space modeling using invisible light projection features, comprising the steps of:
s1, shooting a group of position pictures at the same position in the space, wherein the position pictures comprise an RGB picture and an invisible light projection picture;
s2, another group of position pictures are taken at least one other position in the same space;
s3, extracting feature points of all the RGB pictures and the invisible light projection pictures by using an SIFT algorithm;
s4, carrying out feature point fusion on the RGB pictures and the invisible light projection pictures of the same group of position pictures to obtain position feature points of the position;
s5, carrying out matching calculation by using an SIFT algorithm according to the feature points of different positions;
s6, calculating initial camera positions of different groups of position pictures during shooting by utilizing a SLAM algorithm;
s7, calculating an accurate camera position and a sparse point cloud by utilizing an SFM algorithm;
s8, carrying out three-dimensional structural modeling based on the camera position and the sparse point cloud;
s9, carrying out three-dimensional scene mapping;
the invisible light projection picture is formed by projecting a sample pattern through an invisible light projector and capturing the sample pattern with an invisible light sensor; the sample pattern comprises preset specific feature points; and the positions of the feature points are found according to the specific feature points set when the pattern was preset;
the RGB picture is an RGB picture of a spatial scene with a single color.
2. The method for three-dimensional space modeling using invisible light projection features according to claim 1, wherein the invisible light projector is an infrared projector and the invisible light sensor is an infrared sensor.
3. The method as claimed in claim 1, wherein the RGB images are RGB images obtained by filtering out invisible light with a filter.
4. The method for three-dimensional space modeling using invisible light projection features according to claim 1, wherein: the specific feature points comprise highlighted points or points with special shapes.
5. The method of claim 1, wherein the RGB images are dome photos and the three-dimensional scene map is a map using the dome photos.
6. The method for three-dimensional space modeling using invisible light projection features according to claim 1, wherein said step S8 further comprises performing closed loop detection based on said camera position.
7. The method for three-dimensional space modeling using invisible light projection features according to claim 6, wherein the closed loop detection is: comparing the currently calculated camera position with past camera positions to detect the distance difference; and if the distance difference between the current camera position and a past camera position is within a certain threshold range, considering that the camera has returned to that past position and triggering loop closure.
8. The method for three-dimensional space modeling using invisible light projection features according to claim 7, wherein the three-dimensional structured modeling specifically comprises the steps of:
s8.1, preliminarily calculating the position of the camera to obtain a part of sparse point cloud with noise points, and filtering the noise points in a distance and reprojection mode;
s8.2, marking the sparse point cloud, and carrying out corresponding marking;
s8.3, taking each sparse point cloud as a starting point, taking a corresponding spherical screen camera as a virtual straight line, and interweaving spaces through which a plurality of virtual straight lines pass to form a visual space;
s8.4, separating the space surrounded by the rays;
and S8.5, making a closed space based on the shortest path mode of graph theory.
CN201910456110.1A 2019-05-29 2019-05-29 Method for three-dimensional space modeling by using invisible light projection characteristics Active CN110363806B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910456110.1A CN110363806B (en) 2019-05-29 2019-05-29 Method for three-dimensional space modeling by using invisible light projection characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910456110.1A CN110363806B (en) 2019-05-29 2019-05-29 Method for three-dimensional space modeling by using invisible light projection characteristics

Publications (2)

Publication Number Publication Date
CN110363806A CN110363806A (en) 2019-10-22
CN110363806B true CN110363806B (en) 2021-12-31

Family

ID=68215002

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910456110.1A Active CN110363806B (en) 2019-05-29 2019-05-29 Method for three-dimensional space modeling by using invisible light projection characteristics

Country Status (1)

Country Link
CN (1) CN110363806B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112294453B (en) * 2020-10-12 2022-04-15 浙江未来技术研究院(嘉兴) Microsurgery surgical field three-dimensional reconstruction system and method
CN114067061A (en) * 2021-12-01 2022-02-18 成都睿铂科技有限责任公司 Three-dimensional reconstruction method and system

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104021548A (en) * 2014-05-16 2014-09-03 中国科学院西安光学精密机械研究所 Method for acquiring 4D scene information
CN106384383A (en) * 2016-09-08 2017-02-08 哈尔滨工程大学 RGB-D and SLAM scene reconfiguration method based on FAST and FREAK feature matching algorithm
CN106548489A (en) * 2016-09-20 2017-03-29 深圳奥比中光科技有限公司 The method for registering of a kind of depth image and coloured image, three-dimensional image acquisition apparatus
CN108267097A (en) * 2017-07-17 2018-07-10 杭州先临三维科技股份有限公司 Three-dimensional reconstruction method and device based on binocular three-dimensional scanning system
CN108510434A (en) * 2018-02-12 2018-09-07 中德(珠海)人工智能研究院有限公司 The method for carrying out three-dimensional modeling by ball curtain camera
CN108447116A (en) * 2018-02-13 2018-08-24 中国传媒大学 The method for reconstructing three-dimensional scene and device of view-based access control model SLAM
CN108566545A (en) * 2018-03-05 2018-09-21 中德(珠海)人工智能研究院有限公司 The method that three-dimensional modeling is carried out to large scene by mobile terminal and ball curtain camera
CN108629829A (en) * 2018-03-23 2018-10-09 中德(珠海)人工智能研究院有限公司 The three-dimensional modeling method and system that one bulb curtain camera is combined with depth camera
CN108958469A (en) * 2018-05-07 2018-12-07 中德(珠海)人工智能研究院有限公司 A method of hyperlink is increased in virtual world based on augmented reality

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on problems related to SLAM based on Kinect monocular vision; 张徐杰; China Masters' Theses Full-text Database, Information Science and Technology; 20190115 (No. 1); I138-4082 *
Research on multi-source object three-dimensional reconstruction methods based on SFM; 王静; China Masters' Theses Full-text Database, Information Science and Technology; 20190215 (No. 2); I138-1638 *

Also Published As

Publication number Publication date
CN110363806A (en) 2019-10-22

Similar Documents

Publication Publication Date Title
Hasinoff: Photon, Poisson noise
CN103337094B (en) A kind of method of applying binocular camera and realizing motion three-dimensional reconstruction
WO2019100933A1 (en) Method, device and system for three-dimensional measurement
KR101121034B1 (en) System and method for obtaining camera parameters from multiple images and computer program products thereof
Teller et al. Calibrated, registered images of an extended urban area
Zhang et al. A UAV-based panoramic oblique photogrammetry (POP) approach using spherical projection
CN111968129A (en) Instant positioning and map construction system and method with semantic perception
CN110378995B (en) Method for three-dimensional space modeling by using projection characteristics
CN108648194B (en) Three-dimensional target identification segmentation and pose measurement method and device based on CAD model
CN111382613B (en) Image processing method, device, equipment and medium
CN101853524A (en) Method for generating corn ear panoramic image by using image sequence
CN110855903A (en) Multi-channel video real-time splicing method
CN113592721B (en) Photogrammetry method, apparatus, device and storage medium
CN109900274B (en) Image matching method and system
CN115035235A (en) Three-dimensional reconstruction method and device
CN110363806B (en) Method for three-dimensional space modeling by using invisible light projection characteristics
Zhuang et al. Degeneracy in self-calibration revisited and a deep learning solution for uncalibrated slam
CN107038714A (en) Many types of visual sensing synergistic target tracking method
EP4066162A1 (en) System and method for correspondence map determination
Tamas et al. Relative pose estimation and fusion of omnidirectional and lidar cameras
Sang et al. Inferring super-resolution depth from a moving light-source enhanced RGB-D sensor: a variational approach
Dai et al. Multi-spectral visual odometry without explicit stereo matching
CN117197333A (en) Space target reconstruction and pose estimation method and system based on multi-view vision
Sinha Pan-tilt-zoom (PTZ) camera
CN115456870A (en) Multi-image splicing method based on external parameter estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant