CN111652967A - Three-dimensional reconstruction system and method based on front-back fusion imaging - Google Patents
- Publication number
- CN111652967A CN111652967A CN202010413235.9A CN202010413235A CN111652967A CN 111652967 A CN111652967 A CN 111652967A CN 202010413235 A CN202010413235 A CN 202010413235A CN 111652967 A CN111652967 A CN 111652967A
- Authority
- CN
- China
- Prior art keywords
- dimensional reconstruction
- module
- fusion imaging
- article
- dimensional
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/56—Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/74—Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention discloses a three-dimensional reconstruction system based on front-back fusion imaging, comprising a frame body, a fusion imaging module and a three-dimensional reconstruction module. The frame body is provided with a bearing space in which the article to be reconstructed is placed. The fusion imaging module, arranged on the inner side of the frame body and connected to it, captures a foreground image and a background image of the article. The three-dimensional reconstruction module, electrically connected to the fusion imaging module, combines the foreground image and the background image to obtain a three-dimensional model of the article. The invention also discloses a three-dimensional reconstruction method based on front-back fusion imaging, comprising the following steps. S1: place a calibration object in the bearing space and, according to the calibration object, adjust the internal and external parameters of the fusion imaging module and the coordinate transformation matrix of the three-dimensional reconstruction module. S2: take out the calibration object, place the target object in its position, and let the adjusted three-dimensional reconstruction system form a three-dimensional model of the object.
Description
Technical Field
The invention relates to the field of three-dimensional reconstruction, in particular to a three-dimensional reconstruction system and a three-dimensional reconstruction method based on front-back fusion imaging.
Background
In recent years, with the rapid development of computer vision technology, three-dimensional reconstruction, an important branch of the field, has been widely applied in 3D printing, medical technology, remote sensing and other areas, and has attracted considerable attention.
Current three-dimensional reconstruction techniques can be broadly divided into depth-camera-based, time-based and space-based reconstruction. Depth-camera-based reconstruction, also called active reconstruction, relies on an auxiliary energy source such as laser or sound waves: the acquisition device measures the reflected intensity of the source and extracts depth information from it. Depth camera devices on the market, such as the Kinect, work on this principle. Time-based and space-based reconstruction need only ordinary cameras, so their hardware requirements are comparatively low. The time-based method obtains parallax pictures from the inter-frame parallax generated by a single camera as it moves, and then reconstructs by matching between frames. It can reconstruct an object in three dimensions with a single camera, but it requires a change in the relative position of camera and object, and the motion parameters must be recorded for the reconstruction, which places demands on the shooting conditions. Reconstructing a large surface requires large-scale movement and many frames of calculation, increasing both the complexity of shooting and the amount of information to be processed. The space-based method photographs the object with several cameras at different angles to obtain a group of pictures with parallax, and uses the positional relations of the cameras to construct a transformation matrix that converts two-dimensional image coordinates into three-dimensional coordinates, thereby reconstructing the object's surface.
The main problem with the space-based method is the trade-off between the number of camera positions and the reconstructed surface area. Cutting cost means using the minimum of two cameras in a binocular vision system, but such a system sees a relatively limited surface area, and a single group of pictures reconstructs only a small range. Conversely, covering a larger range requires more camera positions photographing the surface from different angles, which raises equipment cost and multiplies the number of pictures to process. In summary, both current time-based and space-based reconstruction methods suffer from operational complexity and a limited reconstructable surface range.
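As a concrete illustration of the parallax principle underlying binocular reconstruction, the standard rectified-stereo relation Z = f·B/d converts a pixel disparity into depth. The focal length, baseline and disparity below are illustrative values, not figures from the patent:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from its disparity in a rectified stereo pair.

    Z = f * B / d: depth falls off as disparity shrinks, which is why a
    binocular rig covers only a limited, front-facing surface area per shot.
    """
    return focal_px * baseline_m / disparity_px

# a point seen 16 px apart by two cameras 12 cm apart, 800 px focal length
print(depth_from_disparity(800.0, 0.12, 16.0))  # → 6.0 (metres)
```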
Disclosure of Invention
To overcome these shortcomings of the prior art, the invention provides a three-dimensional reconstruction system and method based on front-back fusion imaging, with which a three-dimensional model of the target object is obtained in a single imaging pass, without moving either the object or the photographing equipment.
In order to solve the technical problems, the technical scheme of the invention is as follows:
A three-dimensional reconstruction system based on front-back fusion imaging, used to form a three-dimensional model of an article, comprises a frame body, a fusion imaging module and a three-dimensional reconstruction module, wherein:
the frame body is provided with a bearing space for placing the article;
the fusion imaging module captures a foreground image and a background image of the article, is arranged on the inner side of the frame body, and is connected to the frame body;
the three-dimensional reconstruction module combines the foreground image and the background image of the article to obtain its three-dimensional model, and is electrically connected to the fusion imaging module.
With this system, a three-dimensional model of the target object is obtained in a single imaging pass, without moving either the object or the photographing equipment.
In a preferred embodiment, the frame is made of an opaque material.
In this preferred scheme, the opaque material prevents light from the external environment from entering the bearing space and interfering with image capture by the fusion imaging module.
In a preferred embodiment, the three-dimensional reconstruction system further includes a light supplement module, the light supplement module is configured to perform light supplement illumination, the light supplement module is disposed inside the frame, and the light supplement module is connected to the frame.
In this preferred scheme, the light supplement module compensates for insufficient light intensity in the bearing space.
In a preferred embodiment, the luminance of the fill-in light module is adjustable.
In a preferred embodiment, the fill-in light module is a light source with a high color rendering index.
In this preferred scheme, a light source with a high color rendering index helps the reconstructed pictures reproduce the colors of the scene faithfully.
In a preferred scheme, the inner side of the frame body is lined with a non-reflective material.
In this preferred scheme, reflections from the light supplement module during operation could interfere with image capture by the fusion imaging module; lining the inner side of the frame body with a non-reflective material suppresses this reflection problem.
In a preferred scheme, the fusion imaging module comprises a binocular shooting submodule and two reflectors, wherein:
the binocular shooting submodule is arranged in front of the article and acquires the foreground image of the article;
the two reflectors are arranged behind the article and, by reflection, enable the binocular shooting submodule to acquire the background image of the article.
In a preferred scheme, the reflectors are arranged perpendicular to the horizontal plane, and the included angle between the two reflectors is less than 180°.
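The role of the mirrors can be made precise: a planar mirror behind the article shows the binocular submodule a view equivalent to that of a "virtual" camera reflected through the mirror plane, which is how a single front-mounted stereo rig also captures the back surface. A minimal sketch of that reflection, with an illustrative plane and camera position:

```python
def reflect_across_plane(point, normal, offset):
    """Reflect a 3-D point across the plane {x : normal . x = offset}.

    `normal` must be a unit vector.  Reflecting the real camera centre
    across a mirror plane yields the centre of the equivalent virtual
    camera that observes the article's back side.
    """
    s = sum(p * n for p, n in zip(point, normal)) - offset
    return tuple(p - 2.0 * s * n for p, n in zip(point, normal))

# vertical mirror plane x = 2 (unit normal along +x), real camera at the origin:
virtual = reflect_across_plane((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 2.0)
print(virtual)  # → (4.0, 0.0, 0.0): a virtual camera behind the mirror
```

Because the two mirrors meet at an included angle below 180°, their two virtual cameras view the back of the article from different directions, so the back surface also yields parallax for reconstruction.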
The invention also discloses a three-dimensional reconstruction method based on the three-dimensional reconstruction system, which comprises the following steps:
s1: placing a calibration object in the bearing space, and adjusting the internal parameters and the external parameters of the fusion imaging module and the coordinate transformation matrix of the three-dimensional reconstruction module according to the calibration object;
s2: and taking out the calibration object, placing the target object at the position of the calibration object, and forming a three-dimensional model of the object by the adjusted three-dimensional reconstruction system.
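Step S1 adjusts the internal (intrinsic) and external (extrinsic) parameters of a standard pinhole camera model. The sketch below, with illustrative values not taken from the patent, shows what those parameters do: the intrinsic matrix K maps camera-frame coordinates to pixels, while the extrinsics [R|t] place the world (calibration-object) frame relative to the camera.

```python
import numpy as np

def project(K, R, t, X):
    """Project world point X to pixel coordinates with pinhole
    intrinsics K and extrinsics (R, t)."""
    x_cam = R @ X + t       # world frame -> camera frame (extrinsics)
    u, v, w = K @ x_cam     # camera frame -> homogeneous pixels (intrinsics)
    return np.array([u / w, v / w])

K = np.array([[800.0,   0.0, 320.0],    # focal lengths and principal point
              [  0.0, 800.0, 240.0],    # (illustrative values)
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 4.0])   # calibration object 4 m ahead

print(project(K, R, t, np.array([0.0, 0.0, 0.0])))  # → [320. 240.]
```

Calibration is the inverse task: given known points on the calibration object and their observed pixels, solve for K, R and t so that projections like the one above match the measurements.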
In a preferred embodiment, step S2 includes the following sub-steps:
S2.1: take out the calibration object and place the target object in its position; the fusion imaging module acquires a foreground image and a background image of the target object and inputs them to the three-dimensional reconstruction module for algorithmic reconstruction, yielding three-dimensional point cloud data for the foreground image and for the background image of the target object;
S2.2: during reconstruction, apply distortion correction to the input images using the adjusted internal parameters of the fusion imaging module, eliminating errors caused by lens distortion;
S2.3: perform the coordinate transformation using the adjusted external parameters of the fusion imaging module, converting two-dimensional image coordinates into three-dimensional point cloud coordinates;
S2.4: filter out mismatched points produced during the coordinate conversion with a point cloud filtering algorithm;
S2.5: mesh (package) the stitched complete point cloud data to form the three-dimensional model.
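Sub-step S2.3 converts matched two-dimensional coordinates into a three-dimensional point. A common way to do this, though not necessarily the patent's exact algorithm, is linear (DLT) triangulation from the two cameras' projection matrices; the camera geometry below is a toy example:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2 are 3x4 projection matrices; x1, x2 the matched pixel
    coordinates.  Solves A X = 0 in the least-squares sense via SVD.
    """
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenise

# toy rectified pair: identical intrinsics, right camera shifted 10 cm along x
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

X_true = np.array([0.2, 0.1, 2.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))   # recovers ≈ [0.2, 0.1, 2.0]
```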
In a preferred scheme, the fusion imaging module comprises a binocular shooting submodule and two reflectors, and forms a left-camera foreground image, a right-camera foreground image, a left-camera background image and a right-camera background image of the target object; S2.2 then also includes the following:
after distortion correction, perform feature extraction and matching on the target object in the images to obtain matching points between the left-camera and right-camera foreground images, or between the left-camera and right-camera background images.
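The feature matching in S2.2 typically pairs binary descriptors by Hamming distance and keeps only unambiguous pairs. A self-contained sketch of Lowe's ratio test follows; the descriptors are toy integers, not real detector output:

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def ratio_match(left, right, ratio=0.75):
    """For each left descriptor, accept its nearest right descriptor only
    if it beats the second nearest by Lowe's ratio test, discarding
    ambiguous matches before they can poison the triangulation."""
    matches = []
    for i, d in enumerate(left):
        order = sorted(range(len(right)), key=lambda j: hamming(d, right[j]))
        if len(order) < 2:
            continue
        best, second = order[0], order[1]
        if hamming(d, right[best]) < ratio * hamming(d, right[second]):
            matches.append((i, best))
    return matches

# 0b0000 is close to 0b0001 (distance 1) and far from 0b1111 (distance 4)
print(ratio_match([0b0000], [0b0001, 0b1111]))  # → [(0, 0)]
```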
Compared with the prior art, the technical scheme of the invention has the following beneficial effect:
a three-dimensional model of the target object is obtained in a single imaging pass, without moving either the object or the photographing equipment.
Drawings
FIG. 1 is a flow chart of embodiment 2.
FIG. 2 is a block diagram of embodiment 1.
FIG. 3 is a schematic diagram of embodiment 2.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent; for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted. The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
As shown in FIG. 2, a three-dimensional reconstruction system based on front-back fusion imaging, used to form a three-dimensional model of an article, comprises a frame body, a fusion imaging module and a three-dimensional reconstruction module, wherein:
the frame body is provided with a bearing space for placing the article;
the fusion imaging module captures a foreground image and a background image of the article, is arranged on the inner side of the frame body, and is connected to the frame body;
the three-dimensional reconstruction module combines the foreground image and the background image of the article to obtain its three-dimensional model, and is electrically connected to the fusion imaging module.
With embodiment 1, a three-dimensional model of the target object is obtained in a single imaging pass, without moving either the object or the photographing equipment.
In embodiment 1, the following extensions can also be made: the frame body is made of opaque material.
In the improved embodiment, the light-tight material can prevent light rays of the external environment from entering the bearing space and influencing the image taking of the fusion imaging module.
In embodiment 1 and the above modified embodiments, the following extensions may also be made: the three-dimensional reconstruction system further comprises a light supplementing module, the light supplementing module is used for supplementing light for illumination, the light supplementing module is arranged on the inner side of the frame body, and the light supplementing module is connected with the frame body.
In this modified embodiment, the light supplement module compensates for insufficient light intensity in the bearing space.
In embodiment 1 and the above modified embodiments, the following extensions may also be made: the brightness of the light supplementing module is adjustable.
In embodiment 1 and the above modified embodiments, the following extensions may also be made: the light supplement module is a light source with high color rendering index.
In the modified embodiment, a light source with a high color rendering index helps the reconstructed pictures reproduce the colors of the scene faithfully.
In embodiment 1 and the above modified embodiments, the following extensions may also be made: the inner side of the frame body is laid with non-reflective materials.
In the modified embodiment, reflections from the light supplement module during operation could interfere with image capture by the fusion imaging module; a non-reflective material is therefore laid on the inner side of the frame body to suppress this reflection problem.
In embodiment 1 and the above modified embodiments, the following extensions may also be made: the fusion imaging module comprises a binocular shooting submodule and two reflectors, wherein,
the binocular shooting submodule is arranged in front of the article and used for acquiring a foreground image of the article;
the two reflectors are arranged behind the article, and the reflectors are used for acquiring a background image of the article by combining with the binocular shooting submodule through reflection.
In embodiment 1 and the above modified embodiments, the reflectors may further be arranged perpendicular to the horizontal plane, with the included angle between the two reflectors less than 180°.
Example 2
As shown in FIGS. 2 to 3, embodiment 2 is a method using the system of embodiment 1: a three-dimensional reconstruction method based on front-back fusion imaging, comprising the following steps:
s1: placing a calibration object in the bearing space, and adjusting the internal parameters and the external parameters of the fusion imaging module and the coordinate transformation matrix of the three-dimensional reconstruction module according to the calibration object;
s2: and taking out the calibration object, placing the target object at the position of the calibration object, and forming a three-dimensional model of the object by the adjusted three-dimensional reconstruction system.
In embodiment 2, the following extensions can also be made. S2 includes the following sub-steps:
S2.1: take out the calibration object and place the target object in its position; the fusion imaging module acquires a foreground image and a background image of the target object and inputs them to the three-dimensional reconstruction module for algorithmic reconstruction, yielding three-dimensional point cloud data for the foreground image and for the background image of the target object;
S2.2: during reconstruction, apply distortion correction to the input images using the adjusted internal parameters of the fusion imaging module, eliminating errors caused by lens distortion;
S2.3: perform the coordinate transformation using the adjusted external parameters of the fusion imaging module, converting two-dimensional image coordinates into three-dimensional point cloud coordinates;
S2.4: filter out mismatched points produced during the coordinate conversion with a point cloud filtering algorithm;
S2.5: mesh (package) the stitched complete point cloud data to form the three-dimensional model.
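The point cloud filtering of S2.4 can be realised with a statistical outlier filter of the kind found in standard point-cloud libraries. The pure-Python sketch below, whose neighbour count and threshold are illustrative choices, drops points whose mean distance to their k nearest neighbours is anomalously large:

```python
import math

def sor_filter(points, k=2, std_ratio=1.0):
    """Statistical outlier removal: keep a point only if its mean distance
    to its k nearest neighbours is within mean + std_ratio * stddev of the
    same statistic computed over the whole cloud."""
    mean_knn = []
    for p in points:
        dists = sorted(math.dist(p, q) for q in points if q is not p)
        mean_knn.append(sum(dists[:k]) / k)
    mu = sum(mean_knn) / len(mean_knn)
    sigma = math.sqrt(sum((d - mu) ** 2 for d in mean_knn) / len(mean_knn))
    keep = mu + std_ratio * sigma
    return [p for p, d in zip(points, mean_knn) if d <= keep]

cloud = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.1, 0.0),
         (0.0, 0.0, 0.1), (5.0, 5.0, 5.0)]          # one gross mismatch
print(sor_filter(cloud))   # the (5, 5, 5) outlier is removed
```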
In embodiment 2 and the above modified embodiments, the following extensions may also be made: the fusion imaging module comprises a binocular shooting submodule and two reflectors, and forms a left-camera foreground image, a right-camera foreground image, a left-camera background image and a right-camera background image of the target object; S2.2 then also includes the following:
after distortion correction, perform feature extraction and matching on the target object in the images to obtain matching points between the left-camera and right-camera foreground images, or between the left-camera and right-camera background images.
In the embodiments above, the technical features may be combined arbitrarily. For brevity, not every possible combination is described; nevertheless, any combination of these technical features that involves no contradiction should be considered within the scope of this specification.
The same or similar reference numerals correspond to the same or similar parts;
the terms describing positional relationships in the drawings are for illustrative purposes only and are not to be construed as limiting the patent; for example, the calculation formula of the ion conductivity in the embodiment is not limited to the formula illustrated in the embodiment, and the calculation formula of the ion conductivity is different for different species. The foregoing is a definition of the embodiments and is not to be construed as limiting the present patent.
It should be understood that the above-described embodiments are merely examples given to illustrate the invention clearly and are not intended to limit its modes of implementation. Other variations and modifications will be apparent to those skilled in the art in light of the above description; it is neither necessary nor possible to enumerate every embodiment here. Any modification, equivalent replacement or improvement made within the spirit and principles of the invention falls within the protection scope of the claims.
Claims (9)
1. A three-dimensional reconstruction system based on front-back fusion imaging is used for forming a three-dimensional model of an article and is characterized by comprising a frame body, a fusion imaging module and a three-dimensional reconstruction module, wherein,
the frame body is provided with a bearing space, and the bearing space is used for placing articles;
the fusion imaging module shoots a foreground image of the article and a background image of the article, the fusion imaging module is arranged on the inner side of the frame body, and the fusion imaging module is connected with the frame body;
the three-dimensional reconstruction module combines the foreground image of the article and the background image of the article to obtain a three-dimensional model of the article, and the three-dimensional reconstruction module is electrically connected with the fusion imaging module.
2. The three-dimensional reconstruction system of claim 1, wherein said frame is made of an opaque material.
3. The three-dimensional reconstruction system according to claim 1 or 2, further comprising a light supplement module, wherein the light supplement module is used for performing light supplement illumination, the light supplement module is disposed inside the frame body, and the light supplement module is connected to the frame body.
4. The three-dimensional reconstruction system of claim 3, wherein the frame body is coated with a non-reflective material on an inner side thereof.
5. The three-dimensional reconstruction system of claim 1, 2 or 4, wherein said fused imaging module comprises a binocular shooting sub-module and two mirrors, wherein,
the binocular shooting submodule is arranged in front of the article and used for acquiring a foreground image of the article;
the two reflectors are arranged behind the article, and the reflectors are used for acquiring a background image of the article by combining the binocular shooting submodule through reflection.
6. The three-dimensional reconstruction system of claim 5, wherein said mirrors are arranged perpendicular to the horizontal plane and the included angle formed by the two mirrors is less than 180°.
7. A three-dimensional reconstruction method based on front-back fusion imaging, using the three-dimensional reconstruction system according to any one of claims 1 to 6, characterized by comprising the following steps:
s1: placing a calibration object in the bearing space, and adjusting the internal parameters and the external parameters of the fusion imaging module and the coordinate transformation matrix of the three-dimensional reconstruction module according to the calibration object;
s2: and taking out the calibration object, placing the target object at the position of the calibration object, and forming a three-dimensional model of the object by the adjusted three-dimensional reconstruction system.
8. The three-dimensional reconstruction method as claimed in claim 7, wherein said S2 comprises the following sub-steps:
s2.1: taking out the calibration object, placing the target object at the position of the calibration object, acquiring a foreground image and a background image of the target object by the fusion imaging module, inputting the foreground image and the background image into the three-dimensional reconstruction module for algorithm reconstruction, and acquiring three-dimensional reconstruction point cloud data of the foreground image of the target object and three-dimensional reconstruction point cloud data of the background image of the target object;
s2.2: in the reconstruction process, the input image is subjected to distortion correction through the adjusted internal parameters of the fusion imaging module, and errors caused by lens distortion are eliminated;
s2.3: coordinate transformation is carried out through the adjusted external parameters of the fusion imaging module, and the two-dimensional image coordinate is converted into a three-dimensional point cloud coordinate;
s2.4: filtering wrong matching points in the three-dimensional point cloud coordinate conversion process through a point cloud filtering algorithm;
s2.5: and packaging the spliced complete point cloud data to form a three-dimensional model.
9. The three-dimensional reconstruction method according to claim 8, wherein the fusion imaging module comprises a binocular shooting submodule and two reflectors, and the fusion imaging module forms a left camera foreground image, a right camera foreground image, a left camera background image and a right camera background image for a target object; the S2.2 also comprises the following contents:
and after the distortion correction, performing feature extraction and matching on the target object in the image to obtain a matching point for matching the left camera foreground image and the right camera foreground image or a matching point for matching the left camera background image and the right camera background image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010413235.9A CN111652967B (en) | 2020-05-15 | 2020-05-15 | Three-dimensional reconstruction system and method based on front-back fusion imaging |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111652967A true CN111652967A (en) | 2020-09-11 |
CN111652967B CN111652967B (en) | 2023-07-04 |
Family
ID=72347970
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010413235.9A Active CN111652967B (en) | 2020-05-15 | 2020-05-15 | Three-dimensional reconstruction system and method based on front-back fusion imaging |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111652967B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109887071A * | 2019-01-12 | 2019-06-14 | 天津大学 | 3D video image endoscope system and three-dimensional reconstruction method |
CN109993696A * | 2019-03-15 | 2019-07-09 | 广州愿托科技有限公司 | Correction and stitching method for surface panoramas of structures based on multi-view images |
WO2019179200A1 * | 2018-03-22 | 2019-09-26 | 深圳岚锋创视网络科技有限公司 | Three-dimensional reconstruction method for multiocular camera device, VR camera device, and panoramic camera device |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116612263A (en) * | 2023-07-20 | 2023-08-18 | 北京天图万境科技有限公司 | Method and device for sensing consistency dynamic fitting of latent vision synthesis |
CN116612263B (en) * | 2023-07-20 | 2023-10-10 | 北京天图万境科技有限公司 | Method and device for sensing consistency dynamic fitting of latent vision synthesis |
Also Published As
Publication number | Publication date |
---|---|
CN111652967B (en) | 2023-07-04 |
Similar Documents
Publication | Title |
---|---|
US11354840B2 (en) | Three dimensional acquisition and rendering |
TWI555378B (en) | Image calibration, stitching and depth reconstruction method for a panoramic fisheye camera, and system thereof |
EP3057317B1 (en) | Light-field camera |
CN110572630B (en) | Three-dimensional image shooting system, method, device, equipment and storage medium |
JP6090786B2 (en) | Background difference extraction apparatus and background difference extraction method |
CN111028155A (en) | Parallax image splicing method based on multiple pairs of binocular cameras |
CN112734863B (en) | Crossed binocular camera calibration method based on automatic positioning |
WO2019184185A1 (en) | Target image acquisition system and method |
WO2019184184A1 (en) | Target image acquisition system and method |
CN109325981B (en) | Geometric parameter calibration method for micro-lens-array light-field camera based on focused image points |
WO2019184183A1 (en) | Target image acquisition system and method |
CN111854636B (en) | Multi-camera array three-dimensional detection system and method |
CN112634379B (en) | Three-dimensional positioning measurement method based on a mixed-field-of-view light field |
KR20200129657A (en) | Method for obtaining a 3D model video sequence |
CN106772974B (en) | System and method for rapid three-dimensional refractive-index microscopic imaging |
Gul et al. | A high-resolution high dynamic range light-field dataset with an application to view synthesis and tone-mapping |
CN109302600B (en) | Three-dimensional scene shooting device |
CN111652967B (en) | Three-dimensional reconstruction system and method based on front-back fusion imaging |
CN111583117A (en) | Rapid panoramic stitching method and device suitable for complex space environments |
TWI504936B (en) | Image processing device |
US11758101B2 (en) | Restoration of the FOV of images for stereoscopic rendering |
CN109194947A (en) | Binocular camera module and mobile terminal |
CN108961378A (en) | Multi-view point cloud three-dimensional reconstruction method, device and equipment |
CN111562562B (en) | 3D imaging module calibration method based on TOF |
CN109089100B (en) | Method for synthesizing binocular stereoscopic video |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |