CN110880161B - Depth image stitching and fusion method and system for multiple hosts and multiple depth cameras - Google Patents
- Publication number: CN110880161B (application CN201911146215.3A)
- Authority: CN (China)
- Prior art keywords: depth, node, cameras, host, depth image
- Prior art date: 2019-11-21
- Legal status: Active
Classifications
- G06T3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images (G—Physics; G06—Computing; G06T—Image data processing or generation, in general; G06T3/40—Scaling of whole images or parts thereof)
- G06T2207/10028 — Range image; depth image; 3D point clouds (G06T2207/10—Image acquisition modality)
- G06T2207/20221 — Image fusion; image merging (G06T2207/20—Special algorithmic details; G06T2207/20212—Image combination)
Abstract
The invention discloses a depth image stitching and fusion method and system for multiple hosts with multiple depth cameras, in which a projection host is connected to multiple node hosts and each node host is connected to multiple depth cameras. The area of the picture to be captured and recognized is divided into multiple shooting areas, each corresponding to one depth camera. All depth cameras simultaneously collect depth data of their corresponding shooting areas. Each node host receives the depth data of its depth cameras and, based on each camera's depth environment background data and three-dimensional-space recognition range parameters, computes and linearly converts the data to obtain a first depth image that can be processed and displayed. Each node host then stitches and fuses the first depth images of its depth cameras to obtain a second depth image. The projection host receives the second depth images of the node hosts, stitches them in a designated order, identifies the interactors' positions, and displays the stitched third depth image.
Description
Technical Field
The invention relates to the technical field of image stitching and fusion, in particular to a depth image stitching and fusion method and system for multiple hosts with multiple depth cameras.
Background
OpenCV (Open Source Computer Vision Library) is a cross-platform computer vision library released under the BSD (open source) license. It is lightweight and efficient: it consists of a series of C functions and a small number of C++ classes, provides interfaces for Python, Ruby, MATLAB and other languages, and implements many general-purpose algorithms for image processing and computer vision.
A depth camera can detect the depth-of-field distance of the shooting space, that is, it can accurately obtain the three-dimensional space coordinates of each point in the image; from these three-dimensional coordinates the real scene can be reconstructed, enabling applications such as scene modeling. At present there are three main technical routes for depth cameras: structured light, time of flight (TOF), and binocular vision. Structured-light technology projects coded gratings or linear light sources onto the measured object and demodulates its three-dimensional information from the distortion they undergo. TOF technology emits modulated near-infrared light from a sensor; the light is reflected when it meets an object, and the distance to the object is computed from the time difference or phase difference between emission and reflection. Binocular-vision technology uses two ordinary cameras, like a pair of eyes, to compute the distance to the measured object from parallax.
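For reference, the distance recovery behind two of these ranging principles can be written compactly; these are standard textbook relations rather than formulas taken from the patent:

```latex
% Time of flight: distance d from the round-trip delay \Delta t of modulated light
% (c is the speed of light; with phase measurement, \Delta t = \Delta\varphi / (2\pi f_{mod}))
d = \frac{c\,\Delta t}{2}

% Binocular vision: depth z from focal length f, baseline b, and disparity \delta
z = \frac{f\,b}{\delta}
```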
Traditional depth-camera recognition applications use a single depth camera to identify the position of an interactor in a visual scene. For example, Chinese patent document CN107272910A discloses an implementation in which, through projection equipment and Kinect interaction equipment, an indoor rock-climbing user can experience the pleasure of human-machine interaction while climbing.
However, a single depth camera has a small field of view and a measurement distance generally limited to a few meters, so it cannot meet the requirement of identifying interactor positions in a large interactive scene.
Disclosure of Invention
To address these defects, the invention provides a depth image stitching and fusion method and system for multiple hosts with multiple depth cameras, which solves the problem of a small recognition field of view and meets the requirement of interactor position recognition in large scenes.
The invention discloses a depth image stitching and fusion method for multiple hosts with multiple depth cameras, in which a projection host is connected to multiple node hosts and each node host is connected to multiple depth cameras;
the depth image stitching and fusion method comprises the following steps:
Step 1, dividing the area of the picture to be captured and recognized into a plurality of shooting areas, each shooting area corresponding to one depth camera;
Step 2, all depth cameras simultaneously collecting depth data of their corresponding shooting areas;
Step 3, each node host receiving the depth data of its corresponding depth cameras and, based on each depth camera's depth environment background data and three-dimensional-space recognition range parameters, computing and linearly converting the data to obtain a first depth image that can be processed and displayed;
Step 4, each node host stitching and fusing the first depth images of its depth cameras to obtain a second depth image;
Step 5, the projection host receiving the second depth images of the node hosts, stitching them in a designated order, identifying the interactors' positions, and displaying the stitched third depth image.
As a further improvement of the present invention, the number of node hosts connected to the projection host is not greater than the maximum number of node hosts the projection host supports, and the number of depth cameras connected to each node host is not greater than the maximum number of depth cameras the node host supports.
As a further improvement of the invention, the number of node hosts is 2 and the number of depth cameras is 8; each node host is connected to 4 depth cameras, and there are 8 shooting areas.
as a further improvement of the present invention, the depth cameras are installed in parallel arrangement from left to right at a specified interval distance.
As a further improvement of the invention, the distance between the depth cameras is 3m.
As a further improvement of the present invention, step 3 includes:
the node host judges whether its corresponding depth cameras are acquiring depth data for the first time;
if yes, each depth camera acquires the depth environment background data Z_back of the interactive scene, and each depth camera is configured with the three-dimensional-space recognition range parameters X_min, X_max, Y_min, Y_max, Z_min and Z_max;
if not, the node host receives the depth data of its corresponding depth cameras and, based on each depth camera's depth environment background data and three-dimensional-space recognition range parameters, computes and linearly converts the data to obtain a first depth image that can be processed and displayed; wherein:
the depth image horizontal pixel coordinate formula is:

x' = (x - X_min) / (X_max - X_min) * width

the depth image vertical pixel coordinate formula is:

y' = (y - Y_min) / (Y_max - Y_min) * height

the depth image pixel brightness formula is:

z' = (z - Z_min) / (Z_max - Z_min) * 255

where x, y and z are three-dimensional coordinates referenced to the depth camera, x' and y' are the pixel coordinates of the depth image after the linear transformation, z' is the brightness value at the transformed coordinate position (x', y'), and width and height are the horizontal and vertical resolution of the depth image.
As a further improvement of the invention, the depth environment background data Z_back is a valid value within the depth camera's recognition range; if a depth value is smaller than the minimum recognition range or exceeds the maximum recognition range, the depth value at the corresponding three-dimensional coordinate is set to 0.
As a further improvement of the present invention, the projection host receives the second depth image sent by each node host using the NDI open transmission protocol.
As a further improvement of the present invention, in step 5 the projection host uses OpenCV library functions to stitch the second depth images again according to their arrangement positions and identifies the interactors' positions.
The invention also discloses a depth image stitching and fusion system for multiple hosts with multiple depth cameras, comprising a projection host, a plurality of node hosts and a plurality of depth cameras;
the projection host is connected to the node hosts, and each node host is connected to a plurality of depth cameras; the area of the picture to be captured and recognized is divided into a plurality of shooting areas, each corresponding to one depth camera;
the depth cameras are used to implement step 2 of the depth image stitching and fusion method;
the node hosts are used to implement steps 3 and 4 of the depth image stitching and fusion method;
the projection host is used to implement step 5 of the depth image stitching and fusion method.
Compared with the prior art, the invention has the following beneficial effects:
Through the multi-host multi-depth-camera image stitching algorithm, the invention solves the technical problem of stitching and fusing images from multiple depth cameras, realizes the stitching and fusion of the images they collect, and provides a feasible scheme for video image collection and interactor identification in large interactive scenes.
Drawings
FIG. 1 is a flow chart of the depth image stitching and fusion method for multiple hosts with multiple depth cameras according to an embodiment of the present invention;
FIG. 2 is a block diagram of the depth image stitching and fusion system for multiple hosts with multiple depth cameras according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are some, but not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art from these embodiments without inventive effort fall within the scope of the invention.
The invention provides a depth image stitching and fusion method and system for multiple hosts with multiple depth cameras, which solves the problem of a small recognition field of view and meets the requirement of interactor position recognition in a large scene. The overall conception is as follows: the projection host controls the projection of images onto the inner wall surface of the interactive scene; the depth cameras connected to each node host collect the positions of interactors in the interactive scene; each node host stitches and fuses the depth images of its connected depth cameras; the stitched and fused depth images of the node hosts are then transmitted to the projection host to be stitched again; finally, the projection host displays the stitched image of the whole interactive scene, with the interactors' positions, on the wall surface.
To achieve the above-described idea, the present invention is described in further detail below with reference to the accompanying drawings:
as shown in fig. 1, the invention provides a depth image stitching and fusing method of multiple hosts and multiple depth cameras, and equipment based on which the method is realized comprises a projection host, multiple node hosts and multiple depth cameras, wherein the projection host is connected with the multiple node hosts, and each node is connected with the multiple depth cameras.
Further, the number of node hosts connected to the projection host is not greater than the maximum number of node hosts the projection host supports, and the number of depth cameras connected to each node host is not greater than the maximum number of depth cameras the node host supports. The number of depth cameras per node host can be adjusted according to how many depth cameras the node host supports, and the number of node hosts can be adjusted according to the actual size of the interactive scene. The projection host controls a plurality of projectors, whose number can be calculated from the actual scene size and the size of each projector's picture. The depth cameras are installed facing the wall surface, arranged in parallel from left to right at a specified interval; the specific interval can be adjusted according to the horizontal viewing angle of the depth cameras.
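As an illustration of how the interval follows from the horizontal viewing angle, the sketch below computes the widest spacing at which adjacent camera views still meet at the wall; the 70° field of view and 2.2 m wall distance are assumed example values, not figures from the patent:

```python
import math

def max_camera_spacing(horizontal_fov_deg: float, distance_to_wall_m: float) -> float:
    """Widest left-right spacing between adjacent depth cameras such that
    their views still meet at the wall: each camera covers a wall strip
    of width 2 * d * tan(fov / 2)."""
    return 2 * distance_to_wall_m * math.tan(math.radians(horizontal_fov_deg) / 2)

# A camera with a 70-degree horizontal FOV mounted 2.2 m from the wall
# covers about 3.08 m of wall, consistent with a 3 m spacing.
print(round(max_camera_spacing(70.0, 2.2), 2))
```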
Furthermore, the number of node hosts is preferably 2 and the number of depth cameras is 8; each node host is connected to 4 depth cameras, and the distance between adjacent depth cameras is 3 meters.
The depth image stitching and fusion method comprises the following steps:
Step 1, dividing the area of the picture to be captured and recognized into a plurality of shooting areas, each shooting area corresponding to one depth camera; wherein:
the shooting areas can be calculated from the actual scene size and the viewing angle of the depth cameras; since the invention selects 8 depth cameras, the picture is correspondingly divided into 8 shooting areas.
Step 2, all depth cameras collect depth data of the corresponding shooting areas at the same time;
Step 3, each node host receives the depth data of its corresponding depth cameras and, based on each depth camera's depth environment background data and three-dimensional-space recognition range parameters, computes and linearly converts the data to obtain a first depth image that can be processed and displayed. This specifically comprises the following steps:
the node host judges whether its corresponding depth cameras are acquiring depth data for the first time;
if yes, each depth camera acquires the depth environment background data Z_back of the interactive scene, and each depth camera is configured with the three-dimensional-space recognition range parameters X_min, X_max, Y_min, Y_max, Z_min and Z_max;
if not, the node host receives the depth data of its corresponding depth cameras and, based on each depth camera's depth environment background data and three-dimensional-space recognition range parameters, computes and linearly converts the data to obtain the first depth image; wherein:
the depth image horizontal pixel coordinate formula is:

x' = (x - X_min) / (X_max - X_min) * width

the depth image vertical pixel coordinate formula is:

y' = (y - Y_min) / (Y_max - Y_min) * height

the depth image pixel brightness formula is:

z' = (z - Z_min) / (Z_max - Z_min) * 255

where x, y and z are three-dimensional coordinates referenced to the depth camera, x' and y' are the pixel coordinates of the depth image after the linear transformation, z' is the brightness value at the transformed coordinate position (x', y'), and width and height are the horizontal and vertical resolution of the depth image.
Further, the depth environment background data Z_back is a valid value within the depth camera's recognition range; if a depth value is smaller than the minimum recognition range or exceeds the maximum recognition range, the depth value at the corresponding three-dimensional coordinate is set to 0.
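A minimal sketch of this per-camera conversion, assuming the linear mappings given above, an 8-bit brightness scale, and a simple tolerance test against the background depth; the tolerance value and the per-point background pairing are illustrative assumptions, not details from the patent:

```python
import numpy as np

def depth_to_first_image(points, z_back, x_range, y_range, z_range,
                         width, height, back_tol=0.05):
    """Convert one depth camera's 3D points into a displayable first depth image.

    points:  iterable of (x, y, z) coordinates in the camera's frame
    z_back:  background depth value for each point (same order as points)
    *_range: (min, max) recognition range parameters for each axis
    """
    (x_min, x_max), (y_min, y_max), (z_min, z_max) = x_range, y_range, z_range
    image = np.zeros((height, width), dtype=np.uint8)  # out-of-range pixels stay 0
    for (x, y, z), zb in zip(points, z_back):
        # Discard points outside the recognition range or matching the background.
        if not (x_min <= x <= x_max and y_min <= y <= y_max and z_min <= z <= z_max):
            continue
        if abs(z - zb) < back_tol:
            continue
        xp = int((x - x_min) / (x_max - x_min) * (width - 1))
        yp = int((y - y_min) / (y_max - y_min) * (height - 1))
        image[yp, xp] = int((z - z_min) / (z_max - z_min) * 255)
    return image
```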
Furthermore, each depth camera collects the depth environment background data of the interactive scene while no interactor is present in the scene; collecting this background data makes it possible to effectively extract the pixels at the interactors' positions.
Further, the depth cameras connected to each node host are each configured with three-dimensional-space recognition range parameters. These parameters govern the areas where the viewing angles of two adjacent depth cameras overlap: by adjusting them, the complete image of an interactor is shown in the stitched and fused image, and the human figure remains continuously displayed as the interactor passes through the overlapping viewing areas. This configuration is performed only once, so the stitching and fusion system runs in real time.
Step 4, each node host stitches and fuses the first depth images of its depth cameras to obtain a second depth image.
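A minimal sketch of this per-node stitching step, assuming the cameras' first depth images are arranged left to right with a fixed-width pixel overlap; the overlap width and the per-pixel-maximum blending rule are assumptions, since the patent does not specify them:

```python
import numpy as np

def stitch_first_images(first_images, overlap_px=0):
    """Stitch one node host's first depth images, left to right, into a
    second depth image. Overlapping columns are fused with np.maximum so
    an interactor visible to either camera survives in the result."""
    second_image = first_images[0]
    for nxt in first_images[1:]:
        if overlap_px > 0:
            fused = np.maximum(second_image[:, -overlap_px:], nxt[:, :overlap_px])
            second_image = np.hstack(
                [second_image[:, :-overlap_px], fused, nxt[:, overlap_px:]])
        else:
            second_image = np.hstack([second_image, nxt])
    return second_image
```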
Step 5, the projection host receives the second depth images of the node hosts, stitches them in a designated order, identifies the interactors' positions, and displays the stitched third depth image; wherein:
the projection host receives the second depth image sent by each node host using the NDI (Network Device Interface) open transmission protocol, which features ultra-low latency, lossless transmission, bidirectional control, and so on;
the projection host uses OpenCV library functions to stitch the second depth images again according to their arrangement positions and identifies the interactors' positions.
Further, the projection host receives the stitched and fused images sent by all node hosts and stitches them again, thereby virtualizing all depth cameras into a single wide-view depth camera; it finally displays the stitched image and marks the interactors' positions, enhancing the real-time interaction experience. The invention can also be widely applied to various interactive display scenes.
As shown in FIG. 2, the present invention provides a depth image stitching and fusion system for multiple hosts with multiple depth cameras, comprising a projection host, a plurality of node hosts and a plurality of depth cameras;
the projection host is connected to the node hosts, and each node host is connected to a plurality of depth cameras; the area of the picture to be captured and recognized is divided into a plurality of shooting areas, each corresponding to one depth camera;
the depth cameras are used to implement step 2 of the depth image stitching and fusion method;
the node hosts are used to implement steps 3 and 4 of the depth image stitching and fusion method;
the projection host is used to implement step 5 of the depth image stitching and fusion method.
The above is only a preferred embodiment of the present invention and is not intended to limit it; those skilled in the art may make various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included in its scope of protection.
Claims (9)
1. A depth image stitching and fusion method for multiple hosts with multiple depth cameras, characterized in that a projection host is connected to multiple node hosts and each node host is connected to multiple depth cameras;
the depth image stitching and fusion method comprises the following steps:
step 1, dividing the area of the picture to be captured and recognized into a plurality of shooting areas, each shooting area corresponding to one depth camera;
step 2, all depth cameras simultaneously collecting depth data of their corresponding shooting areas;
step 3, each node host receiving the depth data of its corresponding depth cameras and, based on each depth camera's depth environment background data and three-dimensional-space recognition range parameters, computing and linearly converting the data to obtain a first depth image that can be processed and displayed;
which specifically comprises the following steps:
the node host judging whether its corresponding depth cameras are acquiring depth data for the first time;
if yes, each depth camera acquiring the depth environment background data Z_back of the interactive scene, and each depth camera being configured with the three-dimensional-space recognition range parameters X_min, X_max, Y_min, Y_max, Z_min and Z_max;
if not, the node host receiving the depth data of its corresponding depth cameras and, based on each depth camera's depth environment background data and three-dimensional-space recognition range parameters, computing and linearly converting the data to obtain the first depth image; wherein:
the depth image horizontal pixel coordinate formula is:

x' = (x - X_min) / (X_max - X_min) * width

the depth image vertical pixel coordinate formula is:

y' = (y - Y_min) / (Y_max - Y_min) * height

the depth image pixel brightness formula is:

z' = (z - Z_min) / (Z_max - Z_min) * 255

where x, y and z are three-dimensional coordinates taking the depth camera as a reference, x' and y' are the pixel coordinates of the depth image after the linear transformation, z' is the brightness value at the transformed coordinate position (x', y'), and width and height are the horizontal and vertical resolution of the depth image;
step 4, each node host stitching and fusing the first depth images of its depth cameras to obtain a second depth image;
and step 5, the projection host receiving the second depth images of the node hosts, stitching them in a designated order, identifying the interactors' positions, and displaying the stitched third depth image.
2. The depth image stitching and fusion method of claim 1, wherein the number of node hosts connected to the projection host is not greater than the maximum number of node hosts the projection host supports, and the number of depth cameras connected to each node host is not greater than the maximum number of depth cameras the node host supports.
3. The depth image stitching and fusion method of claim 2, wherein the number of node hosts is 2 and the number of depth cameras is 8; each node host is connected to 4 depth cameras, and there are 8 shooting areas.
4. The depth image stitching and fusion method of claim 1, wherein the depth cameras are installed in a parallel arrangement from left to right at a specified interval distance.
5. The depth image stitching and fusion method of claim 4, wherein the distance between adjacent depth cameras is 3 m.
6. The depth image stitching and fusion method of claim 1, wherein the depth environment background data Z_back is a valid value within the depth camera's recognition range; if a depth value is smaller than the minimum recognition range or exceeds the maximum recognition range, the depth value at the corresponding three-dimensional coordinate is set to 0.
7. The depth image stitching and fusion method of claim 1, wherein the projection host receives the second depth image sent by each node host using the NDI open transmission protocol.
8. The depth image stitching and fusion method of claim 1, wherein in step 5 the projection host uses OpenCV library functions to stitch the second depth images again according to their arrangement positions and identifies the interactors' positions.
9. A depth image stitching and fusion system for multiple hosts with multiple depth cameras, characterized by comprising: a projection host, a plurality of node hosts and a plurality of depth cameras;
the projection host is connected to the node hosts, and each node host is connected to a plurality of depth cameras; the area of the picture to be captured and recognized is divided into a plurality of shooting areas, each corresponding to one depth camera;
the depth cameras are configured to implement step 2 of the method of any one of claims 1-8;
the node hosts are configured to implement steps 3 and 4 of the method of any one of claims 1-8;
the projection host is configured to implement step 5 of the method of any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911146215.3A CN110880161B (en) | 2019-11-21 | 2019-11-21 | Depth image stitching and fusion method and system for multiple hosts and multiple depth cameras |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911146215.3A CN110880161B (en) | 2019-11-21 | 2019-11-21 | Depth image stitching and fusion method and system for multiple hosts and multiple depth cameras |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110880161A CN110880161A (en) | 2020-03-13 |
CN110880161B true CN110880161B (en) | 2023-05-09 |
Family
ID=69729232
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911146215.3A Active CN110880161B (en) | 2019-11-21 | 2019-11-21 | Depth image stitching and fusion method and system for multiple hosts and multiple depth cameras |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110880161B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112433640B (en) * | 2020-11-11 | 2022-06-24 | 大庆思特传媒科技有限公司 | Automatic calibration interactive projection system of multiple image sensors and implementation method thereof |
CN112433641B (en) * | 2020-11-11 | 2022-06-17 | 大庆思特传媒科技有限公司 | Implementation method for automatic calibration of desktop prop interaction system of multiple RGBD depth sensors |
CN114040097A (en) * | 2021-10-27 | 2022-02-11 | 苏州金螳螂文化发展股份有限公司 | Large-scene interactive action capturing system based on multi-channel image acquisition and fusion |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102682477A (en) * | 2012-05-16 | 2012-09-19 | 南京邮电大学 | Regular scene three-dimensional information extracting method based on structure prior |
CN103945210A (en) * | 2014-05-09 | 2014-07-23 | 长江水利委员会长江科学院 | Multi-camera photographing method for realizing shallow depth of field effect |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101488229B (en) * | 2009-02-24 | 2011-04-27 | 南京师范大学 | PCI three-dimensional analysis module oriented implantation type ture three-dimensional stereo rendering method |
CN103489214A (en) * | 2013-09-10 | 2014-01-01 | 北京邮电大学 | Virtual reality occlusion handling method, based on virtual model pretreatment, in augmented reality system |
CN104134188A (en) * | 2014-07-29 | 2014-11-05 | 湖南大学 | Three-dimensional visual information acquisition method based on two-dimensional and three-dimensional video camera fusion |
CN104519340B (en) * | 2014-12-30 | 2016-08-17 | 余俊池 | Panoramic video joining method based on many depth images transformation matrix |
CN104820964B (en) * | 2015-04-17 | 2018-03-27 | 深圳华侨城文化旅游科技股份有限公司 | Based on the image mosaic fusion method projected more and system |
CN105374019B (en) * | 2015-09-30 | 2018-06-19 | 华为技术有限公司 | A kind of more depth map fusion methods and device |
WO2017124116A1 (en) * | 2016-01-15 | 2017-07-20 | Bao Sheng | Searching, supplementing and navigating media |
CN106910243A (en) * | 2017-02-09 | 2017-06-30 | 景致三维(江苏)股份有限公司 | The method and device of automatic data collection and three-dimensional modeling based on turntable |
CN107423008A (en) * | 2017-03-10 | 2017-12-01 | 北京市中视典数字科技有限公司 | A kind of multi-cam picture fusion method and scene display device in real time |
CN106991645B (en) * | 2017-03-22 | 2018-09-28 | 腾讯科技(深圳)有限公司 | Image split-joint method and device |
CN107274336B (en) * | 2017-06-14 | 2019-07-12 | 电子科技大学 | A kind of Panorama Mosaic method for vehicle environment |
CN108174089B (en) * | 2017-12-27 | 2020-07-17 | 深圳普思英察科技有限公司 | Backing image splicing method and device based on binocular camera |
CN108470323B (en) * | 2018-03-13 | 2020-07-31 | 京东方科技集团股份有限公司 | Image splicing method, computer equipment and display device |
- 2019-11-21: CN application CN201911146215.3A filed; granted as CN110880161B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN110880161A (en) | 2020-03-13 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |