CN104504671A - Method for generating virtual-real fusion image for stereo display

Info

Publication number
CN104504671A
Authority
CN
China
Prior art keywords
depth map
current frame
scene
virtual
camera
Prior art date
Legal status
Granted
Application number
CN201410765678.9A
Other languages
Chinese (zh)
Other versions
CN104504671B (en)
Inventor
张骏飞
王梁昊
李东晓
张明
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201410765678.9A priority Critical patent/CN104504671B/en
Publication of CN104504671A publication Critical patent/CN104504671A/en
Application granted granted Critical
Publication of CN104504671B publication Critical patent/CN104504671B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method for generating virtual-real fusion images for stereo display. The method comprises the following steps: (1) acquiring a depth map and a color map of the real scene with a monocular RGB-D camera; (2) reconstructing a three-dimensional scene surface model and computing the camera parameters; (3) mapping to obtain the depth map and color map at the virtual viewpoint position; (4) completing the three-dimensional registration of a virtual object, rendering to obtain the virtual object's depth map and color map, and performing virtual-real fusion, thereby obtaining virtual-real fused content for stereo display. Because the method shoots with a monocular RGB-D camera, rebuilds the three-dimensional scene surface model frame by frame, and uses the model simultaneously for camera tracking and virtual viewpoint mapping, it achieves higher camera tracking accuracy and virtual-object registration accuracy, effectively handles the holes that arise in image-based virtual viewpoint rendering, realizes occlusion judgment and collision detection between the virtual and real scenes, and produces a lifelike stereoscopic effect on a stereo display device.

Description

Method for generating virtual-real fusion images for stereo display
Technical field
The invention belongs to the technical field of three-dimensional stereoscopic display, and specifically relates to a method for generating virtual-real fusion images for stereo display.
Background art
Stereoscopic display has become a major theme in the IT, communications and broadcasting fields, and a term the public knows well. As stereoscopic display technology matures, people's enthusiasm for and expectations of 3D content grow ever higher. With the continuous increase of market demand, more convenient and lower-cost ways of generating 3D content are being sought. Virtual-real fusion refers to technology that blends virtual information into the real world by means of computer techniques; it has wide application in fields such as medicine, entertainment and the military. The combination of stereoscopic display and virtual-real fusion is a general trend: on the one hand stereoscopic display provides virtual-real fusion with a better means of presentation, and on the other hand virtual-real fusion brings new methods to 3D content production.
For a monocular camera, the key to 3D content production is the virtual viewpoint generation algorithm. Virtual viewpoint generation algorithms fall into two broad classes: model-based virtual viewpoint rendering and image-based virtual viewpoint rendering.
Model-based virtual viewpoint rendering first reconstructs a three-dimensional model of the photographed scene using computer vision techniques, and then draws new virtual viewpoint images with computer three-dimensional graphics rendering. This approach obtains good virtual viewpoint rendering results in scenes of simple structure; for complex scenes, however, building an accurate three-dimensional model is difficult and the required computing resources and data volume are large, so it is not suited to virtual viewpoint rendering of natural real-world scenes.
Image-based virtual viewpoint rendering requires no accurate three-dimensional scene model: it maps new virtual viewpoint images directly from the real images captured by the camera, through a binocular or multi-camera model. Compared with model-based rendering it has many advantages, such as small input data volume, simple image acquisition and fast rendering, which make it well suited to rendering natural three-dimensional scenes. However, the hole-filling problem caused by occlusion in regions of large parallax is hard to solve.
Camera tracking is the most critical technique in virtual-real fusion; it means that the system must be able to compute the camera's position accurately in real time. In a virtual-real fusion system, whether camera tracking can keep placing the virtual object at the correct position directly determines the stability and realism of the fusion effect. Early virtual-real fusion systems performed camera tracking with marker-based methods, using special patterns as markers to estimate the camera's motion and attitude. Such methods are relatively simple, but because tracking depends on special markers, the usable scenes are limited.
In the 1990s, Smith, Cheeseman and others gave an estimation-theory-based solution to simultaneous localization and mapping (SLAM), building a sparse feature-point cloud from extracted image feature points and then tracking the camera with that point cloud. On this basis many camera tracking schemes based on monocular RGB cameras have emerged, such as the MonoSLAM system proposed by Davison and the PTAM (Parallel Tracking and Mapping) algorithm proposed by Klein et al., with which researchers can carry out marker-free three-dimensional registration flexibly, but the tracking accuracy is still not high enough.
Summary of the invention
In view of the above technical problems in the prior art, the invention provides a method for generating virtual-real fusion images for stereo display, which achieves higher camera tracking accuracy and virtual-object registration accuracy and copes better with the holes that appear in image-based virtual viewpoint rendering.
A method for generating virtual-real fusion images for stereo display comprises the following steps:
(1) use a monocular RGB-D (red-green-blue plus depth) camera to acquire the depth map D_r_k and color map C_r_k of the current frame of the scene at the monocular RGB-D camera viewpoint;
(2) determine the monocular RGB-D camera parameters of the current frame from the previous frame's three-dimensional scene reconstruction model, and update said model with the depth map D_r_k and color map C_r_k to obtain the current frame's three-dimensional scene reconstruction model;
(3) from the captured depth map D_r_k and color map C_r_k, with the current frame's three-dimensional scene reconstruction model as guidance, obtain by mapping through a binocular camera model the depth map D_v_k and color map C_v_k of the current frame of the scene at the virtual camera viewpoint;
(4) perform three-dimensional registration of the virtual object, and render to obtain the depth maps and color maps of the virtual object at the monocular RGB-D camera viewpoint and the virtual camera viewpoint; use the scene and virtual-object depth maps of the two viewpoints for occlusion judgment and collision detection, and fuse the scene and virtual-object color maps of the two viewpoints, obtaining the virtual-real fusion image for stereo display.
The detailed process of step (2) is as follows:
2.1 extract from the previous frame's three-dimensional scene reconstruction model the depth map D_r_k-1 of the previous frame of the scene at the monocular RGB-D camera viewpoint;
2.2 match the current-frame depth map D_r_k against the previous-frame depth map D_r_k-1, and compute the monocular RGB-D camera parameters of the current frame;
2.3 filter the outliers of the matching process to obtain the moving-object region, and use that region as a template to separate moving objects from the static background in the current frame's depth map D_r_k and color map C_r_k;
2.4 according to the current frame's monocular RGB-D camera parameters, use the depth and color information of the current frame's static scene to update the previous frame's three-dimensional scene reconstruction model with a volumetric integration algorithm, obtaining the current frame's three-dimensional scene reconstruction model.
Preferably, a raycasting algorithm is used to extract the depth map D_r_k-1 of the previous frame of the scene at the monocular RGB-D camera viewpoint from the previous frame's three-dimensional scene reconstruction model.
Preferably, the ICP (iterative closest point) algorithm is used to match the depth map D_r_k with the depth map D_r_k-1.
The outliers are the pixels of the current-frame depth map D_r_k that find no match in the previous-frame depth map D_r_k-1.
In step 2.3, from the outliers of the current-frame depth map D_r_k, those lying on object edges in the scene, those for which the monocular RGB-D camera could obtain no depth value, and those gathered in small scattered patches are filtered out, leaving the moving-object region.
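The three filtering rules above can be illustrated with a minimal sketch. It assumes boolean numpy masks for the ICP outliers and the scene's object edges, a raw depth map in which 0 marks pixels without a depth value, and a hypothetical blob-size threshold min_blob_px; none of these implementation details are fixed by the method itself.

    import numpy as np
    from scipy import ndimage

    def moving_object_mask(outliers, depth, edges, min_blob_px=50):
        """outliers: HxW bool, pixels of D_r_k unmatched in D_r_k-1.
        depth: HxW float, raw depth (0 where the sensor returned no value).
        edges: HxW bool, object-edge pixels of the scene."""
        mask = outliers & ~edges          # rule 1: drop outliers on object edges
        mask &= depth > 0                 # rule 2: drop outliers with no depth value
        labels, n = ndimage.label(mask)   # rule 3: drop small scattered patches
        sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
        keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_blob_px))
        return ndimage.binary_closing(keep)   # smoothed moving-object template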
The detailed process of step (3) is as follows:
3.1 substitute the current frame's monocular RGB-D camera parameters into the binocular camera model to compute the current frame's virtual camera parameters, and with these virtual camera parameters extract from the current frame's three-dimensional scene reconstruction model the depth map D_v1_k and color map C_v1_k of the current frame of the scene at the virtual camera viewpoint;
3.2 according to the binocular camera model, map the current-frame depth map D_r_k to obtain the depth map D_v2_k of the current frame of the scene at the virtual camera viewpoint;
3.3 fill the resampling holes in the mapped current-frame depth map D_v2_k;
3.4 according to the filled depth map D_v2_k and the binocular camera model, map the current-frame color map C_r_k to obtain the color map C_v2_k of the current frame of the scene at the virtual camera viewpoint;
3.5 use the extracted depth map D_v1_k and color map C_v1_k to fill the occlusion holes in the mapped depth map D_v2_k and color map C_v2_k, finally obtaining the depth map D_v_k and color map C_v_k of the current frame of the scene at the virtual camera viewpoint.
Preferably, a raycasting algorithm is used to extract the depth map D_v1_k and color map C_v1_k of the current frame of the scene at the virtual camera viewpoint from the current frame's three-dimensional scene reconstruction model.
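The binocular camera model of step 3.1 can be sketched as follows: the virtual camera keeps the real camera's intrinsics and orientation and is shifted sideways by a horizontal baseline. The 4x4 camera-to-world pose layout and the 0.06 m default baseline are illustrative assumptions, not values prescribed by the method.

    import numpy as np

    def virtual_camera_pose(T_real, baseline=0.06):
        """T_real: 4x4 camera-to-world pose of the real RGB-D camera.
        Returns the pose of a parallel virtual camera shifted 'baseline'
        metres along the real camera's own x axis."""
        offset = np.eye(4)
        offset[0, 3] = baseline   # translate in camera coordinates
        return T_real @ offset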
The invention shoots with a monocular RGB-D camera, reconstructs the three-dimensional scene model frame by frame, and uses the model simultaneously for camera tracking and virtual viewpoint mapping. It thereby achieves higher camera tracking accuracy and virtual-object registration accuracy, copes better with the holes that appear in image-based virtual viewpoint mapping, realizes occlusion judgment and collision detection between the virtual and real scenes, and obtains a lifelike stereoscopic effect on a 3D stereoscopic display device.
Brief description of the drawings
Fig. 1 is a schematic diagram of the processing flow of the camera tracking module of the invention.
Fig. 2 is a schematic diagram of the processing flow of the virtual viewpoint rendering module of the invention.
Embodiment
To describe the invention more concretely, the technical scheme of the invention is described in detail below with reference to the drawings and a specific embodiment.
The method of the invention for generating virtual-real fusion images for stereo display comprises the following steps:
(1) use a monocular RGB-D camera to acquire the scene's depth information and color information.
(2) use the camera tracking module to determine each frame's camera parameters from the three-dimensional scene reconstruction model, while merging the scene's depth and color information into the model frame by frame.
2.1 use the raycasting algorithm to extract the previous frame's depth map from the three-dimensional scene reconstruction model, according to the saved camera pose of the previous frame;
2.2 preprocess the current frame's depth map, match the previous and current depth maps with the ICP algorithm, compute the camera motion from the previous frame to the current frame, and from it the camera parameters of the current frame (a code sketch follows this list);
2.3 filter the outliers of the matching process to obtain the moving-object region, and use that region as a template to separate moving objects from the static background in the current-frame depth map D_r_k and color map C_r_k;
2.4 use the volumetric integration algorithm to merge the depth and color information of the current frame's static scene into the three-dimensional scene reconstruction model according to the current frame's camera parameters. The model is a cube in three-dimensional space, composed of many small cubes of uniform size, each storing the weighted TSDF value and weighted color value of the spatial position it represents.
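As referenced in step 2.2, the preprocessing and matching rest on converting a denoised depth map into a camera-space point cloud through the calibrated intrinsics fx, fy, cx and cy. A minimal sketch under those assumptions (the bilateral-filter parameters are illustrative, not values from the patent):

    import numpy as np
    import cv2

    def depth_to_points(depth_m, fx, fy, cx, cy):
        """Denoise a metric depth map and back-project it to an HxWx3 vertex map."""
        d = cv2.bilateralFilter(depth_m.astype(np.float32), 5, 0.03, 4.0)
        h, w = d.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * d / fx     # pinhole model: X = (u - cx) * Z / fx
        y = (v - cy) * d / fy
        return np.stack([x, y, d], axis=-1)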
As shown in Fig. 1, the camera tracking module adopts a model-based camera tracking method: the three-dimensional scene surface model reconstructed frame by frame serves as the matching target, and moving objects are separated out of the matching outliers, which improves the tracker's resistance to interference. This embodiment uses a monocular RGB-D camera as the capture device, and camera tracking matches depth information only. The camera must first be calibrated to obtain its intrinsic parameters. After each frame's depth information is acquired, the depth map is denoised; this embodiment filters it with a bilateral filter. Using the camera intrinsics, the depth map can be converted into a three-dimensional point cloud in camera coordinates. Assuming the camera motion is gentle, the ICP algorithm can rapidly match the point clouds of the current frame and the previous frame to obtain the relative camera motion between the two frames, from which the current frame's camera parameters are computed from those of the previous frame. The point-to-plane distance energy adopted by the ICP algorithm is as follows:
$$E(T_{g,k}) = \sum_{u \in U} \left\| \left( T_{g,k} V_k(u) - V_{k-1}(u) \right)^{\mathrm{T}} N_{k-1}(u) \right\|^2$$
where V_k and V_{k-1} are the vertex maps of the current-frame and previous-frame three-dimensional point clouds respectively, N_{k-1} is the normal map of the previous-frame point cloud, and T_{g,k} is the camera motion matrix between the two frames.
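For concreteness, this energy can be evaluated numerically as below; V_k and V_km1 are HxWx3 vertex maps, N_km1 is the previous frame's normal map, and the boolean correspondence mask valid is an assumption, since the selection of correspondences is not spelled out here.

    import numpy as np

    def point_to_plane_energy(T, V_k, V_km1, N_km1, valid):
        """Evaluate E(T_g,k) over the pixels marked as correspondences."""
        p = V_k[valid] @ T[:3, :3].T + T[:3, 3]          # apply rigid motion T_g,k
        r = np.einsum('ij,ij->i', p - V_km1[valid], N_km1[valid])
        return np.sum(r ** 2)                            # point-to-plane energy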
In addition, morphological operations on the outliers of the matching process can filter out the location of moving objects, giving a template with which a scene depth map free of moving interference is obtained. With the computed camera parameters, the current frame's depth map can be mapped back into space, i.e. the spatial positions of the scene surface points photographed in the current frame, and this depth information is merged into the three-dimensional scene surface model. The model is a cube in three-dimensional space composed of many small cubes of uniform size; each small cube stores the weighted TSDF value and the weighted color value of the spatial position it represents. The TSDF value represents the distance from that spatial position to the nearest object surface; the value stored in each small cube is obtained by weighting the TSDF values of successive frames, as follows:
$$d_k = \frac{w_{k-1}\, d_{k-1} + w'_k\, d'_k}{w_{k-1} + w'_k}$$
$$w_k = w_{k-1} + w'_k$$
where d_{k-1}, d_k and d'_k are the previous frame's weighted TSDF value, the current frame's weighted TSDF value and the current frame's (unweighted) TSDF value respectively; w_{k-1} and w_k are the previous-frame and current-frame weights; and w'_k is the per-frame weight increment, set to the constant 1 in this method. Color values are weighted in the same way as TSDF values.
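The weighted update above transcribes directly into code for a voxel grid. The array layout and the valid mask of voxels observed in the current frame are assumptions; w'_k is fixed to 1 as in the text, and color values would be averaged with the identical formula.

    import numpy as np

    def integrate_tsdf(tsdf, weight, tsdf_new, valid, w_inc=1.0):
        """tsdf, weight: grids holding d_{k-1}, w_{k-1}; tsdf_new: d'_k;
        valid: voxels observed this frame; w_inc: per-frame weight w'_k."""
        w = weight[valid]
        tsdf[valid] = (w * tsdf[valid] + w_inc * tsdf_new[valid]) / (w + w_inc)
        weight[valid] = w + w_inc        # w_k = w_{k-1} + w'_k
        return tsdf, weight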
(3) with the virtual viewpoint rendering module, use each frame's captured depth map and color map and, with the three-dimensional scene reconstruction model as guidance, obtain the depth map and color map at the virtual viewpoint position by mapping through the binocular camera model.
3.1 according to the camera parameters of the binocular camera model, use the raycasting algorithm to extract from the three-dimensional scene reconstruction model the depth map at the capture position together with the depth map and color map at the virtual viewpoint position;
3.2 use the extracted capture-position depth map to fill the current frame's depth map, then, according to the binocular camera model, map the filled current-frame depth map to obtain the depth map at the virtual viewpoint position;
3.3 fill the resampling holes in the mapped virtual-viewpoint depth map;
3.4 use the filled virtual-viewpoint depth map and the camera parameters of the binocular model to map the current frame's color map into the color map at the virtual viewpoint position;
3.5 use the depth map and color map extracted from the model to fill the remaining holes in the virtual-viewpoint depth map and color map, obtaining the final depth map and color map at the virtual viewpoint position.
As shown in Fig. 2, the virtual viewpoint rendering module combines image-based and model-based virtual viewpoint rendering. The module comprises an occlusion hole-filling unit, a depth-map mapping unit, a resampling hole-filling unit and a color-map inverse-mapping unit. After a frame's depth map is captured, the raycasting algorithm first extracts the current frame's depth map from the three-dimensional scene reconstruction model as an auxiliary depth map, and the hole-filling unit fills the holes of the current-frame depth map with this auxiliary map as reference. The depth-map mapping unit then maps the current-frame depth map to the virtual viewpoint position through the binocular camera model, yielding the virtual-viewpoint depth map. Next, the resampling hole-filling unit fills the holes produced in the virtual-viewpoint depth map by resampling. The current frame's color map is then passed through the color-map inverse-mapping unit, using the inverse mapping determined by the filled virtual-viewpoint depth map, to obtain the virtual-viewpoint color map. Finally, the occlusion hole-filling unit extracts the depth map and color map at the virtual camera position from the model as references and fills the occlusion holes of the virtual-viewpoint depth and color maps, at which point hole-free depth and color maps of the virtual viewpoint position are obtained.
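The core of the depth-map mapping unit, forward warping under a rectified binocular model, can be sketched as below. The disparity relation d = fx*b/Z, the z-buffer resolution of overlaps and the convention that unfilled pixels are left as 0-valued holes for the subsequent filling units are standard assumptions rather than details quoted from the patent.

    import numpy as np

    def warp_depth_to_virtual(D_r, fx, baseline):
        """Forward-warp the real-view depth map into the parallel virtual view."""
        h, w = D_r.shape
        D_v = np.full_like(D_r, np.inf)
        v, u = np.nonzero(D_r > 0)
        z = D_r[v, u]
        u2 = np.round(u - fx * baseline / z).astype(int)   # disparity shift
        ok = (u2 >= 0) & (u2 < w)
        order = np.argsort(-z[ok])        # far-to-near, so the nearest point wins
        D_v[v[ok][order], u2[ok][order]] = z[ok][order]
        D_v[np.isinf(D_v)] = 0            # unfilled pixels are holes for later units
        return D_v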
(4) use the virtual-real fusion module to perform the three-dimensional registration of the virtual object, and render to obtain the virtual object's depth maps and color maps at the capture position and the virtual camera position. Fuse the virtual and real images, using the depth information for occlusion judgment and collision detection, and obtain the virtual-real fusion content for stereo display.
The virtual-real fusion module merges the depth maps and color maps of the virtual object and the real scene for the two viewpoints, namely the real capture position and the virtual camera position. The module comprises a three-dimensional registration unit, an occlusion judgment unit, a collision detection unit and a virtual-object control unit, and every unit operates on both viewpoints simultaneously. The virtual-object control unit monitors keyboard input, so that the virtual object can be scaled, moved, rotated and otherwise manipulated in the world coordinate system. The three-dimensional registration unit computes, from the virtual object's position in world coordinates and the camera parameters of the two viewpoints, the color map and depth map that the virtual object presents in the two viewpoints' projection planes. The occlusion judgment unit compares the depth values of the virtual object and the real scene at each position to decide whether to show the virtual object or the real scene, producing a true occlusion effect. The method renders depth maps of both the front and the back of the virtual object along the camera's viewing direction; the collision detection unit decides whether a collision has occurred from the relation between the virtual object's front/back depth maps and the real-scene depth map, and marks collision positions in red. Finally, according to the practical application, the virtual-real fusion result can be displayed in various stereo formats.
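A sketch of the occlusion judgment and collision test for a single viewpoint follows. It assumes aligned HxW depth maps in which 0 means no data, HxWx3 color images, and that the object's front and back depth maps come from the registration unit; the red marking of collision positions follows the text, everything else is illustrative.

    import numpy as np

    def fuse_viewpoint(C_scene, D_scene, C_obj, D_front, D_back):
        """Composite the virtual object into one viewpoint with occlusion
        judgment and collision marking."""
        out = C_scene.copy()
        obj = D_front > 0                                     # object footprint
        show = obj & ((D_scene == 0) | (D_front < D_scene))   # object in front
        out[show] = C_obj[show]                               # virtual occludes real
        hit = obj & (D_scene > D_front) & (D_scene < D_back)  # real surface inside
        out[hit] = (255, 0, 0)                                # mark collision in red
        return out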

Claims (8)

1. A method for generating virtual-real fusion images for stereo display, comprising the following steps:
(1) using a monocular RGB-D camera, acquiring the depth map D_r_k and color map C_r_k of the current frame of the scene at the monocular RGB-D camera viewpoint;
(2) determining the monocular RGB-D camera parameters of the current frame from the previous frame's three-dimensional scene reconstruction model, and updating said model with the depth map D_r_k and color map C_r_k to obtain the current frame's three-dimensional scene reconstruction model;
(3) from the captured depth map D_r_k and color map C_r_k, with the current frame's three-dimensional scene reconstruction model as guidance, obtaining by mapping through a binocular camera model the depth map D_v_k and color map C_v_k of the current frame of the scene at the virtual camera viewpoint;
(4) performing three-dimensional registration of the virtual object, and rendering to obtain the depth maps and color maps of the virtual object at the monocular RGB-D camera viewpoint and the virtual camera viewpoint; using the scene and virtual-object depth maps of the two viewpoints for occlusion judgment and collision detection, and fusing the scene and virtual-object color maps of the two viewpoints, to obtain the virtual-real fusion image for stereo display.
2. The virtual-real fusion image generation method according to claim 1, characterized in that the detailed process of step (2) is as follows:
2.1 extracting from the previous frame's three-dimensional scene reconstruction model the depth map D_r_k-1 of the previous frame of the scene at the monocular RGB-D camera viewpoint;
2.2 matching the current-frame depth map D_r_k against the previous-frame depth map D_r_k-1, and computing the monocular RGB-D camera parameters of the current frame;
2.3 filtering the outliers of the matching process to obtain the moving-object region, and using that region as a template to separate moving objects from the static background in the current frame's depth map D_r_k and color map C_r_k;
2.4 according to the current frame's monocular RGB-D camera parameters, using the depth and color information of the current frame's static scene to update the previous frame's three-dimensional scene reconstruction model with a volumetric integration algorithm, obtaining the current frame's three-dimensional scene reconstruction model.
3. The virtual-real fusion image generation method according to claim 2, characterized in that a raycasting algorithm is used to extract the depth map D_r_k-1 of the previous frame of the scene at the monocular RGB-D camera viewpoint from the previous frame's three-dimensional scene reconstruction model.
4. The virtual-real fusion image generation method according to claim 2, characterized in that the ICP algorithm is used to match the depth map D_r_k with the depth map D_r_k-1.
5. The virtual-real fusion image generation method according to claim 2, characterized in that the outliers are the pixels of the current-frame depth map D_r_k that find no match in the previous-frame depth map D_r_k-1.
6. The virtual-real fusion image generation method according to claim 2, characterized in that in step 2.3, the outliers of the current-frame depth map D_r_k that lie on object edges in the scene, those for which the monocular RGB-D camera could obtain no depth value, and those gathered in small scattered patches are filtered out, yielding the moving-object region.
7. The virtual-real fusion image generation method according to claim 1, characterized in that the detailed process of step (3) is as follows:
3.1 substituting the current frame's monocular RGB-D camera parameters into the binocular camera model to compute the current frame's virtual camera parameters, and with these virtual camera parameters extracting from the current frame's three-dimensional scene reconstruction model the depth map D_v1_k and color map C_v1_k of the current frame of the scene at the virtual camera viewpoint;
3.2 according to the binocular camera model, mapping the current-frame depth map D_r_k to obtain the depth map D_v2_k of the current frame of the scene at the virtual camera viewpoint;
3.3 filling the resampling holes in the mapped current-frame depth map D_v2_k;
3.4 according to the filled depth map D_v2_k and the binocular camera model, mapping the current-frame color map C_r_k to obtain the color map C_v2_k of the current frame of the scene at the virtual camera viewpoint;
3.5 using the extracted depth map D_v1_k and color map C_v1_k to fill the occlusion holes in the mapped depth map D_v2_k and color map C_v2_k, finally obtaining the depth map D_v_k and color map C_v_k of the current frame of the scene at the virtual camera viewpoint.
8. The virtual-real fusion image generation method according to claim 7, characterized in that a raycasting algorithm is used to extract the depth map D_v1_k and color map C_v1_k of the current frame of the scene at the virtual camera viewpoint from the current frame's three-dimensional scene reconstruction model.
CN201410765678.9A 2014-12-12 2014-12-12 Method for generating virtual-real fusion image for stereo display Active CN104504671B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410765678.9A CN104504671B (en) 2014-12-12 2014-12-12 Method for generating virtual-real fusion image for stereo display

Publications (2)

Publication Number Publication Date
CN104504671A (en) 2015-04-08
CN104504671B CN104504671B (en) 2017-04-19

Family

ID=52946065

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410765678.9A Active CN104504671B (en) 2014-12-12 2014-12-12 Method for generating virtual-real fusion image for stereo display

Country Status (1)

Country Link
CN (1) CN104504671B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101771893A (en) * 2010-01-05 2010-07-07 浙江大学 Video frequency sequence background modeling based virtual viewpoint rendering method
CN101902657A (en) * 2010-07-16 2010-12-01 浙江大学 Method for generating virtual multi-viewpoint images based on depth image layering
CN102568026A (en) * 2011-12-12 2012-07-11 浙江大学 Three-dimensional enhancing realizing method for multi-viewpoint free stereo display
CN103489214A (en) * 2013-09-10 2014-01-01 北京邮电大学 Virtual reality occlusion handling method, based on virtual model pretreatment, in augmented reality system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
IZADI S et al.: "KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera", Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology *
LI Jianing et al.: "Augmented reality interaction *** based on volumetric segmentation and reconstruction" (基于体分割重建的增强现实交互***), Optoelectronic Technology (《光电子技术》) *
HUANG Hao et al.: "Kinect-based virtual viewpoint generation technology" (基于Kinect的虚拟视点生成技术), Information Technology (《信息技术》) *

Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106558076B (en) * 2015-09-16 2019-06-18 富士通株式会社 The method and apparatus of three-dimensional reconstruction object
CN106558076A (en) * 2015-09-16 2017-04-05 富士通株式会社 The method and apparatus of three-dimensional reconstruction object
ES2610797A1 (en) * 2015-10-29 2017-05-03 Mikonos Xviii Sl Procedure for virtual showcase in situ. (Machine-translation by Google Translate, not legally binding)
CN107071379A (en) * 2015-11-02 2017-08-18 联发科技股份有限公司 The enhanced method of display delay and mancarried device
US11574453B2 (en) 2015-12-15 2023-02-07 Tahoe Research, Ltd. Generation of synthetic 3-dimensional object images for recognition systems
CN108292358A (en) * 2015-12-15 2018-07-17 英特尔公司 The generation of the synthesis three-dimensional object image of system for identification
US12014471B2 (en) 2015-12-15 2024-06-18 Tahoe Research, Ltd. Generation of synthetic 3-dimensional object images for recognition systems
US10573075B2 (en) 2016-05-19 2020-02-25 Boe Technology Group Co., Ltd. Rendering method in AR scene, processor and AR glasses
CN106296789A (en) * 2016-08-05 2017-01-04 深圳迪乐普数码科技有限公司 A kind of it is virtually implanted method and the terminal that object shuttles back and forth in outdoor scene
CN106296789B (en) * 2016-08-05 2019-08-06 深圳迪乐普数码科技有限公司 It is a kind of to be virtually implanted the method and terminal that object shuttles in outdoor scene
CN106161907A (en) * 2016-08-31 2016-11-23 北京的卢深视科技有限公司 Obtain the security protection network cameras of scene three-dimensional information
WO2018119889A1 (en) * 2016-12-29 2018-07-05 深圳前海达闼云端智能科技有限公司 Three-dimensional scene positioning method and device
CN110663256B (en) * 2017-05-31 2021-12-14 维里逊专利及许可公司 Method and system for rendering frames of a virtual scene from different vantage points based on a virtual entity description frame of the virtual scene
CN110663256A (en) * 2017-05-31 2020-01-07 维里逊专利及许可公司 Method and system for rendering frames of a virtual scene from different vantage points based on a virtual entity description frame of the virtual scene
CN111768496A (en) * 2017-08-24 2020-10-13 Oppo广东移动通信有限公司 Image processing method, image processing device, server and computer-readable storage medium
CN111768496B (en) * 2017-08-24 2024-02-09 Oppo广东移动通信有限公司 Image processing method, device, server and computer readable storage medium
US11138740B2 (en) 2017-09-11 2021-10-05 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing methods, image processing apparatuses, and computer-readable storage medium
CN107742300A (en) * 2017-09-11 2018-02-27 广东欧珀移动通信有限公司 Image processing method, device, electronic installation and computer-readable recording medium
CN108021896A (en) * 2017-12-08 2018-05-11 北京百度网讯科技有限公司 Image pickup method, device, equipment and computer-readable medium based on augmented reality
CN108399634B (en) * 2018-01-16 2020-10-16 达闼科技(北京)有限公司 RGB-D data generation method and device based on cloud computing
CN108399634A (en) * 2018-01-16 2018-08-14 达闼科技(北京)有限公司 The RGB-D data creation methods and device calculated based on high in the clouds
CN108416846A (en) * 2018-03-16 2018-08-17 北京邮电大学 It is a kind of without the three-dimensional registration algorithm of mark
CN108520204A (en) * 2018-03-16 2018-09-11 西北大学 A kind of face identification method
CN108573524A (en) * 2018-04-12 2018-09-25 东南大学 Interactive real-time, freedom stereo display method based on rendering pipeline
CN108573524B (en) * 2018-04-12 2022-02-08 东南大学 Interactive real-time free stereo display method based on rendering pipeline
JP7403528B2 (en) 2018-08-23 2023-12-22 株式会社ソニー・インタラクティブエンタテインメント Method and system for reconstructing color and depth information of a scene
JP2021535466A (en) * 2018-08-23 2021-12-16 株式会社ソニー・インタラクティブエンタテインメント Methods and systems for reconstructing scene color and depth information
CN109726760B (en) * 2018-12-29 2021-04-16 驭势科技(北京)有限公司 Method and device for training picture synthesis model
CN109726760A (en) * 2018-12-29 2019-05-07 驭势科技(北京)有限公司 The method and device of training picture synthetic model
CN110033510A (en) * 2019-03-25 2019-07-19 阿里巴巴集团控股有限公司 Color mapping relationship is established for correcting the method and device of rendering color of image
CN110033510B (en) * 2019-03-25 2023-01-31 创新先进技术有限公司 Method and device for establishing color mapping relation for correcting rendered image color
CN111754558A (en) * 2019-03-26 2020-10-09 舜宇光学(浙江)研究院有限公司 Matching method for RGB-D camera system and binocular imaging system, system and computing system thereof
CN111754558B (en) * 2019-03-26 2023-09-26 舜宇光学(浙江)研究院有限公司 Matching method for RGB-D camera system and binocular imaging system and related system thereof
CN110390719A (en) * 2019-05-07 2019-10-29 香港光云科技有限公司 Based on flight time point cloud reconstructing apparatus
CN110928414A (en) * 2019-11-22 2020-03-27 上海交通大学 Three-dimensional virtual-real fusion experimental system
CN111223192A (en) * 2020-01-09 2020-06-02 北京华捷艾米科技有限公司 Image processing method and application method, device and equipment thereof
CN111223192B (en) * 2020-01-09 2023-10-03 北京华捷艾米科技有限公司 Image processing method, application method, device and equipment thereof
CN112291549B (en) * 2020-09-23 2021-07-09 广西壮族自治区地图院 Method for acquiring stereoscopic sequence frame images of raster topographic map based on DEM
CN112291549A (en) * 2020-09-23 2021-01-29 广西壮族自治区地图院 Method for acquiring stereoscopic sequence frame images of raster topographic map based on DEM
CN111899293B (en) * 2020-09-29 2021-01-08 成都索贝数码科技股份有限公司 Virtual and real shielding processing method in AR application
CN112637582A (en) * 2020-12-09 2021-04-09 吉林大学 Three-dimensional fuzzy surface synthesis method for monocular video virtual view driven by fuzzy edge
CN112637582B (en) * 2020-12-09 2021-10-08 吉林大学 Three-dimensional fuzzy surface synthesis method for monocular video virtual view driven by fuzzy edge
CN112509151B (en) * 2020-12-11 2021-08-24 华中师范大学 Method for generating sense of reality of virtual object in teaching scene
CN112509151A (en) * 2020-12-11 2021-03-16 华中师范大学 Method for generating sense of reality of virtual object in teaching scene
CN113593008A (en) * 2021-07-06 2021-11-02 四川大学 True 3D image significant reconstruction method under complex scene
CN115578499A (en) * 2022-11-29 2023-01-06 北京天图万境科技有限公司 Fitting reconstruction method and device for asymmetric color misregistration consistency
CN115578499B (en) * 2022-11-29 2023-04-07 北京天图万境科技有限公司 Fitting reconstruction method and device for asymmetric color misregistration consistency

Also Published As

Publication number Publication date
CN104504671B (en) 2017-04-19


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant