CN117853694A - Virtual-real combined rendering method of continuous depth - Google Patents

Virtual-real combined rendering method of continuous depth

Info

Publication number
CN117853694A
Authority
CN
China
Prior art keywords
target object
augmented reality
model
depth
real scene
Prior art date
Legal status
Pending
Application number
CN202410258758.9A
Other languages
Chinese (zh)
Inventor
张博
李宁驰
刘琴
赵刚
张品祥
游靳步
何金泓
宁晨宇
陈沫含
Current Assignee
Henan Baihe Special Optical Research Institute Co ltd
Original Assignee
Henan Baihe Special Optical Research Institute Co ltd
Priority date
Filing date
Publication date
Application filed by Henan Baihe Special Optical Research Institute Co ltd
Priority to CN202410258758.9A
Publication of CN117853694A
Legal status: Pending


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a virtual-real combined rendering method of continuous depth, which comprises: acquiring a real scene image through an acquisition module; obtaining the spatial position information of a target object in the real scene image through a target object identification module; adjusting the camera rendering parameters corresponding to the spatial position of the target object and the rendering parameters of the augmented reality content model to obtain the size and position of the target object in three-dimensional space, and adjusting the size of the augmented reality three-dimensional model according to the size and spatial position of the target object; adjusting the shooting position and angle of the virtual camera according to the spatial position coordinates of the target object so that they match the viewing position and viewing angle; and establishing a depth mask in the virtual scene and, in combination with the depth mask, restoring the occlusion and perspective relationships between the augmented reality content model and the real scene at continuous depth, so that the augmented reality model fuses with the real scene. The invention restores the relative positional relationship between the augmented reality content and the real scene at continuous depth and enhances the realism of the rendered content.

Description

Virtual-real combined rendering method of continuous depth
Technical Field
The invention relates to the technical field of augmented reality, in particular to a virtual-real combined rendering method of continuous depth.
Background
With the continuous progress of technology, the application range of augmented reality technology is rapidly expanding. Augmented reality technology fuses real-world and virtual-world information, presenting virtual visual information in a real scene through a particular display device. Rendering of virtual-real combined material is at the core of augmented reality and virtual reality technology; a vivid and natural rendering method creates a highly realistic, deeply immersive experience for users in a virtual-real combined scene, promoting the development of augmented reality display technology in different fields.
Existing virtual-real combined rendering methods for augmented reality mainly render two-dimensional or binocular images; methods such as CN114549718A lack representation of, and interaction at, continuous depth, so the realism of the augmented reality display content is low.
Therefore, a continuous-depth virtual-real combined rendering method is needed to improve the realism of augmented reality display content.
Disclosure of Invention
The invention provides a virtual-real combined rendering method of continuous depth, which addresses the lack of representation and interaction at continuous depth in existing virtual-real combined rendering methods, restores the relative positional relationship between the augmented reality content and the real scene at continuous depth, and enhances the realism of the rendered content.
In order to achieve the above object, the virtual-real combined rendering method of continuous depth uses a system comprising an acquisition module, a target object identification module, a rendering parameter calculation module and an augmented reality model rendering module, and comprises the following steps:
step 1: acquiring a real scene image through an acquisition module, and acquiring spatial position information of a target object in the real scene image through a target object identification module, wherein the spatial position information comprises coordinate distribution information and depth distribution information;
step 2: adjusting a camera rendering parameter corresponding to the spatial position of a target object in the real scene image and an augmented reality content model rendering parameter to obtain the size and position information of the target object in a three-dimensional space, and adjusting the size of an augmented reality three-dimensional model according to the size and the spatial position of the target object;
step 3: adjusting the shooting position and angle of the virtual camera according to the spatial position coordinates of the target object so that they match the viewing position and viewing angle;
step 4: establishing a depth mask in the virtual scene according to the spatial position and depth distribution of the target object, and, in combination with the depth mask, restoring the occlusion and perspective relationships between the augmented reality content model and the real scene at continuous depth, so that the augmented reality model fuses with the real scene.
Further, the real scene image in step 1 is acquired by the acquisition module using at least one acquisition device, including but not limited to a monocular camera, a depth camera, and a binocular camera.
Further, the spatial position information of the target object in the real scene in step 1 is obtained by methods including but not limited to machine learning, deep learning or threshold filtering: semantic segmentation is performed on the target object to obtain a target object image, from which coordinate distribution information and depth distribution information are obtained.
Further, adjusting the camera rendering parameters corresponding to the spatial position of the target object in the real scene image and the augmented reality content model rendering parameters in step 2 includes: mapping the real scene image from the image coordinate system to the world coordinate system according to the camera imaging principle to obtain the size $S_{obj}$ of the target object in three-dimensional space and its spatial position $P_{obj}$;
resizing the augmented reality three-dimensional model according to the target object size and spatial position: with $k$ denoting the size scale factor between the target object and the augmented reality model, the size $S_{model}$ of the augmented reality model is made to satisfy formula (1):
$$S_{model} = k \cdot S_{obj} \tag{1}$$
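As an illustration of this mapping, the following minimal Python sketch back-projects segmented pixels to world coordinates with a pinhole camera model and applies formula (1); the intrinsics, the assumption that the world frame coincides with the camera frame, and all function names are hypothetical, not taken from the patent:

```python
import numpy as np

# Hypothetical pinhole intrinsics; real values come from camera calibration.
FX, FY, CX, CY = 800.0, 800.0, 320.0, 240.0

def pixel_to_world(u, v, z):
    """Back-project pixel (u, v) with depth z (metres); world frame assumed
    to coincide with the camera frame for simplicity."""
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

def object_size_and_position(pixel_coords, depths):
    """Estimate the 3-D extent S_obj and centroid P_obj of the segmented object."""
    pts = np.array([pixel_to_world(u, v, z)
                    for (u, v), z in zip(pixel_coords, depths)])
    s_obj = pts.max(axis=0) - pts.min(axis=0)   # axis-aligned size, S_obj
    p_obj = pts.mean(axis=0)                    # centroid position, P_obj
    return s_obj, p_obj

def scale_model(s_obj, k):
    """Formula (1): the AR model size is the object size times the factor k."""
    return k * s_obj
```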
Further, step 3 specifically includes: obtaining the position coordinates $P_{view}$ of the viewing position in the world coordinate system by actual measurement or camera calibration, measuring the viewing angle $\theta_{view}$ of the viewer relative to the display device, and adjusting the shooting position and angle of the virtual camera according to the spatial position coordinates of the target object so as to match the viewing position and viewing angle.
Further, restoring the occlusion and perspective relationships between the augmented reality content model and the real scene at continuous depth in combination with the depth mask in step 4 includes:
establishing a depth mask in the virtual scene based on the spatial position $P_{obj}$ and depth distribution of the target object; if a region of the augmented reality model is occluded by the depth mask, that region is not rendered;
presetting the relative positional relationship between the augmented reality three-dimensional model and the target object, and determining the placement position $P_{model}$ of the augmented reality three-dimensional model in the virtual scene according to the spatial position $P_{obj}$ of the target object; the relative positional relationship between the augmented reality three-dimensional model and the target object includes but is not limited to conforming to the target object surface, lying to the side of the target object, above or below the target object, or in front of or behind the target object.
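A minimal sketch of how such a depth mask can suppress rendering of occluded model regions, assuming per-pixel depth maps are available for both the real scene and the rendered model (array names are illustrative only):

```python
import numpy as np

def composite_with_depth_mask(scene_rgb, scene_depth, model_rgb, model_depth):
    """Per-pixel depth test: a model pixel is drawn only where the model is
    nearer than the real scene, so real objects correctly occlude AR content."""
    visible = model_depth < scene_depth   # depth mask derived from the scene
    out = scene_rgb.copy()
    out[visible] = model_rgb[visible]     # occluded model regions stay unrendered
    return out
```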
Further, the fusion in step 4 includes but is not limited to characterizing the interrelationship of perspective transformation, occlusion and size scale between the continuous-depth augmented reality model and the real scene.
Further, the acquisition module is used for acquiring an image of a real scene, and the target object identification module is used for identifying spatial position information of a target object in the real scene;
the rendering parameter calculation module is used for calculating the size and the position of the model according to the mapping relation between the target object image and the model, and determining the shooting position and the pitch angle parameters of the rendering camera according to the watching position;
the augmented reality model rendering module is used for rendering the augmented reality model image at the continuous depth matching the spatial position of the target object.
Through the above technical scheme, the invention has the following beneficial effects:
the invention restores the relative position relation between the augmented reality content and the real scene on the continuous depth, and enhances the sense of reality of the rendered content. Firstly, acquiring coordinate distribution and depth distribution of an image of an object, then determining relevant parameters of a model size, a shooting position and a pitch angle of a rendering camera according to a mapping relation between the image of the object and the model and a position of a viewer, establishing a depth mask under a virtual scene according to the spatial position and the depth distribution of the object, reducing a shielding and perspective relation between an augmented reality content model and a real scene on continuous depth by combining the depth mask, and finally rendering the augmented reality model according to preset and calculation parameters to enable the augmented reality model to be fused with the real scene.
Drawings
FIG. 1 is a flow chart showing steps of a continuous depth virtual-real combined rendering method according to the present invention;
FIG. 2 is a schematic block diagram of a continuous depth virtual-real combined rendering method according to the present invention;
fig. 3 is a rendering result of a virtual-real combination rendering method of continuous depth according to the present invention.
Detailed Description
The invention is further described with reference to the drawings and detailed description which follow:
examples
As shown in figs. 1-2, a virtual-real combined rendering method of continuous depth uses a system comprising an acquisition module, a target object identification module, a rendering parameter calculation module and an augmented reality model rendering module, and comprises the following steps:
step 1: acquiring a real scene image through an acquisition module, and acquiring spatial position information of a target object in the real scene image through a target object identification module, wherein the spatial position information comprises coordinate distribution information and depth distribution information;
step 2: adjusting a camera rendering parameter corresponding to the spatial position of a target object in the real scene image and an augmented reality content model rendering parameter to obtain the size and position information of the target object in a three-dimensional space, and adjusting the size of an augmented reality three-dimensional model according to the size and the spatial position of the target object;
step 3: adjusting the shooting position and angle of the virtual camera according to the spatial position coordinates of the target object so that they match the viewing position and viewing angle;
step 4: establishing a depth mask in the virtual scene according to the spatial position and depth distribution of the target object, and, in combination with the depth mask, restoring the occlusion and perspective relationships between the augmented reality content model and the real scene at continuous depth, so that the augmented reality model fuses with the real scene.
The acquisition module is used for acquiring an image of a real scene, and the target object identification module is used for identifying the spatial position information of a target object in the real scene;
the rendering parameter calculation module is used for calculating the size and the position of the model according to the mapping relation between the target object image and the model, and determining the shooting position and the pitch angle parameters of the rendering camera according to the watching position;
the augmented reality model rendering module is used for matching the spatial position of the target object to render the corresponding continuous depth of the augmented reality model rendering image.
Preferably, as shown in fig. 2, the system comprises the acquisition module (S1), the target object recognition module (S2), the rendering parameter calculation module (S3) and the augmented reality model rendering module (S4).
The device part of the invention specifically comprises an image source module, an optical path folding assembly, a transmission long-distance imaging device and a beam splitter. The image source module is a 6.4-inch LCD display. Light emitted by the image source module passes through the optical path folding assembly and then vertically through a transmission long-distance imaging device formed by two plano-convex lenses with a radius of curvature of 575 mm and an aperture of 250 mm; after being refracted by the transmission long-distance imaging device and reflected by a beam splitter with a transmittance of 70% and a reflectivity of 30%, the light enters the human eye.
Preferably, the real scene image in step 1 is acquired by the acquisition module using at least one acquisition device, including but not limited to a monocular camera, a depth camera, and a binocular camera.
Preferably, the spatial position information of the target object in the real scene in step 1 is obtained by methods including but not limited to machine learning, deep learning or threshold filtering: semantic segmentation is performed on the target object to obtain a target object image, from which coordinate distribution information and depth distribution information are obtained.
In the present embodiment, the target recognition is performed on the road.
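As a toy illustration of the threshold-filtering option, the following sketch segments a "road" band from a depth map and extracts the coordinate and depth distributions; the thresholds and synthetic data are hypothetical, and a practical system would more likely use a learned semantic-segmentation network:

```python
import numpy as np

def segment_by_depth(depth_map, d_min=5.0, d_max=25.0):
    """Threshold filtering: keep pixels whose depth lies in the band where the
    target (here, the road) is expected, and return the target-object mask
    together with its coordinate and depth distributions."""
    mask = (depth_map >= d_min) & (depth_map <= d_max)
    vs, us = np.nonzero(mask)                    # pixel coordinate distribution
    return mask, np.stack([us, vs], axis=1), depth_map[mask]

# Synthetic 3x4 depth map in metres (hypothetical values).
depth = np.array([[3.0,  6.0,  7.5, 30.0],
                  [4.0,  9.0, 12.0, 28.0],
                  [5.5, 14.0, 20.0, 26.0]])
mask, coords, depths = segment_by_depth(depth)   # keeps the 5-25 m band
```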
Preferably, adjusting the camera rendering parameters corresponding to the spatial position of the target object in the real scene image and the augmented reality content model rendering parameters in step 2 includes: mapping the real scene image from the image coordinate system to the world coordinate system according to the camera imaging principle to obtain the size $S_{obj}$ of the target object in three-dimensional space and its spatial position $P_{obj}$;
resizing the augmented reality three-dimensional model according to the target object size and spatial position: with $k$ denoting the size scale factor between the target object and the augmented reality model, the size $S_{model}$ of the augmented reality model is made to satisfy formula (1):
$$S_{model} = k \cdot S_{obj} \tag{1}$$
As shown in fig. 3, in the present embodiment the road line model, the road sign model and the auxiliary information model are each assigned their own scale factor.
The position coordinates $P_{view}$ of the viewing position in the world coordinate system are obtained by actual measurement or camera calibration, and the viewing angle $\theta_{view}$ of the viewer relative to the display device is measured; the shooting position and angle of the virtual camera are then adjusted according to the spatial position coordinates of the target object so as to match the viewing position and viewing angle. In this embodiment, $P_{view}$ and $\theta_{view}$ are obtained by actual measurement.
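A minimal sketch of matching the virtual camera to the measured viewing position, using a standard look-at construction; the positions in the example are hypothetical:

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a view matrix placing the virtual camera at the measured viewing
    position `eye` and aiming it at the target object position `target`."""
    f = target - eye
    f = f / np.linalg.norm(f)                 # forward axis
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)                 # right axis
    u = np.cross(s, f)                        # true up axis
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye         # translate world into camera frame
    return view

# Example: viewer measured 1.2 m behind the display, target object 10 m ahead.
P_view = np.array([0.0, 0.0, -1.2])
P_obj = np.array([0.0, -1.0, 10.0])
V = look_at(P_view, P_obj)
```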
Preferably, restoring the occlusion and perspective relationships between the augmented reality content model and the real scene at continuous depth in combination with the depth mask in step 4 includes:
establishing a depth mask in the virtual scene based on the spatial position $P_{obj}$ and depth distribution of the target object; if a region of the augmented reality model is occluded by the depth mask, that region is not rendered;
presetting the relative positional relationship between the augmented reality three-dimensional model and the target object, and determining the placement position $P_{model}$ of the augmented reality three-dimensional model in the virtual scene according to the spatial position $P_{obj}$ of the target object; the relative positional relationship between the augmented reality three-dimensional model and the target object includes but is not limited to conforming to the target object surface, lying to the side of the target object, above or below the target object, or in front of or behind the target object.
Preferably, the fusion in step 4 includes but is not limited to characterizing the interrelationship of perspective transformation, occlusion and size scale between the continuous-depth augmented reality model and the real scene.
The device can display images at continuous depths from 5 m to 25 m from the beam splitter; the horizontal field of view is 14.4°, the vertical field of view is 8°, and the human eye can view the complete imaged content within a horizontal movement range of 20 cm and a vertical movement range of 18 cm. The rendering result obtained with the method of the invention is shown in fig. 3. As can be seen from fig. 3, the perspective relationships are correct: the lane lines conform to the target object surface, and the navigation information and the like lie to the side of, above or below, and in front of or behind the target object.
The above-described embodiments are merely preferred embodiments of the present invention and are not intended to limit its scope; all equivalent changes or modifications of the structures, features and principles described in the claims shall fall within the scope of the present invention.

Claims (8)

1. A virtual-real combined rendering method of continuous depth, characterized by using a system comprising an acquisition module, a target object identification module, a rendering parameter calculation module and an augmented reality model rendering module, and comprising the following steps:
step 1: acquiring a real scene image through an acquisition module, and acquiring spatial position information of a target object in the real scene image through a target object identification module, wherein the spatial position information comprises coordinate distribution information and depth distribution information;
step 2: adjusting a camera rendering parameter corresponding to the spatial position of a target object in the real scene image and an augmented reality content model rendering parameter to obtain the size and position information of the target object in a three-dimensional space, and adjusting the size of an augmented reality three-dimensional model according to the size and the spatial position of the target object;
step 3: adjusting the shooting position and angle of the virtual camera according to the spatial position coordinates of the target object so that they match the viewing position and viewing angle;
step 4: establishing a depth mask in the virtual scene according to the spatial position and depth distribution of the target object, and, in combination with the depth mask, restoring the occlusion and perspective relationships between the augmented reality content model and the real scene at continuous depth, so that the augmented reality model fuses with the real scene.
2. The virtual-real combined rendering method of continuous depth according to claim 1, wherein the real scene image in step 1 is acquired by the acquisition module using at least one acquisition device, including but not limited to a monocular camera, a depth camera, and a binocular camera.
3. The method of claim 1, wherein the spatial position information of the target object in the real scene in step 1 is obtained by methods including but not limited to machine learning, deep learning or threshold filtering: semantic segmentation is performed on the target object to obtain a target object image, from which coordinate distribution information and depth distribution information are obtained.
4. The method according to claim 1, wherein adjusting the camera rendering parameters corresponding to the spatial position of the target object in the real scene image and the augmented reality content model rendering parameters in step 2 comprises: mapping the real scene image from the image coordinate system to the world coordinate system according to the camera imaging principle to obtain the size $S_{obj}$ of the target object in three-dimensional space and its spatial position $P_{obj}$;
resizing the augmented reality three-dimensional model according to the target object size and spatial position: with $k$ denoting the size scale factor between the target object and the augmented reality model, the size $S_{model}$ of the augmented reality model is made to satisfy formula (1):
$$S_{model} = k \cdot S_{obj} \tag{1}$$
5. The method of claim 4, wherein step 3 specifically comprises: obtaining the position coordinates $P_{view}$ of the viewing position in the world coordinate system by actual measurement or camera calibration, measuring the viewing angle $\theta_{view}$ of the viewer relative to the display device, and adjusting the shooting position and angle of the virtual camera according to the spatial position coordinates of the target object so as to match the viewing position and viewing angle.
6. The method of claim 1, wherein restoring the occlusion and perspective relationships between the augmented reality content model and the real scene at continuous depth in combination with the depth mask in step 4 comprises:
establishing a depth mask in the virtual scene based on the spatial position $P_{obj}$ and depth distribution of the target object; if a region of the augmented reality model is occluded by the depth mask, that region is not rendered;
presetting the relative positional relationship between the augmented reality three-dimensional model and the target object, and determining the placement position $P_{model}$ of the augmented reality three-dimensional model in the virtual scene according to the spatial position $P_{obj}$ of the target object; the relative positional relationship between the augmented reality three-dimensional model and the target object includes but is not limited to conforming to the target object surface, lying to the side of the target object, above or below the target object, or in front of or behind the target object.
7. The method of claim 6, wherein the fusion in step 4 includes but is not limited to characterizing the interrelationship of perspective transformation, occlusion and size scale between the continuous-depth augmented reality model and the real scene.
8. The method according to claim 1, wherein the acquisition module is configured to acquire an image of a real scene, and the target object recognition module is configured to recognize spatial position information of a target object in the real scene;
the rendering parameter calculation module is used for calculating the size and the position of the model according to the mapping relation between the target object image and the model, and determining the shooting position and the pitch angle parameters of the rendering camera according to the watching position;
the augmented reality model rendering module is used for rendering the augmented reality model image at the continuous depth matching the spatial position of the target object.
CN202410258758.9A 2024-03-07 2024-03-07 Virtual-real combined rendering method of continuous depth Pending CN117853694A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410258758.9A CN117853694A (en) 2024-03-07 2024-03-07 Virtual-real combined rendering method of continuous depth

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410258758.9A CN117853694A (en) 2024-03-07 2024-03-07 Virtual-real combined rendering method of continuous depth

Publications (1)

Publication Number Publication Date
CN117853694A true CN117853694A (en) 2024-04-09

Family

ID=90540377

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410258758.9A Pending CN117853694A (en) 2024-03-07 2024-03-07 Virtual-real combined rendering method of continuous depth

Country Status (1)

Country Link
CN (1) CN117853694A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103489214A (en) * 2013-09-10 2014-01-01 北京邮电大学 Virtual reality occlusion handling method, based on virtual model pretreatment, in augmented reality system
CN106548519A (en) * 2016-11-04 2017-03-29 上海玄彩美科网络科技有限公司 Augmented reality method based on ORB SLAM and the sense of reality of depth camera
CN109903129A (en) * 2019-02-18 2019-06-18 北京三快在线科技有限公司 Augmented reality display methods and device, electronic equipment, storage medium
CN113066189A (en) * 2021-04-06 2021-07-02 海信视像科技股份有限公司 Augmented reality equipment and virtual and real object shielding display method
CN113238656A (en) * 2021-05-25 2021-08-10 北京达佳互联信息技术有限公司 Three-dimensional image display method and device, electronic equipment and storage medium
CN114511665A (en) * 2020-10-28 2022-05-17 北京理工大学 Virtual-real fusion rendering method and device based on monocular camera reconstruction
CN117419713A (en) * 2023-09-05 2024-01-19 罗普特科技集团股份有限公司 Navigation method based on augmented reality, computing device and storage medium
CN117522936A (en) * 2023-11-28 2024-02-06 长春理工大学 Virtual-real shielding processing method integrating multi-information assistance and contour detection


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
赵东阳; 陈一民; 李启明; 刘燕; 黄晨; 徐升; 周明珠: "A real-time depth-of-field rendering algorithm based on luminance and depth information" (一种基于亮度和深度信息的实时景深渲染算法), Journal of System Simulation, no. 08, 8 August 2012 (2012-08-08) *

Similar Documents

Publication Publication Date Title
CN109040738B (en) Calibration method and non-transitory computer readable medium
JP4764305B2 (en) Stereoscopic image generating apparatus, method and program
CN104933718B (en) A kind of physical coordinates localization method based on binocular vision
EP3057066B1 (en) Generation of three-dimensional imagery from a two-dimensional image using a depth map
US6205241B1 (en) Compression of stereoscopic images
JP4440066B2 (en) Stereo image generation program, stereo image generation system, and stereo image generation method
CN111373748A (en) System and method for externally calibrating a camera and diffractive optical element
WO2012117706A1 (en) Video processing device, video processing method, program
JP4817425B2 (en) Image display system and image display method
CN109974659A (en) A kind of embedded range-measurement system based on binocular machine vision
KR20200129657A (en) Method for gaining 3D model video sequence
KR20200056721A (en) Method and apparatus for measuring optical properties of augmented reality device
JP6061334B2 (en) AR system using optical see-through HMD
Deng et al. Towards stereoscopic on-vehicle AR-HUD
CN117853694A (en) Virtual-real combined rendering method of continuous depth
KR101883883B1 (en) method for glass-free hologram display without flipping images
Kwon et al. Selective attentional point-tracking through a head-mounted stereo gaze tracker based on trinocular epipolar geometry
KR20150098252A (en) Computer graphics based stereo floting integral imaging creation system
JP2003521857A (en) Software defocused 3D method, system and apparatus
US20220351653A1 (en) System and method for augmenting lightfield images
CN114219887A (en) Optimized ray tracing three-dimensional element image array acquisition method
Balasubramanian et al. Simplified video transmission of stereo images for 3D image reconstruction in TV
Guo Definitions of perspective diminution factor and foreshortening factor: Applications in the analysis of perspective distortion
CN118071827A (en) Three-dimensional calibration system and method based on video image fusion technology
JPH09261537A (en) Device and method for processing image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination