CN111951361B - Method and device for realizing AR image display effect - Google Patents

Method and device for realizing AR image display effect Download PDF

Info

Publication number
CN111951361B
CN111951361B CN201910410763.6A CN201910410763A CN111951361B CN 111951361 B CN111951361 B CN 111951361B CN 201910410763 A CN201910410763 A CN 201910410763A CN 111951361 B CN111951361 B CN 111951361B
Authority
CN
China
Prior art keywords
image
preset
processing
depth
light effect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910410763.6A
Other languages
Chinese (zh)
Other versions
CN111951361A (en)
Inventor
白欲立
王雪健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo New Vision Beijing Technology Co Ltd
Original Assignee
Lenovo New Vision Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo New Vision Beijing Technology Co Ltd filed Critical Lenovo New Vision Beijing Technology Co Ltd
Priority to CN201910410763.6A priority Critical patent/CN111951361B/en
Publication of CN111951361A publication Critical patent/CN111951361A/en
Application granted granted Critical
Publication of CN111951361B publication Critical patent/CN111951361B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • G06T15/60Shadow generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract

The application provides a method and a device for realizing an AR image display effect, wherein the method comprises the following steps: acquiring a plurality of first depth buffers of a preset depth, each first depth buffer comprising a first de-interleaved buffer of a preset specification and a first view space; performing light effect processing on each first de-interleaved buffer, and saving each processing result in a corresponding first 3D-format texture image in the first view space; and merging the first 3D-format texture images of all the depth buffers and performing image processing to generate a full-resolution image. By optimizing specifically for AR scenes, the method and the device better combine the layering and realism of the space, bring virtual objects closer to reality in depth hierarchy, keep virtual shadows from looking incongruous as they do with conventional methods, improve computational efficiency, and solve the problems of low efficiency and rendering errors in conventional schemes.

Description

Method and device for realizing AR image display effect
Technical Field
The present application relates to the field of image display, and in particular, to a method for implementing an AR image display effect, and an apparatus for implementing an AR image display effect.
Background
Particles in a colloid scatter light, forming a visible bright path. In nature, clouds, fog, and dust in the air are all colloids, so when light shines through them it scatters, producing the shafts of light (often called "god rays") that people see. This effect is also known as the Tyndall effect.
To simulate this phenomenon in a game, a physically faithful approach is impractical. Rendering volumetric light realistically would require an enormous number of particles, making real-time computation very difficult even on a desktop computer, and even less feasible on current mobile devices.
In the field of computer display, conventional methods that approximate the Tyndall effect consume large amounts of resources and compute slowly, so they cannot be used on mobile terminals, while the algorithms currently used on mobile terminals cannot be combined well with AR scenes.
Disclosure of Invention
The application provides a method for realizing an AR image display effect and a device for realizing the AR image display effect, solving the problem that the Tyndall effect displays poorly in AR scenes.
To solve the above technical problems, the embodiments of the present application provide the following technical solutions:
the application provides a method for realizing an AR image display effect, which comprises the following steps:
acquiring a plurality of first depth buffers of a preset depth; the first depth buffer comprises a first de-interleaved buffer of a preset specification and a first view space;
performing light effect processing on each first de-interleaved buffer, and saving each processing result in a corresponding first 3D-format texture image in the first view space;
and merging the first 3D-format texture images of all the depth buffers and performing image processing to generate a full-resolution image.
Optionally, performing the light effect processing on the first de-interleaved buffer and saving the processing result in the first 3D-format texture image in the first view space includes:
acquiring the light effect processing range of the first de-interleaved buffer according to a preset boundary parameter, and saving the light effect processing range in the first 3D-format texture image in the first view space;
and performing a first light effect blurring process, according to a preset first blurring parameter, on the image within the light effect processing range in the first 3D-format texture image.
Optionally, the light effect processing range includes: an ambient occlusion processing range and a light edge processing range.
Optionally, the first light effect blurring process is specifically a first edge-aware smart blur with a preset number of passes.
Optionally, the preset number of passes is 1 to 6.
Optionally, the image processing is specifically a second edge-aware smart blur.
Optionally, the number of first depth buffers is four, the preset depth is one-quarter depth, and the preset specification is a 2×2 specification.
Optionally, the first 3D-format texture image is an R8G8-format texture image; the R8G8-format texture image includes at least a normal map, an ambient map, and a depth ambient occlusion map.
Optionally, the first view space is constructed from the depth values of the first de-interleaved buffer.
The application further provides a device for realizing an AR image display effect, which comprises:
an acquisition unit, configured to acquire a plurality of first depth buffers of a preset depth; the first depth buffer comprises a first de-interleaved buffer of a preset specification and a first view space;
a processing unit, configured to perform light effect processing on each first de-interleaved buffer, and save each processing result in a corresponding first 3D-format texture image in the first view space;
and a merging unit, configured to merge the first 3D-format texture images of all the depth buffers and perform image processing to generate a full-resolution image.
Based on the disclosure of the above embodiments, the embodiments of the present application provide the following beneficial effects:
the application provides a method and a device for realizing an AR image display effect, wherein the method comprises the following steps: acquiring a plurality of first depth buffer areas with preset depths; the first depth buffer zone comprises a first de-interleaving and a first view space with preset specifications; respectively carrying out light effect processing on each first de-interlacing, and respectively storing each processing result in a corresponding first 3D format texture image in the first view space; and merging all the first 3D format texture images in the depth buffer and performing image processing to generate a full-resolution image. The method and the device have the advantages that special optimization is carried out on the AR scene, layering sense and reality sense of the space are combined better, virtual objects are enabled to be closer to the reality in the hierarchy, the performance of virtual shadows is not illegal when the traditional method is adopted, the calculation efficiency is improved, and the problems of low efficiency and rendering errors in the traditional scheme are solved.
Drawings
Fig. 1 is a flowchart of a method for implementing an AR image display effect according to an embodiment of the present application;
fig. 2 is a block diagram of a unit of an apparatus for implementing an AR image display effect according to an embodiment of the present application.
Detailed Description
Hereinafter, specific embodiments of the present application will be described in detail with reference to the accompanying drawings, but they do not limit the present application.
It should be understood that various modifications may be made to the embodiments disclosed herein. Therefore, the above description should not be taken as limiting, but merely as exemplification of the embodiments. Other modifications within the scope and spirit of this application will occur to those skilled in the art.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the application and, together with a general description of the application given above and the detailed description of the embodiments given below, serve to explain the principles of the application.
These and other characteristics of the present application will become apparent from the following description of a preferred form of embodiment, given as a non-limiting example, with reference to the accompanying drawings.
It is also to be understood that, although the present application has been described with reference to some specific examples, a person skilled in the art will certainly be able to achieve many other equivalent forms of the present application, having the characteristics as set forth in the claims and hence all coming within the field of protection defined thereby.
The foregoing and other aspects, features, and advantages of the present application will become more apparent in light of the following detailed description when taken in conjunction with the accompanying drawings.
Specific embodiments of the present application will be described hereinafter with reference to the accompanying drawings; however, it is to be understood that the disclosed embodiments are merely examples of the application, which may be embodied in various forms. Well-known and/or repeated functions and constructions are not described in detail to avoid obscuring the application with unnecessary or excessive detail. Therefore, specific structural and functional details disclosed herein are not intended to be limiting, but merely serve as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present application in virtually any appropriately detailed structure.
This specification may use the phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," each of which may refer to one or more of the same or different embodiments of the present application.
The application provides a method for realizing an AR image display effect; the application also provides a device for realizing the AR image display effect. The following examples are described in detail one by one.
The first embodiment provided in the present application is an embodiment of a method for realizing an AR image display effect.
The following describes the present embodiment in detail with reference to fig. 1, where fig. 1 is a flowchart of a method for implementing an AR image display effect according to the embodiment of the present application.
The embodiments of the present application can scale quality against performance by changing the number of ambient occlusion (AO) taps (with progressive sampling-kernel support) and by switching features on or off at different preset levels.
Randomized sampling can be used so that adjacent pixels share AO values (a rotating, progressively expanding sampling disk), followed by a final denoising blur. The denoising blur is edge-aware, preventing the effect from bleeding into unrelated background or foreground objects and thereby avoiding haloing. Edges may be based on depth alone, or on depth and normals (the latter improves quality but lengthens processing time). The smart blur is performed in the 2×2 de-interleaved domain for the highest cache efficiency, with only the final pass completed at full resolution in the interleaved (reconstructed) domain.
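As a minimal illustration of the 2×2 de-interleaving described above (the function name and array layout are editorial assumptions, not taken from the patent text), the following Python sketch splits a full-resolution depth buffer into four quarter-resolution buffers, one per position within each 2×2 pixel block:

    import numpy as np

    def deinterleave_2x2(depth):
        """Split a full-resolution depth buffer into four quarter-size
        buffers, one for each position inside every 2x2 pixel block."""
        return [depth[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]

    # A 4x4 depth buffer yields four 2x2 quarter-depth buffers.
    depth = np.arange(16, dtype=np.float32).reshape(4, 4)
    quarters = deinterleave_2x2(depth)
    assert len(quarters) == 4 and quarters[0].shape == (2, 2)

Processing each quarter buffer independently keeps the samples of one pass adjacent in memory, which is the cache-efficiency benefit referred to above.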
Under the "high" preset, the embodiment of the present application comprises the following steps:
step S101, a plurality of first depth buffers with preset depths are acquired.
The first depth buffer comprises a first de-interleaved buffer of a preset specification and a first view space.
Optionally, the number of first depth buffers is four, the preset depth is one-quarter depth, and the preset specification is a 2×2 specification.
Optionally, the first view space is constructed from the depth values of the first de-interleaved buffer.
The embodiments of the present application require screen-space normals; if no input normals are provided, they are reconstructed from depth.
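The patent does not specify the exact reconstruction, so the following is only a sketch of one common approach, assuming normals are approximated from finite differences of per-pixel positions:

    import numpy as np

    def normals_from_depth(depth):
        """Approximate per-pixel screen-space normals from depth:
        n = normalize(dp/dx x dp/dy), where p = (x, y, depth)."""
        h, w = depth.shape
        ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
        p = np.dstack([xs, ys, depth])   # per-pixel position
        dpdx = np.gradient(p, axis=1)    # horizontal derivative
        dpdy = np.gradient(p, axis=0)    # vertical derivative
        n = np.cross(dpdx, dpdy)
        return n / (np.linalg.norm(n, axis=2, keepdims=True) + 1e-8)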
Step S102, performing light effect processing on each first de-interleaved buffer, and saving each processing result in a corresponding first 3D-format texture image in the first view space.
A 3D texture is a volumetric "image": where a 2D texture is a matrix of pixels indexed by (x, y), a 3D texture is a matrix of voxels indexed by (x, y, z), each (x, y, z) corresponding to a scalar value. 3D texture mapping maps this three-dimensional texel array into three-dimensional object space, and the textured object is typically treated as a cube or a cylinder. A 3D-format texture image is stored in a preset layout and consists of multiple mutually related image layers; the layers serve different display functions, but are correlated so that the displayed result approaches the real effect. For example, the first 3D-format texture image may be an R8G8-format texture image; the R8G8-format texture image includes at least a normal map, an ambient map, and a depth ambient occlusion map.
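As an illustrative sketch of packing two correlated fields into a two-channel 8-bit texture, analogous to the R8G8 format named above (which fields occupy which channel is an editorial assumption):

    import numpy as np

    def pack_r8g8(ao, edges):
        """Quantize two scalar fields in [0, 1] (e.g. an AO value and
        encoded edge information) into a two-channel 8-bit texture."""
        r = np.clip(ao * 255.0, 0.0, 255.0).astype(np.uint8)
        g = np.clip(edges * 255.0, 0.0, 255.0).astype(np.uint8)
        return np.dstack([r, g])         # shape (h, w, 2), dtype uint8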
Performing the light effect processing on the first de-interleaved buffer and saving the processing result in the first 3D-format texture image in the first view space includes the following steps:
Step S102-1, acquiring the light effect processing range of the first de-interleaved buffer according to a preset boundary parameter, and saving the light effect processing range in the first 3D-format texture image in the first view space.
The purpose of this step is to compute the AO terms and the edges.
Optionally, the light effect processing range includes: an ambient occlusion processing range and a light edge processing range.
Step S102-2, performing a first light effect blurring process, according to a preset first blurring parameter, on the image within the light effect processing range in the first 3D-format texture image.
Optionally, the first light effect blurring process is specifically a first edge-aware smart blur with a preset number of passes.
Optionally, the preset number of passes is 1 to 6.
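A minimal sketch of one edge-aware blur pass, assuming depth-only edges with a Gaussian depth-similarity weight (the parameter sigma_d and the weighting function are assumptions; the patent also permits depth-plus-normal edges):

    import numpy as np

    def edge_aware_blur(ao, depth, sigma_d=0.1):
        """One pass of a depth-weighted blur: each pixel is averaged
        with its four neighbours, but a neighbour across a depth
        discontinuity gets near-zero weight, so AO does not bleed
        across silhouettes. (np.roll wraps at borders; a real
        implementation would clamp instead.)"""
        out = ao.copy()
        wsum = np.ones_like(ao)
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nb_ao = np.roll(ao, (dy, dx), axis=(0, 1))
            nb_d = np.roll(depth, (dy, dx), axis=(0, 1))
            w = np.exp(-((depth - nb_d) / sigma_d) ** 2)  # depth similarity
            out += w * nb_ao
            wsum += w
        return out / wsum

Running this 1 to 6 times corresponds to the preset number of passes mentioned above.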
Step S103, merging the first 3D-format texture images of all the depth buffers and performing image processing to generate a full-resolution image.
Optionally, the image processing is specifically a second edge-aware smart blur.
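The merge step can be read as the inverse of the earlier 2×2 de-interleave; a sketch (function names are editorial, matching the deinterleave_2x2 sketch above):

    import numpy as np

    def interleave_2x2(quarters):
        """Recombine four quarter-resolution results into one
        full-resolution image (inverse of deinterleave_2x2)."""
        qh, qw = quarters[0].shape
        full = np.empty((qh * 2, qw * 2), dtype=quarters[0].dtype)
        offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]  # same order as the split
        for q, (dy, dx) in zip(quarters, offsets):
            full[dy::2, dx::2] = q
        return full

Round-tripping interleave_2x2(deinterleave_2x2(depth)) reproduces the original buffer, which is a convenient correctness check before the final full-resolution blur.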
The "highest/adaptive" quality presets include auxiliary base AO channels that can be used to provide importance heuristics to provide guidance on the number of instances of variables per pixel for the main AO channel.
With provided screen normals, a two-pass blur, and the "highest" adaptive target set to 0.45, the effect (both quality and performance) can be scaled across the "low"/"medium"/"high"/"highest" presets by changing the number of AO taps and toggling individual features on or off.
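One way to express such preset scaling is a configuration table; the tap counts and toggles below are illustrative assumptions (only the 0.45 adaptive target comes from the text above):

    # Hypothetical quality presets: more AO taps, more blur passes and
    # normal-based edges trade performance for quality.
    AO_PRESETS = {
        "low":     {"taps": 4,  "blur_passes": 1, "edges_use_normals": False},
        "medium":  {"taps": 8,  "blur_passes": 1, "edges_use_normals": False},
        "high":    {"taps": 16, "blur_passes": 2, "edges_use_normals": True},
        "highest": {"taps": 32, "blur_passes": 2, "edges_use_normals": True,
                    "adaptive_target": 0.45},
    }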
According to the embodiments of the present application, special optimization is performed for the AR scene, so that the layering and realism of the space are combined better, virtual objects sit closer to reality in depth hierarchy, virtual shadows no longer look incongruous as they do with conventional methods, computational efficiency is improved, and the problems of low efficiency and rendering errors in conventional schemes are solved.
Corresponding to the first embodiment provided in the present application, the present application also provides a second embodiment, i.e., a device for realizing the AR image display effect. Since the second embodiment is substantially similar to the first embodiment, its description is relatively brief; for relevant details, refer to the corresponding description of the first embodiment. The device embodiment described below is merely illustrative.
Fig. 2 shows an embodiment of an apparatus for realizing an AR image display effect provided in the present application. Fig. 2 is a block diagram of a unit of an apparatus for implementing an AR image display effect according to an embodiment of the present application.
Referring to fig. 2, the present application provides an apparatus for implementing an AR image display effect, including: an acquisition unit 201, a processing unit 202, and a merging unit 203;
an acquisition unit 201, configured to acquire a plurality of first depth buffers of a preset depth; the first depth buffer comprises a first de-interleaved buffer of a preset specification and a first view space;
a processing unit 202, configured to perform light effect processing on each first de-interleaved buffer, and save each processing result in a corresponding first 3D-format texture image in the first view space;
a merging unit 203, configured to merge the first 3D-format texture images of all the depth buffers and perform image processing to generate a full-resolution image.
Optionally, the processing unit 202 includes:
a range acquisition subunit, configured to acquire the light effect processing range of the first de-interleaved buffer according to a preset boundary parameter, and save the light effect processing range in the first 3D-format texture image in the first view space;
and a blurring subunit, configured to perform a first light effect blurring process, according to a preset first blurring parameter, on the image within the light effect processing range in the first 3D-format texture image.
Optionally, the light effect processing range includes: an ambient occlusion processing range and a light edge processing range.
Optionally, the first light effect blurring process is specifically a first edge-aware smart blur with a preset number of passes.
Optionally, the preset number of passes is 1 to 6.
Optionally, the image processing is specifically a second edge-aware smart blur.
Optionally, the number of first depth buffers is four, the preset depth is one-quarter depth, and the preset specification is a 2×2 specification.
Optionally, the first 3D-format texture image is an R8G8-format texture image; the R8G8-format texture image includes at least a normal map, an ambient map, and a depth ambient occlusion map.
Optionally, the first view space is constructed from the depth values of the first de-interleaved buffer.
According to the embodiments of the present application, special optimization is performed for the AR scene, so that the layering and realism of the space are combined better, virtual objects sit closer to reality in depth hierarchy, virtual shadows no longer look incongruous as they do with conventional methods, computational efficiency is improved, and the problems of low efficiency and rendering errors in conventional schemes are solved.
The above embodiments are only exemplary embodiments of the present application and are not intended to limit the present application, whose scope is defined by the claims. Those skilled in the art may make various modifications and equivalent arrangements to the present application, and such modifications and equivalents are also considered to fall within the scope of protection of the present application.

Claims (9)

1. A method for implementing an AR image display effect, comprising:
acquiring a plurality of first depth buffers of a preset depth; the first depth buffer comprises a first de-interleaved buffer of a preset specification and a first view space;
performing light effect processing on each first de-interleaved buffer, and saving each processing result in a corresponding first 3D-format texture image in the first view space, which specifically comprises: acquiring the light effect processing range of the first de-interleaved buffer according to a preset boundary parameter, and saving the light effect processing range in the first 3D-format texture image in the first view space; and performing a first light effect blurring process, according to a preset first blurring parameter, on the image within the light effect processing range in the first 3D-format texture image;
and merging the first 3D-format texture images of all the depth buffers and performing image processing to generate a full-resolution image.
2. The method of claim 1, wherein the light effect processing range comprises: an ambient occlusion processing range and a light edge processing range.
3. The method according to claim 1, wherein the first light effect blurring process is specifically a first edge-aware smart blur with a preset number of passes.
4. The method according to claim 3, wherein the preset number of passes is 1 to 6.
5. The method according to claim 1, wherein the image processing is specifically a second edge-aware smart blur.
6. The method of claim 1, wherein the number of first depth buffers is four, the preset depth is one-quarter depth, and the preset specification is a 2×2 specification.
7. The method of claim 1, wherein the first 3D-format texture image is an R8G8-format texture image; the R8G8-format texture image includes a normal map, an ambient map, and a depth ambient occlusion map.
8. The method of claim 1, wherein the first view space is constructed from the depth values of the first de-interleaved buffer.
9. An apparatus for realizing an AR image display effect, comprising:
an acquisition unit, configured to acquire a plurality of first depth buffers of a preset depth; the first depth buffer comprises a first de-interleaved buffer of a preset specification and a first view space;
a processing unit, configured to perform light effect processing on each first de-interleaved buffer, and save each processing result in a corresponding first 3D-format texture image in the first view space, which specifically includes:
acquiring the light effect processing range of the first de-interleaved buffer according to a preset boundary parameter, and saving the light effect processing range in the first 3D-format texture image in the first view space; and performing a first light effect blurring process, according to a preset first blurring parameter, on the image within the light effect processing range in the first 3D-format texture image;
and a merging unit, configured to merge the first 3D-format texture images of all the depth buffers and perform image processing to generate a full-resolution image.
CN201910410763.6A 2019-05-17 2019-05-17 Method and device for realizing AR image display effect Active CN111951361B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910410763.6A CN111951361B (en) 2019-05-17 2019-05-17 Method and device for realizing AR image display effect

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910410763.6A CN111951361B (en) 2019-05-17 2019-05-17 Method and device for realizing AR image display effect

Publications (2)

Publication Number Publication Date
CN111951361A CN111951361A (en) 2020-11-17
CN111951361B (en) 2024-04-02

Family

ID=73336725

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910410763.6A Active CN111951361B (en) 2019-05-17 2019-05-17 Method and device for realizing AR image display effect

Country Status (1)

Country Link
CN (1) CN111951361B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102113015A (en) * 2008-07-28 2011-06-29 皇家飞利浦电子股份有限公司 Use of inpainting techniques for image correction
CN102254340A (en) * 2011-07-29 2011-11-23 北京麒麟网信息科技有限公司 Method and system for drawing ambient occlusion images based on GPU (graphic processing unit) acceleration
CN104103089A (en) * 2014-07-29 2014-10-15 无锡梵天信息技术股份有限公司 Real-time soft shadow realization method based on image screen space
CN108805971A (en) * 2018-05-28 2018-11-13 中北大学 A kind of ambient light masking methods

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8134556B2 (en) * 2007-05-30 2012-03-13 Elsberg Nathan Method and apparatus for real-time 3D viewer with ray trace on demand
US10129523B2 (en) * 2016-06-22 2018-11-13 Microsoft Technology Licensing, Llc Depth-aware reprojection

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102113015A (en) * 2008-07-28 2011-06-29 皇家飞利浦电子股份有限公司 Use of inpainting techniques for image correction
CN102254340A (en) * 2011-07-29 2011-11-23 北京麒麟网信息科技有限公司 Method and system for drawing ambient occlusion images based on GPU (graphic processing unit) acceleration
CN104103089A (en) * 2014-07-29 2014-10-15 无锡梵天信息技术股份有限公司 Real-time soft shadow realization method based on image screen space
CN108805971A (en) * 2018-05-28 2018-11-13 中北大学 A kind of ambient light masking methods

Also Published As

Publication number Publication date
CN111951361A (en) 2020-11-17

Similar Documents

Publication Publication Date Title
CN111508052B (en) Rendering method and device of three-dimensional grid body
Weier et al. Foveated real‐time ray tracing for head‐mounted displays
CN107274476B (en) Shadow map generation method and device
US8325186B2 (en) Method and apparatus for rendering shadows
US8970583B1 (en) Image space stylization of level of detail artifacts in a real-time rendering engine
US10748332B2 (en) Hybrid frustum traced shadows systems and methods
CN109035383B (en) Volume cloud drawing method and device and computer readable storage medium
US9501860B2 (en) Sparse rasterization
CN108805971B (en) Ambient light shielding method
JP2004164593A (en) Method and apparatus for rendering 3d model, including multiple points of graphics object
KR102442488B1 (en) Graphics processing systems and graphics processors
Billeter et al. Real time volumetric shadows using polygonal light volumes
CN112652046B (en) Game picture generation method, device, equipment and storage medium
CN104103089A (en) Real-time soft shadow realization method based on image screen space
CA2744504A1 (en) Optimal point density using camera proximity for point-based global illumination
Gautron Real-time ray-traced ambient occlusion of complex scenes using spatial hashing
MohammadBagher et al. Screen-space percentage-closer soft shadows
CN111951361B (en) Method and device for realizing AR image display effect
Vos Volumetric light effects in killzone: Shadow fall
Zhang et al. Indirect illumination with efficient monte carlo integration and denoising
Rohmer et al. Tiled frustum culling for differential rendering on mobile devices
CN115035231A (en) Shadow baking method, shadow baking device, electronic apparatus, and storage medium
Bischoff et al. A Real-time Global Illumination Approach for High Resolution Reflective Shadow Maps in Open World Scenes.
CN107330965B (en) Method for realizing hard shadow anti-aliasing by using local conservative rasterization method
Andersson et al. Efficient multi-view ray tracing using edge detection and shader reuse

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant