CN115457179A - Method, apparatus, device and medium for rendering virtual object

Info

Publication number
CN115457179A
Authority
CN
China
Prior art keywords
environment
rendering
image
environment image
images
Prior art date
Legal status
Pending
Application number
CN202211154080.7A
Other languages
Chinese (zh)
Inventor
高星
刘慧琳
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202211154080.7A
Publication of CN115457179A
Priority to PCT/CN2023/115909 (WO2024060952A1)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering


Abstract

Methods, apparatuses, devices and media for rendering virtual objects are provided. In one method, a plurality of environment images of a real environment are acquired, the plurality of environment images including images of the real environment acquired at a plurality of acquisition time points, respectively. Based on the plurality of acquisition time points, an environment image that matches a current time point at which the augmented reality application is used is selected from the plurality of environment images as a rendering environment image. A virtual object rendered with the rendering environment image is presented in the augmented reality application. With exemplary implementations of the present disclosure, a rendering environment image for rendering a virtual object may be selected in real time based on the current time point and used for rendering. In this way, the illumination information of the rendering environment image can be made consistent with the illumination information of the real environment at the current time point.

Description

Method, apparatus, device and medium for rendering virtual object
Technical Field
Example implementations of the present disclosure relate generally to rendering virtual objects, and more particularly, to methods, apparatuses, devices, and computer-readable storage media for rendering virtual objects based on real-time selected images of an environment in an Augmented Reality (AR) application.
Background
A number of augmented reality applications have been developed, in which a user can capture a scene in a real environment using an image acquisition device of a device such as a mobile terminal and add a virtual object to the video of the captured real environment. For example, virtual objects may be placed at desired locations, or movable virtual characters may be added, and so on. Because the illumination in the real environment may change constantly, how to set the illumination rendering parameters of the virtual object so that the illumination effect of the rendered virtual object is consistent with the surrounding real environment has become an urgent problem to be solved.
Disclosure of Invention
In a first aspect of the disclosure, a method for rendering a virtual object in an augmented reality application is provided. In the method, a plurality of environment images of the real environment are acquired, the plurality of environment images including images of the real environment acquired at a plurality of acquisition time points, respectively. Based on the plurality of acquisition time points, an environment image that matches a current time point at which the augmented reality application is used is selected from the plurality of environment images as a rendering environment image. Virtual objects rendered with a rendering environment image are presented in an augmented reality application.
In a second aspect of the disclosure, an apparatus for rendering a virtual object in an augmented reality application is provided. The apparatus includes: an acquisition module configured to acquire a plurality of environment images of a real environment, the plurality of environment images including images of the real environment acquired at a plurality of acquisition time points, respectively; a selection module configured to select, as a rendering environment image, an environment image from the plurality of environment images that matches a current time point at which the augmented reality application is used, based on the plurality of acquisition time points; and a presentation module configured to present, in the augmented reality application, a virtual object rendered with the rendering environment image.
In a third aspect of the disclosure, an electronic device is provided. The electronic device includes: at least one processing unit; and at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit, the instructions, when executed by the at least one processing unit, causing the electronic device to perform a method according to the first aspect of the disclosure.
In a fourth aspect of the present disclosure, a computer-readable storage medium is provided, having stored thereon a computer program, which, when executed by a processor, causes the processor to carry out the method according to the first aspect of the present disclosure.
It should be understood that what is described in this section is not intended to limit key features or essential features of implementations of the disclosure, nor is it intended to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various implementations of the present disclosure will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements, and wherein:
FIG. 1 illustrates an example of an application environment in which implementations of the present disclosure may be used;
FIG. 2 illustrates a block diagram for rendering virtual objects in an augmented reality application, in accordance with some implementations of the present disclosure;
FIG. 3 illustrates a block diagram of a plurality of environmental images respectively acquired at a plurality of acquisition time points, in accordance with some implementations of the present disclosure;
FIG. 4 illustrates a block diagram of a plurality of environmental images respectively acquired at a plurality of acquisition locations, in accordance with some implementations of the present disclosure;
FIG. 5 illustrates a block diagram of a process for selecting a rendering environment image based on a comparison of a device location and an acquisition location of a terminal device running an augmented reality application, in accordance with some implementations of the present disclosure;
FIG. 6 illustrates a block diagram of a process for selecting a rendering environment image based on an occlusion relationship in accordance with some implementations of the present disclosure;
FIG. 7 illustrates a block diagram of a process for converting an environmental image to a standard environmental image, in accordance with some implementations of the present disclosure;
FIG. 8 illustrates a block diagram of a process for mapping pixels, in accordance with some implementations of the present disclosure;
FIG. 9 illustrates a block diagram of a spherical panoramic image, in accordance with some implementations of the present disclosure;
FIG. 10 illustrates a block diagram of a process for generating new rendering parameters based on multiple environment images, in accordance with some implementations of the present disclosure;
FIGS. 11A and 11B illustrate block diagrams of presenting virtual objects in an augmented reality application, respectively, according to some implementations of the present disclosure;
FIG. 12 illustrates a flow diagram of a method for rendering virtual objects, in accordance with some implementations of the present disclosure;
FIG. 13 illustrates a block diagram of an apparatus for rendering virtual objects, in accordance with some implementations of the present disclosure; and
FIG. 14 illustrates a block diagram of a device capable of implementing various implementations of the present disclosure.
Detailed Description
Implementations of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain implementations of the present disclosure are illustrated in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the implementations set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and implementations of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
In describing implementations of the present disclosure, the terms "include," "including," and the like are to be construed as open-ended, i.e., "including, but not limited to." The term "based on" should be understood as "based at least in part on." The term "one implementation" or "the implementation" should be understood as "at least one implementation." The term "some implementations" should be understood as "at least some implementations." Other explicit and implicit definitions may also be included below. As used herein, the term "model" may represent an association relationship between various data. For example, the above association relationship may be obtained based on various technical solutions that are currently known and/or will be developed in the future.
It will be appreciated that the data involved in the subject technology, including but not limited to the data itself, the acquisition or use of the data, should comply with the requirements of the corresponding laws and regulations and related regulations.
It is understood that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with the relevant laws and regulations, of the type, the scope of use, the usage scenarios, etc. of the personal information involved in the present disclosure, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly prompt the user that the requested operation will require the acquisition and use of the user's personal information. Thus, the user can autonomously choose, according to the prompt information, whether to provide the personal information to the software or hardware, such as an electronic device, an application program, a server, or a storage medium, that performs the operations of the disclosed technical solution.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user, for example, by way of a pop-up window, in which the prompt information may be presented as text. In addition, the pop-up window may carry a selection control allowing the user to choose "agree" or "disagree" to providing personal information to the electronic device.
It is understood that the above notification and user authorization process is only illustrative and is not intended to limit the implementation of the present disclosure, and other ways of satisfying the relevant laws and regulations may be applied to the implementation of the present disclosure.
Example Environment
A number of AR applications have been developed, and an AR application environment according to one exemplary implementation of the present disclosure is described with reference to FIG. 1. FIG. 1 illustrates an example 100 of an application environment in which implementations of the present disclosure may be used. An AR application may run on a terminal device; a user may hold the terminal device, capture a scene in the real environment using the augmented reality application 110 running on the terminal device, and add a virtual object 120 (e.g., a cube sculpture) to the video of the captured real environment. Since the user holds the terminal device, the device position of the terminal device in the real environment can be considered to be the same as the user's position.
Generally, the illumination in an AR application environment varies with time and with movement of the device position. FIG. 1 illustrates a virtual object 120 rendered using preset lighting rendering data; in FIG. 1, the lighting effect of the rendered virtual object 120 appears inconsistent with the surrounding real environment. Specifically, the preset illumination rendering data is set for a daytime scene, and the illumination effect of the virtual object 120 obtained using this illumination rendering data is relatively bright. When the user uses the augmented reality application 110 at night, the lighting of the real environment is dim. At this time, when the virtual object 120 is added to the AR scene, the excessively bright virtual object 120 does not harmonize with the surrounding dark environment. This results in an unrealistic visual effect in the augmented reality application 110.
It will be appreciated that a real environment may have complex lighting information. For example, the illumination may be roughly divided into direct illumination and ambient illumination, and there may be a plurality of direct light sources in a real environment. According to one approach, a captured image of the real environment may be processed based on real-time illumination estimation techniques, i.e., illumination analysis may be performed on the image of the real environment to obtain the real illumination. However, this approach can only obtain the overall illumination intensity or the most significant direct illumination of the real environment, and its accuracy is not satisfactory. Furthermore, real-time illumination estimation techniques incur a large computational resource overhead and are thus difficult to implement on portable computing devices such as mobile terminals. In this case, how to determine the ambient illumination information of the virtual object and perform rendering in a more convenient and efficient manner becomes an urgent problem to be solved.
Profiling process for rendering virtual objects
In order to address the deficiencies in the above technical solutions, according to an exemplary implementation of the present disclosure, a method for rendering a virtual object in an augmented reality application is proposed. In summary, a plurality of environment images of a real environment may be acquired and taken as candidate rendering environment images for rendering a virtual object. Here, the plurality of environment images may be images of the real environment acquired at a plurality of acquisition time points, respectively. For example, multiple environmental images may be acquired at different points in time of day (such as daytime and nighttime), respectively.
In the context of the present disclosure, AR applications may be run at a variety of terminal devices. For example, the AR application may be run on a conventional mobile terminal device (including, but not limited to, a mobile phone, a mobile tablet computing device, a mobile notebook computing device, etc.). As another example, an AR application may be run on a wearable terminal device (including, but not limited to, a computing-enabled eyewear device, a helmet device, etc.). As another example, the AR application may be run on a computing device with display functionality separate from the computing functionality (e.g., running the AR application with a portable computing device and displaying an interface of the AR application with an eyewear device in communication with the portable computing device).
An overview of one exemplary implementation according to the present disclosure is described with reference to FIG. 2, which shows a block diagram 200 for rendering virtual objects in an augmented reality application, according to some implementations of the present disclosure. As shown in FIG. 2, a plurality of environment images respectively acquired at a plurality of acquisition time points may be obtained. For example, the acquisition time point 210 of the environment image 220 may be in the daytime, …, and the acquisition time point 212 of the environment image 222 may be at night.
A current time point 244 at which the user is using the augmented reality application 240 may be determined. Further, the environment image 222 matching the current time point 244 may be selected from the plurality of environment images 220, …, 222 as the rendering environment image 230 based on a comparison of the current time point 244 with the plurality of acquisition time points 210, …, 212. For example, if the user uses the augmented reality application 240 during the daytime, the environment image 220 acquired during the daytime may be selected as the rendering environment image 230 and used for rendering. At this point, the rendered virtual object 250 will present a bright daytime lighting effect. For another example, if the user uses the augmented reality application 240 at night, the environment image 222 acquired at night may be selected as the rendering environment image 230. At this point, the rendered virtual object 250 will present a dim nighttime lighting effect.
As shown in fig. 2, the current time point 244 is night, and thus the environment image 222 at night may be selected as the rendering environment image 230. Virtual object 250 may be rendered with rendering environment image 230 and rendered virtual object 250 presented in augmented reality application 240. At this time, a night light effect as shown in fig. 2 may be presented.
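To make the time-based selection concrete, the following Python sketch illustrates one possible way to pick the environment image whose acquisition time of day is closest to the current time point. It is only a minimal illustration under assumed names (EnvironmentImage, select_by_time, and the cyclic time-of-day metric are illustrative and not part of the disclosure).

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class EnvironmentImage:
    """A candidate rendering environment image (hypothetical structure)."""
    path: str
    acquired_at: datetime   # acquisition time point
    position: tuple         # (x, y, z) acquisition position, used in later sketches

def minutes_of_day(t: datetime) -> int:
    return t.hour * 60 + t.minute

def select_by_time(images, now: datetime) -> EnvironmentImage:
    """Select the environment image whose acquisition time of day is closest
    to the current time of day (cyclic distance over 24 hours)."""
    def cyclic_gap(img):
        gap = abs(minutes_of_day(img.acquired_at) - minutes_of_day(now))
        return min(gap, 24 * 60 - gap)
    return min(images, key=cyclic_gap)

# Usage: rendering_env = select_by_time(candidate_images, datetime.now())
```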
With the exemplary implementation of the present disclosure, the rendering environment image 230 for rendering the virtual object 250 may be selected in real time based on the current time point 244 and the rendering may be performed. In this way, the illumination information of the rendering environment image 230 may be made to coincide with the illumination information of the real environment at the current time point 244, and thus the virtual object 250 obtained by rendering may be made to coincide with the surrounding real environment.
Detailed process for rendering virtual objects
Having described an overview of an exemplary implementation according to the present disclosure with reference to FIG. 2, further details for rendering virtual objects will be described below. More information about the environment images is described with reference to FIG. 3, which shows a block diagram 300 of a plurality of environment images respectively acquired at a plurality of acquisition time points, according to some implementations of the present disclosure. As shown in FIG. 3, multiple environment images may be acquired at different time points of the day at predetermined time intervals (e.g., 2 hours, 4 hours, etc.); for example, the environment image 220 may be acquired at time point T1, …, and the environment image 222 may be acquired at time point TN.
According to one exemplary implementation of the present disclosure, the plurality of environment images may be stored directly at the terminal device for running the augmented reality application 240. Alternatively and/or additionally, a plurality of environment images may be stored at a server providing an AR service in order to acquire a desired environment image via a network.
It will be appreciated that while the above illustrates the process of acquiring environmental images at different points in time of the day, alternatively and/or additionally, different environmental images may also be acquired in different seasons (e.g., spring, summer, fall, and winter) in outdoor application scenarios. In this way, an environmental image that more closely matches a particular time of use may be selected and rendered based on both the season and the point in time that the user uses the augmented reality application 240. Thereby, the rendering effect of the virtual object 250 can be further improved, and the illumination information of the virtual object 250 can be made to more match the surrounding real environment.
Positioning is usually performed based on a Visual Positioning System (VPS) in augmented reality applications. In order to improve the positioning accuracy of the VPS, a large number of images of the real environment need to be acquired in advance at different time points and different positions. According to an exemplary implementation of the present disclosure, additional steps are not required to acquire the plurality of environment images 220, …, 222 described above; instead, each environment image previously acquired for VPS purposes may be directly used as an environment image for rendering purposes. In this way, the workload of data collection is not increased; rather, the already collected environment images can be reused for the purpose of rendering the virtual object 250.
It will be appreciated that the lighting information of the virtual object 250 in the augmented reality application 240 will vary with the position of the user in the real environment, and thus the device position may also be considered when rendering the virtual object 250. According to an exemplary implementation of the present disclosure, a set of environmental images matching the current time point 244 may be first found from the plurality of environmental images based on the current time point 244. Further, an environment image that more closely conforms to the device location may be found in the set of environment images as the rendered environment image 230. In particular, a plurality of environment images respectively acquired at a plurality of acquisition positions in the real environment may be acquired.
More information about the acquisition locations is described with reference to FIG. 4. FIG. 4 illustrates a block diagram 400 of a plurality of environment images respectively acquired at a plurality of acquisition locations, according to some implementations of the present disclosure. As shown in FIG. 4, the real environment 410 may include a plurality of acquisition locations 420, 430, …, and 440, and an environment image may be acquired at each acquisition location. For example, the environment image 220 may be acquired at the acquisition location 420, other environment images may be acquired at the acquisition locations 430 and 440, respectively, and so on.
According to one exemplary implementation of the present disclosure, at time point T1, a plurality of environment images may be acquired at a plurality of acquisition positions, respectively; …; at time point TN, a plurality of environment images may be acquired at a plurality of acquisition positions, respectively. It will be appreciated that the present disclosure does not limit the number of environment images acquired at each time point, nor whether the acquisition locations chosen at each time point are the same; rather, a plurality of environment images acquired for VPS purposes may be used directly. The plurality of environment images may be managed in terms of acquisition time and acquisition location.
According to an example implementation of the present disclosure, a device location of the user in the real environment 410 may be determined, and a rendering environment image 230 matching the device location may be selected from the plurality of environment images based on a comparison of the device location with the plurality of acquisition locations. More details of selecting the rendering environment image 230 are described with reference to FIG. 5, which illustrates a block diagram 500 of a process for selecting the rendering environment image 230 based on a comparison of a device location 510 of a terminal device running the augmented reality application and the acquisition locations, according to some implementations of the present disclosure.
As shown in FIG. 5, assuming that the user is located at a device location 510 in the real environment 410, the distances between the device location 510 and the respective acquisition locations 420, 430, …, and 440 may be obtained. Further, the environment image acquired at the acquisition location closest to the device location 510 may be selected based on a comparison of the respective distances. In FIG. 5, assuming that the distances between the device location 510 and the acquisition locations 420 and 440 are 520 and 530, respectively, the comparison shows that the distance 520 is greater than the distance 530; at this point, the environment image acquired at the acquisition location 440 may be selected as the rendering environment image 230.
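A minimal sketch of this distance-based selection follows; it assumes the same hypothetical EnvironmentImage structure as the earlier sketch and simply returns the candidate acquired closest to the device position.

```python
import numpy as np

def select_by_position(images, device_position):
    """Pick the environment image acquired closest to the device position.
    `images` is a list of objects with a `.position` attribute (hypothetical)."""
    device_position = np.asarray(device_position, dtype=float)
    distances = [np.linalg.norm(np.asarray(img.position, dtype=float) - device_position)
                 for img in images]
    return images[int(np.argmin(distances))]
```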
With the exemplary implementations of the present disclosure, by selecting the environment image acquired at the acquisition location 440 closest to the device location 510, the virtual object 250 may be rendered with the environment image that most closely approximates the surrounding environment seen by the user at the device location 510. In this way, the lighting effect of the virtual object 250 may be made to more closely match the ambient lighting seen at the device location 510, thereby enhancing the visual effect of the augmented reality application 240.
It will be appreciated that there may be occlusions of objects in the real environment 410, such as walls, plants, etc., which will affect the lighting effect in the real environment 410. According to an example implementation of the present disclosure, the occlusion relationship may be further considered when selecting the rendering environment image 230.
FIG. 6 illustrates a block diagram 600 of a process for selecting a rendered environment image 230 based on an occlusion relationship according to some implementations of the present disclosure. Specifically, the spatial structure of the real environment 410 may be determined based on a plurality of environment images. The spatial structure of the real environment 410 may be obtained based on three-dimensional reconstruction techniques that are currently known and/or will be developed in the future, and will not be described in detail herein. Further, the spatial structure may be utilized to determine whether an obstacle exists between the acquisition location of the rendered environment image and the device location 510. As shown in fig. 6, the spatial structure indicates that an obstacle 610 is present in the real environment 410, and the obstacle 610 is located between the device location 510 and the acquisition location 440.
It will be appreciated that the obstacle 610 may prevent the user at the device location 510 from directly seeing the acquisition location 440; thus, if the environment image acquired at the acquisition location 440 is directly taken as the rendering environment image 230, the illumination information of the rendered virtual object 250 may not conform to the real ambient illumination around the user. At this point, even though the acquisition location 440 is closest to the device location 510, the environment image at the acquisition location 440 cannot be selected.
According to an example implementation of the present disclosure, an environmental image acquired at another acquisition location 420 adjacent to the device location 510 may be selected from a plurality of environmental images. For example, because there are no obstructions between the acquisition location 420 and the device location 510, the environmental image at the acquisition location 420 that is closer to the device location 510 may be selected as the rendered environmental image 230. With exemplary implementations of the present disclosure, an environment image that more closely matches the user's ambient lighting information may be selected by considering the occlusion relationship. In this way, the discordance problem that arises when selecting the rendering environment image 230 based on distance alone may be avoided.
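The occlusion-aware variant can be sketched as follows. The disclosure does not specify how the reconstructed spatial structure is queried; this sketch assumes, purely for illustration, that obstacles are approximated by axis-aligned bounding boxes and tests the line segment between the device position and each acquisition position against them.

```python
import numpy as np

def segment_hits_aabb(p0, p1, box_min, box_max) -> bool:
    """Slab test: does the segment p0->p1 intersect the axis-aligned box?
    (A simplified stand-in for querying the reconstructed spatial structure.)"""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    t_min, t_max = 0.0, 1.0
    for axis in range(3):
        if abs(d[axis]) < 1e-9:
            if p0[axis] < box_min[axis] or p0[axis] > box_max[axis]:
                return False
        else:
            t0 = (box_min[axis] - p0[axis]) / d[axis]
            t1 = (box_max[axis] - p0[axis]) / d[axis]
            t0, t1 = min(t0, t1), max(t0, t1)
            t_min, t_max = max(t_min, t0), min(t_max, t1)
            if t_min > t_max:
                return False
    return True

def select_unoccluded(images, device_position, obstacles):
    """Among candidates sorted by distance, return the nearest one whose
    acquisition position has no obstacle between it and the device position."""
    by_distance = sorted(
        images,
        key=lambda img: np.linalg.norm(np.asarray(img.position, float) -
                                       np.asarray(device_position, float)))
    for img in by_distance:
        blocked = any(segment_hits_aabb(device_position, img.position, lo, hi)
                      for lo, hi in obstacles)
        if not blocked:
            return img
    return by_distance[0]  # fall back to the nearest image if all are occluded
```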
It will be appreciated that, in acquiring the environment images, an engineer may use a panoramic image acquisition device to acquire a plurality of panoramic environment images at different time points and at different locations. Generally, the panoramic image acquisition device is mounted on a pan-tilt device, and the engineer can hold the pan-tilt and/or fix it to a support or the like during capture. During the acquisition process, the engineer needs to move the pan-tilt in order to acquire environment images at different acquisition positions. At this time, it cannot always be ensured that the acquisition angle of the pan-tilt remains consistent.
It will be appreciated that different acquisition angles result in different pixel distributions of the surrounding scene in the acquired panoramic images. The rendering effect of the virtual object 250 depends on the pixel distribution of the rendering environment image 230, and thus different acquisition angles will directly result in different rendering lighting effects of the virtual object 250. At this time, it is necessary to normalize the acquisition angles of the respective acquired panoramic images, that is, to "zero" them to a standard angle (for example, (0, 0, 0)).
According to one exemplary implementation of the present disclosure, a normalization process may be performed for each of the acquired environmental images. Specifically, acquisition angles associated with the respective environment images may be acquired, and the environment images are converted to standard environment images at standard angles based on the acquisition angles. In this way, it is possible to process the respective environment images in a uniform manner and obtain an accurate rendering illumination effect under a uniform standard angle.
FIG. 7 illustrates a block diagram 700 of a process for converting an environment image to a standard environment image, according to some implementations of the present disclosure. The purpose of the conversion process is to angle-zero each panoramic image, i.e., to convert the Euler angle of the panoramic image to a predefined standard angle 720 of (0, 0, 0) while keeping the position of the panoramic image unchanged. As shown in FIG. 7, the environment image 220 may be obtained when the panoramic image acquisition device operates at an acquisition angle 710.
According to an exemplary implementation of the present disclosure, the acquisition angle 710 may be defined based on a variety of coordinate systems; for example, the coordinate system 712 may be used as the reference coordinate system. According to an example implementation of the present disclosure, the acquisition angle 710 may be represented by Euler angles (roll, pitch, heading). The acquisition position (x, y, z) and the acquisition angle (roll, pitch, heading) of the environment image can be determined based on known algorithms in VPS technology, and thus will not be described in detail.
Further, individual pixels in the environment image 220 may be processed one by one to convert the environment image 220 to the standard environment image 730. Specifically, for a pixel in the environment image 220, a standard pixel in the standard environment image 730 corresponding to the pixel may be determined based on the acquisition angle 710, the spherical coordinates of the pixel in the environment image 220, and the standard angle 720. In the following, more details of the normalization process will be described with reference to FIG. 8, which shows a block diagram 800 of a pixel mapping process according to some implementations of the present disclosure.
As shown in FIG. 8, the environment image 220 includes a pixel 810 (e.g., an arbitrary pixel), and the pixel 810 may be mapped to a pixel 820 in the standard environment image 730 based on a mathematical transformation. The spherical coordinates of the pixel 820 in the converted standard environment image 730 can be expressed as (long_new, lat_new), and the corresponding quaternion can be expressed as Q_new. For the environment image 220 before conversion, the Euler angle of the environment image 220 may be represented as (roll, pitch, heading), and the corresponding quaternion may be represented as Q_pano. The quaternion of the pixel 810 in the environment image 220 may then be represented as Q_new - Q_pano, and the spherical coordinates of the pixel 810 may be expressed as (long_old, lat_old).
Based on the mapping relationship, the color of the pixel 820 in the standard environment image 730 is the color of the pixel 810 in the environment image 220. It will be understood that the conversion between the image coordinates of the panoramic image and the spherical coordinates, and the conversion between the euler angles and the quaternions may be performed based on coordinate conversion formulas that have been proposed so far and/or will be developed in the future, and will not be described in detail herein. Thus, the color of each pixel in the standard environment image 730 can be determined one by one, thereby obtaining the complete standard environment image 730.
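A rough sketch of the whole normalization ("zeroing") pass is given below. It assumes an equirectangular panorama stored as a NumPy array, uses SciPy's Rotation for the Euler-angle handling, and picks one common longitude/latitude convention; the exact Euler order and rotation direction used by the disclosure may differ, so treat this purely as an illustration.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pixel_to_dir(u, v, width, height):
    """Equirectangular pixel grid -> unit direction vectors.
    Longitude spans [-pi, pi), latitude spans [+pi/2, -pi/2] top to bottom
    (one common convention; the disclosure does not fix it)."""
    lon = (u + 0.5) / width * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi
    return np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)], axis=-1)

def dir_to_pixel(dirs, width, height):
    """Unit direction vectors -> equirectangular pixel indices."""
    lon = np.arctan2(dirs[..., 1], dirs[..., 0])
    lat = np.arcsin(np.clip(dirs[..., 2], -1.0, 1.0))
    u = ((lon + np.pi) / (2.0 * np.pi)) * width - 0.5
    v = ((np.pi / 2.0 - lat) / np.pi) * height - 0.5
    return (np.clip(np.round(u), 0, width - 1).astype(int),
            np.clip(np.round(v), 0, height - 1).astype(int))

def zero_panorama(pano, roll, pitch, heading):
    """Resample a panorama captured at Euler angles (roll, pitch, heading), in
    degrees, into a 'zeroed' panorama at the standard angle (0, 0, 0).
    The 'xyz' Euler order and the rotation direction are assumptions."""
    height, width = pano.shape[:2]
    r_pano = Rotation.from_euler('xyz', [roll, pitch, heading], degrees=True)
    v, u = np.mgrid[0:height, 0:width]
    dirs_std = pixel_to_dir(u, v, width, height)             # directions in the standard image
    dirs_old = r_pano.inv().apply(dirs_std.reshape(-1, 3))   # same directions in the captured frame
    src_u, src_v = dir_to_pixel(dirs_old.reshape(height, width, 3), width, height)
    return pano[src_v, src_u]  # nearest-neighbor copy of pixel colors
```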
With exemplary implementations of the present disclosure, the standard environmental image 730 may be determined based on a simple mathematical transformation. Although the normalization process has been described above with only the environment image 220 as an example, similar processing may be performed in advance for each acquired environment image, so that processing is performed in a subsequent rendering process directly based on the standard environment image of the selected rendering environment image 230. In this way, when a subsequent rendering process is performed using the standard environment image 730, it can be ensured that the rendering illumination effect of the virtual object 250 more matches the user's surrounding real environment.
It will be appreciated that, although the processing of environment images has been described above taking a panoramic image in rectangular format as an example, the panoramic image may alternatively and/or additionally be stored in a spherical format as shown in FIG. 9. FIG. 9 illustrates a block diagram 900 of a spherical panoramic image 910 according to some implementations of the present disclosure. Each pixel in the spherical panoramic image 910 may be processed one by one based on the principles of the normalization process described above, so as to obtain a corresponding standard environment image stored in spherical format.
According to an exemplary implementation of the present disclosure, after the rendering environment image 230 that most closely matches the current time and device location 510 has been selected, the associated standard environment image of the rendering environment image 230 may be used as an ambient light map to render the virtual object 250. With an exemplary implementation of the present disclosure, an ambient light map may be input to the renderer, thereby obtaining a virtual object 250 that matches the user's surrounding environment.
It will be appreciated that the rendering efficiency of directly using panoramic images as ambient light maps in the actual rendering process may not be ideal. In this case, the normalized panoramic image may be further converted into a spherical harmonic illumination parameter vector based on a spherical harmonic illumination model, and the rendering process may be performed using that vector. In particular, each normalized standard environment image may be processed in a similar manner, and a corresponding spherical harmonic illumination parameter vector may be generated for each standard environment image. With exemplary implementations of the present disclosure, in the subsequent rendering process, the corresponding spherical harmonic illumination parameter vector may be directly invoked, thereby performing the rendering process with higher performance. In this way, rendering efficiency may be improved, thereby reducing the latency of the augmented reality application 240 in rendering the virtual object.
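As one example of what such a conversion could look like, the sketch below projects an equirectangular standard environment image onto a second-order (9-coefficient) real spherical harmonic basis, a common choice for diffuse ambient lighting. The disclosure does not fix the order of the expansion, so the choice of degree 2 is an assumption.

```python
import numpy as np

def sh_basis_order2(dirs):
    """Real spherical harmonics up to degree 2 (9 basis values per direction)."""
    x, y, z = dirs[..., 0], dirs[..., 1], dirs[..., 2]
    return np.stack([
        0.282095 * np.ones_like(x),
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y)], axis=-1)

def sh_coefficients(pano):
    """Project an equirectangular environment image (H, W, 3) onto 2nd-order SH,
    returning a 9x3 parameter vector (one column per color channel)."""
    height, width = pano.shape[:2]
    v, u = np.mgrid[0:height, 0:width]
    lon = (u + 0.5) / width * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi
    dirs = np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)], axis=-1)
    # Solid angle covered by each pixel on the unit sphere.
    d_omega = (2.0 * np.pi / width) * (np.pi / height) * np.cos(lat)
    basis = sh_basis_order2(dirs)            # (H, W, 9)
    weighted = basis * d_omega[..., None]    # (H, W, 9)
    return np.einsum('hwk,hwc->kc', weighted, pano.astype(float))
```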
According to one exemplary implementation of the present disclosure, a plurality of environment images may be processed in advance to extract corresponding spherical harmonic illumination parameter vectors. The plurality of spherical harmonic illumination parameter vectors may be stored directly at the terminal device for running the augmented reality application 240. Alternatively and/or additionally, a plurality of spherical harmonic illumination parameter vectors may be stored at the server in order to obtain the desired spherical harmonic illumination parameter vector via the network.
According to one exemplary implementation of the present disclosure, the plurality of environment images may be indexed by acquisition time point and acquisition location, respectively, so as to improve search efficiency when selecting an environment image matching the current time point and device location from a large number of environment images. For example, the plurality of environment images may be indexed by the chronological order of the acquisition time points and/or the distances between the acquisition positions. The environment image most similar to the current time and device location may then be searched for directly based on the index.
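One hypothetical way to realize such an index is sketched below: images are bucketed by acquisition hour, and a k-d tree over acquisition positions is built per bucket. The bucketing granularity and the use of scipy.spatial.cKDTree are illustrative choices, not requirements of the disclosure.

```python
from collections import defaultdict
import numpy as np
from scipy.spatial import cKDTree

class EnvironmentImageIndex:
    """Toy index over environment images by acquisition time point and location."""

    def __init__(self, images):
        self._buckets = defaultdict(list)
        for img in images:
            self._buckets[img.acquired_at.hour].append(img)
        self._trees = {
            hour: cKDTree(np.array([img.position for img in bucket], dtype=float))
            for hour, bucket in self._buckets.items()}

    def query(self, now, device_position):
        """Return the stored image nearest to the device position among the
        images whose acquisition hour is closest to the current hour."""
        hour = min(self._buckets,
                   key=lambda h: min(abs(h - now.hour), 24 - abs(h - now.hour)))
        _, idx = self._trees[hour].query(np.asarray(device_position, dtype=float))
        return self._buckets[hour][int(idx)]
```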
It will be appreciated that the number of acquisition locations may be small and/or the distribution of the multiple acquisition locations may not evenly cover the real environment 410. In that case, even if the environment image at the acquisition position closest to the device position 510 is used, a satisfactory rendering illumination effect may sometimes not be obtained. To further improve the rendering lighting effect, new rendering parameters may be generated based on the environment images at two or more acquisition locations near the device location 510. Further details are described with reference to FIG. 10, which shows a block diagram 1000 of a process for generating new rendering parameters based on a plurality of environment images, according to some implementations of the present disclosure.
FIG. 10 illustrates the generation of new rendering parameters based on multiple environment images, taking spatial location-based interpolation as an example. As shown in FIG. 10, acquisition locations 420 and 440 near the device location 510 may be determined based on the index described above. For example, a standard environment image 730 of the environment image 220 at the acquisition location 420 may be acquired, and a corresponding spherical harmonic illumination parameter vector 1010 may further be obtained. Similarly, a standard environment image 1022 of the environment image 1020 at the acquisition location 440 may be acquired, and a corresponding spherical harmonic illumination parameter vector 1024 may further be obtained. At this time, a spatial location-based interpolation 1030 may be determined based on the spherical harmonic illumination parameter vectors 1010 and 1024. Compared with using either of the spherical harmonic illumination parameter vectors 1010 and 1024 alone, the interpolation 1030 takes into account the illumination information at both acquisition locations 420 and 440, and can thereby more accurately approximate the ambient illumination information at the device location 510.
According to an exemplary implementation of the present disclosure, the interpolation 1030 may be determined based on a variety of ways. For example, the interpolation between two or more vectors may be determined based on any one of nearest neighbor interpolation, linear interpolation, and bilinear interpolation. In a simple example, the interpolation 1030 may be determined based on an average of the individual vectors. Alternatively and/or additionally, the distance between the respective acquisition location and the device location 510 may be utilized to determine the interpolation 1030 based on a weighted average. The interpolation 1030 may then be used as a new spherical harmonic illumination parameter vector and as a rendering parameter for the renderer.
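The distance-weighted variant mentioned above could look like the following sketch, which forms an inverse-distance weighted average of the spherical harmonic illumination parameter vectors from nearby acquisition positions; the specific weighting scheme is an assumption, not prescribed by the disclosure.

```python
import numpy as np

def interpolate_sh_by_position(sh_vectors, acquisition_positions, device_position, eps=1e-6):
    """Inverse-distance weighted average of SH illumination parameter vectors
    (each shaped like (9, 3)) from several acquisition positions."""
    device_position = np.asarray(device_position, dtype=float)
    weights = np.array([1.0 / (np.linalg.norm(np.asarray(pos, float) - device_position) + eps)
                        for pos in acquisition_positions])
    weights /= weights.sum()
    stacked = np.stack([np.asarray(v, dtype=float) for v in sh_vectors])  # (N, 9, 3)
    return np.tensordot(weights, stacked, axes=1)                          # (9, 3)
```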
It will be appreciated that figure 10 only schematically illustrates one example of generating new rendering parameters based on a plurality of ambient images. Alternatively and/or additionally, the occlusion relationship may be further considered, e.g. acquisition positions where no obstacles are present between the acquisition position and the device position may be selected based on the above described method. Assuming that there is an obstruction between the acquisition location 440 and the device location 510, the environmental images at the other acquisition locations may be selected.
It will be appreciated that FIG. 10 describes the process of generating new rendering parameters using multiple environment images by way of the example of the spatial interpolation 1030 only; a temporal interpolation may alternatively and/or additionally be obtained in a similar manner. For example, for the spherical harmonic illumination parameter vectors at different time points, a further interpolation (e.g., a linear interpolation) may be performed in the time dimension to simulate the ambient illumination information at a time point closer to the current time point. Assume that only environment images acquired at 8 a.m. and 12 noon are currently available, whereas the current time point is 10 a.m. At this time, an interpolation may be determined based on the spherical harmonic illumination parameter vectors associated with the 8 a.m. and 12 noon environment images, and this interpolation may be used as a new rendering parameter for rendering the virtual object.
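A sketch of the corresponding interpolation in the time dimension, under the same hedged assumptions, is shown below.

```python
import numpy as np

def interpolate_sh_by_time(sh_a, t_a, sh_b, t_b, t_now):
    """Linearly interpolate two spherical harmonic parameter vectors along the
    time axis, e.g. between an 8 a.m. vector and a 12 noon vector queried at
    10 a.m. Times are plain numbers such as hours since midnight (sketch only)."""
    sh_a, sh_b = np.asarray(sh_a, dtype=float), np.asarray(sh_b, dtype=float)
    if t_b == t_a:
        return sh_a
    alpha = (t_now - t_a) / (t_b - t_a)
    alpha = min(max(alpha, 0.0), 1.0)   # clamp to the available time range
    return (1.0 - alpha) * sh_a + alpha * sh_b

# Example: sh_10am = interpolate_sh_by_time(sh_8am, 8.0, sh_noon, 12.0, 10.0)
```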
With example implementations of the present disclosure, the interpolation may be determined based on the spherical harmonic illumination parameter vectors associated with the plurality of ambient images, in temporal and/or spatial ranges, respectively. In this way, the interpolation may take into account more ambient images in the temporal and/or spatial range, thereby obtaining ambient lighting information that more closely matches the current time and/or device location. When the rendering is performed using interpolation, the rendering illumination effect of the virtual object 250 may be more matched to the real environment around the user.
The specific process for rendering virtual object 250 in augmented reality application 240 has been described above. Hereinafter, specific rendering effects are provided with reference to fig. 11A and 11B. Fig. 11A and 11B illustrate block diagrams 1100A and 1100B, respectively, of presenting a virtual object 250 in an augmented reality application 240, according to some implementations of the present disclosure. In particular, diagram 1100A schematically illustrates the effect of a user using augmented reality application 240 during the daytime. At this time, the virtual object 250 may be rendered based on the daytime environment image acquired at the acquisition position close to the device position. As shown in fig. 11A, the surface of the virtual object 250 at this time coincides with daytime illumination of the real environment, and exhibits a brighter illumination effect.
Further, diagram 1100B schematically shows the effect of a user using the augmented reality application 240 at night. At this time, the virtual object 250 may be rendered based on a night environment image captured at a capture location close to the device location. As shown in fig. 11B, the surface of the virtual object 250 at this time coincides with the night illumination of the real environment, and exhibits a darker illumination effect.
It will be appreciated that, although the rendering process has been described above with a cube as a specific example of a virtual object, according to one exemplary implementation of the present disclosure the virtual object may represent other objects. For example, in an AR-based street view navigation application, the virtual object may represent a virtual signboard, a virtual mascot, etc. of a street-side store; in an AR-based gaming application, the virtual object may represent a virtual item, a virtual character, etc.; and in an AR-based shopping application, the virtual object may represent a virtual garment being tried on, or the like.
With exemplary implementations of the present disclosure, multiple ambient images (i.e., panoramic images) captured for VPS purposes may be reused and the visual effect of virtual object rendering improved without additional data capture overhead. Further, by subjecting the captured panoramic images to the normalization process, the panoramic images captured at different angles can be converted to the standard environment image in the standard direction (i.e., (0,0,0)). Thereby, the standard environment image can be directly used for rendering without further coordinate conversion.
Alternatively and/or additionally, to further improve rendering performance, the spherical harmonic illumination parameter vector may be extracted from the normalized processed standard environment image. Alternatively and/or additionally, the spherical harmonic illumination parameter vectors associated with the plurality of ambient images may be interpolated over time and/or space to obtain illumination parameters that more closely match the current time and/or device location. In this way, the illumination information around the current point in time and the current device location may be simulated, thereby making the rendered virtual object more consistent with the surrounding real environment.
Example procedure
FIG. 12 illustrates a flow diagram of a method 1200 for rendering virtual objects, in accordance with some implementations of the present disclosure. Specifically, at block 1210, a plurality of environment images of the real environment are acquired, the plurality of environment images including images of the real environment acquired at a plurality of acquisition time points, respectively. At block 1220, an environment image that matches a current time point at which the augmented reality application is used is selected from the plurality of environment images as a rendering environment image, based on the plurality of acquisition time points. At block 1230, a virtual object rendered with the rendering environment image is presented in the augmented reality application.
According to an exemplary implementation of the present disclosure, the plurality of environment images are acquired at a plurality of acquisition positions in the real environment, respectively, and the selecting the rendering environment image further comprises: determining the device position of the terminal device running the augmented reality application in the real environment; and selecting a rendering environment image matched with the device position from the plurality of environment images based on the plurality of acquisition positions.
According to an exemplary implementation of the present disclosure, selecting the rendering environment image matching the device location includes: respectively determining corresponding distances between the plurality of acquisition positions of the plurality of environment images and the device position; and selecting the rendering environment image based on a comparison of the respective distances.
According to an example implementation of the present disclosure, selecting the rendering environment image based on the comparison of the respective distances further comprises: determining a spatial structure of the real environment based on the plurality of environment images; determining whether an obstacle exists between an acquisition position of the rendering environment image and a device position based on the spatial structure; and selecting the rendering environment image in response to determining that no obstacle exists between the capture location of the rendering environment image and the device location.
According to an exemplary implementation of the present disclosure, the method further comprises: in response to determining that an obstacle exists between the capture position of the rendered environment image and the device position, another environment image of the plurality of environment images having a capture position adjacent to the device position is selected as the rendered environment image.
According to one exemplary implementation of the present disclosure, presenting a virtual object includes: acquiring an acquisition angle associated with a rendering environment image; based on the acquisition angle, converting the rendering environment image into a standard environment image under a standard angle; and rendering the virtual object using the standard ambient image as an ambient light map.
According to one exemplary implementation of the present disclosure, converting the rendering environment image to the standard environment image includes: and determining a standard pixel corresponding to the pixel in the standard environment image based on the acquisition angle, the spherical coordinate of the pixel in the rendering environment image and the standard angle for the pixel in the rendering environment image.
According to an exemplary implementation of the present disclosure, rendering a virtual object includes: determining a spherical harmonic illumination parameter vector associated with the standard environmental image based on the spherical harmonic illumination model; and rendering the virtual object based on the spherical harmonic illumination parameter vector.
According to one exemplary implementation of the present disclosure, rendering the virtual object using the spherical harmonic illumination parameter vector includes: selecting another rendering environment image from the plurality of environment images based on at least any one of the device location and the current time; determining another spherical harmonic illumination parameter vector of the other rendering environment image; and rendering the virtual object based on an interpolation of the spherical harmonic illumination parameter vector and the other spherical harmonic illumination parameter vector.
According to an exemplary implementation of the disclosure, the interpolation includes at least any one of: location based interpolation, time based interpolation.
Example apparatus and devices
Fig. 13 illustrates a block diagram of an apparatus 1300 for rendering virtual objects according to some implementations of the present disclosure. The apparatus 1300 includes: an acquisition module 1310 configured to acquire a plurality of environment images of a real environment, the plurality of environment images including images of the real environment acquired at a plurality of acquisition time points, respectively; a selecting module 1320 configured to select, as a rendering environment image, an environment image that matches a current time point at which the augmented reality application is used from the plurality of environment images based on the plurality of acquisition time points; and a presentation module 1330 configured to present virtual objects rendered with the rendering environment image in an augmented reality application.
According to an exemplary implementation of the present disclosure, the plurality of environment images are acquired at a plurality of acquisition positions in the real environment, respectively, and the selection module 1320 further includes: a location determination module configured to determine a device location of a terminal device running an augmented reality application in a real environment; and an image selection module configured to select a rendered environment image from the plurality of environment images that matches the device location based on the plurality of acquisition locations.
According to one exemplary implementation of the present disclosure, an image selection module includes: a distance determination module configured to determine respective distances between a plurality of acquisition locations of the plurality of environmental images and the device location, respectively; and a comparison module configured to select a rendered environment image based on the comparison of the respective distances.
According to an exemplary implementation of the disclosure, the comparing module further comprises: a structure determination module configured to determine a spatial structure of a real environment based on a plurality of environment images; a detection module configured to determine whether an obstacle exists between an acquisition position of the rendering environment image and a device position based on a spatial structure; and an obstacle-based selection module configured to select the rendered environment image in response to determining that no obstacle exists between the acquisition location of the rendered environment image and the device location.
According to an example implementation of the present disclosure, the obstacle-based selection module is further configured to: in response to determining that an obstacle exists between the capture position of the rendered environment image and the device position, another environment image of the plurality of environment images having a capture position adjacent to the device position is selected as the rendered environment image.
According to an exemplary implementation of the present disclosure, the presenting module 1330 includes: an angle acquisition module configured to acquire an acquisition angle associated with rendering the environment image; the conversion module is configured to convert the rendering environment image into a standard environment image under a standard angle based on the acquisition angle; and a rendering module configured to render the virtual object using the standard environment image as an environment light map.
According to one exemplary implementation of the present disclosure, a conversion module includes: a pixel determination module configured to determine, for a pixel in the rendering environment image, a standard pixel in the standard environment image corresponding to the pixel based on the acquisition angle, the spherical coordinate of the pixel in the rendering environment image, and the standard angle.
According to one exemplary implementation of the present disclosure, a rendering module includes: a vector determination module configured to determine a spherical harmonic illumination parameter vector associated with the standard environmental image based on the spherical harmonic illumination model; and a virtual object rendering module configured to render the virtual object based on the spherical harmonic illumination parameter vector.
According to an example implementation of the present disclosure, the selection module 1320 is further configured to select another rendering environment image from the plurality of environment images based on at least any one of the device location and the current time; the vector determination module is further configured to determine another spherical harmonic illumination parameter vector of the other rendering environment image; and the virtual object rendering module further comprises: an interpolation-based rendering module configured to render the virtual object based on an interpolation of the spherical harmonic illumination parameter vector and the other spherical harmonic illumination parameter vector.
According to an exemplary implementation of the disclosure, the interpolation includes at least one of: location-based interpolation and time-based interpolation.
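A minimal sketch of how such an interpolation could be realised is given below; the specific weighting schemes are illustrative assumptions rather than details of the disclosure:

```python
import math
import numpy as np

def interpolate_sh(sh_a: np.ndarray, sh_b: np.ndarray, weight_b: float) -> np.ndarray:
    """Linearly blend two spherical harmonic illumination parameter vectors;
    rendering with the blended vector yields a smooth transition between the
    two environment images."""
    return (1.0 - weight_b) * np.asarray(sh_a) + weight_b * np.asarray(sh_b)

def time_based_weight(t_a: float, t_b: float, t_now: float) -> float:
    """Weight of the second image when the current time point lies between the
    two acquisition time points."""
    if t_b == t_a:
        return 0.5
    return min(max((t_now - t_a) / (t_b - t_a), 0.0), 1.0)

def location_based_weight(p_a, p_b, p_device) -> float:
    """Weight of the second image derived from the relative distances of the
    device position to the two acquisition positions (the closer image dominates)."""
    d_a, d_b = math.dist(p_device, p_a), math.dist(p_device, p_b)
    return 0.5 if d_a + d_b == 0 else d_a / (d_a + d_b)
```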
Fig. 14 illustrates a block diagram of a computing device 1400 capable of implementing various implementations of the present disclosure. It should be understood that the computing device 1400 illustrated in Fig. 14 is merely exemplary and should not constitute any limitation on the functionality or scope of the implementations described herein. The computing device 1400 shown in Fig. 14 may be used to implement the methods described above.
As shown in Fig. 14, computing device 1400 is in the form of a general-purpose computing device. Components of computing device 1400 may include, but are not limited to, one or more processors or processing units 1410, memory 1420, storage 1430, one or more communication units 1440, one or more input devices 1450, and one or more output devices 1460. The processing unit 1410 may be a real or virtual processor and can perform various processes according to programs stored in the memory 1420. In a multi-processor system, multiple processing units execute computer-executable instructions in parallel to improve the parallel processing capability of computing device 1400.
Computing device 1400 typically includes a number of computer storage media. Such media may be any available media that are accessible by computing device 1400, including, but not limited to, volatile and non-volatile media, and removable and non-removable media. Memory 1420 may be volatile memory (e.g., registers, cache, random access memory (RAM)), non-volatile memory (e.g., read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory), or some combination thereof. Storage 1430 may be a removable or non-removable medium and may include a machine-readable medium, such as a flash drive, a diskette, or any other medium, which may be used to store information and/or data (e.g., training data) and which may be accessed within computing device 1400.
Computing device 1400 may further include additional removable/non-removable and volatile/non-volatile storage media. Although not shown in Fig. 14, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, non-volatile optical disk may be provided. In these cases, each drive may be connected to a bus (not shown) by one or more data media interfaces. Memory 1420 may include a computer program product 1425 having one or more program modules configured to perform the various methods or acts of the various implementations of the present disclosure.
The communication unit 1440 enables communication with other computing devices over a communication medium. Additionally, the functionality of the components of computing device 1400 may be implemented in a single computing cluster or in multiple computing machines that are capable of communicating over a communication connection. Thus, computing device 1400 may operate in a networked environment using logical connections to one or more other servers, network personal computers (PCs), or another network node.
The input devices 1450 may be one or more input devices, such as a mouse, a keyboard, or a trackball. The output devices 1460 may be one or more output devices, such as a display, speakers, or a printer. As desired, computing device 1400 can also communicate, via communication unit 1440, with one or more external devices (not shown) such as storage devices and display devices, with one or more devices that enable a user to interact with computing device 1400, or with any device (e.g., a network card or a modem) that enables computing device 1400 to communicate with one or more other computing devices. Such communication may be performed via input/output (I/O) interfaces (not shown).
According to an exemplary implementation of the present disclosure, a computer-readable storage medium having computer-executable instructions stored thereon is provided, wherein the computer-executable instructions are executed by a processor to implement the method described above. According to an exemplary implementation of the present disclosure, a computer program product is also provided, which is tangibly stored on a non-transitory computer-readable medium and comprises computer-executable instructions that are executed by a processor to implement the method described above. According to an exemplary implementation of the present disclosure, a computer program product is provided, on which a computer program is stored, the computer program, when executed by a processor, implementing the method described above.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus, devices and computer program products implemented in accordance with the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer-readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various implementations of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing has described implementations of the present disclosure. The above description is illustrative rather than exhaustive and is not limited to the implementations disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described implementations. The terminology used herein was chosen to best explain the principles of the implementations, their practical application, or their improvement over technologies available in the marketplace, or to enable others of ordinary skill in the art to understand the implementations disclosed herein.

Claims (13)

1. A method for rendering a virtual object in an augmented reality application, comprising:
acquiring a plurality of environment images of a real environment, the plurality of environment images including images of the real environment acquired at a plurality of acquisition time points, respectively;
selecting, from the plurality of environment images and based on the plurality of acquisition time points, an environment image that matches a current time point at which the augmented reality application is used, as a rendering environment image; and
presenting the virtual object rendered with the rendering environment image in the augmented reality application.
2. The method of claim 1, wherein the plurality of environment images are acquired at a plurality of acquisition locations in the real environment, respectively, and selecting the rendering environment image further comprises:
determining a device position of a terminal device running the augmented reality application in the real environment; and
selecting, based on the plurality of acquisition locations, the rendering environment image that matches the device position from the plurality of environment images.
3. The method of claim 2, wherein determining the rendering environment image that matches the device location comprises:
determining respective distances between the plurality of acquisition locations of the plurality of environment images and the device position; and
selecting the rendering environment image based on a comparison of the respective distances.
4. The method of claim 3, wherein selecting the rendering environment image based on the comparison of the respective distances further comprises:
determining a spatial structure of the real environment based on the plurality of environmental images;
determining, based on the spatial structure, whether an obstacle exists between an acquisition location of the rendering environment image and the device position; and
selecting the rendering environment image in response to determining that no obstacle exists between the acquisition location of the rendering environment image and the device position.
5. The method of claim 4, further comprising: in response to determining that an obstacle exists between the acquisition location of the rendering environment image and the device position, selecting another environment image of the plurality of environment images whose acquisition location is adjacent to the device position as the rendering environment image.
6. The method of claim 2, wherein presenting the virtual object comprises:
acquiring an acquisition angle associated with the rendering environment image;
converting the rendering environment image to a standard environment image at a standard angle based on the acquisition angle; and
rendering the virtual object using the standard environment image as an environment light map.
7. The method of claim 6, wherein converting the rendering environment image to the standard environment image comprises: for a pixel in the rendering environment image,
determining a standard pixel corresponding to the pixel in the standard environment image based on the acquisition angle, the spherical coordinates of the pixel in the rendering environment image, and the standard angle.
8. The method of claim 6, wherein rendering the virtual object comprises:
determining a spherical harmonic illumination parameter vector associated with the standard environmental image based on a spherical harmonic illumination model; and
rendering the virtual object based on the spherical harmonic illumination parameter vector.
9. The method of claim 8, wherein rendering the virtual object with the spherical harmonic illumination parameter vector comprises:
selecting another rendering environment image from the plurality of environment images based on at least one of the device position and the current time point;
determining another spherical harmonic illumination parameter vector for the other rendering environment image; and
rendering the virtual object based on an interpolation of the spherical harmonic illumination parameter vector and the other spherical harmonic illumination parameter vector.
10. The method of claim 9, wherein the interpolation comprises at least one of: location-based interpolation and time-based interpolation.
11. An apparatus for rendering a virtual object in an augmented reality application, comprising:
an acquisition module configured to acquire a plurality of environment images of a real environment, the plurality of environment images including images of the real environment acquired at a plurality of acquisition time points, respectively;
a selection module configured to select, from the plurality of environment images and based on the plurality of acquisition time points, an environment image that matches a current time point at which the augmented reality application is used, as a rendering environment image; and
a presentation module configured to present the virtual object rendered with the rendering environment image in the augmented reality application.
12. An electronic device, comprising:
at least one processing unit; and
at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit, the instructions when executed by the at least one processing unit causing the electronic device to perform the method of any of claims 1-10.
13. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, causes the processor to carry out the method according to any one of claims 1 to 10.

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211154080.7A CN115457179A (en) 2022-09-21 2022-09-21 Method, apparatus, device and medium for rendering virtual object
PCT/CN2023/115909 WO2024060952A1 (en) 2022-09-21 2023-08-30 Method and apparatus for rendering virtual objects, device, and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211154080.7A CN115457179A (en) 2022-09-21 2022-09-21 Method, apparatus, device and medium for rendering virtual object

Publications (1)

Publication Number Publication Date
CN115457179A true CN115457179A (en) 2022-12-09

Family

ID=84306618

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211154080.7A Pending CN115457179A (en) 2022-09-21 2022-09-21 Method, apparatus, device and medium for rendering virtual object

Country Status (2)

Country Link
CN (1) CN115457179A (en)
WO (1) WO2024060952A1 (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10777010B1 (en) * 2018-03-16 2020-09-15 Amazon Technologies, Inc. Dynamic environment mapping for augmented reality
US11195324B1 (en) * 2018-08-14 2021-12-07 Certainteed Llc Systems and methods for visualization of building structures
CN115457179A (en) * 2022-09-21 2022-12-09 北京字跳网络技术有限公司 Method, apparatus, device and medium for rendering virtual object

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107705353A (en) * 2017-11-06 2018-02-16 太平洋未来科技(深圳)有限公司 Rendering intent and device applied to the virtual objects effect of shadow of augmented reality
US20220076047A1 (en) * 2020-09-04 2022-03-10 Sony Interactive Entertainment Inc. Content generation system and method
RU2757563C1 (en) * 2021-02-19 2021-10-18 Самсунг Электроникс Ко., Лтд. Method for visualizing a 3d portrait of a person with altered lighting and a computing device for it
CN114979457A (en) * 2021-02-26 2022-08-30 华为技术有限公司 Image processing method and related device
CN114549723A (en) * 2021-03-30 2022-05-27 完美世界(北京)软件科技发展有限公司 Rendering method, device and equipment for illumination information in game scene

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379884A (en) * 2021-07-05 2021-09-10 北京百度网讯科技有限公司 Map rendering method and device, electronic equipment, storage medium and vehicle
CN113379884B (en) * 2021-07-05 2023-11-17 北京百度网讯科技有限公司 Map rendering method, map rendering device, electronic device, storage medium and vehicle
WO2024060952A1 (en) * 2022-09-21 2024-03-28 北京字跳网络技术有限公司 Method and apparatus for rendering virtual objects, device, and medium

Also Published As

Publication number Publication date
WO2024060952A1 (en) 2024-03-28

Similar Documents

Publication Publication Date Title
KR102051889B1 (en) Method and system for implementing 3d augmented reality based on 2d data in smart glass
CN111586360B (en) Unmanned aerial vehicle projection method, device, equipment and storage medium
CN113808253B (en) Method, system, equipment and medium for processing dynamic object of three-dimensional reconstruction of scene
US20200082571A1 (en) Method and apparatus for calibrating relative parameters of collector, device and storage medium
CN115457179A (en) Method, apparatus, device and medium for rendering virtual object
CN109582880B (en) Interest point information processing method, device, terminal and storage medium
CN109191554B (en) Super-resolution image reconstruction method, device, terminal and storage medium
CN108388649B (en) Method, system, device and storage medium for processing audio and video
CN109325996B (en) Method and device for generating information
CN110111364B (en) Motion detection method and device, electronic equipment and storage medium
US11036182B2 (en) Hologram location
CN113793370B (en) Three-dimensional point cloud registration method and device, electronic equipment and readable medium
CN111382618B (en) Illumination detection method, device, equipment and storage medium for face image
CN116563493A (en) Model training method based on three-dimensional reconstruction, three-dimensional reconstruction method and device
CN113220251A (en) Object display method, device, electronic equipment and storage medium
CN113378605B (en) Multi-source information fusion method and device, electronic equipment and storage medium
CN112288878B (en) Augmented reality preview method and preview device, electronic equipment and storage medium
CN113409444A (en) Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN115272575B (en) Image generation method and device, storage medium and electronic equipment
CN112465692A (en) Image processing method, device, equipment and storage medium
CN114266876B (en) Positioning method, visual map generation method and device
CN115375740A (en) Pose determination method, three-dimensional model generation method, device, equipment and medium
CN113763468B (en) Positioning method, device, system and storage medium
CN112184766B (en) Object tracking method and device, computer equipment and storage medium
CN112825198B (en) Mobile tag display method, device, terminal equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination