CN110021071B - Rendering method, device and equipment in augmented reality application - Google Patents


Info

Publication number
CN110021071B
CN110021071B (application CN201811590118.9A)
Authority
CN
China
Prior art keywords
environment
intermediate image
virtual object
panorama
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811590118.9A
Other languages
Chinese (zh)
Other versions
CN110021071A (en)
Inventor
Zhou Yuefeng (周岳峰)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced New Technologies Co Ltd
Advantageous New Technologies Co Ltd
Original Assignee
Advanced New Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced New Technologies Co Ltd filed Critical Advanced New Technologies Co Ltd
Priority to CN201811590118.9A
Publication of CN110021071A
Application granted
Publication of CN110021071B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A rendering method, apparatus, and device in an augmented reality application are disclosed. According to the embodiments of this specification, the camera module is invoked to capture images of the user's surroundings, and an environment panorama map is dynamically generated by stitching them. The AR application can then invoke a rendering model to render the virtual object according to the brightness values of the pixel points in the environment panorama map, so that the rendering effect of the virtual object responds to scene changes in real time.

Description

Rendering method, device and equipment in augmented reality application
Technical Field
Embodiments of the present disclosure relate to the field of information technology, and in particular to a rendering method, apparatus, and device in augmented reality applications.
Background
Augmented reality (AR) is a technique that superimposes virtual images on images of a real scene and interacts with the user on that basis. In AR technology, image rendering is a very important step.
Object rendering in AR is usually based on a preset environment map. Rendered this way, the virtual object does not take ambient lighting into account and therefore looks out of place in the environment. Moreover, when the actual environment differs greatly from the preset environment map, reflective materials on the virtual object lack realism, and a static preset map cannot reflect real-time changes in the scene.
Based on this, a rendering scheme with a better rendering effect is needed.
Disclosure of Invention
To address the problem that the rendering effect of virtual objects in existing AR applications does not respond to environmental changes, and in order to achieve a better rendering effect for virtual objects, embodiments of this specification provide a rendering method in an augmented reality application, comprising the following steps:
the augmented reality application invokes a camera module to capture images of the user's surroundings;
stitching the captured surrounding environment images to construct an environment panorama map;
determining brightness values of pixel points in the environment panorama;
and invoking a rendering model so that the rendering model renders the virtual object in the application according to the brightness values of the pixel points in the panorama map.
Correspondingly, the embodiments of this specification further provide a rendering device in an augmented reality application, comprising:
a first invoking module, used by the augmented reality application to invoke the camera module and capture images of the user's surroundings;
a construction module, configured to stitch the captured surrounding environment images to construct an environment panorama map;
a determining module, configured to determine brightness values of pixel points in the environment panorama;
and a second invoking module, configured to invoke a rendering model so that the rendering model renders the virtual object in the application according to the brightness values of the pixel points in the panorama map.
According to the embodiments of this specification, the camera module is invoked to capture images of the user's surroundings, and an environment panorama map is dynamically generated by stitching them. The AR application can then invoke a rendering model to render the virtual object according to the brightness values of the pixel points in the environment panorama map, so that the rendering effect of the virtual object responds to scene changes in real time. In addition, different light source information from the environment panorama is used when rendering specular and non-specular materials, which yields a better rendering effect for specular objects.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the embodiments of this disclosure.
Further, not all of the effects described above need to be achieved in any particular embodiment of this specification.
Drawings
In order to more clearly illustrate the embodiments of this specification or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some of the embodiments described in this specification, and a person of ordinary skill in the art may obtain other drawings from them.
Fig. 1 is a flow chart of a rendering method in an augmented reality application according to an embodiment of the present disclosure;
FIG. 2 is a schematic illustration of an intermediate image stitched based on captured ambient images;
FIG. 3 is a schematic diagram of determining illumination parameters on a mirror material according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a rendering device in an augmented reality application according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of an apparatus for configuring the method of the embodiments of the present specification.
Detailed Description
To help those skilled in the art better understand the technical solutions in the embodiments of this specification, these technical solutions are described in detail below with reference to the drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of this specification. All other embodiments obtained by a person skilled in the art based on the embodiments in this specification shall fall within the scope of protection.
The technical solutions provided by the embodiments of this specification are described in detail below with reference to the accompanying drawings. Fig. 1 is a schematic flow chart of a rendering method in an augmented reality application according to an embodiment of this disclosure; the flow specifically includes the following steps:
s101, the augmented reality application calls a camera module to shoot surrounding images of a user.
The camera module may be a camera on a removable device (e.g., a smart phone, tablet, etc.) that the user is carrying with him. The AR application may guide the user to initiate a call to the camera and capture an image of the surrounding environment by setting a guide interface within the application.
For example, the user is first guided to initiate a call request to the camera. Then, the user is guided to shoot the surrounding environments in different directions (for example, four directions of front, back, left and right) respectively, and when shooting in one direction meets a certain time length or times, prompt information is sent out in a guiding interface so as to guide the user to shoot the surrounding environment in the next direction until shooting is finished.
On some mobile devices, it may include two cameras, one front and one back. In this case, the front and rear cameras may be simultaneously turned on to perform photographing together. If the device does not support the simultaneous starting of the cameras, the rear-mounted cameras can be preferentially called for shooting. Compared with a front camera, the panoramic image is more convenient to collect by using a rear camera.
S103: stitch the captured surrounding environment images to construct an environment panorama map.
Stitching may be performed in a preset blank panoramic area: the captured pictures are filled into the panoramic area according to their positional relationship, and pictures containing the same portion may overwrite one another.
In one embodiment, the captured images are comprehensive, and an environment panorama map with no missing parts can be stitched directly from the captured surrounding environment images.
In another embodiment, the captured pictures are not comprehensive enough or too few in number, and the stitched image has defects. That is, when the stitched intermediate image is placed into the panoramic area, there are still missing portions containing no image. A missing portion may lie at the edge of the intermediate image or in its middle. Fig. 2 is a schematic diagram of an intermediate image stitched from the captured environment images. The dashed lines indicate that, during stitching, several environment pictures may cover the same position; overwriting may proceed in order of shooting time, for example a picture taken later overwrites a picture taken earlier. The image stitched from pictures 1 to 5 is the intermediate image, and in the schematic it does not cover the whole panoramic area.
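For illustration only (not part of the patented method), the following Python sketch shows one way this fill-and-overwrite stitching could look. It assumes NumPy, a preset blank equirectangular panoramic area, frames supplied in shooting order, and a hypothetical project_to_pano helper that maps a captured frame into panorama coordinates.

    import numpy as np

    def stitch_intermediate(frames, pano_height, pano_width, project_to_pano):
        """Stitch captured frames into a blank panoramic area; later frames overwrite earlier ones."""
        pano = np.zeros((pano_height, pano_width, 3), dtype=np.uint8)   # preset blank panoramic area
        filled = np.zeros((pano_height, pano_width), dtype=bool)        # which pixels already have content
        for frame in frames:                                            # frames in shooting-time order
            patch, patch_mask = project_to_pano(frame)                  # image and boolean mask in panorama coordinates
            pano[patch_mask] = patch[patch_mask]                        # content at the same position is overwritten
            filled |= patch_mask
        return pano, filled                                             # the intermediate image and its coverage mask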
In this case, the missing part must also be completed on the basis of the intermediate image to obtain the environment panorama map. The completion method may depend on the size of the missing portion and on the intermediate image itself.
For example, if the degree of missing of the intermediate image relative to the panoramic area exceeds a threshold, the missing portion is known to be large. In that case, one or more pieces of light source information may be generated from the captured surrounding images to stand in for the environment map of the missing portion, and an environment panorama map comprising the intermediate image and the light source information is then constructed. The threshold may be an absolute size of the missing region or the ratio of the missing region to the panoramic area. The brightness values of the pixel points in the missing region may then be assigned on the basis of the light source, for example from the brightness of the light source and the distance to it.
For another example, if the degree of missing of the intermediate image relative to the panoramic area does not exceed the threshold, the missing portion is known to be small, so the missing area can be completed by interpolating from the captured surrounding images, and a filter is used to blur the interpolation edges; the blurred environment panorama map is smoother, which benefits subsequent rendering.
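Purely as an illustration of the completion logic above (not part of the patent), the sketch below reuses the pano and filled buffers from the stitching sketch and assumes OpenCV; the threshold value and the way light sources are estimated from the captured frames are assumptions made for the example.

    import numpy as np
    import cv2

    def complete_panorama(pano, filled, captured_frames, miss_threshold=0.4):
        """Complete an intermediate panorama whose unfilled pixels are marked by ~filled (8-bit BGR assumed)."""
        missing_ratio = 1.0 - float(filled.mean())            # fraction of the panoramic area still empty
        if missing_ratio > miss_threshold:
            # Missing part is large: represent it with synthetic light sources instead of image content.
            lights = []
            for frame in captured_frames:
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                _, max_val, _, max_loc = cv2.minMaxLoc(gray)  # brightest spot as a crude light estimate
                lights.append({"intensity": float(max_val), "position": max_loc})
            return pano, lights                               # intermediate image kept as-is, plus light sources
        # Missing part is small: interpolate (inpaint) the hole, then blur the edges with a filter.
        hole = (~filled).astype(np.uint8) * 255
        completed = cv2.inpaint(pano, hole, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
        seam = cv2.dilate(hole, np.ones((9, 9), np.uint8)) > 0
        blurred = cv2.GaussianBlur(completed, (15, 15), 0)
        completed[seam] = blurred[seam]                       # smooth only around the interpolated region
        return completed, []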
S105: determine the brightness values of the pixel points in the environment panorama map.
S107: invoke a rendering model so that the rendering model renders the virtual object in the application according to the brightness values of the pixel points in the panorama map.
The algorithm of the rendering model itself generally does not change, but some of its input parameters can be adjusted. The brightness values of the pixel points in the environment map can be taken as input and used as the illumination parameters of the rendering model. As different illumination parameters are supplied, the rendering model produces different rendering effects for the same virtual object. For example, the ambient brightness parameter is determined jointly from the brightness values of the pixel points in the environment panorama map, which adjusts the rendering effect on the virtual object. Because the environment panorama is stitched from actually captured pictures, the brightness information it contains tracks the environment in real time and can change dynamically.
For example, in the same AR scene, if a preset environment map is used, the rendering effect for the virtual object is the same no matter when the user enters the scene. In the present solution, by contrast, the surrounding images obtained when the user enters the scene at 15:00 differ noticeably from those obtained when the user enters at 19:00; the specific difference is that the brightness values of the pixel points in the environment map differ, so the environment panorama maps stitched from those images also differ in brightness values.
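As a hypothetical illustration of S105 (not taken from the patent), the brightness value of each pixel point of an RGB environment panorama could be computed with a standard luma weighting:

    import numpy as np

    def pixel_brightness(pano_rgb):
        """Per-pixel brightness of an RGB environment panorama (Rec. 709 luma weights assumed)."""
        r, g, b = pano_rgb[..., 0], pano_rgb[..., 1], pano_rgb[..., 2]
        return 0.2126 * r + 0.7152 * g + 0.0722 * b   # one brightness value per pixel point

The resulting array can then be fed to the rendering model as its illumination parameter, so the same virtual object is lit differently when the scene is entered at 15:00 and at 19:00.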
According to the embodiments of this specification, the camera module is invoked to capture images of the user's surroundings, and an environment panorama map is dynamically generated by stitching them. The AR application can then invoke a rendering model to render the virtual object according to the brightness values of the pixel points in the environment panorama map, so that the rendering effect of the virtual object responds to scene changes in real time.
In an embodiment, if the stitched intermediate image is missing relative to the panoramic area, then before determining the degree of missing, the AR application may also determine whether the current number of shots and/or shooting duration is below a threshold, and if so, call the camera module again to capture more images of the user's surroundings. That is, when shooting is insufficient, the surrounding images are not stitched right away; shooting continues to acquire more surrounding images until the number of shots and/or shooting duration satisfies the preset shooting condition. When stitching is then performed, all the surrounding environment pictures obtained before the shooting condition was met are stitched together.
In one embodiment, the rendering of the virtual object in the AR application has been completed, but the intermediate image still has a missing portion. The camera may then continue to be invoked to perform incremental updates on the missing region. That is, the already stitched intermediate image is kept unchanged, the intermediate image is incrementally completed, and the environment panorama map is built again from the incrementally completed intermediate image. The rendering model can also re-render the virtual object using the updated panorama map. Incremental updates may continue until no missing region remains in the intermediate image, or they may be suspended after a period of time; in this way the rendering effect of the virtual object can be improved continuously.
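A minimal sketch of such an incremental update follows (again an assumption-laden illustration, reusing the pano and filled buffers and the hypothetical project_to_pano helper from the stitching sketch):

    def incremental_update(pano, filled, new_frames, project_to_pano):
        """Fill only the still-missing panorama pixels with newly captured frames."""
        for frame in new_frames:
            patch, patch_mask = project_to_pano(frame)     # image and boolean mask in panorama coordinates
            newly_covered = patch_mask & ~filled            # keep already-stitched pixels unchanged
            pano[newly_covered] = patch[newly_covered]
            filled = filled | newly_covered
        return pano, filled   # rebuild the environment panorama map from these and re-render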
In this case, if the user's device has multiple cameras, several cameras can be invoked together. If the device has both front and rear cameras but does not support opening them simultaneously, the front camera of the user's device may be called preferentially for shooting. In this embodiment, since the direction of specular reflection mostly points toward the user, the front camera can capture more of the content that will be reflected and rendered, which benefits completion of the panorama map.
In addition, in AR applications the materials of a virtual object can be further classified into diffuse reflection materials and specular materials. When the model is rendered, different materials are characterized by parameters such as self-emission and the reflectivity of light in each direction. It is easy to understand that a specular material has high reflectivity for light in all directions, while a diffuse reflection material is the opposite. In the scheme provided in this specification, diffuse reflection materials and specular materials are rendered using different parameters provided by the environment panorama map. It is also easy to understand that a virtual object may be partly diffuse reflection material and partly specular material, which does not prevent the rendering model from using different rendering parameters for different parts of the virtual object.
When the virtual object is a diffuse reflection material, a simplified light source is generally used. Thus, in the embodiments of this specification, the ambient brightness parameter can be determined from a statistic of the brightness values of the pixel points in the environment panorama map, for example the mean or median of all pixels in the environment panorama. The rendering model then renders the virtual object of diffuse reflection material, or the diffuse reflection part of the virtual object, based on this ambient brightness parameter.
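As a hedged example (assuming the per-pixel brightness array from the earlier sketch), the ambient brightness parameter for diffuse materials could be a single statistic over the whole panorama:

    import numpy as np

    def ambient_brightness(brightness, statistic="mean"):
        """Ambient brightness parameter for diffuse materials: mean or median over the panorama."""
        return float(np.mean(brightness) if statistic == "mean" else np.median(brightness))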
When the virtual object is a specular reflection material, then for any point on the specular material, a normal vector of that point on the surface can be determined, and the intersection of the normal vector with the environment panorama is obtained. An illumination parameter can then be determined from the brightness value of the intersection point, or from the brightness values of points within a specified range around the intersection, for example the mean of the brightness values of all pixel points in a circular area of radius r centered on the intersection. The rendering model renders the point based on this parameter. It can be seen that points on the specular surface with different normal vectors intersect the environment panorama at different positions. Fig. 3 is a schematic diagram of determining illumination parameters on a specular material according to an embodiment of this disclosure. The environment panorama is shown in simplified form; A and B represent two different points on the specular material, P(A) is the point on the environment panorama that corresponds to A along its normal vector, and P(B) is analogous. Rendering every point on the specular surface in this way takes the ambient light fully into account, so the rendered virtual object responds to the illumination of the surrounding environment and does not look out of place, and the specular part of the virtual object can show reflections of the real world and of the user.
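For the specular case, the normal-vector intersection described above can be illustrated with an assumption-laden sketch: the panorama is taken to be equirectangular, the normal is a vector in world space, and the illumination parameter is the mean brightness within a disk of radius r pixels around the intersection point (none of these conventions are specified by the patent).

    import numpy as np

    def specular_illumination(brightness, normal, radius=5):
        """Illumination parameter for one point on a specular surface.

        brightness: 2D array of per-pixel brightness values of an equirectangular panorama.
        normal: normal vector (x, y, z) of the point in world space.
        """
        h, w = brightness.shape
        nx, ny, nz = np.asarray(normal, dtype=float) / np.linalg.norm(normal)
        lon = np.arctan2(nx, nz)                   # longitude in [-pi, pi]
        lat = np.arcsin(np.clip(ny, -1.0, 1.0))    # latitude in [-pi/2, pi/2]
        u = int((lon / (2.0 * np.pi) + 0.5) * (w - 1))   # intersection point P on the panorama
        v = int((0.5 - lat / np.pi) * (h - 1))

        # Mean brightness over a circular area of radius r centred on the intersection point.
        ys, xs = np.ogrid[:h, :w]
        disk = (xs - u) ** 2 + (ys - v) ** 2 <= radius ** 2
        return float(brightness[disk].mean())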
Correspondingly, the embodiments of this specification further provide a rendering device in an augmented reality application. Fig. 4 is a schematic structural diagram of the rendering device in an augmented reality application provided by the embodiments of this disclosure; the device comprises:
a first invoking module 401, used by the augmented reality application to invoke the camera module and capture images of the user's surroundings;
a construction module 403, configured to stitch the captured surrounding environment images and construct an environment panorama map;
a determining module 405, configured to determine the brightness values of the pixel points in the environment panorama;
and a second invoking module 407, configured to invoke a rendering model so that the rendering model renders the virtual object in the application according to the brightness values of the pixel points in the panorama map.
Further, the construction module 403 stitches the captured surrounding images to generate an intermediate image; when the intermediate image has a missing part, it determines the degree of missing of the intermediate image: if the degree of missing exceeds a threshold, it generates light source information from the captured surrounding environment images and constructs an environment panorama map comprising the intermediate image and the light source information; otherwise, it interpolates and completes the missing area in the intermediate image from the captured surrounding environment images to generate the environment panorama map.
Further, the device further includes a judging module 409, configured to judge whether the current number of shots and/or shooting duration is below a threshold, and if so, to call the camera module again to capture images of the user's surroundings.
Further, the device also includes an incremental update module 411, which calls the camera module to capture images of the user's surroundings and incrementally updates the missing part in the intermediate image; the construction module 403 then constructs the environment panorama map from the incrementally updated intermediate image.
Further, when the virtual object is a diffuse reflection material, the rendering model calculates a statistic value of brightness values of pixel points in the environmental panorama, wherein the statistic value comprises a mean value or a median value, the statistic value of the brightness values is determined to be an environmental brightness parameter, and the virtual object of the diffuse reflection material is rendered based on the environmental brightness parameter.
Further, when the virtual object is a specular reflection material, the rendering model selects any point on the virtual object of the specular reflection material, determines a normal vector of the point, determines an intersection point of the normal vector and the environment panorama, and renders the selected point on the virtual object of the specular reflection material according to the brightness value of the intersection point or the brightness values of points within a specified range area around the intersection point.
The embodiments of the present disclosure also provide a computer device, which at least includes a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the rendering method in the augmented reality application shown in fig. 1 when executing the program.
FIG. 5 illustrates a more specific hardware architecture diagram of a computing device provided by embodiments of the present description, which may include: a processor 1010, a memory 1020, an input/output interface 1030, a communication interface 1040, and a bus 1050. Wherein processor 1010, memory 1020, input/output interface 1030, and communication interface 1040 implement communication connections therebetween within the device via a bus 1050.
The processor 1010 may be implemented by a general-purpose CPU (Central Processing Unit ), microprocessor, application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, etc. for executing relevant programs to implement the technical solutions provided in the embodiments of the present disclosure.
The Memory 1020 may be implemented in the form of ROM (Read Only Memory), RAM (Random Access Memory ), static storage device, dynamic storage device, or the like. Memory 1020 may store an operating system and other application programs, and when the embodiments of the present specification are implemented in software or firmware, the associated program code is stored in memory 1020 and executed by processor 1010.
The input/output interface 1030 is used to connect with an input/output module for inputting and outputting information. The input/output module may be configured as a component in a device (not shown) or may be external to the device to provide corresponding functionality. Wherein the input devices may include a keyboard, mouse, touch screen, microphone, various types of sensors, etc., and the output devices may include a display, speaker, vibrator, indicator lights, etc.
Communication interface 1040 is used to connect communication modules (not shown) to enable communication interactions of the present device with other devices. The communication module may implement communication through a wired manner (such as USB, network cable, etc.), or may implement communication through a wireless manner (such as mobile network, WIFI, bluetooth, etc.).
Bus 1050 includes a path for transferring information between components of the device (e.g., processor 1010, memory 1020, input/output interface 1030, and communication interface 1040).
It should be noted that although the above-described device only shows processor 1010, memory 1020, input/output interface 1030, communication interface 1040, and bus 1050, in an implementation, the device may include other components necessary to achieve proper operation. Furthermore, it will be understood by those skilled in the art that the above-described apparatus may include only the components necessary to implement the embodiments of the present description, and not all the components shown in the drawings.
The present description also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a rendering method in an augmented reality application as shown in fig. 1.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information accessible to a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
From the foregoing description of the embodiments, it is apparent to those skilled in the art that the embodiments of this specification can be implemented by software plus a necessary general-purpose hardware platform. Based on such an understanding, the technical solutions of the embodiments of this specification, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The software product may be stored in a storage medium, such as ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments of this specification or in parts of the embodiments.
The system, method, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, for the apparatus embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference may be made to the description of the method embodiments for the relevant points. The apparatus embodiments described above are merely illustrative; the modules described as separate components may or may not be physically separate, and the functions of the modules may be implemented in one or more pieces of software and/or hardware when implementing the embodiments of this disclosure. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of an embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
The foregoing is merely a specific implementation of the embodiments of this disclosure. It should be noted that a person skilled in the art may make several improvements and modifications without departing from the principles of the embodiments of this disclosure, and such improvements and modifications shall also fall within the scope of protection of the embodiments of this disclosure.

Claims (9)

1. A method of rendering in an augmented reality application, comprising:
the augmented reality application calls a camera module to shoot surrounding images of a user;
splicing the shot surrounding environment images to construct an environment panorama map;
determining brightness values of pixel points in the environmental panorama;
calling a rendering model, taking the brightness value of the pixel point in the environment panorama as input, and using the brightness value as an illumination parameter of the rendering model so that the rendering model renders the virtual object in the application according to the brightness value of the pixel point in the panorama;
when the virtual object is a diffuse reflection material, calculating a statistic value of brightness values of pixel points in the environment panorama, wherein the statistic value comprises a mean value or a median value; determining the statistical value of the brightness value as an environment brightness parameter; the rendering model renders the virtual object of the diffuse reflection material based on the ambient brightness parameter;
when the virtual object is made of a specular reflection material, selecting any point on the virtual object made of the specular reflection material, and determining a normal vector of the point; determining an intersection point of the normal vector and the environment panorama; and rendering, by the preset rendering model, the selected point on the virtual object of the specular reflection material according to the brightness value of the intersection point or the brightness values of points within a specified range area around the intersection point.
2. The method of claim 1, wherein stitching the captured surrounding environment images to construct an environment panorama map comprises:
splicing the shot surrounding environment images to generate an intermediate image;
when the intermediate image is missing, determining the missing degree of the intermediate image:
if the missing degree of the intermediate image exceeds a threshold value, generating light source information according to the shot surrounding environment image, and constructing an environment panorama map comprising the intermediate image and the light source information;
otherwise, interpolating and complementing the missing area in the intermediate image according to the shot surrounding environment image to generate the environment panorama map.
3. The method of claim 2, when there is a deletion of the intermediate image, further comprising, prior to determining the degree of deletion of the intermediate image:
judging whether the current shooting times and/or shooting duration are lower than a threshold value, if so, calling the shooting module again to shoot the surrounding environment image of the user.
4. The method of claim 2, further comprising:
invoking a camera module to shoot surrounding environment images of a user, and incrementally updating missing parts in the intermediate images;
and constructing an environment panorama map according to the intermediate image updated by the increment.
5. A rendering apparatus in an augmented reality application, comprising:
the first calling module is used for calling the camera module by the augmented reality application and shooting surrounding images of a user;
the construction module is used for splicing the shot surrounding environment images to construct an environment panorama map;
the determining module is used for determining brightness values of pixel points in the environment panorama;
the second calling module is used for calling a rendering model, taking the brightness value of the pixel point in the environment panorama map as input and using the brightness value as an illumination parameter of the rendering model so that the rendering model renders the virtual object in the application according to the brightness value of the pixel point in the panorama map;
when the virtual object is a diffuse reflection material, the rendering model calculates a statistic value of brightness values of pixel points in the environment panorama, wherein the statistic value comprises a mean value or a median value, the statistic value of the brightness values is determined to be an environment brightness parameter, and the virtual object of the diffuse reflection material is rendered based on the environment brightness parameter;
when the virtual object is a specular reflection material, the rendering model selects any point on the virtual object of the specular reflection material, determines a normal vector of the point, determines an intersection point of the normal vector and the environment panorama, and renders the selected point on the virtual object of the specular reflection material according to the brightness value of the intersection point or the brightness values of points within a specified range area around the intersection point.
6. The device of claim 5, wherein the construction module is used for stitching the captured surrounding environment images to generate an intermediate image; when the intermediate image is missing, determining the missing degree of the intermediate image: if the missing degree of the intermediate image exceeds a threshold value, generating light source information according to the shot surrounding environment image, and constructing an environment panorama map comprising the intermediate image and the light source information; otherwise, interpolating and complementing the missing area in the intermediate image according to the shot surrounding environment image to generate the environment panorama map.
7. The device of claim 6, further comprising a judging module for judging whether the current shooting times and/or shooting durations are lower than a threshold value, and if yes, invoking the shooting module again to shoot the surrounding image of the user.
8. The apparatus of claim 6, further comprising an incremental update module that invokes a camera module to capture an image of the user's surroundings, incrementally updating missing portions in the intermediate image; and the construction module constructs the environment panorama map according to the intermediate image updated by the increment.
9. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 4 when executing the program.
CN201811590118.9A 2018-12-25 2018-12-25 Rendering method, device and equipment in augmented reality application Active CN110021071B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811590118.9A CN110021071B (en) 2018-12-25 2018-12-25 Rendering method, device and equipment in augmented reality application

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811590118.9A CN110021071B (en) 2018-12-25 2018-12-25 Rendering method, device and equipment in augmented reality application

Publications (2)

Publication Number Publication Date
CN110021071A CN110021071A (en) 2019-07-16
CN110021071B (en) 2024-03-12

Family

ID=67188676

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811590118.9A Active CN110021071B (en) 2018-12-25 2018-12-25 Rendering method, device and equipment in augmented reality application

Country Status (1)

Country Link
CN (1) CN110021071B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11238662B2 (en) * 2019-09-25 2022-02-01 Apple Inc. Optimal luminance mapping for augmented reality devices
CN111176452B (en) * 2019-12-30 2022-03-25 联想(北京)有限公司 Method and apparatus for determining display area, computer system, and readable storage medium
CN111292406B (en) * 2020-03-12 2023-10-24 抖音视界有限公司 Model rendering method, device, electronic equipment and medium
CN114125421A (en) * 2020-09-01 2022-03-01 华为技术有限公司 Image processing method, mobile terminal and storage medium
CN111932641B (en) 2020-09-27 2021-05-14 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium
CN113034570A (en) * 2021-03-09 2021-06-25 北京字跳网络技术有限公司 Image processing method and device and electronic equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106657910A (en) * 2016-12-22 2017-05-10 国网浙江省电力公司杭州供电公司 Panoramic video monitoring method for power substation
CN108986199A (en) * 2018-06-14 2018-12-11 北京小米移动软件有限公司 Dummy model processing method, device, electronic equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106056658B (en) * 2016-05-23 2019-01-25 珠海金山网络游戏科技有限公司 A kind of virtual objects rendering method and device
CN107808409B (en) * 2016-09-07 2022-04-12 中兴通讯股份有限公司 Method and device for performing illumination rendering in augmented reality and mobile terminal

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106657910A (en) * 2016-12-22 2017-05-10 国网浙江省电力公司杭州供电公司 Panoramic video monitoring method for power substation
CN108986199A (en) * 2018-06-14 2018-12-11 北京小米移动软件有限公司 Dummy model processing method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN110021071A (en) 2019-07-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201013

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Innovative advanced technology Co.,Ltd.

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant before: Advanced innovation technology Co.,Ltd.

Effective date of registration: 20201013

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Advanced innovation technology Co.,Ltd.

Address before: Fourth floor, P.O. Box 847, Capital Building, Grand Cayman, British Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

TA01 Transfer of patent application right
GR01 Patent grant