Detailed Description
To enable those skilled in the art to better understand the technical solutions in the embodiments of the present specification, these solutions are described in detail below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present specification. All other embodiments obtained by a person skilled in the art based on the embodiments in the present specification shall fall within the scope of protection.
The technical solutions provided by the embodiments of the present specification are described in detail below with reference to the accompanying drawings. Fig. 1 is a schematic flow chart of a rendering method in an augmented reality application according to an embodiment of the present disclosure. The flow includes the following steps:
S101: the augmented reality application calls a camera module to capture images of the user's surroundings.
The camera module may be a camera on a mobile device (e.g., a smartphone or tablet) that the user carries. The AR application may provide a guide interface that leads the user to invoke the camera and capture images of the surrounding environment.
For example, the user is first guided to initiate a call request to the camera. The user is then guided to shoot the surroundings in different directions (for example, front, back, left, and right). When shooting in one direction has lasted a certain duration or reached a certain number of shots, a prompt is shown in the guide interface to direct the user to the next direction, until shooting is finished.
Some mobile devices include two cameras, one front-facing and one rear-facing. In this case, both cameras may be turned on simultaneously to shoot together. If the device does not support running both cameras at once, the rear camera may be called preferentially, since it is generally more convenient than the front camera for collecting a panoramic image.
S103: stitch the captured surrounding-environment images to construct an environmental panorama map.
The stitching may be performed in a preset blank panoramic area: the captured pictures are filled into the panoramic area according to their positional relationships, and pictures containing the same region may overwrite one another.
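The stitching step above can be sketched as follows. This is a minimal illustration only: it assumes each shot arrives with a known placement rectangle inside the preset blank panoramic area, and uses single-channel luminance arrays; the placement logic and image sizes are illustrative, not from the embodiment.

```python
import numpy as np

PANO_H, PANO_W = 4, 8  # a tiny panorama for illustration

def stitch(shots, pano_h=PANO_H, pano_w=PANO_W):
    """Fill shots into a blank panoramic area; later shots overwrite earlier
    ones where they overlap, and a mask records which pixels were filled."""
    panorama = np.zeros((pano_h, pano_w), dtype=float)
    filled = np.zeros((pano_h, pano_w), dtype=bool)
    for img, (r, c) in shots:              # shots ordered by capture time
        h, w = img.shape
        panorama[r:r + h, c:c + w] = img   # newer pixels overwrite older ones
        filled[r:r + h, c:c + w] = True
    return panorama, filled

# Two overlapping shots: the later one wins in the overlap.
shot1 = (np.full((2, 3), 0.2), (0, 0))
shot2 = (np.full((2, 3), 0.8), (0, 2))
pano, mask = stitch([shot1, shot2])
print(pano[0, 2])   # overlap pixel -> value from the later shot: 0.8
print(mask.sum())   # 10 filled pixels; the rest of the area is still missing
```

The `filled` mask is what later steps would consult to decide whether the intermediate image still has missing portions.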
In one embodiment, the captured images are comprehensive, and an environmental panorama map with no missing parts can be stitched directly from them.
In another embodiment, the captured pictures are not comprehensive enough: there are too few pictures, and the stitched image has gaps. That is, when the stitched intermediate image is filled into the panoramic area, some portions remain without any image. A missing portion may lie at the edge of the intermediate image or in its middle. Fig. 2 is a schematic diagram of an intermediate image obtained by stitching captured environmental images. The dashed lines indicate that, during stitching, the same position may appear in multiple environmental pictures. Overwriting may proceed in shooting order, for example with a picture captured later overwriting one captured earlier. Pictures 1 to 5 are stitched into the intermediate image, which in this schematic does not cover the whole panoramic area.
In this case, the missing parts must be completed based on the intermediate image to obtain the environmental panorama map. The completion method may depend on the size of the missing portion and on the intermediate image itself.
For example, if the degree to which the intermediate image is missing relative to the panoramic area exceeds a threshold, the missing portion is known to be large. One or more pieces of light source information may then be generated from the captured surrounding images to stand in for the environment map of the missing portion, and an environmental panorama map comprising the intermediate image and the light source information is constructed. The threshold may be an absolute size of the missing region or its ratio to the panoramic area. In this case, the luminance values of the pixels in the missing region may be derived from the light source, for example from the luminance value of the light source and the distance to it.
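A sketch of assigning luminance to missing-region pixels from an estimated light source follows. The falloff model (inverse-square, clamped to [0, 1]) is an illustrative assumption; the embodiment only requires that the value depend on the source's luminance and the distance to the source.

```python
def light_source_luminance(source_pos, source_luminance, pixel_pos, k=1.0):
    """Luminance contributed by one light source at a missing pixel,
    falling off with squared distance (an assumed model)."""
    dx = pixel_pos[0] - source_pos[0]
    dy = pixel_pos[1] - source_pos[1]
    dist2 = dx * dx + dy * dy
    return min(1.0, source_luminance * k / (1.0 + dist2))

def fill_missing(missing_pixels, sources):
    """Sum contributions from every estimated source, clamped to 1.0."""
    return {
        p: min(1.0, sum(light_source_luminance(s_pos, s_lum, p)
                        for s_pos, s_lum in sources))
        for p in missing_pixels
    }

sources = [((0, 0), 0.9)]                  # one estimated light source
values = fill_missing([(0, 0), (3, 4)], sources)
print(values[(0, 0)])   # at the source itself: 0.9
print(values[(3, 4)])   # at distance 5: 0.9 / 26, roughly 0.035
```

In practice the source position and luminance would be estimated from the bright regions of the captured surrounding images, which this sketch takes as given.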
For another example, if the degree of missing does not exceed the threshold, the missing portion is known to be small. The captured surrounding images may then be used to complete the missing area by interpolation, and a filter is applied to blur the interpolation edges so that the resulting panorama map is smoother and better suited to subsequent rendering.
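The interpolate-then-blur completion can be sketched on a single panorama row. `np.interp` stands in for the interpolation and a 3-tap box blur stands in for "a filter that blurs the interpolation edge"; both are illustrative choices, not mandated by the embodiment.

```python
import numpy as np

def complete_row(row, missing):
    """Fill missing pixels in one panorama row by linear interpolation
    from the surrounding known pixels."""
    row = row.astype(float).copy()
    known = ~missing
    xs = np.arange(row.size)
    row[missing] = np.interp(xs[missing], xs[known], row[known])
    return row

def blur_seam(row, missing, radius=1):
    """Box-blur only the pixels near the filled region, so the
    interpolation seam is less visible."""
    out = row.copy()
    near = np.convolve(missing.astype(float),
                       np.ones(2 * radius + 1), "same") > 0
    padded = np.pad(row, radius, mode="edge")
    for i in np.flatnonzero(near):
        out[i] = padded[i:i + 2 * radius + 1].mean()
    return out

row = np.array([0.2, 0.2, 0.0, 0.0, 0.8, 0.8])
missing = np.array([False, False, True, True, False, False])
filled = complete_row(row, missing)
print(filled)   # [0.2 0.2 0.4 0.6 0.8 0.8]
smooth = blur_seam(filled, missing)
```

A real implementation would interpolate in two dimensions and use a proper smoothing filter (e.g. Gaussian), but the structure is the same: fill first, then soften the boundary of the filled region.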
S105: determine the luminance values of the pixels in the environmental panorama map.
S107: call a rendering model so that it renders the virtual object in the application according to the luminance values of the pixels in the panorama map.
The algorithm of the rendering model itself is generally unchanged, but some of its input parameters may be adjusted. The luminance values of the pixels in the environment map may be fed in as illumination parameters, and with different illumination parameters the rendering model produces different rendering effects on the same virtual object. For example, an ambient brightness parameter may be determined jointly from the luminance values of the pixels in the environmental panorama map to adjust the rendering of the virtual object. Because the panorama is stitched from actually captured pictures, the brightness information it contains reflects the real environment and can change dynamically.
For example, in the same AR scene, if a preset environment map is used, the rendering effect on the virtual object is the same whenever the user enters the scene. In the present solution, by contrast, the surrounding images obtained when the user enters the scene at 15:00 differ noticeably from those obtained at 19:00: the luminance values of the pixels differ, and so do the luminance values in the environmental panorama maps stitched from them.
According to the scheme of the embodiments of this specification, the camera module is called to capture images of the user's surroundings, and an environmental panorama map is dynamically generated by stitching. The AR application can then call a rendering model to render the virtual object according to the luminance values of the pixels in the panorama map, so that the rendering effect responds to scene changes in real time.
In an embodiment, if the stitched intermediate image is missing relative to the panoramic area, then before determining the degree of missing, the AR application may first determine whether the current number of shots and/or shooting duration is below a threshold, and if so, call the camera module again to capture more surrounding images. That is, when shooting is insufficient, stitching is not performed immediately; instead, shooting continues until the number of shots and/or shooting duration satisfies a preset condition, and all surrounding-environment pictures obtained up to that point are then stitched together.
In one embodiment, rendering of the virtual object within the AR application may have been completed while the intermediate image still has a missing portion. The camera may then continue to be invoked to incrementally update the missing region: the already-stitched intermediate image is kept unchanged, its missing parts are incrementally completed, and the environmental panorama map is rebuilt from the completed intermediate image. The rendering model may then re-render the virtual object using the updated panorama map. Incremental updates may continue until no missing region remains, or may be suspended after a period of time, so that the rendering effect of the virtual object is continuously improved.
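The incremental-update loop described above can be sketched as follows. `capture_shot` and `render` are hypothetical stand-ins for the camera module and the rendering model, and the "panorama" is reduced to a flat list of cells; only the control flow (keep stitched cells, fill only missing ones, re-render, stop when complete or after a deadline) reflects the embodiment.

```python
import time

def incremental_update(intermediate, capture_shot, render, deadline_s=1.0):
    """Keep filling missing cells of the intermediate image with new shots
    until none remain or the deadline passes, re-rendering after each fill."""
    start = time.monotonic()
    while None in intermediate and time.monotonic() - start < deadline_s:
        pos, value = capture_shot()
        if intermediate[pos] is None:      # only fill missing cells;
            intermediate[pos] = value      # already-stitched cells stay as-is
            render(intermediate)           # re-render with the updated map
    return intermediate

# Toy run: a 4-cell "panorama" with one missing cell.
shots = iter([(1, 0.5), (3, 0.7)])
pano = [0.2, None, 0.6, 0.9]
result = incremental_update(pano, lambda: next(shots), lambda m: None)
print(result)   # [0.2, 0.5, 0.6, 0.9]
```

Note that the loop exits as soon as the map is complete, so the second queued shot is never consumed; a real implementation would also handle the camera producing no new coverage.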
If the user's device has multiple cameras, they can be invoked together for this purpose. If the device has front and rear cameras but does not support running them simultaneously, the front camera may be called preferentially. In this embodiment, because the direction of specular reflection tends to point toward the user, the front camera captures more of the content to be rendered, which helps complete the panoramic map.
In addition, in AR applications the materials of a virtual object may be further classified into diffuse materials and specular materials. When the model is rendered, different materials are characterized by parameters such as self-emission and the reflectivity of light in each direction. A specular material is one with high reflectivity for light in all directions, and a diffuse material is the opposite. In the scheme provided in this specification, different parameters derived from the environmental panorama map are used to render diffuse and specular materials respectively. A virtual object may be partly diffuse and partly specular, which does not prevent the rendering model from applying different rendering parameters to different parts of the object.
When the virtual object is of a diffuse material, a simplified light source is typically used. Thus, in the present embodiment, the ambient brightness parameter may be determined from a statistic of the luminance values of the pixels in the environmental panorama map: for example, the mean or median over all pixels is computed and used as the ambient brightness parameter. The rendering model then renders the diffuse virtual object, or the diffuse parts of the virtual object, based on this parameter.
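The diffuse-material path reduces to one line: collapse the panorama's per-pixel luminance into a single ambient brightness parameter via a statistic (mean or median, per the embodiment) and hand it to the rendering model. A minimal sketch:

```python
from statistics import mean, median

def ambient_brightness(luminances, statistic="mean"):
    """Ambient brightness parameter for diffuse materials: a single
    statistic over all pixel luminances of the environmental panorama."""
    return mean(luminances) if statistic == "mean" else median(luminances)

pixels = [0.1, 0.4, 0.4, 0.9]          # toy panorama luminances
print(ambient_brightness(pixels, "mean"))    # 0.45
print(ambient_brightness(pixels, "median"))  # 0.4
```

The median is the more robust choice when a few very bright pixels (e.g. a direct view of a lamp) would otherwise dominate the mean; the embodiment permits either.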
When the virtual object is of a specular material, for any point on its surface the normal vector at that point may be determined, along with the intersection of the normal vector and the environmental panorama. An illumination parameter can then be determined from the luminance value at the intersection, or from the luminance values of points within a specified region around it; for example, the mean luminance of all pixels within a circle of radius r centered on the intersection. The rendering model renders the point based on this parameter. Points on the specular surface with different normal vectors thus intersect the environmental panorama at different locations. Fig. 3 is a schematic diagram of determining illumination parameters on a specular material according to an embodiment of the present disclosure. The environmental panorama is drawn in simplified form; A and B represent two different points on the specular material, P(A) is the point on the panorama reached from A along its normal vector, and P(B) likewise. Applying this rendering to every point on the specular surface fully accounts for ambient light, so the rendered virtual object responds to the illumination of the surroundings rather than looking out of place, and its specular parts can reflect the real world and the user.
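The specular path can be sketched as: map a surface point's normal direction to a pixel of the panorama, then average luminance in a small window around that intersection (a square window approximating the circular region of radius r). The equirectangular mapping used here is a common convention but an assumption; the embodiment only requires the normal/panorama intersection.

```python
import math
import numpy as np

def normal_to_pixel(normal, pano_h, pano_w):
    """Unit normal (x, y, z) -> (row, col) in an equirectangular panorama."""
    x, y, z = normal
    theta = math.acos(max(-1.0, min(1.0, y)))   # polar angle from +y (up)
    phi = math.atan2(z, x) % (2 * math.pi)      # azimuth
    return (int(theta / math.pi * (pano_h - 1)),
            int(phi / (2 * math.pi) * (pano_w - 1)))

def specular_illumination(pano, normal, r=1):
    """Mean luminance in a window of radius r around the intersection,
    clamped at the panorama edges."""
    row, col = normal_to_pixel(normal, *pano.shape)
    r0, r1 = max(0, row - r), min(pano.shape[0], row + r + 1)
    c0, c1 = max(0, col - r), min(pano.shape[1], col + r + 1)
    return pano[r0:r1, c0:c1].mean()

pano = np.full((8, 16), 0.3)          # dim environment...
pano[0, :] = 1.0                      # ...with a bright band at the top
up = (0.0, 1.0, 0.0)                  # normal pointing straight up
print(specular_illumination(pano, up))   # 0.65: half bright, half dim pixels
```

Two surface points with different normals (A and B in Fig. 3) land on different panorama pixels and therefore receive different illumination parameters, which is what makes the specular parts appear to reflect the surroundings.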
Correspondingly, an embodiment of the present disclosure further provides a rendering device in an augmented reality application. Fig. 4 is a schematic structural diagram of the device, which includes:
a first invoking module 401, by which the augmented reality application calls the camera module to capture images of the user's surroundings;
the construction module 403 is used for splicing the shot surrounding environment images and constructing an environment panorama map;
a determining module 405, configured to determine the luminance values of the pixels in the environmental panorama map;
and a second invoking module 407, configured to call a rendering model so that it renders the virtual object in the application according to the luminance values of the pixels in the panorama map.
Further, the construction module 403 stitches the captured surrounding images to generate an intermediate image; when the intermediate image has missing parts, it determines the degree of missing: if the degree exceeds a threshold, it generates light source information from the captured surrounding images and constructs an environmental panorama map comprising the intermediate image and the light source information; otherwise, it completes the missing area in the intermediate image by interpolation from the captured surrounding images to generate the environmental panorama map.
Further, the device includes a judging module 409, configured to determine whether the current number of shots and/or shooting duration is below a threshold, and if so, to call the camera module again to capture more images of the user's surroundings.
Further, the device includes an incremental update module 411, which calls the camera module to capture surrounding images and incrementally updates the missing parts of the intermediate image; the construction module 403 then constructs the environmental panorama map from the incrementally updated intermediate image.
Further, when the virtual object is of a diffuse material, the rendering model computes a statistic of the luminance values of the pixels in the environmental panorama, where the statistic includes a mean or median, takes this statistic as the ambient brightness parameter, and renders the diffuse virtual object based on it.
Further, when the virtual object is of a specular material, the rendering model selects any point on it, determines the normal vector at that point and the intersection of the normal vector with the environmental panorama, and renders the selected point according to the luminance value at the intersection or the luminance values of points within a specified region around it.
The embodiments of the present disclosure also provide a computer device, which includes at least a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the rendering method in the augmented reality application shown in Fig. 1.
Fig. 5 illustrates a more specific hardware architecture of a computing device provided by the embodiments of the present specification. The device may include a processor 1010, a memory 1020, an input/output interface 1030, a communication interface 1040, and a bus 1050, where the processor 1010, memory 1020, input/output interface 1030, and communication interface 1040 are communicatively connected to one another within the device via the bus 1050.
The processor 1010 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, and executes the relevant programs to implement the technical solutions provided in the embodiments of the present disclosure.
The memory 1020 may be implemented in the form of ROM (Read-Only Memory), RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 1020 may store an operating system and other application programs; when the embodiments of the present specification are implemented in software or firmware, the associated program code is stored in the memory 1020 and executed by the processor 1010.
The input/output interface 1030 is used to connect with an input/output module for inputting and outputting information. The input/output module may be configured as a component in a device (not shown) or may be external to the device to provide corresponding functionality. Wherein the input devices may include a keyboard, mouse, touch screen, microphone, various types of sensors, etc., and the output devices may include a display, speaker, vibrator, indicator lights, etc.
The communication interface 1040 is used to connect a communication module (not shown) to enable communication between the present device and other devices. The communication module may communicate in a wired manner (such as USB or network cable) or a wireless manner (such as a mobile network, Wi-Fi, or Bluetooth).
Bus 1050 includes a path for transferring information between components of the device (e.g., processor 1010, memory 1020, input/output interface 1030, and communication interface 1040).
It should be noted that although the above-described device shows only the processor 1010, memory 1020, input/output interface 1030, communication interface 1040, and bus 1050, in a specific implementation the device may include other components necessary for proper operation. Furthermore, those skilled in the art will understand that the device may include only the components necessary to implement the embodiments of the present specification, rather than all the components shown in the figure.
The present specification also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the rendering method in an augmented reality application shown in Fig. 1.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media) such as modulated data signals and carrier waves.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that the embodiments of the present specification may be implemented in software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solutions of the embodiments, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The software product may be stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments, or in some parts of the embodiments, of the present specification.
The system, method, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the device embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments. The device embodiments described above are merely illustrative: modules described as separate components may or may not be physically separate, and the functions of the modules may be implemented in one or more pieces of software and/or hardware when implementing the embodiments of the present disclosure. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of an embodiment. Those of ordinary skill in the art can understand and implement this without undue effort.
The foregoing is merely a specific implementation of the embodiments of the present disclosure. It should be noted that a person skilled in the art may make several improvements and modifications without departing from the principles of the embodiments of the present disclosure, and such improvements and modifications shall also fall within the scope of protection of the embodiments of the present disclosure.