WO2022156451A1 - Rendering method and apparatus - Google Patents

Rendering method and apparatus

Info

Publication number
WO2022156451A1
Authority
WO
WIPO (PCT)
Prior art keywords
rendered
rendering
content
patches
model
Prior art date
Application number
PCT/CN2021/139426
Other languages
English (en)
French (fr)
Inventor
Yu Zhou
Sun Tao
Original Assignee
Huawei Cloud Computing Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Cloud Computing Technologies Co., Ltd.
Publication of WO2022156451A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/06 - Ray-tracing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 - Arrangements for executing specific programs
    • G06F9/451 - Execution arrangements for user interfaces
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 - General purpose image data processing
    • G06T1/20 - Processor architectures; Processor configuration, e.g. pipelining
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality

Definitions

  • the present application relates to the field of graphics rendering, and in particular, to a rendering method and device.
  • Ray tracing rendering has long been a foundational technology in computer graphics, and to date it remains the most important technique for producing high-quality, photorealistic images. However, it has always required long computation times to complete the large number of Monte Carlo integration calculations that produce the final result. It has therefore mainly been applied in offline rendering scenarios such as film, television, and animation. With the upgrading of computer hardware, and with the recent emergence of rendering business fields with strong real-time requirements (games, virtual reality), the demand for ray tracing rendering technology has grown increasingly strong.
  • the present application provides a rendering method, which can improve the efficiency of ray tracing rendering.
  • a first aspect of the present application provides a rendering method, which includes: receiving content to be rendered of an application, where the content to be rendered includes at least one model, and each model includes at least one patch.
  • a first set of patches to be rendered and a second set of patches to be rendered are acquired from the content to be rendered.
  • the first patch set to be rendered is rendered based on a first tracing ray count.
  • the second patch set to be rendered is rendered based on a second tracing ray count.
  • the first tracing ray count is higher than the second tracing ray count.
  • the rendering result of the first patch set to be rendered and the rendering result of the second patch set to be rendered are obtained.
  • the method further includes: acquiring rendering results corresponding to multiple pieces of historical rendering content of the application, where each piece of historical rendering content includes at least one model.
  • High attention models included in the multiple pieces of historical rendering content are determined according to the number of appearances of each model in the multiple pieces of historical rendering content.
  • the acquiring of the first set of patches to be rendered and the second set of patches to be rendered from the content to be rendered includes: determining high-attention models in the content to be rendered according to the high-attention models included in the multiple pieces of historical rendering content, and determining the first patch set to be rendered from the high-attention models in the content to be rendered.
  • the method further includes: acquiring rendering results corresponding to multiple pieces of historical rendering content of the application, where each piece of historical rendering content includes at least one model. High-attention models included in the multiple pieces of historical rendering content are determined according to the number of stay frames of each model in the multiple pieces of historical rendering content.
  • the acquiring of the first set of patches to be rendered and the second set of patches to be rendered from the content to be rendered includes: determining high-attention models in the content to be rendered according to the high-attention models included in the multiple pieces of historical rendering content, and determining the first patch set to be rendered from the high-attention models in the content to be rendered.
  • the method further includes: acquiring rendering results corresponding to multiple pieces of historical rendering content of the application, where each piece of historical rendering content includes at least one model.
  • High-attention models included in the multiple pieces of historical rendering content are determined according to the number of stay frames and/or the number of occurrences of each model in the multiple pieces of historical rendering content.
  • based on a saliency detection method, salient patches in the high-attention models included in the multiple pieces of historical rendering content are determined.
  • the determining of the first patch set to be rendered from the high-attention models in the content to be rendered includes: according to the salient patches in the high-attention models included in the multiple pieces of historical rendering content, determining the salient patches in the high-attention models in the content to be rendered as the first patch set to be rendered.
  • the method further includes: acquiring rendering results corresponding to multiple pieces of historical rendering content of the application. Based on a moving target detection method, the moving patches in the models included in the multiple pieces of historical rendering content are determined. The moving patches in the content to be rendered are determined as the first patch set to be rendered, and the second patch set to be rendered is determined according to the first patch set to be rendered. Determining the moving patches based on the moving target detection method includes: determining moving pixels according to the difference of the same pixel between the rendering results corresponding to two pieces of rendering content and a detection threshold, and determining the moving patches based on the moving pixels.
  • the number of tracing rays of each patch in the second patch set to be rendered is determined according to the distance between the patch and the patches in the first patch set to be rendered.
  • the second set of patches to be rendered is determined according to the content to be rendered and the first set of patches to be rendered.
  • a second aspect of the present application provides an apparatus for rendering, the apparatus including a communication unit, a processing unit and a storage unit.
  • the communication unit is used to receive the content to be rendered of the application.
  • the storage unit is used to store the content to be rendered.
  • the processing unit is configured to: obtain a first set of patches to be rendered and a second set of patches to be rendered from the content to be rendered; and render the first patch set to be rendered based on a first tracing ray count.
  • the second patch set to be rendered is rendered based on a second tracing ray count, where the first tracing ray count is higher than the second tracing ray count.
  • the rendering result of the first patch set to be rendered and the rendering result of the second patch set to be rendered are obtained.
  • the processing unit is further configured to: acquire rendering results corresponding to multiple pieces of historical rendering content of the application, where each piece of historical rendering content includes at least one model.
  • High attention models included in the multiple pieces of historical rendering content are determined according to the number of appearances of each model in the multiple pieces of historical rendering content.
  • the acquiring of the first set of patches to be rendered and the second set of patches to be rendered from the content to be rendered includes: determining high-attention models in the content to be rendered according to the high-attention models included in the multiple pieces of historical rendering content, and determining the first patch set to be rendered from the high-attention models in the content to be rendered.
  • the processing unit is further configured to: acquire rendering results corresponding to multiple pieces of historical rendering content of the application, where each piece of historical rendering content includes at least one model.
  • High-attention models included in the multiple pieces of historical rendering content are determined according to the number of stay frames of each model in the multiple pieces of historical rendering content.
  • the acquiring of the first set of patches to be rendered and the second set of patches to be rendered from the content to be rendered includes: determining high-attention models in the content to be rendered according to the high-attention models included in the multiple pieces of historical rendering content, and determining the first patch set to be rendered from the high-attention models in the content to be rendered.
  • the processing unit is further configured to: acquire rendering results corresponding to multiple pieces of historical rendering content of the application, where each piece of historical rendering content includes at least one model.
  • High-attention models included in the multiple pieces of historical rendering content are determined according to the number of stay frames and/or the number of occurrences of each model in the multiple pieces of historical rendering content.
  • based on a saliency detection method, salient patches in the high-attention models included in the multiple pieces of historical rendering content are determined.
  • the determining of the first patch set to be rendered from the high-attention models in the content to be rendered includes: according to the salient patches in the high-attention models included in the multiple pieces of historical rendering content, determining the salient patches in the high-attention models in the content to be rendered as the first patch set to be rendered.
  • the processing unit is further configured to: obtain rendering results corresponding to multiple pieces of historical rendering content of the application. Based on the moving target detection method, the moving patches in the model included in the multiple pieces of historical rendering content are determined. The moving patch in the content to be rendered is determined as the first patch set to be rendered. According to the first patch set to be rendered, the second patch set to be rendered is determined.
  • determining the moving patches in the models included in the multiple pieces of historical rendering content includes: determining moving pixels according to the difference of the same pixel between the rendering results corresponding to two pieces of rendering content and a detection threshold, and determining the moving patches based on the moving pixels.
  • a third aspect of the present application provides a computing device cluster including at least one computing device, each computing device including a processor and a memory.
  • the processor of the at least one computing device is configured to execute instructions stored in the memory of the at least one computing device to cause the cluster of computing devices to perform the first aspect or any method of the first aspect.
  • a fourth aspect of the present application provides a computer program product comprising instructions that, when executed by a cluster of computer devices, cause the cluster of computer devices to perform the first aspect or any of the methods of the first aspect.
  • a fifth aspect of the present application provides a computer-readable storage medium comprising computer program instructions that, when executed by a computing device cluster, cause the computing device cluster to perform the first aspect or any method of the first aspect.
  • FIG. 1(a) is a schematic diagram of a rendering structure under a single viewpoint provided by an embodiment of the present application.
  • FIG. 1(b) is a schematic diagram of patch division provided by an embodiment of the present application.
  • FIG. 1(c) is a schematic diagram of the correspondence between a pixel and a patch provided by an embodiment of the present application.
  • FIG. 1(d) is a schematic diagram of a pixel projection area provided by an embodiment of the present application.
  • FIG. 2 is a flowchart of a rendering method provided by an embodiment of the present application.
  • FIG. 3 is a flowchart of a rendering method provided by an embodiment of the present application.
  • FIG. 4 is an architecture of a rendering engine provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a computing device according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a computing device cluster according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a connection mode of a computing device cluster according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a connection manner of a computing device cluster according to an embodiment of the present application.
  • first and second in the embodiments of the present application are only used for the purpose of description, and cannot be understood as indicating or implying relative importance or implying the number of indicated technical features. Thus, a feature defined as “first” or “second” may expressly or implicitly include one or more of that feature.
  • A patch is the smallest planar unit in two-dimensional or three-dimensional space.
  • a model in space needs to be divided into many tiny planes.
  • these planes, also known as patches, can be any polygon, but triangles and quadrilaterals are most commonly used.
  • the intersections of the edges of these patches are the vertices of each patch.
  • patches can be divided according to information such as the material or color of the model. Also, each patch has two sides, and usually only one side is visible; therefore, in some cases it is necessary to perform backface culling on the patches.
  • the number of rays traced per pixel refers to the number of rays that pass through that pixel. A pixel is the smallest unit in the view plane; the screens we see are composed of pixels arranged one by one. In ray tracing, the color of a pixel is calculated from the colors (red, green, blue, RGB) of the rays passing through it. The number of rays traced per patch can likewise affect the rendering result: a larger number of traced rays per patch means that more rays are cast from the viewpoint toward the models in 3D space, and the more rays cast through each pixel, the more accurately the color value of each pixel can be calculated.
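  • As a minimal illustrative sketch in Python (not part of the original disclosure), the per-pixel color calculation described above can be written as an average over per-ray RGB samples; trace_ray is a hypothetical stand-in for a real ray tracer:

```python
import random

def trace_ray(x, y):
    """Hypothetical stand-in for a real ray tracer: returns the RGB
    contribution of one ray cast through pixel (x, y)."""
    return (random.random(), random.random(), random.random())

def shade_pixel(x, y, rays_per_pixel):
    """Average the RGB of all rays traced through one pixel; a higher
    rays_per_pixel (SPP) gives a more accurate color estimate."""
    r = g = b = 0.0
    for _ in range(rays_per_pixel):
        cr, cg, cb = trace_ray(x, y)
        r, g, b = r + cr, g + cg, b + cb
    n = float(rays_per_pixel)
    return (r / n, g / n, b / n)

# Example: estimate one pixel's color with 64 samples per pixel.
color = shade_pixel(0, 0, 64)
```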
  • Ray tracing is a general technique from geometric optics that traces the rays interacting with optical surfaces to obtain a model of the paths the rays travel. It is used in the design of optical systems such as camera lenses, microscopes, telescopes, and binoculars. When used for rendering, tracing rays from the eye rather than from the light source yields a mathematical model of the composed scene. The result is similar to that of ray casting and scanline rendering, but with better optical effects; for example, reflection and refraction are simulated more accurately, so this method is often used when such high-quality results are sought.
  • the ray tracing method first calculates the distance, direction, and new location of a ray that travels in the medium before being absorbed by the medium or changing direction. Then a new ray is generated from this new position, and the same processing method is used to finally calculate a complete path for the light to travel through the medium. Since the algorithm is a complete simulation of the imaging system, complex pictures can be simulated.
  • In ray tracing rendering, rays can be emitted from the viewpoint and, after hitting the models in the content to be rendered, return to the light source after a limited number of refractions and reflections. For ray tracing rendering, the larger the number of rays emitted from the viewpoint, the higher the quality of the rendered image.
  • the optimization of rendering technology mainly concerns the sampling method, for example, Monte Carlo-based per-pixel sampling, super-sampling, and distributed super-sampling. There are also methods that change or combine the propagation directions of the rays, for example, bidirectional ray tracing and hybrid ray tracing. However, all of the above methods treat every pixel identically. In practice, in a frame of picture, not all the models corresponding to the pixels receive high attention from users. For example, during a game, the user may pay attention to the moving models and some brightly colored models on the screen.
  • an embodiment of the present application provides a rendering method 400 based on attention.
  • This method may be performed by the rendering engine 500 .
  • the content to be rendered is divided into a first set of patches to be rendered and a second set of patches to be rendered based on the attention model.
  • ray tracing rendering of the first number of tracing rays is performed on the first patch set to be rendered.
  • ray tracing rendering of the second number of tracing rays is performed on the second patch set to be rendered.
  • the number of the first tracing rays is higher than the number of the second tracing rays.
  • the method divides the content to be rendered by establishing an attention model.
  • the first tracing ray count is used to sample the patches in the first patch set to be rendered, which ensures the rendering quality of these patches. This achieves the purpose of outputting high-quality rendered images without increasing the total number of sampled rays, as sketched below.
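  • A minimal sketch of this two-tier scheme, assuming hypothetical helpers is_high_attention and ray_trace_patch for the engine's internals (the ray counts 64 and 8 are arbitrary illustrative values):

```python
def ray_trace_patch(patch, ray_count):
    """Hypothetical stand-in for per-patch ray-traced shading."""
    return {"patch": patch, "rays": ray_count}

def render_content(patches, is_high_attention,
                   first_ray_count=64, second_ray_count=8):
    """Split the content to be rendered into a first (high-attention)
    and a second (low-attention) patch set, then trace more rays for
    the first set than for the second."""
    first_set = [p for p in patches if is_high_attention(p)]
    second_set = [p for p in patches if not is_high_attention(p)]
    results = {}
    for p in first_set:
        results[p] = ray_trace_patch(p, first_ray_count)   # first (higher) ray count
    for p in second_set:
        results[p] = ray_trace_patch(p, second_ray_count)  # second (lower) ray count
    return results

# Example usage with string patch identifiers:
out = render_content(["patch_1", "patch_2"], lambda p: p == "patch_1")
```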
  • Figure 1(a) shows a schematic diagram of a rendering structure under a single viewpoint.
  • the rendering structure includes at least a virtual view point 100 , a virtual view plane 200 , a model 600 and a light source 302 .
  • the virtual viewpoint 100 simulates, in space, a person's eye or eyes for perceiving three-dimensional structure. Each frame of picture corresponds to one space. According to the number of viewpoints, virtual viewpoints 100 can be divided into monocular, binocular, and multi-view viewpoints. Specifically, a binocular or multi-view viewpoint acquires two or more images from two or more different viewpoints to reconstruct the 3D structure or depth information of the target model.
  • the virtual viewing plane 200 is an analog display screen in space.
  • the construction of the virtual view plane 200 is mainly determined by two factors, the distance from the virtual view point 100 to the virtual view plane 200 and the screen resolution.
  • the distance from the virtual view point 100 to the virtual view plane 200 refers to the vertical distance from the virtual view point 100 to the virtual view plane 200 . Further, the distance can be set as required.
  • the screen resolution refers to the number of pixels contained in the virtual viewing plane 200 .
  • the virtual view plane 200 includes one or more pixels.
  • the virtual view plane 200 contains 9 pixels (3*3).
  • the results obtained through the rendering operation can be used for output.
  • the rendering results of the pixels in the virtual view plane 200 together constitute a frame of picture. That is, in one ray tracing pass, one virtual view plane 200 corresponds to one frame of picture.
  • Corresponding to the virtual viewing plane is a display screen on the client side for outputting the final result.
  • the screen resolution of the display is not necessarily equal to the screen resolution of the virtual viewing plane.
  • the rendering result on the virtual viewing plane 200 may be output to the display screen at a ratio of 1:1.
  • the rendering result on the virtual viewing plane 200 is output to the display screen according to a certain ratio.
  • the calculation of the specific ratio belongs to the prior art, and details are not repeated here.
  • One or more models 600 may be contained in the space. Which models 600 can be included in the rendering result corresponding to the virtual view plane 200 is determined by the relative position between the corresponding virtual view point 100 and each model 600 .
  • Before rendering operations, it is usually necessary to divide the surface of the model into multiple patches. The size and shape of the patches may or may not be consistent. The specific method for dividing patches belongs to the prior art and is not described here.
  • FIG. 1(b) shows the patch division of one face of the model 600. As shown in FIG. 1(b), one face of the model 600 is divided into 6 triangular patches of different sizes.
  • All the vertices in the space include not only the intersections (e.g., D1, D2, D4, D6) of the faces of the model 600, but also the vertices (e.g., D0, D3, D5) of each face.
  • Figure 1(c) shows a schematic diagram of the correspondence between pixels and patches.
  • the bolded box in FIG. 1(c) is the projection of one pixel of the virtual view plane 200 in FIG. 1(a) onto the model 600. It can be seen that the pixel projection area covers part of the area of each of patches 1 to 6.
  • the pixel projection area indicates the area enclosed by the projection of the pixel on the model.
  • a pixel projection area can cover multiple patches or only one patch. Wherein, when one pixel projection area covers only one patch, it may cover the entire area of the patch, or may cover part of the area of the patch.
  • the projection area of one pixel covers only part of the area of patch 6. That is, patch 6 can cover multiple pixel projection areas at the same time.
  • each model in the space can be divided into multiple polygonal patches, and all the vertices in the space are a collection of vertices of each polygonal patch.
  • the pixel projection area corresponding to one pixel may cover one or more patches, and one patch may also cover the pixel projection area corresponding to one or more pixels.
  • the light source 302 is a virtual light source set in the space for generating a lighting environment in the space.
  • the light source 302 may be of any of the following types: a point light source, an area light source, a line light source, or the like. Further, one or more light sources 302 may be included in the space, and when there are multiple light sources 302 in the space, their types may differ.
  • Operations such as the setting of the virtual viewpoint, the setting of the virtual viewing plane, the establishment of the model, and the division of the patches in the above-mentioned space are usually completed before the rendering operation is performed.
  • the above steps may be performed by a rendering engine 500, such as a video rendering engine or a game rendering engine (e.g., Unity or Unreal Engine).
  • the rendering engine 500 can receive the above-mentioned relative positional relationship and related information.
  • the information includes the type and number of virtual viewpoints, the distance from the virtual viewing plane to the virtual viewpoint and the screen resolution, the lighting environment, the relative positional relationship between each model and the virtual viewpoint, the patch division of each model, patch number information, patch material information, and the like.
  • the rendering engine 500 may further execute the rendering method 400 below.
  • an embodiment of the rendering method 400 provided by this application includes: after the content to be rendered is analyzed and judged by the rendering system, determining a first patch set to be rendered that has a high-attention attribute in the content to be rendered, and a second patch set to be rendered that has a low-attention attribute.
  • the patch set includes one or more patches.
  • ray tracing rendering is performed on the patches in the first patch set to be rendered using the first tracing ray count to obtain the rendering result of the first patch set to be rendered, and on the patches in the second patch set to be rendered using the second tracing ray count to obtain the rendering result of the second patch set to be rendered.
  • based on the rendering result of the first patch set to be rendered and the rendering result of the second patch set to be rendered, the rendering result of the content to be rendered can be obtained.
  • the rendering results of the patches contained in one or more models in the content to be rendered can be obtained. Further, based on the correspondence between one or more models in the content to be rendered and the pixels in the viewing plane, the color value of each pixel in the viewing plane can be determined, thereby obtaining the rendering result of the content to be rendered.
  • the content to be rendered includes other patches than the first patch set to be rendered and the second patch set to be rendered.
  • Ray tracing for a certain number of tracing rays may be performed on the other patches.
  • the ray tracing method belongs to the prior art and will not be described again.
  • the color value of each pixel in the viewing plane can be determined, so as to obtain the rendering result of the content to be rendered.
  • the judgment of the content to be rendered by the rendering engine 500 includes the division of models by attention degree, the identification of salient patches in the high-attention models, and the identification of moving patches.
  • the method includes two parts: determining the high-attention patches and ray tracing.
  • the part of determining the high-attention patches includes S100 to S110.
  • the rendering engine 500 collects historical rendering results.
  • the rendering engine 500 collects the rendering results within the acquisition time. Wherein, one frame of rendering result corresponds to one frame of picture. Therefore, the rendering engine 500 needs to collect all rendering results within the collection time. Specifically, all rendering results during the acquisition time include rendering results generated under all or part of the virtual viewpoints during the acquisition time.
  • the start and end time nodes of the collection time can be set according to requirements.
  • each piece of rendering content includes at least one model.
  • the rendering engine 500 needs to collect rendering results generated by multiple players running the game map within the collection time. Among them, a player can produce multi-frame rendering results.
  • the above collection operation may be performed by the rendering engine 500 , and at the same time, the historical rendering results obtained by collection will be stored in the rendering engine 500 .
  • the rendering engine 500 counts the number of appearances and the average staying time of each model in the historical rendering results.
  • the rendering engine 500 may count the number of occurrences and the average stay duration of the rendering results corresponding to each model within the collection time.
  • the number of occurrences needs to be counted in units of models.
  • the number of times indicates the frequency in which the rendering results corresponding to each model appear in the collected rendering results, in units of frames.
  • When a rendering result only includes the rendering result of a partial area of the model, it may be considered that the model does not appear in that rendering result.
  • the average dwell time needs to be counted in units of models.
  • the stay duration indicates the duration of consecutive appearances of the same model in the rendering results corresponding to the same virtual viewpoint, that is, the number of consecutive frames in which it stays. In consecutive frames, the same model may correspond to different model rendering results.
  • for example, a model remains within the view plane, but at different angles relative to the virtual viewpoint.
  • in these frames, the model rendering result of the model always exists, although the model rendering results are not exactly the same.
  • in this case, the model can be considered to persist in the rendering results.
  • each model can correspond to one or more dwell durations.
  • the multiple stay durations may be unequal to each other.
  • the average stay duration of each model can be obtained.
  • the average value may be an arithmetic average value or a weighted average value.
  • the above-mentioned statistical operations of the number of occurrences and the average stay duration may be performed by the rendering engine 500 .
  • the statistical results will be stored in the rendering engine 500 .
  • the respective models are also stored in the rendering engine 500 .
  • the rendering engine 500 determines the high-attention models according to the number of occurrences and the average stay duration of each model.
  • the rendering engine 500 may determine the high-attention models according to one or more of the following parameters: the number of occurrences of each model and the average stay duration of each model.
  • the high-attention models may be determined according to a count threshold and the number of occurrences of each model obtained in S102.
  • the count threshold can be set as required. Specifically, when the number of occurrences of a model is greater than the count threshold, the model is a high-attention model.
  • the high-attention models may be determined according to a stay duration threshold and the average stay duration of each model obtained in S102.
  • the stay duration threshold can be set as required. Specifically, when the average stay duration of a model is greater than the stay duration threshold, the model is a high-attention model.
  • the high-attention models can also be determined according to the count threshold, the stay duration threshold, the number of occurrences of each model, and the average stay duration. Specifically, when the number of occurrences of a model is greater than the count threshold and its average stay duration is greater than the stay duration threshold, the model is a high-attention model.
  • the high-attention models may also be determined according to the product of the number of appearances and the average stay duration of a model and a frequency-time threshold.
  • the frequency-time threshold can be set as required. Specifically, when the product of the number of appearances of a model and its average stay duration is greater than the frequency-time threshold, the model is a high-attention model.
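  • The alternative criteria above can be summarized by the following sketch; the threshold values are assumptions to be set as required, and each predicate corresponds to one of the designs just described:

```python
def by_count(appearances, count_threshold):
    """High attention when the model appears more often than the count threshold."""
    return appearances > count_threshold

def by_stay(avg_stay_frames, stay_threshold):
    """High attention when the average stay duration exceeds the stay threshold."""
    return avg_stay_frames > stay_threshold

def by_count_and_stay(appearances, avg_stay_frames,
                      count_threshold, stay_threshold):
    """High attention when both thresholds are exceeded."""
    return appearances > count_threshold and avg_stay_frames > stay_threshold

def by_product(appearances, avg_stay_frames, freq_time_threshold):
    """High attention when appearances x average stay exceeds the
    frequency-time threshold."""
    return appearances * avg_stay_frames > freq_time_threshold
```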
  • for a model that has already been marked as a high-attention model, repeated marking may be omitted.
  • a high attention model is determined according to the number of appearances and the average staying time of each model in the historical rendering result.
  • the salient patches in the high-attention model need to be determined, and the specific part of determining the salient patches includes S106 to S110.
  • the rendering engine 500 detects the high attention model and determines the salient patches.
  • since the historical rendering results contain multiple frames of rendering results, for each frame, the position of the rendering result of the high-attention model in that frame is determined, that is, the pixels covered by the rendering result of the high-attention model on the corresponding virtual view plane.
  • the salient pixels in the rendering result of the high-attention model are then determined, so as to determine the corresponding salient patches in the high-attention model.
  • the rendering engine 500 may determine saliency patches using saliency detection methods.
  • the saliency detection method may be a Boolean map based saliency model (BMS).
  • the saliency detection method may also be a color saliency method or the like.
  • the saliency detection method used in detecting the rendering result of the high-interest model in S106 may be a combination of multiple methods. Specifically, the Boolean graph-based saliency detection method and the color saliency method can be used for detection at the same time.
  • the Boolean-map-based saliency detection method converts the color value of each pixel into a Boolean value. Specifically, the conversion can be realized according to the color of the pixel and a conversion threshold, where the conversion threshold can be set as required.
  • a significant pixel can be determined according to the Boolean value of each pixel. Specifically, when the Boolean values of two adjacent pixels are different, these two pixels are considered to be part of the salient pixels.
  • here, adjacent indicates two pixels that are vertically or horizontally next to each other in the virtual view plane.
  • the patches corresponding to the salient pixels are potential salient patches. It should be noted that salient patches must be part of a high-attention model.
  • salient patches can therefore be obtained by removing, from the aforementioned set of potential salient patches, the patches that do not belong to the patch sets included in the high-attention models.
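  • A minimal sketch of the Boolean-map step, assuming frames are 2D lists of (r, g, b) tuples in [0, 1] and using luminance thresholding as the color-to-Boolean rule (both the rule and the threshold value are assumptions):

```python
def boolean_map(frame, threshold=0.5):
    """Convert each pixel's color to a Boolean value by thresholding
    its luminance (the exact color-to-Boolean rule is an assumption)."""
    return [[(0.299 * r + 0.587 * g + 0.114 * b) > threshold
             for (r, g, b) in row] for row in frame]

def salient_pixels(frame, threshold=0.5):
    """A pixel is part of the salient pixels when its Boolean value
    differs from an up/down or left/right neighbour."""
    bmap = boolean_map(frame, threshold)
    h, w = len(bmap), len(bmap[0])
    salient = set()
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):  # right and down neighbours
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and bmap[y][x] != bmap[ny][nx]:
                    salient.update({(y, x), (ny, nx)})
    return salient

# Potential salient patches are those covering salient pixels; patches
# not belonging to a high-attention model are then culled.
```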
  • the rendering engine 500 detects the historical rendering results and determines the moving patch.
  • the moving patch can be determined by using the moving target detection method.
  • the moving target detection method may be an inter-frame difference method.
  • the moving target detection method may also be a background subtraction method or an optical flow method or the like. The following will take the inter-frame difference method as an example for introduction.
  • the historical rendering results obtained in step S100 include multi-frame rendering results corresponding to multiple virtual viewpoints.
  • one virtual viewpoint may correspond to multiple frames of rendering results.
  • the multi-frame rendering results corresponding to a virtual viewpoint are arranged in time sequence. Specifically, they can be arranged in time sequence from far to near. Optionally, they can also be arranged in order from near to far.
  • the number of pixels in the virtual view plane is fixed. In other words, the number of pixels in each frame of pictures in the multi-frame pictures corresponding to the same virtual viewpoint is fixed. Therefore, in units of pixels, the difference between the colors of the pixels of two adjacent frames is calculated.
  • the inter-frame difference threshold can be set as required. Specifically, when the difference is smaller than the inter-frame difference threshold, it is considered that the pixel in the next frame does not belong to the pixel corresponding to the moving patch. When the difference is greater than or equal to the inter-frame difference threshold, it is considered that the pixel in the subsequent frame belongs to the pixel corresponding to the moving patch.
  • the pixel set corresponding to the rendering result of the moving patch can be determined.
  • the moving patch can be determined according to the pixel set corresponding to the rendering result of the moving patch and the model corresponding to each pixel in the pixel set. Specifically, how to determine the moving patch will be described in detail later.
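  • A minimal sketch of the inter-frame difference step, assuming frames are 2D lists of (r, g, b) tuples and a summed absolute difference as the per-pixel metric (both the metric and the threshold are assumptions):

```python
def moving_pixels(prev_frame, next_frame, diff_threshold):
    """Return pixels of the later frame whose color differs from the
    previous frame by at least diff_threshold; such pixels are treated
    as belonging to moving patches."""
    moving = set()
    for y, (prev_row, next_row) in enumerate(zip(prev_frame, next_frame)):
        for x, ((pr, pg, pb), (nr, ng, nb)) in enumerate(zip(prev_row, next_row)):
            if abs(nr - pr) + abs(ng - pg) + abs(nb - pb) >= diff_threshold:
                moving.add((y, x))
    return moving

# The moving patches are then recovered from the moving pixels via the
# pixel-to-patch correspondence kept by the engine.
```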
  • step S108 does not necessarily need to occur after step S102; it only needs to occur after the historical rendering results are collected in step S100. In other words, step S108 may occur at any time after step S100 and before step S110.
  • the moving patches may include patches in the low-focus model. That is, the pixels covered by the rendering result of the moving patch may include the pixels covered by the rendering result of the low attention model.
  • the rendering engine 500 determines a high-concern patch and a low-concern patch according to the high-concern model, the salient patch, and the moving patch.
  • the high attention patches can be determined. Further, low attention patches can be determined.
  • Step S104 is to determine the high attention model by collecting statistics on all historical rendering results.
  • the salient patch is determined based on the high-attention model, and is performed frame by frame in the historical rendering result.
  • the determination of the moving patch in step S108 is also determined frame by frame.
  • the determination of the high-intensity patch is performed frame by frame.
  • the high-attention patches may be determined according to the patches included in the high-attention models. In other words, all the patches included in the high-attention models can be determined as high-attention patches.
  • in this case, the low-attention patches may be the patches included in the content to be rendered other than the high-attention patches.
  • the high-attention patches may be determined according to the salient patches in the high-attention models. That is, the salient patches in the high-attention models can all be determined as high-attention patches.
  • in this case, the low-attention patches may be the patches included in the content to be rendered other than the high-attention patches.
  • alternatively, the low-attention patches may be the patches included in the high-attention models other than the high-attention patches.
  • in this case, the patches not included in the high-attention models may be left unmarked, and conventional ray tracing may be performed on them according to the prior art in subsequent ray tracing.
  • all moving patches may also be determined as high-attention patches.
  • in this case, the low-attention patches may be the patches included in the content to be rendered other than the high-attention patches.
  • alternatively, the moving patches belonging to a high-attention model may be determined as high-attention patches.
  • in this case, the low-attention patches may be the patches included in the content to be rendered other than the high-attention patches.
  • the low-attention patches may also be the patches in the content to be rendered other than the patches included in the high-attention models.
  • the low-attention patches may also be the patches included in the high-attention models other than the moving patches.
  • the low-attention patches may also be the patches among the moving patches that do not belong to a high-attention model.
  • the moving patches that belong to the salient patches may also be determined as high-attention patches.
  • in this case, the low-attention patches may be the patches included in the content to be rendered other than the high-attention patches.
  • alternatively, the low-attention patches may be one or more of the following: moving patches other than the patches included in the high-attention models; patches in the high-attention models other than the salient patches and the moving patches; and moving patches that belong to a high-attention model but are not salient patches.
  • in this way, the high-attention patches in each model can be obtained. Further, the low-attention patches in each model can be obtained.
  • the above-mentioned determination operation for the first patch set to be rendered is performed by the rendering engine 500 , and the information of the high/low attention patches in each model will also be stored in the rendering engine 500 .
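  • One of the alternative combinations above, expressed as set operations (a sketch; which combination is used is an application-dependent choice):

```python
def high_attention_patches(salient, moving, high_attention_model_patches):
    """One alternative: the salient patches plus the moving patches
    that belong to a high-attention model."""
    return set(salient) | (set(moving) & set(high_attention_model_patches))

def low_attention_patches(all_patches, high_patches):
    """The complement rule used throughout: everything else in the
    content to be rendered is a low-attention patch."""
    return set(all_patches) - set(high_patches)
```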
  • in step S110, after the high-attention patches and low-attention patches in each model of the application are determined, ray tracing can be performed accordingly on the current content to be rendered belonging to the same application.
  • the specific ray tracing part includes S112 and S114.
  • the rendering engine 500 acquires the first and second patch sets to be rendered in the content to be rendered according to the high-attention patches and the low-attention patches, and performs ray tracing on each.
  • the content to be rendered and the multiple pieces of rendering content within the collection time involved in step S100 belong to different processes (viewpoints) within the same application. That is, one or more models included in the content to be rendered may partially appear in the multiple pieces of historical rendering content.
  • the models in the content to be rendered are judged to determine the high-attention models in the content to be rendered. Further, the first patch set to be rendered in the content to be rendered is determined, and ray tracing rendering is performed on it based on the first tracing ray count. At the same time, the second patch set to be rendered may be determined according to the low-attention patches, and ray tracing rendering may be performed on the second patch set to be rendered in the content to be rendered based on the second tracing ray count.
  • the coordinates of the high-attention area determined by the high-attention patches may be obtained from the coordinates of the first patch set to be rendered in space. Further, by establishing a connection between the virtual viewpoint and the high-attention area, the imaging area corresponding to the high-attention area can be determined on the virtual view plane, and high-SPP (samples per pixel) ray tracing can be performed on the pixels covered by that imaging area.
  • similarly, the coordinates of the low-attention area determined by the low-attention patches can be obtained from the coordinates of the second patch set to be rendered in space. Further, by establishing a connection between the virtual viewpoint and the low-attention area, the imaging area corresponding to the low-attention area can be determined on the virtual view plane, and low-SPP ray tracing can be performed on the pixels covered by that imaging area.
  • optionally, ray tracing with a certain number of samples per mesh (SPM) can also be performed directly on the first/second patch sets to be rendered in space.
  • performing ray tracing of a certain number of SPPs on the pixels in the virtual view plane belongs to the prior art, and thus will not be repeated here.
  • the following is an example of performing ray tracing of a certain number of SPMs on each patch in space.
  • the SPM for ray tracing on the first set of patches to be rendered is greater than the SPM for ray tracing on the second set of patches to be rendered.
  • ray tracing with the same number of rays may be performed on each patch in the first patch set to be rendered.
  • alternatively, the content to be rendered may be divided into two parts according to the high-attention patches: a high-attention area and a low-attention area.
  • ray tracing is then performed with a higher total number of rays allocated to the high-attention area and, correspondingly, a lower number of rays allocated to the low-attention area.
  • in this case, the number of rays emitted from the viewpoint may be less than the upper limit of the number of rays the viewpoint can emit, thereby further improving the efficiency of ray tracing.
  • optionally, smooth ray tracing may be performed on the second patch set to be rendered or on the low-attention area.
  • smooth ray tracing indicates that the number of traced rays for such a patch or area may be inversely related to the distance between that patch or area and the first patch set to be rendered or the high-attention area. Specifically, more rays are traced for areas close to the first patch set to be rendered or the high-attention area than for areas far from them.
  • smooth ray tracing makes the transition from the rendering result of the high-attention area to that of the low-attention area more natural, thereby further improving the image quality of the rendering result.
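  • Smooth ray tracing might be sketched as an inverse-distance falloff of the per-patch ray count; the falloff constant, the floor value, and the exact functional form are assumptions:

```python
def smooth_ray_count(patch_pos, high_attention_positions,
                     first_ray_count=64, min_ray_count=4, falloff=8.0):
    """Allocate fewer traced rays to patches farther from the nearest
    high-attention patch, so the rendered image transitions gradually
    from the high-attention area to the low-attention area."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

    if not high_attention_positions:
        return min_ray_count
    d = min(dist(patch_pos, p) for p in high_attention_positions)
    # Ray count inversely related to distance, clamped to a floor.
    return max(min_ray_count, int(first_ray_count / (1.0 + d / falloff)))
```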
  • the above-mentioned ray tracing operation is performed by the rendering engine 500 , and the rendering result of the first patch set to be rendered and the rendering result of the second patch set to be rendered will be stored in the rendering engine 500 .
  • the rendering engine 500 obtains and stores the ray tracing rendering result.
  • the ray tracing rendering result of the content to be rendered can be obtained.
  • the rendering result may be stored in the historical rendering result in the rendering engine 500 .
  • FIG. 4 shows an architecture of rendering engine 500 .
  • the rendering engine 500 includes a communication unit 502 , a storage unit 504 and a processing unit 506 .
  • the storage unit 504 is configured to store, in step S102, the number of occurrences and the average staying time of each model in the historical rendering result. At the same time, the storage unit 504 is also used for storing the information of each model in the space. In step S104, the storage unit 504 is used for storing high/low attention information of each model. The information of the first patch set to be rendered determined in step S110 will also be stored in the storage unit 504 .
  • the storage unit 504 is further configured to store the salient patches in the historical rendering results of each frame in step S106, and store the moving patches in the historical rendering results of each frame in step S108.
  • the processing unit 506 is configured to collect historical rendering results and store the historical rendering results in the storage unit 504 .
  • the processing unit 506 is configured to obtain historical rendering results from the storage unit 504 .
  • the processing unit 506 is configured to count the number of appearances and the average staying time of each model in the historical rendering results.
  • the processing unit 506 is further configured to determine the high-attention model according to the number of appearances of each model and the average stay duration in step S104.
  • step S106 the processing unit 506 is used to determine salient patches.
  • the processing unit 506 is further configured to detect the moving patch in step S108.
  • step S110 the operation of determining the first set of patches to be rendered according to the high-attention model, the salient patches, and the moving patches is also performed by the processing unit 506 .
  • the processing unit 506 is further configured to perform the operation of performing ray tracing rendering on the content to be rendered according to the first set of patches to be rendered in step S112.
  • the processing unit 506 is configured to obtain a ray tracing rendering result of the content to be rendered.
  • the processing unit 506 is further configured to store the obtained ray tracing rendering result of the content to be rendered into the storage unit 504.
  • the processing unit may include a determination unit 508 and a ray tracing unit 510.
  • the determining unit 508 is configured to collect historical rendering results and store the historical rendering results in the storage unit 504 .
  • the determining unit 508 is configured to obtain the historical rendering result from the storage unit 504 .
  • the determining unit 508 is configured to count the number of appearances and the average staying time of each model in the historical rendering results.
  • the determining unit 508 is further configured to determine the high attention model according to the number of times each model appears and the average stay duration in step S104.
  • step S106 the determination unit 508 is used to determine the salient patches.
  • the determining unit 508 is further configured to detect the moving patch in step S108.
  • step S110 the operation of determining the first set of patches to be rendered according to the high-attention model, the salient patches, and the moving patches is also performed by the determining unit 508 .
  • the ray tracing unit 510 is configured to perform the operation of performing ray tracing rendering on the content to be rendered according to the first set of patches to be rendered in step S112.
  • the ray tracing unit 510 is configured to obtain the ray tracing rendering result of the content to be rendered. Further, the ray tracing unit 510 is configured to store the obtained ray tracing rendering result of the content to be rendered into the storage unit 504.
  • the storage unit 504 is configured to store the historical rendering results and the ray tracing rendering results obtained in step S114.
  • the communication unit 502 is configured to receive the content to be rendered in step S112. Optionally, it is also used to send the ray tracing rendering result obtained in step S114.
  • the communication unit 502, the storage unit 504, and the processing unit 506 in the rendering engine 500 may all be deployed on a cloud device or a local device, respectively. Wherein, the communication unit 502, the storage unit 504 and the processing unit 506 may also be respectively deployed on different cloud devices or local devices.
  • FIG. 5 provides a schematic structural diagram of a computing device 600 .
  • computing device 600 includes: bus 602 , processor 604 , memory 606 , and communication interface 608 .
  • the processor 604, the memory 606, and the communication interface 608 communicate through the bus 602.
  • the bus 602 may be a peripheral component interconnect (PCI) bus or an extended industry standard architecture (EISA) bus or the like.
  • the bus can be divided into address bus, data bus, control bus and so on. For ease of presentation, only one thick line is used in FIG. 5, but it does not mean that there is only one bus or one type of bus.
  • Bus 602 may include pathways for communicating information between various components of computing device 600 (eg, memory 606, processor 604, communication interface 608).
  • the processor 604 may be any one or more of the following devices: a central processing unit (CPU), a graphics processing unit (GPU), a microprocessor (MP), a digital signal processor (DSP), or the like.
  • Memory 606 may include volatile memory, such as random access memory (RAM).
  • the memory 606 may also include non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid state drive (SSD).
  • Executable program codes are stored in the memory 606, and the processor 604 executes the executable program codes to realize the functions of the foregoing rendering engine 500, or execute the rendering method 400 described in the foregoing embodiments.
  • Communication interface 608 enables communication between computing device 600 and other devices or communication networks using a transceiver module such as, but not limited to, a transceiver.
  • the content to be rendered and the like can be obtained through the communication interface 608 .
  • Embodiments of the present application further provide a computing device cluster.
  • the computing device cluster includes at least one computing device 600 .
  • the computing devices included in the computing device cluster may all be terminal devices, may all be cloud servers, or may be partly cloud servers and partly terminal devices.
  • the memory 606 of one or more computing devices 600 in the computing device cluster may store the same rendering engine 500 for executing the instructions of the rendering method 400 .
  • one or more computing devices 600 in the computing device cluster may also be used to execute some instructions of the rendering engine 500 for executing the rendering method 400 .
  • a combination of one or more computing devices 600 may collectively execute the instructions of rendering engine 500 for performing rendering method 400 .
  • the memory 606 in different computing devices 600 in the computing device cluster may store different instructions for performing some of the functions of the rendering method 400.
  • Figure 7 shows one possible implementation.
  • two computing devices 600A and 600B are connected through a communication interface 608 .
  • Instructions for performing the functions of the communication unit 502, the determination unit 508, and the ray tracing unit 510 are stored in the memory of computing device 600A.
  • Instructions for performing the functions of the storage unit 504 are stored in the memory of computing device 600B.
  • the memory 606 of the computing devices 600A and 600B collectively stores the instructions of the rendering engine 500 for performing the rendering method 400 or the rendering method 700 .
  • the connection manner of the computing device cluster shown in FIG. 7 takes into account that the rendering method 400 or the rendering method 700 provided by the present application needs to store a large number of historical rendering results of the patches in the historical frames; therefore, the storage function is offloaded to the computing device 600B.
  • the functions of the computing device 600A shown in FIG. 7 may also be performed by multiple computing devices 600.
  • the functions of computing device 600B may also be performed by multiple computing devices 600 .
  • one or more computing devices in a cluster of computing devices may be connected by a network.
  • the network may be the Internet or a local area network, or the like.
  • Figure 8 shows one possible implementation. As shown in FIG. 8, two computing devices 600C and 600D are connected through a network. Specifically, the network is connected through a communication interface in each computing device.
  • the memory 606 in the computing device 600C contains instructions for executing the communication unit 502 and the determination unit 508 .
  • the memory 606 in the computing device 600D stores instructions for executing the storage unit 504 and the ray tracing unit 510.
  • the connection manner of the computing device cluster shown in FIG. 8 takes into account that the rendering method 400 or the rendering method 700 provided by the present application needs to perform a large number of ray tracing calculations and to store the historical rendering results of the patches in a large number of historical frames; therefore, the functions implemented by the ray tracing unit 510 and the storage unit 504 are entrusted to the computing device 600D.
  • the functions of the computing device 600C shown in FIG. 8 may also be performed by multiple computing devices 600.
  • the functions of computing device 600D may also be performed by multiple computing devices 600 .
  • Embodiments of the present application also provide a computer-readable storage medium.
  • the computer-readable storage medium may be any available medium on which a computing device can store data, or a data storage device, such as a data center, that contains one or more available media.
  • the usable media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state drives), and the like.
  • the computer-readable storage medium includes instructions that instruct a computing device to perform the above-described rendering method 400 applied to the rendering engine 500 .
  • Embodiments of the present application also provide a computer program product including instructions.
  • the computer program product may be software or a program product, containing instructions, that can run on a computing device or be stored in any available medium.
  • when the computer program product is executed on at least one computer device, the at least one computer device is caused to execute the above-described rendering method 400 applied to the rendering engine 500 .


Abstract

The present application provides a rendering method and apparatus. After receiving content to be rendered of an application, the method acquires a first set of patches to be rendered and a second set of patches to be rendered from the content to be rendered, renders the first set of patches to be rendered based on a first number of traced rays, and renders the second set of patches to be rendered based on a second number of traced rays, where the first number of traced rays is higher than the second number of traced rays, thereby obtaining a rendering result of the first set of patches to be rendered and a rendering result of the second set of patches to be rendered. By dividing the content to be rendered into the first set of patches to be rendered and the second set of patches to be rendered, and by performing ray tracing with different numbers of traced rays on the two sets, the rendering method effectively improves the efficiency of ray tracing rendering.

Description

Rendering Method and Apparatus
Technical Field
The present application relates to the field of graphics rendering, and in particular, to a rendering method and apparatus.
Background
Ray tracing rendering has long been a foundational technique in computer graphics, and to date it remains the principal technique for producing high-quality, photorealistic images. However, the technique has always required long computation times to complete the large number of Monte Carlo integration calculations that produce the final result, so it has mainly been applied in offline rendering scenarios such as film, television, and animation. As computing hardware has grown more powerful in recent years, rendering business fields with strong real-time requirements (games, virtual reality) have emerged, and the demand for ray tracing rendering has become ever stronger.
The more rays emitted from the virtual viewpoint, the higher the quality of the rendered image. Completing one high-quality rendered image requires emitting millions of rays from the virtual viewpoint, which consumes considerable computing resources.
Therefore, how to improve the efficiency of ray tracing rendering without degrading image quality has become a key concern in the industry.
Summary
The present application provides a rendering method that can improve the efficiency of ray tracing rendering.
A first aspect of the present application provides a rendering method, including: receiving content to be rendered of an application, where the content to be rendered includes at least one model and each model includes at least one patch; acquiring a first set of patches to be rendered and a second set of patches to be rendered from the content to be rendered; rendering the first set of patches to be rendered based on a first number of traced rays and rendering the second set of patches to be rendered based on a second number of traced rays, where the first number of traced rays is higher than the second number of traced rays; and obtaining a rendering result of the first set of patches to be rendered and a rendering result of the second set of patches to be rendered.
In some possible designs, the method further includes: obtaining rendering results corresponding to multiple pieces of historical rendering content of the application, where each piece of historical rendering content includes at least one model; and determining, according to the number of times each model appears in the multiple pieces of historical rendering content, the high-attention models included in the multiple pieces of historical rendering content. Acquiring the first set of patches to be rendered and the second set of patches to be rendered from the content to be rendered includes: determining the high-attention models in the content to be rendered according to the high-attention models included in the multiple pieces of historical rendering content, and determining the first set of patches to be rendered from the high-attention models in the content to be rendered.
In some possible designs, the method further includes: obtaining rendering results corresponding to multiple pieces of historical rendering content of the application, where each piece of historical rendering content includes at least one model; and determining, according to the number of dwell frames of each model in the multiple pieces of historical rendering content, the high-attention models included in the multiple pieces of historical rendering content. Acquiring the first set of patches to be rendered and the second set of patches to be rendered from the content to be rendered includes: determining the high-attention models in the content to be rendered according to the high-attention models included in the multiple pieces of historical rendering content, and determining the first set of patches to be rendered from the high-attention models in the content to be rendered.
In some possible designs, the method further includes: obtaining rendering results corresponding to multiple pieces of historical rendering content of the application, where each piece of historical rendering content includes at least one model; determining, according to the number of dwell frames and/or the number of appearances of each model in the multiple pieces of historical rendering content, the high-attention models included in the multiple pieces of historical rendering content; and determining, based on a saliency detection method, the salient patches in those high-attention models. Determining the first set of patches to be rendered from the high-attention models in the content to be rendered includes: determining, according to the salient patches in the high-attention models included in the multiple pieces of historical rendering content, the salient patches in the high-attention models in the content to be rendered as the first set of patches to be rendered.
In some possible designs, the method further includes: obtaining rendering results corresponding to multiple pieces of historical rendering content of the application; determining, based on a moving target detection method, the moving patches in the models included in the multiple pieces of historical rendering content; determining the moving patches in the content to be rendered as the first set of patches to be rendered; and determining the second set of patches to be rendered according to the first set of patches to be rendered. Determining the moving patches based on the moving target detection method includes: determining moving pixels according to a detection threshold and the difference of the same pixel between the rendering results corresponding to two pieces of the rendering content, and determining the moving patches according to the moving pixels.
In some possible designs, the number of traced rays for each patch in the second set of patches to be rendered is determined according to the distance between that patch and the patches in the first set of patches to be rendered.
In some possible designs, the second set of patches to be rendered is determined according to the content to be rendered and the first set of patches to be rendered.
A second aspect of the present application provides an apparatus for rendering, where the apparatus includes a communication unit, a processing unit, and a storage unit. The communication unit is configured to receive content to be rendered of an application. The storage unit is configured to store the content to be rendered. The processing unit is configured to: acquire a first set of patches to be rendered and a second set of patches to be rendered from the content to be rendered; render the first set of patches to be rendered based on a first number of traced rays; render the second set of patches to be rendered based on a second number of traced rays, where the first number of traced rays is higher than the second number of traced rays; and obtain a rendering result of the first set of patches to be rendered and a rendering result of the second set of patches to be rendered.
In some possible designs, the processing unit is further configured to: obtain rendering results corresponding to multiple pieces of historical rendering content of the application, where each piece of historical rendering content includes at least one model; and determine, according to the number of times each model appears in the multiple pieces of historical rendering content, the high-attention models included in the multiple pieces of historical rendering content. Acquiring the first set of patches to be rendered and the second set of patches to be rendered from the content to be rendered includes: determining the high-attention models in the content to be rendered according to the high-attention models included in the multiple pieces of historical rendering content, and determining the first set of patches to be rendered from the high-attention models in the content to be rendered.
In some possible designs, the processing unit is further configured to: obtain rendering results corresponding to multiple pieces of historical rendering content of the application, where each piece of historical rendering content includes at least one model; and determine, according to the number of dwell frames of each model in the multiple pieces of historical rendering content, the high-attention models included in the multiple pieces of historical rendering content. Acquiring the first set of patches to be rendered and the second set of patches to be rendered from the content to be rendered includes: determining the high-attention models in the content to be rendered according to the high-attention models included in the multiple pieces of historical rendering content, and determining the first set of patches to be rendered from the high-attention models in the content to be rendered.
In some possible designs, the processing unit is further configured to: obtain rendering results corresponding to multiple pieces of historical rendering content of the application, where each piece of historical rendering content includes at least one model; determine, according to the number of dwell frames and/or the number of appearances of each model in the multiple pieces of historical rendering content, the high-attention models included in the multiple pieces of historical rendering content; and determine, based on a saliency detection method, the salient patches in those high-attention models. Determining the first set of patches to be rendered from the high-attention models in the content to be rendered includes: determining, according to the salient patches in the high-attention models included in the multiple pieces of historical rendering content, the salient patches in the high-attention models in the content to be rendered as the first set of patches to be rendered.
In some possible designs, the processing unit is further configured to: obtain rendering results corresponding to multiple pieces of historical rendering content of the application; determine, based on a moving target detection method, the moving patches in the models included in the multiple pieces of historical rendering content; determine the moving patches in the content to be rendered as the first set of patches to be rendered; and determine the second set of patches to be rendered according to the first set of patches to be rendered. Determining the moving patches based on the moving target detection method includes: determining moving pixels according to a detection threshold and the difference of the same pixel between the rendering results corresponding to two pieces of the rendering content, and determining the moving patches according to the moving pixels.
A third aspect of the present application provides a computing device cluster, including at least one computing device, where each computing device includes a processor and a memory. The processor of the at least one computing device is configured to execute instructions stored in the memory of the at least one computing device, so that the computing device cluster performs the method of the first aspect or any possible design of the first aspect.
A fourth aspect of the present application provides a computer program product containing instructions that, when run by a computer device cluster, cause the computer device cluster to perform the method of the first aspect or any possible design of the first aspect.
A fifth aspect of the present application provides a computer-readable storage medium including computer program instructions that, when executed by a computing device cluster, cause the computing device cluster to perform the method of the first aspect or any possible design of the first aspect.
Brief Description of the Drawings
To describe the technical methods of the embodiments of the present application more clearly, the drawings used in the embodiments are briefly introduced below.
FIG. 1(a) is a schematic diagram of a rendering structure under a single viewpoint according to an embodiment of the present application;
FIG. 1(b) shows a patch division according to an embodiment of the present application;
FIG. 1(c) is a schematic diagram of the correspondence between a pixel and patches according to an embodiment of the present application;
FIG. 1(d) is a schematic diagram of a pixel projection area according to an embodiment of the present application;
FIG. 2 is a flowchart of a rendering method according to an embodiment of the present application;
FIG. 3 is a flowchart of a rendering method according to an embodiment of the present application;
FIG. 4 shows an architecture of a rendering engine according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a computing device according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a computing device cluster according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a connection manner of a computing device cluster according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a connection manner of a computing device cluster according to an embodiment of the present application.
Detailed Description
The terms "first" and "second" in the embodiments of the present application are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include one or more of that feature.
First, some technical terms involved in the embodiments of the present application are introduced.
Patch (tile): a patch is the smallest planar constituent unit in two-dimensional or three-dimensional space. In rendering, a model in space usually needs to be divided into a very large number of tiny planes. These planes are called patches; they can be arbitrary polygons, with triangles and quadrilaterals being the most common. The intersections of the edges of these patches are the vertices of the patches. Patches may be divided randomly according to information such as the material or color of the model. In addition, every patch has a front face and a back face, and usually only one of them can be seen, so in some cases a back-face culling operation needs to be performed on the patches.
Samples per pixel (SPP): the number of rays traced through each pixel, where a pixel is the smallest unit of the view plane. The screens we usually see are arrays of pixels. The color of a pixel is computed from the colors (red, green, blue, RGB) of the rays passing through that pixel during ray tracing. In ray tracing, the number of rays traced per pixel affects the rendering result: the larger it is, the more rays are cast from the viewpoint toward the models in three-dimensional space, and the more rays are cast through a pixel, the more accurately the color value of that pixel can be computed.
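By way of illustration only, the accumulation of per-pixel samples can be sketched in Python as follows; this is a minimal sketch, and trace_ray, the camera object, and its generate_ray method are hypothetical placeholders rather than anything defined by this application:

    # Minimal sketch: averaging SPP ray samples into one pixel color.
    # trace_ray(scene, ray) is assumed to return an (r, g, b) tuple.
    import random

    def shade_pixel(scene, camera, px, py, spp):
        r = g = b = 0.0
        for _ in range(spp):
            # Jitter the sample position inside the pixel footprint.
            ray = camera.generate_ray(px + random.random(), py + random.random())
            cr, cg, cb = trace_ray(scene, ray)
            r, g, b = r + cr, g + cg, b + cb
        # The pixel color is the Monte Carlo average of its samples.
        return (r / spp, g / spp, b / spp)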
Ray tracing: also known as ray trace or ray path tracing, a general technique from geometrical optics that obtains a model of the path taken by light by tracking rays that interact with optical surfaces. It is used in the design of optical systems such as camera lenses, microscopes, telescopes, and binoculars. When used for rendering, rays emitted from the eye rather than from the light source are tracked, and through this technique a mathematical model of the arranged scene is made visible. The result is similar to that of ray casting and scanline rendering, but with better optical effects, for example more accurate simulation of reflection and refraction, and with very high efficiency, so this method is often used when such high-quality results are pursued. Specifically, the ray tracing method first computes the distance and direction that a ray travels in a medium, and the new position it reaches, before the ray is absorbed by the medium or changes direction; a new ray is then generated from this new position, and the same processing is applied until the complete propagation path of the ray through the media has been computed. Because the algorithm is a full simulation of the imaging system, it can simulate and generate complex images.
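The recursive propagation just described can be outlined as follows, giving one possible shape for the trace_ray placeholder assumed above; intersect, scatter, and the scene object are likewise assumed placeholders for whatever geometry and material model an engine actually uses, so this is a sketch of the control flow only:

    # Sketch of the recursive ray propagation described above.
    # intersect(scene, ray) -> hit or None; scatter(hit, ray) -> (new_ray, attenuation).
    def trace_ray(scene, ray, depth=0, max_depth=8):
        if depth >= max_depth:
            return (0.0, 0.0, 0.0)            # treat the ray as absorbed
        hit = intersect(scene, ray)
        if hit is None:
            return scene.background(ray)      # the ray escapes the scene
        if hit.is_light:
            return hit.emission               # the ray has reached a light source
        # Generate a new ray from the new position and repeat the same process.
        new_ray, attenuation = scatter(hit, ray)
        r, g, b = trace_ray(scene, new_ray, depth + 1, max_depth)
        ar, ag, ab = attenuation
        return (ar * r, ag * g, ab * b)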
With the improvement of computing power and the needs of industry development, ray tracing rendering has gradually become a focus of the industry.
In ray tracing rendering, rays can be emitted from the viewpoint and, after touching a model in the content to be rendered, return to the light source after a finite number of refractions and reflections. For ray tracing rendering, the more rays emitted from the viewpoint, the higher the image quality of the rendered picture.
Current optimizations of rendering technology mainly optimize the sampling method, for example Monte Carlo per-pixel sampling, supersampling, and distributed supersampling, or change or combine the propagation directions of rays, for example bidirectional ray tracing and hybrid ray tracing. However, all of the above are methods that are repeated identically for every pixel, whereas in practice not all of the models corresponding to the pixels of a frame are of high interest to the user. For example, during a game, the user may focus on the moving models and some brightly colored models in the picture.
In view of this, an embodiment of the present application provides an attention-based rendering method 400, which may be performed by a rendering engine 500. Specifically, for the content that needs to be rendered in one frame, that content is divided, based on an attention model, into a first set of patches to be rendered and a second set of patches to be rendered. Ray tracing rendering with a first number of traced rays is then performed on the first set of patches to be rendered while ray tracing rendering with a second number of traced rays is performed on the second set of patches to be rendered, where the first number of traced rays is higher than the second number of traced rays.
During the formation of one frame, the number of rays used to sample the whole view plane should not exceed the number of rays corresponding to the maximum sampling capability of the rendering engine 500. Finally, the rendering result is obtained from the two kinds of sampling results described above.
By building an attention model, the method divides the content to be rendered, and by sampling the patches in the first set of patches to be rendered with the first number of traced rays it guarantees the rendering quality of those patches, achieving the goal of outputting a high-quality rendered image without increasing (or while even reducing) the total number of sampling rays.
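The overall division of the ray budget can be sketched as follows; the partition_by_attention, trace, and compose calls and the SPP values are illustrative assumptions standing in for the steps detailed below, not an implementation prescribed by this application:

    # Sketch: rendering two patch sets with different per-patch ray counts.
    def render_frame(content, engine, high_spp=64, low_spp=8):
        # Partition the content using the attention model (detailed below).
        first_set, second_set = engine.partition_by_attention(content)
        results = {}
        for patch in first_set:                      # high-attention patches
            results[patch.id] = engine.trace(patch, rays=high_spp)
        for patch in second_set:                     # low-attention patches
            results[patch.id] = engine.trace(patch, rays=low_spp)
        return engine.compose(results)               # assemble the frame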
To make the technical solution of the present application clearer and easier to understand, before the rendering method 400 provided by the present application is introduced, the relationships among three basic concepts involved in rendering technology, namely patches, vertices, and pixels, are introduced first.
FIG. 1(a) shows a schematic diagram of a rendering structure under a single viewpoint. The rendering structure includes at least a virtual viewpoint 100, a virtual view plane 200, a model 600, and a light source 302.
The virtual viewpoint 100 simulates one or more human eyes in space and is used to perceive three-dimensional structure, where each frame corresponds to one space. By the number of viewpoints, the virtual viewpoint 100 can be classified as monocular, binocular, or multi-ocular. Specifically, a binocular or multi-ocular viewpoint acquires two or more images from two or more different viewpoints to reconstruct the 3D structure or depth information of the target model.
The virtual view plane 200 is a simulated display screen in space. Its construction is mainly determined by two factors: the distance from the virtual viewpoint 100 to the virtual view plane 200, and the screen resolution.
The distance from the virtual viewpoint 100 to the virtual view plane 200 refers to the perpendicular distance between the two, and it can be set as required.
The screen resolution refers to the number of pixels contained in the virtual view plane 200. In other words, the virtual view plane 200 contains one or more pixels. For example, in FIG. 1(a), the virtual view plane 200 contains 9 pixels (3*3).
In some possible implementations, the result obtained by the rendering operation can be used for output. In one ray tracing pass, the rendering results of all the pixels in the virtual view plane 200 together constitute one frame; that is, in one ray tracing pass one virtual view plane 200 corresponds to one frame.
Corresponding to the virtual view plane is the display screen on the user side used to output the final result. The screen resolution of that display is not necessarily equal to the screen resolution of the virtual view plane.
When the display screen and the virtual view plane 200 have equal screen resolutions, the rendering result on the virtual view plane 200 can be output to the display screen at a 1:1 ratio.
When the display screen and the virtual view plane 200 have different screen resolutions, the rendering result on the virtual view plane 200 is output to the display screen at a certain ratio. The specific calculation of that ratio belongs to the prior art and is not repeated here.
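As one prior-art option for that mapping, a nearest-neighbor rescaling can be sketched as follows; the scalar frame representation is an assumption made for brevity:

    # Sketch: mapping the view-plane result to a display of another resolution.
    def scale_to_display(frame, src_w, src_h, dst_w, dst_h):
        out = [[None] * dst_w for _ in range(dst_h)]
        for y in range(dst_h):
            for x in range(dst_w):
                sx = x * src_w // dst_w   # proportional coordinate mapping
                sy = y * src_h // dst_h
                out[y][x] = frame[sy][sx]
        return out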
The space may contain one or more models 600. Which models 600 are included in the rendering result corresponding to the virtual view plane 200 is determined by the relative positions of the corresponding virtual viewpoint 100 and the models 600.
Before a rendering operation is performed, the surface of a model usually needs to be divided into multiple patches, whose sizes and shapes may or may not be identical. The specific patch division methods belong to the prior art and are not repeated here.
FIG. 1(b) shows the patch division of one face of the model 600. As shown in FIG. 1(b), one face of the model 600 is divided into 6 triangular patches of different sizes.
The vertices in the space include not only the intersections of the faces of the model 600 (for example D1, D2, D4, D6), but also the vertices of the individual patches (for example D0, D3, D5).
FIG. 1(c) is a schematic diagram of the correspondence between a pixel and patches. The bold box in FIG. 1(c) is the projection, onto the model 600, of one pixel of the virtual view plane 200 in FIG. 1(a). It can be seen that this pixel projection area covers parts of patches 1 to 6. The pixel projection area refers to the area enclosed by the projection of the pixel onto the model.
One pixel projection area may cover multiple patches or only one patch. When it covers only one patch, it may cover either the whole patch or only part of it.
For example, as shown in FIG. 1(d), one pixel projection area covers part of patch 6; that is, patch 6 can simultaneously cover multiple pixel projection areas.
In summary, the surface of each model in space can be divided into multiple polygonal patches, and the set of all vertices in the space is the set of the vertices of these polygonal patches. The pixel projection area corresponding to one pixel may cover one or more patches, and one patch may likewise cover the pixel projection areas corresponding to one or more pixels.
The light source 302 is a virtual light source set in the space and is used to generate the lighting environment of the space. The light source 302 may be any of the following types: a point light, an area light, a line light, and so on. Further, the space may include one or more light sources 302, and when there are multiple light sources 302 their types may differ.
Operations such as setting the virtual viewpoint and the virtual view plane, building the models, and dividing the patches in the space are usually completed before the rendering operation is performed. These steps may be executed by a rendering engine 500 such as a film-and-television rendering engine 500 or a game rendering engine 500, for example Unity or Unreal.
After the relative positions of the virtual viewpoint, the virtual view plane, the light source, and the models are set, the rendering engine 500 can receive those relative position relationships and the related information, which includes the type and number of virtual viewpoints, the distance from the virtual view plane to the virtual viewpoint and the screen resolution, the lighting environment, the relative positions of the models and the virtual viewpoint, the patch division of each model, patch numbering information, patch material information, and so on. After obtaining this information, the rendering engine 500 can further perform the rendering method 400 described below.
The rendering method 400 provided by the embodiments of the present application is introduced below with reference to the drawings.
As shown in FIG. 2, one embodiment of the rendering method 400 provided by the embodiments of the present invention includes: after the content to be rendered passes the analysis and judgment of the rendering system, a first set of patches to be rendered having a high-attention attribute and a second set of patches to be rendered having a low-attention attribute are determined in the content to be rendered, where a patch set includes one or more patches. Ray tracing rendering with a first number of traced rays is performed on the patches in the first set of patches to be rendered to obtain the rendering result of the first set; at the same time, ray tracing rendering with a second number of traced rays is performed on the patches in the second set of patches to be rendered to obtain the rendering result of the second set.
The rendering result of the content to be rendered can then be obtained from the rendering result of the first set of patches to be rendered and the rendering result of the second set of patches to be rendered.
Specifically, when the content to be rendered contains only the first set of patches to be rendered and the second set of patches to be rendered, the rendering results of the patches contained in the one or more models of the content to be rendered can be obtained from the rendering results of the two sets. Further, based on the correspondence between the one or more models in the content to be rendered and the pixels of the view plane, the color value of each pixel in the view plane can be determined, thereby obtaining the rendering result of the content to be rendered.
Optionally, when the content to be rendered contains patches other than the first set of patches to be rendered and the second set of patches to be rendered, ray tracing with a certain number of traced rays can be performed on those other patches; the specific ray tracing method belongs to the prior art and is not repeated here. Likewise, after the rendering results of the patches contained in the one or more models of the content to be rendered are obtained, the color value of each pixel in the view plane can be determined, thereby obtaining the rendering result of the content to be rendered.
The rendering engine 500's judgment of the content to be rendered includes dividing the models by attention level, identifying the salient patches in high-attention models, and identifying the moving patches.
Next, the rendering method 400 provided by the embodiments of the present application is described in detail from the perspective of the rendering engine 500.
Referring to the flowchart of the rendering method 400 shown in FIG. 3, the method includes two parts: determining the high-attention patches, and ray tracing. The part of determining the high-attention patches includes S100 to S110.
S100: the rendering engine 500 collects historical rendering results.
The rendering engine 500 collects the rendering results within a collection period, where one frame of rendering result corresponds to one frame of picture. The rendering engine 500 therefore needs to collect all the rendering results within the collection period, specifically the rendering results produced under all or some of the virtual viewpoints within the collection period. The start and end time points of the collection period can be set as required.
It should be noted that the rendering results within the collection period are all produced by different processes (viewpoints) within the same application. As described above, one frame of rendering content corresponds to one or more models, so the multiple frames of rendering results within the collection period also correspond to multiple pieces of rendering content, where each piece of rendering content includes at least one model.
For example, for one map in a game, the rendering engine 500 needs to collect the rendering results produced by the multiple players running that game map within the collection period, where one player may produce multiple frames of rendering results.
The above collection operation can be performed by the rendering engine 500, and the collected historical rendering results are stored in the rendering engine 500.
S102: the rendering engine 500 counts the number of appearances and the average dwell time of each model in the historical rendering results.
After collecting the historical rendering results, the rendering engine 500 can count the number of appearances and the average dwell time, within the collection period, of the rendering results corresponding to each model.
In some possible implementations, after the rendering results within the collection period have been collected, the number of appearances needs to be counted per model, where the count indicates, in units of frames, how often the rendering result corresponding to each model appears in the collected rendering results. Optionally, when a rendering result contains the rendering result of only part of a model, that model can be regarded as not appearing in that rendering result.
In some possible implementations, after the rendering results within the collection period have been collected, the average dwell time needs to be counted per model, where the dwell time indicates the duration for which the same model appears continuously in the rendering results corresponding to the same virtual viewpoint, that is, the number of consecutive dwell frames. In consecutive frames, the same model may correspond to different model rendering results.
For example, in several consecutive frames a model stays within the range of the view plane but at different angles relative to the virtual viewpoint. In other words, in those consecutive frames the model's rendering result always exists, although the rendering results are not exactly the same across the frames. In this case the model can be regarded as appearing continuously in the rendering results.
Within the collection period, each model may correspond to one or more dwell times, which may differ from one another. The average dwell time of each model can be obtained by computing the average of its one or more dwell times, where the average may be an arithmetic mean or a weighted mean.
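This bookkeeping can be sketched as follows; the per-viewpoint frame lists and the use of model-id sets are hypothetical data shapes chosen for illustration, and the arithmetic mean is used here:

    # Sketch: per-model appearance counts and average dwell (consecutive frames).
    # frames_by_viewpoint maps a viewpoint id to a time-ordered list of frames;
    # each frame is a set of model ids appearing in that frame.
    from collections import defaultdict

    def model_statistics(frames_by_viewpoint):
        appearances = defaultdict(int)
        dwell_runs = defaultdict(list)          # lengths of consecutive-frame runs
        for frames in frames_by_viewpoint.values():
            run = defaultdict(int)
            prev = set()
            for frame_models in frames:
                for m in frame_models:
                    appearances[m] += 1
                    run[m] = run[m] + 1 if m in prev else 1
                for m in prev - frame_models:   # a run ended: record its length
                    dwell_runs[m].append(run.pop(m))
                prev = frame_models
            for m, length in run.items():       # close out runs still open
                dwell_runs[m].append(length)
        avg_dwell = {m: sum(r) / len(r) for m, r in dwell_runs.items()}
        return appearances, avg_dwell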
The above counting of appearance counts and average dwell times can be performed by the rendering engine 500, and the statistical results are stored in the rendering engine 500, as are the individual models.
S104: the rendering engine 500 determines the high-attention models according to the appearance counts and average dwell times of the models.
The rendering engine 500 can determine the high-attention models according to one or more of the following parameters: the appearance count of each model, and the average dwell time of each model.
In some possible implementations, the high-attention models can be determined from a count threshold and the appearance counts of the models obtained in S102, where the count threshold can be set as required. Specifically, a model is a high-attention model when its appearance count is greater than the count threshold.
In some possible implementations, the high-attention models can be determined from a dwell-time threshold and the average dwell times of the models obtained in S102, where the dwell-time threshold can be set as required. Specifically, a model is a high-attention model when its average dwell time is greater than the dwell-time threshold.
In some possible implementations, the high-attention models can be determined from the count threshold, the dwell-time threshold, and the appearance counts and average dwell times of the models. Specifically, a model is a high-attention model when its appearance count is greater than the count threshold and its average dwell time is greater than the dwell-time threshold.
Optionally, the high-attention models can also be determined from the product of a model's appearance count and average dwell time together with a count-time threshold, where the count-time threshold can be set as required. Specifically, a model is a high-attention model when the product of its appearance count and average dwell time is greater than the count-time threshold.
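The alternative criteria above reduce to simple threshold tests; in the following sketch the threshold values are arbitrary placeholders chosen for illustration, not values prescribed by this application:

    # Sketch: the alternative high-attention tests described in S104.
    def is_high_attention(count, avg_dwell,
                          count_thr=100, dwell_thr=30.0, product_thr=2000.0,
                          mode="both"):
        if mode == "count":
            return count > count_thr
        if mode == "dwell":
            return avg_dwell > dwell_thr
        if mode == "both":
            return count > count_thr and avg_dwell > dwell_thr
        if mode == "product":
            return count * avg_dwell > product_thr
        raise ValueError(mode)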
After it has been determined which models are high-attention models, the high-attention models in the historical rendering results need to be marked.
Optionally, models that have already been marked need not be marked again when subsequent historical rendering results are marked.
In S104, the high-attention models were determined according to the appearance counts and average dwell times of the models in the historical rendering results. Next, the salient patches in the high-attention models need to be determined; the specific salient-patch determination part includes S106 to S110.
S106: the rendering engine 500 detects the high-attention models and determines the salient patches.
Considering that the historical rendering results contain multiple frames of rendering results, the rendering results need to be selected frame by frame, and the high-attention-model rendering results in each frame of rendering result are detected. Specifically, the position of a high-attention-model rendering result within that frame is determined, that is, the pixels covered by the high-attention-model rendering result on the corresponding virtual view plane. In units of pixels, the salient pixels in the high-attention-model rendering result are determined, and the corresponding salient patches of the high-attention model are thereby determined.
The rendering engine 500 can determine the salient patches using a saliency detection method, which may be a boolean map based saliency model (BMS). Optionally, the saliency detection method may also be a color saliency method or the like.
It should be noted that the saliency detection method used to detect the high-attention-model rendering results in S106 may be a combination of multiple methods; specifically, the boolean-map-based saliency detection and the color saliency method can be used at the same time.
The boolean-map-based saliency detection method is introduced below as an example.
For the pixels in each frame of the historical rendering results, the boolean-map-based saliency detection converts their color values into boolean values. Specifically, the conversion into boolean values is performed according to the pixel color and a conversion threshold, where the conversion threshold can be set as required.
When the pixel color is less than the conversion threshold, the pixel color is set to 0 (or 1); when the pixel color is greater than or equal to the conversion threshold, the pixel color is set to 1 (or 0).
After the colors of the pixels to be converted have been converted into boolean values, the salient pixels can be determined from the boolean values of the pixels. Specifically, when two adjacent pixels have different boolean values, the two pixels are regarded as part of the salient pixels, where adjacent refers to two pixels that are vertically or horizontally next to each other on the virtual view plane.
After the pixels belonging to salient patches in each frame have been determined, the patches corresponding to those pixels are the candidate salient patches. It should be noted that a salient patch must be part of a high-attention model.
Therefore, the salient patches can be obtained by removing, from the set of candidate salient patches described above, the patches that are not in the patch sets contained in the high-attention models.
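A minimal sketch of this boolean-map step follows; it assumes a single-channel intensity image and a single threshold, whereas a full BMS implementation samples many thresholds and color channels:

    # Sketch: boolean-map thresholding and boundary-based salient pixels (S106).
    def salient_pixels(intensity, threshold):
        h, w = len(intensity), len(intensity[0])
        # Boolean map: 1 where the pixel reaches the conversion threshold.
        bmap = [[1 if intensity[y][x] >= threshold else 0 for x in range(w)]
                for y in range(h)]
        salient = set()
        for y in range(h):
            for x in range(w):
                # Compare with the right and lower neighbors only, so each
                # vertically or horizontally adjacent pair is examined once.
                for ny, nx in ((y, x + 1), (y + 1, x)):
                    if ny < h and nx < w and bmap[y][x] != bmap[ny][nx]:
                        salient.add((y, x))
                        salient.add((ny, nx))
        return salient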
S108: the rendering engine 500 detects the historical rendering results and determines the moving patches.
Based on the historical rendering results obtained in step S100, the moving patches can be determined using a moving target detection method, which may be the inter-frame difference method. Optionally, the moving target detection method may also be background subtraction, optical flow, or the like. The inter-frame difference method is introduced below as an example.
As mentioned above, the historical rendering results obtained in step S100 contain multiple frames of rendering results corresponding to multiple virtual viewpoints, where one virtual viewpoint may correspond to multiple frames of rendering results. When the moving patches are detected, the detection needs to be performed viewpoint by viewpoint.
First, the multiple frames of rendering results corresponding to one virtual viewpoint are arranged in temporal order. Specifically, they can be arranged from the earliest to the most recent; optionally, they can also be arranged from the most recent to the earliest.
Second, for the same virtual viewpoint the number of pixels of the virtual view plane is fixed; in other words, the number of pixels in each of the frames corresponding to the same virtual viewpoint is fixed. Therefore, the difference of the pixel colors between two adjacent frames is computed pixel by pixel.
Then, from this difference and an inter-frame difference threshold, it can be determined whether each pixel belongs to the pixels corresponding to a moving patch, where the inter-frame difference threshold can be set as required. Specifically, when the difference is less than the inter-frame difference threshold, the pixel in the later frame is regarded as not belonging to the pixels corresponding to a moving patch; when the difference is greater than or equal to the inter-frame difference threshold, the pixel in the later frame is regarded as belonging to the pixels corresponding to a moving patch.
Finally, from all the pixels in the later frame that correspond to moving-patch rendering results, the pixel set corresponding to the moving-patch rendering results can be determined; from this pixel set and the models corresponding to the pixels in it, the moving patches can be determined. How exactly the moving patches are determined is described in detail later.
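The inter-frame difference test itself can be sketched as follows; for clarity each pixel color is treated as a scalar intensity, and the pixel_to_patch mapping is assumed to be supplied by the engine rather than defined by this application:

    # Sketch: inter-frame difference detection of moving pixels/patches (S108).
    def moving_patches(prev_frame, next_frame, pixel_to_patch, diff_thr):
        moving = set()
        h, w = len(next_frame), len(next_frame[0])
        for y in range(h):
            for x in range(w):
                # Pixels whose color changed by at least the threshold are
                # attributed to moving patches in the later frame.
                if abs(next_frame[y][x] - prev_frame[y][x]) >= diff_thr:
                    moving.add(pixel_to_patch[(y, x)])
        return moving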
It should be noted that step S108 does not have to occur after step S102; it only needs to occur after the historical rendering results have been collected in step S100. In other words, step S108 can occur at any time after step S100 and before step S110.
Further, because the execution of step S108 does not depend on the execution of step S104, the moving patches may include patches of low-attention models; that is, the pixels covered by the moving-patch rendering results may include pixels covered by low-attention-model rendering results.
S110: the rendering engine 500 determines the high-attention patches and the low-attention patches according to the high-attention models, the salient patches, and the moving patches.
The high-attention patches can be determined from the high-attention models determined in step S104, the salient patches determined in step S106, and the moving patches determined in step S108; further, the low-attention patches can be determined.
Step S104 determines the high-attention models through statistics over all the historical rendering results, whereas the salient patches in step S106 are determined frame by frame in the historical rendering results based on the high-attention models. In addition, the moving patches in step S108 are also determined frame by frame, and the high-attention patches in step S110 are likewise determined frame by frame.
In some possible implementations, the high-attention patches can be determined from the patches contained in the high-attention models; in other words, all the patches contained in the high-attention models can be determined as high-attention patches.
In this class of implementations, the low-attention patches may be the patches contained in the content to be rendered other than the high-attention patches.
In some possible implementations, the high-attention patches can be determined from the salient patches in the high-attention models; that is, all the salient patches in the high-attention models can be determined as high-attention patches.
In this class of implementations, the low-attention patches may be the patches contained in the content to be rendered other than the high-attention patches.
Optionally, the low-attention patches may be the patches contained in the high-attention models other than the high-attention patches, while the patches of the content to be rendered that are not contained in the high-attention models may be left unmarked and simply undergo conventional ray tracing according to the prior art in the subsequent ray tracing.
In some possible implementations, all the moving patches can be determined as high-attention patches.
In this class of implementations, the low-attention patches may be the patches contained in the content to be rendered other than the high-attention patches.
In some possible implementations, the patches among the moving patches that belong to high-attention models can be determined as high-attention patches.
In this class of implementations, the low-attention patches may be the patches contained in the content to be rendered other than the high-attention patches.
Optionally, the low-attention patches may be the patches in the content to be rendered other than the patches contained in the high-attention models.
Optionally, the low-attention patches may also be the patches contained in the high-attention models other than the moving patches.
Optionally, the low-attention patches may also be the moving patches other than those belonging to high-attention models.
In some possible implementations, the moving patches that belong to the salient patches can be determined as high-attention patches.
In this class of implementations, the low-attention patches may be the patches contained in the content to be rendered other than the high-attention patches.
Optionally, the low-attention patches may be one or more of the following: the moving patches other than the patches contained in the high-attention models; the patches of the high-attention models other than the salient patches and the moving patches; and the moving patches that belong to the patches contained in the high-attention models but do not belong to the salient patches.
After the high-attention models and high-attention patches corresponding to each frame of the historical rendering pictures have been determined, the high-attention patches in each model can be obtained; further, the low-attention patches in each model can be obtained.
The above determination of the first set of patches to be rendered is performed by the rendering engine 500, and the information on the high/low-attention patches of each model is also stored in the rendering engine 500.
After the high-attention patches and low-attention patches in the models of the application have been determined in step S110, ray tracing can be performed accordingly on the current content to be rendered belonging to the same application. The specific ray tracing part includes S112 and S114.
S112: the rendering engine 500 acquires the first set of patches to be rendered and the second set of patches to be rendered in the content to be rendered according to the high-attention patches and the low-attention patches, and performs ray tracing on each set.
First, the content to be rendered and the multiple pieces of rendering content within the collection period involved in step S100 are produced by different processes (viewpoints) within the same application; that is, some of the one or more models included in the content to be rendered may have appeared in the multiple pieces of historical rendering content.
After the content to be rendered is acquired, the models in the content to be rendered are judged and the high-attention models in the content to be rendered are determined. Further, the first set of patches to be rendered in the content to be rendered is determined, and ray tracing rendering based on the first number of traced rays is performed on the first set of patches to be rendered in the content to be rendered. At the same time, the second set of patches to be rendered can be determined from the low-attention patches. Further, ray tracing rendering based on the second number of traced rays can be performed on the second set of patches to be rendered in the content to be rendered.
Specifically, by obtaining the coordinates in space of the first set of patches to be rendered, the coordinates of the high-attention region determined by the high-attention patches can be determined. Further, by constructing lines connecting the virtual viewpoint with the high-attention region, the imaging region corresponding to the high-attention region can be determined on the virtual view plane, and ray tracing at a high SPP is performed on the pixels covered by the imaging region corresponding to the high-attention region.
Similarly, by obtaining the coordinates in space of the second set of patches to be rendered, the coordinates of the low-attention region determined by the low-attention patches can be determined. Further, by constructing lines connecting the virtual viewpoint with the low-attention region, the imaging region corresponding to the low-attention region can be determined on the virtual view plane, and ray tracing at a low SPP is performed on the pixels covered by the imaging region corresponding to the low-attention region.
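The mapping from the two patch sets to per-pixel ray counts can be sketched as follows; project_to_view_plane stands in for the engine's camera projection, which this application does not specify, and the SPP values are placeholders:

    # Sketch: deriving per-pixel SPP from the projected attention regions (S112).
    def assign_pixel_spp(first_set, second_set, camera, width, height,
                         high_spp=64, low_spp=8):
        spp = [[low_spp] * width for _ in range(height)]
        # Project the low-attention patches first, then let the
        # high-attention projection overwrite any overlapping pixels.
        for patch in second_set:
            for (x, y) in camera.project_to_view_plane(patch):
                spp[y][x] = low_spp
        for patch in first_set:
            for (x, y) in camera.project_to_view_plane(patch):
                spp[y][x] = high_spp
        return spp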
Optionally, ray tracing can also be performed directly on the first/second sets of patches to be rendered in space at corresponding per-patch ray counts (samples per mesh, SPM).
Performing ray tracing at a certain SPP on the pixels of the virtual view plane belongs to the prior art and is therefore not repeated here. The following takes performing ray tracing at a certain SPM on the patches in space as an example.
First, whether the number of rays emitted from the viewpoint is counted in SPM or in SPP, the number of rays that a single viewpoint can trace simultaneously has an upper limit imposed by the hardware. In other words, when ray tracing is performed on the first and second sets of patches to be rendered in space, the total number of rays has a certain upper limit.
In some possible implementations, the SPM used for ray tracing the first set of patches to be rendered is greater than the SPM used for ray tracing the second set of patches to be rendered. Optionally, ray tracing with the same number of rays can be performed for every patch in the first set of patches to be rendered.
In some possible implementations, the content to be rendered can be divided into two parts according to the high-attention patches: a high-attention region and a low-attention region. A higher overall number of rays is allocated to the high-attention region for ray tracing, and correspondingly a lower number of rays is allocated to the low-attention region for ray tracing.
In the above two possible implementations, the number of rays emitted from the viewpoint can be smaller than the upper limit of the number of rays it can emit, which can further improve the efficiency of ray tracing.
In some possible implementations, smoothed ray tracing can be performed on the second set of patches to be rendered or on the low-attention region, where smoothed ray tracing means that the number of traced rays for such a patch or region can be inversely proportional to the distance between that patch or region and the first set of patches to be rendered or the high-attention region. Specifically, a region close to the first set of patches to be rendered or to the high-attention region receives a higher number of traced rays than a region far from it.
In this possible implementation, smoothed ray tracing allows the rendering result of the high-attention region to transition more naturally into the rendering result of the low-attention region, further improving the image quality of the rendering result.
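One way to realize such a falloff is sketched below; the inverse-distance form, the patch.center geometry helpers, and the specific constants are illustrative assumptions, since the application only requires that closer patches receive more rays:

    # Sketch: smoothed SPM falloff for the second (low-attention) patch set.
    def smoothed_spm(patch, first_set, high_spm=64, min_spm=4):
        # Distance from this patch to the nearest high-attention patch.
        d = min(patch.center.distance_to(p.center) for p in first_set)
        # Inverse-distance falloff, clamped to a floor so that every
        # low-attention patch still receives some rays.
        return max(min_spm, int(high_spm / (1.0 + d)))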
The above ray tracing operations are performed by the rendering engine 500, and the rendering result of the first set of patches to be rendered and the rendering result of the second set of patches to be rendered are stored in the rendering engine 500.
In some possible implementations, conventional ray tracing rendering also needs to be performed on the patches of the content to be rendered that belong to neither the first set of patches to be rendered nor the second set of patches to be rendered.
S114: the rendering engine 500 obtains and stores the ray tracing rendering result.
After the rendering result of the first set of patches to be rendered and the rendering result of the second set of patches to be rendered have been obtained in step S112, the ray tracing rendering result of the content to be rendered can be obtained.
In some possible implementations, the ray tracing rendering result of the content to be rendered also needs to be obtained taking into account the ray tracing rendering results of the patches of the content to be rendered that belong to neither the first nor the second set of patches to be rendered.
After the ray tracing rendering result of the content to be rendered corresponding to the current frame has been obtained, that rendering result can be stored into the historical rendering results in the rendering engine 500.
Next, an architecture of the rendering engine 500 in the present application is introduced.
FIG. 4 shows an architecture of the rendering engine 500. Specifically, the rendering engine 500 includes a communication unit 502, a storage unit 504, and a processing unit 506.
The storage unit 504 is used, in step S102, to store the number of appearances and the average dwell time of each model in the historical rendering results; the storage unit 504 also stores the information of each model in the space. In step S104, the storage unit 504 stores the high/low-attention information of each model. The information of the first set of patches to be rendered determined in step S110 is also stored in the storage unit 504.
Optionally, the storage unit 504 is also used to store the salient patches in each frame of the historical rendering results in step S106, and to store the moving patches in each frame of the historical rendering results in step S108.
The processing unit 506 is used to collect the historical rendering results and store them into the storage unit 504. In step S100, the processing unit 506 is used to obtain the historical rendering results from the storage unit 504. In step S102, the processing unit 506 is used to count the number of appearances and the average dwell time of each model in the historical rendering results. The processing unit 506 is also used, in step S104, to determine the high-attention models according to the appearance counts and average dwell times of the models.
In step S106, the processing unit 506 is used to determine the salient patches. The processing unit 506 is also used to detect the moving patches in step S108. The operation in step S110 of determining the first set of patches to be rendered according to the high-attention models, the salient patches, and the moving patches is also performed by the processing unit 506.
The processing unit 506 is also used to perform the operation in step S112 of ray tracing rendering the content to be rendered according to the first set of patches to be rendered. In step S114, the processing unit 506 is used to obtain the ray tracing rendering result of the content to be rendered. Further, the processing unit 506 is also used to store the obtained ray tracing rendering result of the content to be rendered into the storage unit 504.
The processing unit may include a determination unit 508 and a ray tracing unit 510.
Specifically, the determination unit 508 is used to collect the historical rendering results and store them into the storage unit 504. In step S100, the determination unit 508 is used to obtain the historical rendering results from the storage unit 504. In step S102, the determination unit 508 is used to count the number of appearances and the average dwell time of each model in the historical rendering results. The determination unit 508 is also used, in step S104, to determine the high-attention models according to the appearance counts and average dwell times of the models.
In step S106, the determination unit 508 is used to determine the salient patches. The determination unit 508 is also used to detect the moving patches in step S108. The operation in step S110 of determining the first set of patches to be rendered according to the high-attention models, the salient patches, and the moving patches is also performed by the determination unit 508.
The ray tracing unit 510 is used to perform the operation in step S112 of ray tracing rendering the content to be rendered according to the first set of patches to be rendered. In step S114, the ray tracing unit 510 is used to obtain the ray tracing rendering result of the content to be rendered. Further, the ray tracing unit 510 is also used to store the obtained ray tracing rendering result of the content to be rendered into the storage unit 504.
The storage unit 504 is used to store the historical rendering results and the ray tracing rendering result obtained in step S114.
The communication unit 502 is used to receive the content to be rendered in step S112; optionally, it is also used to send the ray tracing rendering result obtained in step S114.
It should be noted that the communication unit 502, the storage unit 504, and the processing unit 506 in the rendering engine 500 can each be deployed on a cloud device or on a local device. Moreover, the communication unit 502, the storage unit 504, and the processing unit 506 can also each be deployed on different cloud devices or local devices.
FIG. 5 provides a schematic structural diagram of a computing device 600. As shown in FIG. 5, the computing device 600 includes a bus 602, a processor 604, a memory 606, and a communication interface 608. The processor 604, the memory 606, and the communication interface 608 communicate with one another through the bus 602.
The bus 602 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. Buses can be divided into address buses, data buses, control buses, and so on. For ease of representation, only one thick line is used in FIG. 5, but this does not mean that there is only one bus or one type of bus. The bus 602 may include pathways for transferring information between the components of the computing device 600 (for example, the memory 606, the processor 604, and the communication interface 608).
The processor 604 may be any one or more of processors such as a central processing unit (CPU), a graphics processing unit (GPU), a microprocessor (MP), or a digital signal processor (DSP).
The memory 606 may include volatile memory, for example random access memory (RAM). The memory 606 may also include non-volatile memory, for example read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid state drive (SSD). The memory 606 stores executable program code, and the processor 604 executes the executable program code to implement the functions of the aforementioned rendering engine 500 or to perform the rendering method 400 described in the foregoing embodiments.
The communication interface 608 uses a transceiver module, such as but not limited to a transceiver, to implement communication between the computing device 600 and other devices or communication networks. For example, the content to be rendered can be acquired through the communication interface 608.
An embodiment of the present application further provides a computing device cluster. As shown in FIG. 6, the computing device cluster includes at least one computing device 600. The computing devices included in the computing device cluster may all be terminal devices, may all be cloud servers, or may be partly cloud servers and partly terminal devices.
Under the above three deployment modes of the computing device cluster, the memory 606 of one or more computing devices 600 in the computing device cluster may store the same instructions of the rendering engine 500 for performing the rendering method 400.
In some possible implementations, one or more computing devices 600 in the computing device cluster may also be used to execute some of the instructions of the rendering engine 500 for performing the rendering method 400. In other words, a combination of one or more computing devices 600 may collectively execute the instructions of the rendering engine 500 for performing the rendering method 400.
It should be noted that the memories 606 in different computing devices 600 in the computing device cluster may store different instructions for performing some of the functions of the rendering method 400.
FIG. 7 shows a possible implementation. As shown in FIG. 7, two computing devices 600A and 600B are connected through communication interfaces 608. The memory in the computing device 600A stores instructions for performing the functions of the communication unit 502, the determination unit 508, and the ray tracing unit 510. The memory in the computing device 600B stores instructions for performing the functions of the storage unit 504. In other words, the memories 606 of the computing devices 600A and 600B collectively store the instructions of the rendering engine 500 for performing the rendering method 400 or the rendering method 700.
The connection manner between the computing devices in the computing device cluster shown in FIG. 7 may take into account that the rendering method 400 or the rendering method 700 provided by the present application needs to store a large number of historical rendering results of patches in historical frames; the storage function is therefore entrusted to the computing device 600B.
It should be understood that the functions of the computing device 600A shown in FIG. 7 can also be performed by multiple computing devices 600. Likewise, the functions of the computing device 600B can also be performed by multiple computing devices 600.
In some possible implementations, one or more computing devices in the computing device cluster may be connected through a network, which may be the Internet, a local area network, or the like. FIG. 8 shows a possible implementation. As shown in FIG. 8, two computing devices 600C and 600D are connected through a network; specifically, each computing device connects to the network through its communication interface. In this class of possible implementations, the memory 606 in the computing device 600C stores instructions for executing the communication unit 502 and the determination unit 508, while the memory 606 in the computing device 600D stores instructions for executing the storage unit 504 and the ray tracing unit 510.
The connection manner between the computing devices in the computing device cluster shown in FIG. 8 may be based on the fact that the rendering method 400 or the rendering method 700 provided by the present application requires a large amount of ray tracing computation and the storage of a large number of historical rendering results of patches in historical frames; the functions implemented by the ray tracing unit 510 and the storage unit 504 are therefore entrusted to the computing device 600D.
It should be understood that the functions of the computing device 600C shown in FIG. 8 can also be performed by multiple computing devices 600. Likewise, the functions of the computing device 600D can also be performed by multiple computing devices 600.
An embodiment of the present application further provides a computer-readable storage medium. The computer-readable storage medium may be any available medium on which a computing device can store data, or a data storage device, such as a data center, containing one or more available media. The available media may be magnetic media (for example, floppy disks, hard disks, magnetic tapes), optical media (for example, DVDs), or semiconductor media (for example, solid state drives), and the like. The computer-readable storage medium includes instructions that instruct a computing device to perform the rendering method 400 applied to the rendering engine 500 described above.
An embodiment of the present application further provides a computer program product containing instructions. The computer program product may be software or a program product, containing instructions, that can run on a computing device or be stored in any available medium. When the computer program product runs on at least one computer device, the at least one computer device is caused to perform the rendering method 400 applied to the rendering engine 500 described above.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the protection scope of the technical solutions of the embodiments of the present invention.

Claims (15)

  1. A rendering method, characterized in that the method comprises:
    receiving content to be rendered of an application, the content to be rendered comprising at least one model, each model comprising at least one patch;
    acquiring a first set of patches to be rendered and a second set of patches to be rendered from the content to be rendered;
    rendering the first set of patches to be rendered based on a first number of traced rays;
    rendering the second set of patches to be rendered based on a second number of traced rays, wherein the first number of traced rays is higher than the second number of traced rays;
    obtaining a rendering result of the first set of patches to be rendered and a rendering result of the second set of patches to be rendered.
  2. The method according to claim 1, characterized in that the method further comprises:
    obtaining rendering results corresponding to a plurality of pieces of historical rendering content of the application, each piece of historical rendering content comprising at least one model;
    determining, according to the number of times each model appears in the plurality of pieces of historical rendering content, high-attention models comprised in the plurality of pieces of historical rendering content;
    wherein the acquiring a first set of patches to be rendered and a second set of patches to be rendered from the content to be rendered comprises:
    determining high-attention models in the content to be rendered according to the high-attention models comprised in the plurality of pieces of historical rendering content;
    determining the first set of patches to be rendered from the high-attention models in the content to be rendered.
  3. The method according to claim 1, characterized in that the method further comprises:
    obtaining rendering results corresponding to a plurality of pieces of historical rendering content of the application, each piece of historical rendering content comprising at least one model;
    determining, according to the number of dwell frames of each model in the plurality of pieces of historical rendering content, high-attention models comprised in the plurality of pieces of historical rendering content;
    wherein the acquiring a first set of patches to be rendered and a second set of patches to be rendered from the content to be rendered comprises:
    determining high-attention models in the content to be rendered according to the high-attention models comprised in the plurality of pieces of historical rendering content;
    determining the first set of patches to be rendered from the high-attention models in the content to be rendered.
  4. The method according to claim 2 or 3, characterized in that the method further comprises:
    obtaining rendering results corresponding to a plurality of pieces of historical rendering content of the application, each piece of historical rendering content comprising at least one model;
    determining, according to the number of dwell frames and/or the number of appearances of each model in the plurality of pieces of historical rendering content, high-attention models comprised in the plurality of pieces of historical rendering content;
    determining, based on a saliency detection method, salient patches in the high-attention models comprised in the plurality of pieces of historical rendering content;
    wherein the determining the first set of patches to be rendered from the high-attention models in the content to be rendered comprises:
    determining, according to the salient patches in the high-attention models comprised in the plurality of pieces of historical rendering content, the salient patches in the high-attention models in the content to be rendered as the first set of patches to be rendered.
  5. The method according to claim 1, characterized in that the method further comprises:
    obtaining rendering results corresponding to a plurality of pieces of historical rendering content of the application;
    determining, based on a moving target detection method, moving patches in the models comprised in the plurality of pieces of historical rendering content;
    determining the moving patches in the content to be rendered as the first set of patches to be rendered;
    determining the second set of patches to be rendered according to the first set of patches to be rendered;
    wherein the determining, based on a moving target detection method, moving patches in the models comprised in the plurality of pieces of historical rendering content comprises:
    determining moving pixels according to a detection threshold and the difference of the same pixel between the rendering results corresponding to two pieces of the rendering content;
    determining the moving patches according to the moving pixels.
  6. The method according to any one of claims 1 to 5, characterized in that the number of traced rays for each patch in the second set of patches to be rendered is determined according to the distance between that patch and the patches in the first set of patches to be rendered.
  7. The method according to any one of claims 1 to 6, characterized in that the second set of patches to be rendered is determined according to the content to be rendered and the first set of patches to be rendered.
  8. An apparatus for rendering, characterized in that the apparatus comprises a communication unit, a processing unit, and a storage unit:
    the communication unit is configured to receive content to be rendered of an application;
    the storage unit is configured to store the content to be rendered;
    the processing unit is configured to: acquire a first set of patches to be rendered and a second set of patches to be rendered from the content to be rendered; render the first set of patches to be rendered based on a first number of traced rays; render the second set of patches to be rendered based on a second number of traced rays, wherein the first number of traced rays is higher than the second number of traced rays; and obtain a rendering result of the first set of patches to be rendered and a rendering result of the second set of patches to be rendered.
  9. The apparatus according to claim 8, characterized in that the processing unit is further configured to:
    obtain rendering results corresponding to a plurality of pieces of historical rendering content of the application, each piece of historical rendering content comprising at least one model;
    determine, according to the number of times each model appears in the plurality of pieces of historical rendering content, high-attention models comprised in the plurality of pieces of historical rendering content;
    wherein the acquiring a first set of patches to be rendered and a second set of patches to be rendered from the content to be rendered comprises:
    determining high-attention models in the content to be rendered according to the high-attention models comprised in the plurality of pieces of historical rendering content;
    determining the first set of patches to be rendered from the high-attention models in the content to be rendered.
  10. The apparatus according to claim 8, characterized in that the processing unit is further configured to:
    obtain rendering results corresponding to a plurality of pieces of historical rendering content of the application, each piece of historical rendering content comprising at least one model;
    determine, according to the number of dwell frames of each model in the plurality of pieces of historical rendering content, high-attention models comprised in the plurality of pieces of historical rendering content;
    wherein the acquiring a first set of patches to be rendered and a second set of patches to be rendered from the content to be rendered comprises:
    determining high-attention models in the content to be rendered according to the high-attention models comprised in the plurality of pieces of historical rendering content;
    determining the first set of patches to be rendered from the high-attention models in the content to be rendered.
  11. The apparatus according to claim 8, characterized in that the processing unit is further configured to:
    obtain rendering results corresponding to a plurality of pieces of historical rendering content of the application, each piece of historical rendering content comprising at least one model;
    determine, according to the number of dwell frames and/or the number of appearances of each model in the plurality of pieces of historical rendering content, high-attention models comprised in the plurality of pieces of historical rendering content;
    determine, based on a saliency detection method, salient patches in the high-attention models comprised in the plurality of pieces of historical rendering content;
    wherein the determining the first set of patches to be rendered from the high-attention models in the content to be rendered comprises:
    determining, according to the salient patches in the high-attention models comprised in the plurality of pieces of historical rendering content, the salient patches in the high-attention models in the content to be rendered as the first set of patches to be rendered.
  12. The apparatus according to claim 10 or 11, characterized in that the processing unit is further configured to:
    obtain rendering results corresponding to a plurality of pieces of historical rendering content of the application;
    determine, based on a moving target detection method, moving patches in the models comprised in the plurality of pieces of historical rendering content;
    determine the moving patches in the content to be rendered as the first set of patches to be rendered;
    determine the second set of patches to be rendered according to the first set of patches to be rendered;
    wherein the determining, based on a moving target detection method, moving patches in the models comprised in the plurality of pieces of historical rendering content comprises:
    determining moving pixels according to a detection threshold and the difference of the same pixel between the rendering results corresponding to two pieces of the rendering content;
    determining the moving patches according to the moving pixels.
  13. A computing device cluster, characterized in that it comprises at least one computing device, each computing device comprising a processor and a memory;
    the processor of the at least one computing device is configured to execute instructions stored in the memory of the at least one computing device, so that the computing device cluster performs the method according to any one of claims 1 to 5.
  14. A computer program product containing instructions, characterized in that, when the instructions are run by a computer device cluster, the computer device cluster is caused to perform the method according to any one of claims 1 to 5.
  15. A computer-readable storage medium, characterized in that it comprises computer program instructions, and when the computer program instructions are executed by a computing device cluster, the computing device cluster performs the method according to any one of claims 1 to 5.
PCT/CN2021/139426 2021-01-21 2021-12-18 Rendering method and apparatus WO2022156451A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110080552.8 2021-01-21
CN202110080552.8A CN114820910A (zh) 2021-01-21 2021-01-21 Rendering method and apparatus

Publications (1)

Publication Number Publication Date
WO2022156451A1 true WO2022156451A1 (zh) 2022-07-28

Family

ID=82524303

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/139426 WO2022156451A1 (zh) 2021-01-21 2021-12-18 一种渲染方法及装置

Country Status (2)

Country Link
CN (1) CN114820910A (zh)
WO (1) WO2022156451A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117557712A (zh) * 2022-08-04 2024-02-13 Honor Device Co., Ltd. Rendering method, apparatus, device, and storage medium
CN116485966A (zh) * 2022-10-28 2023-07-25 Tencent Technology (Shenzhen) Co., Ltd. Video picture rendering method, apparatus, device, and medium


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170365089A1 (en) * 2016-06-15 2017-12-21 Disney Enterprises, Inc. Adaptive rendering with linear predictions
CN107067455A (zh) * 2017-04-18 2017-08-18 Tencent Technology (Shenzhen) Co., Ltd. Real-time rendering method and device
CN107330966A (zh) * 2017-06-21 2017-11-07 Hangzhou Qunhe Information Technology Co., Ltd. Rendering method and apparatus
CN111429557A (zh) * 2020-02-27 2020-07-17 NetEase (Hangzhou) Network Co., Ltd. Hair generation method, hair generation apparatus, and readable storage medium
CN111538557A (zh) * 2020-07-09 2020-08-14 Ping An International Smart City Technology Co., Ltd. Bullet-screen comment rendering method based on cascading style sheets and related device
CN112116693A (zh) * 2020-08-20 2020-12-22 Sun Yat-sen University CPU-based ray tracing rendering method for biomolecule visualization
CN112184873A (zh) * 2020-10-19 2021-01-05 NetEase (Hangzhou) Network Co., Ltd. Fractal graphic creation method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN114820910A (zh) 2022-07-29

Similar Documents

Publication Publication Date Title
EP3534336B1 (en) Panoramic image generating method and apparatus
JP5536115B2 (ja) Rendering of 3D video images on a display capable of stereoscopic display
WO2022156451A1 (zh) Rendering method and apparatus
WO2022063260A1 (zh) Rendering method, apparatus, and device
US20130335535A1 (en) Digital 3d camera using periodic illumination
CN104299220B (zh) Method for real-time filling of holes in a Kinect depth image
CN109660783A (zh) Virtual reality parallax correction
JP2008090617A (ja) Stereoscopic image generation device, method, and program
JP2016537901A (ja) Light field processing method
US11232628B1 (en) Method for processing image data to provide for soft shadow effects using shadow depth information
CN108421257B (zh) Method and apparatus for determining invisible elements, storage medium, and electronic apparatus
CN109510975A (zh) Video image extraction method, device, and system
US8633926B2 (en) Mesoscopic geometry modulation
JP7479729B2 (ja) Three-dimensional representation method and representation device
US11288774B2 (en) Image processing method and apparatus, storage medium, and electronic apparatus
US10354399B2 (en) Multi-view back-projection to a light-field
TWI536316B (zh) Apparatus for generating a three-dimensional scene and computer-executed method for generating a three-dimensional scene
WO2020151078A1 (zh) Three-dimensional reconstruction method and apparatus
CN112991507A (zh) Image generation system and method
US9639975B1 (en) System and method for generating shadows
Liu et al. Fog effect for photography using stereo vision
KR100879802B1 (ko) Method and apparatus for generating a three-dimensional image from a virtual viewpoint
Chen et al. A quality controllable multi-view object reconstruction method for 3D imaging systems
JP2003099800A (ja) Three-dimensional image information generation method and apparatus, three-dimensional image information generation program, and recording medium recording the program
Liao et al. Stereo matching and viewpoint synthesis FPGA implementation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21920825

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21920825

Country of ref document: EP

Kind code of ref document: A1