CN114494024B - Image rendering method, device and equipment and storage medium - Google Patents

Image rendering method, device and equipment and storage medium

Info

Publication number
CN114494024B
CN114494024B
Authority
CN
China
Prior art keywords
vertex
index data
rendered
rendering
data
Prior art date
Legal status
Active
Application number
CN202210384063.6A
Other languages
Chinese (zh)
Other versions
CN114494024A (en)
Inventor
连冠荣
昔文博
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202210384063.6A
Publication of CN114494024A
Application granted
Publication of CN114494024B
Priority to PCT/CN2023/078405 (WO2023197762A1)
Status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 - Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/048 - Activation functions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/73 - Deblurring; Sharpening
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G06T2207/30201 - Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Generation (AREA)

Abstract

The embodiments of the present application provide an image rendering method, device, equipment, and storage medium, applicable at least to the fields of image processing and games. The method includes: acquiring data to be rendered and an index data set, where the data to be rendered corresponds to a vertex set; performing vertex elimination processing on part of the vertices in the vertex set based on the data to be rendered and the index data in the index data set, to obtain a vertex set to be rendered; and sequentially performing vertex rendering on the vertices to be rendered in the vertex set to be rendered, to obtain a rendered image. Because vertex elimination is driven by the index data, intermittent vertex rendering can be achieved by invoking a single draw call, which improves image rendering efficiency.

Description

Image rendering method, device and equipment and storage medium
Technical Field
The embodiments of the present application relate to the field of Internet technology, and relate to, but are not limited to, an image rendering method, device, equipment, and storage medium.
Background
A mesh is a structure consisting of vertices and faces and is a common way to represent 3D game objects for display. In 3D game object rendering and display, vertex rendering is generally performed with the smallest unit of the mesh, the triangle, as the rendering unit. A vertex buffer (VB) is a contiguous array of vertex data; in some image rendering scenarios, vertices need to be rendered intermittently from the vertex buffer.
In the related art, intermittent vertex rendering is generally implemented by separately invoking multiple draw calls (drawcalls), by relying on the graphics drawing interface of a high-order graphics processing unit (GPU), or by filling the VB with data to be rendered in real time according to which parts of the object to be rendered are enabled for rendering, which involves a data copy process.
These related-art approaches have clear drawbacks: invoking multiple draw calls for intermittent vertex rendering greatly increases rendering time and reduces rendering efficiency; requiring the graphics drawing interface of a high-order GPU limits adaptability, since not all device models support it; and the data copying process introduces a certain delay, which significantly affects image rendering efficiency.
Disclosure of Invention
The embodiments of the present application provide an image rendering method, device, equipment, and storage medium, applicable at least to the fields of image processing and games. By performing vertex elimination processing on part of the vertices in a vertex set based on index data, intermittent vertex rendering can be achieved by invoking a single draw call, which improves image rendering efficiency.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides an image rendering method, which comprises the following steps:
acquiring data to be rendered and an index data set; the data to be rendered corresponds to a vertex set;
performing vertex elimination processing on part of the vertices in the vertex set based on the data to be rendered and the index data in the index data set, to obtain a vertex set to be rendered;
and sequentially performing vertex rendering on the vertices to be rendered in the vertex set to be rendered, to obtain a rendered image.
In some embodiments, each index datum in the index data set has a mapping relationship with one vertex, and the performing vertex elimination processing on part of the vertices in the vertex set based on the data to be rendered and the index data in the index data set to obtain the vertex set to be rendered includes: acquiring vertex position information of each vertex in the vertex set; acquiring the index data corresponding to each vertex from the index data set; scaling the vertex position information of each vertex based on the index data to obtain a scaling result; determining vertices to be eliminated from the vertex set based on the scaling result of each vertex; and deleting the vertices to be eliminated from the vertex set to obtain the vertex set to be rendered.
In some embodiments, the scaling the vertex position information of each vertex based on the index data to obtain a scaling result includes: multiplying the vertex position information of each vertex with the index data corresponding to the vertex to obtain a vertex position product; determining the vertex position product as a scaling result of the vertex.
In some embodiments, the method further comprises: performing triangulation on the vertexes in the vertex set based on the vertex position information to obtain at least one initial triangle; wherein each of the initial triangles comprises three vertices; correspondingly, the determining a vertex to be eliminated from the vertex set based on the scaling result of each vertex includes: and when the vertex position products corresponding to the three vertices of any initial triangle are all the same, determining the three vertices of the initial triangle as the vertices to be eliminated.
In some embodiments, the determining vertices to be eliminated from the set of vertices based on the scaling result for each vertex comprises: when the vertex position product is first type position data, determining a vertex corresponding to the vertex position product as the vertex to be eliminated; and when the vertex position product is the second type position data, determining the vertex corresponding to the vertex position product as the vertex to be rendered.
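To make the scaling-based elimination above concrete, the following C sketch illustrates the logic (the function names and the use of 0/1 as index values are assumptions for illustration only; in practice the per-vertex multiplication would typically run on the GPU so that a single draw call can cover the whole vertex set):

    typedef struct { float x, y, z; } Vec3;

    /* Scale a vertex position by its index datum; an index datum of 0 collapses
     * the vertex to the origin, an index datum of 1 leaves it unchanged. */
    static Vec3 scale_by_index(Vec3 p, float index_datum) {
        Vec3 r = { p.x * index_datum, p.y * index_datum, p.z * index_datum };
        return r;
    }

    /* If the scaled positions (vertex position products) of a triangle's three
     * vertices coincide, the triangle is degenerate and covers no pixels, so its
     * vertices are in effect eliminated from rendering. */
    static int is_eliminated(Vec3 a, Vec3 b, Vec3 c) {
        return a.x == b.x && a.y == b.y && a.z == b.z &&
               b.x == c.x && b.y == c.y && b.z == c.z;
    }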
In some embodiments, each index datum in the index data set has a mapping relationship with one vertex, and the performing vertex elimination processing on part of the vertices in the vertex set based on the data to be rendered and the index data in the index data set to obtain the vertex set to be rendered includes: acquiring the index data corresponding to each vertex from the index data set; determining a vertex processing state corresponding to each vertex based on the index data; when the vertex processing state of any vertex is an elimination state, determining that vertex as a vertex to be eliminated; and deleting the vertices to be eliminated from the vertex set to obtain the vertex set to be rendered.
In some embodiments, the method further comprises: acquiring vertex position information of each vertex; performing triangulation on the vertexes in the vertex set based on the vertex position information to obtain at least one initial triangle; wherein each of the initial triangles comprises three vertices; the determining a vertex processing state corresponding to each vertex based on the index data comprises: when the index data corresponding to the three vertexes of any initial triangle are all preset type index data, determining the vertex processing states of the three vertexes of the initial triangle as an elimination state.
In some embodiments, said determining, based on said index data, a vertex processing state corresponding to each vertex comprises: when the index data is first-type index data, determining a vertex processing state of a vertex having the mapping relation with the index data as an elimination state; when the index data is the second type index data, determining the vertex processing state of the vertex having the mapping relation with the index data as a non-elimination state.
In some embodiments, the sequentially performing vertex rendering on the vertices to be rendered in the vertex set to be rendered to obtain a rendered image includes: acquiring vertex position information of each vertex to be rendered; performing triangulation on the vertex to be rendered in the vertex set to be rendered based on the vertex position information to obtain at least one triangle to be rendered; and performing vertex rendering by taking each triangle to be rendered as a rendering unit to obtain the rendered image.
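Assuming the vertices to be eliminated have been collapsed as in the earlier sketch, the final rendering step can be issued to the GPU as one draw call over the whole triangle list; degenerate triangles simply produce no fragments. The sketch below uses standard OpenGL ES calls, render_mesh and index_count are illustrative names, and the element index buffer referred to here is the ordinary triangle index buffer, not the per-vertex index data of this application:

    #include <GLES2/gl2.h>

    /* One draw call renders every triangle to be rendered; triangles whose
     * vertices were collapsed to a single point contribute no pixels. Assumes a
     * GL_ELEMENT_ARRAY_BUFFER holding the triangle indices is already bound. */
    void render_mesh(GLsizei index_count) {
        glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, 0);
    }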
In some embodiments, the method further comprises: acquiring an image rendering request, wherein the image rendering request comprises the data to be rendered and preset rendering demand information; analyzing the rendering requirement information to obtain index data corresponding to each vertex in the vertex set; and integrating the index data corresponding to all the vertexes in the vertex set to obtain the index data set.
In some embodiments, the rendering requirement information includes processing instructions for each vertex; the analyzing the rendering requirement information to obtain index data corresponding to each vertex in the vertex set includes: initializing the index data of each vertex to obtain initialized index data; analyzing the rendering requirement information to obtain a processing instruction of each vertex; when the processing instruction is a rendering instruction, updating the initialized index data of the vertex into second type index data; and when the processing instruction is a non-rendering instruction, updating the initialization index data of the vertex into first type index data.
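A minimal sketch of steps like these, under the assumption that the first-type and second-type index data are encoded as 0 and 1 per vertex (an illustrative choice, not stated in the paragraph above):

    /* Initialize every vertex's index datum, then update it according to the
     * processing instruction parsed from the rendering requirement information:
     * rendering instruction -> second-type index data (1), non-rendering
     * instruction -> first-type index data (0). */
    void build_index_data(const int *is_render_instruction,
                          float *index_data, int vertex_count) {
        for (int i = 0; i < vertex_count; ++i) {
            index_data[i] = 0.0f;                                   /* initialization        */
            index_data[i] = is_render_instruction[i] ? 1.0f : 0.0f; /* update by instruction */
        }
    }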
An embodiment of the present application provides an image rendering apparatus, the apparatus including: the acquisition module is used for acquiring data to be rendered and an index data set; the data to be rendered corresponds to a vertex set; the elimination module is used for carrying out vertex elimination processing on part of vertexes in the vertex set based on the data to be rendered and the index data in the index data set to obtain a vertex set to be rendered; and the vertex rendering module is used for sequentially performing vertex rendering on the vertexes to be rendered in the vertex set to be rendered to obtain the rendered image.
In some embodiments, each index data in the set of index data has a mapping relationship with one vertex; the vertex elimination module is further to: acquiring vertex position information of each vertex in the vertex set; acquiring index data corresponding to each vertex from the index data set; based on the index data, carrying out scaling processing on the vertex position information of each vertex to obtain a scaling result; determining a vertex to be eliminated from the vertex set based on the scaling result of each vertex; and deleting the vertex to be eliminated in the vertex set to obtain the vertex set to be rendered.
In some embodiments, the vertex removal module is further to: multiplying the vertex position information of each vertex with the index data corresponding to the vertex to obtain a vertex position product; determining the vertex position product as a scaling result of the vertex.
In some embodiments, the apparatus further comprises: the first triangulation module is used for triangulating vertexes in the vertex set based on the vertex position information to obtain at least one initial triangle; wherein each of the initial triangles comprises three vertices; correspondingly, the vertex removal module is further configured to: and when the vertex position products corresponding to the three vertices of any initial triangle are all the same, determining the three vertices of the initial triangle as the vertices to be eliminated.
In some embodiments, the vertex removal module is further to: when the vertex position product is first type position data, determining a vertex corresponding to the vertex position product as the vertex to be eliminated; and when the vertex position product is the second type position data, determining the vertex corresponding to the vertex position product as the vertex to be rendered.
In some embodiments, each index data in the set of index data has a mapping relationship with one vertex; the vertex removal module is further to: acquiring index data corresponding to each vertex from the index data set; determining a vertex processing state corresponding to each vertex based on the index data; when the vertex processing state of any vertex is an elimination state, determining the vertex as a vertex to be eliminated; and deleting the vertex to be eliminated in the vertex set to obtain the vertex set to be rendered.
In some embodiments, the apparatus further comprises: the position information acquisition module is used for acquiring vertex position information of each vertex; the second triangulation module is used for triangulating vertexes in the vertex set based on the vertex position information to obtain at least one initial triangle; wherein each of the initial triangles comprises three vertices; the vertex removal module is further to: and when the index data corresponding to the three vertexes of any initial triangle are all preset type index data, determining the vertex processing states of the three vertexes of the initial triangle as an elimination state.
In some embodiments, the vertex removal module is further to: when the index data is first-type index data, determining a vertex processing state of a vertex having the mapping relation with the index data as an elimination state; when the index data is the second type index data, determining the vertex processing state of the vertex having the mapping relation with the index data as a non-elimination state.
In some embodiments, the vertex rendering module is further to: acquiring vertex position information of each vertex to be rendered; performing triangulation on the vertex to be rendered in the vertex set to be rendered based on the vertex position information to obtain at least one triangle to be rendered; and performing vertex rendering by taking each triangle to be rendered as a rendering unit to obtain the rendered image.
In some embodiments, the apparatus further comprises: the request acquisition module is used for acquiring an image rendering request, wherein the image rendering request comprises the data to be rendered and preset rendering demand information; the analysis module is used for analyzing the rendering requirement information to obtain index data corresponding to each vertex in the vertex set; and the integration module is used for integrating the index data corresponding to all the vertexes in the vertex set to obtain the index data set.
In some embodiments, the rendering requirement information includes processing instructions for each vertex; the parsing module is further configured to: initializing the index data of each vertex to obtain initialized index data; analyzing the rendering requirement information to obtain a processing instruction of each vertex; when the processing instruction is a rendering instruction, updating the initialized index data of the vertex into second type index data; and when the processing instruction is a non-rendering instruction, updating the initialization index data of the vertex into first type index data.
An embodiment of the present application provides an image rendering apparatus, including:
a memory for storing executable instructions; and the processor is used for realizing the image rendering method when executing the executable instructions stored in the memory.
The embodiment of the application provides a computer program product or a computer program, wherein the computer program product or the computer program comprises executable instructions, and the executable instructions are stored in a computer readable storage medium; when the processor of the image rendering device reads the executable instructions from the computer readable storage medium and executes the executable instructions, the image rendering method is realized.
An embodiment of the present application provides a computer-readable storage medium, which stores executable instructions for causing a processor to implement the image rendering method when the processor executes the executable instructions.
The embodiment of the application has the following beneficial effects: performing vertex eliminating processing on part of vertexes in a vertex set corresponding to the data to be rendered based on the acquired data to be rendered and the index data in the index data set to obtain a vertex set to be rendered; and vertex rendering is sequentially carried out on the vertexes to be rendered in the vertex set to be rendered, so that rendered images are obtained. Therefore, because the vertex eliminating processing is carried out on part of the vertexes in the vertex set based on the index data, the intermittent vertex rendering can be realized by calling one drawing call, the image rendering efficiency is improved, and the method is suitable for different devices and has high adaptability.
Drawings
Fig. 1A is a schematic structural diagram of a grid provided in an embodiment of the present application;
FIG. 1B is a schematic diagram of vertex-face-edge relationships provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of intermittent vertex rendering;
FIG. 3 is a diagram illustrating an intermittent vertex rendering by invoking multiple draw calls in the related art;
FIG. 4 is a diagram illustrating an intermittent vertex rendering using a high-level GPU graphics rendering interface in the related art;
FIG. 5 is a diagram illustrating an intermittent vertex rendering by data copying in the related art;
FIG. 6 is a diagram illustrating an intermittent vertex rendering by a plurality of renderings in the related art;
FIG. 7 is an alternative architectural diagram of an image rendering system provided by embodiments of the present application;
FIG. 8 is a schematic structural diagram of an image rendering apparatus provided in an embodiment of the present application;
FIG. 9 is an alternative flowchart of an image rendering method according to an embodiment of the present disclosure;
FIG. 10 is a schematic flow chart of another alternative image rendering method provided in the embodiment of the present application;
FIG. 11 is a schematic flow chart of still another alternative image rendering method provided in the embodiment of the present application;
FIG. 12A is an original image before rendering of certain body parts is enabled or disabled, according to an embodiment of the present application;
FIG. 12B is an image with rendering of the body hair part disabled, according to an embodiment of the present application;
FIG. 13 is a flowchart of an image rendering method provided by an embodiment of the present application;
FIG. 14 is another flowchart of an image rendering method provided by an embodiment of the present application;
FIG. 15 is a schematic diagram illustrating the effect of the method according to the embodiment of the present application when applied to a low-end machine;
FIG. 16 is a schematic diagram illustrating the effect of the method according to the embodiment of the present application when applied to a mid-range machine;
FIG. 17 is a schematic diagram illustrating the effect of the method according to the embodiment of the present application when applied to a high-end machine.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the embodiments of the present application belong. The terminology used in the embodiments of the present application is for the purpose of describing the embodiments of the present application only and is not intended to be limiting of the present application.
Before explaining the aspects of the embodiments of the present application, terms referred to in the embodiments of the present application are explained first:
(1) Mesh (grid): a structure consisting of vertices and faces. Meshes are a common way to display 3D game objects. A mesh is a data structure that contains a vertex array and a face array. Fig. 1A shows the structure of a mesh; during image rendering, rendering is performed in units of mesh cells, and the mesh includes a plurality of mesh cells 101.
(2) Vertex: a data structure containing a three-dimensional spatial vector holding the vertex position x, y, z. As shown in fig. 1B, a schematic diagram of vertex-face-edge relationships, the vertices (Vertex = [x, y, z]) in fig. 1B are Vertex 0, Vertex 1, and Vertex 2.
(3) Face: a data structure containing 3 integer indices, the indices indicating the vertices used by the face. As shown in fig. 1B, face 1 is represented as: Face = [0, 1, 2].
(4) Edge: each face has 3 edges. An edge is a data structure containing 2 integer indices. As shown in fig. 1B, the three edges (edge 0, edge 1, and edge 2) are respectively represented as: Edge 0 = [0, 1], Edge 1 = [1, 2], and Edge 2 = [2, 0].
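For concreteness, the vertex, face, and edge structures of fig. 1B can be sketched as plain C structures (an illustrative sketch; the type and field names are not taken from this application):

    typedef struct { float x, y, z; } Vertex;  /* Vertex = [x, y, z]                    */
    typedef struct { int v[3]; } Face;         /* Face 1 = [0, 1, 2]                    */
    typedef struct { int v[2]; } Edge;         /* Edge 0 = [0, 1], Edge 1 = [1, 2], ... */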
(5) Rendering instancing (GPU instancing): using GPU instancing, multiple copies of the same mesh can be drawn (rendered) at once with a small number of draw calls (Drawcall). This is useful for rendering objects that repeat in a scene, such as buildings, trees, or grass. GPU instancing renders the same mesh in each draw call, but each instance may have different parameters (e.g., color or scale) to increase variation and reduce repetition in appearance. GPU instancing reduces the number of draw calls per scene and can significantly improve a project's rendering performance.
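A minimal sketch of issuing an instanced draw (assuming OpenGL ES 3.0; the GL calls are standard, while draw_grass_instances and its parameters are illustrative names):

    #include <GLES3/gl3.h>

    /* Draw instance_count copies of the same mesh with a single call; per-instance
     * parameters (color, scale, transform) would come from instanced vertex
     * attributes or uniforms, which are omitted here. */
    void draw_grass_instances(GLint first, GLsizei vertex_count, GLsizei instance_count) {
        glDrawArraysInstanced(GL_TRIANGLES, first, vertex_count, instance_count);
    }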
(6) Draw call: the central processing unit (CPU) calls the underlying graphics drawing interface to command the GPU to perform a rendering operation. The rendering process is implemented as a pipeline: the CPU and GPU work in parallel and are connected through a command buffer, into which the CPU submits rendering commands and from which the GPU receives and executes the corresponding commands. Draw calls affect drawing performance mainly because the CPU must issue a draw call for every draw, and each draw call requires much preparation work: checking the rendering state, submitting rendering data, submitting the rendering state, and so on. The GPU has strong computing power and can process rendering tasks quickly; when there are too many draw calls, the CPU spends a great deal of extra overhead on preparation, the CPU itself becomes heavily loaded, and the GPU may sit idle. Because too many draw calls create a CPU performance bottleneck, with much time consumed by draw-call preparation, one clear optimization direction is to merge many small draw calls into one large draw call wherever possible.
(7) Graphics rendering interface of a high-order GPU (e.g., glMulti*): refers to OpenGL ES (Open Graphics Library for Embedded Systems) graphics rendering interfaces that are only available on high-end GPU hardware (supporting OpenGL ES 3.1 or above).
(8) Vertex buffer: a buffer stored on the GPU (e.g., in OpenGL/DirectX) used to hold the vertex data array; it provides a way to upload vertex data (position, normal vector, color, etc.) to the GPU for non-immediate-mode rendering. Vertex buffers provide a significant performance gain over immediate-mode rendering, mainly because the data resides in GPU memory rather than system memory and can therefore be fetched and rendered by the GPU directly.
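As an illustration of uploading a vertex array into a GPU-resident vertex buffer (standard OpenGL ES calls; create_vertex_buffer and its parameters are illustrative names assumed for the sketch):

    #include <GLES2/gl2.h>

    /* Create a vertex buffer object and upload vertex positions (3 floats per
     * vertex) into GPU memory, so the GPU can fetch them directly at draw time. */
    GLuint create_vertex_buffer(const float *positions, GLsizei vertex_count) {
        GLuint vbo = 0;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)(vertex_count * 3 * sizeof(float)),
                     positions, GL_STATIC_DRAW);
        return vbo;
    }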
(9) Intermittent vertex rendering in a single draw call: the vertex buffer is a contiguous array of vertex data, and some scenarios require intermittent rendering of vertices from the vertex buffer within a single draw call. That is, the single draw call does not render vertices 1 to n sequentially and completely, but renders them intermittently, for example 1-10, 31-100, 221-300, ..., 1000-n, skipping some vertices in the middle. As shown in fig. 2, a schematic diagram of intermittent vertex rendering, only a small amount of processing is needed to determine whether intermediate vertices are displayed. For example, the first row in fig. 2 represents all 20 vertices in the vertex data array: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20. In the actual rendering process, intermittent vertex rendering is required and only vertices 1, 2, 3, 4, 5, 6, 7 and 13, 14, 15, 16, 17, 18, 19, 20 need to be rendered, so the middle vertices 8, 9, 10, 11, 12 can be skipped without rendering.
Before explaining the image rendering method according to the embodiment of the present application, a method in the related art will be explained first.
In the intermittent vertex rendering process, one implementation in the related art uses separate draw calls, that is, intermittent vertex rendering can only be achieved by invoking multiple draw calls. As shown in fig. 3, a schematic diagram of invoking multiple draw calls for intermittent vertex rendering in the related art, consider all 20 vertices in the vertex data array: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20. During rendering of the first row of image data, vertices 4, 5, 6, 7 do not require rendering, and during rendering of the second row of image data, vertices 8, 9, 10, 11, 12 and 13, 14, 15, 16, 17, 18, 19, 20 do not require rendering. Therefore, the overall rendering requires 5 draw calls: glDrawArray(1, 2, 3) for rendering vertices 1, 2, 3; glDrawArray(8, 9, 10, 11, 12) for rendering vertices 8, 9, 10, 11, 12; glDrawArray(13, 14, 15, 16, 17, 18, 19, 20) for rendering vertices 13, 14, 15, 16, 17, 18, 19, 20; glDrawArray(1, 2, 3) for rendering vertices 1, 2, 3; and glDrawArray(4, 5, 6, 7) for rendering vertices 4, 5, 6, 7.
Another implementation manner in the related art is a graphics rendering interface requiring a high-order GPU, as shown in fig. 4, which is a schematic diagram of performing intermittent vertex rendering by using the graphics rendering interface of the high-order GPU in the related art, and for all 20 vertices in a vertex data array: 1,2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, wherein during rendering of a first line of image data, vertex 4, 5, 6, 7 is a vertex that does not require rendering, and during rendering of a second line of image data, vertices 8, 9, 10, 11, 12 and 13, 14, 15, 16, 17, 18, 19, 20 are vertices that do not require rendering, and therefore during overall rendering, a graphics rendering interface employing a high-order GPU calls 1 rendering call, wherein the one rendering call is:
glDrawArray(
1,2,3,
8,9,10,11,12,
13,14,15,16,17,18,19,20,
1,2,3,
4,5,6,7
)。
however, the method is not widely adaptable and is not applicable to all models.
Yet another implementation manner in the related art is to fill data to be rendered to VB in real time according to whether rendering is enabled for parts of the object, as shown in fig. 5, which is a schematic diagram of intermittent vertex rendering through data copying in the related art, and for all 20 vertices in a vertex data array: 1,2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, wherein during rendering of a first line of image data, vertex 4, 5, 6, 7 is a vertex that does not require rendering, and during rendering of a second line of image data, vertices 8, 9, 10, 11, 12 and 13, 14, 15, 16, 17, 18, 19, 20 are vertices that do not require rendering, and therefore, during overall rendering, data to be rendered may be copied into a vertex buffer and then rendered. However, this approach involves a data copy process, and thus there is a bottleneck upper limit on performance.
Yet another implementation manner in the related art is that if there is no graphics rendering interface of the high-level GPU, since the parts of the same kind of object have different enabled states, they cannot be instantiated, and multiple renderings are required. As shown in fig. 6, which is a schematic diagram of performing intermittent vertex rendering by multiple renderings in the related art, for an object one, an object two, and an object three, where parts 1,2, 3, and 4 are the same, data and content to be rendered are completely the same, but parts that do not need to be rendered in the object one, the object two, and the object three are different, and thus, for the same part, multiple renderings may be required in different objects. Thus, image rendering efficiency is obviously reduced.
Based on the above problems in the related art, the embodiments of the present application provide an image rendering method, which is a GPU rendering invocation optimization method with wider adaptability. The method addresses the need, in some scenarios, to achieve intermittent vertex rendering from a single draw call over the vertex buffer: that is, the single draw call does not render vertices 1 to n sequentially and completely, but renders them intermittently, for example 1-10, 31-100, 221-300, ..., 1000-n, with some vertices in the middle skipped.
In the image rendering method provided by the embodiment of the application, firstly, data to be rendered and an index data set are obtained; the data to be rendered corresponds to a vertex set; secondly, performing vertex eliminating treatment on partial vertexes in the vertex set based on the data to be rendered and the index data in the index data set to obtain a vertex set to be rendered; and finally, vertex rendering is sequentially carried out on the vertexes to be rendered in the vertex set to be rendered, and the rendered image is obtained. Therefore, because the vertex eliminating processing is carried out on part of the vertexes in the vertex set based on the index data, the intermittent vertex rendering can be realized by calling one drawing call, the image rendering efficiency is improved, and the method is suitable for different devices and has high adaptability.
An exemplary application of the image rendering device according to the embodiment of the present application is described below, and the image rendering device according to the embodiment of the present application may be implemented as a terminal or a server. In one implementation manner, the image rendering device provided in the embodiment of the present application may be implemented as any terminal having an image processing function, a video display function, and an image rendering function, such as a notebook computer, a tablet computer, a desktop computer, a mobile device (e.g., a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, a portable game device), an intelligent robot, an intelligent home appliance, and an intelligent vehicle-mounted device; in another implementation manner, the image rendering device provided in this embodiment may also be implemented as a server, where the server may be an independent physical server, may also be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), and a big data and artificial intelligence platform. The terminal and the server may be directly or indirectly connected through wired or wireless communication, which is not limited in this embodiment of the present application. Next, an exemplary application when the image rendering apparatus is implemented as a terminal will be explained.
Referring to fig. 7, fig. 7 is an alternative architecture diagram of an image rendering system according to an embodiment of the present application, in order to implement supporting any one of a video application or an image display application, the image rendering system 10 at least includes a terminal 100, a network 200, and a server 300, where the server 300 is a server of the video application or the image display application, and the server 300 may constitute an image rendering device according to the embodiment of the present application. The terminal 100 is connected to the server 300 through a network 200, and the network 200 may be a wide area network or a local area network, or a combination of both. In the image rendering process, the terminal 100 sends an image rendering request to the server 300 through the network 200, after acquiring the data to be rendered and the index data set, the server 300 performs vertex elimination processing on part of vertices in a vertex set corresponding to the data to be rendered based on the data to be rendered and the index data in the index data set to obtain a vertex set to be rendered; and then vertex rendering is carried out on the vertexes to be rendered in the vertex set to be rendered in sequence to obtain the rendered image. After the rendered image is obtained, the rendered image is transmitted to the terminal 100 through the network 200, and the rendered image is displayed on the terminal 100.
The image rendering method provided by the embodiment of the application can be further implemented based on a cloud platform and through a cloud technology, for example, the server 300 may be a cloud server, the cloud server performs vertex elimination processing on part of vertices in the vertex set based on the data to be rendered and the index data in the index data set to obtain the vertex set to be rendered, and the cloud server sequentially performs vertex rendering on the vertices to be rendered in the vertex set to be rendered to obtain the rendered image.
In some embodiments, a cloud storage may be further provided, and the rendered image may be stored in the cloud storage, or the index data set may be stored in the cloud storage, or the data to be rendered and the rendered image are mapped and stored in the cloud storage. Therefore, when the image rendering needs to be performed on the data to be rendered again in the follow-up process, the rendered image can be directly read from the cloud storage.
It should be noted that Cloud technology (Cloud technology) refers to a hosting technology for unifying series resources such as hardware, software, network, etc. in a wide area network or a local area network to implement calculation, storage, processing and sharing of data. The cloud technology is based on the general names of network technology, information technology, integration technology, management platform technology, application technology and the like applied in the cloud computing business model, can form a resource pool, is used as required, and is flexible and convenient. Cloud computing technology will become an important support. Background services of the technical network system require a large amount of computing and storage resources, such as video websites, picture-like websites and more web portals. With the high development and application of the internet industry, each article may have its own identification mark and needs to be transmitted to a background system for logic processing, data in different levels are processed separately, and various industrial data need strong system background support and can only be realized through cloud computing.
Fig. 8 is a schematic structural diagram of an image rendering apparatus provided in an embodiment of the present application, where the image rendering apparatus shown in fig. 8 includes: at least one processor 310, memory 350, at least one network interface 320, and a user interface 330. The various components in the image rendering device are coupled together by a bus system 340. It will be appreciated that the bus system 340 is used to enable communications among the components connected. The bus system 340 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 340 in fig. 8.
The Processor 310 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The user interface 330 includes one or more output devices 331 that enable presentation of media content, and one or more input devices 332.
The memory 350 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 350 optionally includes one or more storage devices physically located remote from processor 310. The memory 350 may include either volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 350 described in embodiments herein is intended to comprise any suitable type of memory. In some embodiments, memory 350 is capable of storing data, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below, to support various operations.
An operating system 351 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 352 for communicating to other computing devices via one or more (wired or wireless) network interfaces 320, exemplary network interfaces 320 including: bluetooth, wireless compatibility authentication (WiFi), and Universal Serial Bus (USB), etc.;
an input processing module 353 for detecting one or more user inputs or interactions from one of the one or more input devices 332 and translating the detected inputs or interactions.
In some embodiments, the apparatus provided by the embodiments of the present application may be implemented in software, and fig. 8 illustrates an image rendering apparatus 354 stored in the memory 350, where the image rendering apparatus 354 may be an image rendering apparatus in an image rendering device, and may be software in the form of programs and plug-ins, and the software includes the following software modules: the fetch module 3541, the vertex removal module 3542, and the vertex rendering module 3543, which are logical and thus may be arbitrarily combined or further split depending on the functionality implemented. The functions of the respective modules will be explained below.
In other embodiments, the apparatus provided in the embodiments of the present Application may be implemented in hardware, and for example, the apparatus provided in the embodiments of the present Application may be a processor in the form of a hardware decoding processor, which is programmed to execute the image rendering method provided in the embodiments of the present Application, for example, the processor in the form of the hardware decoding processor may be one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
The image rendering method provided by each embodiment of the present application may be executed by an image rendering device, where the image rendering device may be any terminal having an image processing function, a video display function, and an image rendering function, or may also be a server, that is, the image rendering method provided by each embodiment of the present application may be executed by the terminal, may also be executed by the server, or may also be executed by the terminal interacting with the server.
Referring to fig. 9, fig. 9 is an optional flowchart of an image rendering method provided in an embodiment of the present application, and the following description is made with reference to the steps shown in fig. 9, where it should be noted that the image rendering method in fig. 9 is described by taking a server as an execution subject as an example.
Step 901, acquiring data to be rendered and an index data set; the data to be rendered corresponds to a vertex set.
The data to be rendered is original data used for rendering to obtain a rendered image, the data to be rendered includes data corresponding to each pixel point on the rendered image, and the original image can be obtained by performing image rendering on the data to be rendered, and the original image is an image which is not processed and does not embody any rendering requirement information, that is, an image which does not embody any rendering requirement of a user.
The index data set is obtained based on the rendering requirement information of the user, that is, the index data in the index data set is data capable of reflecting the rendering requirement of the user. And the index data in the index data set is used for calculating the vertex in the vertex set corresponding to the data to be rendered so as to determine whether the corresponding vertex is to be rendered.
In the embodiment of the present application, the vertex set includes at least three vertices, where every three vertices may form a triangle, and the triangle is the minimum unit of image rendering. Of course, four vertices may also be connected to form a quadrangle, five vertices to form a pentagon, and so on. Shapes such as triangles, quadrangles, and pentagons are combined to form the mesh corresponding to the image in the embodiment of the present application. In the actual rendering process, the shapes in the mesh are divided into triangles (as sketched below), and rendering is performed with the triangle as the rendering unit.
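As a concrete illustration of this splitting (a minimal sketch; the quad-to-triangle split along the diagonal shown here is a common convention, not something mandated by this application):

    /* A quadrilateral with vertex indices 0, 1, 2, 3 is typically split into two
     * triangles sharing the diagonal 0-2 before rendering. */
    static const int quad[4]          = { 0, 1, 2, 3 };
    static const int triangles[2][3]  = {
        { 0, 1, 2 },   /* first triangle  */
        { 0, 2, 3 }    /* second triangle */
    };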
In some embodiments, the index data in the index data set may be preset or calculated based on the rendering instruction of the user.
And step S902, performing vertex eliminating processing on part of vertexes in the vertex set based on the data to be rendered and the index data in the index data set to obtain the vertex set to be rendered.
Here, the operation result of each vertex may be obtained by performing a vertex removal operation on each vertex in the vertex set to which the data to be rendered corresponds and the index data in the index data set, and then determining whether each vertex is to be removed based on the operation result. Alternatively, it may be determined whether each vertex is to be eliminated by determining the data type of the index data corresponding to the vertex, that is, the index data corresponding to each vertex may have a tag data therein, the tag data being capable of identifying an elimination status indication of the vertex, from which it may be determined whether the vertex is to be eliminated.
In the embodiment of the application, the index data is preset or calculated based on the rendering instruction of the user, so that the index data can represent the rendering requirement of the user, and the operation result obtained by performing vertex eliminating operation based on the index data can accurately reflect whether each vertex needs to be eliminated. Alternatively, the vertices can be distinguished by the label data, and whether each vertex is to be eliminated can be determined quickly and accurately.
The vertex eliminating process is to eliminate part of the vertices in the vertex set so as to realize that part of the vertices in the vertex set are not rendered. The vertex set to be rendered comprises at least one vertex to be rendered, and the vertex to be rendered is a vertex which needs to be rendered actually. The number of the vertexes to be rendered in the vertex set to be rendered is smaller than the number of the vertexes in the vertex set.
In the embodiment of the present application, part of the vertices are not rendered, that is, are intermittently rendered, that is, all the vertices in the vertex set are not continuously rendered in sequence, but a part of the vertices are intermittently skipped, and are not rendered.
And step S903, performing vertex rendering on the vertexes to be rendered in the vertex set to be rendered in sequence to obtain a rendered image.
In the embodiment of the application, after the set of the vertexes to be rendered with partial vertexes eliminated is obtained, vertex rendering is performed on all the vertexes to be rendered in the set of the vertexes to be rendered, in the rendering process, the vertexes to be rendered in the set of the vertexes to be rendered can be triangulated to obtain a plurality of triangles, and then triangular image rendering is performed by taking each triangle as a unit to obtain a rendered image.
According to the image rendering method provided by the embodiment of the application, based on the acquired data to be rendered and the index data in the index data set, vertex eliminating processing is performed on part of vertices in a vertex set corresponding to the data to be rendered, and the vertex set to be rendered is obtained; and vertex rendering is sequentially carried out on the vertexes to be rendered in the vertex set to be rendered, so that rendered images are obtained. Therefore, because the vertex eliminating processing is carried out on part of the vertexes in the vertex set based on the index data, the intermittent vertex rendering can be realized by calling one drawing call, the image rendering efficiency is improved, and the method is suitable for different devices and has high adaptability.
In some embodiments, at least a terminal and a server are included in an image rendering system. The terminal runs a video application or an image display application, and here, the description is given by taking an example in which the terminal runs an image display application, and the image display application is a game application.
In the process of rendering each frame of picture in the game, which parts of the frame of picture need to be rendered and which parts do not need to be rendered can be predefined, so that different game pictures can be rendered based on the same data to be rendered and predefined information of a user. For example, when the same operation object is operated, the difference between the game screens of the two frames before and after the operation object may be very small, and only the locally displayed information is different, so that it may be predefined before the image rendering which local areas of the game screens need to be rendered and which local areas of the game screens do not need to be rendered. In the process of rendering, based on predefined information, the image rendering method provided by the embodiment of the application can be adopted to render the image, so as to obtain the finally displayed game picture.
Fig. 10 is a schematic flowchart of another alternative image rendering method provided in an embodiment of the present application, and as shown in fig. 10, the method includes the following steps:
step S101, a terminal acquires input data to be rendered and preset rendering requirement information, wherein the data to be rendered corresponds to a vertex set.
Here, the rendering requirement information may be input by a user through a terminal, for example, when data to be rendered is input, a region, a pixel point, or a vertex that does not need to be rendered may be marked corresponding to the data to be rendered, and then all the marked data may constitute the rendering requirement information.
In some embodiments, when performing image rendering, prompt information may be displayed on a current interface of the game application, where the prompt information is used to prompt a user to input rendering requirement information, the user may input a command through a command line input box to form the rendering requirement information, and may also select and mark a region, a pixel point, or a vertex through a mouse or a keyboard, thereby generating the rendering requirement information.
And step S102, the terminal generates an image rendering request based on the data to be rendered and preset rendering demand information.
Here, the data to be rendered and the preset rendering requirement information are encapsulated into the image rendering request.
Step S103, the terminal sends an image rendering request to the server.
And step S104, the server analyzes the rendering requirement information to obtain index data corresponding to each vertex in the vertex set.
In some embodiments, the rendering requirement information includes a processing instruction for each vertex, and correspondingly, the step S104 may be implemented by the following steps S1041 to S1044 (not shown in the figure):
step S1041, performing initialization setting on the index data of each vertex to obtain initialization index data.
Here, the initialization setting may be to set the index data of each vertex to the same initial value, the initial value constituting the initialization index data. For example, the index data of each vertex is set to 0, and at this time, the initialization index data is 0.
Step S1042, parsing the rendering requirement information to obtain a processing instruction for each vertex.
Here, whether each vertex is to be rendered may be determined by parsing the rendering requirement information. When a certain vertex needs to be rendered, the processing instruction corresponding to that vertex is a rendering instruction; when a certain vertex does not need to be rendered, the processing instruction corresponding to that vertex is a non-rendering instruction.
In some embodiments, whether each vertex is to be eliminated may also be determined by parsing the rendering requirement information. When a certain vertex needs to be eliminated, the processing instruction corresponding to that vertex is a non-rendering instruction; when a certain vertex does not need to be eliminated, the processing instruction corresponding to that vertex is a rendering instruction.
In step S1043, when the processing instruction is a rendering instruction, the initialized index data of the vertex is updated to the second type index data.
Here, the second type index data is index data for characterizing rendering processing to be performed on the vertex. When the index data of any vertex is the second type index data, it can be determined that the vertex does not need to be eliminated and needs to be rendered.
In the embodiment of the application, when the processing instruction is a rendering instruction, the initialization index data of the vertex is updated, and a rendering label is added to the vertex through the second type index data, so that the rendering processing mode of the corresponding vertex can be determined based on the second type index data in the subsequent rendering process.
In step S1044, when the processing instruction is a non-rendering instruction, the initialized index data of the vertex is updated to the first type index data.
Here, the first type index data is index data for characterizing that no rendering processing is to be performed on the vertex. When the index data of any vertex is the first type index data, it can be determined that the vertex needs to be eliminated and is a vertex that does not need to be rendered.
In the embodiment of the application, when the processing instruction is a non-rendering instruction, the initialization index data of the vertex is updated, and a rendering label is added to the vertex through the first type index data, so that the rendering processing mode of the corresponding vertex can be determined based on the first type index data in the subsequent rendering process.
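As a minimal sketch of steps S1041 to S1044, assuming the first type and second type index data are encoded as the values 0 and 1 used elsewhere in this description; the names ProcessingInstruction and BuildIndexData are illustrative and not part of the embodiment:

#include <cstddef>
#include <vector>

// Illustrative encoding of the two index data types described above.
constexpr float kFirstTypeIndexData  = 0.0f;  // vertex to be eliminated / not rendered
constexpr float kSecondTypeIndexData = 1.0f;  // vertex to be rendered

enum class ProcessingInstruction { Render, NoRender };

// Steps S1041 to S1044: initialize the index data of every vertex to the same initial
// value, then update it according to the processing instruction parsed from the
// rendering requirement information.
std::vector<float> BuildIndexData(const std::vector<ProcessingInstruction>& instructions) {
    std::vector<float> indexData(instructions.size(), 0.0f);  // S1041: initialization index data
    for (std::size_t i = 0; i < instructions.size(); ++i) {
        indexData[i] = (instructions[i] == ProcessingInstruction::Render)
                           ? kSecondTypeIndexData   // S1043: rendering instruction
                           : kFirstTypeIndexData;   // S1044: non-rendering instruction
    }
    return indexData;
}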
Step S105, the server integrates the index data corresponding to all vertexes in the vertex set to obtain an index data set.
In the embodiment of the application, each index data in the index data set has a mapping relation with one vertex. Here, the index data corresponding to all the vertices in the vertex set may be integrated by counting the index data of all the vertices to form an index data set, where the index data set includes the index data of each vertex and the mapping relationship between the vertex identifier of the vertex and the index data. Through the vertex identification and the mapping relation, the index data corresponding to the vertex can be determined.
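A possible sketch of the integration in step S105, under the assumption that the vertex identifier is simply the vertex's position in the buffer; the container choice and names are illustrative only:

#include <cstddef>
#include <unordered_map>
#include <vector>

// Step S105: integrate the per-vertex index data into an index data set keyed by a
// vertex identifier, so each index data has a mapping relation with one vertex.
std::unordered_map<std::size_t, float> BuildIndexDataSet(const std::vector<float>& indexData) {
    std::unordered_map<std::size_t, float> indexDataSet;
    for (std::size_t vertexId = 0; vertexId < indexData.size(); ++vertexId) {
        indexDataSet[vertexId] = indexData[vertexId];  // mapping relation: vertex id -> index data
    }
    return indexDataSet;
}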
In some embodiments, the terminal may further obtain input data to be rendered and index data corresponding to each vertex, integrate the index data corresponding to all the vertices to obtain an index data set, and generate an image rendering request based on the data to be rendered and the index data set, where the image rendering request includes the data to be rendered and the index data set. That is to say, the server may obtain rendering requirement information sent by the terminal, further analyze the rendering requirement information to obtain index data of each vertex, and further integrate the index data to obtain an index data set; or the terminal may analyze the rendering requirement information of the user to obtain the index data of each vertex, and further integrate the index data to obtain the index data set.
In step S106, the server obtains vertex position information of each vertex in the vertex set.
Here, the vertex position information refers to the position information of each vertex in the image. The vertex position information may be coordinate data, that is, the vertex position information is presented in the form of coordinate data.
In step S107, the server acquires index data corresponding to each vertex from the index data set.
Here, index data corresponding to each vertex may be determined from the index data set based on the vertex identification of each vertex.
And step S108, the server performs scaling processing on the vertex position information of each vertex based on the index data to obtain a scaling result.
In some embodiments, step S108 may be implemented by the following steps S1081 and S1082 (not shown in the figures):
step S1081, the vertex position information of each vertex is multiplied by the index data corresponding to the vertex to obtain a vertex position product.
The scaling processing refers to reduction or enlargement of coordinate data of the vertex position information. In one implementation, the index data may take any positive value. When the index data of any vertex is a positive number smaller than 1, the coordinate data of the vertex can be reduced through the index data; when the index data of any vertex is a number greater than 1, the coordinate data of the vertex can be amplified through the index data; when the index data of any vertex is equal to 1, the coordinate data of the vertex is not changed based on the index data.
In another implementation, the index data may take the value 0 or 1. When the index data is equal to 0, the coordinate data of the vertex is scaled to the origin through the index data; when the index data is equal to 1, the coordinate data of the vertex is not changed based on the index data.
In step S1082, the vertex position product is determined as the scaling result of the vertex.
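A minimal sketch of steps S1081 and S1082, assuming the vertex position information is an xyz triple and the index data is a single scalar per vertex; the Vec3 type and function name are illustrative:

// Illustrative xyz container; any vector type could be used.
struct Vec3 { float x, y, z; };

// Steps S1081 and S1082: multiply the vertex position information by the index data of
// the vertex; the product is directly taken as the scaling result of that vertex.
Vec3 ScaleVertex(const Vec3& position, float indexData) {
    return { position.x * indexData, position.y * indexData, position.z * indexData };
}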
In step S109, the server determines vertices to be eliminated from the vertex set based on the scaling result of each vertex.
In one implementation, after the vertex position information of each vertex in the vertex set is obtained, the vertices in the vertex set can be triangulated based on the vertex position information to obtain at least one initial triangle, where each initial triangle includes three vertices. Correspondingly, determining the vertex to be eliminated from the vertex set based on the scaling result of each vertex may be: when the vertex position products corresponding to the three vertices of any initial triangle are all the same, determining the three vertices of the initial triangle as vertices to be eliminated.
Here, the case where the vertex position products corresponding to the three vertices of any initial triangle are all the same may be the case where the vertex position products corresponding to the three vertices are all 0, and the three vertices of this initial triangle are then determined as vertices to be eliminated. When the vertex position products of the three vertices are all 0, the coordinate data of the three vertices are all 0, that is, the three vertices are contracted to the same position and the triangle they form is hidden; therefore, the three vertices are all vertices to be eliminated and do not need to be rendered.
In another implementation, the vertex to be eliminated is determined from the vertex set based on the scaling result of each vertex, and the method may be: when the vertex position product is the first type position data, determining the vertex corresponding to the vertex position product as the vertex to be eliminated; and when the vertex position product is the second type position data, determining the vertex corresponding to the vertex position product as the vertex to be rendered.
Here, the first type position data may be 0, and when the product of vertex positions of any vertex is 0, the vertex is determined as a vertex to be eliminated; the second type position data may be 1, and when the product of vertex positions of any vertex is 1, the vertex is determined as a vertex to be rendered, i.e., a non-elimination vertex.
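The triangle-based check described above can be sketched as follows, assuming the first type position data is the origin [0, 0, 0]; the types and names are illustrative:

#include <array>

struct Vec3 { float x, y, z; };  // same illustrative type as in the earlier sketch

// Check for one initial triangle: if the vertex position products of all three vertices
// are the first type position data (assumed here to be the origin [0, 0, 0]), the three
// vertices are determined as vertices to be eliminated.
bool IsTriangleToEliminate(const std::array<Vec3, 3>& vertexPositionProducts) {
    for (const Vec3& p : vertexPositionProducts) {
        if (p.x != 0.0f || p.y != 0.0f || p.z != 0.0f) {
            return false;  // at least one product is non-zero, so the triangle is rendered
        }
    }
    return true;  // all three vertices collapse to the same point and produce no pixels
}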
And step S110, the server deletes the vertex to be eliminated in the vertex set to obtain the vertex set to be rendered.
And step S111, the server sequentially performs vertex rendering on the vertexes to be rendered in the vertex set to be rendered to obtain the rendered image.
And step S112, the server sends the rendered image to the terminal.
And step S113, the terminal displays the rendered image on the current interface.
According to the image rendering method provided by the embodiment of the application, the vertex position information of each vertex is subjected to scaling processing based on the index data of each vertex to obtain a scaling result, and the vertex to be eliminated is determined from the vertex set based on the scaling result of each vertex. Therefore, the vertex to be eliminated can be accurately determined, the accurate vertex set to be rendered is obtained, and the accurate response to the rendering requirement of the user is realized.
In some embodiments, when performing image rendering, a draw call (Drawcall) may be called to implement the image rendering method of the embodiments of the present application.
Fig. 11 is a schematic flowchart of yet another alternative image rendering method according to an embodiment of the present application, and as shown in fig. 11, the method includes the following steps:
step S121, the terminal obtains input data to be rendered and preset rendering requirement information, wherein the data to be rendered corresponds to a vertex set.
In step S122, the terminal generates an image rendering request based on the data to be rendered and the preset rendering requirement information.
Step S123, the terminal sends an image rendering request to the server.
In step S124, the server parses the rendering requirement information to obtain index data corresponding to each vertex in the vertex set.
Step S125, the server integrates the index data corresponding to all vertices in the vertex set to obtain an index data set.
Each index data in the index data set has a mapping relationship with one vertex.
In step S126, the server obtains index data corresponding to each vertex from the index data set.
In step S127, the server determines a vertex processing state corresponding to each vertex based on the index data.
In one implementation, the server may further obtain the vertex position information of each vertex, and perform triangulation on the vertices in the vertex set based on the vertex position information to obtain at least one initial triangle, where each initial triangle includes three vertices. Correspondingly, determining the vertex processing state corresponding to each vertex based on the index data may be: when the index data corresponding to the three vertices of any initial triangle are all preset type index data, determining the vertex processing states of the three vertices of the initial triangle as the elimination state, that is, determining the three vertices of the initial triangle as vertices to be eliminated.
In another implementation, determining the vertex processing state corresponding to each vertex based on the index data may be: when the index data is the first type index data, determining the vertex processing state of the vertex which has the mapping relation with the index data as an elimination state; when the index data is the second type index data, the vertex processing state of the vertex having the mapping relation with the index data is determined as the non-elimination state. That is, when the index data is the first type index data, determining the vertex having a mapping relation with the index data as the vertex to be eliminated; and when the index data is the second type index data, determining the vertex which has a mapping relation with the index data as the vertex to be rendered.
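A minimal sketch of this determination, assuming the first type and second type index data are encoded as 0 and 1 as in the earlier sketches; the enum and function names are illustrative:

enum class VertexProcessingState { Eliminate, NotEliminate };

// When the index data is first type index data (0 here), the vertex mapped to it is in
// the elimination state; otherwise it is in the non-elimination state.
VertexProcessingState StateFromIndexData(float indexData) {
    return (indexData == 0.0f) ? VertexProcessingState::Eliminate
                               : VertexProcessingState::NotEliminate;
}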
In step S128, when the vertex processing state of any vertex is the elimination state, the server determines the vertex as the vertex to be eliminated.
And S129, deleting the vertex to be eliminated in the vertex set by the server to obtain the vertex set to be rendered.
And step S130, the server sequentially performs vertex rendering on the vertexes to be rendered in the vertex set to be rendered to obtain the rendered image.
In some embodiments, step S130 may be implemented by the following steps S1301 to S1303 (not shown in the figure):
step S1301, vertex position information of each vertex to be rendered is obtained.
Step S1302, performing triangulation on the vertex to be rendered in the vertex set to be rendered based on the vertex position information to obtain at least one triangle to be rendered.
Step S1303, performing vertex rendering with each triangle to be rendered as a rendering unit to obtain a rendered image.
Step S131, the server sends the rendered image to the terminal.
And step S132, the terminal displays the rendered image on the current interface.
According to the image rendering method provided by the embodiment of the application, the server determines the vertex processing state corresponding to each vertex based on the index data, and when the vertex processing state of any vertex is an elimination state, the server determines the vertex as a vertex to be eliminated, so that the vertex is eliminated, and a vertex set to be rendered is obtained; and vertex rendering is sequentially carried out on the vertexes to be rendered in the vertex set to be rendered, so that rendered images are obtained. Therefore, because the vertex eliminating processing is carried out on part of the vertexes in the vertex set based on the index data, the intermittent vertex rendering can be realized by calling one drawing call, the image rendering efficiency is improved, and the method is suitable for different devices and has high adaptability. In addition, the vertex to be eliminated can be accurately determined, and the accurate vertex set to be rendered is obtained, so that the accurate response to the rendering requirement of the user is realized.
Next, an image rendering method according to an embodiment of the present application will be described by taking the rendering of a game screen of any game application P1 as an example.
When a user, as a player, runs the game application P1 on the terminal and starts playing, a specific game screen is displayed for each operation of the user. When the server of the game application P1 renders each game screen, because of the continuity of the screens in the game application, consecutive frames of the game screen may differ only locally. Therefore, at the time of implementation, consecutive game screens can be rendered in batches: the local difference is extracted, and vertex rendering is performed on the vertices to be rendered of each game screen. In this case, the rendering data of the previous game screen may constitute the data to be rendered of the subsequent game screen, and the portion of the subsequent game screen that does not need to be rendered is determined based on the user operation, so as to obtain the vertices to be rendered of the subsequent game screen. That is to say, the method of the embodiment of the present application can solve the problem of fast and accurate rendering of consecutive similar video frames.
In order to realize rapid and accurate rendering of continuous similar video frames, in the embodiment of the application, data to be rendered of a game picture and an index data set corresponding to user operation are firstly obtained, wherein the index data set corresponds to the user operation, namely the index data set can be determined based on the user operation; the data to be rendered corresponds to a vertex set; then, performing vertex eliminating processing on part of vertexes in the vertex set based on the data to be rendered and the index data in the index data set to obtain a vertex set to be rendered; and finally, vertex rendering is carried out on the vertexes to be rendered in the vertex set to be rendered in sequence to obtain a game picture.
Referring to fig. 12A and 12B, fig. 12A presents an original face image (i.e., an original image) before the user performs any operation, and fig. 12B presents the game screen (i.e., the rendered image) that needs to be displayed after the user performs an operation. Corresponding to the user operation, in the game screen to be displayed, the part of the hair 121 is not rendered compared with the original face image. Therefore, it is necessary to obtain data to be rendered that excludes the part of the hair 121. In the implementation process, an operation instruction K1 of the user on the original face image may be acquired, rendering requirement information corresponding to the operation instruction K1 is determined based on K1, and the rendering requirement information is then parsed to obtain the index data of each vertex in the vertex set corresponding to the data to be rendered of the original face image in fig. 12A; a vertex processing state corresponding to each vertex is determined based on the index data, wherein the vertex processing state includes an elimination state and a non-elimination state. After the index data of each vertex is obtained, vertex elimination processing is performed on the vertices of the part of the hair 121 area in the original face image based on the data to be rendered of the original face image and the index data in the index data set to obtain the vertex set to be rendered of the game screen, and vertex rendering is then performed in sequence on the vertices to be rendered in the vertex set to be rendered to obtain the game screen in which the part of the hair 121 area is not rendered.
In the embodiment of the application, when the game picture is rendered, the corresponding index data set is determined based on the operation instruction of the user, and the index data set can represent the rendering requirement of the user, namely, which vertexes need to be rendered and which vertexes do not need to be rendered in the game picture can be determined based on the index data in the index data set, so that the rapid and accurate rendering of continuous similar video frames can be realized.
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described. The embodiment of the application provides an image rendering method, which is a GPU rendering call optimization method with wider adaptability. The method solves the problem of realizing intermittent vertex rendering of a vertex buffer from a single draw call in some scenes, that is, the single draw call does not render all vertices sequentially from 1 to n, but renders them intermittently, for example from 1 to 10, 31 to 100, 221 to 300, …, and 1000 to n, and the vertices in between are skipped.
In the embodiment of the application, whether an intermediate vertex is displayed is judged with only a small amount of processing. The algorithm does not need high-end GPU hardware and does not need the glMultiDraw graphics drawing interface supported only by OpenGL 3.1 or later; it only needs an ordinary draw function (for example, the glDrawElements function) to realize intermittent vertex rendering of the vertex buffer with a single draw call. In the embodiment of the application, an index value is added to the vertex data to index the scale factor array (scaleFactors[]), so that a vertex can be controlled to shrink to [0, 0, 0], thereby hiding the vertex or the triangular surface and realizing intermittent vertex rendering.
In the embodiment of the present application, by shrinking vertices to the same position (say [0, 0, 0]), the purpose of hiding the vertices (and the triangle) without rendering them is achieved by using the GPU rendering property that, when the 3 vertices of a triangle are all at the same position, no pixel is rendered. By shrinking vertices to the same position (say [0, 0, 0]) or leaving them unchanged, whether the vertices and triangles are rendered is controlled, so that the vertices (and triangles) of the vertex buffer are rendered intermittently.
The method of the embodiment of the application can be at least applied to the following scenes: in a clothing system, rendering of certain parts of the body needs to be enabled/disabled in a single draw call. As shown in fig. 12A and 12B, where fig. 12A is an original image and fig. 12B is an image with rendering of a hair part of a body disabled, the rendering of the hair part 121 in fig. 12B may be disabled for the hair of a person in the image.
The method of the embodiment of the application can be used in any hardware environment with a GPU, on any device that supports at least the OpenGL ES 2.0 standard.
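As an illustration only, the following C++ sketch shows how such a device could feed the added scale factor index to the GPU and render the whole vertex buffer with a single glDrawElements call. It assumes an existing OpenGL ES 2.0 context and a linked program whose vertex shader exposes aPosition, aScaleFactorIdx and uScaleFactors; these names, the Vertex layout and the helper DrawWithScaleFactors are assumptions of this sketch (a matching vertex shader is sketched later in this section), not part of the embodiment.

#include <GLES2/gl2.h>
#include <cstddef>

// Hypothetical per-vertex layout: the position plus the added scale factor index.
struct Vertex {
    float x, y, z;
    float scaleFactorIdx;  // index into scaleFactors[], stored as a float attribute
};

// Upload the scale factor array and issue ONE glDrawElements call for the whole vertex
// buffer; program, vbo, ibo and indexCount are assumed to have been created elsewhere.
void DrawWithScaleFactors(GLuint program, GLuint vbo, GLuint ibo, GLsizei indexCount,
                          const GLfloat* scaleFactors, GLsizei factorCount) {
    glUseProgram(program);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);

    GLint posLoc = glGetAttribLocation(program, "aPosition");
    GLint idxLoc = glGetAttribLocation(program, "aScaleFactorIdx");
    glEnableVertexAttribArray((GLuint)posLoc);
    glVertexAttribPointer((GLuint)posLoc, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const void*)0);
    glEnableVertexAttribArray((GLuint)idxLoc);
    glVertexAttribPointer((GLuint)idxLoc, 1, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (const void*)offsetof(Vertex, scaleFactorIdx));

    // Changing entries of scaleFactors[] between frames enables/disables vertices
    // without copying or rewriting the vertex buffer.
    glUniform1fv(glGetUniformLocation(program, "uScaleFactors"), factorCount, scaleFactors);

    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, (const void*)0);
}

Because only the uniform array changes between frames in this sketch, enabling or disabling parts of the model does not require copying vertex data.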
The vertex elimination algorithm used in the embodiment of the present application when implementing the vertex elimination processing mainly relates to the following principle: vertex index data (scaleFactorIdx) is added to each vertex for indexing the scale factor array (scaleFactors). During rendering, each vertex position (vertices[i].xyz) is scaled by the scale factor array, for example, the scaling can be calculated by scaleFactors[vertices[i].scaleFactorIdx], and the scaling result is expressed as the following formula (1):
p = vertices[i].xyz * scaleFactors[vertices[i].scaleFactorIdx] (1).
Here, * denotes multiplication. After the scaling process, the scaled result p can be rendered. Thus, if scaleFactors[vertices[i].scaleFactorIdx] = 1, the position of p does not change (it is equal to vertices[i].xyz); if scaleFactors[vertices[i].scaleFactorIdx] = 0, the position of p becomes the origin [0, 0, 0].
In the embodiment of the present application, the GPU has the following characteristic: when the 3 vertices of a triangle are all at the same position, no pixel is rendered. Using this characteristic together with the above vertex elimination algorithm can produce very useful results. When scaleFactors[vertices[i].scaleFactorIdx] of all 3 vertices of a triangle is 1, the vertex positions are unchanged, and the triangle is rendered normally. However, if scaleFactors[vertices[i].scaleFactorIdx] of all 3 vertices of the triangle is 0, all vertices of the triangle are shrunk to [0, 0, 0], the triangle is shrunk to the single point [0, 0, 0], and the triangle is hidden and not rendered.
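As an illustration only, formula (1) and the property described above can be expressed in an OpenGL ES 2.0 vertex shader roughly as follows; the attribute and uniform names and the array size of 64 are assumptions made for this sketch rather than part of the embodiment.

// A hypothetical OpenGL ES 2.0 vertex shader (held in a C++ string) that applies formula (1).
static const char* kVertexShaderSource = R"(
attribute vec3 aPosition;         // vertices[i].xyz
attribute float aScaleFactorIdx;  // vertices[i].scaleFactorIdx
uniform float uScaleFactors[64];  // scaleFactors[]; 64 is an arbitrary illustrative size
uniform mat4 uMvp;                // model-view-projection matrix

void main() {
    float s = uScaleFactors[int(aScaleFactorIdx)];
    vec3 p = aPosition * s;       // formula (1): p = vertices[i].xyz * scaleFactors[...]
    gl_Position = uMvp * vec4(p, 1.0);
}
)";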
By scaling the vertices, whether a triangle is rendered can be controlled. Therefore, triangles in the middle of the whole triangle array can be arbitrarily selected and hidden, so that the effect of intermittent vertex (triangle) rendering is achieved.
Fig. 13 is a flowchart of an image rendering method provided in an embodiment of the present application, and as shown in fig. 13, the method includes the following steps:
in step S1301, the values scaleFactors[] of the scale factor array are loaded.
In step S1302, i =0 is defined.
In step S1303, based on the vertex index data (scaleFactorIdx), scaleFactor = scaleFactors[vertices[i].scaleFactorIdx] is calculated.
In step S1304, each vertex position (vertices[i].xyz) is scaled by scaleFactor, that is, p.xyz = vertices[i].xyz * scaleFactor is calculated.
In step S1305, vertex p is rendered.
Step S1306, define i = i + 1.
In step S1307, it is determined whether i is less than the total number of vertices.
If yes, returning to continue to execute the step S1303; if the judgment result is negative, the process is ended, and the rendering vertex cache is completed.
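The following C++ sketch mirrors the fig. 13 flow on the CPU for readability; in practice the per-vertex scaling runs on the GPU inside the single draw call. VertexData and the RenderVertex hook are illustrative assumptions of this sketch.

#include <cstddef>
#include <vector>

struct VertexData { float x, y, z; int scaleFactorIdx; };

void RenderVertex(float x, float y, float z);  // assumed hook that hands vertex p to the rasterizer

void RenderVertexBuffer(const std::vector<VertexData>& vertices,
                        const std::vector<float>& scaleFactors) {
    for (std::size_t i = 0; i < vertices.size(); ++i) {                 // i = 0; i < total; i = i + 1
        float scaleFactor = scaleFactors[vertices[i].scaleFactorIdx];   // S1303
        float px = vertices[i].x * scaleFactor;                         // S1304: p.xyz = vertices[i].xyz * scaleFactor
        float py = vertices[i].y * scaleFactor;
        float pz = vertices[i].z * scaleFactor;
        RenderVertex(px, py, pz);                                       // S1305: render vertex p
    }
}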
Fig. 14 is another flowchart of an image rendering method provided in an embodiment of the present application, and as shown in fig. 14, the method includes the following steps:
in step S1401, the values of the scale factor array are initialized to 1 (i.e., scaleFactors[] = 1).
Step S1402, define i = 0.
In step S1403, it is determined whether to disable the rendering of vertex i.
If the judgment result is yes, go to step S1404; if the determination result is no, step S1405 is executed.
In step S1404, the ith value of the scale factor array is set to 0 (i.e., scaleFactors[i] = 0).
In step S1405, it is determined whether to enable the rendering of vertex i.
If the judgment result is yes, step S1406 is executed; if the judgment result is no, step S1407 is executed.
In step S1406, the ith value of the scale factor array is set to 1 (i.e., scaleFactors[i] = 1).
Step S1407, define i = i + 1.
In step S1408, it is determined whether i is less than the total number of values in the array of scale factors.
If the judgment result is yes, returning to continue to execute the step S1403; if the judgment result is negative, the process is ended, and the process of changing whether the vertex is rendered is completed.
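The fig. 14 flow of switching individual vertices on and off can be sketched as follows; the shouldDisable flags stand in for the per-vertex decisions of steps S1403 and S1405 and are an assumption made only for this illustration.

#include <cstddef>
#include <vector>

// Walk over the scale factor array and set each entry to 0 (disable rendering of
// vertex i) or 1 (enable rendering of vertex i).
void UpdateScaleFactors(std::vector<float>& scaleFactors,
                        const std::vector<bool>& shouldDisable) {
    for (std::size_t i = 0; i < scaleFactors.size(); ++i) {  // S1402, S1407, S1408
        scaleFactors[i] = shouldDisable[i] ? 0.0f : 1.0f;    // S1404 / S1406
    }
}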
Fig. 15 to 17 are schematic diagrams illustrating the effect of the method according to the embodiment of the present application applied to a low-end machine, a middle-end machine and a high-end machine, respectively. As shown in fig. 15, on the low-end machine, the average number of Frames Per Second (FPS) is 58.9, and the variation curve 151 of the FPS is relatively stable during operation. It should be noted that the FPS after 00:06 in fig. 15 is relatively stable, while a large valley occurs at 00:06, mainly because of other disturbing factors: for example, the game has just started and many other things need to be handled that are irrelevant to the image rendering process of the embodiment of the present application; therefore, the valley can be ignored.
As shown in fig. 16, on the middle-end machine, the average FPS is 59.6, and the variation curve 161 of the FPS is relatively smooth during operation (ignoring the valley at 00:06).
As shown in fig. 17, on the high-end machine, the average FPS is 59.6, and the variation curve 171 of the FPS is relatively smooth during operation (ignoring the valley at 00:06).
Based on the image rendering method provided by the embodiment of the application, when realizing intermittent vertex rendering, vertex data does not need to be copied and a high-end GPU is not needed; therefore, the method of the embodiment of the application has wider adaptability, and no rendering efficiency is lost by enabling/disabling rendering.
It can be understood that, in the embodiments of the present application, if content related to user information, for example the data to be rendered, the index data set, or the rendered image, involves data related to user information or business information, then when the embodiments of the present application are applied to a specific product or technology, user permission or consent needs to be obtained, and the collection, use and processing of the related data need to comply with the relevant laws, regulations and standards of the relevant countries and regions.
Continuing with the exemplary structure of the image rendering apparatus 354 implemented as a software module provided in the embodiments of the present application, in some embodiments, as shown in fig. 8, the image rendering apparatus 354 includes:
an obtaining module 3541, configured to obtain data to be rendered and an index data set; the data to be rendered corresponds to a vertex set;
a vertex eliminating module 3542, configured to perform vertex eliminating processing on a part of vertices in the vertex set based on the data to be rendered and the index data in the index data set, so as to obtain a vertex set to be rendered;
and the vertex rendering module 3543 is configured to perform vertex rendering on the vertices to be rendered in the vertex set to be rendered in sequence to obtain a rendered image.
In some embodiments, each index data in the set of index data has a mapping relationship with one vertex; the vertex elimination module is further to: acquiring vertex position information of each vertex in the vertex set; acquiring index data corresponding to each vertex from the index data set; based on the index data, carrying out scaling processing on the vertex position information of each vertex to obtain a scaling result; determining a vertex to be eliminated from the vertex set based on the scaling result of each vertex; and deleting the vertex to be eliminated in the vertex set to obtain the vertex set to be rendered.
In some embodiments, the vertex removal module is further to: multiplying the vertex position information of each vertex with the index data corresponding to the vertex to obtain a vertex position product; determining the vertex position product as a scaling result of the vertex.
In some embodiments, the apparatus further comprises: the first triangulation module is used for triangulating vertexes in the vertex set based on the vertex position information to obtain at least one initial triangle; wherein each of the initial triangles comprises three vertices; correspondingly, the vertex removal module is further configured to: and when the vertex position products corresponding to the three vertices of any initial triangle are all the same, determining the three vertices of the initial triangle as the vertices to be eliminated.
In some embodiments, the vertex removal module is further to: when the vertex position product is first type position data, determining a vertex corresponding to the vertex position product as the vertex to be eliminated; and when the vertex position product is the second type position data, determining the vertex corresponding to the vertex position product as the vertex to be rendered.
In some embodiments, each index data in the set of index data has a mapping relationship with one vertex; the vertex removal module is further to: acquiring index data corresponding to each vertex from the index data set; determining a vertex processing state corresponding to each vertex based on the index data; when the vertex processing state of any vertex is an elimination state, determining the vertex as a vertex to be eliminated; and deleting the vertex to be eliminated in the vertex set to obtain the vertex set to be rendered.
In some embodiments, the apparatus further comprises: the position information acquisition module is used for acquiring vertex position information of each vertex; the second triangulation module is used for triangulating vertexes in the vertex set based on the vertex position information to obtain at least one initial triangle; wherein each of the initial triangles comprises three vertices; the vertex removal module is further to: and when the index data corresponding to the three vertexes of any initial triangle are all preset type index data, determining the vertex processing states of the three vertexes of the initial triangle as an elimination state.
In some embodiments, the vertex removal module is further to: when the index data is first-type index data, determining a vertex processing state of a vertex having the mapping relation with the index data as an elimination state; when the index data is the second type index data, determining the vertex processing state of the vertex having the mapping relation with the index data as a non-elimination state.
In some embodiments, the vertex rendering module is further to: acquiring vertex position information of each vertex to be rendered; performing triangulation on the vertex to be rendered in the vertex set to be rendered based on the vertex position information to obtain at least one triangle to be rendered; and performing vertex rendering by taking each triangle to be rendered as a rendering unit to obtain the rendered image.
In some embodiments, the apparatus further comprises: the request acquisition module is used for acquiring an image rendering request, wherein the image rendering request comprises the data to be rendered and preset rendering demand information; the analysis module is used for analyzing the rendering requirement information to obtain index data corresponding to each vertex in the vertex set; and the integration module is used for integrating the index data corresponding to all the vertexes in the vertex set to obtain the index data set.
In some embodiments, the rendering requirement information includes processing instructions for each vertex; the parsing module is further configured to: initializing and setting the index data of each vertex to obtain initialized index data; analyzing the rendering requirement information to obtain a processing instruction of each vertex; when the processing instruction is a rendering instruction, updating the initialized index data of the vertex into second type index data; and when the processing instruction is a non-rendering instruction, updating the initialization index data of the vertex into first type index data.
It should be noted that the description of the apparatus in the embodiment of the present application is similar to the description of the method embodiment, and has similar beneficial effects to the method embodiment, and therefore, the description is not repeated. For technical details not disclosed in the embodiments of the apparatus, reference is made to the description of the embodiments of the method of the present application for understanding.
Embodiments of the present application provide a computer program product or computer program comprising executable instructions, which are computer instructions; the executable instructions are stored in a computer readable storage medium. When the processor of the image rendering device reads the executable instructions from the computer readable storage medium, and the processor executes the executable instructions, the image rendering device is caused to execute the method of the embodiment of the application.
Embodiments of the present application provide a storage medium having stored therein executable instructions, which when executed by a processor, will cause the processor to perform a method provided by embodiments of the present application, for example, the method as illustrated in fig. 9.
In some embodiments, the storage medium may be a computer-readable storage medium, such as a Ferroelectric Random Access Memory (FRAM), a Read Only Memory (ROM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a flash memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, for example in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). By way of example, executable instructions may be deployed to be executed on one computing device (which may be the image rendering device), or on multiple computing devices located at one site, or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (12)

1. A method of image rendering, the method comprising:
acquiring data to be rendered and an index data set; the data to be rendered corresponds to a vertex set; wherein each index data in the index data set has a mapping relation with one vertex;
acquiring vertex position information of each vertex in the vertex set; acquiring index data corresponding to each vertex from the index data set;
based on the index data, carrying out scaling processing on the vertex position information of each vertex to obtain a scaling result;
determining a vertex to be eliminated from the vertex set based on the scaling result of each vertex;
deleting vertexes to be eliminated in the vertex set to obtain a vertex set to be rendered;
acquiring vertex position information of each vertex to be rendered in the vertex set to be rendered;
performing triangulation on the vertex to be rendered in the vertex set to be rendered based on the vertex position information to obtain at least one triangle to be rendered;
and performing vertex rendering by taking each triangle to be rendered as a rendering unit to obtain a rendered image.
2. The method according to claim 1, wherein the scaling the vertex position information of each vertex based on the index data to obtain a scaling result comprises:
multiplying the vertex position information of each vertex with the index data corresponding to the vertex to obtain a vertex position product;
determining the vertex position product as a scaling result of the vertex.
3. The method of claim 2, further comprising:
performing triangulation on the vertexes in the vertex set based on the vertex position information to obtain at least one initial triangle; wherein each of the initial triangles comprises three vertices;
correspondingly, the determining a vertex to be eliminated from the vertex set based on the scaling result of each vertex includes:
and when the vertex position products corresponding to the three vertices of any initial triangle are all the same, determining the three vertices of the initial triangle as the vertices to be eliminated.
4. The method of claim 2, wherein the determining vertices to be eliminated from the set of vertices based on the scaling result for each vertex comprises:
when the vertex position product is first type position data, determining a vertex corresponding to the vertex position product as the vertex to be eliminated;
and when the vertex position product is the second type position data, determining the vertex corresponding to the vertex position product as the vertex to be rendered.
5. The method of claim 1, further comprising:
determining a vertex processing state corresponding to each vertex based on the index data;
when the vertex processing state of any vertex is an elimination state, the vertex is determined as the vertex to be eliminated.
6. The method of claim 5, further comprising:
acquiring vertex position information of each vertex;
performing triangulation on the vertexes in the vertex set based on the vertex position information to obtain at least one initial triangle; wherein each of the initial triangles comprises three vertices;
the determining a vertex processing state corresponding to each vertex based on the index data comprises:
and when the index data corresponding to the three vertexes of any initial triangle are all preset type index data, determining the vertex processing states of the three vertexes of the initial triangle as an elimination state.
7. The method of claim 5, wherein determining the vertex processing state corresponding to each vertex based on the index data comprises:
when the index data is first-type index data, determining a vertex processing state corresponding to a vertex having the mapping relation with the index data as an elimination state;
and when the index data is the second type index data, determining the vertex processing state corresponding to the vertex with the mapping relation with the index data as a non-elimination state.
8. The method of claim 1, further comprising:
acquiring an image rendering request, wherein the image rendering request comprises the data to be rendered and preset rendering demand information;
analyzing the rendering requirement information to obtain index data corresponding to each vertex in the vertex set;
and integrating the index data corresponding to all the vertexes in the vertex set to obtain the index data set.
9. The method of claim 8, wherein the rendering requirement information includes a processing instruction for each vertex; the analyzing the rendering requirement information to obtain index data corresponding to each vertex in the vertex set includes:
initializing the index data of each vertex to obtain initialized index data;
analyzing the rendering requirement information to obtain a processing instruction of each vertex;
when the processing instruction is a rendering instruction, updating the initialized index data of the vertex into second type index data;
and when the processing instruction is a non-rendering instruction, updating the initialization index data of the vertex into first type index data.
10. An image rendering apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring data to be rendered and an index data set; the data to be rendered corresponds to a vertex set; wherein each index data in the index data set has a mapping relation with one vertex;
the vertex eliminating module is used for acquiring vertex position information of each vertex in the vertex set; acquiring index data corresponding to each vertex from the index data set; based on the index data, carrying out scaling processing on the vertex position information of each vertex to obtain a scaling result; determining a vertex to be eliminated from the vertex set based on the scaling result of each vertex; deleting vertexes to be eliminated in the vertex set to obtain a vertex set to be rendered;
the vertex rendering module is used for acquiring vertex position information of each vertex to be rendered in the vertex set to be rendered; performing triangulation on the vertex to be rendered in the vertex set to be rendered based on the vertex position information to obtain at least one triangle to be rendered; and performing vertex rendering by taking each triangle to be rendered as a rendering unit to obtain a rendered image.
11. An image rendering apparatus, characterized by comprising:
a memory for storing executable instructions; a processor for implementing the image rendering method of any of claims 1 to 9 when executing executable instructions stored in the memory.
12. A computer-readable storage medium having stored thereon executable instructions for causing a processor to implement the image rendering method of any one of claims 1 to 9 when executing the executable instructions.
CN202210384063.6A 2022-04-13 2022-04-13 Image rendering method, device and equipment and storage medium Active CN114494024B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210384063.6A CN114494024B (en) 2022-04-13 2022-04-13 Image rendering method, device and equipment and storage medium
PCT/CN2023/078405 WO2023197762A1 (en) 2022-04-13 2023-02-27 Image rendering method and apparatus, electronic device, computer-readable storage medium, and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210384063.6A CN114494024B (en) 2022-04-13 2022-04-13 Image rendering method, device and equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114494024A CN114494024A (en) 2022-05-13
CN114494024B true CN114494024B (en) 2022-08-02

Family

ID=81488105

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210384063.6A Active CN114494024B (en) 2022-04-13 2022-04-13 Image rendering method, device and equipment and storage medium

Country Status (2)

Country Link
CN (1) CN114494024B (en)
WO (1) WO2023197762A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494024B (en) * 2022-04-13 2022-08-02 腾讯科技(深圳)有限公司 Image rendering method, device and equipment and storage medium
CN117576249B (en) * 2024-01-19 2024-04-02 弈芯科技(杭州)有限公司 Chip layout data processing method and device

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1255227A1 (en) * 2001-04-27 2002-11-06 STMicroelectronics Limited Vertices index processor
US7292239B1 (en) * 2004-08-06 2007-11-06 Nvidia Corporation Cull before attribute read
EP2616954B1 (en) * 2010-09-18 2021-03-31 Google LLC A method and mechanism for rendering graphics remotely
US9466115B2 (en) * 2013-03-15 2016-10-11 Nvidia Corporation Stencil then cover path rendering with shared edges
US9589388B1 (en) * 2013-07-10 2017-03-07 Thinci, Inc. Mechanism for minimal computation and power consumption for rendering synthetic 3D images, containing pixel overdraw and dynamically generated intermediate images
US9495767B2 (en) * 2014-05-15 2016-11-15 Google Inc. Indexed uniform styles for stroke rendering
CN108196835A (en) * 2018-01-29 2018-06-22 东北大学 Pel storage and the method rendered in a kind of game engine
CN109529342B (en) * 2018-11-27 2022-03-04 北京像素软件科技股份有限公司 Data rendering method and device
GB2592604B (en) * 2020-03-03 2023-10-18 Sony Interactive Entertainment Inc Image generation system and method
CN112419463B (en) * 2020-12-04 2024-01-30 Oppo广东移动通信有限公司 Model data processing method, device, equipment and readable storage medium
CN112933599B (en) * 2021-04-08 2022-07-26 腾讯科技(深圳)有限公司 Three-dimensional model rendering method, device, equipment and storage medium
CN114266874A (en) * 2021-12-17 2022-04-01 先临三维科技股份有限公司 Three-dimensional data generation method, device, equipment and storage medium
CN114494024B (en) * 2022-04-13 2022-08-02 腾讯科技(深圳)有限公司 Image rendering method, device and equipment and storage medium

Also Published As

Publication number Publication date
WO2023197762A1 (en) 2023-10-19
CN114494024A (en) 2022-05-13

Similar Documents

Publication Publication Date Title
US11344806B2 (en) Method for rendering game, and method, apparatus and device for generating game resource file
US20220253588A1 (en) Page processing method and related apparatus
US20230053462A1 (en) Image rendering method and apparatus, device, medium, and computer program product
CN114494024B (en) Image rendering method, device and equipment and storage medium
KR101130407B1 (en) Systems and Methods for Providing an Enhanced Graphics Pipeline
CA2560475C (en) Integration of three dimensional scene hierarchy into two dimensional compositing system
CN107154063A (en) The shape method to set up and device in image shows region
CN109377554B (en) Large three-dimensional model drawing method, device, system and storage medium
CN111462313A (en) Implementation method, device and terminal of fluff effect
KR101618381B1 (en) Shader-based extensions for a declarative presentation framework
EP3989175A1 (en) Illumination probe generation method, apparatus, storage medium, and computer device
WO2014117559A1 (en) 3d-rendering method and device for logical window
CN111459501A (en) SVG-based Web configuration picture storage and display system, method and medium
US20230343021A1 (en) Visible element determination method and apparatus, storage medium, and electronic device
CN113457161B (en) Picture display method, information generation method, device, equipment and storage medium
CN109448123B (en) Model control method and device, storage medium and electronic equipment
CN112783660B (en) Resource processing method and device in virtual scene and electronic equipment
CN110334027B (en) Game picture testing method and device
US8203567B2 (en) Graphics processing method and apparatus implementing window system
CN113192173B (en) Image processing method and device of three-dimensional scene and electronic equipment
CN113419806B (en) Image processing method, device, computer equipment and storage medium
KR102617789B1 (en) Picture processing methods and devices, storage media and electronic devices
CN111243069B (en) Scene switching method and system of Unity3D engine
CN113888684A (en) Method and apparatus for graphics rendering and computer storage medium
CN110136235B (en) Three-dimensional BIM model shell extraction method and device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40070941

Country of ref document: HK