CN116630516A - 3D characteristic-based 2D rendering ordering method, device, equipment and medium - Google Patents
- Publication number: CN116630516A (application CN202310680150.0A)
- Authority
- CN
- China
- Prior art keywords: pixel unit, rendering, current screen, same coordinate, axis
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T 15/205 — Image-based rendering (G06T 15/00 3D [Three Dimensional] image rendering; G06T 15/10 Geometric effects; G06T 15/20 Perspective computation)
- G06T 1/60 — Memory management (G06T 1/00 General purpose image data processing)
- G06T 13/20 — 3D [Three Dimensional] animation (G06T 13/00 Animation)
- G06T 7/50 — Depth or shape recovery (G06T 7/00 Image analysis)
- G06T 7/70 — Determining position or orientation of objects or cameras (G06T 7/00 Image analysis)
- Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention relates to the technical field of animation rendering, and in particular to a 2D rendering ordering method, device, equipment and medium based on 3D characteristics. The method specifically comprises the following steps: obtaining a depth value of each first pixel unit on the same coordinate in the current screen through a depth test, determining the first pixel unit closest to the observer in the current screen according to the depth values, and performing animation rendering on the first pixel unit closest to the observer; and determining the distance value from the camera of each second pixel unit on the same coordinate in the current screen, and sorting the second pixel units on the same coordinate in the current screen in descending order of distance value to obtain a rendering ordering result. By adopting different rendering ordering methods for opaque pixel units and semitransparent pixel units, the invention achieves an efficient picture rendering order and avoids the performance overhead of traditional sorting algorithms.
Description
Technical Field
The present invention relates to the field of animation rendering technologies, and in particular, to a 3D characteristic-based 2D rendering ordering method, apparatus, device, and medium.
Background
In a conventional 2D game, the rendering order determines the front-to-back order of the pictures, and when a large number of pictures exist, the sorting process becomes a performance bottleneck. Introducing full 3D characteristics makes game development more complex: additional work such as 3D modeling, illumination and projection is required, which increases development difficulty and time cost; the amount of computation and processing in graphics rendering grows, placing higher performance requirements on the computer; and in resource-limited environments such as mobile devices in particular, this may cause performance degradation and increased battery consumption.
Disclosure of Invention
The invention aims to provide a 2D rendering ordering method, device, equipment and medium based on 3D characteristics, which achieve an efficient picture rendering order by applying different rendering ordering methods to opaque pixel units and semitransparent pixel units, avoiding the performance overhead of traditional sorting algorithms, so as to solve at least one of the problems in the prior art.
The invention provides a 2D rendering ordering method based on 3D characteristics, which specifically comprises the following steps:
acquiring rendering information of each pixel unit in a current screen, and determining a first pixel unit set and a second pixel unit set according to the rendering information, wherein the first pixel unit set comprises a plurality of first pixel units, the first pixel units are opaque pixel units, the second pixel unit set comprises a plurality of second pixel units, and the second pixel units are semitransparent pixel units;
Obtaining a depth value of each first pixel unit on the same coordinate in a current screen according to a depth test, determining a first pixel unit closest to an observer in the current screen according to the depth value, and performing animation rendering on the first pixel unit closest to the observer in the current screen;
determining the distance value from the camera of each second pixel unit on the same coordinate in the current screen, sorting the second pixel units on the same coordinate in the current screen in descending order of distance value to obtain a rendering ordering result, and performing animation rendering on each second pixel unit in the rendering ordering result one by one based on a blend mode.
Further, the obtaining the depth value of each first pixel unit on the same coordinate in the current screen according to the depth test, determining the first pixel unit closest to the observer in the current screen according to the depth value, and performing animation rendering on the first pixel unit closest to the observer in the current screen specifically includes:
creating a depth buffer and a color buffer according to the size of the current screen, wherein the depth buffer is used for storing the depth value of each coordinate in the current screen, and the color buffer is used for storing the color value of each coordinate in the current screen;
calculating the depth value of each first pixel unit on the same coordinate, and comparing the depth value of each first pixel unit on the same coordinate with the depth value at the corresponding coordinate of the depth buffer;
if the depth value of the current first pixel unit is smaller than the depth value at the corresponding coordinate of the depth buffer, taking the depth value of the current first pixel unit as the new depth value at the corresponding coordinate of the depth buffer, simultaneously taking the color value of the current first pixel unit as the new color value at the corresponding coordinate of the color buffer, and performing animation rendering;
otherwise, continuing to compare the next first pixel unit with the depth value at the corresponding coordinate of the depth buffer, until the depth values of all first pixel units on the same coordinate have been compared with the depth value at the corresponding coordinate of the depth buffer.
Further, the calculating the depth value of each first pixel unit on the same coordinate specifically includes:
inputting first vertex data of each first pixel unit on the same coordinate into a rendering pipeline;
setting a vertex shader in the rendering pipeline, wherein the vertex shader transforms first vertex data of each first pixel unit based on a model transformation matrix, a view transformation matrix and a projection transformation matrix to obtain second vertex data;
clipping each first pixel unit beyond the clipping space according to the second vertex data, and obtaining normalized device coordinates in the clipping space through perspective division;
mapping the normalized device coordinates to pixel coordinates of the current screen, to obtain the pixel coordinate of each first pixel unit on the same coordinate;
and obtaining the depth value of each first pixel unit on the same coordinate through the distance value between the pixel coordinate of each first pixel unit on the same coordinate and the camera.
Furthermore, the transforming the vertex data of each first pixel unit based on the model transformation matrix, the view transformation matrix and the projection transformation matrix to obtain second vertex data specifically includes:
transforming the first vertex data of each first pixel unit on the same coordinate from the model space to the world space based on the model transformation matrix to obtain third vertex data;
transforming third vertex data of each first pixel unit on the same coordinate from world space to observation space based on the view transformation matrix to obtain fourth vertex data;
fourth vertex data of each first pixel unit on the same coordinate is transformed from the observation space to the clipping space based on the projective transformation matrix, and second vertex data is obtained.
Further, the projective transformation matrix is a perspective projection matrix; the transforming the fourth vertex data of each first pixel unit on the same coordinate from the observation space to the clipping space based on the projective transformation matrix to obtain second vertex data specifically includes:
obtaining the view angle, the aspect ratio, the near clipping plane distance and the far clipping plane distance of the observation space;
obtaining a Y-axis scaling factor, the Y-axis scaling factor satisfying Y0 = 1/tan(radians(F/2)), where Y0 is the Y-axis scaling factor and F is the view angle;
obtaining an X-axis scaling factor, the X-axis scaling factor satisfying X0 = Y0/A, where X0 is the X-axis scaling factor and A is the aspect ratio;
obtaining a Z-axis scaling factor and a Z-axis offset, the Z-axis scaling factor satisfying Z1 = -(P1+P2)/(P1-P2) and the Z-axis offset satisfying Z2 = -(2*P1*P2)/(P1-P2), where Z1 is the Z-axis scaling factor, Z2 is the Z-axis offset, P1 is the far clipping plane distance and P2 is the near clipping plane distance;
and taking the Y-axis scaling factor, the X-axis scaling factor, the Z-axis scaling factor and the Z-axis offset as entries of the perspective projection matrix, and transforming the fourth vertex data of each first pixel unit on the same coordinate from the observation space to the clipping space according to the perspective projection matrix, to obtain the second vertex data.
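The perspective factors above can be sketched as a small helper. This is a minimal illustration of the stated formulas, not code from the patent; the function name and the row-major 4x4 layout are assumptions, with F the view angle in degrees, A the aspect ratio, P1 the far clipping plane distance and P2 the near clipping plane distance:

```python
import math

def perspective_matrix(fov_deg, aspect, near, far):
    """Assemble the perspective projection matrix from the stated factors.

    fov_deg -> F (view angle), aspect -> A, far -> P1, near -> P2.
    """
    y0 = 1.0 / math.tan(math.radians(fov_deg / 2.0))  # Y-axis scaling factor Y0
    x0 = y0 / aspect                                  # X-axis scaling factor X0
    z1 = -(far + near) / (far - near)                 # Z-axis scaling factor Z1
    z2 = -(2.0 * far * near) / (far - near)           # Z-axis offset Z2
    # Row-major 4x4; the -1 in the last row copies -z_view into W for the
    # later perspective division.
    return [
        [x0,  0.0,  0.0, 0.0],
        [0.0, y0,   0.0, 0.0],
        [0.0, 0.0,  z1,  z2],
        [0.0, 0.0, -1.0, 0.0],
    ]
```

With a 90-degree view angle and unit aspect ratio, Y0 and X0 both come out to 1, and view-space depths between the near and far clipping planes map into the [-1, 1] range after division by W.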
Further, the projective transformation matrix is an orthogonal projection matrix; the transforming the fourth vertex data of each first pixel unit on the same coordinate from the observation space to the clipping space based on the projective transformation matrix to obtain second vertex data specifically includes:
setting a left boundary, a right boundary, a bottom boundary and a top boundary of the clipping space, and acquiring the near clipping plane distance and the far clipping plane distance of the observation space;
obtaining a Y-axis scaling factor and a Y-axis offset, the Y-axis scaling factor satisfying Y1 = 2/(T-B) and the Y-axis offset satisfying Y2 = -(T+B)/(T-B), where Y1 is the Y-axis scaling factor, Y2 is the Y-axis offset, T is the top boundary and B is the bottom boundary;
obtaining an X-axis scaling factor and an X-axis offset, the X-axis scaling factor satisfying X1 = 2/(R-L) and the X-axis offset satisfying X2 = -(R+L)/(R-L), where X1 is the X-axis scaling factor, X2 is the X-axis offset, R is the right boundary and L is the left boundary;
obtaining a Z-axis scaling factor and a Z-axis offset, the Z-axis scaling factor satisfying Z3 = -2/(P1-P2) and the Z-axis offset satisfying Z4 = -(P1+P2)/(P1-P2), where Z3 is the Z-axis scaling factor, Z4 is the Z-axis offset, P1 is the far clipping plane distance and P2 is the near clipping plane distance;
and taking the Y-axis scaling factor, the Y-axis offset, the X-axis scaling factor, the X-axis offset, the Z-axis scaling factor and the Z-axis offset as entries of the orthogonal projection matrix, and transforming the fourth vertex data of each first pixel unit on the same coordinate from the observation space to the clipping space according to the orthogonal projection matrix, to obtain the second vertex data.
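The orthogonal-projection factors admit the same kind of sketch. Again the function name and the row-major layout are assumptions, with T/B/R/L the top, bottom, right and left boundaries and P1/P2 the far and near clipping plane distances:

```python
def orthographic_matrix(left, right, bottom, top, near, far):
    """Assemble the orthogonal projection matrix from the stated factors."""
    x1 = 2.0 / (right - left)              # X-axis scaling factor X1
    x2 = -(right + left) / (right - left)  # X-axis offset X2
    y1 = 2.0 / (top - bottom)              # Y-axis scaling factor Y1
    y2 = -(top + bottom) / (top - bottom)  # Y-axis offset Y2
    z3 = -2.0 / (far - near)               # Z-axis scaling factor Z3
    z4 = -(far + near) / (far - near)      # Z-axis offset Z4
    # The last row leaves W at 1, so no perspective division is needed.
    return [
        [x1,  0.0, 0.0, x2],
        [0.0, y1,  0.0, y2],
        [0.0, 0.0, z3,  z4],
        [0.0, 0.0, 0.0, 1.0],
    ]
```

For a symmetric unit cube with near = 1 and far = 3, view-space depths -1 and -3 land on -1 and +1 respectively, confirming the Z factors map the clipping volume into the [-1, 1] range.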
Further, the determining the distance value from the camera of each second pixel unit on the same coordinate in the current screen, sorting the second pixel units on the same coordinate in the current screen in descending order of distance value to obtain a rendering ordering result, and performing animation rendering on each second pixel unit in the rendering ordering result one by one based on a blend mode specifically includes:
acquiring the position of a camera, and acquiring a distance value between each second pixel unit on the same coordinate in a current screen and the camera according to Euclidean distance between each second pixel unit on the same coordinate in the current screen and the camera;
according to the distance value between each second pixel unit on the same coordinate in the current screen and the camera, sorting the second pixel units on the same coordinate in the current screen through a quicksort or merge sort algorithm, to obtain a rendering ordering result;
setting a blend mode through the glBlendFunc function of the OpenGL library;
after enabling the blend mode through the glEnable function of the OpenGL library, performing animation rendering on each second pixel unit in the rendering ordering result one by one;
and after each second pixel unit in the rendering ordering result has been rendered one by one, disabling the blend mode through the glDisable function of the OpenGL library.
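The translucent pass above can be sketched without a live GL context. This is an illustrative stand-in, not the patent's implementation: `sort_back_to_front` performs the descending Euclidean-distance sort, and `blend_over` mimics the per-pixel effect of glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), a common but here assumed choice of blend factors; the surrounding glEnable/glDisable calls are left as comments.

```python
import math

def sort_back_to_front(units, camera_pos):
    """Sort semitransparent pixel units by descending distance to the camera.

    Python's sorted() is an O(n log n) stable sort, in the same class as the
    quicksort / merge sort algorithms named by the method.
    """
    return sorted(units,
                  key=lambda u: math.dist(u["pos"], camera_pos),  # Euclidean distance
                  reverse=True)                                   # farthest first

def blend_over(dst_rgb, src_rgb, src_alpha):
    """SRC_ALPHA / ONE_MINUS_SRC_ALPHA blending for one pixel."""
    return tuple(s * src_alpha + d * (1.0 - src_alpha)
                 for s, d in zip(src_rgb, dst_rgb))

# In a real renderer the loop would be bracketed by (PyOpenGL names):
#   glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
#   ... draw each unit of sort_back_to_front(...) in order ...
#   glDisable(GL_BLEND)
```

Drawing back to front is what makes the "over" blending accumulate correctly: each nearer unit is composited on top of everything already behind it.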
The invention also provides a 2D rendering ordering device based on 3D characteristics, which specifically comprises:
an acquisition and classification module, configured to acquire rendering information of each pixel unit in the current screen and determine a first pixel unit set and a second pixel unit set according to the rendering information, wherein the first pixel unit set comprises a plurality of first pixel units, which are opaque pixel units, and the second pixel unit set comprises a plurality of second pixel units, which are semitransparent pixel units;
the first rendering ordering module is used for obtaining the depth value of each first pixel unit on the same coordinate in the current screen according to the depth test, determining the first pixel unit closest to the observer in the current screen according to the depth value, and performing animation rendering on the first pixel unit closest to the observer in the current screen;
and a second rendering ordering module, configured to determine the distance value from the camera of each second pixel unit on the same coordinate in the current screen, sort the second pixel units on the same coordinate in the current screen in descending order of distance value to obtain a rendering ordering result, and perform animation rendering on each second pixel unit in the rendering ordering result one by one based on a blend mode.
The present invention also provides a computer device, comprising a memory, a processor, and a computer program stored on the memory which, when executed by the processor, implements the 3D characteristic-based 2D rendering ordering method of any one of the above.
The invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the 3D characteristic-based 2D rendering ordering method of any one of the above.
Compared with the prior art, the invention has at least one of the following technical effects:
1. By adopting different rendering ordering methods for opaque pixel units and semitransparent pixel units, an efficient picture rendering order is achieved while the performance overhead of traditional sorting algorithms is avoided.
2. The scheme is lightweight: no complex 3D modeling or illumination calculation is needed; only depth information and rendering order are used for optimization, reducing development complexity and resource occupation.
3. Compatibility is strong: the method is compatible with existing 2D game engines and rendering pipelines, and no large-scale modification of the game engine is needed; developers can apply the scheme directly to an existing game framework, improving game performance without affecting the existing development flow and code base.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a 2D rendering ordering method based on 3D characteristics according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a 2D rendering ordering apparatus based on 3D characteristics according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in the present description and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to a determination" or "in response to detection". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims, are used for distinguishing between descriptions and not necessarily for indicating or implying a relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
Referring to fig. 1, an embodiment of the present invention provides a 2D rendering ordering method based on 3D characteristics, where the method specifically includes:
s101: acquiring rendering information of each pixel unit in a current screen, and determining a first pixel unit set and a second pixel unit set according to the rendering information, wherein the first pixel unit set comprises a plurality of first pixel units, the first pixel units are opaque pixel units, the second pixel unit set comprises a plurality of second pixel units, and the second pixel units are semitransparent pixel units;
S102: obtaining a depth value of each first pixel unit on the same coordinate in the current screen according to the depth test, determining the first pixel unit closest to the observer in the current screen according to the depth values, and performing animation rendering on the first pixel unit closest to the observer in the current screen.
In this embodiment, traditional 2D rendering calculates the drawing order of all objects through hierarchical sorting. Here, objects are divided into smallest pixel units, and the pixel units are divided into two types, opaque and semitransparent. Opaque pixel units disregard drawing order: their occlusion relationship is realized through the depth test in the 3D rendering pipeline. Semitransparent pixel units calculate their drawing order by sorting on depth (distance from the camera). This reduces the number of sorted objects and thereby the complexity of the sorting algorithm.
In some embodiments, the obtaining a depth value of each first pixel unit on the same coordinate in the current screen according to the depth test, determining a first pixel unit closest to the observer in the current screen according to the depth value, and performing animation rendering on the first pixel unit closest to the observer in the current screen specifically includes:
creating a depth buffer and a color buffer according to the size of the current screen, wherein the depth buffer is used for storing the depth value of each coordinate in the current screen, and the color buffer is used for storing the color value of each coordinate in the current screen;
calculating the depth value of each first pixel unit on the same coordinate, and comparing the depth value of each first pixel unit on the same coordinate with the depth value at the corresponding coordinate of the depth buffer;
if the depth value of the current first pixel unit is smaller than the depth value at the corresponding coordinate of the depth buffer, taking the depth value of the current first pixel unit as the new depth value at the corresponding coordinate of the depth buffer, simultaneously taking the color value of the current first pixel unit as the new color value at the corresponding coordinate of the color buffer, and performing animation rendering;
otherwise, continuing to compare the next first pixel unit with the depth value at the corresponding coordinate of the depth buffer, until the depth values of all first pixel units on the same coordinate have been compared with the depth value at the corresponding coordinate of the depth buffer.
In this embodiment, a depth test (Depth Test) is performed in the pixel processing stage of the rendering pipeline; based on a comparison of a pixel's depth value with the depth value at the corresponding location in the depth buffer, it determines whether the pixel should be rendered. In the initialization phase of the graphics rendering pipeline, a depth buffer and a color buffer are created: the depth buffer is typically a two-dimensional single-channel array storing one depth value per pixel location, and the color buffer, similar in layout, stores one color value per pixel location; both have the same resolution.
When the depth buffer is created, an initial depth value is set for every entry. This is typically a large value representing infinity: most often the maximum positive floating-point number (e.g. float.MaxValue for single-precision floats), or a specific value such as 1.0. The initial value ensures that all pixels are initially considered unrendered, since any actual depth value should be smaller than it. Thus, when the rendering pipeline begins processing fragments and performing depth tests, the first pixel unit at each coordinate passes the depth test and its depth value is written into the depth buffer.
When the rendering pipeline processes one pixel unit, the depth value of the current pixel unit is compared to the depth value stored at the corresponding pixel location in the depth buffer.
If the depth value of the current pixel unit is less than the depth value in the depth buffer, this indicates that the pixel unit is closer to the viewer and should be rendered. In this case, the depth value stored at the corresponding pixel location in the depth buffer is updated according to the depth value of the current pixel unit for subsequent depth test comparison.
If the depth value of the current pixel unit is greater than or equal to the depth value in the depth buffer, the pixel unit lies behind a previously rendered pixel unit; it is discarded and no rendering is performed.
Through this depth comparison process, it is ensured that farther pixel units do not overwrite nearer ones, producing the correct occlusion relationship. The result of the depth test determines whether each pixel unit is rendered, ensuring that objects are drawn in the correct depth order.
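The comparison loop described above can be sketched as follows. This is an illustrative model, not the patent's code: fragments are assumed to be (x, y, depth, color) tuples, and the buffers are plain Python lists initialised to the "infinite" depth value of 1.0 discussed earlier.

```python
def depth_test_render(fragments, width, height, max_depth=1.0):
    """Apply a less-than depth test to opaque fragments.

    A fragment passes only if it is closer to the viewer than whatever was
    previously written at its pixel; otherwise it is discarded.
    """
    depth_buf = [[max_depth] * width for _ in range(height)]  # initial "infinity"
    color_buf = [[None] * width for _ in range(height)]       # unrendered pixels
    for x, y, depth, color in fragments:
        if depth < depth_buf[y][x]:   # closer than the stored depth: passes
            depth_buf[y][x] = depth   # update the depth buffer
            color_buf[y][x] = color   # update the color buffer
        # greater-or-equal: occluded, discarded without rendering
    return depth_buf, color_buf
```

Note that the result is order-independent: the nearest fragment at each coordinate wins regardless of submission order, which is why opaque pixel units need no sorting.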
In some embodiments, the calculating the depth value of each first pixel unit on the same coordinate specifically includes:
inputting first vertex data of each first pixel unit on the same coordinate into a rendering pipeline;
Setting a vertex shader in the rendering pipeline, wherein the vertex shader transforms first vertex data of each first pixel unit based on a model transformation matrix, a view transformation matrix and a projection transformation matrix to obtain second vertex data;
clipping each first pixel unit beyond the clipping space according to the second vertex data, and obtaining normalized device coordinates in the clipping space through perspective division;
mapping the normalized device coordinates to pixel coordinates of the current screen, to obtain the pixel coordinate of each first pixel unit on the same coordinate;
and obtaining the depth value of each first pixel unit on the same coordinate through the distance value between the pixel coordinate of each first pixel unit on the same coordinate and the camera.
In this embodiment, in the rendering pipeline, the conversion of the pixel's position into coordinates of the clipping space or viewing space is accomplished by vertex processing and transformation stages, as follows:
first, vertex Input (Vertex Input): vertex data for the model is input into the rendering pipeline.
Second, Vertex Shader (Vertex Shader): each vertex is processed and transformed. In the vertex shader, a series of transformation operations such as model transformation, view transformation and projection transformation can be performed; these transform the vertex coordinates from model space to clipping space or viewing space. The model transformation (Model Transformation) moves the vertex position from model space to world space or another space; the view transformation (View Transformation) moves the vertex position from world space to viewing space, i.e. the coordinate system of the camera; and the projection transformation (Projection Transformation) moves the vertex position from viewing space to clipping space. In the projection transformation, perspective projection or orthographic projection is typically used to define the range of the clipping space and the perspective effect.
Third, Clipping: vertices are clipped against the clip space, and vertices outside the clip-space range are discarded. The clip-space coordinates of each vertex are checked to judge whether they fall outside the clip space. If a vertex lies inside the clip space, it continues to participate in the subsequent rendering process; if it lies outside, it is clipped away and takes no part in subsequent rendering.
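As an illustrative aside (not part of the claimed method), the clipping test can be sketched in Python: a vertex with homogeneous clip-space coordinates (x, y, z, w) lies inside the canonical view volume exactly when each of x, y and z satisfies -w <= c <= w. The function name is an assumption for illustration.

```python
def inside_clip_space(x, y, z, w):
    """Return True if a clip-space vertex lies inside the canonical view volume.

    A vertex survives clipping when every component satisfies -w <= c <= w.
    """
    return all(-w <= c <= w for c in (x, y, z))
```

A full clipper also splits primitives that straddle the boundary; this sketch only classifies individual vertices.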
Fourth, Perspective Division: the coordinates in clip space are perspective-divided and converted into normalized device coordinates (Normalized Device Coordinates, NDC). Perspective division is performed by dividing by the W component of the vertex's homogeneous coordinates. Normalized device coordinates form a standardized coordinate space whose components range over [-1, 1]; the origin lies at the center of the screen, with +X pointing right and +Y pointing up.
In perspective division, the clip-space coordinates of a vertex are represented as a vec4, i.e. vec4(x, y, z, w), and each coordinate is divided by the W component: x' = x/w, y' = y/w, z' = z/w. This converts the clip-space coordinates into normalized device coordinates (NDC), where x', y' and z' each typically lie in [-1, 1]. Note that division by the W component applies to perspective projection; under orthographic projection W is typically 1, so the division can be skipped and the vertex's X, Y and Z components used directly. The result of perspective division is passed to later stages of the rendering pipeline, such as the rasterization stage, for screen mapping.
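The perspective division just described amounts to a three-line helper; the sketch below is illustrative only, with a hypothetical function name.

```python
def perspective_divide(clip):
    """Convert homogeneous clip-space coordinates (x, y, z, w) to NDC (x', y', z')."""
    x, y, z, w = clip
    return (x / w, y / w, z / w)
```

For an orthographic projection w is typically 1, so the division leaves the coordinates unchanged.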
Fifth, Screen Mapping: the normalized device coordinates are mapped to pixel coordinates in screen space. This step typically involves scaling and translating the coordinates to fit the actual size of the screen.
By the rasterization stage, the X and Y components of the normalized device coordinates (NDC) lie in the range [-1, 1]; in OpenGL the Z component likewise lies in [-1, 1], and the depth-range transform then maps it to [0, 1]. Finally, the rendering pipeline applies the viewport transform, scaling and translating the NDC coordinates into screen-space pixel coordinates to match the actual screen size and position.
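A minimal sketch of this viewport transform, assuming a top-left screen origin (so Y is flipped) and the default glDepthRange(0, 1); the function name and these conventions are illustrative assumptions:

```python
def ndc_to_screen(ndc, width, height):
    """Map NDC (x, y, z), each in [-1, 1], to pixel coordinates and a [0, 1] depth."""
    x, y, z = ndc
    sx = (x + 1.0) * 0.5 * width           # [-1, 1] -> [0, width]
    sy = (1.0 - (y + 1.0) * 0.5) * height  # flip Y: screen origin at top-left
    depth = (z + 1.0) * 0.5                # default glDepthRange(0, 1)
    return sx, sy, depth
```

The center of NDC space therefore lands at the center of the screen with depth 0.5.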
In this process, the depth value of each vertex is also calculated and passed to the fragment processing stage. After transformation and clipping, each resulting pixel unit carries its screen-space coordinates and a depth value. This depth value is used in the subsequent depth test to decide whether the pixel unit should be drawn and its depth written into the depth buffer.
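The depth test this value feeds into can be sketched as a CPU-side model of GL_LESS behavior (an illustration, not the patent's implementation; the dictionary-backed buffers are an assumption):

```python
def depth_test(fragments):
    """Keep, per pixel, the fragment with the smallest depth (GL_LESS semantics).

    `fragments` is an iterable of ((px, py), depth, color) tuples.
    Returns the resulting depth buffer and color buffer as dicts keyed by pixel.
    """
    far = float("inf")
    depth_buf = {}
    color_buf = {}
    for (px, py), depth, color in fragments:
        if depth < depth_buf.get((px, py), far):  # nearer fragment wins
            depth_buf[(px, py)] = depth
            color_buf[(px, py)] = color
    return depth_buf, color_buf
```

A real depth buffer is a fixed-size array cleared to the far value each frame; the dict here just keeps the sketch small.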
In some embodiments, transforming the vertex data of each first pixel unit based on the model transformation matrix, the view transformation matrix and the projective transformation matrix to obtain second vertex data specifically includes:
transforming the first vertex data of each first pixel unit on the same coordinate from the model space to the world space based on the model transformation matrix to obtain third vertex data;
Transforming third vertex data of each first pixel unit on the same coordinate from world space to observation space based on the view transformation matrix to obtain fourth vertex data;
fourth vertex data of each first pixel unit on the same coordinate is transformed from the observation space to the clipping space based on the projective transformation matrix, and second vertex data is obtained.
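The three-step chain above composes as clip = P * (V * (M * vertex)); a pure-Python sketch with row-major 4x4 matrices (the function names are illustrative):

```python
def mat_vec(m, v):
    """Multiply a row-major 4x4 matrix by a length-4 column vector."""
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

def model_view_projection(model, view, projection, vertex):
    """Apply model, view and projection transforms in vertex-shader order."""
    world = mat_vec(model, vertex)   # model space  -> world space
    eye = mat_vec(view, world)       # world space  -> view space
    return mat_vec(projection, eye)  # view space   -> clip space
```

With all three matrices set to the identity, a vertex passes through unchanged, which makes the composition easy to sanity-check.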
In some embodiments, the projective transformation matrix is a perspective projection matrix; the transforming the fourth vertex data of each first pixel unit on the same coordinate from the observation space to the clipping space based on the projective transformation matrix to obtain second vertex data specifically includes:
obtaining the view angle, the aspect ratio, the near clipping surface distance and the far clipping surface distance of the observation space;
obtaining a Y-axis scaling factor, the Y-axis scaling factor satisfying Y0 = 1/tan(radians(F/2)), where Y0 is the Y-axis scaling factor and F is the view angle;
obtaining an X-axis scaling factor, the X-axis scaling factor satisfying X0 = Y0/A, where X0 is the X-axis scaling factor and A is the aspect ratio;
obtaining a Z-axis scaling factor and a Z-axis offset, the Z-axis scaling factor satisfying Z1 = -(P1 + P2)/(P1 - P2) and the Z-axis offset satisfying Z2 = -(2*P1*P2)/(P1 - P2), where Z1 is the Z-axis scaling factor, Z2 is the Z-axis offset, P1 is the far clipping plane distance and P2 is the near clipping plane distance;
and taking the Y-axis scaling factor, the X-axis scaling factor, the Z-axis scaling factor and the Z-axis offset as an array in the perspective projection matrix, and transforming fourth vertex data of each first pixel unit on the same coordinate from an observation space to a clipping space according to the perspective projection matrix to obtain second vertex data.
In this embodiment, a 4x4 matrix initialized to 0 may be created as the perspective projection matrix, with the X-axis scaling factor placed at row 1, column 1, the Y-axis scaling factor at row 2, column 2, the Z-axis scaling factor at row 3, column 3, the Z-axis offset at row 3, column 4, and -1 at row 4, column 3 (so that the clip-space W component receives the negated view-space Z); the fourth vertex data of each first pixel unit on the same coordinate is then transformed from view space to clip space through the perspective projection matrix.
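Under the stated factors, a possible construction of the perspective projection matrix is sketched below (row-major; the -1 entry that copies view-space -Z into the W component is a standard OpenGL-style assumption the sketch adds):

```python
import math

def perspective_matrix(fov_deg, aspect, near, far):
    """Row-major 4x4 perspective projection built from the factors in the text.

    Y0 = 1/tan(radians(F/2)), X0 = Y0/A,
    Z1 = -(far + near)/(far - near), Z2 = -(2*far*near)/(far - near).
    """
    y0 = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    x0 = y0 / aspect
    z1 = -(far + near) / (far - near)
    z2 = -(2.0 * far * near) / (far - near)
    return [
        [x0, 0.0, 0.0, 0.0],
        [0.0, y0, 0.0, 0.0],
        [0.0, 0.0, z1, z2],
        [0.0, 0.0, -1.0, 0.0],  # W receives -z_view for perspective division
    ]
```

A 90-degree field of view at aspect 1 gives unit X and Y scaling, a quick way to check the construction.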
In some embodiments, the projective transformation matrix is an orthogonal projection matrix; the transforming the fourth vertex data of each first pixel unit on the same coordinate from the observation space to the clipping space based on the projective transformation matrix to obtain second vertex data specifically includes:
Setting a left boundary, a right boundary, a bottom boundary and a top boundary of the clipping space, and acquiring the near clipping plane distance and the far clipping plane distance of the view space;
obtaining a Y-axis scaling factor and a Y-axis offset, the Y-axis scaling factor satisfying Y1 = 2/(T - B) and the Y-axis offset satisfying Y2 = -(T + B)/(T - B), where Y1 is the Y-axis scaling factor, Y2 is the Y-axis offset, T is the top boundary and B is the bottom boundary;
obtaining an X-axis scaling factor and an X-axis offset, the X-axis scaling factor satisfying X1 = 2/(R - L) and the X-axis offset satisfying X2 = -(R + L)/(R - L), where X1 is the X-axis scaling factor, X2 is the X-axis offset, R is the right boundary and L is the left boundary;
obtaining a Z-axis scaling factor and a Z-axis offset, the Z-axis scaling factor satisfying Z3 = -2/(P1 - P2) and the Z-axis offset satisfying Z4 = -(P1 + P2)/(P1 - P2), where Z3 is the Z-axis scaling factor, Z4 is the Z-axis offset, P1 is the far clipping plane distance and P2 is the near clipping plane distance;
and taking the Y-axis scaling factor, the Y-axis offset, the X-axis scaling factor, the X-axis offset, the Z-axis scaling factor and the Z-axis offset as an array in the orthogonal projection matrix, and transforming fourth vertex data of each first pixel unit on the same coordinate from an observation space to a clipping space according to the orthogonal projection matrix to obtain second vertex data.
In this embodiment, a 4x4 matrix initialized to 0 may be created as the orthogonal projection matrix, with the X-axis scaling factor at row 1, column 1, the Y-axis scaling factor at row 2, column 2, the Z-axis scaling factor at row 3, column 3, the X-axis offset at row 4, column 1, the Y-axis offset at row 4, column 2, the Z-axis offset at row 4, column 3, and 1 at row 4, column 4 (so the W component remains 1 under orthographic projection); the fourth vertex data of each first pixel unit on the same coordinate is then transformed from view space to clip space through the orthogonal projection matrix.
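A corresponding sketch for the orthogonal projection matrix. It is written row-major with the offsets in the last column; the text places them in the fourth row, which corresponds to the transposed (column-major) layout, so this arrangement is a convention assumption:

```python
def orthographic_matrix(left, right, bottom, top, near, far):
    """Row-major orthographic projection using the factors from the text.

    Scale factors X1, Y1, Z3 sit on the diagonal; offsets X2, Y2, Z4 in the
    last column; the bottom-right 1 keeps W equal to 1 (no perspective divide).
    """
    x1 = 2.0 / (right - left)
    x2 = -(right + left) / (right - left)
    y1 = 2.0 / (top - bottom)
    y2 = -(top + bottom) / (top - bottom)
    z3 = -2.0 / (far - near)
    z4 = -(far + near) / (far - near)
    return [
        [x1, 0.0, 0.0, x2],
        [0.0, y1, 0.0, y2],
        [0.0, 0.0, z3, z4],
        [0.0, 0.0, 0.0, 1.0],
    ]
```

A symmetric unit cube maps to NDC with only a Z-axis flip, which makes a convenient sanity check.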
S102: determining the distance value between each second pixel unit on the same coordinate in the current screen and the camera, sorting the second pixel units on the same coordinate in the current screen from largest to smallest distance value to obtain a rendering ordering result, and performing animation rendering on each second pixel unit in the rendering ordering result one by one based on the blending mode.
In some embodiments, determining the distance value between each second pixel unit on the same coordinate in the current screen and the camera, sorting the second pixel units on the same coordinate in the current screen from largest to smallest distance value to obtain a rendering ordering result, and performing animation rendering on each second pixel unit in the rendering ordering result one by one based on the blending mode specifically includes:
acquiring the position of the camera, and obtaining the distance value between each second pixel unit on the same coordinate in the current screen and the camera as the Euclidean distance between that second pixel unit and the camera;
sorting the second pixel units on the same coordinate in the current screen by a quicksort algorithm or a merge-sort algorithm according to their distance values from the camera, obtaining a rendering ordering result;
setting the blending mode via the glBlendFunc function of the OpenGL library;
after enabling blending via the glEnable function of the OpenGL library, performing animation rendering on each second pixel unit in the rendering ordering result one by one;
and after each second pixel unit in the rendering ordering result has been rendered one by one, disabling blending via the glDisable function of the OpenGL library.
In this embodiment, the position of the camera is first determined, the camera typically being located at a point in the scene, which can be represented by a position vector of the camera. For each semitransparent pixel unit, its distance from the camera position is calculated, typically using the euclidean distance between the center point of the object and the camera position, which can be determined from the bounding box or geometry of the object. After associating objects with their distance values, a data structure (e.g., array, list, dictionary, etc.) may be used to store each object and its corresponding distance value.
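The distance computation and back-to-front sort can be sketched as follows (illustrative data layout; the `center` field standing in for an object's bounding-box center is an assumption):

```python
import math

def sort_back_to_front(units, camera_pos):
    """Sort translucent pixel units far-to-near by Euclidean distance to the camera."""
    def dist(unit):
        # math.dist (Python 3.8+) computes the Euclidean distance between points
        return math.dist(unit["center"], camera_pos)
    return sorted(units, key=dist, reverse=True)
```

Python's `sorted` is a merge-sort variant (Timsort), matching the merge-sort option named in the text; a quicksort would serve equally well.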
When drawing a semitransparent pixel unit, a blending mode (Blending Mode) needs to be enabled so that the new pixel color is blended with the existing color, realizing a linear interpolation between the source color (the color of the source object) and the destination color (the background color). Enabling (glEnable) and disabling (glDisable) blending are settings for the entire rendering pipeline: once blending is enabled, all subsequent rendering operations are affected by the blending mode until it is explicitly disabled.
In some cases it may be necessary to draw other opaque objects or perform other rendering operations after the translucent objects have been drawn, and these operations do not require blending. Therefore, after the translucent objects are drawn, blending is disabled to ensure that subsequent rendering is unaffected and the normal drawing result is preserved. Enabling and disabling blending at the proper times confines the blending mode to where it is needed, ensuring the accuracy and consistency of the rendering result.
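The linear interpolation described above, with the common glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) factors, can be modeled CPU-side (an illustration of the blend equation, not GPU code):

```python
def blend_over(src_rgba, dst_rgb):
    """Standard 'over' blending: out = src.rgb * src.a + dst.rgb * (1 - src.a).

    Mirrors glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) for one pixel.
    """
    r, g, b, a = src_rgba
    return tuple(s * a + d * (1.0 - a) for s, d in zip((r, g, b), dst_rgb))
```

Because the result depends on what is already in the destination, translucent units must be drawn back-to-front, which is exactly why the sort in step S102 precedes blending.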
Referring to fig. 2, the embodiment of the present invention further provides a 2D rendering ordering apparatus 2 based on 3D characteristics, where the apparatus 2 specifically includes:
The obtaining classification module 201 is configured to obtain rendering information of each pixel unit in a current screen, determine a first pixel unit set and a second pixel unit set according to the rendering information, where the first pixel unit set includes a plurality of first pixel units, the first pixel units are opaque pixel units, and the second pixel unit set includes a plurality of second pixel units, and the second pixel units are semitransparent pixel units;
a first rendering ordering module 202, configured to obtain a depth value of each first pixel unit on the same coordinate in the current screen according to a depth test, determine a first pixel unit closest to the observer in the current screen according to the depth value, and perform animation rendering on the first pixel unit closest to the observer in the current screen;
and the second rendering ordering module 203 is configured to determine the distance value between each second pixel unit on the same coordinate in the current screen and the camera, sort the second pixel units on the same coordinate in the current screen from largest to smallest distance value to obtain a rendering ordering result, and perform animation rendering on each second pixel unit in the rendering ordering result one by one based on the blending mode.
It can be understood that the content in the 3D characteristic-based 2D rendering ordering method embodiment shown in fig. 1 is applicable to the 3D characteristic-based 2D rendering ordering device embodiment, and the functions specifically implemented by the 3D characteristic-based 2D rendering ordering device embodiment are the same as those of the 3D characteristic-based 2D rendering ordering method embodiment shown in fig. 1, and the advantages achieved are the same as those achieved by the 3D characteristic-based 2D rendering ordering method embodiment shown in fig. 1.
It should be noted that, because the content of information interaction and execution process between the above devices is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Referring to fig. 3, an embodiment of the present invention further provides a computer device 3, including: memory 302 and processor 301 and a computer program 303 stored on memory 302, which computer program 303, when executed on processor 301, implements a 3D feature based 2D rendering ordering method as described in any of the above methods.
The computer device 3 may be a desktop computer, a notebook computer, a palm computer, a cloud server, or the like. The computer device 3 may include, but is not limited to, a processor 301, a memory 302. It will be appreciated by those skilled in the art that fig. 3 is merely an example of the computer device 3 and is not meant to be limiting as the computer device 3, and may include more or fewer components than shown, or may combine certain components, or different components, such as may also include input-output devices, network access devices, etc.
The processor 301 may be a central processing unit (Central Processing Unit, CPU), the processor 301 may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 302 may in some embodiments be an internal storage unit of the computer device 3, such as a hard disk or a memory of the computer device 3. The memory 302 may in other embodiments also be an external storage device of the computer device 3, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card) or the like, which are provided on the computer device 3. Further, the memory 302 may also include both an internal storage unit and an external storage device of the computer device 3. The memory 302 is used to store an operating system, application programs, boot loader (BootLoader), data, and other programs, such as program code for the computer program. The memory 302 may also be used to temporarily store data that has been output or is to be output.
The embodiment of the application also provides a computer readable storage medium, on which a computer program is stored, which when being run by a processor, implements the 3D characteristic-based 2D rendering ordering method according to any one of the above methods.
In this embodiment, the integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the method of the above embodiments by a computer program instructing related hardware, where the computer program may be stored in a computer readable storage medium, and when the computer program is executed by a processor, it may implement the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing device/terminal apparatus, a recording medium, computer memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier signals, telecommunications signals, and software distribution media, such as a USB flash drive, removable hard disk, magnetic disk or optical disk. In some jurisdictions, computer readable media may not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and in part, not described or illustrated in any particular embodiment, reference is made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the disclosed embodiments of the application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Claims (10)
1. A 2D rendering ordering method based on 3D characteristics, the method specifically comprising:
acquiring rendering information of each pixel unit in a current screen, and determining a first pixel unit set and a second pixel unit set according to the rendering information, wherein the first pixel unit set comprises a plurality of first pixel units, the first pixel units are opaque pixel units, the second pixel unit set comprises a plurality of second pixel units, and the second pixel units are semitransparent pixel units;
obtaining a depth value of each first pixel unit on the same coordinate in a current screen according to a depth test, determining a first pixel unit closest to an observer in the current screen according to the depth value, and performing animation rendering on the first pixel unit closest to the observer in the current screen;
determining the distance value between each second pixel unit on the same coordinate in the current screen and the camera, sorting the second pixel units on the same coordinate in the current screen from largest to smallest distance value to obtain a rendering ordering result, and performing animation rendering on each second pixel unit in the rendering ordering result one by one based on a blending mode.
2. The method according to claim 1, wherein the obtaining the depth value of each first pixel unit on the same coordinate in the current screen according to the depth test, determining the first pixel unit closest to the observer in the current screen according to the depth value, and performing animation rendering on the first pixel unit closest to the observer in the current screen specifically includes:
creating a depth buffer area and a color buffer area according to the size of the current screen, wherein the depth buffer area is used for storing the depth value of each coordinate in the current screen, and the color buffer area is used for storing the color value of each coordinate in the current screen;
calculating the depth value of each first pixel unit on the same coordinate, and comparing the depth value of each first pixel unit on the same coordinate with the depth value on the corresponding coordinate of the depth buffer zone;
If the current first pixel unit depth value is smaller than the depth value on the corresponding coordinate of the depth buffer zone, taking the current first pixel unit depth value as a new depth value on the corresponding coordinate of the depth buffer zone, and simultaneously taking the current first pixel unit color value as a new color value on the corresponding coordinate of the color buffer zone, and performing animation rendering;
otherwise, the next first pixel unit is continuously compared with the depth value on the corresponding coordinate of the depth buffer until the depth values of all the first pixel units on the same coordinate are compared with the depth value on the corresponding coordinate of the depth buffer.
3. The method according to claim 2, wherein the calculating the depth value of each first pixel unit on the same coordinate specifically includes:
inputting first vertex data of each first pixel unit on the same coordinate into a rendering pipeline;
setting a vertex shader in the rendering pipeline, wherein the vertex shader transforms first vertex data of each first pixel unit based on a model transformation matrix, a view transformation matrix and a projection transformation matrix to obtain second vertex data;
clipping each first pixel unit that exceeds the clip space according to the second vertex data, and obtaining normalized device coordinates within the clip space through perspective division;
mapping the normalized device coordinates to the pixel coordinates of the current screen to obtain the pixel coordinates of each first pixel unit on the same coordinate;
and obtaining the depth value of each first pixel unit on the same coordinate through the distance value between the pixel coordinate of each first pixel unit on the same coordinate and the camera.
4. A method according to claim 3, wherein transforming the vertex data of each first pixel unit based on the model transformation matrix, the view transformation matrix and the projective transformation matrix to obtain the second vertex data comprises:
transforming the first vertex data of each first pixel unit on the same coordinate from the model space to the world space based on the model transformation matrix to obtain third vertex data;
transforming third vertex data of each first pixel unit on the same coordinate from world space to observation space based on the view transformation matrix to obtain fourth vertex data;
fourth vertex data of each first pixel unit on the same coordinate is transformed from the observation space to the clipping space based on the projective transformation matrix, and second vertex data is obtained.
5. The method of claim 4, wherein the projective transformation matrix is a perspective projection matrix; the transforming the fourth vertex data of each first pixel unit on the same coordinate from the observation space to the clipping space based on the projective transformation matrix to obtain second vertex data specifically includes:
obtaining the view angle, the aspect ratio, the near clipping surface distance and the far clipping surface distance of the observation space;
obtaining a Y-axis scaling factor, the Y-axis scaling factor satisfying Y0 = 1/tan(radians(F/2)), where Y0 is the Y-axis scaling factor and F is the view angle;
obtaining an X-axis scaling factor, the X-axis scaling factor satisfying X0 = Y0/A, where X0 is the X-axis scaling factor and A is the aspect ratio;
obtaining a Z-axis scaling factor and a Z-axis offset, the Z-axis scaling factor satisfying Z1 = -(P1 + P2)/(P1 - P2) and the Z-axis offset satisfying Z2 = -(2*P1*P2)/(P1 - P2), where Z1 is the Z-axis scaling factor, Z2 is the Z-axis offset, P1 is the far clipping plane distance and P2 is the near clipping plane distance;
and taking the Y-axis scaling factor, the X-axis scaling factor, the Z-axis scaling factor and the Z-axis offset as an array in the perspective projection matrix, and transforming fourth vertex data of each first pixel unit on the same coordinate from an observation space to a clipping space according to the perspective projection matrix to obtain second vertex data.
6. The method of claim 4, wherein the projective transformation matrix is an orthogonal projection matrix; the transforming the fourth vertex data of each first pixel unit on the same coordinate from the observation space to the clipping space based on the projective transformation matrix to obtain second vertex data specifically includes:
setting a left boundary, a right boundary, a bottom boundary and a top boundary of a cutting space, and acquiring a near cutting surface distance and a far cutting surface distance of the observation space;
obtaining a Y-axis scaling factor and a Y-axis offset, the Y-axis scaling factor satisfying Y1 = 2/(T - B) and the Y-axis offset satisfying Y2 = -(T + B)/(T - B), where Y1 is the Y-axis scaling factor, Y2 is the Y-axis offset, T is the top boundary and B is the bottom boundary;
obtaining an X-axis scaling factor and an X-axis offset, the X-axis scaling factor satisfying X1 = 2/(R - L) and the X-axis offset satisfying X2 = -(R + L)/(R - L), where X1 is the X-axis scaling factor, X2 is the X-axis offset, R is the right boundary and L is the left boundary;
obtaining a Z-axis scaling factor and a Z-axis offset, the Z-axis scaling factor satisfying Z3 = -2/(P1 - P2) and the Z-axis offset satisfying Z4 = -(P1 + P2)/(P1 - P2), where Z3 is the Z-axis scaling factor, Z4 is the Z-axis offset, P1 is the far clipping plane distance and P2 is the near clipping plane distance;
and taking the Y-axis scaling factor, the Y-axis offset, the X-axis scaling factor, the X-axis offset, the Z-axis scaling factor and the Z-axis offset as an array in the orthogonal projection matrix, and transforming fourth vertex data of each first pixel unit on the same coordinate from an observation space to a clipping space according to the orthogonal projection matrix to obtain second vertex data.
7. The method according to claim 1, wherein determining the distance value between each second pixel unit on the same coordinate in the current screen and the camera, sorting the second pixel units on the same coordinate in the current screen from largest to smallest distance value to obtain a rendering ordering result, and performing animation rendering on each second pixel unit in the rendering ordering result one by one based on a blending mode specifically comprises:
acquiring the position of a camera, and acquiring a distance value between each second pixel unit on the same coordinate in a current screen and the camera according to Euclidean distance between each second pixel unit on the same coordinate in the current screen and the camera;
sorting the second pixel units on the same coordinate in the current screen by a quicksort algorithm or a merge-sort algorithm according to the distance value between each second pixel unit on the same coordinate in the current screen and the camera, obtaining a rendering ordering result;
setting a blending mode according to the glBlendFunc function of the OpenGL library;
after enabling the blending mode according to the glEnable function of the OpenGL library, performing animation rendering on each second pixel unit in the rendering ordering result one by one;
and after each second pixel unit in the rendering ordering result has been rendered one by one, disabling the blending mode according to the glDisable function of the OpenGL library.
8. A 2D rendering ordering apparatus based on 3D characteristics, characterized in that the apparatus specifically comprises:
an acquisition and classification module, configured to acquire rendering information of each pixel unit in the current screen and determine a first pixel unit set and a second pixel unit set according to the rendering information, wherein the first pixel unit set comprises a plurality of opaque first pixel units and the second pixel unit set comprises a plurality of semi-transparent second pixel units;
a first rendering ordering module, configured to obtain the depth value of each first pixel unit at the same coordinate in the current screen through a depth test, determine the first pixel unit closest to the observer in the current screen according to the depth values, and perform animation rendering on that first pixel unit;
and a second rendering ordering module, configured to determine the distance value of each second pixel unit at the same coordinate in the current screen from the camera, sort the second pixel units at the same coordinate in descending order of distance value to obtain a rendering ordering result, and perform animation rendering on each second pixel unit in the rendering ordering result one by one based on a blending mode.
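The opaque path handled by the first rendering ordering module can be sketched as a per-coordinate depth test that keeps only the pixel unit closest to the observer. The dictionary structure and the smaller-depth-is-closer convention are illustrative assumptions.

```python
# Depth-test sketch: for each screen coordinate, retain only the opaque
# first pixel unit with the smallest depth value (closest to the observer).
def closest_per_coordinate(units):
    nearest = {}
    for unit in units:
        coord, depth = unit['coord'], unit['depth']
        if coord not in nearest or depth < nearest[coord]['depth']:
            nearest[coord] = unit  # this unit passes the depth test
    return nearest
```

Only the winners of this test are rendered, which is why opaque units need no sorting: the depth test alone resolves occlusion at each coordinate.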
9. A computer device, comprising: a memory, a processor, and a computer program stored in the memory, wherein the computer program, when executed by the processor, implements the 3D characteristic-based 2D rendering ordering method according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the 3D characteristic-based 2D rendering ordering method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310680150.0A CN116630516B (en) | 2023-06-09 | 2023-06-09 | 3D characteristic-based 2D rendering ordering method, device, equipment and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116630516A true CN116630516A (en) | 2023-08-22 |
CN116630516B CN116630516B (en) | 2024-01-30 |
Family
ID=87636573
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310680150.0A Active CN116630516B (en) | 2023-06-09 | 2023-06-09 | 3D characteristic-based 2D rendering ordering method, device, equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116630516B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050057574A1 (en) * | 2003-09-12 | 2005-03-17 | Andrews Jeffrey A. | Methods and systems for transparent depth sorting |
CN101295408A (en) * | 2007-04-27 | 2008-10-29 | 新奥特硅谷视频技术有限责任公司 | 3D videotext rendering method and system |
CN102722861A (en) * | 2011-05-06 | 2012-10-10 | 新奥特(北京)视频技术有限公司 | CPU-based graphic rendering engine and realization method |
WO2018140223A1 (en) * | 2017-01-25 | 2018-08-02 | Advanced Micro Devices, Inc. | Stereo rendering |
CN111462278A (en) * | 2020-03-17 | 2020-07-28 | 稿定(厦门)科技有限公司 | Depth-based material sorting rendering method, medium, equipment and device |
CN112541960A (en) * | 2019-09-19 | 2021-03-23 | 阿里巴巴集团控股有限公司 | Three-dimensional scene rendering method and device and electronic equipment |
CN112837402A (en) * | 2021-03-01 | 2021-05-25 | 腾讯科技(深圳)有限公司 | Scene rendering method and device, computer equipment and storage medium |
CN113052951A (en) * | 2021-06-01 | 2021-06-29 | 腾讯科技(深圳)有限公司 | Object rendering method and device, computer equipment and storage medium |
CN115063330A (en) * | 2022-06-13 | 2022-09-16 | 北京大甜绵白糖科技有限公司 | Hair rendering method and device, electronic equipment and storage medium |
CN115423923A (en) * | 2022-09-02 | 2022-12-02 | 珠海金山数字网络科技有限公司 | Model rendering method and device |
Also Published As
Publication number | Publication date |
---|---|
CN116630516B (en) | 2024-01-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7463261B1 (en) | Three-dimensional image compositing on a GPU utilizing multiple transformations | |
US7663621B1 (en) | Cylindrical wrapping using shader hardware | |
TWI592902B (en) | Control of a sample mask from a fragment shader program | |
US9965886B2 (en) | Method of and apparatus for processing graphics | |
CN108038897B (en) | Shadow map generation method and device | |
US8531457B2 (en) | Apparatus and method for finding visible points in a cloud point | |
US8970583B1 (en) | Image space stylization of level of detail artifacts in a real-time rendering engine | |
US8115783B2 (en) | Methods of and apparatus for processing computer graphics | |
US8059119B2 (en) | Method for detecting border tiles or border pixels of a primitive for tile-based rendering | |
US20100134634A1 (en) | Image processing system | |
CN110163831B (en) | Method and device for dynamically displaying object of three-dimensional virtual sand table and terminal equipment | |
US10078911B2 (en) | System, method, and computer program product for executing processes involving at least one primitive in a graphics processor, utilizing a data structure | |
US20060284883A1 (en) | Device for processing pixel rasterization and method for processing the same | |
CN111754381B (en) | Graphics rendering method, apparatus, and computer-readable storage medium | |
US7400325B1 (en) | Culling before setup in viewport and culling unit | |
GB2406252A (en) | Generation of texture maps for use in 3D computer graphics | |
US8941660B2 (en) | Image generating apparatus, image generating method, and image generating integrated circuit | |
CN111161398A (en) | Image generation method, device, equipment and storage medium | |
US8068120B2 (en) | Guard band clipping systems and methods | |
US7292239B1 (en) | Cull before attribute read | |
CN114002701A (en) | Method, device, electronic equipment and system for rendering point cloud in real time | |
GB2444628A (en) | Sorting graphics data for processing | |
CN114359048A (en) | Image data enhancement method and device, terminal equipment and storage medium | |
CN116630516B (en) | 3D characteristic-based 2D rendering ordering method, device, equipment and medium | |
WO2023115408A1 (en) | Image processing apparatus and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||