CN113822961B - Method, device, equipment and medium for 2D rendering of 3D model

Method, device, equipment and medium for 2D rendering of 3D model

Info

Publication number
CN113822961B
CN113822961B
Authority
CN
China
Prior art keywords
model
mixing
rendering
semitransparent
algorithm
Prior art date
Legal status
Active
Application number
CN202111107616.5A
Other languages
Chinese (zh)
Other versions
CN113822961A (en)
Inventor
Xie Tian (谢天)
Current Assignee
Guangzhou Boguan Information Technology Co Ltd
Original Assignee
Guangzhou Boguan Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Boguan Information Technology Co Ltd
Priority to CN202111107616.5A
Publication of CN113822961A
Application granted
Publication of CN113822961B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Generation (AREA)

Abstract

The disclosure provides a method for 2D rendering of a 3D model, an apparatus for 2D rendering of a 3D model, an electronic device, and a computer-readable storage medium, and relates to the technical field of image processing. The method comprises the following steps: acquiring N semitransparent pixel fragments obtained by 3D rendering of a pre-generated 3D graphics object; sorting the N semitransparent pixel fragments by distance from the virtual camera; blending and superimposing the sorted N semitransparent pixel fragments; setting the color of a target model to black, and blending and superimposing the blended N semitransparent pixel fragments with the target model to obtain a 3D model; and blending and superimposing the 3D model with an underlying background, and controlling a 2D rendering engine to render the result to obtain a target 2D rendered image. The present disclosure can improve the fidelity with which the rendered image reproduces the 3D original.

Description

Method, device, equipment and medium for 2D rendering of 3D model
Technical Field
The present disclosure relates to the field of image processing technology and, in particular, to a method of 2D rendering of a 3D model based on image processing technology, an apparatus for 2D rendering of a 3D model, an electronic device, and a computer-readable storage medium.
Background
With the development of image processing technology, 3D rendering techniques are applied in more and more scenarios such as games, animation, and modeling, so that images present increasingly realistic 3D effects.
Taking a game as an example application, one common approach is to build a model with a 3D engine inside a 2D game and then render the built model onto a 2D underlying background. In this way, an image effect close to that of a 3D game can be achieved while avoiding the surge in computational load that a full 3D game would incur.
However, existing methods that render 3D models in 2D often fail to set appropriate values for the shading factor of the color channels and the transparent blending factor of the alpha channel, so the rendered image reproduces the corresponding 3D image poorly and cannot achieve a satisfactory 3D-like effect.
Alternatively, an order-independent transparency (OIT) rendering method may be employed; however, such approaches typically incur higher additional computational and/or hardware costs.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
An object of the embodiments of the present disclosure is to provide a method of 2D rendering of a 3D model, an apparatus for 2D rendering of a 3D model, an electronic device, and a computer-readable storage medium, so as to overcome, at least to some extent, the poor fidelity of the 3D-like effect caused by inappropriate values of the shading factor and the transparent blending factor.
According to one aspect of the present disclosure, there is provided a method of 2D rendering of a 3D model, comprising:
acquiring N semitransparent pixel fragments obtained by 3D rendering of a pre-generated 3D graphics object;
sorting the N semitransparent pixel fragments by distance from the virtual camera;
blending and superimposing the sorted N semitransparent pixel fragments;
setting the color of a preset target model to black, and blending and superimposing the blended N semitransparent pixel fragments with the target model to obtain a 3D model;
blending and superimposing the 3D model with an underlying background, and controlling a 2D rendering engine to render the blended 3D model and underlying background to obtain a target 2D rendered image;
wherein N is a natural number greater than 1.
In an exemplary embodiment of the present disclosure, the sorted N semitransparent pixel fragments are blended and superimposed based on a semitransparent blending algorithm or an additive blending algorithm, or the 3D model is blended and superimposed with an underlying background based on the semitransparent blending algorithm or the additive blending algorithm.
In an exemplary embodiment of the present disclosure, the semitransparent blending algorithm is an alpha blending algorithm.
In an exemplary embodiment of the present disclosure, before blending and superimposing the blended N semitransparent pixel fragments with the target model, the method further includes: setting the transparency of the target model to transparent.
In an exemplary embodiment of the present disclosure, blending and superimposing the sorted N semitransparent pixel fragments includes: blending and superimposing the N semitransparent pixel fragments in order from farthest to nearest the virtual camera.
In an exemplary embodiment of the present disclosure, blending and superimposing the 3D model with an underlying background includes: determining a source transparent blending factor corresponding to the 3D model, and determining a target transparent blending factor corresponding to the underlying background; and blending and superimposing the 3D model with the underlying background according to the source transparent blending factor and the target transparent blending factor.
In an exemplary embodiment of the present disclosure, when the 3D model is blended and superimposed with an underlying background based on the semitransparent blending algorithm, determining the target transparent blending factor corresponding to the underlying background includes: determining the target transparent blending factor based on the semitransparent blending algorithm and the source transparent blending factor.
In an exemplary embodiment of the present disclosure, when the 3D model is blended and superimposed with an underlying background based on the additive blending algorithm, determining the source transparent blending factor corresponding to the 3D model and the target transparent blending factor corresponding to the underlying background includes: setting the source transparent blending factor to 0 and the target transparent blending factor to 1.
According to one aspect of the present disclosure, there is provided an apparatus for 2D rendering of a 3D model, comprising:
a 3D rendering module, configured to acquire N semitransparent pixel fragments obtained by 3D rendering of a pre-generated 3D graphics object;
a sorting module, configured to sort the N semitransparent pixel fragments by distance from the virtual camera;
a blending module, configured to blend and superimpose the sorted N semitransparent pixel fragments; set the color of a preset target model to black and blend and superimpose the blended N semitransparent pixel fragments with the target model to obtain a 3D model; and blend and superimpose the 3D model with an underlying background;
a 2D rendering module, configured to render the blended 3D model and underlying background to obtain a target 2D rendered image;
wherein N is a natural number greater than 1.
According to one aspect of the present disclosure, there is provided an electronic device including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the method of any of the above via execution of the executable instructions.
According to one aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any one of the above.
Exemplary embodiments of the present disclosure may have some or all of the following advantages:
In the method of 2D rendering of a 3D model provided by the exemplary embodiments of the present disclosure, on the one hand, setting the color of the target model to black makes it possible to determine the preferred values of the shading factor and the transparent blending factor used by each semitransparent pixel fragment during blending and superimposing, so that the rendered image reproduces the corresponding 3D image with high fidelity, which in turn helps improve the user's sense of immersion in the game. On the other hand, the method builds on the conventional approach of sorting fragments and avoids OIT rendering, thereby avoiding additional computational and/or hardware costs.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and, together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely some examples of the disclosure, and that other drawings may be derived from them without inventive effort.
FIG. 1 schematically illustrates an application scenario diagram of a method of 2D rendering of a 3D model according to one embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow diagram of a method of 2D rendering of a 3D model according to one embodiment of the disclosure;
FIG. 3 schematically illustrates an implementation schematic diagram of a method of 2D rendering of a 3D model according to one embodiment of the present disclosure;
FIG. 4A illustrates an effect diagram of an exemplary 3D model; FIG. 4B illustrates the 2D image obtained after that 3D model is rendered by a method of 2D rendering of a 3D model according to one embodiment of the present disclosure;
FIG. 5 schematically illustrates a block diagram of an apparatus for 2D rendering of a 3D model according to one embodiment of the disclosure;
FIG. 6 shows a schematic diagram of a computer system suitable for implementing embodiments of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. One skilled in the relevant art will recognize, however, that the aspects of the disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
FIG. 1 illustrates an application scenario of a method of 2D rendering of a 3D model according to an embodiment of the present disclosure. The system architecture 100 may include one or more of terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 provides communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired or wireless communication links or fiber-optic cables. The terminal devices 101, 102, 103 may be various electronic devices with data processing capability, including but not limited to desktop computers, portable computers, personal digital assistant (PDA) devices, tablet computers, and the like. It should be understood that the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative; there may be any number of each according to practical requirements. For example, the server 105 may be a cluster formed by multiple servers.
For example, in one exemplary embodiment, a user may send an instruction through the terminal device 101, 102, or 103 to the server 105 via the network 104 to invoke the 3D model and the corresponding 3D and 2D rendering engines stored on the server 105; the server 105 then performs the computation according to a method of 2D rendering of a 3D model of an embodiment of the present disclosure to complete the 2D rendering of the 3D model, and may deliver the rendered image to the terminal device 101, 102, or 103 to present the result to the user.
Alternatively, the user may perform 3D modeling on, or store the 3D model on, the terminal device 101, 102, or 103, and control that device to run the corresponding 3D and 2D rendering engines and perform the computation according to a method of 2D rendering of a 3D model of an embodiment of the present disclosure, as long as the terminal device 101, 102, or 103 has the processing capability the method requires.
It should be understood by those skilled in the art that the above application scenario is only for example, and the present exemplary embodiment is not limited thereto.
With the method of 2D rendering of a 3D model described in the present disclosure, the preferred values of the shading factor and the transparent blending factor used by each semitransparent pixel fragment during blending and superimposing can be determined, so that the rendered image reproduces the corresponding 3D image with higher fidelity, which in turn improves the user's sense of immersion in the game.
The technical solutions of the embodiments of the present disclosure are described in detail below:
This exemplary embodiment provides a method of 2D rendering of a 3D model, which may run on a server with the relevant processing capability or on a terminal device such as a personal computer. Referring to FIG. 2, the method of 2D rendering of a 3D model may include the following steps:
S210, acquiring N semitransparent pixel fragments obtained by 3D rendering of a pre-generated 3D graphics object;
S220, sorting the N semitransparent pixel fragments by distance from the virtual camera;
S230, blending and superimposing the sorted N semitransparent pixel fragments;
S240, setting the color of a preset target model to black, and blending and superimposing the blended N semitransparent pixel fragments with the target model to obtain a 3D model;
S250, blending and superimposing the 3D model with an underlying background, and controlling a 2D rendering engine to render the blended 3D model and underlying background to obtain a target 2D rendered image; wherein N is a natural number greater than 1.
In the method of 2D rendering of a 3D model provided by the exemplary embodiments of the present disclosure, on the one hand, setting the color of the target model to black makes it possible to determine the preferred values of the shading factor and the transparent blending factor used by each semitransparent pixel fragment during blending and superimposing, so that the rendered image reproduces the corresponding 3D image with high fidelity, which in turn helps improve the user's sense of immersion in the game. On the other hand, the method builds on the conventional approach of sorting fragments and avoids OIT rendering, thereby avoiding additional computational and/or hardware costs.
Next, the above steps of this exemplary embodiment are described in more detail.
In step S210, N semitransparent pixel fragments obtained by 3D rendering of the pre-generated 3D graphics object are acquired.
In this exemplary embodiment, a 3D rendering engine such as finalRender or Maxwell Render may be invoked, in response to a user's control command, to perform 3D rendering on a pre-generated 3D graphics object, and the N semitransparent pixel fragments produced by that rendering are acquired, where N is a natural number greater than 1. The N semitransparent pixel fragments and the target model together constitute a 3D model. The target model may be a basic model that the user has previously built with a tool such as a game physics engine, and it may carry characteristic data such as rigid bodies and colliders; through this target model, actions and physical effects of the corresponding virtual character in the game, such as running, jumping, hitting, and collisions, can be presented. The pre-generated 3D graphics object, in turn, may be prefabricated with an image development tool such as the UE4 engine and may correspond to, for example, the facial features, hair, clothing, skill effects, lighting and shadow effects, textures, or particle effects drawn for the virtual character. The N semitransparent pixel fragments rendered from the pre-generated 3D graphics object can be blended and superimposed onto the target model layer by layer, so that the character's appearance and various animated special effects are displayed on top of the target model. That is, the N semitransparent pixel fragments combine with the target model to form the 3D model corresponding to the virtual character in the game.
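To make the data of this step concrete, the following minimal Python sketch models each semitransparent pixel fragment as a color, a transparency, and a camera distance, alongside the base target model. The names Fragment and TargetModel and all sample values are assumptions of this sketch, not terms defined by the patent.

```python
# Illustrative data only; names and values are assumptions of this sketch.
from dataclasses import dataclass

@dataclass
class Fragment:
    color: tuple    # (R, G, B), each channel in [0.0, 1.0]
    alpha: float    # transparency A_i in [0.0, 1.0]
    depth: float    # distance from the virtual camera

@dataclass
class TargetModel:
    color: tuple    # C_RT; the method sets this to black (0, 0, 0)
    alpha: float    # A_RT; one example sets this to 0 (fully transparent)

# E.g. three fragments rendered from a pre-generated 3D graphics object:
fragments = [
    Fragment((1.0, 0.8, 0.6), 0.5, 2.0),   # e.g. a skin layer (farther)
    Fragment((0.2, 0.1, 0.0), 0.7, 1.5),   # e.g. a hair layer
    Fragment((0.9, 0.9, 1.0), 0.3, 1.0),   # e.g. a light effect (nearest)
]
base = TargetModel((0.0, 0.0, 0.0), 0.0)
```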
In step S220, the N semitransparent pixel fragments are sorted by distance from the virtual camera.
In this exemplary embodiment, because the N semitransparent pixel fragments are rendered hierarchically, the position of each semitransparent pixel fragment relative to the virtual camera, that is, the layer order seen from the player's viewpoint, often determines whether the combined N semitransparent pixel fragments present a correct appearance. For example, if the layer carrying a virtual character's hair is rendered below the layer carrying its skin, the player gets an inconsistent visual experience. The generated N semitransparent pixel fragments are therefore sorted by distance from the virtual camera.
In the example shown in FIG. 3, the N semitransparent pixel fragments are numbered 3021 to 302N. Suppose that after sorting, semitransparent pixel fragment 3021 is closest to the virtual camera 301 and semitransparent pixel fragment 302N is farthest from it in the correct order; fragment 3021 is then called the layer-1 semitransparent pixel fragment, the fragment adjacent to it the layer-2 semitransparent pixel fragment, and so on, with fragment 302N, farthest from the virtual camera 301, being the layer-N semitransparent pixel fragment.
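A sorting step along these lines might look as follows; the (color, alpha, depth) tuple layout and the function name are assumptions of this sketch.

```python
# Step S220 sketch: order fragments by distance from the virtual camera,
# farthest first, so the later blend can proceed far-to-near (layer N ... 1).
def sort_far_to_near(fragments):
    return sorted(fragments, key=lambda f: f[2], reverse=True)

# Each fragment is (color, alpha, depth); depth = distance from the camera.
frags = [((0.9, 0.9, 1.0), 0.3, 1.0),   # nearest, becomes layer 1
         ((1.0, 0.8, 0.6), 0.5, 2.0),   # farthest, becomes layer N
         ((0.2, 0.1, 0.0), 0.7, 1.5)]
ordered = sort_far_to_near(frags)       # depths now 2.0, 1.5, 1.0
```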
In step S230, the sorted N semitransparent pixel fragments are blended and superimposed.
In this exemplary embodiment, the sorted N semitransparent pixel fragments are blended and superimposed. In one example, the N semitransparent pixel fragments may be blended and superimposed based on a semitransparent blending algorithm or an additive blending algorithm. In a further example, the semitransparent blending algorithm may be an alpha blending algorithm. Alpha blending is a classic semitransparent blending algorithm, and blending and superimposing the semitransparent pixel fragments on its basis yields excellent, lifelike lighting and color effects. The recursive formula of the alpha blending algorithm can be expressed as:
C_final(n) = A_n × C_n + (1 - A_n) × C_final(n-1)    (Formula 1)
whereas the recursive formula of the additive blending algorithm can be expressed as:
C_final(n) = A_n × C_n + C_final(n-1)    (Formula 2)
where C_final(N) is the color obtained by blending and superimposing N layers of semitransparent pixel fragments, and C_final(N-1) is the color obtained from N-1 layers; A_N is the transparency of the layer-N semitransparent pixel fragment; and C_N is the color of the layer-N semitransparent pixel fragment.
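For illustration, both recursions can be folded over a far-to-near fragment list as in the sketch below; single-channel colors are used for brevity and generalize to RGB channel by channel, and the function names are assumptions of this sketch rather than the patent's.

```python
# layers: list of (C_i, A_i) pairs ordered far-to-near; base plays C_final(0).

def alpha_blend(layers, base=0.0):
    c = base
    for color, a in layers:
        c = a * color + (1.0 - a) * c   # Formula 1
    return c

def additive_blend(layers, base=0.0):
    c = base
    for color, a in layers:
        c = a * color + c               # Formula 2
    return c

print(alpha_blend([(0.8, 0.5), (0.3, 0.7), (0.9, 0.2)]))   # approximately 0.444
```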
In one example, as shown in FIG. 3, when the sorted N semitransparent pixel fragments are blended and superimposed based on the semitransparent blending algorithm or the additive blending algorithm, each semitransparent pixel fragment may be blended and superimposed layer by layer, in order from farthest to nearest the virtual camera 301. That is, starting from the layer-N semitransparent pixel fragment 302N, it is blended and superimposed with the next layer 302N-1, then with 302N-2, and so on, until the layer-1 semitransparent pixel fragment 3021 is blended in. In this process, when blending and superimposing are performed based on the alpha blending algorithm, the sub-term contributed by each layer of semitransparent pixel fragments can be expressed as:
Layer N: C_N × A_N × (1 - A_(N-1)) × (1 - A_(N-2)) × … × (1 - A_1)
+
Layer N-1: C_(N-1) × A_(N-1) × (1 - A_(N-2)) × (1 - A_(N-3)) × … × (1 - A_1)
+
……
+
Layer 1: C_1 × A_1
When blending and superimposing are performed based on the additive blending algorithm, the visibility of each layer of semitransparent pixel fragments is independent of the others; that is, the transparency of one semitransparent pixel fragment contributes no attenuation factor to the transparency of the semitransparent pixel fragments of other layers. The sub-term of each layer can therefore be expressed as:
Layer N: C_N × A_N
+
Layer N-1: C_(N-1) × A_(N-1)
+
……
+
Layer 1: C_1 × A_1
Adding these sub-terms yields the color effect of the blended and superimposed semitransparent pixel fragments across all layers. The resulting color already incorporates the transparency terms A_i (i = 1, …, N); that is, the influence of the alpha channel has been taken into account, so the result can exhibit a correct semitransparent effect. By sorting the semitransparent pixel fragments and then blending and superimposing them, a 3D image rendering effect with correct semitransparency can be presented.
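The equivalence of this expanded sum with the recursive fold of Formula 1 can be spot-checked numerically; the sketch below uses random single-channel test values and is an illustration added here, not part of the patent.

```python
import math
import random

random.seed(0)
N = 5
colors = [random.random() for _ in range(N)]   # C_1 .. C_N (index 0 = layer 1, nearest)
alphas = [random.random() for _ in range(N)]   # A_1 .. A_N

# Expanded form: layer k contributes C_k * A_k * prod over nearer layers j of (1 - A_j).
expanded = sum(
    colors[k] * alphas[k] * math.prod(1.0 - alphas[j] for j in range(k))
    for k in range(N)
)

# Recursive form (Formula 1), folded far-to-near: layer N first, layer 1 last.
c = 0.0
for color, a in reversed(list(zip(colors, alphas))):
    c = a * color + (1.0 - a) * c

assert abs(expanded - c) < 1e-9
```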
In step S240, the color of the preset target model is set to black, and the blended N semitransparent pixel fragments are blended and superimposed with the target model to obtain a 3D model.
In this exemplary embodiment, after the sorted N semitransparent pixel fragments have been blended and superimposed, as shown in FIG. 3, the blended N semitransparent pixel fragments 3021 to 302N may be further blended and superimposed with the preset target model 303, so that the two combine into the 3D model corresponding to the game's virtual character. Following the blending scheme above, the sub-term corresponding to the target model 303 is:
Target model: C_RT × A_RT × (1 - A_N) × (1 - A_(N-1)) × … × (1 - A_1) (alpha blending algorithm); or
C_RT × A_RT (additive blending algorithm);
where C_RT is the color of the target model 303 and A_RT is its transparency. However, since the target model 303 is a virtual physical model obtained by modeling, the RGB components of its exact color are often difficult to determine. In view of this, the color of the target model 303 may be set to black, that is, C_RT = 0. The sub-term corresponding to the target model 303 then becomes 0, so during blending and superimposing the target model 303 does not affect the N semitransparent pixel fragments above it on the color channels. This determines the preferred value of the shading factor used by each semitransparent pixel fragment during blending and superimposing, so that after being blended and superimposed onto the target model 303, the N semitransparent pixel fragments 3021 to 302N still agree with the theoretical result of the alpha blending algorithm, and the rendered image reproduces the corresponding 3D image with higher fidelity.
In one example, before the blended N semitransparent pixel fragments are blended and superimposed with the target model, the transparency of the target model 303 may additionally be set to transparent, that is, A_RT = 0, in addition to setting its color to black. In this way, the target model 303 does not affect the N semitransparent pixel fragments above it on the alpha channel either, so the result of blending and superimposing the N semitransparent pixel fragments 3021 to 302N with the target model 303 agrees even more closely with the theoretical result.
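The effect of the black, fully transparent target model can be checked with a small sketch: blending the fragments over a base with (C_RT, A_RT) = (0, 0) reproduces the fragments-only theoretical result exactly, whereas a non-black, opaque base would pollute both channels. The helper name fold and the sample values are assumptions of this sketch.

```python
def fold(layers, c0=0.0, a0=0.0):
    # Blend (C_i, A_i) layers far-to-near over an initial color/alpha pair.
    c, a = c0, a0
    for color, alpha in layers:
        c = alpha * color + (1.0 - alpha) * c   # color channel (Formula 1)
        a = alpha + (1.0 - alpha) * a           # alpha channel
    return c, a

frags = [(0.8, 0.5), (0.3, 0.7), (0.9, 0.2)]    # (C_i, A_i), far to near

# Target model as the bottom layer, with C_RT = 0 (black), A_RT = 0 (transparent):
with_model = fold([(0.0, 0.0)] + frags)
theory = fold(frags)                             # fragments-only reference
assert with_model == theory

# A non-black, opaque base would change the result in both channels:
assert fold([(0.5, 1.0)] + frags) != theory
```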
In step S250, the 3D model is blended and superimposed with the underlying background, and a 2D rendering engine is controlled to render the blended 3D model and underlying background to obtain a target 2D rendered image.
In this exemplary embodiment, as shown in FIG. 3, after the 3D model has been obtained by blending and superimposing, it may be blended and superimposed with the underlying background 304 of the 2D game to obtain the target 2D rendered image finally presented in the 2D game. In one example, the 3D model may be blended and superimposed with the underlying background 304 based on a semitransparent blending algorithm or an additive blending algorithm; in a further example, the semitransparent blending algorithm may be an alpha blending algorithm. Because the 3D model is essentially formed by blending and superimposing semitransparent pixel fragments, it carries an alpha-channel component, and the underlying background 304 of a 2D game usually involves an alpha-channel component as well. Thus, in one example, a source transparent blending factor corresponding to the 3D model and a target transparent blending factor corresponding to the underlying background 304 may be determined; the mutual visibility of the 3D model and the underlying background 304 arising from the alpha channel is then fixed by these factors, and the 3D model can be blended and superimposed with the underlying background 304 correctly according to the source transparent blending factor and the target transparent blending factor.
In a further example, when the 3D model is blended and superimposed with the underlying background 304 based on a semitransparent blending algorithm such as the alpha blending algorithm, then, as explained above, the alpha-channel component contributed during hierarchical blending by the target model 303, the lowest layer of the 3D model, is A_RT × (1 - A_N) × (1 - A_(N-1)) × … × (1 - A_1). With the method of the present disclosure, however, the target model 303 may be set so that its alpha channel has no effect on the visibility of the upper N semitransparent pixel fragments 3021 to 302N; the alpha-channel component α_RM of the 3D model, that is, the source transparent blending factor, is then actually (1 - A_N) × (1 - A_(N-1)) × … × (1 - A_1). Based on the alpha blending algorithm and this source transparent blending factor, the alpha-channel component α_B of the underlying background 304, that is, the target transparent blending factor, can be deduced to be 1 - (1 - A_N) × (1 - A_(N-1)) × … × (1 - A_1). In other words, when the source transparent blending factor corresponding to the 3D model is α_RM, the target transparent blending factor α_B corresponding to the underlying background 304 is simply 1 - α_RM. This can be shown by induction, as follows:
Base case: if there is only one layer of semitransparent pixel fragments, the alpha-channel component of that layer is α_1 = A_1 × 1 + (1 - A_1) × 0 = A_1 = 1 - (1 - A_1), so the claimed form holds;
Inductive step: assume the claimed form holds for the semitransparent pixel fragments from layer N down to layer 2; blending and superimposing that result onto the layer-1 semitransparent pixel fragment gives:
α_N = (1 - (1 - A_N) × (1 - A_(N-1)) × (1 - A_(N-2)) × … × (1 - A_2)) × (1 - A_1) + A_1
= (1 - A_1) - (1 - A_N) × (1 - A_(N-1)) × (1 - A_(N-2)) × … × (1 - A_1) + A_1
= 1 - (1 - A_N) × (1 - A_(N-1)) × … × (1 - A_1).
The claimed form therefore holds for all N.
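The same induction can be spot-checked numerically. The sketch below folds the alpha channel far-to-near from a transparent base and compares it against the closed form; the variable names alpha_RM and alpha_B follow the discussion above, and the random test values are illustrative.

```python
import math
import random

random.seed(1)
alphas = [random.random() for _ in range(6)]     # A_1 .. A_N

a = 0.0                                          # A_RT = 0: transparent base
for A in reversed(alphas):                       # far (A_N) to near (A_1)
    a = A + (1.0 - A) * a                        # alpha-channel blend step

alpha_RM = math.prod(1.0 - A for A in alphas)    # source transparent blending factor
alpha_B = 1.0 - alpha_RM                         # target transparent blending factor
assert abs(a - alpha_B) < 1e-12                  # a equals 1 - (1-A_N)...(1-A_1)
```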
In yet a further example, when the 3D model is blended and superimposed with the underlying background 304 based on the additive blending algorithm, then, as explained above, each blended layer of semitransparent pixel fragments does not affect the other semitransparent pixel fragments in the alpha channel. That is, whenever two semitransparent pixel fragments are blended and superimposed, the alpha-channel component of the source fragment does not influence the alpha-channel component of the target fragment. In view of this, when the 3D model is blended and superimposed with the underlying background 304, the source transparent blending factor α_RM may be set directly to 0 and the target transparent blending factor α_B to 1 in the alpha channel.
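Under these factor choices, the final composite onto the background reduces to two one-line rules. The sketch below states them as one possible interpretation under the assumptions of this section (c_model denotes the already-accumulated, fragments-over-black color of the 3D model), not as the patent's literal implementation.

```python
def composite_semitransparent(c_model, alpha_RM, c_background):
    # Semitransparent case: the background stays visible with weight
    # alpha_RM = (1-A_N)...(1-A_1); the accumulated model color adds as-is.
    return c_model + alpha_RM * c_background

def composite_additive(c_model, c_background):
    # Additive case: source factor 0 / target factor 1 in the alpha channel
    # leaves the background's visibility untouched; the colors simply add.
    return c_model + c_background

# With the example fragments used in the earlier sketches:
# c_model = 0.444 and alpha_RM = 0.5 * 0.3 * 0.8 = 0.12.
print(composite_semitransparent(0.444, 0.12, 0.6))   # 0.444 + 0.072 = 0.516
```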
Through the above examples, the preferred values of the transparent blending factor are determined, so that the result of blending and superimposing the 3D model with the underlying background 304 agrees with the theoretical result of the alpha blending algorithm; the rendered image therefore reproduces the corresponding 3D image with higher fidelity, which in turn improves the user's sense of immersion in the game. Moreover, every blending step, from the N semitransparent pixel fragments, to the target model, to the underlying background of the 2D game, is based on sorted, layer-by-layer blending algorithms; OIT rendering is avoided, and with it any additional computational and/or hardware costs.
FIG. 4A and FIG. 4B compare a 2D image rendered by the methods of the present disclosure with the image effect of the corresponding 3D model. FIG. 4A shows the image effect of an exemplary 3D model, and FIG. 4B shows the 2D image obtained after that 3D model is rendered by the method of the present disclosure. Comparing the two, the 2D image rendered by the disclosed method restores the 3D model's semitransparent effects, such as lighting, shadow, and texture effects, with high fidelity, which helps provide a good user experience and improves the user's sense of immersion in the game.
It should be noted that although the steps of the methods in the present disclosure are depicted in the accompanying drawings in a particular order, this does not require or imply that the steps must be performed in that particular order, or that all illustrated steps be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
Further, this exemplary embodiment also provides an apparatus for 2D rendering of a 3D model. Referring to FIG. 5, the apparatus 500 for 2D rendering of a 3D model may include a 3D rendering module 510, a sorting module 520, a blending module 530, and a 2D rendering module 540, wherein:
the 3D rendering module 510 may be configured to acquire N semitransparent pixel fragments obtained by 3D rendering of a pre-generated 3D graphics object;
the sorting module 520 may be configured to sort the N semitransparent pixel fragments by distance from the virtual camera;
the blending module 530 may be configured to blend and superimpose the sorted N semitransparent pixel fragments; set the color of a preset target model to black and blend and superimpose the blended N semitransparent pixel fragments with the target model to obtain a 3D model; and blend and superimpose the 3D model with an underlying background; and
the 2D rendering module 540 may be configured to render the blended 3D model and underlying background to obtain a target 2D rendered image; wherein N is a natural number greater than 1.
In an exemplary embodiment of the present disclosure, the blending module 530 may be configured to blend and superimpose the sorted N semitransparent pixel fragments based on a semitransparent blending algorithm or an additive blending algorithm, or to blend and superimpose the 3D model with the underlying background based on the semitransparent blending algorithm or the additive blending algorithm.
In an exemplary embodiment of the present disclosure, the semitransparent blending algorithm may be an alpha blending algorithm.
In an exemplary embodiment of the present disclosure, the blending module 530 may further be configured to set the transparency of the target model to transparent before the blended N semitransparent pixel fragments are blended and superimposed with the target model.
In an exemplary embodiment of the present disclosure, the blending module 530 may be configured to blend and superimpose the N semitransparent pixel fragments in order from farthest to nearest the virtual camera.
In an exemplary embodiment of the present disclosure, the blending module 530 may be configured to determine a source transparent blending factor corresponding to the 3D model and a target transparent blending factor corresponding to the underlying background, and to blend and superimpose the 3D model with the underlying background according to the source transparent blending factor and the target transparent blending factor.
In an exemplary embodiment of the present disclosure, the blending module 530 may be configured to determine the target transparent blending factor based on the semitransparent blending algorithm and the source transparent blending factor when the 3D model is blended and superimposed with the underlying background based on the semitransparent blending algorithm.
In an exemplary embodiment of the present disclosure, the blending module 530 may be configured to set the source transparent blending factor to 0 and the target transparent blending factor to 1 when the 3D model is blended and superimposed with the underlying background based on the additive blending algorithm.
The specific details of each module or unit of the apparatus for 2D rendering of a 3D model have been described in detail in the corresponding method for 2D rendering of a 3D model, and are therefore not repeated here.
Fig. 6 shows a schematic diagram of a computer system suitable for use in implementing embodiments of the present disclosure.
It should be noted that, the computer system 600 of the electronic device shown in fig. 6 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present disclosure.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for system operation are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a cathode ray tube (CRT) display, a liquid crystal display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. The drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read from it can be installed into the storage section 608 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, it performs the various functions defined in the method and apparatus of the present application.
As another aspect, the present application also provides a computer-readable medium that may be contained in the electronic device described in the above embodiment; or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the method as described in the above embodiments.
It should be noted that the computer readable medium shown in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (11)

1. A method of 2D rendering of a 3D model, comprising:
acquiring N semitransparent pixel fragments obtained by 3D rendering of a pre-generated 3D graphics object;
sorting the N semitransparent pixel fragments by distance from the virtual camera;
blending and superimposing the sorted N semitransparent pixel fragments based on a semitransparent blending algorithm or an additive blending algorithm;
setting the color of a preset target model to black, and blending and superimposing the blended N semitransparent pixel fragments with the target model based on a semitransparent blending algorithm or an additive blending algorithm to obtain a 3D model; wherein the target model is a pre-modeled basic model;
blending and superimposing the 3D model with an underlying background based on a semitransparent blending algorithm or an additive blending algorithm, and controlling a 2D rendering engine to render the blended 3D model and underlying background to obtain a target 2D rendered image;
wherein N is a natural number greater than 1.
2. The method of 2D rendering of a 3D model of claim 1, the method comprising:
blending and superimposing the sorted N semitransparent pixel fragments based on a semitransparent blending algorithm or an additive blending algorithm, or
blending and superimposing the 3D model with an underlying background based on the semitransparent blending algorithm or the additive blending algorithm.
3. The method of 2D rendering of a 3D model of claim 2, wherein the semitransparent blending algorithm is an alpha blending algorithm.
4. The method of 2D rendering of a 3D model of claim 1, wherein before blending and superimposing the blended N semitransparent pixel fragments with the target model, the method further comprises:
setting the transparency of the target model to transparent.
5. The method of 2D rendering of a 3D model of claim 1, wherein blending and superimposing the sorted N semitransparent pixel fragments comprises:
blending and superimposing the N semitransparent pixel fragments in order from farthest to nearest the virtual camera.
6. The method of 2D rendering of a 3D model of claim 2, wherein blending and superimposing the 3D model with an underlying background comprises:
determining a source transparent blending factor corresponding to the 3D model, and determining a target transparent blending factor corresponding to the underlying background;
and blending and superimposing the 3D model with the underlying background according to the source transparent blending factor and the target transparent blending factor.
7. The method of 2D rendering of a 3D model of claim 6, wherein when the 3D model is blended and superimposed with the underlying background based on the semitransparent blending algorithm, determining the target transparent blending factor corresponding to the underlying background comprises:
determining the target transparent blending factor based on the semitransparent blending algorithm and the source transparent blending factor.
8. The method of 2D rendering of a 3D model of claim 6, wherein when the 3D model is blended and superimposed with the underlying background based on the additive blending algorithm, determining the source transparent blending factor corresponding to the 3D model and the target transparent blending factor corresponding to the underlying background comprises:
setting the source transparent blending factor to 0 and the target transparent blending factor to 1.
9. An apparatus for 2D rendering of a 3D model, comprising:
a 3D rendering module, configured to acquire N semitransparent pixel fragments obtained by 3D rendering of a pre-generated 3D graphics object;
a sorting module, configured to sort the N semitransparent pixel fragments by distance from the virtual camera;
a blending module, configured to blend and superimpose the sorted N semitransparent pixel fragments based on a semitransparent blending algorithm or an additive blending algorithm; set the color of a preset target model to black, and blend and superimpose the blended N semitransparent pixel fragments with the target model based on a semitransparent blending algorithm or an additive blending algorithm to obtain a 3D model, wherein the target model is a pre-modeled basic model; and blend and superimpose the 3D model with an underlying background;
a 2D rendering module, configured to render the blended 3D model and underlying background based on a semitransparent blending algorithm or an additive blending algorithm to obtain a target 2D rendered image;
wherein N is a natural number greater than 1.
10. An electronic device, comprising:
a memory; and
A processor coupled to the memory, the processor configured to perform the method of 2D rendering of a 3D model as claimed in any one of claims 1-8 based on instructions stored in the memory.
11. A computer readable storage medium having stored thereon a program which when executed by a processor implements a method of 2D rendering a 3D model according to any of claims 1-8.
CN202111107616.5A 2021-09-22 2021-09-22 Method, device, equipment and medium for 2D rendering of 3D model Active CN113822961B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111107616.5A CN113822961B (en) 2021-09-22 2021-09-22 Method, device, equipment and medium for 2D rendering of 3D model


Publications (2)

Publication Number Publication Date
CN113822961A (en) 2021-12-21
CN113822961B (en) 2024-04-26

Family

ID=78915127

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111107616.5A Active CN113822961B (en) 2021-09-22 2021-09-22 Method, device, equipment and medium for 2D rendering of 3D model

Country Status (1)

Country Link
CN (1) CN113822961B (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005327125A (en) * 2004-05-14 2005-11-24 Mitsubishi Precision Co Ltd Collision detecting method and collision detecting apparatus
AU2013237644A1 (en) * 2007-08-29 2013-10-24 Setred As Rendering improvement for 3D display
CN101281545A (en) * 2008-05-30 2008-10-08 清华大学 Three-dimensional model search method based on multiple characteristic related feedback
CN103559730A (en) * 2013-11-20 2014-02-05 广州博冠信息科技有限公司 Rendering method and device
CN104240276A (en) * 2014-09-04 2014-12-24 无锡梵天信息技术股份有限公司 Screen-space-based method for simulating real skin of figure through sub-surface scattering
CN106725565A (en) * 2016-11-18 2017-05-31 天津大学 A kind of cone-beam XCT imaging quality assessment methods under sparse projection
CN109903347A (en) * 2017-12-08 2019-06-18 北大方正集团有限公司 A kind of colour-mixed method, system, computer equipment and storage medium
CN110443893A (en) * 2019-08-02 2019-11-12 广联达科技股份有限公司 Extensive building scene rendering accelerated method, system, device and storage medium
CN110443877A (en) * 2019-08-06 2019-11-12 网易(杭州)网络有限公司 Method, apparatus, terminal device and the storage medium of model rendering
CN111105491A (en) * 2019-11-25 2020-05-05 腾讯科技(深圳)有限公司 Scene rendering method and device, computer readable storage medium and computer equipment
CN112258621A (en) * 2020-10-19 2021-01-22 北京声影动漫科技有限公司 Method for observing three-dimensional rendering two-dimensional animation in real time
CN112346811A (en) * 2021-01-08 2021-02-09 北京小米移动软件有限公司 Rendering method and device

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A Virtual Reality System for Improved Image-Based Planning of Complex Cardiac Procedures; Deng Shujie; Journal information; full text *
A calibration method for a hybrid CBCT imaging system (一种混合CBCT成像系统标定方法); Qiu Qing; Yan Shiju; Chinese Journal of Medical Imaging Technology; 2014-01-20 (01); full text *
Single-pass efficient rendering of order-independent transparency with a CUDA renderer (基于CUDA渲染器的顺序独立透明现象的单遍高效绘制); Huang Mengcheng; Liu Fang; Liu Xuehui; Wu Enhua; Journal of Software; 2011-08-15 (08); full text *
Feasibility analysis of introducing next-generation modeling technology into virtual reality (虚拟现实引入次世代建模技术的可行性分析); Bian Yan; Science and Technology Innovation (28); full text *
High-precision 3D seismic survey (I): data acquisition (高精度三维地震(I)：数据采集); Xiong Zhu; Progress in Exploration Geophysics (01); full text *

Also Published As

Publication number Publication date
CN113822961A (en) 2021-12-21

Similar Documents

Publication Publication Date Title
CN103678631B (en) page rendering method and device
KR101528215B1 (en) Method for displaying a 3d scene graph on a screen
JP2004038926A (en) Texture map editing
CN109887062B (en) Rendering method, device, equipment and storage medium
CN111583381B (en) Game resource map rendering method and device and electronic equipment
US7064755B2 (en) System and method for implementing shadows using pre-computed textures
CN111047509A (en) Image special effect processing method and device and terminal
CN105631923A (en) Rendering method and device
CN112274934B (en) Model rendering method, device, equipment and storage medium
CN111583378B (en) Virtual asset processing method and device, electronic equipment and storage medium
CN112734896A (en) Environment shielding rendering method and device, storage medium and electronic equipment
US8004515B1 (en) Stereoscopic vertex shader override
CN112580213B (en) Method and device for generating display image of electric field lines and storage medium
CN113822961B (en) Method, device, equipment and medium for 2D rendering of 3D model
CN116630516B (en) 3D characteristic-based 2D rendering ordering method, device, equipment and medium
CN114998504B (en) Two-dimensional image illumination rendering method, device and system and electronic device
CN113724364B (en) Setting method and device for realizing shielding and rendering-free body by utilizing polygons
CN116485967A (en) Virtual model rendering method and related device
US6731297B1 (en) Multiple texture compositing
CN117793442B (en) Image video masking method, device, equipment and medium based on point set
CN117032617B (en) Multi-screen-based grid pickup method, device, equipment and medium
US8427490B1 (en) Validating a graphics pipeline using pre-determined schedules
CA2308249C (en) Triangle strip length maximization
US20230316597A1 (en) Method and apparatus for rendering hair, computer storage medium, electronic device
CN114119828A (en) Semitransparent object rendering sorting method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant