CN116468839A - Model rendering method and device, storage medium and electronic device

Model rendering method and device, storage medium and electronic device

Info

Publication number
CN116468839A
Authority
CN
China
Prior art keywords
target
result
sampling
model
rendering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310351816.8A
Other languages
Chinese (zh)
Inventor
林琳 (Lin Lin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202310351816.8A
Publication of CN116468839A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/04 - Texture mapping
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/50 - Lighting effects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/50 - Lighting effects
    • G06T 15/55 - Radiosity
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The application discloses a model rendering method and device, a storage medium and an electronic device. The method comprises the following steps: obtaining a target texture map, wherein the target texture map is used for storing texture coordinate information of an initial model; sampling the target texture map to obtain a target sampling result, wherein the target sampling result is used for determining the mapping position of a target image in the initial model, and the target image is used for displaying a plurality of elements to be rendered corresponding to the initial model; and rendering and outputting the initial model based on the target sampling result and the target image to obtain a target model. The method and the device solve the technical problem of high performance consumption of three-dimensional model rendering in the related art.

Description

Model rendering method and device, storage medium and electronic device
Technical Field
The present invention relates to the field of computers, and in particular, to a model rendering method, a device, a storage medium, and an electronic device.
Background
Currently, in the field of games, rendering a three-dimensional sky model has become an important part of game scene rendering. In a game scene, it is generally necessary to present a glow effect of the sun or the moon to make the sky look realistic, and a conventional three-dimensional model glow effect is generally obtained by assuming that the three-dimensional model is circular and giving it a soft, diffuse glow. In the related art, a particle system or a billboard is generally used to make a three-dimensional model with a glow effect, but the performance consumption of both production methods is high.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
At least some embodiments of the present application provide a method, an apparatus, a storage medium, and an electronic device for rendering a model, so as to at least solve a technical problem in the related art that performance consumption of a rendering mode of a three-dimensional model is high.
According to one embodiment of the present application, there is provided a model rendering method, including: obtaining a target texture map, wherein the target texture map is used for storing texture coordinate information of an initial model; sampling the target texture mapping to obtain a target sampling result, wherein the target sampling result is used for determining the mapping position of a target image in an initial model, and the target image is used for displaying a plurality of elements to be rendered corresponding to the initial model; and rendering and outputting the initial model based on the target sampling result and the target image to obtain a target model.
According to one embodiment of the present application, there is further provided a model rendering apparatus, including an obtaining module, configured to obtain a target texture map, where the target texture map is used to store texture coordinate information of an initial model; the sampling module is used for carrying out sampling processing on the target texture mapping to obtain a target sampling result, wherein the target sampling result is used for determining the mapping position of a target image in the initial model, and the target image is used for displaying a plurality of elements to be rendered corresponding to the initial model; and the rendering module is used for rendering and outputting the initial model based on the target sampling result and the target image to obtain a target model.
According to one embodiment of the present application, there is also provided a computer readable storage medium having a computer program stored therein, wherein the computer program is configured to perform the model rendering method of any one of the above when being executed by a processor.
According to one embodiment of the present application, there is also provided an electronic device including a memory having a computer program stored therein and a processor configured to run the computer program to perform the model rendering method in any of the above.
In at least some embodiments of the present application, a target texture map is obtained, and then the target texture map is sampled to obtain a target sampling result, and finally, an initial model is rendered and output based on the target sampling result and the target image to obtain a target model, so that the purpose of rendering the model is achieved, the technical effect of reducing performance consumption while guaranteeing the rendering effect of the model is achieved, and further, the technical problem of higher performance consumption of a rendering mode of a three-dimensional model in the related art is solved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1 is a schematic diagram of a method of fabricating a solar effect in the related art;
FIG. 2 is a schematic diagram of a method for producing a moon effect in the related art;
FIG. 3 is a schematic diagram of another method for producing a solar effect in the related art;
fig. 4 is a hardware structure block diagram of a mobile terminal of a model rendering method according to an embodiment of the present application;
FIG. 5 is a flow chart of a model rendering method according to one embodiment of the present application;
FIG. 6 is a schematic diagram of a target texture map according to one embodiment of the present application;
FIG. 7 is a schematic diagram of a spatial transformation according to one embodiment of the present application;
FIG. 8 is a schematic illustration of determining a first parameter according to one embodiment of the present application;
FIG. 9 is a schematic diagram of determining a second parameter according to one embodiment of the present application;
FIG. 10 is a schematic diagram of selecting sample setup information according to one embodiment of the present application;
FIG. 11 is a block diagram of a model rendering device according to an alternative embodiment of the present application;
fig. 12 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the solution of the present application better understood by those skilled in the art, the technical solution in the embodiments of the present application will be described below in detail with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by one of ordinary skill in the art based on the embodiments herein without making any inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some of the terms or terminology appearing in the description of the embodiments of the present application are explained as follows:
UE4: a game engine, Unreal Engine 4, abbreviated as UE4, commonly used for game development;
Game engine: the core component of an editable computer game system or an interactive real-time graphics application. A game engine provides game designers with the various tools required to write games, so that they can easily and quickly build game programs without starting from scratch;
Three-dimensional model: a three-dimensional model is a polygonal representation of an object, typically displayed with a computer or other video device;
Sky sphere: in games, the sky is usually represented by a hemispherical or spherical model, so the model representing the sky is called a sky sphere;
UE engine particle system: an engine feature of the UE engine used to express special effects by emitting particles;
Billboard: a material behavior in which a model, once given the material, behaves like a billboard, that is, a flat surface that rotates with the camera and always faces the camera. For example, the sky model glow in this scheme appears to face the camera no matter the angle from which it is viewed.
UV: in three-dimensional modeling, "UV" can be understood as the "skin" of a three-dimensional model, which is unfolded, drawn on a two-dimensional plane and then applied to the object. "UV" here is short for the u, v texture map coordinates (similar to the X, Y, Z axes of a spatial model), which define the position of every point on the picture in relation to the 3D model and thereby determine the position of the surface texture map. UV mapping precisely maps each point of the image onto the surface of the model object, and the gaps between the points are filled by the software with smooth image interpolation. In the engine, UV occupies only two channels, the R and G channels. When a picture is sampled with UV, the sampling result of the picture can be changed by editing the UV map.
In the related art, in the field of games, the following three ways of producing sky effects can be generally adopted:
Mode one: the sky effect is produced using a conventional method. Fig. 1 is a schematic diagram of a method for producing a sun effect in the related art. As shown in fig. 1, to produce the sun effect, a circular area can be calculated by using Distance(PositionWS, sunPosWS); the circular area is drawn as the shape of the sun, and a slightly larger circle is drawn as the halo of the sun, and the two are combined into the glow effect of the sun.
For producing the moon effect, fig. 2 is a schematic diagram of a method for producing a moon effect in the related art. As shown in fig. 2, the moon can be drawn at a fixed size in screen-space texture coordinates (UV), but a moon obtained in this way keeps the same rotation angle at every viewing angle and is only suitable for the case where the camera does not rotate.
A more general method for producing the moon effect builds UV from the light direction and the view direction and uses a complex algorithm to calculate the angle at which the sun rises in the east and sets in the west, so as to estimate the correct position of the moon.
Because the whole sky system is too complex, this way of producing the sky effect is only suitable for PC games and is too heavy for mobile games.
Mode two: the sun effect is produced using billboards or a particle system. Both approaches give a plane model a material, implement the billboard behavior in the material (i.e., the plane always faces the camera), and add a transparent material to create a pseudo glow effect superimposed on the sun. The disadvantage of this method is that hanging the sun in the sky with particles and building the glow from two layers of transparent material easily causes high performance consumption.
Mode three: the sun effect is produced using a post-process volume. Fig. 3 is a schematic diagram of another method for producing a sun effect in the related art. The post-process volume is a post-processing tool of the UE engine; after the corresponding material is made, the glow effect of the sun is drawn in post-processing. In addition, it is difficult to ensure that this method works on all models of mobile phones, so its versatility is low.
In a possible implementation, based on the above, the inventor proposes a model rendering method whose technical concept is: obtain a target texture map, sample the target texture map to obtain a target sampling result, and finally render and output an initial model based on the target sampling result and a target image to obtain a target model, thereby achieving the purpose of rendering the model, achieving the technical effect of reducing performance consumption while guaranteeing the rendering effect, and solving the technical problem of high performance consumption of three-dimensional model rendering in the related art.
The above-described method embodiments to which the present disclosure relates may be performed in a mobile terminal, a computer terminal or similar computing device. Taking the mobile terminal as an example, the mobile terminal can be a smart phone, a tablet computer, a palm computer, a mobile internet device, a PAD, a game machine and other terminal devices.
Fig. 4 is a block diagram of a hardware structure of a mobile terminal of a model rendering method according to an embodiment of the present application. As shown in fig. 4, the mobile terminal may include one or more (only one is shown in fig. 4) processors 402 (the processor 402 may include, but is not limited to, a central processing unit (Central Processing Unit, CPU), a graphics processor (Graphics Processing Unit, GPU), a digital signal processing (Digital Signal Processing, DSP) chip, a microprocessor (Micro Controller Unit, MCU), a programmable logic device (Field Programmable Gate Array, FPGA), a Neural network processor (Neural-network Processor Unit, NPU), a tensor processor (Tensor Processing Unit, TPU), an artificial intelligence (Artificial Intelligence, AI) type processor, etc.), and a memory 404 for storing data, and in one embodiment of the present application may further include: input output device 408 and display device 410.
In some optional embodiments based on game scenes, the device may further provide a human-machine interaction interface with a touch-sensitive surface, where the human-machine interaction interface can sense finger contacts and/or gestures to interact with a graphical user interface (GUI). The human-machine interaction functions may include interactions such as creating web pages, drawing, word processing, creating electronic documents, games, video conferencing, instant messaging, sending and receiving e-mail, call interfaces, playing digital video, playing digital music, and/or web browsing; executable instructions for performing these human-machine interaction functions are configured/stored in a computer program product or readable storage medium executable by one or more processors.
It will be appreciated by those skilled in the art that the structure shown in fig. 4 is merely illustrative and not limiting on the structure of the mobile terminal described above. For example, the mobile terminal may also include more or fewer components than shown in fig. 4, or have a different configuration than shown in fig. 4.
According to one embodiment of the present application, an embodiment of a model rendering method is provided. It should be noted that the steps shown in the flowcharts of the figures may be performed in a computer system, such as one executing a set of computer-executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that shown or described herein.
In one possible implementation, the embodiment of the application provides a model rendering method. FIG. 5 is a flow chart of a model rendering method according to one embodiment of the present application, as shown in FIG. 5, the method comprising the steps of:
in step S52, a target texture map is acquired, where the target texture map is used to store texture coordinate information of the initial model.
In step S52, the target texture map may be used to store texture coordinate information of the initial model, where the texture coordinate information may include corrected texture map coordinates (UV) of the initial model.
In a preferred embodiment, the initial model may be a spherical or hemispherical initial sky sphere model, and the texture coordinate information of the initial sky sphere model may be stored using a target texture map, where the target texture map may be a sun map, a sun glow map, a sky background map, etc.
FIG. 6 is a schematic diagram of a target texture map according to one embodiment of the present application. As shown in FIG. 6, because the sky sphere model is spherical, its surface cannot be unfolded into a square without stretching, so there is a UV distortion problem. Therefore, the texture map coordinates (UV) of the sky sphere model need to be corrected to solve the UV distortion problem.
Step S54, sampling processing is carried out on the target texture mapping to obtain a target sampling result, wherein the target sampling result is used for determining the mapping position of a target image in the initial model, and the target image is used for displaying a plurality of elements to be rendered corresponding to the initial model.
In an alternative embodiment, the plurality of elements to be rendered include any of: a virtual sun, virtual light, and a virtual sky background.
In the step S54, after the target texture map is obtained, a sampling process may be performed on the target texture map to obtain a target sampling result, where the target sampling result may be used to determine a mapping position of the target image in the initial model.
Specifically, the target image may be a glow map, a sun map and a sky background map, and may be used to display a plurality of elements to be rendered corresponding to the initial model.
For example, after sampling the target texture map, the sampling results may be used to determine the mapping locations of the glow map, solar map, and sky background map in the initial sky sphere model.
And step S56, rendering and outputting the initial model based on the target sampling result and the target image to obtain a target model.
In the step S56, after the target sampling result is obtained, the initial model may be rendered and output based on the target sampling result and the target image, so as to obtain the target model.
In an alternative embodiment, the target model is a sky sphere model whose sun glow faces the camera no matter the angle from which it is viewed.
For example, the initial sky sphere model can be rendered based on the mapping positions of the glow map, the sun map and the sky background map in the initial sky sphere model together with those maps, so as to obtain a sky sphere model whose sun glow faces the camera when viewed from any angle.
Based on the steps S52 to S56, the target texture map is obtained, then the target texture map is sampled to obtain a target sampling result, and finally the initial model is rendered and output based on the target sampling result and the target image to obtain a target model, so that the purpose of rendering the model is achieved, the technical effect of reducing performance consumption while ensuring the rendering effect is achieved, and the technical problem of higher performance consumption of the rendering mode of the three-dimensional model in the related technology is solved.
The model rendering method in the above embodiment is further described below.
Optionally, in step S52, acquiring the target texture map includes:
step S521, obtaining original coordinate information corresponding to the initial model.
In the step S521, when the target texture map is acquired, the original coordinate information corresponding to the initial model may be acquired first, where the original coordinate information corresponding to the initial model is the absolute world position coordinate and may be denoted as (x, y, z).
The absolute world position coordinates (x, y, z) may be calculated according to a conversion formula of the spherical coordinates and the rectangular coordinates, wherein the conversion formula is shown as formula (1).
where (r, θ, φ) denotes the spherical coordinates corresponding to the initial model.
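For reference, a standard form of this conversion (assuming spherical coordinates (r, θ, φ), with θ the azimuth angle and φ the polar angle measured from the z axis) is:

x = r·sin(φ)·cos(θ)
y = r·sin(φ)·sin(θ)
z = r·cos(φ)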
Step S522, performing space conversion on the original coordinate information to obtain a first channel vector and a second channel vector.
In the step S522, after the original coordinate information corresponding to the initial model is obtained, the original coordinate information may be spatially converted to obtain a first channel vector and a second channel vector, where the first channel vector is an R channel vector, and the second channel vector is a G channel vector.
In an alternative embodiment, spatially converting the original coordinate information includes: converting the original coordinate information from the absolute scene space to a local space, and then from the local space to a view space. After the conversion from the absolute scene space to the local space, the UV does not move arbitrarily; after the conversion from the local space to the view space, the UV moves with the camera and always faces the camera. After the spatial conversion of the original coordinate information, an R channel vector and a G channel vector can be obtained.
Fig. 7 is a schematic diagram of spatial transformation according to one embodiment of the present application, as shown in fig. 7, original coordinate information corresponding to an initial model may be input in an absolute world position coordinate frame, so that the original coordinate information may be transformed from an absolute scene space into a local space, then from the local space into a viewing space, and finally an R channel vector and a G channel vector may be obtained.
In step S523, data conversion is performed on the second channel vector to obtain a conversion result.
In the step S523, after performing spatial conversion on the original coordinate information to obtain the first channel vector and the second channel vector, data conversion may be performed on the second channel vector to obtain a conversion result.
Specifically, the second channel vector is the G channel vector, and data conversion may be performed on it. The G channel vector needs data conversion because it is upside down; as shown in fig. 7, the G channel vector may be converted by using 1-x, so as to obtain a converted G channel vector.
Step S524, merging processing is performed based on the first channel vector and the conversion result, and the target texture map is obtained.
In step S524, after the data conversion is performed on the second channel vector to obtain the conversion result, the merging process may be performed based on the first channel vector and the conversion result to obtain the target texture map.
Specifically, as shown in fig. 7, after converting the G channel vector by 1-x, the R channel vector and the converted G channel vector may be combined, thereby obtaining the corrected UV.
Based on the above steps S521 to S524, the original coordinate information corresponding to the initial model is obtained, the original coordinate information is spatially converted to obtain a first channel vector and a second channel vector, the second channel vector is subjected to data conversion to obtain a conversion result, and finally the target texture map is obtained by merging the first channel vector and the conversion result, so that the UV distortion of the initial model can be corrected and the UV of the initial model always faces the camera.
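The following is a minimal sketch of steps S521 to S524 (an illustrative reconstruction, not the original material graph; the matrix inputs and function names are hypothetical), showing the world position being carried into view space and the two channels being merged into a corrected UV:

```python
import numpy as np

def corrected_uv(world_pos, world_to_local, local_to_view):
    """Sketch of steps S521-S524: absolute world position -> local space -> view space,
    take the first two channels as R/G, flip G with (1 - x), merge into corrected UV."""
    ones = np.ones(world_pos.shape[:-1] + (1,))
    p = np.concatenate([world_pos, ones], axis=-1)   # homogeneous coordinates
    local = p @ world_to_local.T                     # absolute scene space -> local space
    view = local @ local_to_view.T                   # local space -> view space (follows camera)
    r = view[..., 0]                                 # first channel vector (R)
    g = 1.0 - view[..., 1]                           # second channel vector (G), converted with 1 - x
    return np.stack([r, g], axis=-1)                 # merged result: corrected UV
```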
Optionally, in step S54, the sampling process is performed on the target texture map, and the obtaining a target sampling result includes:
in step S541, a target adjustment parameter is acquired, where the target adjustment parameter is used to rotate and/or scale the target texture map.
In step S541, when the target texture map is sampled to obtain the target sampling result, the target adjustment parameter may be obtained, where the target adjustment parameter is used to rotate and/or scale the target texture map.
Since the glow needs to be rotated according to the change in the viewing angle, the corrected UV of the initial model needs to be rotated.
Specifically, the target adjustment parameters may include a random rotation angle and a random scaling parameter, and the target texture map may be rotated and/or scaled using the random rotation angle and/or the random scaling parameter.
In step S542, the target texture map is adjusted by using the target adjustment parameters to obtain a target adjustment result.
In the step S542, after the target adjustment parameter is obtained, the target texture map may be adjusted by using the target adjustment parameter to obtain a target adjustment result.
Specifically, the corrected UV may be rotated and scaled by using a random rotation angle and a random scaling parameter, or may be rotated by using only a random rotation angle.
Step S543, sampling the target adjustment result to obtain a target sampling result.
In the step S543, after the target texture map is adjusted by using the target adjustment parameter to obtain the target adjustment result, the target adjustment result may be sampled to obtain the target sampling result.
Specifically, by sampling the corrected UV, the mapping position of the target image in the initial model can be determined, including the mapping positions of the sun map, glow map and sky background map in the initial model.
Based on the steps S541 to S543, the target adjustment parameters are obtained, and then the target texture map is adjusted by using the target adjustment parameters, so as to obtain a target adjustment result, and finally the target adjustment result is sampled, so as to obtain a target sampling result, and further, the mapping position of the target image in the initial model can be determined according to the target sampling result.
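As a sketch of steps S541 to S543 (illustrative only; the pivot point, default values and the reuse of the sample_texture helper from the earlier sketch are assumptions), the corrected UV is rotated and/or scaled around a pivot and the adjusted UV is then sampled:

```python
import numpy as np

def adjust_and_sample(texture, uv, angle, scale=1.0, center=(0.5, 0.5)):
    """Sketch of steps S541-S543: rotate (and optionally scale) the corrected UV
    around a pivot point, then sample the texture at the adjusted UV."""
    c = np.asarray(center)
    rot = np.array([[np.cos(angle), -np.sin(angle)],
                    [np.sin(angle),  np.cos(angle)]])
    adjusted = (uv - c) @ rot.T / scale + c    # target adjustment result
    return sample_texture(texture, adjusted)   # target sampling result
```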
Optionally, in step S541, the target adjustment parameters further include: a first parameter, wherein the first parameter is used for determining a rotation angle corresponding to the target texture map, and obtaining the first parameter comprises the following steps:
in step S5411, a first position, a second position, and a target radius are acquired, where the first position is used to represent a center position of the target object, the second position is used to represent a camera position, and the target radius is used to represent a radius of the target object.
The first parameter is a random parameter value related to the center position of the target object, the camera position and the radius of the target object, and can be used for randomly rotating the target texture map, wherein the center position of the target object is its center position in world scene space, the camera position is the position of the camera in the world scene space, and the radius of the target object is measured in Unreal units.
Step S5412 determines a first parameter based on the first location, the second location, and the target radius.
In step S5412 described above, after the first position, the second position, and the target radius are acquired, the first parameter may be determined based on the first position, the second position, and the target radius, and in particular, the first parameter may be determined according to fig. 8.
FIG. 8 is a schematic diagram of determining a first parameter according to one embodiment of the present application. As shown in FIG. 8, (center position of the target object - camera position) × 1 is a random value related to the center position of the target object and the camera position; BreakOutFloat2Components and Arctangent2Fast are then performed to obtain a first value. The luminance_rotation speed is multiplied by 1e-05, multiplied by the radius of the target object, and then multiplied by the first value, so that a random parameter value related to the camera position, the center position of the target object and the radius of the target object, i.e., the first parameter, is obtained; it is used for determining the rotation angle corresponding to the corrected UV, so that the corrected UV is randomly rotated by this angle.
Based on the steps S5411 to S5412, the first parameter is determined by acquiring the first position, the second position and the target radius, and then the target texture map can be rotated by using the first parameter.
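A minimal sketch of the FIG. 8 logic described above (an illustrative reading of the material graph; the function signature and parameter names are hypothetical, and the actual node wiring may differ):

```python
import math

def first_parameter(object_center, camera_position, radius, rotation_speed):
    """Rotation-angle parameter derived from the object's center, the camera
    position and the object's radius, per the FIG. 8 description."""
    dx = object_center[0] - camera_position[0]
    dy = object_center[1] - camera_position[1]
    first_value = math.atan2(dy, dx)          # Arctangent2Fast over the broken-out components
    return rotation_speed * 1e-05 * radius * first_value
```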
Optionally, in step S541, the target adjustment parameters further include: a second parameter, wherein the second parameter is used for determining a scaling factor corresponding to the target texture map, and obtaining the second parameter comprises the following steps:
in step S5413, the first position, the view size, and the scaling factor are acquired.
In the step S5413, the second parameter is a random parameter value related to the center position of the target object, the view size, and the scaling factor, and may be used to scale the target texture map, where the view size is a two-dimensional vector representing the size of the view in units of pixels.
Step S5414 determines a second parameter based on the first position, the view size, and the scaling factor.
In step S5414 described above, after the first position, the view size, and the scaling factor are acquired, the second parameter may be determined based on the first position, the view size, and the scaling factor, and in particular, the second parameter may be determined according to fig. 9.
Fig. 9 is a schematic diagram of determining a second parameter according to one embodiment of the present application, where after the center position of the target object is spatially transformed, the second parameter is obtained by dividing the center position of the target object by the view size, taking the absolute value, and multiplying the absolute value by a scaling coefficient after a series of calculations, so that the target texture map can be scaled by using the second parameter.
Based on the above steps S5413 to S5414, the target texture map can be scaled using the second parameter by acquiring the first position, the view size, and the scaling factor, and further determining the second parameter based on the first position, the view size, and the scaling factor.
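A minimal sketch of the FIG. 9 logic (illustrative only; the "series of calculations" mentioned above is simplified here, and the names are hypothetical):

```python
import numpy as np

def second_parameter(object_center_view, view_size, scale_coefficient):
    """Scale factor from the transformed object center, the view size in pixels
    and a scaling coefficient, per the FIG. 9 description (simplified)."""
    ratio = np.abs(np.asarray(object_center_view[:2], dtype=float) /
                   np.asarray(view_size, dtype=float))
    return float(ratio.mean()) * scale_coefficient
```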
Optionally, in step S542, adjusting the target texture map with the target adjustment parameter, and obtaining the target adjustment result includes:
in step S5421, the target texture map is adjusted by using the first parameter and the second parameter to obtain a first adjustment result.
In the step S5421, after determining the first parameter and the second parameter, the target texture map may be adjusted by using the first parameter and the second parameter to obtain a first adjustment result, where adjusting the target texture map by using the first parameter and the second parameter may include: the target texture map is randomly rotated and/or randomly scaled.
In step S5422, the target texture map is adjusted by using the first parameter to obtain a second adjustment result.
In the above step S5422, the target texture map is adjusted by using the first parameter to obtain the second adjustment result, where adjusting the target texture map with the first parameter may include: randomly rotating the target texture map.
Step S5423 obtains a target adjustment result based on the first adjustment result and the second adjustment result.
In step S5423 described above, after the first adjustment result and the second adjustment result are obtained, the target adjustment result may be obtained based on the first adjustment result and the second adjustment result.
Based on the above steps S5421 to S5423, the target texture map is adjusted by using the first parameter and the second parameter to obtain a first adjustment result, and the target texture map is adjusted by using the first parameter to obtain a second adjustment result, and finally the target adjustment result is obtained based on the first adjustment result and the second adjustment result, so that the target texture map can be randomly rotated and/or randomly scaled.
Optionally, in step S56, the target sampling result includes: a first sampling result and a second sampling result, wherein the first sampling result is obtained by sampling the first adjustment result, the second sampling result is obtained by sampling the second adjustment result, and the target image includes a first image, a second image and a third image; rendering and outputting the initial model based on the target sampling result and the target image to obtain the target model comprises the following steps:
In step S561, the initial model is rendered and output by using the first sampling result and the first image, so as to obtain a first rendering result, where the first image is used to display the virtual light.
In the step S561, after the first adjustment result is sampled, a first sampling result may be obtained, and further the first sampling result and the first image may be used to render and output the initial model, so as to obtain a first rendering result.
Specifically, after the corrected UV after random rotation and random scaling is sampled, the mapping position of the virtual light in the initial model can be determined, and then the initial model is rendered and output by using the mapping position of the virtual light and the first image for displaying the virtual light, so that the glow effect after rendering can be obtained. For example, the initial sky sphere model can be rendered and output by using the mapping position of the virtual light and the glow map, so as to obtain a glow effect after rendering.
And step S562, rendering and outputting the initial model by using the second sampling result and a second image to obtain a second rendering result, wherein the second image is used for displaying the virtual sun.
In the step S562, a second sampling result may be obtained after the second adjustment result is sampled, and then the initial model may be rendered and output by using the second sampling result and the second image, so as to obtain a second rendering result.
Specifically, after the corrected UV after random rotation is sampled, the mapping position of the virtual sun in the initial model can be determined, and then the initial model is rendered and output by using the mapping position of the virtual sun and the second image for displaying the virtual sun, so that the rendered solar effect can be obtained. For example, the initial sky sphere model can be rendered and output by using the mapping position of the virtual sun and the sun map, so as to obtain the rendered solar effect.
Step S563, performing superposition processing on the first rendering result, the second rendering result and the third image to obtain a rendering output file, where the third image is used to display the virtual sky background.
In the step S563, after the first rendering result and the second rendering result are obtained, the first rendering result, the second rendering result, and the third image may be subjected to superposition processing, so as to obtain a rendering output file, where the third image is used to display the virtual sky background.
For example, the rendered glow effect, the rendered solar effect, and a third image for displaying the virtual sky background may be superimposed to obtain a rendered output file including the glow effect, the solar effect, and the sky background effect.
And S564, displaying the rendered output file by using the graphic user interface to obtain a target model.
For example, the rendered glow effect, sun effect and sky background effect can be displayed in the game interface, so that a sky sphere model with the glow effect, the sun effect and the sky background effect is obtained, and the sun glow faces the camera when viewed from any angle.
Based on the above steps S561 to S564, the initial model is rendered and output by using the first sampling result and the first image to obtain a first rendering result; the initial model is rendered and output by using the second sampling result and the second image to obtain a second rendering result; the first rendering result, the second rendering result and the third image are then superposed to obtain a rendering output file; finally, the rendering output file is displayed on the graphical user interface to obtain the target model. A sky sphere model with a glow effect, a sun effect and a sky background effect is thereby obtained, and the sun glow faces the camera no matter the angle from which it is viewed.
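A minimal sketch of the superposition in steps S561 to S564 (an illustrative alpha-over compositing of the two rendering results onto the background; the blending mode and names are assumptions):

```python
import numpy as np

def composite_sky(glow_rgba, sun_rgba, sky_background_rgb):
    """Blend the rendered glow (first rendering result) and sun (second rendering
    result) over the sky background image to form the rendering output."""
    def over(dst_rgb, src_rgba):
        rgb, a = src_rgba[..., :3], src_rgba[..., 3:4]
        return rgb * a + dst_rgb * (1.0 - a)

    out = sky_background_rgb
    out = over(out, glow_rgba)   # superpose the glow effect
    out = over(out, sun_rgba)    # superpose the sun effect
    return np.clip(out, 0.0, 1.0)
```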
Optionally, in step S543, the sampling process is performed on the target adjustment result, and obtaining the target sampling result includes:
In step S5431, sampling setting information is acquired, wherein the sampling setting information is used to determine a sampling range.
In step S5431 described above, when sampling the target adjustment result, sampling setting information for determining the sampling range may be acquired.
Specifically, the sampling setting information may include: a Mip value mode, a sampler source, automatic Mip bias, constant coordinates, a constant Mip value and the glow map, and the sampling range is determined according to the sampling setting information.
FIG. 10 is a schematic diagram of selecting sampling setting information according to one embodiment of the present application. As shown in FIG. 10, the Mip value mode may be set to None; the sampler source may be set to Shared: Clamp, so that the map appears only in the first quadrant of UV space and does not repeat; automatic Mip bias is checked; the constant coordinates are set to 0; the constant Mip value is set to -1; and the glow map is selected.
In step S5432, the target adjustment result is sampled based on the sampling setting information, so as to obtain a target sampling result.
In step S5432, after the sampling setting information is acquired, the target adjustment result may be sampled based on the sampling setting information, so as to obtain a target sampling result.
Specifically, the target texture map subjected to random rotation and/or random scaling may be sampled by using the sampling setting information, so that mapping positions of the virtual light, the virtual sun and the virtual sky background in the initial model may be determined.
Based on the above steps S5431 to S5432, by acquiring the sampling setting information and further performing sampling processing on the target adjustment result based on the sampling setting information, a target sampling result is obtained, and the sampling range can be limited, so that the mapping positions of the virtual glow, the virtual sun and the virtual sky background in the initial model can be determined.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method described in the embodiments of the present application.
The embodiment also provides a model rendering device, which is used for implementing the above embodiment and the preferred implementation manner, and is not described in detail. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 11 is a block diagram of a model rendering apparatus according to one embodiment of the present application, as shown in fig. 11, the apparatus includes: an obtaining module 1101, configured to obtain a target texture map, where the target texture map is used to store texture coordinate information of an initial model; the sampling module 1102 is configured to perform sampling processing on the target texture map to obtain a target sampling result, where the target sampling result is used to determine a mapping position of a target image in the initial model, and the target image is used to display a plurality of elements to be rendered corresponding to the initial model; and the rendering module 1103 is configured to render and output the initial model based on the target sampling result and the target image, so as to obtain a target model.
Optionally, the obtaining module 1101 is further configured to obtain original coordinate information corresponding to the initial model; performing space conversion on the original coordinate information to obtain a first channel vector and a second channel vector; performing data conversion on the second channel vector to obtain a conversion result; and combining based on the first channel vector and the conversion result to obtain the target texture map.
Optionally, the sampling module 1102 is further configured to obtain a target adjustment parameter, where the target adjustment parameter is used to rotate and/or scale the target texture map; adjusting the target texture mapping by using the target adjustment parameters to obtain a target adjustment result; and sampling the target adjustment result to obtain a target sampling result.
Optionally, the target adjustment parameters further include: a first parameter, where the first parameter is used to determine a rotation angle corresponding to the target texture map, and the sampling module 1102 is further configured to obtain the first parameter by: acquiring a first position, a second position and a target radius, wherein the first position is used for representing the center position of the target object, the second position is used for representing the camera position, and the target radius is used for representing the radius of the target object; and determining the first parameter based on the first position, the second position and the target radius.
Optionally, the target adjustment parameters further include: a second parameter, where the second parameter is used to determine a scaling factor corresponding to the target texture map, and the sampling module 1102 is further configured to obtain the second parameter by: acquiring the first position, a view size and a scaling coefficient; and determining the second parameter based on the first position, the view size and the scaling coefficient.
Optionally, the sampling module 1102 is further configured to adjust the target texture map with a target adjustment parameter, where obtaining the target adjustment result includes: adjusting the target texture mapping by using the first parameter and the second parameter to obtain a first adjustment result; adjusting the target texture mapping by using the first parameter to obtain a second adjustment result; and obtaining a target adjustment result based on the first adjustment result and the second adjustment result.
Optionally, the target sampling result includes: the method includes the steps of obtaining a first sampling result and a second sampling result, wherein the first sampling result is obtained by sampling the first adjustment result, the second sampling result is obtained by sampling the second adjustment result, the target image includes a first image, a second image and a third image, the rendering module 1103 is further configured to render and output the initial model based on the target sampling result and the target image, and obtaining the target model includes: rendering and outputting the initial model by using the first sampling result and a first image to obtain a first rendering result, wherein the first image is used for displaying virtual rays; rendering and outputting the initial model by using a second sampling result and a second image to obtain a second rendering result, wherein the second image is used for displaying the virtual sun; performing superposition processing on the first rendering result, the second rendering result and a third image to obtain a rendering output file, wherein the third image is used for displaying a virtual sky background; and displaying the rendered output file by using the graphic user interface to obtain a target model.
Optionally, the sampling module 1102 is further configured to obtain sampling setting information, where the sampling setting information is used to determine a sampling range; and carrying out sampling processing on the target adjustment result based on the sampling setting information to obtain a target sampling result.
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; alternatively, the above modules may be located in different processors in any combination.
Embodiments of the present application also provide a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described computer-readable storage medium may include, but is not limited to: a usb disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing a computer program.
Alternatively, in this embodiment, the above-mentioned computer-readable storage medium may be located in any one of the computer terminals in the computer terminal group in the computer network, or in any one of the mobile terminals in the mobile terminal group.
Alternatively, in the present embodiment, the above-described computer-readable storage medium may be configured to store a computer program for performing the steps of:
s1, acquiring a target texture map, wherein the target texture map is used for storing texture coordinate information of an initial model;
s2, sampling the target texture map to obtain a target sampling result, wherein the target sampling result is used for determining the mapping position of a target image in the initial model, and the target image is used for displaying a plurality of elements to be rendered corresponding to the initial model;
and S3, rendering and outputting the initial model based on the target sampling result and the target image to obtain a target model.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: the obtaining the target texture map comprises the following steps: acquiring original coordinate information corresponding to an initial model; performing space conversion on the original coordinate information to obtain a first channel vector and a second channel vector; performing data conversion on the second channel vector to obtain a conversion result; and combining based on the first channel vector and the conversion result to obtain the target texture map.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: sampling the target texture map to obtain a target sampling result, wherein the step of obtaining the target sampling result comprises the following steps: obtaining target adjustment parameters, wherein the target adjustment parameters are used for rotating and/or zooming the target texture map; adjusting the target texture mapping by using the target adjustment parameters to obtain a target adjustment result; and sampling the target adjustment result to obtain a target sampling result.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: the target adjustment parameters further include a first parameter, wherein the first parameter is used for determining a rotation angle corresponding to the target texture map, and obtaining the first parameter comprises the following steps: acquiring a first position, a second position and a target radius, wherein the first position is used for representing the center position of a target object, the second position is used for representing the position of a camera, and the target radius is used for representing the radius of the target object; and determining the first parameter based on the first position, the second position and the target radius.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: the target adjustment parameters further include: the second parameter is used for determining a scaling factor corresponding to the target texture map, and the obtaining the second parameter comprises the following steps: acquiring a first position, a view size and a scale factor; a second parameter is determined based on the first position, the view size, and the scaling factor.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: adjusting the target texture map by using the target adjustment parameters, wherein the obtaining the target adjustment result comprises the following steps: adjusting the target texture mapping by using the first parameter and the second parameter to obtain a first adjustment result; adjusting the target texture mapping by using the first parameter to obtain a second adjustment result; and obtaining a target adjustment result based on the first adjustment result and the second adjustment result.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: the target sampling result includes a first sampling result and a second sampling result, wherein the first sampling result is obtained by sampling the first adjustment result, the second sampling result is obtained by sampling the second adjustment result, and the target image comprises a first image, a second image and a third image; rendering and outputting the initial model based on the target sampling result and the target image to obtain the target model comprises the following steps: rendering and outputting the initial model by using the first sampling result and the first image to obtain a first rendering result, wherein the first image is used for displaying virtual light; rendering and outputting the initial model by using the second sampling result and the second image to obtain a second rendering result, wherein the second image is used for displaying a virtual sun; performing superposition processing on the first rendering result, the second rendering result and the third image to obtain a rendering output file, wherein the third image is used for displaying a virtual sky background; and displaying the rendering output file by using the graphical user interface to obtain the target model.
Optionally, the above computer readable storage medium is further configured to store program code for performing the steps of: sampling the target adjustment result to obtain a target sampling result, wherein the step of obtaining the target sampling result comprises the following steps: acquiring sampling setting information, wherein the sampling setting information is used for determining a sampling range; and carrying out sampling processing on the target adjustment result based on the sampling setting information to obtain a target sampling result.
In the computer-readable storage medium of this embodiment, a technical solution for model rendering is provided: the target texture map is obtained, the target texture map is sampled to obtain a target sampling result, and the initial model is rendered and output based on the target sampling result and the target image to obtain a target model. This achieves the purpose of rendering the model, attains the technical effect of reducing performance consumption while ensuring the rendering effect, and thereby solves the technical problem that rendering a three-dimensional model in the related art consumes considerable performance.
From the above description of the embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software combined with the necessary hardware. Thus, the technical solution according to the embodiments of the present application may be embodied in the form of a software product, which may be stored in a computer readable storage medium (for example, a CD-ROM, a USB flash drive or a removable hard disk) or on a network, and which includes several instructions to cause a computing device (for example, a personal computer, a server, a terminal device or a network device) to perform the method according to the embodiments of the present application.
In an exemplary embodiment of the present application, a computer-readable storage medium stores a program product capable of implementing the method described above. In some possible implementations, the various aspects of the embodiments of the present application may also be implemented in the form of a program product comprising program code; when the program product is run on a terminal device, the program code causes the terminal device to carry out the steps according to the various exemplary embodiments of the present application as described in the "exemplary methods" section of the embodiments.
A program product for implementing the above method according to an embodiment of the present application may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product of the embodiments of the present application is not limited thereto; in the embodiments of the present application, the computer-readable storage medium may be any tangible medium that can contain or store the program for use by or in connection with an instruction execution system, apparatus or device.
Any combination of one or more computer readable media may be employed by the program product described above. The computer readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that the program code embodied on the computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Embodiments of the present application also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic device may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
S1: acquiring a target texture map, wherein the target texture map is used for storing texture coordinate information of an initial model;
S2: sampling the target texture map to obtain a target sampling result, wherein the target sampling result is used for determining the mapping position of a target image in the initial model, and the target image is used for displaying a plurality of elements to be rendered corresponding to the initial model;
S3: rendering and outputting the initial model based on the target sampling result and the target image to obtain a target model (a minimal end-to-end sketch of these three steps is given below).
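A minimal, non-authoritative sketch wiring steps S1 to S3 together, assuming the target texture map of step S1 is already available as an HxWx2 coordinate array and reusing the illustrative helper names sketched elsewhere in this description (none of which is the claimed API):

```python
def render_target_model(uv_map, images, center, camera_pos, radius, view_size):
    """Illustrative sketch of steps S1-S3. 'uv_map' stands for the target
    texture map as an HxWx2 array of texture coordinates; 'images' holds RGBA
    'rays', 'sun' and 'sky' textures, with 'sky' matching the uv_map resolution.
    All helper names and shapes are assumptions for illustration."""
    h, w = uv_map.shape[:2]
    flat_uv = uv_map.reshape(-1, 2)
    # S2: adjust (rotate/scale) the texture map and sample the target images through it.
    angle, _ = rotation_angle(center, camera_pos, radius)
    scale = scaling_parameter(center, view_size, scale_factor=1.0)
    first_adj, second_adj = adjust_texture_map(flat_uv, angle, scale)
    ray_layer = sample_with_range(images["rays"], first_adj).reshape(h, w, -1)
    sun_layer = sample_with_range(images["sun"], second_adj).reshape(h, w, -1)
    # S3: superpose the sampled layers on the sky background to obtain the target view.
    return composite_layers(ray_layer, sun_layer, images["sky"])
```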
Optionally, the above processor may be further configured to perform the following steps by a computer program: obtaining the target texture map includes: acquiring original coordinate information corresponding to the initial model; performing space conversion on the original coordinate information to obtain a first channel vector and a second channel vector; performing data conversion on the second channel vector to obtain a conversion result; and merging the first channel vector and the conversion result to obtain the target texture map.
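As an illustration only, the space conversion could be read as a projection of the original coordinates by a view-projection matrix, with the second channel remapped into [0, 1] before the merge; every concrete operation below is an assumption, since the description names the steps but not the operations.

```python
import numpy as np

def build_target_texture_map(positions, view_proj):
    """Illustrative sketch: derive a two-channel texture map from the model's
    original coordinate information. The projection and the [0, 1] remap of
    the second channel are assumed stand-ins for the named conversions."""
    pts = np.asarray(positions, dtype=float)
    homo = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)
    clip = homo @ np.asarray(view_proj, dtype=float).T        # space conversion
    w = np.clip(clip[:, 3], 1e-6, None)
    first_channel = clip[:, 0] / w                            # first channel vector
    second_channel = clip[:, 1] / w                           # second channel vector
    converted = second_channel * 0.5 + 0.5                    # data conversion (assumed)
    return np.stack([first_channel, converted], axis=-1)      # merged target texture map
```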
Optionally, the above processor may be further configured to perform the following steps by a computer program: sampling the target texture map to obtain the target sampling result includes: obtaining target adjustment parameters, where the target adjustment parameters are used for rotating and/or scaling the target texture map; adjusting the target texture map by using the target adjustment parameters to obtain a target adjustment result; and sampling the target adjustment result to obtain the target sampling result.
Optionally, the above processor may be further configured to perform the following steps by a computer program: the target adjustment parameters further include a first parameter, where the first parameter is used for determining a rotation angle corresponding to the target texture map, and obtaining the first parameter includes: acquiring a first position, a second position and a target radius, where the first position represents the center position of a target object, the second position represents the position of a camera, and the target radius represents the radius of the target object; and determining the first parameter based on the first position, the second position and the target radius.
Optionally, the above processor may be further configured to perform the following steps by a computer program: the target adjustment parameters further include a second parameter, where the second parameter is used for determining a scaling factor corresponding to the target texture map, and obtaining the second parameter includes: acquiring the first position, a view size and a scale factor; and determining the second parameter based on the first position, the view size and the scale factor.
Optionally, the above processor may be further configured to perform the following steps by a computer program: adjusting the target texture map by using the target adjustment parameters to obtain the target adjustment result includes: adjusting the target texture map by using the first parameter and the second parameter to obtain a first adjustment result; adjusting the target texture map by using the first parameter to obtain a second adjustment result; and obtaining the target adjustment result based on the first adjustment result and the second adjustment result.
Optionally, the above processor may be further configured to perform the following steps by a computer program: the target sampling result includes a first sampling result and a second sampling result, where the first sampling result is obtained by sampling the first adjustment result, the second sampling result is obtained by sampling the second adjustment result, and the target image includes a first image, a second image and a third image; rendering and outputting the initial model based on the target sampling result and the target image to obtain the target model includes: rendering and outputting the initial model by using the first sampling result and the first image to obtain a first rendering result, where the first image is used for displaying virtual rays; rendering and outputting the initial model by using the second sampling result and the second image to obtain a second rendering result, where the second image is used for displaying a virtual sun; and superposing the first rendering result, the second rendering result and the third image to obtain a rendering output file, where the third image is used for displaying a virtual sky background.
Optionally, the above processor may be further configured to perform the following steps by a computer program: sampling the target adjustment result to obtain the target sampling result includes: acquiring sampling setting information, where the sampling setting information is used for determining a sampling range; and sampling the target adjustment result based on the sampling setting information to obtain the target sampling result.
In the electronic device of this embodiment, a technical solution for model rendering is provided: the target texture map is obtained, the target texture map is sampled to obtain a target sampling result, and the initial model is rendered and output based on the target sampling result and the target image to obtain a target model. This achieves the purpose of rendering the model, attains the technical effect of reducing performance consumption while ensuring the rendering effect, and thereby solves the technical problem that rendering a three-dimensional model in the related art consumes considerable performance.
Fig. 12 is a schematic diagram of an electronic device according to an embodiment of the present application. As shown in fig. 12, the electronic device 1200 is merely an example, and should not impose any limitation on the functionality and scope of use of the embodiments of the present application.
As shown in fig. 12, the electronic device 1200 is in the form of a general purpose computing device. The components of the electronic device 1200 may include, but are not limited to: at least one processor 1210, at least one memory 1220, a bus 1230 connecting the various system components (including the memory 1220 and the processor 1210), and a display 1240.
The memory 1220 stores program code that can be executed by the processor 1210, such that the processor 1210 performs the steps according to the various exemplary implementations of the present application described in the method section above.
The memory 1220 may include readable media in the form of volatile memory, such as a random access memory (RAM) 12201 and/or a cache memory 12202, may further include a read-only memory (ROM) 12203, and may include nonvolatile memory, such as one or more magnetic storage devices, flash memory, or other nonvolatile solid state memory.
In some examples, the memory 1220 may also include a program/utility 12204 having a set (at least one) of program modules 12205, such program modules 12205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. The memory 1220 may further include memory located remotely from the processor 1210, which may be connected to the electronic device 1200 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The bus 1230 may represent one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
The display 1240 may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the electronic device 1200.
Optionally, the electronic device 1200 may also communicate with one or more external devices 1300 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 1200, and/or with any device (e.g., a router, a modem, etc.) that enables the electronic device 1200 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 1250. Also, the electronic device 1200 may communicate with one or more networks, such as a local area network (LAN), a wide area network (WAN) and/or a public network such as the internet, via the network adapter 1260. As shown in fig. 12, the network adapter 1260 communicates with other modules of the electronic device 1200 over the bus 1230. It should be appreciated that although not shown in fig. 12, other hardware and/or software modules may be used in connection with the electronic device 1200, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The electronic device 1200 may further include: a keyboard, a cursor control device (e.g., a mouse), an input/output interface (I/O interface), a network interface, a power supply, and/or a camera.
It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 12 is merely illustrative and is not intended to limit the configuration of the electronic device described above. For example, the electronic device 1200 may also include more or fewer components than shown in fig. 12, or have a different configuration from that shown in fig. 12. The memory 1220 may be used to store a computer program and corresponding data, such as a computer program and corresponding data for the model rendering method in the embodiments of the present application. The processor 1210 executes various functional applications and data processing by running the computer program stored in the memory 1220, that is, implements the model rendering method described above.
The foregoing embodiment numbers of the present application are merely for description and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present application, the description of each embodiment has its own emphasis; for portions not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other manners. The above-described apparatus embodiments are merely exemplary; for example, the division of the units may be a logical function division, and other divisions may be used in actual implementations: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed may be through some interfaces, units or modules, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or any other medium capable of storing program code.
The foregoing is merely a preferred embodiment of the present application. It should be noted that several modifications and adaptations may be made by those of ordinary skill in the art without departing from the principles of the present application, and such modifications and adaptations shall also fall within the protection scope of the present application.

Claims (11)

1. A method of model rendering, the method comprising:
obtaining a target texture map, wherein the target texture map is used for storing texture coordinate information of an initial model;
sampling the target texture map to obtain a target sampling result, wherein the target sampling result is used for determining the mapping position of a target image in the initial model, and the target image is used for displaying a plurality of elements to be rendered corresponding to the initial model;
and rendering and outputting the initial model based on the target sampling result and the target image to obtain a target model.
2. The model rendering method of claim 1, wherein obtaining the target texture map comprises:
acquiring original coordinate information corresponding to the initial model;
performing space conversion on the original coordinate information to obtain a first channel vector and a second channel vector;
performing data conversion on the second channel vector to obtain a conversion result;
and combining the first channel vector and the conversion result to obtain the target texture map.
3. The model rendering method according to claim 1, wherein sampling the target texture map to obtain the target sampling result comprises:
obtaining target adjustment parameters, wherein the target adjustment parameters are used for rotating and/or scaling the target texture map;
adjusting the target texture map by using the target adjustment parameters to obtain a target adjustment result;
and sampling the target adjustment result to obtain the target sampling result.
4. A model rendering method according to claim 3, wherein the target adjustment parameters further comprise a first parameter, the first parameter being used for determining a rotation angle corresponding to the target texture map, and obtaining the first parameter comprises:
acquiring a first position, a second position and a target radius, wherein the first position is used for representing the central position of a target object, the second position is used for representing the position of a camera, and the target radius is used for representing the radius of the target object;
determining the first parameter based on the first position, the second position, and the target radius.
5. The model rendering method of claim 4, wherein the target adjustment parameters further comprise a second parameter, the second parameter being used for determining a scaling factor corresponding to the target texture map, and obtaining the second parameter comprises:
acquiring the first position, a view size and a scale factor;
determining the second parameter based on the first position, the view size, and the scale factor.
6. The model rendering method of claim 5, wherein adjusting the target texture map by using the target adjustment parameters to obtain the target adjustment result comprises:
adjusting the target texture map by using the first parameter and the second parameter to obtain a first adjustment result;
adjusting the target texture map by using the first parameter to obtain a second adjustment result;
and obtaining the target adjustment result based on the first adjustment result and the second adjustment result.
7. The model rendering method of claim 6, wherein the target sampling result comprises a first sampling result and a second sampling result, the first sampling result is obtained by sampling the first adjustment result, the second sampling result is obtained by sampling the second adjustment result, the target image comprises a first image, a second image and a third image, and rendering and outputting the initial model based on the target sampling result and the target image to obtain the target model comprises:
rendering and outputting the initial model by using the first sampling result and the first image to obtain a first rendering result, wherein the first image is used for displaying virtual rays;
rendering and outputting the initial model by using the second sampling result and the second image to obtain a second rendering result, wherein the second image is used for displaying a virtual sun;
performing superposition processing on the first rendering result, the second rendering result and the third image to obtain a rendering output file, wherein the third image is used for displaying a virtual sky background;
and displaying the rendering output file by using a graphical user interface to obtain the target model.
8. A model rendering method according to claim 3, wherein sampling the target adjustment result to obtain the target sampling result comprises:
acquiring sampling setting information, wherein the sampling setting information is used for determining a sampling range;
and carrying out sampling processing on the target adjustment result based on the sampling setting information to obtain the target sampling result.
9. A model rendering apparatus, the apparatus comprising:
an acquisition module, configured to acquire a target texture map, wherein the target texture map is used for storing texture coordinate information of an initial model;
a sampling module, configured to sample the target texture map to obtain a target sampling result, wherein the target sampling result is used for determining the mapping position of a target image in the initial model, and the target image is used for displaying a plurality of elements to be rendered corresponding to the initial model; and
a rendering module, configured to render and output the initial model based on the target sampling result and the target image to obtain a target model.
10. A computer readable storage medium, characterized in that a computer program is stored in the computer readable storage medium, wherein the computer program is arranged to perform the model rendering method of any one of claims 1 to 8 when run by a processor.
11. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the model rendering method as claimed in any of claims 1 to 8.
CN202310351816.8A 2023-03-29 2023-03-29 Model rendering method and device, storage medium and electronic device Pending CN116468839A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310351816.8A CN116468839A (en) 2023-03-29 2023-03-29 Model rendering method and device, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310351816.8A CN116468839A (en) 2023-03-29 2023-03-29 Model rendering method and device, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN116468839A true CN116468839A (en) 2023-07-21

Family

ID=87179966

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310351816.8A Pending CN116468839A (en) 2023-03-29 2023-03-29 Model rendering method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN116468839A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination