CN115228083A - Resource rendering method and device - Google Patents

Resource rendering method and device

Info

Publication number
CN115228083A
CN115228083A
Authority
CN
China
Prior art keywords
rendering
scene
target
resource
scene area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210879186.7A
Other languages
Chinese (zh)
Inventor
苏泰梁
马钦
刘文剑
黄锦寿
杨林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Kingsoft Digital Network Technology Co Ltd
Original Assignee
Zhuhai Kingsoft Digital Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Kingsoft Digital Network Technology Co Ltd filed Critical Zhuhai Kingsoft Digital Network Technology Co Ltd
Priority to CN202210879186.7A
Publication of CN115228083A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/822 Strategy games; Role-playing games
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

The application provides a resource rendering method and apparatus, wherein the resource rendering method comprises: dividing a target scene to obtain scene areas of the target scene; adjusting the rendering precision of the rendering resources corresponding to each scene area to generate at least one rendering optimization resource corresponding to the scene area; in response to a picture rendering instruction for the target scene, determining a target rendering resource corresponding to the scene area according to the rendering resource and the at least one rendering optimization resource; and rendering the scene area with the target rendering resource and generating a display picture of the target scene according to the area rendering results. Dividing the target scene into a plurality of scene areas and adjusting the rendering precision of the rendering resources corresponding to each area reduces memory consumption. At the same time, a plurality of rendering optimization resources with different rendering precisions are generated, so that when a picture is rendered later, rendering resources of different precisions can be selected, improving rendering efficiency.

Description

Resource rendering method and device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a resource rendering method and apparatus, a computing device, and a computer-readable storage medium.
Background
With the development of informatization and intelligent technology, image rendering places ever higher demands on computer hardware performance. When a scene contains a large number of graphical objects, the rendering speed of the scene is often limited, increasing the rendering pressure on the computer. How to improve the efficiency of rendering large scenes is therefore a problem that urgently needs to be solved.
Disclosure of Invention
In view of this, embodiments of the present application provide a resource rendering method and apparatus, a computing device, and a computer-readable storage medium, so as to solve technical defects existing in the prior art.
According to a first aspect of embodiments of the present application, there is provided a resource rendering method, including:
dividing a target scene to obtain a scene area of the target scene;
adjusting the rendering precision of rendering resources corresponding to the scene area, and generating at least one rendering optimization resource corresponding to the scene area;
in response to a picture rendering instruction for the target scene, determining a target rendering resource corresponding to the scene area according to the rendering resource and the at least one rendering optimization resource;
and rendering the scene area through the target rendering resource, and generating a display picture of the target scene according to an area rendering result.
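The four steps above can be sketched as a minimal pipeline. This is an illustrative sketch only; the patent does not specify an API, so every name here (`divide_scene`, `build_lod_resources`, `pick_resource`, `render_frame`) and every threshold value is a hypothetical stand-in:

```python
# Illustrative sketch of the claimed four-step pipeline; all names and
# thresholds are hypothetical, not taken from the patent.

def divide_scene(scene_models, area_count):
    """Step 1: split the scene's models into area_count scene areas (round-robin)."""
    areas = [[] for _ in range(area_count)]
    for i, model in enumerate(scene_models):
        areas[i % area_count].append(model)
    return areas

def build_lod_resources(base, levels=("medium", "low")):
    """Step 2: derive one rendering-optimization resource per precision level,
    here by shrinking the face budget of the base resource."""
    return {
        lvl: {"precision": lvl,
              "faces": base["faces"] // (2 if lvl == "medium" else 5)}
        for lvl in levels
    }

def pick_resource(base, lods, distance, near=100, mid=200):
    """Step 3: in response to a render instruction, select the target resource
    by camera distance: near areas keep full precision, far areas use LODs."""
    if distance <= near:
        return base
    return lods["medium"] if distance <= mid else lods["low"]

def render_frame(areas, base, lods, camera_distances):
    """Step 4: render each area with its target resource and assemble the
    per-area results into the frame (here, just the chosen precision)."""
    return [pick_resource(base, lods, d)["precision"]
            for _area, d in zip(areas, camera_distances)]
```

A usage example: with a base resource of 100 faces, `build_lod_resources` yields a 50-face medium LOD and a 20-face low LOD, and `render_frame` picks one per area by distance.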
According to a second aspect of embodiments of the present application, there is provided a resource rendering apparatus, including:
a dividing module configured to divide a target scene and obtain a scene area of the target scene;
the adjusting module is configured to adjust rendering precision of rendering resources corresponding to the scene area and generate at least one rendering optimization resource corresponding to the scene area;
a determination module configured to determine, in response to a screen rendering instruction for the target scene, a target rendering resource corresponding to the scene area according to the rendering resource and the at least one rendering optimization resource;
and the generating module is configured to render the scene area through the target rendering resource and generate a display picture of the target scene according to an area rendering result.
According to a third aspect of embodiments herein, there is provided a computing device comprising a memory, a processor and computer instructions stored on the memory and executable on the processor, the processor implementing the steps of the resource rendering method when executing the computer instructions.
According to a fourth aspect of embodiments herein, there is provided a computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the resource rendering method.
According to a fifth aspect of embodiments herein, there is provided a chip storing computer instructions which, when executed by the chip, implement the steps of the resource rendering method.
The application provides a resource rendering method, which comprises the following steps: dividing a target scene to obtain a scene area of the target scene; adjusting the rendering precision of rendering resources corresponding to the scene area, and generating at least one rendering optimization resource corresponding to the scene area; in response to a picture rendering instruction for the target scene, determining a target rendering resource corresponding to the scene area according to the rendering resource and the at least one rendering optimization resource; and rendering the scene area through the target rendering resource, and generating a display picture of the target scene according to an area rendering result.
In the embodiment of the application, the target scene is divided to obtain a plurality of scene areas, and the rendering precision of the rendering resources corresponding to each scene area is adjusted, achieving the effect of reducing memory consumption. At the same time, a plurality of rendering optimization resources with different rendering precisions are generated, so that when a picture is rendered later, rendering resources of different precisions can be selected, improving rendering efficiency.
Drawings
FIG. 1 is a block diagram of a computing device provided by an embodiment of the present application;
FIG. 2 is a flowchart of a resource rendering method provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a target scene division provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of a resource rendering method provided by an embodiment of the present application;
fig. 5 is a schematic structural diagram of a resource rendering apparatus according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the application can be implemented in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from the spirit of the application; the application is therefore not limited to the specific implementations disclosed below.
The terminology used in the one or more embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the present application. As used in one or more embodiments of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application is intended to encompass any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used herein in one or more embodiments of the present application to describe various information, these information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first can also be referred to as a second and, similarly, a second can also be referred to as a first without departing from the scope of one or more embodiments of the present application. The word "if," as used herein, may be interpreted as "responsive to a determination," depending on the context.
First, the terms used in one or more embodiments of the present application are explained.
LOD: multilevel details (Levels of details), the LOD technique aims to simplify the model under certain conditions, and to replace the model with a simpler model for a model with a longer camera distance.
OpenGL: the Open Graphics Library or Open Graphics Library is a cross-language, cross-platform application programming interface for rendering 2D and 3D vector Graphics.
DrawCall: it may be understood that a call by the CPU to the underlying graphics rendering interface instructs the GPU to perform a rendering operation. DrawCall is the number of times the openGL is drawn, and a simple drawing order for openGL is: setting color-drawing mode-vertex coordinates-drawing-end, the above steps are repeated every frame. This is a DrawCall.
The traditional LOD technique only performs face-reduction optimization on models in a scene. In some small scenes, where DrawCall counts and texture memory are well controlled, this can meet basic performance requirements. In a large world scene, however, the oversized scene and long view range increase the number of objects in the visible area by several orders of magnitude, and the traditional LOD technique cannot meet the performance requirements at all, so a better LOD scheme for large worlds is needed.
Based on this, a resource rendering method and apparatus, a computing device and a computer readable storage medium are provided in the present application, which are described in detail in the following embodiments one by one.
FIG. 1 shows a block diagram of a computing device 100 according to an embodiment of the present application. The components of the computing device 100 include, but are not limited to, memory 110 and processor 120. The processor 120 is coupled to the memory 110 via a bus 130 and a database 150 is used to store data.
Computing device 100 also includes access device 140, which enables computing device 100 to communicate via one or more networks 160. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the internet. Access device 140 may include one or more of any type of network interface (e.g., a Network Interface Card (NIC)), whether wired or wireless, such as an IEEE 802.11 Wireless Local Area Network (WLAN) wireless interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the present application, the above-mentioned components of the computing device 100 and other components not shown in fig. 1 may also be connected to each other, for example, by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 1 is for purposes of example only and is not limiting as to the scope of the present application. Those skilled in the art may add or replace other components as desired.
Computing device 100 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), a mobile phone (e.g., smartphone), a wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 100 may also be a mobile or stationary server.
The processor 120 may execute the steps of the resource rendering method shown in fig. 2. Fig. 2 shows a flowchart of a resource rendering method according to an embodiment of the present application, including steps 202 to 208.
Step 202: and dividing the target scene to obtain a scene area of the target scene.
A target scene can be understood as a virtual game scene, a virtual teaching scene, or the like. A target scene may contain a number of static objects, such as houses and buildings, trees, flowers, and grass. Each static object corresponds to a virtual model, and each virtual model is composed of rendering resources; after rendering according to these resources, the target scene can be presented to the user. When the target scene is a virtual game scene, a player can control the movement of a virtual character within it, and as the character moves through the target scene, each virtual model in the scene needs to be rendered according to the character's position in the scene.
In practical application, dividing the target scene can be understood as layering it. Each layer may have its own rendering rules, including rendering precision, the number of models rendered, and so on. Dividing the target scene yields a plurality of scene areas belonging to it, each containing the same or different virtual models. The rendering resources corresponding to the virtual models in a scene area, taken together, are the rendering resources corresponding to that scene area; rendering according to these resources produces the display picture corresponding to the area.
In an embodiment of the present application, the target scene is a game scene including virtual models of buildings, vehicles, roads, and the like. Dividing the target scene can yield 3 scene areas belonging to it, namely scene area A, scene area B, and scene area C.
Specifically, dividing a target scene to obtain a scene area of the target scene includes:
dividing the target scene according to view distance information and/or scene attribute information to obtain a division result;
and determining a scene area of the target scene according to the division result.
As shown in fig. 3, a schematic view of dividing the target scene provided in this embodiment, if the virtual character operated by the player is at point A and the view distance information is the distance from point A to point B, then at point A the player can see all virtual models up to point B at the farthest. It should be noted that the view distance information describes a circular range centered on the camera with the view distance as its radius. For example, in fig. 3, if the virtual character manipulated by the player is at point B, the range covered by the view distance information includes everything from point A to point C. Dividing according to view distance information means dividing the target scene by the distance from the camera to each scene area, so that scene areas at different view distances can later be rendered with different rendering precisions based on that distance, improving rendering efficiency and reducing memory consumption. Scene attribute information can be understood as information such as scene complexity, the degree of aggregation of virtual models, or the size of a scene area; dividing according to scene attribute information splits the target scene into multiple scene areas from the perspective of rendering difficulty.
In practical application, at least one division criterion may be selected from the view distance information and the scene attribute information. When the target scene is divided according to both the view distance information and the scene attribute information, more distinct scene areas can be obtained, so that later on different numbers of scene areas can be selected for rendering to a greater extent, further improving rendering efficiency.
In an embodiment of the present application, referring to fig. 3, if the view distance information is 200 meters (m), the target scene is divided at intervals of 100 m into 3 scene areas, namely scene area A, scene area B, and scene area C in fig. 3.
In another embodiment of the present application, the target scene is divided according to scene attribute information, where the scene attribute information includes the size of the scene. When the target scene is 300 square meters, it is divided into 2 scene areas, scene area A and scene area B, each 150 square meters in size.
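The two division criteria in the embodiments above can be sketched as follows. The helper names and the grouping rules are hypothetical illustrations; the patent does not prescribe concrete formulas:

```python
import math

def divide_by_view_distance(models, ring_width):
    """View-distance criterion: group models into concentric rings around the
    camera, since the view range is a circle centered on the camera.

    models: list of (x, y) positions relative to the camera.
    ring_width: radial width of each ring, e.g. 100 for 100 m intervals.
    """
    areas = {}
    for x, y in models:
        ring = int(math.hypot(x, y) // ring_width)
        areas.setdefault(ring, []).append((x, y))
    return areas

def divide_by_scene_size(total_area_sq_m, max_area_sq_m):
    """Scene-attribute criterion (size): split a scene of total_area_sq_m
    into equally sized areas no larger than max_area_sq_m each."""
    count = math.ceil(total_area_sq_m / max_area_sq_m)
    return [total_area_sq_m / count] * count
```

With a 100 m ring width, models at 50 m, 150 m, and 250 m fall into rings 0, 1, and 2, matching the three areas A, B, C of the 200 m example; a 300 square-meter scene with a 150 square-meter cap splits into two equal areas, matching the second embodiment.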
Step 204: and adjusting the rendering precision of the rendering resources corresponding to the scene area, and generating at least one rendering optimization resource corresponding to the scene area.
Rendering resources can be understood as the resources required to render a scene area. Adjusting the rendering precision of a rendering resource can be understood as adjusting the parameters used when rendering with that resource, so that the precision of the rendered model changes. For example, rendering from an initial resource with a rendering precision of 1 can produce a highly realistic virtual model, while rendering from a resource adjusted to a precision of 0.5 may leave some details of the virtual model undisplayed. After the rendering precision of a rendering resource is adjusted, a rendering optimization resource with lower precision can be generated; it represents the optimized rendering resource. Compared with a virtual model rendered from the original rendering resource, a virtual model rendered from a rendering optimization resource has lower model precision, that is, it is slightly weaker in some details, but it renders faster and consumes less client memory.
In practical application, the lower the rendering precision of a rendering resource, the shorter the rendering process based on it, thereby reducing rendering time. Therefore, in the target scene, the rendering precision of virtual models outside the user's visual range can be reduced, lowering the client's rendering pressure in complex target scenes and improving rendering efficiency. The rendering precisions of the rendering resources may be preset by a developer; for example, three rendering precision levels may be preset: high, medium, and low. The rendering methods at different precisions may differ in the mesh of the model or in the maps of the model; the specific rendering precision levels and the rendering method corresponding to each precision may be determined according to the actual situation and are not specifically limited here.
In an embodiment of the present application, following the above example, the rendering precision of the rendering resource of scene area A, which is originally high, is adjusted to medium rendering precision and low rendering precision: one rendering optimization resource may be generated according to the low rendering precision, and another rendering optimization resource according to the medium rendering precision.
Specifically, adjusting the rendering precision of the rendering resource corresponding to the scene area, and generating at least one rendering optimization resource corresponding to the scene area includes:
determining rendering resources corresponding to the scene area, and determining at least one rendering precision level corresponding to the rendering resources;
and generating rendering optimization resources corresponding to each rendering precision grade according to each rendering precision grade.
The rendering precision levels corresponding to the rendering resources can be understood as level information for distinguishing different rendering precisions, each rendering precision level corresponds to one rendering precision, and the virtual models rendered by the rendering resources under different rendering precision levels have different precisions.
In practical applications, the rendering precision levels may be three levels, i.e., low, medium, and high, or may be distinguished by the rendering ratio of the virtual model, such as rendering 100%, rendering 80%, and rendering 50%. A virtual model rendered at 100% is a normal virtual model; a virtual model rendered at 80% may be missing some maps; and a virtual model rendered at 50% may be only a model outline. The specific rendering precision levels may be determined according to the actual situation.
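The render-ratio interpretation of precision levels can be sketched as a simple lookup. The mapping below uses the 100%/80%/50% example ratios from the paragraph above; the names and the idea of applying the ratio to the face count are illustrative assumptions:

```python
# Hypothetical mapping of precision levels to the fraction of the model that
# is rendered, using the 100% / 80% / 50% example ratios from the text.
RENDER_RATIOS = {"high": 1.00, "medium": 0.80, "low": 0.50}

def faces_to_render(total_faces, level):
    """Number of mesh faces kept when rendering at the given precision level
    (round() avoids floating-point truncation artifacts)."""
    return round(total_faces * RENDER_RATIOS[level])
```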
In an embodiment of the present application, following the above example, the rendering resource corresponding to a scene area is obtained, the rendering precision levels corresponding to the rendering resource are determined to be low, medium, and high, and the rendering precision level of the current rendering resource is determined to be high; corresponding rendering optimization resources are then generated according to the low and medium rendering precision levels.
Specifically, the rendering optimization resource corresponding to any one rendering precision level may be generated in the following manner, including:
obtaining an optimization strategy corresponding to the target rendering precision level;
optimizing the rendering resources based on the optimization strategy, and generating rendering optimization resources corresponding to the rendering precision grade.
An optimization strategy can be understood as a strategy for optimizing the way a model is rendered. For example, if a virtual model is composed of 100 mesh faces, the optimization may be completed by reducing the number of mesh faces, say to 50. Alternatively, the optimization strategy may reduce the number of maps.
In practical application, each rendering precision level corresponds to one optimization strategy, so the rendering optimization resources at each precision level differ, and virtual models subsequently rendered from the rendering optimization resources of different precision levels differ in precision. An optimization strategy also includes an optimization degree; for example, the strategy for the low rendering precision level reduces the total number of mesh faces by 80%, while the strategy for the medium rendering precision level reduces it by 50%.
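A per-level strategy table like the one just described might be sketched as follows. The 80% and 50% degrees are the example figures from the paragraph above; the function names and the dictionary representation are hypothetical:

```python
def make_strategy(face_reduction, merge_maps):
    """Build one optimization strategy: its face-reduction degree and whether
    it also merges model maps."""
    return {"face_reduction": face_reduction, "merge_maps": merge_maps}

# One strategy per rendering precision level, using the example degrees above
# (low level: reduce faces by 80%; medium level: reduce faces by 50%).
STRATEGIES = {
    "low":    make_strategy(face_reduction=0.80, merge_maps=True),
    "medium": make_strategy(face_reduction=0.50, merge_maps=True),
}

def apply_strategy(mesh_faces, strategy):
    """Faces remaining after applying the strategy's face-reduction degree."""
    return round(mesh_faces * (1.0 - strategy["face_reduction"]))
```

Keeping the degree inside the strategy object means each precision level stays self-describing, which matches the text's point that every level yields a different optimization resource.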
In a specific embodiment of the application, following the above example, the optimization strategy corresponding to a low target rendering precision level is obtained, the rendering resources of scene area A are optimized based on that strategy, and a rendering optimization resource corresponding to the low rendering precision level is generated. In scene area A rendered from the original rendering resources, the people in each vehicle in the area can be seen; in scene area A rendered from the rendering optimization resource, only the exterior outline of each vehicle can be seen.
Specifically, optimizing the rendering resource based on the optimization strategy and generating the rendering optimized resource corresponding to the rendering precision level includes:
obtaining model map data and model grid data in the rendering resources;
merging the model map data based on the optimization strategy, and/or performing face reduction on the model grid data based on the optimization strategy;
and generating rendering optimization resources corresponding to the rendering precision grade according to the processing result.
In actual application, when the optimization strategy is merge optimization, the model map data is merged based on the optimization strategy to obtain model map optimization data, and the rendering optimization resource corresponding to the rendering precision level is generated from the model map optimization data and the model grid data;
when the optimization strategy is face-reduction optimization, face reduction is performed on the model grid data based on the optimization strategy to obtain model grid optimization data, and the rendering optimization resource corresponding to the rendering precision level is generated from the model map data and the model grid optimization data;
and when the optimization strategy is both merge optimization and face-reduction optimization, the model map data is merged based on the optimization strategy to obtain model map optimization data, face reduction is performed on the model grid data based on the optimization strategy to obtain model grid optimization data, and the rendering optimization resource corresponding to the rendering precision level is generated from the model map optimization data and the model grid optimization data.
Model map data can be understood as the texture data on a virtual model. Merging the model map data according to the optimization strategy can effectively reduce the memory occupied by the client during rendering; after merging, the model map optimization data at the rendering precision level is obtained. Model grid data can be understood as the mesh of the virtual model; reducing the number of meshes of the virtual model can likewise effectively reduce the client's memory usage during rendering, and after reduction the model grid optimization data at the rendering precision level is obtained. The rendering optimization resource corresponding to the rendering precision level may then be generated based on the model map optimization data and the model grid optimization data.
In practical application, the map-merging and mesh-reduction approaches in the optimization strategy can effectively reduce rendering precision, thereby realizing the adjustment of the rendering precision of rendering resources. It should be noted that, in a specific implementation, the optimization strategy may be applied using multithreading to speed up the rendering precision adjustment.
In an embodiment of the present application, following the above example, the model map data in scene area A is obtained; it includes 100 pictures. The model map data is merged based on the optimization strategy to obtain model map optimization data comprising 50 pictures. The model grid data in scene area A is obtained; it includes 100 meshes. Face reduction is performed on the model grid data based on the optimization strategy to obtain model grid optimization data comprising 50 meshes. The rendering optimization resource corresponding to the rendering precision level is formed from the model map optimization data and the model grid optimization data.
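The combined merge-and-reduce embodiment can be sketched with a toy model. Real engines would pack textures into an atlas and run a proper mesh decimation algorithm; here merging is pairwise grouping and face reduction keeps every other face, and all names are hypothetical:

```python
def merge_maps(maps, group_size=2):
    """Toy map merge: combine every group_size maps into one merged map,
    so 100 maps with group_size=2 become 50 merged maps."""
    return [maps[i:i + group_size] for i in range(0, len(maps), group_size)]

def reduce_faces(mesh_faces, keep_ratio=0.5):
    """Toy face reduction: keep roughly keep_ratio of the faces by sampling
    every Nth face (a stand-in for a real decimation algorithm)."""
    step = max(1, round(1 / keep_ratio))
    return mesh_faces[::step]

def build_optimized_resource(maps, faces):
    """Produce a rendering-optimization resource from merged map data and
    face-reduced grid data, as in the combined-strategy case."""
    return {"maps": merge_maps(maps), "faces": reduce_faces(faces)}
```

Applied to the embodiment's numbers, 100 pictures merge into 50 and 100 mesh faces reduce to 50, matching the 100-to-50 figures in the text.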
In practical application, the optimization strategy may also apply only one of map merging and surface reduction processing. Specifically, optimizing the rendering resources based on the optimization strategy and generating the rendering optimization resources corresponding to the rendering precision level includes:
obtaining model map data in the rendering resources, and merging the model map data based on the optimization strategy to obtain model map optimization data;
obtaining model grid data in the rendering resources;
and generating the rendering optimization resource corresponding to the rendering precision level according to the model map optimization data and the model grid data.
That is, the rendering optimization resource corresponding to the rendering precision level is generated from the optimized model map optimization data together with the initial, unmodified model grid data.
In an embodiment of the present application, following the above example, model map data in the scene area A is obtained, the model map data including 100 pictures; the model map data is merged based on the optimization strategy to obtain model map optimization data, the model map optimization data including 50 pictures. Model grid data in the scene area A is obtained, the model grid data including 100 meshes, and the rendering optimization resource corresponding to the rendering precision level is formed according to the model map optimization data and the model grid data.
Correspondingly, optimizing the rendering resources based on the optimization strategy and generating the rendering optimized resources corresponding to the rendering precision level includes:
obtaining model map data in the rendering resources;
obtaining model grid data in the rendering resources, and performing surface reduction processing on the model grid data based on the optimization strategy to obtain model grid optimization data;
and generating rendering optimization resources corresponding to the rendering precision level according to the model map data and the model grid optimization data.
That is, the rendering optimization resource corresponding to the rendering precision level is generated from the optimized model grid optimization data together with the initial, unmodified model map data.
In an embodiment of the application, following the above example, model grid data in the scene area A is obtained, the model grid data including 100 meshes; surface reduction processing is performed on the model grid data based on the optimization strategy to obtain model grid optimization data, the model grid optimization data including 50 meshes. Model map data in the scene area A is obtained, the model map data including 100 pictures, and the rendering optimization resource corresponding to the rendering precision level is formed according to the model grid optimization data and the model map data.
Step 206: and in response to a picture rendering instruction aiming at the target scene, determining a target rendering resource corresponding to the scene area according to the rendering resource and the at least one rendering optimization resource.
The picture rendering instruction may be understood as an instruction for rendering the target scene issued when the user controls the virtual character to enter the target scene, or an instruction for rendering the target scene issued when the user-controlled virtual character moves within the target scene. The picture rendering instruction includes the position of the virtual character in the target scene and the view distance information of the virtual character, i.e., its visible range.
In practical application, each scene area corresponds to a rendering resource and at least one rendering optimization resource. Therefore, when the target scene is rendered, that is, when each scene area is rendered, the resource used for each scene area needs to be determined, i.e., selected from among the rendering resource and the rendering optimization resources. In actual implementation, the selection may be based on the spacing distance between the virtual character and the scene area: the smaller the distance, the higher the rendering precision level of the resource used for that scene area. It can be understood that when the user manipulates the virtual character close to a scene area, the models in that area should be of higher quality so as to present a clearer picture to the user. Alternatively, the selection may be determined based on the visible range calculated with the virtual character as the starting point.
In an embodiment of the present application, in response to a picture rendering instruction for the target scene, the picture rendering instruction includes the position information and visible range information of the virtual character. When the virtual character is located in the scene area A and, as shown in fig. 3, its visible range within the scene area A extends to point B, the target rendering resource of the scene area A is the initial rendering resource, and the target rendering resources of the scene areas B and C are rendering optimization resources.
Specifically, determining a target rendering resource corresponding to the scene area according to the rendering resource and the at least one rendering optimization resource includes:
determining a target spacing distance corresponding to the scene area according to rendering position information carried in the picture rendering instruction;
determining a target rendering precision grade corresponding to the target spacing distance based on the corresponding relation between the spacing distance and the rendering precision grade;
and acquiring the target rendering resource corresponding to the target rendering precision grade according to the target rendering precision grade.
The rendering position information can be understood as the position information of the virtual character in the target scene; it may also include the visible range information of the virtual character in the target scene. The spacing distance between the virtual character and a scene area can be calculated from the rendering position information, and the rendering precision level corresponding to that target spacing distance can then be determined based on the correspondence between spacing distance and rendering precision level.
In practical application, the correspondence between spacing distance and rendering precision level can be set in advance by developers. For example, when the spacing distance is 0-50m, the corresponding rendering precision level is high; when the spacing distance is 51-100m, the corresponding rendering precision level is medium; and when the spacing distance is 101-200m, the corresponding rendering precision level is low. The farther a scene area is from the virtual character, the lower the rendering precision level of the rendering resources used when rendering it, thereby reducing memory consumption during rendering.
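The developer-set correspondence above can be sketched as a simple lookup. The function name and the `None` return for distances beyond 200m are illustrative assumptions; only the 0-50m/51-100m/101-200m thresholds come from the example.

```python
def precision_level(distance_m):
    # Map the character-to-area spacing distance to a rendering precision level.
    if distance_m <= 50:
        return "high"
    if distance_m <= 100:
        return "medium"
    if distance_m <= 200:
        return "low"
    return None  # beyond 200m: no level defined in this example

print(precision_level(0), precision_level(100), precision_level(200))
# high medium low
```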
In a specific embodiment of the present application, following the above example, if the virtual character is at point a, the spacing distance to the scene area a is 0m, the spacing distance to the scene area B is 100m, and the spacing distance to the scene area C is 200m, it is determined according to the spacing distance and the rendering precision level that the rendering precision level corresponding to the rendering resource used when the scene area a is rendered is high, the rendering precision level corresponding to the rendering resource used when the scene area B is rendered is medium, and the rendering precision level corresponding to the rendering resource used when the scene area C is rendered is low, and the rendering resource under the rendering precision level corresponding to each scene area is obtained.
Step 208: and rendering the scene area through the target rendering resource, and generating a display picture of the target scene according to an area rendering result.
After the target rendering resources used in rendering each scene area are determined, each scene area can be rendered based on the target rendering resources, so that the display picture of the target scene is generated.
In a specific embodiment of the present application, following the above example, the rendering resource with the high rendering precision level is used to render the scene area A, the rendering optimization resource with the medium rendering precision level is used to render the scene area B, and the rendering optimization resource with the low rendering precision level is used to render the scene area C; a display picture of the target scene is then generated according to the area rendering results.
In practical application, considering the user's perception of the target scene picture, the virtual resources in the scene areas closer to the user-controlled virtual character should be loaded preferentially. Therefore, during rendering, the rendering resources can be prioritized so that the more important rendering resources are rendered first, preferentially presenting the corresponding important pictures to the user. Specifically, rendering the scene area through the target rendering resource includes:
according to the rendering precision grade corresponding to each scene area, adding the target rendering resources corresponding to each scene area to a rendering queue in a sequence from top to bottom;
and calling each target rendering resource in the rendering queue from top to bottom to render each scene area.
The closer a scene area is to the user, the higher the rendering precision level of its rendering resource. The target rendering resources corresponding to the scene areas can therefore be prioritized in order of rendering precision level and added to the rendering queue, so that at rendering time the resources in the queue are called in order of rendering precision level from high to low. Resources with a high rendering precision level are thus rendered first, and the pictures of high importance are displayed to the user preferentially.
In practical application, the rendering speed during the rendering of the target scene is high, so the user barely perceives the order in which the scene areas are rendered. However, an overly large rendering resource can make the rendering of an important picture seriously time-consuming in terms of frame rate, so that the user sees a stalled picture while unimportant resources are loaded first, which is very bad for the user experience. Prioritizing the rendering resources avoids these problems; in a specific implementation, the sorting may be realized in a Job multithreading manner.
In a specific embodiment of the present application, following the above example, the scene areas are sorted from high to low according to their corresponding rendering precision levels, and the rendering resource of the scene area A (high rendering precision level), the rendering optimization resource of the scene area B (medium rendering precision level), and the rendering optimization resource of the scene area C (low rendering precision level) are added to the rendering queue in that order. During rendering, the rendering resource corresponding to the scene area A is called first, then the rendering resource corresponding to the scene area B, and finally the rendering resource corresponding to the scene area C.
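The queue construction above can be sketched as a stable sort keyed on precision level. The pair representation of a target rendering resource and the `LEVEL_ORDER` table are illustrative assumptions.

```python
LEVEL_ORDER = {"high": 0, "medium": 1, "low": 2}

def build_render_queue(area_resources):
    # area_resources: list of (scene_area, precision_level) pairs.
    # Higher-precision (closer, more important) areas come first.
    return sorted(area_resources, key=lambda ar: LEVEL_ORDER[ar[1]])

queue = build_render_queue([("C", "low"), ("A", "high"), ("B", "medium")])
print([area for area, _ in queue])  # ['A', 'B', 'C']
```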
Specifically, before rendering each scene area according to each target rendering resource stored in the rendering queue, the method further includes:
determining an idle scene area of the target scene according to the rendering position information;
deleting the target rendering resources corresponding to the idle scene area in the rendering queue to obtain a target rendering queue;
correspondingly, calling each target rendering resource in the rendering queue in the order from top to bottom to render each scene area, including:
and calling each target rendering resource in the target rendering queue in the order from top to bottom to render each scene area.
An idle scene area can be understood as a scene area whose rendering precision level and importance are relatively low, or a scene area outside the visible range of the virtual character. For an idle scene area, since the degree of rendering detail in the area is small and the user may not notice the virtual models in it under the current frame picture, the client may choose not to render the virtual models in that scene area, thereby reducing rendering pressure and memory consumption.
In an embodiment of the present application, following the above example, the scene area C is determined as the idle scene area according to the rendering position information, and the target rendering resource corresponding to the scene area C in the rendering queue is deleted, so that when the target scene is subsequently rendered, the virtual models in the scene area C are not rendered.
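Pruning the idle areas from the rendering queue can be sketched as a filter; the pair representation and the function name are illustrative assumptions.

```python
def prune_idle_areas(render_queue, idle_areas):
    # Drop the target rendering resources of any idle scene area, leaving
    # the target rendering queue that is actually called during rendering.
    return [(area, level) for area, level in render_queue
            if area not in idle_areas]

queue = [("A", "high"), ("B", "medium"), ("C", "low")]
target_queue = prune_idle_areas(queue, idle_areas={"C"})
print(target_queue)  # [('A', 'high'), ('B', 'medium')]
```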
The application provides a resource rendering method, which includes: dividing a target scene to obtain scene areas of the target scene; adjusting the rendering precision of rendering resources corresponding to the scene areas, and generating at least one rendering optimization resource corresponding to each scene area; in response to a picture rendering instruction for the target scene, determining a target rendering resource corresponding to each scene area according to the rendering resource and the at least one rendering optimization resource; and rendering the scene areas through the target rendering resources, and generating a display picture of the target scene according to the area rendering results. The target scene is divided into a plurality of scene areas, and the rendering precision of the rendering resources corresponding to each scene area is adjusted to obtain rendering resources at different rendering precision levels. At rendering time, the rendering resource of the appropriate rendering precision level can be selected according to the view distance information to render each scene area. This reduces memory consumption and improves rendering efficiency while guaranteeing the number and quality of visible objects at far view distances, improving the running performance of the client during rendering and significantly improving the far-view-distance effect of the target scene.
Fig. 4 shows a resource rendering method according to an embodiment of the present application, which is described by taking rendering of a game scene as an example, and includes steps 402 to 420.
Step 402: and dividing the game scene according to the visual distance information and the scene attribute information to obtain a scene area A, a scene area B and a scene area C of the game scene.
Step 404: and determining an initial rendering resource corresponding to each scene area, and determining a rendering precision level corresponding to the initial rendering resource.
The rendering precision levels comprise a low level, a medium level and a high level, and the rendering precision level corresponding to the initial rendering resource is high.
Step 406: obtaining model map data in the initial rendering resources, and merging the model map data based on the optimization strategy to obtain model map optimization data.
Step 408: and obtaining model grid data in initial rendering resources, and performing surface reduction processing on the model grid data based on the optimization strategy to obtain model grid optimization data.
Step 410: generating rendering optimization resources corresponding to the rendering precision level according to the model map optimization data and the model grid optimization data.
Rendering optimization resources are generated for each scene area in this way, so that each scene area corresponds to its own initial rendering resource and two rendering optimization resources.
Step 412: and determining the target spacing distance corresponding to the scene area according to the rendering position information carried in the picture rendering instruction.
The rendering position information indicates that the virtual character controlled by the user is at point A in the scene area A; the target spacing distance corresponding to the scene area A is determined to be 0m, the target spacing distance corresponding to the scene area B is determined to be 100m, and the target spacing distance corresponding to the scene area C is determined to be 200m.
Step 414: and determining a target rendering precision grade corresponding to the target spacing distance based on the corresponding relation between the spacing distance and the rendering precision grade.
The rendering precision grade corresponding to the scene area A is determined to be high, the rendering precision grade corresponding to the scene area B is determined to be medium, and the rendering precision grade corresponding to the scene area C is determined to be low.
Step 416: and acquiring target rendering resources corresponding to the target rendering precision grade according to the target rendering precision grade corresponding to each scene area.
The method comprises the steps of obtaining an initial rendering resource with a high rendering precision level corresponding to a scene area A, obtaining a rendering optimization resource with a medium rendering precision level corresponding to a scene area B, and obtaining a rendering optimization resource with a low rendering precision level corresponding to a scene area C.
Step 418: and adding the target rendering resources corresponding to each scene area to the rendering queue according to the rendering precision grade corresponding to each scene area from top to bottom.
And adding rendering resources corresponding to the scene area A, the scene area B and the scene area C into a rendering queue in sequence according to the sequencing result.
Step 420: and calling each target rendering resource in the rendering queue from top to bottom to render each scene area.
The rendering resource of the scene area A is called preferentially, then the rendering resource of the scene area B is called, and finally the rendering resource of the scene area C is called.
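Steps 412 to 420 above can be sketched end to end. The fixed per-area distances, the threshold table, and the string stand-ins for actual rendering resources are all illustrative assumptions taken from the example.

```python
DISTANCES = {"A": 0, "B": 100, "C": 200}                 # step 412
THRESHOLDS = [(50, "high"), (100, "medium"), (200, "low")]
LEVEL_ORDER = {"high": 0, "medium": 1, "low": 2}

def level_for(distance):                                 # step 414
    for limit, level in THRESHOLDS:
        if distance <= limit:
            return level
    return None

# Steps 416-418: pick the resource level per area, then queue high-first.
levels = {area: level_for(d) for area, d in DISTANCES.items()}
queue = sorted(levels.items(), key=lambda kv: LEVEL_ORDER[kv[1]])

for area, level in queue:                                # step 420
    print(f"render scene area {area} with {level}-precision resource")
# render scene area A with high-precision resource
# render scene area B with medium-precision resource
# render scene area C with low-precision resource
```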
According to the resource rendering method applied to rendering game scenes, a plurality of scene areas of a target scene are obtained by dividing the target scene, rendering resources under different rendering precision levels are obtained by adjusting the rendering precision of rendering resources corresponding to each scene area, and during later rendering, rendering resources corresponding to the rendering precision levels can be selected according to the line-of-sight information to render the scene areas, so that the effect of reducing memory consumption is achieved, and the rendering efficiency is improved.
Corresponding to the above method embodiment, the present application further provides a resource rendering device embodiment, and fig. 5 shows a schematic structural diagram of a resource rendering device according to an embodiment of the present application. As shown in fig. 5, the apparatus 500 includes:
a dividing module 502 configured to divide a target scene, obtaining a scene area of the target scene;
an adjusting module 504, configured to adjust rendering precision of rendering resources corresponding to the scene area, and generate at least one rendering optimization resource corresponding to the scene area;
a determining module 506, configured to determine, in response to a screen rendering instruction for the target scene, a target rendering resource corresponding to the scene area according to the rendering resource and the at least one rendering optimization resource;
a generating module 508, configured to render the scene area through the target rendering resource, and generate a display picture of the target scene according to an area rendering result.
Optionally, the dividing module 502 is further configured to:
dividing a target scene according to the sight distance information and/or the scene attribute information to obtain a division result;
and determining a scene area of the target scene according to the division result.
Optionally, the adjusting module 504 is further configured to:
determining rendering resources corresponding to the scene area, and determining at least one rendering precision level corresponding to the rendering resources;
and generating rendering optimization resources corresponding to each rendering precision grade according to each rendering precision grade.
Optionally, the adjusting module 504 is further configured to:
obtaining an optimization strategy corresponding to the target rendering precision level;
and optimizing the rendering resources based on the optimization strategy, and generating rendering optimized resources corresponding to the rendering precision grade.
Optionally, the adjusting module 504 is further configured to:
obtaining model map data and model grid data in the rendering resources;
merging the model mapping data based on the optimization strategy, and/or reducing the surface of the model grid data based on the optimization strategy;
and generating rendering optimization resources corresponding to the rendering precision grade according to the processing result.
Optionally, the determining module 506 is further configured to:
determining a target spacing distance corresponding to the scene area according to rendering position information carried in the picture rendering instruction;
determining a target rendering precision grade corresponding to the target spacing distance based on the corresponding relation between the spacing distance and the rendering precision grade;
and acquiring the target rendering resource corresponding to the target rendering precision grade according to the target rendering precision grade.
Optionally, the generating module 508 is further configured to:
according to the rendering precision grade corresponding to each scene area, adding the target rendering resources corresponding to each scene area to a rendering queue in a sequence from top to bottom;
and calling each target rendering resource in the rendering queue from top to bottom to render each scene area.
Optionally, the generating module 508 is further configured to:
determining an idle scene area of the target scene according to the rendering position information;
deleting the target rendering resources corresponding to the idle scene area in the rendering queue to obtain a target rendering queue;
correspondingly, calling each target rendering resource in the rendering queue in the order from top to bottom to render each scene area, including:
and calling each target rendering resource in the target rendering queue in the order from top to bottom to render each scene area.
The application provides a resource rendering apparatus, including: a dividing module configured to divide a target scene and obtain scene areas of the target scene; an adjusting module configured to adjust the rendering precision of rendering resources corresponding to the scene areas and generate at least one rendering optimization resource corresponding to each scene area; a determining module configured to determine, in response to a picture rendering instruction for the target scene, a target rendering resource corresponding to each scene area according to the rendering resource and the at least one rendering optimization resource; and a generating module configured to render the scene areas through the target rendering resources and generate a display picture of the target scene according to the area rendering results. The target scene is divided into a plurality of scene areas, and the rendering precision of the rendering resources corresponding to each scene area is adjusted, thereby reducing memory consumption. Meanwhile, a plurality of rendering optimization resources with different rendering precisions are generated, so that when a picture is rendered later, rendering resources of different rendering precisions can be selected, improving rendering efficiency.
The foregoing is a schematic solution of a resource rendering apparatus according to this embodiment. It should be noted that the technical solution of the resource rendering apparatus and the technical solution of the resource rendering method belong to the same concept, and details that are not described in detail in the technical solution of the resource rendering apparatus can be referred to the description of the technical solution of the resource rendering method.
It should be noted that the components in the device claims should be understood as functional modules that are necessary to implement the steps of the program flow or the steps of the method, and each functional module is not limited to an actual functional division or separation. The device claims defined by such a set of functional modules are to be understood as a functional module framework for implementing the solution mainly by means of a computer program as described in the specification, and not as a physical device for implementing the solution mainly by means of hardware.
There is also provided in an embodiment of the present application a computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, the processor implementing the steps of the resource rendering method when executing the computer instructions.
The above is an illustrative scheme of a computing device of the present embodiment. It should be noted that the technical solution of the computing device and the technical solution of the resource rendering method belong to the same concept, and details that are not described in detail in the technical solution of the computing device can be referred to the description of the technical solution of the resource rendering method.
An embodiment of the present application also provides a computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the resource rendering method as described above.
The above is an illustrative scheme of a computer-readable storage medium of the present embodiment. It should be noted that the technical solution of the storage medium and the technical solution of the resource rendering method belong to the same concept, and details that are not described in detail in the technical solution of the storage medium can be referred to the description of the technical solution of the resource rendering method.
The embodiment of the application discloses a chip, which stores computer instructions, and the computer instructions are executed by a processor to realize the steps of the resource rendering method.
The foregoing description of specific embodiments of the present application has been presented. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), electrical carrier signals, telecommunication signals, a software distribution medium, and the like. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased as required by legislation and patent practice in the jurisdiction; for example, in some jurisdictions, computer readable media do not include electrical carrier signals and telecommunication signals, in accordance with legislation and patent practice.
It should be noted that, for the sake of simplicity, the above-mentioned method embodiments are described as a series of acts or combinations, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art will appreciate that the embodiments described in this specification are presently considered to be preferred embodiments and that acts and modules are not required in the present application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present application disclosed above are intended only to aid in the explanation of the application. Alternative embodiments are not exhaustive and do not limit the invention to the precise embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and its practical applications, to thereby enable others skilled in the art to best understand and utilize the application. The application is limited only by the claims and their full scope and equivalents.

Claims (11)

1. A method of resource rendering, comprising:
dividing a target scene to obtain a scene area of the target scene;
adjusting the rendering precision of rendering resources corresponding to the scene area, and generating at least one rendering optimization resource corresponding to the scene area;
in response to a picture rendering instruction for the target scene, determining a target rendering resource corresponding to the scene area according to the rendering resource and the at least one rendering optimization resource;
and rendering the scene area through the target rendering resource, and generating a display picture of the target scene according to an area rendering result.
2. The method of claim 1, wherein partitioning the target scene to obtain scene regions of the target scene comprises:
dividing a target scene according to the sight distance information and/or the scene attribute information to obtain a division result;
and determining a scene area of the target scene according to the division result.
3. The method of claim 1, wherein adjusting rendering precision of rendering resources corresponding to the scene area to generate at least one rendering optimization resource corresponding to the scene area comprises:
determining rendering resources corresponding to the scene area, and determining at least one rendering precision level corresponding to the rendering resources;
and generating, for each rendering precision level, a rendering optimization resource corresponding to that level.
4. The method of claim 3, wherein the rendering optimization resource for any target rendering precision level is generated by:
obtaining an optimization strategy corresponding to the target rendering precision level;
and optimizing the rendering resources based on the optimization strategy to generate a rendering optimization resource corresponding to the target rendering precision level.
5. The method of claim 4, wherein optimizing the rendering resources based on the optimization strategy to generate a rendering optimization resource corresponding to the target rendering precision level comprises:
obtaining model texture map data and model mesh data in the rendering resources;
merging the model texture map data based on the optimization strategy, and/or reducing faces of the model mesh data based on the optimization strategy;
and generating the rendering optimization resource corresponding to the target rendering precision level according to the processing result.
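Claim 5's two optimizations — merging texture maps (an atlas) and reducing mesh faces (decimation) — can be sketched as below. The merge and the keep-every-kth-face decimation are deliberately toy stand-ins for real atlas packing and mesh simplification; all names are hypothetical.

```python
def optimize_resource(texture_maps, mesh_faces, strategy):
    """Apply a per-level optimization strategy: merge texture maps into a
    single atlas, and/or keep only a fraction of the mesh faces."""
    out_maps = texture_maps
    out_faces = mesh_faces
    if strategy.get("merge_maps"):
        # Toy atlas: one combined map standing in for real texture packing.
        out_maps = ["atlas(" + "+".join(texture_maps) + ")"]
    keep = strategy.get("face_keep_ratio", 1.0)
    if keep < 1.0:
        # Toy decimation: keep every k-th face (real engines use edge collapse).
        step = int(round(1 / keep))
        out_faces = mesh_faces[::step]
    return out_maps, out_faces

maps, faces = optimize_resource(
    ["diffuse", "normal"], list(range(8)),
    {"merge_maps": True, "face_keep_ratio": 0.5},
)
```

A coarser precision level would simply use a strategy with a smaller `face_keep_ratio` and more aggressive map merging.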
6. The method of claim 3, wherein determining the target rendering resource corresponding to the scene area according to the rendering resources and the at least one rendering optimization resource comprises:
determining a target separation distance corresponding to the scene area according to rendering position information carried in the picture rendering instruction;
determining a target rendering precision level corresponding to the target separation distance based on a correspondence between separation distances and rendering precision levels;
and acquiring the target rendering resource corresponding to the target rendering precision level.
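Claim 6's distance-to-level correspondence is, in effect, a sorted lookup table: compute the distance from the rendering position to the scene area, then find the first level whose distance band contains it. A minimal sketch under that assumption (names hypothetical):

```python
import bisect
import math

def target_level(camera_pos, area_center, thresholds):
    """Map the separation distance between the rendering position and a
    scene area to a precision level. thresholds[i] is the maximum
    distance rendered at level i (sorted ascending)."""
    dx = camera_pos[0] - area_center[0]
    dy = camera_pos[1] - area_center[1]
    dist = math.hypot(dx, dy)
    return min(bisect.bisect_left(thresholds, dist), len(thresholds) - 1)

lvl = target_level((0, 0), (30, 40), thresholds=[100, 300, 900])
```

Here the distance is 50, which falls inside the first band, so the finest level 0 is chosen; distances beyond the last threshold clamp to the coarsest level.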
7. The method of claim 3, wherein rendering the scene area through the target rendering resource comprises:
adding the target rendering resources corresponding to each scene area to a rendering queue, ordered according to the rendering precision level corresponding to each scene area;
and calling each target rendering resource in the rendering queue, from front to back, to render each scene area.
8. The method of claim 7, further comprising, before rendering each scene area according to the target rendering resources stored in the rendering queue:
determining an idle scene area of the target scene according to the rendering position information;
and deleting the target rendering resource corresponding to the idle scene area from the rendering queue to obtain a target rendering queue;
correspondingly, calling each target rendering resource in the rendering queue, from front to back, to render each scene area comprises:
calling each target rendering resource in the target rendering queue, from front to back, to render each scene area.
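Claims 7 and 8 together describe a queue that is ordered by precision level and pruned of idle areas before rendering. A compact sketch of both steps (the ordering direction and data shapes are assumptions, not taken from the patent):

```python
def build_render_queue(area_resources, idle_areas):
    """Order target rendering resources by precision level (finest first)
    and drop resources whose scene areas are idle (claims 7 and 8)."""
    queue = sorted(
        (res for res in area_resources if res["area"] not in idle_areas),
        key=lambda r: r["level"],
    )
    # Return the areas in the order they would be rendered, front to back.
    return [r["area"] for r in queue]

resources = [
    {"area": "A", "level": 2},
    {"area": "B", "level": 0},
    {"area": "C", "level": 1},
]
order = build_render_queue(resources, idle_areas={"C"})
```

Area C is pruned as idle, and the remaining areas are rendered finest-level first.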
9. A resource rendering apparatus, comprising:
a dividing module configured to divide a target scene and obtain a scene area of the target scene;
the adjusting module is configured to adjust rendering precision of rendering resources corresponding to the scene area and generate at least one rendering optimization resource corresponding to the scene area;
a determination module configured to determine, in response to a picture rendering instruction for the target scene, a target rendering resource corresponding to the scene area according to the rendering resource and the at least one rendering optimization resource;
and the generating module is configured to render the scene area through the target rendering resource and generate a display picture of the target scene according to an area rendering result.
10. A computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1-8 when executing the computer instructions.
11. A computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 8.
CN202210879186.7A 2022-07-25 2022-07-25 Resource rendering method and device Pending CN115228083A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210879186.7A CN115228083A (en) 2022-07-25 2022-07-25 Resource rendering method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210879186.7A CN115228083A (en) 2022-07-25 2022-07-25 Resource rendering method and device

Publications (1)

Publication Number Publication Date
CN115228083A true CN115228083A (en) 2022-10-25

Family

ID=83676142

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210879186.7A Pending CN115228083A (en) 2022-07-25 2022-07-25 Resource rendering method and device

Country Status (1)

Country Link
CN (1) CN115228083A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117440110A (en) * 2023-10-23 2024-01-23 神力视界(深圳)文化科技有限公司 Virtual shooting control method, medium and mobile terminal

Similar Documents

Publication Publication Date Title
CN109771951B (en) Game map generation method, device, storage medium and electronic equipment
CN104200506A (en) Method and device for rendering three-dimensional GIS mass vector data
US20230120253A1 (en) Method and apparatus for generating virtual character, electronic device and readable storage medium
WO2023231537A1 (en) Topographic image rendering method and apparatus, device, computer readable storage medium and computer program product
EP4394713A1 (en) Image rendering method and apparatus, electronic device, computer-readable storage medium, and computer program product
JP2023519728A (en) 2D image 3D conversion method, apparatus, equipment, and computer program
CN115082609A (en) Image rendering method and device, storage medium and electronic equipment
CN111784817B (en) Shadow display method and device, storage medium and electronic device
CN115228083A (en) Resource rendering method and device
CN109377552B (en) Image occlusion calculating method, device, calculating equipment and storage medium
JP2023525945A (en) Data Optimization and Interface Improvement Method for Realizing Augmented Reality of Large-Scale Buildings on Mobile Devices
EP4287134A1 (en) Method and system for generating polygon meshes approximating surfaces using root-finding and iteration for mesh vertex positions
CN117101127A (en) Image rendering method and device in virtual scene, electronic equipment and storage medium
CN115526976A (en) Virtual scene rendering method and device, storage medium and electronic equipment
CN116152405B (en) Service processing method and device, computer equipment and storage medium
CN114047998B (en) Object updating method and device
CN117298571A (en) Star data construction method and device
US11954802B2 (en) Method and system for generating polygon meshes approximating surfaces using iteration for mesh vertex positions
CN113398575B (en) Thermodynamic diagram generation method and device, computer readable medium and electronic equipment
US20230394767A1 (en) Method and system for generating polygon meshes approximating surfaces using root-finding and iteration for mesh vertex positions
US20240005588A1 (en) Image rendering method and apparatus, electronic device, computer-readable storage medium, and computer program product
US20220351479A1 (en) Style transfer program and style transfer method
CN117611728A (en) Image display method and device, computing device and computer readable storage medium
CN116109745A (en) Object rendering method and device based on instantiation technology
CN115719392A (en) Virtual character generation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination