CN117197275A - Terrain rendering method and device - Google Patents

Terrain rendering method and device

Info

Publication number: CN117197275A
Application number: CN202311172675.XA
Authority: CN (China)
Prior art keywords: rendering, data, grass, topographic, terrain
Legal status: Pending (the status listed is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventor: 刘立
Current Assignee: Guangzhou Yiju Future Network Technology Co ltd
Original Assignee: Guangzhou Yiju Future Network Technology Co ltd
Application filed by Guangzhou Yiju Future Network Technology Co ltd
Priority to: CN202311172675.XA
Publication of: CN117197275A

Landscapes

  • Image Generation (AREA)

Abstract

The application discloses a terrain rendering method and a terrain rendering device. Mixed proportion data of a plurality of terrain texture maps at a pixel point is obtained based on parameters of the pixel point in a sampling image, and the vertices corresponding to the pixel points in the terrain grid area are rendered based on the terrain texture maps and the mixed proportion data. With a limited set of terrain texture maps, the rendering effects of different terrain textures can be achieved through different mixing proportions, and a terrain texture map does not need to be drawn separately for each expected rendering effect, which reduces the workload of drawing a large number of terrain texture maps and improves terrain rendering efficiency.

Description

Terrain rendering method and device
Technical Field
The application relates to the technical field of computers, in particular to a terrain rendering method and device.
Background
To increase the diversity of terrain rendering, a plurality of different terrain texture maps are typically manually drawn and then rendered with different terrain texture maps in different rendering areas.
When this method is adopted, a large number of topographic texture maps need to be drawn manually, so the efficiency of terrain rendering is low.
Disclosure of Invention
The application provides a terrain rendering method and a terrain rendering device, which can reduce the workload of drawing a large number of terrain texture maps and improve terrain rendering efficiency. The technical scheme is as follows.
In a first aspect, there is provided a terrain rendering method, the method comprising:
acquiring mixed proportion data of a plurality of topographic texture maps at pixel points based on parameters of the pixel points in the sampling image;
and rendering the vertexes corresponding to the pixel points in the terrain grid area based on the terrain texture maps and the mixed proportion data.
In one possible implementation manner, the rendering the vertices corresponding to the pixels in the terrain mesh area based on the plurality of terrain texture maps and the blending proportion data includes:
mixing a plurality of topographic texture maps based on the mixing proportion data to obtain a mixed texture map;
and rendering the vertexes corresponding to the pixel points in the terrain grid area based on the mixed texture map.
In one possible implementation manner, the blending proportion data includes a weight of each of the plurality of topographic texture maps, and the obtaining blending proportion data of the plurality of topographic texture maps at the pixel point based on the parameter of the pixel point in the sampled image includes:
acquiring the weight of the topographic texture map corresponding to the color channel at the pixel point based on the value of the color channel of the pixel point in the sampling image.
In one possible implementation manner, the plurality of topographic texture maps includes a first topographic texture map, a second topographic texture map, a third topographic texture map, and a fourth topographic texture map, and the obtaining the weight of the topographic texture map corresponding to the color channel at the pixel point based on the value of the color channel of the pixel point in the sampled image includes:
acquiring the weight of the first topographic texture map at the pixel point based on the value of the red channel of the pixel point in the sampling image, wherein the first topographic texture map corresponds to the red channel;
acquiring the weight of the second topographic texture map at the pixel point based on the value of the blue channel of the pixel point in the sampling image, wherein the second topographic texture map corresponds to the blue channel;
acquiring the weight of the third topographic texture map at the pixel point based on the value of the green channel of the pixel point in the sampling image, wherein the third topographic texture map corresponds to the green channel;
and acquiring the weight of the fourth topographic texture map at the pixel point based on the value of the opaque channel of the pixel point in the sampling image, wherein the fourth topographic texture map corresponds to the opaque channel.
In one possible implementation, the sampled image includes N partitioned sampled images, and the terrain mesh region includes N rendered regions;
rendering vertices corresponding to the pixel points in the terrain mesh region based on the plurality of terrain texture maps and the mixing ratio data, including:
acquiring Mth mixing proportion data based on the Mth partition sampling image;
rendering vertices corresponding to pixel points in an Mth rendering area based on the plurality of terrain texture maps and the Mth mixing proportion data;
wherein N is an integer greater than 1, and M is a positive integer less than or equal to N.
In one possible implementation manner, the rendering the vertex corresponding to the pixel point in the mth rendering area based on the plurality of topographic texture maps and the mth mixed proportion data includes:
mixing the terrain texture maps based on the Mth mixing proportion data to obtain an Mth mixing texture map;
and rendering the vertex corresponding to the pixel point in the Mth rendering area based on the Mth mixed texture map.
In one possible implementation, the plurality of topographic texture maps comprises K topographic texture map sets, and the sampling image comprises K region sampling images;
The obtaining the mixed proportion data of the plurality of topographic texture maps at the pixel points based on the parameters of the pixel points in the sampling image comprises the following steps:
acquiring L-th proportion data of the topographic texture maps in the L-th topographic texture map set at the pixel point based on the parameters of the pixel point in the L-th region sampling image;
all of the L-th proportion data together constitute the mixing proportion data;
wherein K is an integer greater than 1, and L is a positive integer less than or equal to K.
In one possible implementation, the method further includes:
obtaining rendering data of the terrain grid area based on the value of each color channel of the pixel point of the sampling image;
and inserting a grass blade into the terrain mesh area based on the rendering data.
In one possible implementation, the inserting a grass blade into the terrain mesh area based on the rendering data includes:
inserting the grass blades into a unit area in the terrain mesh area in response to determining that the rendering data within the unit area satisfies a grass texture feature.
In one possible implementation, the inserting a grass blade into the terrain mesh area based on the rendering data includes:
determining the number of the grass blades based on the proportion of the grass texture indicated by the rendering data, wherein the number of the grass blades is positively correlated with the proportion of the grass texture;
inserting said number of said grass blades into said unit area.
In one possible implementation, the inserting a grass blade into the terrain mesh area based on the rendering data includes:
determining a grass blade based on the rendering data and a distance between a unit area in the terrain mesh area and the rendering center, the grass blade comprising at least one vertex, a number of vertices in the grass blade being inversely related to the distance;
the grass blades are inserted into the unit areas.
In one possible implementation, the grass blades include a first grass blade and a second grass blade, the number of vertices in the first grass blade being greater than the number of vertices in the second grass blade, the determining a grass blade based on the rendering data and a distance between a unit area in the terrain mesh area and the rendering center, comprising:
in response to determining that the distance is less than or equal to a distance threshold, determining the first grass blade; or,
in response to determining that the distance is greater than or equal to a distance threshold, determining the second grass blade.
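For illustration only, the following sketch shows one way the grass-density and level-of-detail choices described above could be expressed; the function names, the blade cap, and the threshold value are assumptions, not values from the application:

```python
def grass_blade_count(grass_ratio, max_blades_per_cell=32):
    """Blade count rises with the grass-texture proportion in the unit
    area (the positive correlation described above); the cap of 32 is an
    illustrative assumption."""
    return round(grass_ratio * max_blades_per_cell)

def pick_grass_blade(distance, threshold, high_detail_blade, low_detail_blade):
    """Level of detail: the blade with more vertices near the rendering
    center, the blade with fewer vertices farther away."""
    return high_detail_blade if distance <= threshold else low_detail_blade

# e.g. a unit area whose rendering data is 60% grass, 10 units from the center:
count = grass_blade_count(0.6)                              # 19 blades
blade = pick_grass_blade(10.0, 50.0, "blade_7_vertices", "blade_3_vertices")
```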
In one possible implementation manner, the obtaining the mixed proportion data of the plurality of topographic texture maps at the pixel points based on the parameters of the pixel points in the sampled image includes:
acquiring mixed proportion data of a topographic texture map corresponding to the gray value at the pixel point based on the gray value of the pixel point in the sampling image; or,
obtaining mixed proportion data of a topographic texture map corresponding to a transparency channel at the pixel point based on the value of the transparency channel of the pixel point in the sampling image; or,
and obtaining mixed proportion data of the topographic texture map corresponding to the height value at the pixel point based on the height value of the pixel point in the sampling image.
In a second aspect, there is provided a terrain rendering apparatus, the apparatus comprising:
the acquisition module is used for acquiring mixed proportion data of a plurality of topographic texture maps at pixel points based on parameters of the pixel points in the sampling image;
and the rendering module is used for rendering the vertexes corresponding to the pixel points in the terrain grid area based on the terrain texture maps and the mixed proportion data.
In a possible implementation manner, the rendering module is configured to mix a plurality of topographic texture maps based on the mixing proportion data to obtain a mixed texture map; and rendering the vertexes corresponding to the pixel points in the terrain grid area based on the mixed texture map.
In one possible implementation manner, the mixing proportion data includes a weight of each of the plurality of topographic texture maps, and the obtaining module is configured to obtain the weight of the topographic texture map corresponding to the color channel at the pixel point based on the value of the color channel of the pixel point in the sampled image.
In one possible implementation manner, the plurality of topographic texture maps includes a first topographic texture map, a second topographic texture map, a third topographic texture map, and a fourth topographic texture map, and the acquiring module is configured to acquire a weight of the first topographic texture map at the pixel point based on a value of a red channel of the pixel point in the sampled image, where the first topographic texture map corresponds to the red channel; acquiring the weight of the second topographic texture map at the pixel point based on the value of the blue channel of the pixel point in the sampling image, wherein the second topographic texture map corresponds to the blue channel; acquiring the weight of the third topographic texture map at the pixel point based on the value of the green channel of the pixel point in the sampling image, wherein the third topographic texture map corresponds to the green channel; and acquiring the weight of the fourth topographic texture map at the pixel point based on the value of the opaque channel of the pixel point in the sampling image, wherein the fourth topographic texture map corresponds to the opaque channel.
In one possible implementation, the sampled image includes N partitioned sampled images, and the terrain mesh region includes N rendered regions;
the rendering module is used for acquiring the Mth mixing proportion data based on the Mth partition sampling image; rendering vertices corresponding to pixel points in an Mth rendering area based on the plurality of terrain texture maps and the Mth mixing proportion data; wherein N is an integer greater than 1, and M is a positive integer less than or equal to N.
In a possible implementation manner, the rendering module is configured to mix the plurality of topographic texture maps based on the mth mixing proportion data to obtain the mth mixing texture map; and rendering the vertex corresponding to the pixel point in the Mth rendering area based on the Mth mixed texture map.
In one possible implementation, the plurality of topographic texture maps comprises K topographic texture map sets, and the sampling image comprises K region sampling images;
the obtaining module is configured to obtain, based on parameters of a pixel point in the L-th region sampling image, L-th proportion data of the topographic texture maps in the L-th topographic texture map set at the pixel point; all of the L-th proportion data together constitute the mixing proportion data;
wherein K is an integer greater than 1, and L is a positive integer less than or equal to K.
In a possible implementation manner, the obtaining module is further configured to obtain rendering data of the terrain mesh area based on a value of each color channel of the pixel point of the sampled image;
the rendering module is further configured to insert a grass insert into the terrain mesh area based on the rendering data.
In one possible implementation, the rendering module is configured to insert the grass blades into a unit area in the terrain mesh area in response to determining that rendering data within the unit area satisfies a grass texture feature.
In one possible implementation, the rendering module is configured to determine the number of the grass blades based on the proportion of the grass texture indicated by the rendering data, the number of the grass blades being positively correlated with the proportion of the grass texture; and insert said number of said grass blades into said unit area.
In one possible implementation, the rendering module is configured to determine a grass blade based on the rendering data and a distance between a unit area in the terrain mesh area and the rendering center, the grass blade including at least one vertex, a number of vertices in the grass blade being inversely related to the distance; the grass blades are inserted into the unit areas.
In one possible implementation, the grass blades include a first grass blade and a second grass blade, the number of vertices in the first grass blade being greater than the number of vertices in the second grass blade, and the rendering module is configured to determine the first grass blade in response to determining that the distance is less than or equal to a distance threshold; alternatively, the second grass blade is determined in response to determining that the distance is greater than or equal to a distance threshold.
In one possible implementation manner, the obtaining module is configured to obtain, based on a gray value of the pixel point in the sampled image, mixed proportion data of a topographic texture map at the pixel point, where the mixed proportion data corresponds to the gray value; or, based on the value of the transparency channel of the pixel point in the sampling image, obtaining the mixed proportion data of the topographic texture map corresponding to the transparency channel at the pixel point; or, based on the height value of the pixel point in the sampling image, obtaining the mixed proportion data of the topographic texture map corresponding to the height value at the pixel point.
In a third aspect, a server is provided, the server comprising: a processor coupled to a memory having stored therein at least one computer program instruction that is loaded and executed by the processor to cause the server to implement the method of the first aspect or any of the alternatives of the first aspect.
In a fourth aspect, there is provided a computer readable storage medium having stored therein at least one instruction which when executed on a computer causes the computer to perform the method of the first aspect or any of the alternatives of the first aspect.
In a fifth aspect, there is provided a computer program product comprising one or more computer program instructions which, when loaded and run by a computer, cause the computer to carry out the method of the first aspect or any of the alternatives of the first aspect.
In a sixth aspect, a chip is provided, the chip comprising programmable logic circuitry and/or program instructions for implementing the method of the first aspect or any of the alternatives of the first aspect when the chip runs.
In a seventh aspect, a server cluster is provided, where the server cluster includes a first server and a second server, and the first server and the second server are configured to cooperatively implement a method according to the first aspect or any of the alternatives of the first aspect.
Accordingly, the embodiments of the present application have the following beneficial effects:
Mixed proportion data of a plurality of topographic texture maps at pixel points is obtained based on parameters of the pixel points in the sampling image, and the vertices corresponding to the pixel points in the terrain grid area are rendered based on the topographic texture maps and the mixed proportion data. Different terrain texture rendering effects can therefore be achieved with a limited set of topographic texture maps and different mixing proportions, and a separate topographic texture map does not need to be drawn for each expected rendering effect, which reduces the workload of drawing a large number of topographic texture maps and improves terrain rendering efficiency.
Drawings
FIG. 1 is a flow chart of a terrain rendering method provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a sampled image according to an embodiment of the present application;
FIG. 3 is a schematic diagram of rendering effects obtained according to the sampled image shown in FIG. 2 according to an embodiment of the present application;
FIG. 4 is a schematic diagram of terrain mesh data provided by an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a terrain rendering device according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
Fig. 1 is a flowchart of a terrain rendering method according to an embodiment of the present application. The method shown in fig. 1 includes the following steps S110 to S120.
Step S110, based on parameters of pixel points in the sampling image, mixing proportion data of a plurality of topographic texture maps at the pixel points is obtained.
In one possible implementation, the blending ratio data of the plurality of topographic texture maps at the pixel points is obtained based on the color data of the pixel points in the sampled image. For example, the sampled image includes a first pixel, the color data of the first pixel includes values of a plurality of color channels of the first pixel, and a ratio between the values of different color channels of the first pixel is obtained as mixed ratio data of a plurality of topographic texture maps. For example, the value of the first color channel of the first pixel point in the sampled image is 0.3, the sum of the values of all color channels (including the first color channel and other color channels except the first color channel) of the first pixel point is 1, the first color channel corresponds to the first topographic texture map, the ratio of the value of the first color channel to the sum of the values of all color channels is 0.3, and then the weight of the first topographic texture map is 0.3 when the vertex corresponding to the first pixel point is rendered.
The sampled image may be parsed to obtain the value of each color channel of each pixel point in the sampled image. The weight of the topographic texture map corresponding to each color channel is acquired at the pixel point, and the weights of the topographic texture maps together serve as the mixing proportion data of the pixel point.
In one possible implementation, the value of each color channel is normalized such that the value of each color channel is between 0 and 1; then, the sum of the values of each color channel is calculated; then, the proportion of the value of each color channel in the sum is calculated to obtain the mixing proportion data.
The mixing proportion of each topographic texture map is accurately controlled by acquiring the weight of the corresponding topographic texture map at the pixel point based on the color channel value of the pixel point in the sampled image. The values of the different color channels correspond to the weights of the different topographic texture maps, so that the display intensity of the different topographic texture maps can be determined according to the color channel values of the pixel points in the rendering process, and the presentation of the topographic texture can be finely adjusted. In addition, each topographic texture map corresponds to different color channels in the sampled image, so that a specific texture mixing mode can be determined according to the value of each different color channel, and more flexible and diversified topographic texture combinations are realized.
In one possible implementation, the red channel (R), the green channel (G), the blue channel (B), and the opaque channel (A) of the sampled image each correspond to one topographic texture map. Illustratively, the plurality of topographic texture maps includes a first topographic texture map, a second topographic texture map, a third topographic texture map, and a fourth topographic texture map, the first topographic texture map corresponding to the red channel, the second topographic texture map corresponding to the blue channel, the third topographic texture map corresponding to the green channel, and the fourth topographic texture map corresponding to the opaque channel. The sampled image is parsed to obtain the value of the red channel, the value of the blue channel, the value of the green channel, and the value of the opaque channel of each pixel point in the sampled image. The weight of the first topographic texture map at the pixel point is acquired based on the value of the red channel of the pixel point in the sampled image; the weight of the second topographic texture map at the pixel point is acquired based on the value of the blue channel; the weight of the third topographic texture map at the pixel point is acquired based on the value of the green channel; and the weight of the fourth topographic texture map at the pixel point is acquired based on the value of the opaque channel.
In one possible implementation, the terrain texture weight is determined from the ratio between the four color channels R, G, B and A. For example, the sum of the values of the four color channels, namely the value of the red channel, the value of the blue channel, the value of the green channel, and the value of the opaque channel, is obtained. The ratio of the value of the red channel of a pixel point in the sampling image to the sum of the four color channels is determined and taken as the weight of the first topographic texture map at the pixel point; the ratio of the value of the blue channel to the sum of the four color channels is taken as the weight of the second topographic texture map at the pixel point; the ratio of the value of the green channel to the sum of the four color channels is taken as the weight of the third topographic texture map at the pixel point; and the ratio of the value of the opaque channel to the sum of the four color channels is taken as the weight of the fourth topographic texture map at the pixel point.
For example, referring to fig. 2 and fig. 3, fig. 2 is a schematic diagram of a sampled image according to an embodiment of the present application, fig. 3 is a schematic diagram of rendering effects obtained according to the sampled image shown in fig. 2, color data of each pixel point in the sampled image in fig. 2 includes values of four color channels R, G, B and a, the four color channels of the sampled image correspond to four topographic texture maps, and the rendering effects obtained according to the sampled image in fig. 2 are shown in fig. 3.
For example, the sampled image includes a first pixel point, where the value of the first pixel point in the red channel (R) is 0.3, the value in the green channel (G) is 0.4, the value in the blue channel (B) is 0.2, and the value in the opaque channel (A) is 0.1. The sum of the four color channel values is calculated: 0.3 + 0.4 + 0.2 + 0.1 = 1.0. Then, the ratio between the value of each color channel and the sum is calculated: the weight of the red channel (R) is 0.3/1.0 = 0.3 (30%), the weight of the green channel (G) is 0.4/1.0 = 0.4 (40%), the weight of the blue channel (B) is 0.2/1.0 = 0.2 (20%), and the weight of the opaque channel (A) is 0.1/1.0 = 0.1 (10%). The weight of the first topographic texture map at the pixel point is determined to be 0.3, the weight of the second topographic texture map at the pixel point is determined to be 0.4, the weight of the third topographic texture map at the pixel point is determined to be 0.2, and the weight of the fourth topographic texture map at the pixel point is determined to be 0.1.
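As a minimal illustrative sketch (not part of the application), the normalization described above can be expressed as follows; the function name is an assumption, and channel values are assumed to be pre-normalized to the 0..1 range:

```python
def channel_blend_weights(channel_values):
    """Turn one sampled pixel's color channel values into blend weights.

    Each color channel is bound to one topographic texture map; the weight
    of that map is its channel's share of the sum of all channel values.
    Channel values are assumed normalized to 0..1 (e.g. 8-bit values
    divided by 255).
    """
    total = sum(channel_values)
    if total == 0:
        # Degenerate pixel: fall back to an even mix (illustrative choice).
        return [1.0 / len(channel_values)] * len(channel_values)
    return [v / total for v in channel_values]

# Worked example from the text: R=0.3, G=0.4, B=0.2, A=0.1, sum 1.0.
print(channel_blend_weights([0.3, 0.4, 0.2, 0.1]))  # [0.3, 0.4, 0.2, 0.1]
```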
In another possible implementation, the cyan channel (C), the magenta channel (M), the yellow channel (Y), and the black channel (K) of the sampled image each correspond to one topographic texture map. Illustratively, the plurality of topographic texture maps includes a first topographic texture map corresponding to the cyan channel, a second topographic texture map corresponding to the yellow channel, a third topographic texture map corresponding to the magenta channel, and a fourth topographic texture map corresponding to the black channel. The sampled image is parsed to obtain the cyan channel value, the yellow channel value, the magenta channel value, and the black channel value of each pixel point in the sampled image. The weight of the first topographic texture map at the pixel point is acquired based on the value of the cyan channel of the pixel point in the sampled image; the weight of the second topographic texture map is acquired based on the value of the yellow channel; the weight of the third topographic texture map is acquired based on the value of the magenta channel; and the weight of the fourth topographic texture map is acquired based on the value of the black channel.
For example, the sum of four color channels of the value of the cyan channel, the value of the yellow channel, the value of the magenta channel, and the value of the black channel is obtained. Determining the ratio of the value of a cyan channel of a pixel point in a sampling image to the sum of four color channels, and taking the ratio as the weight of a first topographic texture map at the pixel point; determining the ratio of the value of the yellow channel of the pixel point in the sampling image to the sum of the four color channels, and taking the ratio as the weight of the second topographic texture map at the pixel point; determining the ratio of the value of the magenta channel of the pixel point in the sampling image to the sum of the four color channels, and taking the ratio as the weight of the third topographic texture map at the pixel point; and determining the ratio of the value of the black channel of the pixel point in the sampling image to the sum of the four color channels, and taking the ratio as the weight of the fourth topographic texture map on the pixel point.
In still another possible implementation, the hue channel (H), the saturation channel (S), and the lightness channel (L) of the sampled image each correspond to one topographic texture map. Illustratively, the plurality of topographic texture maps includes a first topographic texture map corresponding to the hue channel, a second topographic texture map corresponding to the lightness channel, and a third topographic texture map corresponding to the saturation channel. The sampled image is parsed to obtain the value of the hue channel, the value of the lightness channel, and the value of the saturation channel of each pixel point in the sampled image. The weight of the first topographic texture map at the pixel point is acquired based on the value of the hue channel of the pixel point in the sampled image; the weight of the second topographic texture map is acquired based on the value of the lightness channel; and the weight of the third topographic texture map is acquired based on the value of the saturation channel.
In one possible implementation, the terrain texture weight is determined from the ratio between the three color channels: hue, saturation and lightness. For example, the sum of the values of the three color channels, that is, the value of the hue channel, the value of the saturation channel, and the value of the lightness channel, is obtained. The ratio of the value of the hue channel of a pixel point in the sampling image to the sum of the three color channels is determined and taken as the weight of the first topographic texture map at the pixel point; the ratio of the value of the lightness channel to the sum of the three color channels is taken as the weight of the second topographic texture map at the pixel point; and the ratio of the value of the saturation channel to the sum of the three color channels is taken as the weight of the third topographic texture map at the pixel point.
In another possible implementation, the mixed proportion data of the plurality of topographic texture maps at the pixel points is obtained based on the height values of the pixel points in the sampled image. For example, the height value of a pixel is used as a weight of a topographic texture map. For example, a first topographical texture map is used in response to determining that the height value of the pixel point is greater than a threshold value, and a second topographical texture map is used in response to determining that the height value of the pixel point is less than or equal to the threshold value. For another example, the height value of the pixel and the value of each color channel of the pixel are normalized. For example, the height value of a pixel is linearly mapped or normalized, and the color channel values are divided by 255 (or other range) such that the height value of the pixel scales to the same range of values as each color channel of the pixel. Then, calculating the sum of the value of each color channel of the pixel point and the height value of the pixel point; and then, calculating the ratio of the value of each color channel of the pixel point and the height value of the pixel point in the sum to obtain mixed ratio data.
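A minimal sketch of the height-based variants above (threshold selection, and folding a normalized height value into the blend ratio); the names, the 0..255 ranges, and the fallback are assumptions for illustration:

```python
def pick_map_by_height(height, threshold):
    """Threshold variant: use the first topographic texture map above the
    threshold, otherwise the second (as described above)."""
    return "first_map" if height > threshold else "second_map"

def height_and_color_weights(height, color_channels, height_range=(0.0, 255.0)):
    """Fold a pixel's height value into the blend ratio alongside its color
    channels: scale everything to 0..1, then take each value's share of the
    sum (mirroring the normalization described for color channels)."""
    lo, hi = height_range
    values = [c / 255.0 for c in color_channels] + [(height - lo) / (hi - lo)]
    total = sum(values)
    return [v / total for v in values] if total > 0 else None
```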
In another possible implementation, the blending proportion data of the plurality of topographic texture maps at the pixel points is obtained based on the normal vector of the pixel points in the sampled image. For example, at least one of the slope of the normal vector of the pixel point or the direction of the normal vector is mapped to a weight of a topographic texture map.
In another possible implementation, the mixed proportion data of the plurality of topographic texture maps at the pixel points is obtained based on the illumination values of the pixel points in the sampled image. For example, the illumination value of the pixel point is used as the weight of a topographic texture map.
In another possible implementation, the blending proportion data of the plurality of topography texture maps at the pixel points is obtained based on the shading values or the shadow values of the pixel points in the sampled image. For example, the shading value or the shadow value of the pixel point is used as the weight of a topographic texture map.
In another possible implementation, based on the gray values of the pixels in the sampled image, the blending ratio data of the topographic texture map at the pixels corresponding to the gray values is obtained.
And step S120, rendering the vertexes corresponding to the pixel points in the terrain grid area based on the terrain texture maps and the mixed proportion data.
A terrain mesh area is a discretized region used to represent terrain, typically for terrain rendering and simulation. In computer graphics, a terrain mesh area may be regarded as a two-dimensional or three-dimensional mesh consisting of a series of adjacent vertices and the edges connecting them.

In the two-dimensional case, the terrain mesh area is a mesh plane consisting of a series of vertices and edges. Each vertex represents a point of the terrain surface, and the edges represent the connections between adjacent points. Each vertex may be assigned a height value to simulate the height changes of the terrain. An interpolation algorithm can infer the heights of unknown points from known height values to form a smooth terrain model.

In the three-dimensional case, the terrain mesh area is a mesh of connected triangles (or other polygons) that approximates the terrain surface. Each vertex represents a point of the terrain surface, and each triangle represents a small patch of terrain. Each vertex may be assigned a height value to simulate the height changes of the terrain, and a continuous terrain surface model can be obtained by interpolating and smoothing the height values over each triangle.

The fineness of the terrain mesh area depends on the resolution of the mesh, that is, the number of vertices. A higher resolution provides a more detailed and realistic terrain representation, but also increases computational and rendering complexity. A lower resolution improves performance, but may reduce the definition of the terrain surface.
In one possible implementation, step S120 includes the following steps A to C.
Step A: determine, through a preset sequence array in the selected grid, the height value of each pixel point and the position information of each pixel point in the grid, and construct the terrain grid area based on the height values of the pixel points and the position information of the pixel points.
In one possible implementation, the terrain mesh data behaves like a gray scale map. For example, color shading (gray scale) in the height map is used to represent the height changes of the terrain. Specifically, the whiter a region in the height map, the greater the height value of that region, that is, the higher the terrain there. Conversely, the darker a region in the height map, the smaller its height value, that is, the lower the terrain. For example, referring to fig. 4, fig. 4 is a schematic diagram of terrain mesh data according to an embodiment of the present application; by observing the brightness of the colors in fig. 4, the height changes of the terrain can be understood intuitively.
A sequence array refers to an array in which sequences are used as data. In other words, each element in the sequence array is a sequence type of data. The number of bits per element (i.e., sequence) in the sequence array may be set according to the accuracy requirements. For example, the number of bits of the sequence may be 8 bits or 16 bits, etc. The greater the number of bits of the sequence, the finer the height value representation can be provided. The value of the sequence corresponds to the height value. The sequence is in binary format.
The array is stored in an ordered form, so the distribution of the sequences is ordered. The number of sequence entries in each row and each column of the array corresponds to the number of pixel points in the terrain grid. For example, the number of sequence entries per row of the array indicates the number of pixels per row in the terrain grid, and the number of sequence entries per column indicates the number of pixels per column. For example, if the array has 1080 sequences per row and 960 sequences per column, the data represents a terrain grid containing 1080×960 pixels, with 1080 pixels per row and 960 pixels per column. The horizontal distance between adjacent pixel points is the same, so each sequence can be used as the height value of its pixel point to render the height map.
Step B: obtain vertex data in the grid based on the coordinate information of the pixel points in the grid and the height values of the pixel points.
In one possible implementation, each pixel in the sampled image serves as a vertex in the terrain mesh area. The data of the vertex includes coordinate information and a height value, which is subsequently used for texture rendering of the vertex.
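A minimal sketch of steps A and B under the assumptions above (uniform pixel spacing, one vertex per pixel); the function and variable names are illustrative:

```python
import numpy as np

def build_vertices(height_array, spacing=1.0):
    """Build terrain vertex data from a 2-D array of height sequences.

    Each array element plays the role of one pixel: its row/column index
    gives the vertex's horizontal position (uniform spacing, per the text)
    and its value gives the vertex's height.
    """
    rows, cols = height_array.shape
    return [
        (c * spacing, float(height_array[r, c]), r * spacing)
        for r in range(rows)
        for c in range(cols)
    ]

# A 1080x960 array as in the example would yield 1080*960 vertices;
# here a tiny grid of 16-bit sequences for illustration:
heights = np.array([[0, 7], [3, 12]], dtype=np.uint16)
print(build_vertices(heights))
```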
Step C: acquire all vertex data of the terrain grid area, and then perform texture rendering on all vertices of the terrain grid area according to the texture feature configuration to be rendered.
In general, texture rendering needs to be performed on the whole terrain grid area. However, some terrains are so large that, limited by hardware, the whole terrain grid area cannot be rendered at one time, or cannot be rendered at one time without occupying too much hardware. Therefore, the complete terrain grid area is divided into a plurality of rendering areas, which are then rendered in sequence according to a preset rule, so that the whole terrain grid area is rendered. Alternatively, instead of rendering the entire terrain mesh area at once, a specific rendering area may be rendered as needed; for example, the rendering areas within a preset radius of a designated virtual character's position may be used as the specific rendering areas, which reduces the hardware requirements of rendering.
The texture feature configuration is performed by using a topographic texture map as topographic texture data. Specifically, the topographic texture map is analyzed to obtain color distribution data of all pixel points in the topographic texture map, the color data are RGB data, and then the color distribution data are used as topographic texture data (including the number of the pixel points and the colors corresponding to the pixel points).
In one possible implementation, the terrain mesh region to be rendered is rendered entirely by closely paving terrain texture data in the terrain mesh region.
By tiling is meant tiling and filling the topographical texture data (or map) within the topographical grid area such that the entire area surface of the topographical grid area is completely covered by the map.
In one possible implementation, the precision of tiling is adjusted by configuring the tiling parameters. For example, the pixel size of the topographic texture map is 20×20, and tiling is required in a rendering area of size 40×40. Depending on the tiling degree, a 1:1 tiling may be selected for the 40×40 rendering area, in which case every pixel point in the 40×40 rendering area is covered and rendered one by one, and the number of tiled topographic texture maps is 4. When a 1:4 tiling is selected, only one quarter of the pixels in the 40×40 rendering area are covered, i.e., one pixel out of every 4 pixels is rendered.
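A small sketch of the arithmetic in this example (illustrative only; the function names and the reading of a 1:N ratio as "one of every N pixels" follow the text above):

```python
def tile_count_1to1(texture_px, region_px):
    """Number of full texture tiles covering a square region at 1:1 tiling."""
    per_axis = region_px / texture_px
    return per_axis * per_axis

def covered_pixels(region_px, ratio):
    """At 1:N tiling only one of every N pixels is covered and rendered."""
    return (region_px * region_px) // ratio

print(tile_count_1to1(20, 40))   # 4.0 tiles, as in the 20x20-in-40x40 example
print(covered_pixels(40, 4))     # 400 of the 1600 pixels are rendered at 1:4
```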
Rendering with only a single terrain texture map may make the rendered terrain appear monotonous. However, if the mixing proportion data is not introduced and only different topographic texture maps are used in different rendering areas, excessive manpower is required to draw the topographic texture maps when more of them are needed. In this embodiment, the mixing proportion data is obtained based on the sampling image, and rendering is performed based on the mixing proportion data and the plurality of topographic texture maps, so that more terrain texture effects can be achieved through a limited set of topographic texture maps and different mixing proportions, which greatly reduces the number of manually drawn topographic texture maps.
According to the method provided by this embodiment, mixed proportion data of a plurality of topographic texture maps at pixel points is obtained based on parameters of the pixel points in the sampling image, and the vertices corresponding to the pixel points in the terrain grid area are rendered based on the topographic texture maps and the mixing proportion data. Different terrain texture rendering effects can therefore be achieved through a limited set of topographic texture maps and different mixing proportions, and a separate topographic texture map does not need to be drawn for each expected rendering effect, which reduces the workload of drawing a large number of topographic texture maps and improves terrain rendering efficiency.
In addition, mixing multiple texture maps can present more diverse surface features in different areas, making the terrain more realistic and detailed, than using only a single terrain texture map.
In addition, the mixing proportion data is acquired by using the parameters of the pixel points in the sampling image, so that the accuracy of the terrain texture rendering effect is improved. Each pixel point can determine the used terrain texture map and the corresponding mixing proportion according to the numerical value of the parameter, which is helpful for accurately rendering the effect to each pixel point, thereby presenting details and characteristics of the terrain more finely and enabling the terrain to be more real and lifelike.
In addition, if new terrain effects are needed or original texture effects are modified, the original texture images can be added or replaced, or the mixing proportion of the original texture images can be adjusted, and a large number of single texture images do not need to be drawn and modified again, so that the extensibility is good.
The following illustrates implementations of acquiring the plurality of topographic texture maps and the sampled image.
A topographical texture map is an image that contains topographical texture data that is used to impart different textures and colors to a topographical surface in a topographical rendering.
In one possible implementation, the color data for each pixel in the topographical texture map represents the texture characteristics of that pixel. The color data includes values for one or more color channels.
Illustratively, the texture features are represented by RGBA data of a topographic texture map. Specifically, the color data of each pixel in the topographic texture map includes the value of the pixel in the Red channel (Red, R for short), the value of the pixel in the Green channel (Green, G for short), the value of the pixel in the Blue channel (Blue, B for short), and the value of the pixel in the opaque channel (Alpha, A for short). The combination of the four parameters R, G, B and A is also referred to as RGBA. The value of each color channel typically ranges from 0 to 255. An Alpha value of 0 indicates complete transparency, and the maximum Alpha value indicates complete opacity.
Illustratively, the texture features are represented by CMYK data of a topographical texture map. Specifically, the color data of each pixel in the topographic texture map includes a value of the pixel in a Cyan (Cyan) channel, a value of the pixel in a Magenta (Magenta) channel, a value of the pixel in a Yellow (Yellow) channel, and a value of the pixel in a black (Key, K) channel.
Illustratively, the texture features are represented by HSL data of a topographical texture map. Specifically, the color data of each pixel in the topographic texture map includes a value of the pixel in a Hue (Hue) channel, a value of the pixel in a Saturation (Saturation) channel, and a value of the pixel in a brightness (Lightness) channel.
Illustratively, the texture features are represented by HSV data of a topographical texture map. Specifically, the color data of each pixel in the topographic texture map includes a Value of the pixel in a Hue (Hue) channel, a Value of the pixel in a Saturation (Saturation) channel, and a Value of the pixel in a brightness (Value) channel.
The blending ratio data is used to indicate the blending ratio of the plurality of topographic texture maps during the rendering process. For example, the blending proportion data includes a weight for each of the plurality of topographical maps. As another example, the blending proportion data includes a proportional relationship between weights of different ones of the plurality of topographical texture maps. As another example, the blending proportion data includes a duty cycle of the weight of each of the plurality of topographical maps in a sum of the weights of all of the topographical maps. According to the mixed proportion data of one pixel point, which of a plurality of topographic texture maps and the proportion of the topographic texture map in the rendering process are adopted when the vertex corresponding to the pixel point is rendered can be determined.
The sample image refers to an image for extracting the mixing ratio data. Parameters of pixel points in the sampled image are used for indicating mixing proportion data of vertexes corresponding to the pixel points. The blending ratio data corresponding to different pixels in the sampled image may be the same. The mixing ratio data corresponding to different pixels in the sampled image may also be different.
In one possible implementation, the parameters of the pixels in the sampled image include color data of the pixels in the sampled image. The color data includes values for one or more color channels. The values of the color channels are used to indicate the mixing ratio data. Specifically, each color channel in the sampled image corresponds to a topographical texture map. The value of a color channel indicates the weight of the topographic texture map to which that color channel corresponds. For example, the weight corresponding to a topographic map is the proportion of the value of the color channel corresponding to the topographic map in the sum of the values of each color channel.
The topographic texture data may be obtained from a variety of sources, such as remote sensing data acquisition, topographic measurements, or simulations. The sampled image may be drawn by a designer based on the intended rendering effect, and the designer may configure the sampled image through a command line or a web page. Alternatively, the sampled image may be an actual image captured by a sensor, camera or other device, or a synthetic image generated by a computer.
In another possible implementation, the sampled image is a digital elevation image, and the parameter of the pixels in the sampled image includes a height value of the pixels in the sampled image, the height value of the pixels being indicative of the blending ratio data.
In another possible implementation, the parameters of the pixels in the sampled image include gradients of the pixels in the sampled image (e.g., height differences around the pixels), the gradients of the pixels being used to indicate the blending ratio data.
In another possible implementation, the parameters of the pixels in the sampled image include a normal vector of the pixels in the sampled image, the normal vector of the pixels being used to indicate the blending ratio data.
In another possible implementation, the parameter of the pixel in the sampled image includes an illumination value of the pixel in the sampled image, the illumination value of the pixel being used to indicate the blending ratio data.
In another possible implementation, the parameter of the pixel in the sampled image includes an occlusion value or a shading value of the pixel in the sampled image, the occlusion value or the shading value of the pixel being used to indicate the blending ratio data.
An implementation of rendering using blending proportion data in the embodiment of fig. 1 is illustrated below.
In one possible implementation, the acquired blending proportion data includes weights for each of the plurality of topographical maps at a same pixel point. Determining the terrain texture color of the pixel point based on the color data of the plurality of terrain texture maps at the same pixel point and the weight of each terrain texture map at the pixel point; for example, based on the weight of each topographic map at the pixel point, carrying out weighted summation on the color data of the topographic maps at the pixel point, and taking the weighted summation value of the color data of the topographic maps at the pixel point as the topographic texture color of the pixel point; or, based on the weight of each topographic map at the pixel point, carrying out weighted average on the color data of the topographic maps at the pixel point, and taking the weighted average of the color data of the topographic maps at the pixel point as the topographic texture color of the pixel point. Coloring vertexes corresponding to the pixel points in the terrain grid area based on the terrain texture colors of the pixel points, so that the vertexes are rendered. The blending proportion data determines the contribution degree of different topographic texture maps during the rendering process. The greater the weight the greater the contribution of the terrain texture map to the rendered terrain texture color.
For example, the plurality of topographic texture maps includes topographic texture map A and topographic texture map B, and the obtained mixing proportion data of the plurality of topographic texture maps at a pixel point includes weight A and weight B, where weight A is the weight corresponding to topographic texture map A and weight B is the weight corresponding to topographic texture map B. The terrain texture color at the vertex corresponding to the pixel point is then, for example, color = weight A × color A + weight B × color B, where color A and color B are the colors of topographic texture map A and topographic texture map B at the pixel point, respectively.
In one possible implementation, each pixel point in the sampled image is traversed; for the currently traversed pixel point, the vertex corresponding to the pixel point in the terrain grid area is rendered based on the mixing proportion data of the plurality of topographic texture maps at the pixel point, until every pixel point in the sampled image has been traversed. In another possible implementation, the rendering is performed in parallel for each vertex in the terrain mesh region. For example, a thread pool is created, a first vertex in the terrain mesh area is rendered by a first thread in the thread pool using the mixing proportion data of the first pixel point and the plurality of topographic texture maps, and a second vertex in the terrain mesh area is rendered by a second thread using the mixing proportion data of the second pixel point and the plurality of topographic texture maps.
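A minimal sketch of the parallel variant (one task per vertex); the worker count and all function names are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def render_vertex(vertex_index, blend_weights, texture_maps):
    """Placeholder for the per-vertex shading described in this section."""
    ...

def render_parallel(vertex_count, weights_per_pixel, texture_maps, workers=8):
    # Thread i renders vertex i with the blend data of pixel i.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [
            pool.submit(render_vertex, i, weights_per_pixel[i], texture_maps)
            for i in range(vertex_count)
        ]
        for future in futures:
            future.result()  # propagate any rendering errors
```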
In one possible implementation, each pixel in the sampled image corresponds to a vertex in the terrain mesh area, i.e., the precision of the vertex reaches the pixel. In this case, taking the example that the terrain mesh region includes P vertices, for the Q-th vertex of the P vertices, the Q-th vertex in the terrain mesh region is rendered based on the mixed proportion data of the plurality of terrain texture maps at the Q-th pixel. Wherein P is an integer greater than 1, and Q is a positive integer less than or equal to P. In this way, the rendering effect can be accurate to each pixel point.
Illustratively, the plurality of topographic texture maps includes a first topographic texture map, a second topographic texture map, a third topographic texture map, and a fourth topographic texture map. The values of the four color channels R, G, B and A of the Q-th pixel point in the sampled image are 0.2, 0.3, 0.1 and 0.4 respectively, so the mixing proportion data corresponding to the Q-th pixel point is (0.2, 0.3, 0.1, 0.4). It is determined that at the Q-th pixel point, the first topographic texture map has a weight of 20%, the second topographic texture map has a weight of 30%, the third topographic texture map has a weight of 10%, and the fourth topographic texture map has a weight of 40%. The first, second, third, and fourth topographic texture maps are mixed according to these weights, so as to render the Q-th vertex.
Taking a terrain mesh area comprising four vertices as an example, two terrain texture maps are blended to render the terrain mesh area; for example, two terrain texture maps are obtained, one a grass texture map and one a rock texture map. The blending proportion data for vertex 1 includes 0.8 (the weight of the grass texture map at vertex 1) and 0.2 (the weight of the rock texture map at vertex 1). The blending proportion data for vertex 2 includes 0.5 and 0.5; for vertex 3, 0.3 and 0.7; and for vertex 4, 0.6 and 0.4. According to the above mixing proportion data of the four vertices, the color data of the four vertices in the mixed texture map are respectively (0.8×grass pixel color + 0.2×rock pixel color), (0.5×grass pixel color + 0.5×rock pixel color), (0.3×grass pixel color + 0.7×rock pixel color), and (0.6×grass pixel color + 0.4×rock pixel color). The same holds true for implementations where the terrain mesh area includes a greater number of vertices and more terrain texture maps are used.
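A minimal sketch reproducing the per-vertex weighted sum in this example; the color values chosen for the grass and rock pixels are arbitrary illustrations:

```python
def blend_color(weights, colors):
    """Weighted sum of per-map colors at one vertex: sum over maps of
    weight_i * color_i, computed per RGB channel."""
    return tuple(
        sum(w * color[ch] for w, color in zip(weights, colors))
        for ch in range(3)
    )

grass = (0.2, 0.8, 0.1)   # illustrative grass pixel color (RGB, 0..1)
rock = (0.5, 0.5, 0.5)    # illustrative rock pixel color

# Vertex 1 from the example: 0.8 * grass + 0.2 * rock.
print(blend_color((0.8, 0.2), (grass, rock)))  # ≈ (0.26, 0.74, 0.18)
```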
In another possible implementation, a plurality of pixel points in the sampled image correspond to a vertex in a terrain mesh region. In this case, taking the case that the terrain mesh area includes P vertices as an example, for a Q-th vertex of the P vertices, for example, the Q-th vertex corresponds to a Q-th pixel point set, the mixing ratio data of each pixel point in the Q-th pixel point set of the plurality of terrain texture maps is averaged to obtain the mixing ratio data of the Q-th vertex, and rendering the Q-th vertex in the terrain mesh area based on the mixing ratio data of the Q-th vertex of the plurality of terrain texture maps.
In one possible implementation, the plurality of terrain texture maps are mixed based on the mixing proportion data to obtain a mixed texture map, and the vertices corresponding to the pixel points in the terrain mesh area are rendered based on the mixed texture map. For example, the terrain texture maps are blended using an image synthesis algorithm such as linear interpolation or alpha blending. The color data of each pixel point in the mixed texture map is, for example, a weighted average or weighted sum of the color data of the terrain texture maps at that pixel point. Then, for the Q-th vertex of the P vertices of the terrain mesh area, the color data of the Q-th pixel point is read from the mixed texture map and used to shade the Q-th vertex. Because the mixed texture map combines the texture data of two or more terrain texture maps, the transitions between adjacent pixel points are relatively smooth, which reduces the probability of abrupt texture boundaries between adjacent vertices. In addition, shading a vertex with the mixed texture map makes its rendering effect reflect all of the terrain texture maps at once, without shading the vertex once per texture map, thereby reducing the number of shading operations and the computational cost of rendering.
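A minimal sketch of the mixed-texture-map approach follows, assuming textures are equal-sized 2D grids of (r, g, b) tuples and that the per-pixel weights of all texture maps sum to 1; these data layouts and names are illustrative, not prescribed by the method.

```python
def blend_texture_maps(texture_maps, weight_maps):
    """texture_maps: list of HxW images; weight_maps: list of HxW weight
    grids, one per texture, expected to sum to 1 at every pixel point."""
    height, width = len(texture_maps[0]), len(texture_maps[0][0])
    blended = [[(0.0, 0.0, 0.0)] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # each channel of the mixed texture map is the weighted sum of
            # the corresponding channels of the terrain texture maps
            r = sum(w[y][x] * t[y][x][0] for t, w in zip(texture_maps, weight_maps))
            g = sum(w[y][x] * t[y][x][1] for t, w in zip(texture_maps, weight_maps))
            b = sum(w[y][x] * t[y][x][2] for t, w in zip(texture_maps, weight_maps))
            blended[y][x] = (r, g, b)
    return blended
```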
In another possible implementation, the vertices corresponding to the pixel points are rendered with the plurality of terrain texture maps sequentially, pass by pass. Taking the rendering of a first vertex as an example, suppose the first vertex in the terrain mesh area corresponds to a first pixel point in the sampled image. The color data of the first terrain texture map at the first pixel point is obtained, and first color data of the first vertex is determined based on that color data and the weight of the first terrain texture map at the first pixel point; for example, the first color data is the product of the two. The first vertex is shaded using the first color data. Then the first vertex is reshaded based on the color data of the second terrain texture map at the first pixel point and the weight of the second terrain texture map at that point; for example, the first color data and this weighted contribution are summed to obtain second color data, which is used to reshade the first vertex. This step is repeated for each remaining terrain texture map until all of them have been used. Because the vertices are rendered with one terrain texture map at a time, no additional resources are needed to synthesize and store a mixed texture map, which saves memory and computing resources. In addition, each terrain texture map contributes its color to the vertex directly, so the details of the original texture maps are preserved and the rendering result is more realistic.
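The pass-by-pass alternative can be sketched as follows; the accumulation shown (starting from black and adding each texture's weighted contribution) is one reading of the weighted-summation step described above, with illustrative names and data layouts.

```python
def shade_vertex_sequentially(texture_colors, weights):
    """texture_colors: the color of each terrain texture map at the vertex's
    pixel point; weights: the corresponding blend weights at that point."""
    color = (0.0, 0.0, 0.0)  # the first pass starts from black
    for tex_color, w in zip(texture_colors, weights):
        # each pass adds this texture map's weighted contribution
        color = tuple(c + w * t for c, t in zip(color, tex_color))
    return color

# e.g. vertex 1 of the earlier example: 0.8 x grass color + 0.2 x rock color
print(shade_vertex_sequentially([(0.2, 0.8, 0.1), (0.5, 0.5, 0.5)], [0.8, 0.2]))
```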
In one possible implementation, the terrain mesh area includes N rendering areas and the sampled image includes N partitioned sampled images. The M-th mixing proportion data is acquired based on the M-th partitioned sampled image, and the vertices corresponding to the pixel points in the M-th rendering area are rendered based on the plurality of terrain texture maps and the M-th mixing proportion data, where N is an integer greater than 1 and M is a positive integer less than or equal to N.
A partitioned sampled image is the sampled image corresponding to a particular rendering area in the terrain mesh area; different partitioned sampled images contain the mixing proportion data of different rendering areas. Optionally, the N rendering areas correspond one-to-one to the N partitioned sampled images, which in turn correspond to N sets of mixing proportion data. The M-th partitioned sampled image is the sampled image corresponding to the M-th rendering area; it contains the M-th mixing proportion data and is used to render the M-th rendering area.
The M-th mixing proportion data is the mixing proportion data corresponding to the M-th rendering area. For example, it includes the weight of each of the plurality of terrain texture maps in the M-th rendering area, or the proportional relationship between those weights. The M-th mixing proportion data is used to render the M-th rendering area. The process of acquiring it from the M-th partitioned sampled image may refer to the description of step S120; for example, the ratio of the values of the different color channels in the M-th partitioned sampled image is taken as the mixing proportion data of the M-th rendering area.
Because a corresponding partitioned sampled image is provided for each rendering area, different rendering areas can be rendered with different mixing proportion data, producing distinct rendering effects between areas. For example, suppose mountain, grassland and river effects are to be rendered in different rendering areas of the terrain mesh area. In the mixing proportion data obtained from the partitioned sampled image corresponding to the mountain area, the rock texture map has the largest weight; in that corresponding to the grassland, the grass texture map has the largest weight; and in that corresponding to the river, the water-surface texture map has the largest weight. Because the weights of the terrain texture maps differ between rendering areas, different areas of the rendered terrain mesh present different terrain features, giving the terrain a rich and varied appearance.
The rendering area may be a sub-area in the terrain mesh area. One terrain mesh region includes one or more rendering regions. In one possible implementation, the terrain mesh area is partitioned according to a preset rule to obtain one or more rendering areas.
In one possible implementation of determining the rendering areas, the size of a rendering area is determined, the number of rendering areas to be divided is determined from the total size of the terrain mesh area and the rendering area size, and the terrain mesh area is then divided accordingly. The terrain mesh area may be divided into rendering areas of equal size, or divided unevenly according to specific needs and terrain features so as to better fit different parts of the terrain. For example, where the terrain varies strongly or contains much detail, the rendering areas may be made smaller to depict the terrain more finely; where the terrain varies gently or contains little detail, the rendering areas may be made larger to reduce the computational cost of rendering.
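A sketch of the equal-division case follows; the axis-aligned square regions and the (x_min, y_min, x_max, y_max) representation are assumptions made for illustration, and uneven division would vary the step size per region.

```python
def divide_into_regions(terrain_width, terrain_height, region_size):
    """Divide the terrain mesh area into equally sized rendering areas."""
    regions = []
    for y in range(0, terrain_height, region_size):
        for x in range(0, terrain_width, region_size):
            # each rendering area is recorded as (x_min, y_min, x_max, y_max),
            # clamped so border regions do not exceed the terrain bounds
            regions.append((x, y,
                            min(x + region_size, terrain_width),
                            min(y + region_size, terrain_height)))
    return regions

print(len(divide_into_regions(1024, 1024, 256)))  # 16 rendering areas
```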
Dividing the terrain mesh area into one or more rendering areas splits a complex rendering task (rendering the entire terrain mesh area) into multiple smaller tasks (rendering one area each), which lends itself to parallel processing (for example, rendering multiple areas in parallel) and thus improves rendering performance and efficiency. Rendering every rendering area realizes the rendering of the whole terrain mesh area.
In another possible implementation of determining the rendering area, the rendering area is selected from the terrain mesh area as needed. For example, a rendering area is determined from the position of a virtual object and a preset radius, such that the distance between the boundary of the rendering area and the position of the virtual object equals the preset radius.
The virtual object is, for example, an object in a game, such as a virtual character (e.g., a game character). Game characters have models, animations and interactive behaviors, can move freely in the game world and can interact with other objects. As another example, the virtual object is a virtual prop, e.g., a collectible item, a weapon, a piece of equipment or another gain item. Alternatively, the virtual object is an enemy or monster designed to fight or otherwise interact with the player; or an environmental object in the game scene, such as a virtual tree, building or rock; or an NPC (non-player character) controlled by the game program. An NPC may be a merchant, resident or other character providing services such as tasks, conversations or purchases, with which a user can interact to obtain tasks or information, or to exchange items. Alternatively, the virtual object is a game task target.
The position of the virtual object is, for example, its two-dimensional or three-dimensional coordinates. In one possible implementation of determining the rendering area based on this position, the position of the virtual object is taken as the center of a sphere, a radius is set, and all pixel points whose distance from the center is less than or equal to the radius are selected as pixel points of the rendering area; the distance can be computed as the Euclidean distance between the pixel point and the center. For example, if the virtual object is located at (x, y, z) and the preset radius is r, the rendering area is the spherical region of radius r centered at the virtual object's position. In another possible implementation, the position of the virtual object is taken as the center of a rectangle, a width and a length are set, and whether a pixel point belongs to the rendering area is determined from its lateral and longitudinal distances to the virtual object. In still another possible implementation, the position of the virtual object is taken as the center of a circle, a radius is set, and the pixel points whose distance from the center is less than or equal to the radius are selected as pixel points of the rendering area.
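The sphere-based variant can be sketched as follows; the concrete coordinates, radius and function name are illustrative.

```python
import math

def in_rendering_area(point, object_position, radius):
    """point, object_position: (x, y, z) tuples; returns True when the point's
    Euclidean distance from the virtual object is at most the preset radius."""
    return math.dist(point, object_position) <= radius

# e.g. with the virtual object at (10.0, 0.0, 5.0) and preset radius r = 30.0,
# a point at (25.0, 0.0, 5.0) lies inside the rendering area:
print(in_rendering_area((25.0, 0.0, 5.0), (10.0, 0.0, 5.0), 30.0))  # True
```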
Because the rendering area is limited to the region around the virtual object, regions far from the virtual object need not be rendered, unlike when the entire terrain mesh area is rendered. This reduces the processor's rendering workload, saves the memory occupied by the rendering process, and lowers the hardware requirements of rendering. In addition, when the region around a virtual character is rendered with grass texture, it contrasts sharply with other regions; the green color and rich detail of the grass texture make the virtual character stand out, help the player focus on it, and strengthen its presence in the game scene.
In one possible implementation, each of the N rendering areas is traversed; for the currently traversed M-th rendering area, the vertices corresponding to the pixel points in that area are rendered using the mixing proportion data corresponding to the M-th rendering area and the plurality of terrain texture maps, until M = N, that is, until the N-th rendering area has been rendered.
In another possible implementation, the N rendering areas are rendered in parallel, reducing the overall rendering time. For example, a thread pool of N threads is created, each thread rendering one area: the M-th thread renders the vertices corresponding to the pixel points in the M-th rendering area using the mixing proportion data corresponding to that area and the plurality of terrain texture maps.
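A sketch of the thread-pool variant follows, assuming one thread per rendering area; render_region is a hypothetical stand-in for the real per-area rendering routine.

```python
from concurrent.futures import ThreadPoolExecutor

def render_region(region, mixing_data, texture_maps):
    # placeholder for the per-area rendering routine (hypothetical)
    pass

def render_all_regions(regions, mixing_data_per_region, texture_maps):
    """Render the N rendering areas in parallel, one thread per area."""
    with ThreadPoolExecutor(max_workers=len(regions)) as pool:
        futures = [pool.submit(render_region, region, mixing_data, texture_maps)
                   for region, mixing_data in zip(regions, mixing_data_per_region)]
        for f in futures:
            f.result()  # propagate any rendering errors to the caller
```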
In one possible implementation, the plurality of terrain texture maps are mixed based on the M-th mixing proportion data to obtain an M-th mixed texture map, and the vertices corresponding to the pixel points in the M-th rendering area are rendered based on the M-th mixed texture map.
The M-th mixed texture map is the texture map generated by mixing the plurality of terrain texture maps according to the M-th mixing proportion data; it is used to render the vertices corresponding to the pixel points in the M-th rendering area. The N partitioned sampled images may thus correspond to N mixed texture maps: the 1st rendering area may be rendered based on the 1st mixed texture map, the 2nd rendering area based on the 2nd mixed texture map, and so on.
In another possible implementation, the vertices corresponding to the pixel points in the M-th rendering area are rendered with the plurality of terrain texture maps sequentially, based on the M-th mixing proportion data.
In one possible implementation, the plurality of terrain texture maps comprises K terrain texture map sets, and the sampled image comprises K region sampled images. The L-th proportion data of the terrain texture maps in the L-th terrain texture map set at a pixel point is acquired based on the parameters of the pixel point in the L-th region sampled image, and all of the L-th proportion data together constitute the mixing proportion data, where K is an integer greater than 1 and L is a positive integer less than or equal to K.
A region sampled image is a sampled image corresponding to a given rendering region of the terrain mesh area. In particular, one rendering region may correspond to several sampled images, and different sampled images of the same rendering region may contain different mixing proportion data; this embodiment refers to the one or more sampled images corresponding to the same rendering region as region sampled images. Each region sampled image corresponds to one terrain texture map set, which includes one or more terrain texture maps; the sets corresponding to different ones of the K region sampled images may contain the same or different numbers of texture maps. The L-th region sampled image corresponds to the L-th terrain texture map set, and the L-th proportion data is the mixing proportion data contained in the L-th region sampled image. If the L-th rendering region corresponds to a single sampled image, the L-th region sampled image is that image, and one set of mixing proportion data is obtained from the parameters of each pixel point in it. If the L-th rendering region corresponds to several sampled images, one set of proportion data is obtained per sampled image at each pixel point, and these sets are combined, for example by linear combination, averaging or weighted averaging, to obtain the mixing proportion data composed of all the L-th proportion data.
For example, rendering region 1 in the terrain mesh area corresponds to two sampled images; sampled image 1 and sampled image 2 may each be called a region sampled image. Sampled image 1 corresponds to terrain texture map set 1, and sampled image 2 to terrain texture map set 2. From the parameters of a pixel point in sampled image 1, the proportion data of each texture map in set 1 at that pixel point can be determined, giving proportion data 1; likewise, proportion data 2 is obtained from sampled image 2 and set 2. Proportion data 1 is combined with proportion data 2 to obtain the mixing proportion data of the pixel point in rendering region 1.
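One possible reading of the combination step is sketched below: the proportion data of the two region sampled images, each covering its own texture map set, are scaled and concatenated so that the mixing proportion data sums to 1. The equal weighting of the two sets is an assumption made purely for illustration.

```python
def combine_proportion_data(proportions_1, proportions_2, weight_1=0.5):
    """proportions_1, proportions_2: per-texture weights from the two region
    sampled images; the result covers the union of both texture map sets."""
    scaled_1 = [weight_1 * p for p in proportions_1]
    scaled_2 = [(1.0 - weight_1) * p for p in proportions_2]
    return scaled_1 + scaled_2  # mixing proportion data over both sets

# e.g. set 1 = {mountain}, set 2 = {rock, soil}:
print(combine_proportion_data([1.0], [0.7, 0.3]))  # [0.5, 0.35, 0.15]
```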
Because the same rendering region is rendered based on the mixing proportion data and terrain texture map set of each of its region sampled images, the sets corresponding to different region sampled images can contain different terrain texture maps, so that as many texture maps as possible are blended and rendered together, combining a variety of textures and increasing the diversity of the terrain. For example, in a mountain region, one sampled image and its texture map set can render the mountain texture while another sampled image and its set render a rock texture, achieving the effect of rock covering the mountain and making the terrain more realistic and fine.
In one possible implementation, rendering data of the terrain mesh area is obtained based on the value of each color channel (e.g., the RGB data) of the pixel points of the sampled image, and grass blades are inserted into the terrain mesh area based on the rendering data.
Because the rendering data is obtained from the RGB data of the sampled image and the terrain texture maps, and the grass region is obtained by inserting grass blades, there is no need to read back the rendered terrain mesh area to obtain rendering data. The data used during terrain rendering is thus reused, which reduces the amount of data to process, avoids waiting for terrain rendering to finish before the data can be read, and shortens the overall time.
A grass blade is a small sheet-like element used to represent grass; it stands for a single tuft or leaf of grass in a real lawn. Grass blades generally take the form of planar or near-planar geometry and include at least one vertex. For example, a grass blade may have four vertices and a diamond or rectangular geometry, or three vertices and a triangular geometry. Three and four vertices are merely examples: a grass blade may have five vertices and an irregular polygonal shape, or six vertices and a convex hexagonal shape. Grass blades may also be 3D models.
In one possible implementation of obtaining a grass blade, the number of vertices of the grass blade is determined, a geometric body with that number of vertices is created, and a grass map is applied to the geometry to obtain the grass blade. The grass map carries the characteristics of the grass, such as its color, texture and transparency.
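A sketch of constructing a grass blade from a chosen vertex count and a grass map follows; the concrete vertex layouts (triangle and diamond) and dimensions are illustrative assumptions.

```python
def make_grass_blade(num_vertices, grass_map, width=0.1, height=0.5):
    """Create a grass blade geometry and attach the grass map to it."""
    if num_vertices == 3:    # triangular blade
        vertices = [(-width, 0.0), (width, 0.0), (0.0, height)]
    elif num_vertices == 4:  # diamond blade composed of two triangles
        vertices = [(-width, height / 2), (0.0, 0.0),
                    (width, height / 2), (0.0, height)]
    else:
        raise ValueError("vertex count not covered by this sketch")
    # the grass map (color, texture, transparency) is mapped onto the geometry
    return {"vertices": vertices, "map": grass_map}
```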
In one possible implementation, a grass blade is inserted into a unit area in response to determining that the rendering data within that unit area of the terrain mesh area satisfies the grass texture feature. Because satisfying the grass texture feature is the criterion for insertion, grass blades are inserted automatically wherever the rendering data of a unit area meets it, without manually selecting grass regions, which saves the labor cost of manual selection and improves rendering efficiency. Conversely, when the rendering data of a unit area does not meet the grass texture feature, no grass blade is inserted; areas unsuitable for grass are effectively filtered out beforehand, reducing the probability of inserting grass blades in unnecessary areas and lowering the rendering load and resource consumption. Moreover, since grass blades simulate the distribution and form of plants in real grassland, a unit area with inserted grass blades visually presents the characteristics of grassland, rendering a convincing grass effect and enhancing the realism and detail of the result.
In one possible implementation, the number of target pixel points in a unit area is determined based on the color data of the pixel points in the unit area and a grass color interval; when the number of target pixel points indicates that the rendering data of the unit area satisfies the grass texture feature, a grass blade is inserted into the unit area.
The grass color interval is a range describing the color characteristics of grass. For example, in RGB space a grass color interval may consist of ranges for the R, G and B components; in HSV space, ranges for the H (hue), S (saturation) and V (value) components. In one possible implementation of determining the grass color interval, real grass photographs are sampled: a variety of photographs are collected, covering different scenes and lighting conditions; sample images representative of grass color are selected from them; and color data is extracted from these samples to form the grass color interval. Optionally, the interval is adjusted according to the environmental conditions of the photographs, such as sunlight intensity, shadows and reflections from surrounding objects, so that it covers the variation of grass color across environments.
A target pixel point is a pixel point whose color data lies within the grass color interval. For example, the color data of a pixel point is compared with the upper and lower bounds of the interval; if it lies within them, the pixel point is a target pixel point. Since color data is typically defined over several color channels, in one possible implementation the value of each channel is compared with the upper and lower bounds of the corresponding channel of the grass color interval, and the pixel point is a target pixel point only if every channel value falls within its bounds. In RGB space, for example, a pixel point is a target pixel point if its red value lies within the red-channel bounds of the grass color interval, its green value within the green-channel bounds, and its blue value within the blue-channel bounds.
In one possible implementation, each pixel point in the unit area is traversed; if the color data of a pixel point lies within the defined grass color interval, it is recorded as a target pixel point and the count of target pixel points is incremented by one. When the last pixel point of the unit area has been traversed, the recorded number of target pixel points is output.
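The traversal can be sketched as follows, assuming pixels are (r, g, b) tuples and the grass color interval is given as per-channel lower and upper bounds; these representations are assumptions for illustration.

```python
def count_target_pixels(unit_area_pixels, lower, upper):
    """Count the pixel points of a unit area whose color lies inside the
    grass color interval [lower, upper] on every channel."""
    count = 0
    for pixel in unit_area_pixels:
        # a target pixel point must fall inside the interval on every channel
        if all(lo <= c <= hi for c, lo, hi in zip(pixel, lower, upper)):
            count += 1
    return count
```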
In one possible implementation, the number of target pixel points within the unit area is compared with a number threshold. When it is greater than or equal to the threshold, the rendering data of the unit area is determined to satisfy the grass texture feature and a grass blade is inserted; when it is smaller, the rendering data is determined not to satisfy the feature and no grass blade is inserted. Comparing the count with a threshold makes it quick and simple to judge whether the grass texture feature is met, and compared with manual processing, the regions to be rendered as grass are selected automatically, which is faster and more accurate.
In another possible implementation, the ratio between the number of target pixel points in the unit area and the total number of pixel points in the unit area is obtained and compared with a first ratio threshold. When the ratio is greater than or equal to the first ratio threshold, the rendering data of the unit area is determined to satisfy the grass texture feature and a grass blade is inserted; when it is smaller, the rendering data is determined not to satisfy the feature and no grass blade need be inserted. In one exemplary scenario, a unit area is rendered with several terrain texture maps (e.g., five), one of which is a grass texture map, and the duty cycle of the grass texture decides whether to insert grass blades. The duty cycle is, for example, the ratio between the number of pixel points occupied by the grass texture (the target pixel points) and the total number of pixel points in the unit area; if grass texture occupies most of the pixels, the duty cycle is high. With a first ratio threshold of, say, 20%, a duty cycle above 20% indicates that enough grass texture exists in the unit area, and grass blades are inserted to increase the realism of the grass. Using the ratio rather than an absolute count better captures the relative proportion of grass in the rendering data; moreover, while the total number of pixel points in a unit area changes with zoom level or viewing angle, the ratio is relatively invariant, so this criterion adapts to rendering at different scales. As with the count-based criterion, the grass regions are selected automatically, which is faster and more accurate than manual processing.
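A sketch of the ratio test follows, using the 20% first ratio threshold from the example above; the data layout matches the counting sketch earlier and is likewise an assumption.

```python
def should_insert_grass(unit_area_pixels, lower, upper, ratio_threshold=0.2):
    """Insert grass blades when the share of target pixel points in the unit
    area reaches the first ratio threshold."""
    target = sum(
        all(lo <= c <= hi for c, lo, hi in zip(p, lower, upper))
        for p in unit_area_pixels
    )
    return target / len(unit_area_pixels) >= ratio_threshold
```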
In one possible implementation, a statistical value of the color data of the pixel points in the unit area is obtained from the color data of each pixel point, the ratio between this statistical value and a grass color threshold is determined, and that ratio is compared with a second ratio threshold. If the ratio is greater than or equal to the second ratio threshold, the rendering data of the unit area is determined to satisfy the grass texture feature and a grass blade is inserted; otherwise, the rendering data is determined not to satisfy the feature and no grass blade need be inserted. The ratio between the statistical value and the grass color threshold can be understood as the grass texture duty cycle: with a second ratio threshold of 20%, for example, a grass blade is inserted when the duty cycle is at least 20%.
One way of obtaining the statistical value of the color data is to take the average of the color data of the pixel points in the unit area: the color data of all pixel points is summed and divided by the number of pixel points. Accordingly, if the resulting ratio for the unit area is greater than or equal to the second ratio threshold, a grass blade is inserted. Because the color data of all pixel points is used, the color characteristics of the whole unit area are taken into account, which is more accurate; and using an average as the quantitative criterion for deciding whether the area is grass reduces the influence of noise, making the result smoother and more stable.
Another way of obtaining the statistical value is to take the color data of the central pixel point of the unit area. Accordingly, if the corresponding ratio is greater than or equal to the second ratio threshold, a grass blade is inserted. Because only the color data of the central pixel point is used rather than that of every pixel point, the amount of data to process is reduced and computation is more efficient, and the color characteristics at the center of the unit area are captured directly.
A further way of obtaining the statistical value is to take the average of the color data of the corner pixel points of the unit area, for example the four corner points at the upper left, lower left, upper right and lower right. Accordingly, if the corresponding ratio is greater than or equal to the second ratio threshold, a grass blade is inserted. This better captures the color characteristics at the edges of the unit area and suits scenes with distinct grass boundaries.
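The three statistics can be sketched as follows, modelling the unit area as a 2D grid of (r, g, b) tuples; the grid representation is an assumption for illustration.

```python
def mean_color(grid):
    """Average color of all pixel points of the unit area."""
    pixels = [p for row in grid for p in row]
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def center_color(grid):
    """Color of the central pixel point of the unit area."""
    return grid[len(grid) // 2][len(grid[0]) // 2]

def corner_mean_color(grid):
    """Average color of the four corner pixel points of the unit area."""
    corners = [grid[0][0], grid[0][-1], grid[-1][0], grid[-1][-1]]
    return tuple(sum(c[i] for c in corners) / 4 for i in range(3))
```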
In another possible implementation, the rendering data includes texture data of the pixel points within the unit area, and whether to insert a grass blade is judged from that texture data. Grass material usually has a certain fineness and may include fine texture elements (e.g., blades and stems). The texture data of the unit area can be examined for elements matching the characteristics of grass, such as fine, regular textures like elongated or mottled patterns; if such elements are present, the unit area is judged to be a grass region and a grass blade is inserted.
In another possible implementation, the rendering data includes illumination data of the pixel points within the unit area, and whether to insert a grass blade is judged from that illumination data. Grass generally shows light-and-shade variation under illumination, with characteristic highlight and shadow effects. The illumination data of the unit area can be examined for an illumination distribution and shadow effect matching grass, thereby deciding whether to insert a grass blade. For example, if the illumination data shows relatively bright reflection characteristics that change with the illumination angle, the unit area is judged to be a grass region and a grass blade is inserted.
In another possible implementation, at least two of the color data, texture data, illumination data and geometric data of the unit area are combined to judge whether the unit area satisfies the grass texture feature; a grass blade is inserted when it does, and not inserted when it does not.
In one possible implementation, the rendering data includes the duty cycle of the grass texture within the unit area, and the number of grass blades is determined from that duty cycle: the number of grass blades is positively correlated with the duty cycle, i.e., the larger the duty cycle, the more grass blades. Illustratively, a first mapping relation between the number of grass blades and the grass texture duty cycle is preset, whose input parameter is the duty cycle within the unit area and whose output parameter is the number of grass blades; the number is then determined from the duty cycle and this mapping. The first mapping relation may be a linear function, or a nonlinear function such as an exponential, logarithmic or piecewise function. Because the number of grass blades is positively correlated with the grass texture duty cycle, greener places receive more grass, as in reality, and the amount of grass adapts to the color of the lawn.
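A sketch of one possible first mapping relation follows; the linear form and the maximum of 100 grass blades per unit area are illustrative assumptions, and a piecewise or exponential mapping would simply replace the function body.

```python
def grass_blade_count(grass_duty_cycle, max_blades=100):
    """grass_duty_cycle: fraction of the unit area's pixel points occupied by
    grass texture; returns the number of grass blades to insert."""
    return round(max_blades * grass_duty_cycle)

print(grass_blade_count(0.25))  # 25 blades for a 25% grass duty cycle
```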
In one possible implementation, the rendering data includes the color data of the pixel points within the unit area; the ratio between the color data and a grass color threshold is obtained, and the number of grass blades is determined from that ratio. The grass color threshold delimits the color range belonging to grass. The ratio may be obtained, for example, by distance comparison: the distance (e.g., Euclidean distance or degree of difference) between the color value of a pixel point and the grass color threshold is computed and converted into a ratio. Alternatively, the similarity between the color value and the threshold is computed and converted into a ratio; or the color value is compared with the threshold to obtain a Boolean value indicating whether the color lies within the range defined by the grass color threshold.
In one possible implementation, the number of grass blades is determined from the ratio between the average of the color data of the pixel points in the unit area and the grass color threshold. For example, a second mapping relation between this average and the number of grass blades is set, and the number is determined from the average and the mapping. As another example, the number of grass blades is set to a first number when the ratio is greater than or equal to a threshold, and to a second number when it is smaller. The number of grass blades may be positively correlated with the ratio, so that as the average of the color data rises the number of grass blades rises with it, simulating denser grass in greener areas.
In one possible implementation, the number of grass blades is determined from the ratio between the color data of the central pixel point of the unit area and the grass color threshold. For example, a third mapping relation between the central pixel point's color data and the number of grass blades is set, and the number is determined from the color data and the mapping. As another example, the number of grass blades is set to a first number when the ratio is greater than or equal to a threshold, and to a second number when it is smaller. The number of grass blades may be positively correlated with the ratio, so that as the color data of the central pixel point rises the number of grass blades rises with it, again simulating denser grass in greener places.
In one possible implementation, the number of grass blades is determined from the ratio between the average of the color data of the corner pixel points of the unit area and the grass color threshold. For example, a fourth mapping relation between this average and the number of grass blades is set, and the number is determined from the average and the mapping. As another example, the number of grass blades is set to a first number when the ratio is greater than or equal to a threshold, and to a second number when it is smaller. The number of grass blades may be positively correlated with the ratio, so that as the color data of the corner points rises the number of grass blades rises with it, simulating denser grass in greener places.
In general, the more vertices a grass blade has, the richer the geometric detail, the smoother the surfaces and the finer the texture mapping it can provide, giving higher precision and a better rendering effect, but also increasing the vertex computation and rendering operations and thus the hardware requirements. Conversely, the fewer vertices a grass blade has, the simpler or coarser the appearance of the grass and the lower the precision, degrading the rendering effect but also reducing vertex computation and rendering and lowering the hardware requirements.
For example, four vertices can form a diamond-shaped grass blade composed of two triangles, which more closely resembles the appearance of real grass, but each blade then requires rendering four vertices, demanding more of the hardware. By contrast, a triangular grass blade with three vertices is slightly less realistic but places a lower demand on the hardware.
To balance rendering quality against processing overhead, the grass blade to insert into a unit area may be chosen according to the rendering distance between the unit area and the rendering center of the rendering area, with the number of vertices in the grass blade inversely related to that distance. In other words, the closer a unit area is to the rendering center, the more vertices its grass blades have, making the rendering more realistic; the farther away it is, the fewer vertices, reducing the processing overhead of rendering and lowering the hardware requirements. In particular, when the position of the virtual character is the rendering center, grass far from the character appears with less detail and fewer vertices while grass near the character appears with more, simulating a visual perspective effect and improving the realism and depth of the game.
In one possible implementation, the grass blades include a first grass blade and a second grass blade, the first having more vertices than the second; the rendering distance is compared with a distance threshold, and the first grass blade is chosen in response to determining that the rendering distance is less than or equal to the threshold, or the second grass blade in response to determining that the rendering distance is greater than or equal to the threshold. For example, when the rendering distance is at most the distance threshold, a four-vertex grass blade (the first grass blade) is inserted into the unit area; when it exceeds the threshold, a three-vertex grass blade (the second grass blade) is inserted. The distance threshold may be set according to the size of the terrain mesh area or rendering area, or the requirements on rendering quality.
In another possible implementation, the grass blade is determined from the rendering distance and a first correspondence, which indicates a correspondence between the rendering distance and a first number, the grass blade having that number of vertices. The correspondence may be represented by an array, a table or a function. For example, the rendering distance is used as an index to look up a table-form first correspondence and obtain the first number, and the candidate grass blades are filtered for one with that number of vertices. The correspondence may be computed by linear interpolation; alternatively, an exponential function may represent it, letting the vertex count fall off exponentially with distance, or a curve or polynomial may be fitted to obtain a more accurate nonlinear first correspondence.
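Both selection strategies can be sketched as follows; the concrete distances and vertex counts in the threshold and the lookup table are illustrative assumptions.

```python
def blade_vertices_by_threshold(rendering_distance, distance_threshold=50.0):
    """Threshold variant: four-vertex blades near the rendering center,
    three-vertex blades beyond the distance threshold."""
    return 4 if rendering_distance <= distance_threshold else 3

def blade_vertices_by_table(rendering_distance,
                            table=((20.0, 6), (50.0, 4), (float("inf"), 3))):
    """Table variant: distance bands mapped to decreasing vertex counts."""
    for max_distance, vertex_count in table:
        if rendering_distance <= max_distance:
            return vertex_count

print(blade_vertices_by_table(35.0))  # 4 vertices in the 20-50 distance band
```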
In one possible implementation, the boundaries between multiple rendering regions in the terrain mesh area are smoothed. Smoothing refers to processing the boundaries between rendering regions to reduce discontinuities, jaggedness and abrupt transitions, making the boundaries more natural and smooth. Smoothing methods include, but are not limited to, at least one of seam fusion, texture transition, normal averaging, vertex scaling and edge blurring.
Seam fusion blends the texture, color or other properties of adjacent rendering regions, for example by drawing a seam region at the boundary and performing texture interpolation and color blending within it; the seam region may use fade and transition effects to merge the features of the adjacent regions gradually. Texture transition blends multiple textures across the boundary; it can be realized with techniques such as map fusion and weight blending, and the transition between textures can be controlled by adjusting texture transparency or using alpha maps. Normal averaging smooths the normal vectors between adjacent rendering regions: by computing the normal directions of adjacent surfaces and averaging or interpolating them, the normal differences are reduced, the boundary becomes smoother, and the smoothed normals produce a more realistic rendering. Vertex scaling adjusts the vertex positions on the boundaries of adjacent regions so that height and shape change gradually between them; it may be implemented by interpolation or weighting. Edge blurring applies a blur effect around the rendering boundary to reduce jaggedness and hard edges; edge pixels can be blurred with an algorithm such as Gaussian blur so that their colors blend smoothly with the surrounding pixels.
Alternatively, the method of FIG. 1 is performed by a computing device, or cooperatively by a cluster of computing devices; for example, computing device A performs S110 of the method shown in FIG. 1 and computing device B performs S120. The computing device is, for example, a terminal or a server. In one possible implementation, the method shown in FIG. 1 is performed by a computing device running an application program, such as browser software or client software; this embodiment does not limit the execution subject of the method shown in FIG. 1.
Fig. 5 is a schematic structural diagram of a terrain rendering device according to an embodiment of the present application, and an apparatus 500 shown in fig. 5 includes:
the obtaining module 510 is configured to obtain mixed proportion data of a plurality of topographic texture maps at pixel points based on parameters of the pixel points in the sampled image;
and the rendering module 520 is configured to render vertices corresponding to the pixel points in the terrain mesh area based on the plurality of terrain texture maps and the mixed proportion data.
In one possible implementation, the rendering module 520 is configured to blend the plurality of topographic texture maps based on the blending proportion data to obtain a blended texture map; and rendering the vertexes corresponding to the pixel points in the terrain grid area based on the mixed texture map.
In one possible implementation, the blending proportion data includes a weight of each of the plurality of topographic texture maps, and the obtaining module 510 is configured to obtain the weight of the topographic texture map corresponding to the color channel at the pixel point based on the value of the color channel of the pixel point in the sampled image.
In one possible implementation, the plurality of topographic texture maps includes a first topographic texture map, a second topographic texture map, a third topographic texture map, and a fourth topographic texture map, and the obtaining module 510 is configured to obtain a weight of the first topographic texture map at a pixel point based on a value of the red channel of the pixel point in the sampled image, where the first topographic texture map corresponds to the red channel; acquiring the weight of a second topographic texture map at the pixel point based on the value of the blue channel of the pixel point in the sampling image, wherein the second topographic texture map corresponds to the blue channel; acquiring the weight of a third topographic texture map at the pixel point based on the value of the green channel of the pixel point in the sampling image, wherein the third topographic texture map corresponds to the green channel; and acquiring the weight of the fourth topographic texture map at the pixel point based on the value of the opaque channel of the pixel point in the sampling image, wherein the fourth topographic texture map corresponds to the opaque channel.
In one possible implementation, the sampled image includes N partitioned sampled images and the terrain mesh region includes N rendered regions;
a rendering module 520 for acquiring mth mixing ratio data based on the mth partition sampled image; rendering vertices corresponding to pixel points in an Mth rendering area based on the plurality of terrain texture maps and the Mth mixing proportion data; wherein N is an integer greater than 1, and M is a positive integer less than or equal to N.
In one possible implementation, the rendering module 520 is configured to blend the plurality of topographic texture maps based on the mth blending proportion data to obtain an mth blending texture map; and rendering the vertex corresponding to the pixel point in the Mth rendering area based on the Mth mixed texture map.
In one possible implementation, the plurality of topographical maps comprises K sets of topographical maps, and the sampled image comprises K region sampled images;
the obtaining module 510 is configured to obtain, based on parameters of a pixel point in the L-th region sampled image, the L-th proportion data of the terrain texture maps in the L-th terrain texture map set at the pixel point; all of the L-th proportion data together constitute the mixing proportion data;
wherein K is an integer greater than 1, and L is a positive integer less than or equal to K.
In one possible implementation, the obtaining module 510 is further configured to obtain rendering data of the terrain mesh area based on a value of each color channel of the sampled image pixel point;
the rendering module 520 is also used to insert grass blades into the terrain mesh area based on the rendering data.
In one possible implementation, the rendering module 520 is configured to insert a grass blade into a unit area in response to determining that the rendering data within the unit area in the terrain mesh area satisfies the grass texture feature.
In one possible implementation, the rendering module 520 is configured to determine a number of grass blades based on the duty cycle of the grass texture indicated by the rendering data, the number of grass blades being positively correlated with the duty cycle of the grass texture; and to insert that number of grass blades into the unit area.
In one possible implementation, the rendering module 520 is configured to determine a grass blade based on the rendering data and the distance between a unit area in the terrain mesh area and the rendering center, the grass blade including at least one vertex, the number of vertices in the grass blade being inversely related to the distance; and to insert the grass blade into the unit area.
In one possible implementation, the grass blades include a first grass blade and a second grass blade, the number of vertices in the first grass blade being greater than the number of vertices in the second grass blade, and the rendering module 520 is configured to determine the first grass blade in response to determining that the distance is less than or equal to the distance threshold; alternatively, in response to determining that the distance is greater than or equal to the distance threshold, a second grass blade is determined.
In one possible implementation manner, the obtaining module 510 is configured to obtain, based on a gray value of a pixel in the sampled image, mixed proportion data of a topographic texture map at the pixel corresponding to the gray value; or, based on the value of the transparency channel of the pixel point in the sampling image, obtaining the mixed proportion data of the topographic texture map corresponding to the transparency channel at the pixel point; or, based on the height value of the pixel point in the sampling image, obtaining the mixed proportion data of the topographic texture map corresponding to the height value at the pixel point.
When the terrain rendering device provided by the above embodiment renders terrain, the division into the functional modules described above is merely illustrative; in practical applications, the functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the terrain rendering device and the terrain rendering method provided by the above embodiments belong to the same concept; the detailed implementation of the device is described in the method embodiments and is not repeated here.
Fig. 6 is a schematic structural diagram of a server according to an embodiment of the application. The server 600 includes a processor 601 coupled to a memory 602; the memory 602 stores at least one computer program instruction that is loaded and executed by the processor 601 to cause the server 600 to implement the method provided in the embodiment of fig. 1.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may be referred to one another, and each embodiment focuses on its differences from the others.
"A refers to B" means that A is the same as B, or that A is a simple variation of B.
The terms "first", "second", and the like in the description and claims of the embodiments of the application are used to distinguish between different objects, not to describe a particular sequential or chronological order of the objects, and should not be interpreted as indicating or implying relative importance. For example, the first and second terrain texture maps distinguish different terrain texture maps rather than describing a particular order, and the first terrain texture map should not be understood as more important than the second.
The information (including but not limited to user equipment information, user personal information, and the like), data (including but not limited to data for analysis, stored data, presented data, and the like), and signals involved in the application are all authorized by the user or fully authorized by the relevant parties, and the collection, use, and processing of the relevant data must comply with the laws, regulations, and standards of the relevant countries and regions. For example, the terrain texture maps referred to in the application are acquired with sufficient authorization.
In the embodiments of the application, unless otherwise indicated, "at least one" means one or more, and "a plurality of" means two or more. For example, a plurality of terrain texture maps means two or more terrain texture maps.
The above-described embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid-state drive (SSD)), among others.
The above embodiments are intended only to illustrate the technical solution of the application, not to limit it. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications and substitutions do not depart from the spirit of the application.

Claims (16)

1. A terrain rendering method, the method comprising:
acquiring blending ratio data of a plurality of terrain texture maps at a pixel point based on parameters of the pixel point in a sampled image; and
rendering a vertex corresponding to the pixel point in a terrain mesh region based on the plurality of terrain texture maps and the blending ratio data.
2. The method of claim 1, wherein the rendering the vertex corresponding to the pixel point in the terrain mesh region based on the plurality of terrain texture maps and the blending ratio data comprises:
mixing the plurality of terrain texture maps based on the blending ratio data to obtain a mixed texture map; and
rendering the vertex corresponding to the pixel point in the terrain mesh region based on the mixed texture map.
3. The method of claim 2, wherein the blending ratio data includes a weight for each of the plurality of terrain texture maps, and wherein the acquiring the blending ratio data of the plurality of terrain texture maps at the pixel point based on the parameters of the pixel point in the sampled image comprises:
acquiring, based on the value of a color channel of the pixel point in the sampled image, the weight at the pixel point of the terrain texture map corresponding to that color channel.
4. The method of claim 3, wherein the plurality of terrain texture maps comprises a first terrain texture map, a second terrain texture map, a third terrain texture map, and a fourth terrain texture map, and wherein the acquiring, based on the value of a color channel of the pixel point in the sampled image, the weight at the pixel point of the terrain texture map corresponding to that color channel comprises:
acquiring the weight of the first terrain texture map at the pixel point based on the value of the red channel of the pixel point in the sampled image, the first terrain texture map corresponding to the red channel;
acquiring the weight of the second terrain texture map at the pixel point based on the value of the blue channel, the second terrain texture map corresponding to the blue channel;
acquiring the weight of the third terrain texture map at the pixel point based on the value of the green channel, the third terrain texture map corresponding to the green channel; and
acquiring the weight of the fourth terrain texture map at the pixel point based on the value of the opacity channel, the fourth terrain texture map corresponding to the opacity channel.
5. The method of claim 1, wherein the sampled image comprises N partition sampled images and the terrain mesh region comprises N rendering regions, and wherein the rendering the vertex corresponding to the pixel point in the terrain mesh region based on the plurality of terrain texture maps and the blending ratio data comprises:
acquiring M-th blending ratio data based on the M-th partition sampled image; and
rendering a vertex corresponding to a pixel point in the M-th rendering region based on the plurality of terrain texture maps and the M-th blending ratio data;
wherein N is an integer greater than 1, and M is a positive integer less than or equal to N.
6. The method of claim 5, wherein the rendering the vertex corresponding to the pixel point in the M-th rendering region based on the plurality of terrain texture maps and the M-th blending ratio data comprises:
mixing the plurality of terrain texture maps based on the M-th blending ratio data to obtain an M-th mixed texture map; and
rendering the vertex corresponding to the pixel point in the M-th rendering region based on the M-th mixed texture map.
7. The method of claim 1, wherein the plurality of terrain texture maps comprises K terrain texture map sets and the sampled image comprises K region sampled images, and wherein the acquiring the blending ratio data of the plurality of terrain texture maps at the pixel point based on the parameters of the pixel point in the sampled image comprises:
acquiring, based on parameters of the pixel point in the L-th region sampled image, the L-th ratio data at the pixel point of the terrain texture maps in the L-th terrain texture map set;
all of the L-th ratio data together constituting the blending ratio data;
wherein K is an integer greater than 1, and L is a positive integer less than or equal to K.
8. The method of claim 1, further comprising:
obtaining rendering data of the terrain mesh region based on the values of the color channels of the pixel points of the sampled image; and
inserting a grass blade into the terrain mesh region based on the rendering data.
9. The method of claim 8, wherein the inserting the grass blade into the terrain mesh region based on the rendering data comprises:
inserting the grass blade into a unit area in response to determining that the rendering data within the unit area of the terrain mesh region satisfies a grass texture feature.
10. The method of claim 8, wherein the inserting the grass blade into the terrain mesh region based on the rendering data comprises:
determining a number of grass blades based on the proportion of grass texture indicated by the rendering data, the number of grass blades being positively correlated with the proportion of grass texture; and
inserting the number of grass blades into the unit area.
11. The method of claim 8, wherein the inserting the grass blade into the terrain mesh region based on the rendering data comprises:
determining the grass blade based on the rendering data and a distance between a unit area in the terrain mesh region and a rendering center, the grass blade comprising at least one vertex, the number of vertices in the grass blade being inversely related to the distance; and
inserting the grass blade into the unit area.
12. The method of claim 11, wherein the grass blades include a first grass blade and a second grass blade, the number of vertices in the first grass blade being greater than the number of vertices in the second grass blade, and wherein the determining the grass blade based on the rendering data and the distance between the unit area in the terrain mesh region and the rendering center comprises:
determining the first grass blade in response to determining that the distance is less than or equal to a distance threshold; or
determining the second grass blade in response to determining that the distance is greater than or equal to the distance threshold.
13. The method of claim 1, wherein the acquiring the blending ratio data of the plurality of terrain texture maps at the pixel point based on the parameters of the pixel point in the sampled image comprises:
acquiring, based on the gray value of the pixel point in the sampled image, the blending ratio data at the pixel point of the terrain texture map corresponding to that gray value; or
acquiring, based on the value of the transparency channel of the pixel point in the sampled image, the blending ratio data at the pixel point of the terrain texture map corresponding to the transparency channel; or
acquiring, based on the height value of the pixel point in the sampled image, the blending ratio data at the pixel point of the terrain texture map corresponding to that height value.
14. A terrain rendering device, comprising:
an obtaining module configured to acquire blending ratio data of a plurality of terrain texture maps at a pixel point based on parameters of the pixel point in a sampled image; and
a rendering module configured to render a vertex corresponding to the pixel point in a terrain mesh region based on the plurality of terrain texture maps and the blending ratio data.
15. A server, comprising: a processor coupled to a memory, the memory storing at least one computer program instruction that is loaded and executed by the processor to cause the server to implement the method of any one of claims 1-13.
16. A computer-readable storage medium, wherein at least one instruction is stored in the storage medium, and the instruction, when run on a computer, causes the computer to perform the method of any one of claims 1-13.
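As a non-limiting illustration of claims 3-4, the sketch below blends four terrain texture maps with weights read from the red, blue, green, and opacity channels of the sampled image, following the channel-to-map pairing of claim 4. Normalizing the weights to sum to 1 and the [0, 1] value range are assumptions of the example, not requirements of the claims.

```python
import numpy as np

def blend_four_maps(sample_rgba, texture_maps, u, v):
    # sample_rgba: (r, g, b, a) of the sampled-image pixel, values in [0, 1].
    # texture_maps: four H x W x C arrays; claim 4 pairs the first map with
    # red, the second with blue, the third with green, the fourth with opacity.
    r, g, b, a = sample_rgba
    weights = np.array([r, b, g, a], dtype=float)
    weights /= max(weights.sum(), 1e-6)                 # assumed normalization
    texels = np.stack([m[v, u] for m in texture_maps])  # one texel per map
    return (weights[:, None] * texels).sum(axis=0)      # blended texel
```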
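Claims 5-6 partition the work so that each of the N partition sampled images drives exactly one rendering region. A minimal sketch follows; the per-partition ratio derivation (a channel mean), the one-channel-per-map assumption, and the dictionary-based region representation are stand-ins invented for the example.

```python
import numpy as np

def mix_texture_maps(texture_maps, ratios):
    # Weighted per-texel sum of the maps (the "mixed texture map" of claim 2),
    # assuming one ratio per terrain texture map.
    return sum(w * t for w, t in zip(ratios, texture_maps))

def render_partitioned(partition_images, regions, texture_maps):
    # N partition sampled images and N rendering regions, paired one-to-one.
    assert len(partition_images) == len(regions)
    for image, region in zip(partition_images, regions):
        # M-th blending ratio data from the M-th partition image; the channel
        # mean used here is an illustrative stand-in for the real derivation.
        ratios = image.reshape(-1, image.shape[-1]).mean(axis=0)
        ratios = ratios / max(ratios.sum(), 1e-6)
        # The M-th mixed texture map renders the M-th region's vertices.
        region["mixed_texture"] = mix_texture_maps(texture_maps, ratios)
```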
CN202311172675.XA — Terrain rendering method and device — Priority/filing date: 2023-09-12 — Status: Pending — Published as CN117197275A (en)

Priority Applications (1)

CN202311172675.XA — Priority date: 2023-09-12 · Filing date: 2023-09-12 · Title: Terrain rendering method and device

Applications Claiming Priority (1)

CN202311172675.XA — Priority date: 2023-09-12 · Filing date: 2023-09-12 · Title: Terrain rendering method and device

Publications (1)

CN117197275A — Publication date: 2023-12-08

Family

ID=88992025

Family Applications (1)

CN202311172675.XA (Pending, published as CN117197275A) — Terrain rendering method and device

Country Status (1)

Country: CN · Link: CN117197275A (en)


Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination