CN117197276A - Grassland rendering method and device - Google Patents


Info

Publication number
CN117197276A
Authority
CN
China
Prior art keywords
unit area
grass
rendering
data
color
Prior art date
Legal status
Pending
Application number
CN202311172716.5A
Other languages
Chinese (zh)
Inventor
刘立 (Liu Li)
Current Assignee
Guangzhou Yiju Future Network Technology Co ltd
Original Assignee
Guangzhou Yiju Future Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Yiju Future Network Technology Co ltd filed Critical Guangzhou Yiju Future Network Technology Co ltd

Landscapes

  • Image Generation (AREA)

Abstract

The application discloses a grassland rendering method and device. Rendering data within a unit area of a rendering area is acquired, and grass blades are inserted into the unit area in response to determining that the rendering data of the unit area satisfies a grass texture feature. Because satisfying the grass texture feature is the criterion for inserting grass blades, blades are inserted automatically wherever a unit area's rendering data meets that criterion. The grass area therefore does not need to be selected manually, which saves the labor cost of manual selection and improves rendering efficiency.

Description

Grassland rendering method and device
Technical Field
The application relates to the technical field of computers, in particular to a grassland rendering method and device.
Background
To render grass on terrain, a grass area is typically selected manually within the rendering area, and grass blades are inserted into the selected area.
However, because the grass area is selected manually, rendering a relatively large area requires a great deal of manpower, making rendering inefficient.
Disclosure of Invention
The embodiments of the application provide a grassland rendering method and device, which save the labor cost of manually selecting grass areas and improve rendering efficiency. The technical solution is as follows.
In a first aspect, there is provided a method of grass rendering, the method comprising:
acquiring rendering data in a unit area in a rendering area;
the grass blades are inserted into the unit area in response to determining that the rendering data of the unit area satisfies a grass texture feature.
In one possible implementation, the rendering data includes color data of pixels in the unit area, and the determining that the rendering data of the unit area satisfies the grassland texture feature includes:
determining the number of target pixel points in the unit area based on the color data of the pixel points in the unit area and the grassland color interval, wherein the color data of the target pixel points belong to the grassland color interval;
and determining that the rendering data of the unit area meets the grassland texture characteristics based on the number of the target pixel points in the unit area.
In one possible implementation manner, the determining that the rendering data of the unit area meets the grassland texture feature based on the number of target pixel points in the unit area includes:
determining that the number of target pixels in the unit area is greater than or equal to a number threshold; or
determining that the ratio between the number of target pixels in the unit area and the total number of pixels in the unit area is greater than or equal to a first ratio threshold.
In one possible implementation, the rendering data includes color data of pixels in the unit area, and the determining that the rendering data of the unit area satisfies the grassland texture feature includes:
acquiring the statistical value of the color data of the pixel points in the unit area;
determining that a ratio between the statistical value of the color data and a grass color threshold is greater than or equal to a second ratio threshold.
In one possible implementation manner, the obtaining the statistical value of the color data of the pixel points in the unit area includes at least one of the following:
acquiring the average of the color data of all pixels in the unit area; or
acquiring the color data of the central pixel of the unit area; or
acquiring the average of the color data of the corner pixels of the unit area.
In one possible implementation, the inserting the grass blade into the unit area includes:
determining the number of grass blades based on the rendering data within the unit area;
inserting the determined number of grass blades into the unit area.
In one possible implementation, the rendering data includes a proportion of grass texture within the unit area, and the determining the number of grass blades based on the rendering data of the unit area includes:
determining the number of grass blades based on the proportion of grass texture in the unit area, where the number of grass blades is positively correlated with that proportion.
In one possible implementation, the rendering data includes color data of pixels in the unit area, and the determining the number of the grass blades based on the rendering data in the unit area includes:
determining the number of grass blades based on the ratio between the color data of the pixels and the grass color threshold.
In one possible implementation, the determining the number of grass blades based on a ratio between the color data of the pixel point and a grass color threshold includes:
determining the number of grass blades based on the ratio between the average color data of the pixels in the unit area and the grass color threshold; or
determining the number of grass blades based on the ratio between the color data of the central pixel in the unit area and the grass color threshold; or
determining the number of grass blades based on the ratio between the average color data of the corner pixels of the unit area and the grass color threshold.
In one possible implementation manner, before the acquiring the rendering data in the unit area in the rendering area, the method further includes:
determining the rendering area based on the position of the virtual object and a preset radius, where the distance from the boundary of the rendering area to the position of the virtual object is the preset radius.
In a second aspect, there is provided a grassland rendering apparatus, the apparatus comprising:
the acquisition module is used for acquiring the rendering data in the unit area in the rendering area;
a determining module for determining that the rendering data of the unit area satisfies a grassland texture feature;
and a rendering module for inserting grass blades into the unit area.
In a possible implementation manner, the rendering data includes color data of pixels in the unit area, and the determining module is configured to determine, based on the color data of pixels in the unit area and a grassland color interval, a number of target pixels in the unit area, where the color data of the target pixels belongs to the grassland color interval; and determining that the rendering data of the unit area meets the grassland texture characteristics based on the number of the target pixel points in the unit area.
In a possible implementation manner, the determining module is configured to determine that the number of target pixel points in the unit area is greater than or equal to a number threshold; or determining that the ratio between the number of the target pixel points in the unit area and the total number of the pixel points in the unit area is greater than or equal to a first ratio threshold.
In a possible implementation manner, the rendering data includes color data of pixels in the unit area, and the determining module is configured to obtain a statistical value of the color data of the pixels in the unit area; determining that a ratio between the statistical value of the color data and a grass color threshold is greater than or equal to a second ratio threshold.
In one possible implementation manner, the acquiring module is configured to perform at least one of the following:
acquiring the average of the color data of all pixels in the unit area; or
acquiring the color data of the central pixel of the unit area; or
acquiring the average of the color data of the corner pixels of the unit area.
In one possible implementation, the determining module is configured to determine the number of grass blades based on the rendering data in the unit area; the rendering module is configured to insert that number of grass blades into the unit area.
In one possible implementation, the rendering data includes a proportion of grass texture within the unit area, and the determining module is configured to determine the number of grass blades based on that proportion, the number of grass blades being positively correlated with the proportion of grass texture.
In one possible implementation, the rendering data includes color data of pixels in the unit area, and the determining module is configured to determine the number of the grass blades based on a ratio between the color data of the pixels and a grass color threshold.
In one possible implementation, the determining module is configured to determine the number of grass blades based on the ratio between the average color data of the pixels in the unit area and the grass color threshold; or based on the ratio between the color data of the central pixel in the unit area and the grass color threshold; or based on the ratio between the average color data of the corner pixels of the unit area and the grass color threshold.
In one possible implementation, the determining module is further configured to determine the rendering area based on the position of the virtual object and a preset radius, where the distance from the boundary of the rendering area to the position of the virtual object is the preset radius.
In a third aspect, a server is provided, the server comprising: a processor coupled to a memory having stored therein at least one computer program instruction that is loaded and executed by the processor to cause the server to implement the method of the first aspect or any of the alternatives of the first aspect.
In a fourth aspect, there is provided a computer readable storage medium having stored therein at least one instruction which when executed on a computer causes the computer to perform the method of the first aspect or any of the alternatives of the first aspect.
In a fifth aspect, there is provided a computer program product comprising one or more computer program instructions which, when loaded and run by a computer, cause the computer to carry out the method of the first aspect or any of the alternatives of the first aspect.
In a sixth aspect, there is provided a chip comprising programmable logic circuitry and/or program instructions for implementing, when the chip runs, the method of the first aspect or any of the alternatives of the first aspect.
In a seventh aspect, a server cluster is provided, where the server cluster includes a first server and a second server, and the first server and the second server are configured to cooperatively implement a method according to the first aspect or any of the alternatives of the first aspect.
Thus, the embodiments of the application have the following beneficial effects:
Because satisfying the grass texture feature is the criterion for inserting grass blades, blades are inserted automatically wherever the rendering data of a unit area meets that criterion. The grass area therefore does not need to be selected manually, which saves the labor cost of manual selection and improves rendering efficiency.
In addition, because grass blades simulate the distribution and form of plants in a real grassland, a unit area with inserted blades visually presents the characteristics of a grassland, rendering the grass effect and enhancing the realism and detail of the rendering.
Drawings
FIG. 1 is a flow chart of a method for grass rendering provided by an embodiment of the application;
FIG. 2 is a schematic diagram of terrain mesh data provided by an embodiment of the present application;
fig. 3 is a schematic structural diagram of a grassland rendering device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
Fig. 1 is a flowchart of a grassland rendering method according to an embodiment of the present application. The method shown in fig. 1 includes the following steps S110 to S120.
Step S110, obtaining rendering data in a unit area in the rendering area.
The rendering area is an area of the terrain mesh to be rendered; it may be the whole terrain mesh area or a sub-area of it. One terrain mesh area includes one or more rendering areas.
In one possible implementation, the terrain mesh area is partitioned according to a preset rule to obtain one or more rendering areas. For example, the size of the rendering area is determined first; the number of rendering areas is then derived from the total size of the terrain mesh area and the rendering area size; and the terrain mesh area is divided accordingly. The terrain mesh area may be divided into rendering areas of equal size, or divided unevenly according to specific needs and terrain features so as to better fit different parts of the terrain. For example, where the terrain changes sharply or contains much detail, the rendering area can be made smaller to depict the terrain more finely; where the terrain varies gently and detail is sparse, the rendering area can be made larger to reduce the rendering workload.
Dividing the terrain mesh area into one or more rendering areas splits a complex terrain rendering task (rendering the entire terrain mesh area) into several smaller tasks (rendering one area each), which enables parallel processing (e.g., rendering multiple areas in parallel) and so improves rendering performance and efficiency. Rendering each area in turn renders the whole terrain mesh area.
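The equal-division approach described above can be sketched in a few lines (this is an illustrative sketch only; the function name and the 1024/256 sizes are my own assumptions, not values from the patent):

```python
def partition_terrain(terrain_w, terrain_h, region_size):
    """Split a terrain_w x terrain_h mesh area into equal-size
    rendering regions; the last row/column may be smaller if the
    terrain size is not an exact multiple of region_size."""
    regions = []
    for y in range(0, terrain_h, region_size):
        for x in range(0, terrain_w, region_size):
            w = min(region_size, terrain_w - x)
            h = min(region_size, terrain_h - y)
            regions.append((x, y, w, h))
    return regions

# A 1024 x 1024 terrain with 256-pixel regions yields a 4 x 4 grid.
regions = partition_terrain(1024, 1024, 256)
```

Each tuple `(x, y, w, h)` is one rendering area, which could then be handed to a separate rendering task for parallel processing.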
In another possible implementation, the rendering area is determined from the terrain mesh area on demand. For example, the rendering area is determined based on the position of a virtual object and a preset radius, the distance from the boundary of the rendering area to the position of the virtual object being the preset radius.
The virtual object is, for example, an object in a game, such as a virtual character (a game character). Game characters have models, animations, and interactive behaviors; they can move freely in the game world and interact with other objects. The virtual object may also be a virtual prop or item, such as a collectible, a weapon, a piece of equipment, or another buff item; an enemy or monster designed to fight or otherwise interact with the player; an environmental object in the game scene, such as a virtual tree, building, or rock; an NPC (non-player character), i.e., a virtual object controlled by the game program, such as a merchant, resident, or other character that provides quests, dialogue, or trading, with whom the player can interact, receive quests, obtain information, or exchange items; or a game quest target.
The position of the virtual object is, for example, its two-dimensional or three-dimensional coordinates. In one possible implementation of determining the rendering area from the object's position, the position is taken as the center of a sphere of preset radius, and every pixel whose distance from the center is less than or equal to the radius belongs to the rendering area; the distance can be computed as the Euclidean distance. For example, if the virtual object is at (x, y, z) and the preset radius is r, the rendering area is the spherical region of radius r centered on the object's position. In another implementation, the object's position is taken as the center of a rectangle of preset width and length, and a pixel belongs to the rendering area if its lateral and longitudinal distances from the object fall within the rectangle. In yet another implementation, the object's position is taken as the center of a circle of preset radius, and pixels whose distance from the center is less than or equal to the radius belong to the rendering area.
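The spherical and rectangular membership tests above can be sketched as follows (an illustration only; the function names and the use of Python are my own assumptions):

```python
import math

def in_spherical_region(obj_pos, point, radius):
    """True if the 3D point lies within 'radius' of the virtual
    object's position (Euclidean distance test)."""
    return math.dist(obj_pos, point) <= radius

def in_rect_region(obj_pos, point, half_width, half_length):
    """True if the 2D point lies inside the axis-aligned rectangle
    centered on the virtual object's position."""
    return (abs(point[0] - obj_pos[0]) <= half_width and
            abs(point[1] - obj_pos[1]) <= half_length)

# A point 5 units from the object lies exactly on the spherical boundary.
assert in_spherical_region((0, 0, 0), (3, 4, 0), 5)
```

The circular (2D) variant is the same distance test as the spherical one, applied to two coordinates instead of three.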
Because the rendering area is limited to the region around the virtual object, areas far from the object need not be rendered, unlike rendering the whole terrain mesh area. This reduces the processor's rendering workload, saves the memory occupied by the rendering process, and lowers the hardware requirements for rendering. In addition, rendering the grass texture around the virtual character makes that region contrast sharply with other regions; for example, the green color and rich detail of the grass texture make the character stand out, focusing the player's attention on it and strengthening its presence in the game scene.
A unit area is one of the mutually independent small areas into which the rendering area is divided. Each unit area contains a portion of the terrain and has a definite size and position. For example, the entire rendering area is divided into unit areas of a fixed size, set according to actual needs: a unit area may be square or rectangular, and its side length (or width and height) may be a fixed pixel value or a relative proportion. Alternatively, unit areas are divided adaptively according to the visible range around the virtual object or the camera's field of view.
In one possible implementation, the rendering data includes the color data of the pixels within the unit area. Color data comprises the values of several color channels. For example, a pixel's color data may be its values in the red (R), green (G), and blue (B) channels plus an alpha (opacity, A) channel; the combination of the four is known as RGBA. Alternatively, the color data may be the pixel's values in the cyan (C), magenta (M), yellow (Y), and black (K) channels (CMYK); its hue (H), saturation (S), and lightness (L) values (HSL); or its hue (H), saturation (S), and value (V) values (HSV).
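As a small illustration of handling per-pixel color data in several of the color spaces above, Python's standard colorsys module performs the RGB↔HSV/HLS conversions (the grass-green sample value is my own, not from the patent):

```python
import colorsys

# RGBA color data for one pixel, channel values in [0.0, 1.0].
pixel_rgba = (0.2, 0.8, 0.3, 1.0)  # a grass-like green (example value)

r, g, b, a = pixel_rgba
h, s, v = colorsys.rgb_to_hsv(r, g, b)   # hue / saturation / value
h2, l, s2 = colorsys.rgb_to_hls(r, g, b)  # hue / lightness / saturation

# Pure green sits at hue 1/3 in both HSV and HLS.
assert abs(colorsys.rgb_to_hsv(0.0, 1.0, 0.0)[0] - 1 / 3) < 1e-9
```

Working in HSV can make "green-ness" tests simpler, since grass colors cluster in a narrow hue band regardless of brightness.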
Step S120, inserting a grass blade into the unit area in response to determining that the rendering data of the unit area satisfies the grass texture feature.
A grass blade is a small sheet element used to represent grass; it stands for a small tuft or leaf in a real grassland. Grass blades usually take the form of planar or near-planar geometry containing at least one vertex. For example, a blade with four vertices may have a diamond or rectangular shape, and a blade with three vertices a triangular shape. Three and four vertices are merely examples: a blade with five vertices may have an irregular polygonal shape, and one with six vertices a convex hexagonal shape. A grass blade may also be a 3D model.
In one possible implementation of obtaining a grass blade, the number of vertices is determined, a geometric body with that number of vertices is created, and the grass map is applied to the geometry to produce the blade. The grass map encodes characteristics of the grass, such as its color, texture detail, and transparency.
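A minimal sketch of building a four-vertex grass blade with the full grass texture mapped onto it, per the implementation above (the dimensions, UV layout, and triangle winding are illustrative assumptions, not details from the patent):

```python
def make_grass_blade(width=0.1, height=0.5):
    """Build a rectangular grass-blade quad rooted at the origin:
    four (x, y, z) vertices, UV coordinates mapping the full grass
    texture onto the quad, and triangle indices."""
    hw = width / 2
    vertices = [(-hw, 0.0, 0.0), (hw, 0.0, 0.0),
                (hw, height, 0.0), (-hw, height, 0.0)]
    # (0,0) = bottom-left of the texture, (1,1) = top-right.
    uvs = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
    # Two triangles covering the quad, counter-clockwise winding.
    indices = [0, 1, 2, 0, 2, 3]
    return vertices, uvs, indices
```

In an engine, the grass map's alpha channel would typically cut the rectangular quad down to a blade silhouette at shading time.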
In one possible implementation of determining that the rendering data of the unit area satisfies the grass texture feature, the number of target pixels in the unit area is determined based on the color data of the pixels and the grass color interval; whether the rendering data satisfies the grass texture feature is then decided from the number of target pixels, and grass blades are inserted accordingly.
The grass color interval is a range describing the color characteristics of grass. For example, in RGB space it may comprise ranges for the R, G, and B components; in HSV space, ranges for the H (hue), S (saturation), and V (value) components. In one possible implementation, the interval is determined by sampling real grass images: photographs of grass are collected from different scenes and under different lighting conditions, sample images representative of grass colors are selected, and color data is extracted from them to form the grass color interval. Optionally, the interval is adjusted according to the environmental conditions of the images, such as sunlight intensity, shadow, and reflections from surrounding objects, so that it covers grass color variation across environments.
A target pixel is a pixel whose color data falls within the grass color interval. For example, a pixel's color data is compared with the upper and lower bounds of the interval; if it lies inside, the pixel is a target pixel. Since color data usually comprises several channels, in one possible implementation each channel value of a pixel is compared with the bounds of the corresponding channel in the grass color interval, and the pixel is a target pixel only if every channel value lies within its bounds. In RGB space, for example, a pixel is a target pixel if its red value lies within the interval's red bounds, its green value within the green bounds, and its blue value within the blue bounds.
In one possible implementation, each pixel in the unit area is traversed; whenever a pixel's color data lies within the defined grass color interval, it is marked as a target pixel and the recorded count is incremented by one. After the last pixel has been visited, the recorded number of target pixels is output.
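The traversal-and-count procedure might look like the following sketch (the RGB bounds of the grass color interval here are invented placeholders, not values from the patent):

```python
# Assumed per-channel bounds for "grass green", channels in [0.0, 1.0].
GRASS_INTERVAL = {"r": (0.0, 0.4), "g": (0.4, 1.0), "b": (0.0, 0.5)}

def is_target_pixel(rgb, interval=GRASS_INTERVAL):
    """A pixel is a target pixel when every channel value falls
    inside the corresponding bounds of the grass color interval."""
    return all(lo <= c <= hi
               for c, (lo, hi) in zip(rgb, interval.values()))

def count_target_pixels(pixels):
    """Traverse the unit area's pixels and count the target pixels."""
    return sum(1 for p in pixels if is_target_pixel(p))
```

For example, `count_target_pixels` over a mixed list of grass-green and reddish pixels counts only the green ones.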
In one possible implementation, the number of target pixels in the unit area is compared with a number threshold. If the count is greater than or equal to the threshold, the rendering data of the unit area is deemed to satisfy the grass texture feature and grass blades are inserted into the unit area; if the count is below the threshold, the feature is not satisfied and no blades need be inserted. Comparing the count against a threshold makes the judgment fast and simple to implement, and, compared with manual processing, it selects the areas to be rendered as grass both faster and more accurately.
In another possible implementation, the ratio between the number of target pixels and the total number of pixels in the unit area is computed and compared with a first ratio threshold. If the ratio is greater than or equal to the threshold, the rendering data satisfies the grass texture feature and grass blades are inserted into the unit area; otherwise no blades are inserted. In one example scenario, a unit area is rendered with several terrain texture maps (e.g., five), one of which is a grass texture map, and the proportion of the grass texture map decides whether blades are inserted. That proportion is the ratio between the number of pixels occupied by the grass texture (the target pixels) and the total number of pixels in the unit area; the more pixels the grass texture covers, the higher the proportion. For example, with a first ratio threshold of 20%, a grass texture proportion above 20% indicates that enough grass texture exists in the unit area, so blades are inserted, increasing the realism of the grass.
Moreover, comparing the ratio of target pixels to total pixels captures the relative proportion of grass in the rendering data. The total number of pixels in a unit area changes with zoom level or viewing angle, but the ratio remains relatively invariant, so the criterion works across different scales. And compared with manual processing, it selects the areas to be rendered as grass faster and more accurately.
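The two criteria above (absolute count, and target-pixel ratio) can be combined into one decision function, sketched here with assumed parameter names and illustrative threshold values:

```python
def satisfies_grass_texture(num_target, total_pixels,
                            count_threshold=None, ratio_threshold=None):
    """Decide whether a unit area meets the grass texture feature,
    either by an absolute target-pixel count or by the target-pixel
    ratio. Thresholds are caller-supplied illustrative parameters."""
    if count_threshold is not None and num_target >= count_threshold:
        return True
    if ratio_threshold is not None and total_pixels > 0:
        return num_target / total_pixels >= ratio_threshold
    return False

# With a 20% ratio threshold, 250 target pixels out of 1000 qualify.
assert satisfies_grass_texture(250, 1000, ratio_threshold=0.2)
```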
In one possible implementation, a statistic of the color data of the pixels in the unit area is computed from the color data of each pixel, the ratio between that statistic and a grass color threshold is determined, and the ratio is compared with a second ratio threshold. If the ratio is greater than or equal to the second ratio threshold, the rendering data of the unit area satisfies the grass texture feature and grass blades are inserted; otherwise the feature is not satisfied and no blades are inserted. The ratio between the color statistic and the grass color threshold can be understood as the grass texture proportion: with a second ratio threshold of 20%, for example, blades are inserted when the proportion reaches 20% or more.
One way of obtaining the statistical value of the color data of the pixel points in the unit area is to take the average value of the color data of each pixel point in the unit area. For example, the color data of all pixel points in the unit area are summed, and the sum is divided by the number of pixel points to obtain the average value. Correspondingly, if the ratio between this average value and the grassland color threshold is greater than or equal to the second ratio threshold, the grass blades are inserted into the unit area. Because the color data of all pixel points in the unit area are used, the color characteristics of the whole unit area are considered, which is more accurate. Meanwhile, using the average value as the quantization condition for judging whether to insert the grass blades reduces the influence of noise, making the result smoother and more stable.
Another way of obtaining the statistical value of the color data of the pixel points in the unit area is to take the color data of the central pixel point in the unit area. Correspondingly, if the ratio between the color data of the central pixel point and the grassland color threshold is greater than or equal to the second ratio threshold, the grass blades are inserted into the unit area. Because only the color data of the central pixel point is used, rather than the color data of all pixel points in the unit area, the amount of data to be processed is reduced and the calculation efficiency is improved. This approach is suitable for scenes where the center of the unit area is representative of the area as a whole.
A further way of obtaining the statistical value of the color data of the pixel points in the unit area is to take the average value of the color data of the corner pixel points in the unit area, for example the average of the four corner points at the upper left, lower left, upper right, and lower right of the unit area. Correspondingly, if the ratio between this average value and the grassland color threshold is greater than or equal to the second ratio threshold, the grass blades are inserted into the unit area. This approach can better capture the color characteristics at the edges of the unit area, making it suitable for scenes with obvious grassland boundaries.
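The three statistics above can be sketched as follows, assuming the unit area is given as a 2D grid of scalar color values (e.g., the green channel); helper names are illustrative:

```python
def mean_color(pixels):
    # Average of the color data of every pixel point in the unit area.
    flat = [v for row in pixels for v in row]
    return sum(flat) / len(flat)

def center_color(pixels):
    # Color data of the central pixel point only.
    h, w = len(pixels), len(pixels[0])
    return pixels[h // 2][w // 2]

def corner_mean_color(pixels):
    # Average of the four corner pixel points (upper/lower, left/right).
    h, w = len(pixels), len(pixels[0])
    corners = [pixels[0][0], pixels[0][w - 1], pixels[h - 1][0], pixels[h - 1][w - 1]]
    return sum(corners) / 4
```

Any of the three values can then be divided by the grassland color threshold and compared against the second ratio threshold, trading accuracy (all pixels) against computation (center or corners only).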
In another possible implementation, the rendering data includes texture data of pixels within the unit area, and whether the grass blades are inserted into the unit area can be judged according to that texture data. Grass materials often have a certain degree of fineness and may include fine texture elements (e.g., grass blades, stems, etc.). It is possible to detect whether texture elements conforming to the characteristics of grassland (such as fine and regular textures, for example elongated spots or mottled shapes) are present in the texture data of the unit area; if such texture elements are present, the unit area is judged to be a grassland area, and the grass blades are inserted into the unit area.
In another possible implementation, the rendering data includes illumination data of pixels within the unit area, and whether the grass blades are inserted into the unit area can be judged according to that illumination data. Grassland generally exhibits changes in light and shade under illumination, with certain highlight and shadow effects. The illumination data in the unit area can be examined to determine whether an illumination distribution and shadow effect conforming to the characteristics of grassland exist, so as to decide whether the grass blades are inserted into the unit area. For example, if the illumination data in the unit area exhibits relatively bright reflection characteristics and the reflection changes under different illumination angles, the unit area is judged to be a grassland area, and the grass blades are inserted into the unit area.
In another possible implementation, at least two of the color data, texture data, illumination data, and geometric data of the unit area are combined to determine whether the unit area satisfies the grassland texture feature. When the unit area satisfies the grassland texture feature, the grass blades are inserted into the unit area; when it does not, no grass blades are inserted.
In one possible implementation, the number of grass blades is determined based on the rendering data within the unit area, and that number of grass blades is inserted into the unit area. In this way, the number of grass blades in the unit area better matches the rendering data of the unit area, so that the distribution of grass blades is coordinated with the overall environment of the unit area. Moreover, an appropriate number of grass blades reduces both the probability that too many inserted blades increase the rendering burden and degrade performance, and the probability that too few blades produce a poor rendering effect.
In one possible implementation, the rendering data includes the duty cycle of the grass texture within the unit area, and the number of grass blades is determined based on that duty cycle. The number of grass blades is positively correlated with the duty cycle of the grass texture; that is, the larger the duty cycle, the greater the number of grass blades. Illustratively, a first mapping relationship between the number of grass blades and the duty cycle of the grass texture is preset: its input parameter is the duty cycle of the grass texture in the unit area, and its output parameter is the number of grass blades. The number of grass blades is then determined based on the duty cycle of the grass texture within the unit area and the first mapping relationship. The first mapping relationship may be a function, either linear or nonlinear, such as an exponential, logarithmic, or piecewise function. Because the number of grass blades is positively correlated with the duty cycle of the grass texture, greener places in the grassland are simulated with more grass, so that the amount of grass better matches the grassland color.
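A linear first mapping relationship of this kind might look like the following sketch; the maximum blade count is an assumed tuning parameter, not specified by the embodiment:

```python
def blade_count_from_duty_cycle(duty_cycle: float, max_blades: int = 32) -> int:
    # Linear first mapping relation: the blade count grows in proportion
    # to the grass-texture duty cycle, clamped into [0, 1].
    duty_cycle = max(0.0, min(1.0, duty_cycle))
    return round(duty_cycle * max_blades)
```

Swapping the linear expression for an exponential or piecewise one yields the nonlinear variants mentioned above without changing the interface.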
In one possible implementation, the rendering data includes color data of pixel points within the unit area; a ratio between the color data of the pixel points and a grass color threshold is obtained, and the number of grass blades is determined based on that ratio. The grass color threshold is used to determine the color range belonging to grass. One way of obtaining the ratio is by distance comparison: for example, the distance (such as Euclidean distance or degree of difference) between the color value of the pixel point and the grass color threshold is calculated and converted into a ratio. Another way is to calculate the similarity between the color value of the pixel point and the grass color threshold and convert the similarity into a ratio. A further way is to compare the color value of the pixel point with the grass color threshold to obtain a Boolean value indicating whether the color value of the pixel point falls within the color range indicated by the grass color threshold.
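The distance-based conversion can be sketched as follows, assuming 8-bit RGB colors and Euclidean distance normalized into a similarity ratio; this particular formula is a hypothetical choice, since the embodiment does not fix one:

```python
import math

def color_ratio(pixel_rgb, grass_rgb):
    # Euclidean distance between the pixel color and the grass reference
    # color, normalized by the maximum possible RGB distance and converted
    # into a similarity ratio in [0, 1]; identical colors give 1.0.
    max_dist = math.sqrt(3 * 255 ** 2)
    return 1.0 - math.dist(pixel_rgb, grass_rgb) / max_dist
```

The resulting ratio can feed the mapping relations described below, so that pixels closer to the reference grass color contribute more blades.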
In one possible implementation, the number of grass blades is determined based on the ratio between the average value of the color data of each pixel point within the unit area and the grass color threshold. For example, a second mapping relationship between the average value of the color data of the pixel points and the number of grass blades is set, and the number of grass blades is determined based on the average value and the second mapping relationship. For another example, when the ratio between the average value of the color data of each pixel point in the unit area and the grass color threshold is greater than or equal to a threshold, the number of grass blades is determined as a first number; when the ratio is smaller than the threshold, the number of grass blades is determined as a second number. The number of grass blades may be positively correlated with the ratio between the average value of the color data of each pixel point and the grass color threshold. Thus, as the average value of the color data increases, the number of grass blades increases accordingly, thereby simulating more grass in greener areas.
In one possible implementation, the number of grass blades is determined based on the ratio between the color data of the center pixel point within the unit area and the grass color threshold. For example, a third mapping relationship between the color data of the center pixel point and the number of grass blades is set, and the number of grass blades is determined based on the color data of the center pixel point and the third mapping relationship. For another example, when the ratio between the color data of the center pixel point in the unit area and the grass color threshold is greater than or equal to a threshold, the number of grass blades is determined as a first number; when the ratio is smaller than the threshold, the number of grass blades is determined as a second number. The number of grass blades may be positively correlated with the ratio between the color data of the center pixel point and the grass color threshold. Thus, when the color data of the center pixel point increases, the number of grass blades increases accordingly, thereby simulating more grass in greener areas.
In one possible implementation, the number of grass blades is determined based on the ratio between the average value of the color data of the corner pixel points within the unit area and the grass color threshold. For example, a fourth mapping relationship between the average value of the color data of the corner pixel points and the number of grass blades is set, and the number of grass blades is determined based on that average value and the fourth mapping relationship. For another example, when the ratio between the average value of the color data of the corner pixel points and the grass color threshold is greater than or equal to a threshold, the number of grass blades is determined as a first number; when the ratio is smaller than the threshold, the number of grass blades is determined as a second number. The number of grass blades may be positively correlated with the ratio between the average value of the color data of the corner pixel points and the grass color threshold. Thus, when the color data of the corner points increases, the number of grass blades increases accordingly, thereby simulating more grass in greener areas.
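The three threshold-based variants above share one shape, sketched here with illustrative first/second numbers; only the statistic fed in (mean, center pixel, or corner mean) differs:

```python
def blade_count_two_tier(statistic: float, grass_color_threshold: float,
                         ratio_threshold: float = 0.5,
                         first_number: int = 16, second_number: int = 4) -> int:
    # statistic: mean color, center-pixel color, or corner-mean color
    # of the unit area, per the second/third/fourth mapping relations.
    ratio = statistic / grass_color_threshold
    return first_number if ratio >= ratio_threshold else second_number
```

A smoother positive correlation can replace the two-tier step by returning a value interpolated from the ratio instead of one of two constants.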
According to the method provided by this embodiment, satisfaction of the grassland texture feature by the rendering data is used as the criterion for deciding to insert grass blades: the grass blades are automatically inserted when the rendering data of a unit area satisfies the grassland texture feature, and the grassland area does not need to be manually selected. This saves the labor cost of manually selecting grassland areas and improves rendering efficiency.
In addition, when the rendering data of the unit area does not meet the texture characteristics of the grassland, the grass inserting sheet does not need to be inserted, which is equivalent to filtering out some areas unsuitable for inserting the grass inserting sheet before inserting the grass inserting sheet, so that the probability of inserting the grass inserting sheet in the unnecessary areas is reduced, and the rendering load and the resource consumption are reduced.
In addition, as the grass insert can simulate the distribution and the form of plants in a real grassland, the unit area after the grass insert is inserted visually presents the characteristics of the grassland, the effect of the grassland is rendered, and the reality and the detail of rendering are enhanced.
The implementation of constructing terrain mesh data is illustrated below.
A terrain mesh area is a discretized area used to represent terrain, typically for terrain rendering and simulation. In computer graphics, a terrain mesh area may be regarded as a two-dimensional or three-dimensional mesh consisting of a series of adjacent vertices and the edges connecting them. In the two-dimensional case, the terrain mesh area is a mesh plane composed of a series of vertices and edges. Each vertex represents a point on the terrain surface, and the edges represent the connections between adjacent points. Each vertex may be assigned a height value to simulate changes in terrain height, and an interpolation algorithm can infer the height of unknown points from known height values to form a smooth terrain model. In the three-dimensional case, the terrain mesh area is a mesh of connected triangles (or other polygons) that approximates the terrain surface. Each vertex represents a point on the terrain surface and each triangle represents a small patch of the terrain. Each vertex may be assigned a height value to simulate changes in terrain height, and a continuous terrain surface model may be obtained by interpolating and smoothing the height values over each triangle. The fineness of the terrain mesh area depends on the resolution of the mesh, i.e., the number of vertices. Higher resolution provides more detailed and realistic terrain representation but increases computational and rendering complexity; lower resolution improves performance but may reduce the fidelity of the terrain surface.
In one possible implementation, in the selected grid, the height value of each pixel point in the grid and the position information of each pixel point are determined through a preset sequence array. And constructing a terrain grid area based on the height value of the pixel point and the position information of the pixel point.
In one possible implementation, the terrain mesh data behaves like a grayscale map. For example, the color shading (gray level) in the height map is used to represent changes in terrain height. Specifically, the whiter a region in the height map is, the greater the height value of that region, i.e., the higher the terrain there; conversely, the darker a region in the height map is, the smaller its height value, i.e., the lower the terrain. For example, referring to fig. 2, fig. 2 is a schematic diagram of terrain mesh data according to an embodiment of the present application; by observing the shading of colors in fig. 2, the height variation of the terrain can be understood intuitively.
A sequence array refers to an array whose elements are sequences; in other words, each element in the sequence array is data of sequence type. The number of bits of each element (i.e., sequence) in the sequence array may be set according to accuracy requirements; for example, a sequence may be 8 bits or 16 bits. The greater the number of bits of the sequence, the finer the height value representation that can be provided. The value of the sequence corresponds to the height value, and the sequence is in binary format.
The array is stored in an ordered form: the number of sequence elements in each row and column of the array corresponds to the number of pixel points in the terrain grid. For example, the number of sequence elements per row of the array indicates the number of pixels per row in the terrain grid, and the number per column indicates the number of pixels per column. If the array has 1080 sequences per row and 960 sequences per column, the terrain grid contains 1080×960 pixels: 1080 per row and 960 per column. The horizontal distance between adjacent pixel points is the same, so each sequence value is used as the height value of its pixel point when rendering the height map.
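Reading the sequence array into terrain vertices can be sketched as follows, with uniform horizontal spacing between pixel points as described above; the vertex layout is illustrative:

```python
def build_terrain_vertices(height_array, spacing=1.0):
    # height_array: rows of height sequences (e.g. 8- or 16-bit values);
    # each entry becomes one vertex (x, y, height), with the same
    # horizontal distance between adjacent pixel points.
    vertices = []
    for row_idx, row in enumerate(height_array):
        for col_idx, height in enumerate(row):
            vertices.append((col_idx * spacing, row_idx * spacing, height))
    return vertices
```

A 1080×960 sequence array would thus yield 1080×960 vertices, ready for the texture rendering step described below.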
The manner in which the rendering region is constructed is illustrated below.
In one possible implementation, the terrain mesh region is rendered based on vertex data in the terrain mesh data and texture feature configuration, resulting in a rendered region.
In one possible implementation, vertex data in the terrain mesh data is obtained based on coordinate information of pixel points in the terrain mesh data and height values of the pixel points. Each pixel point in the sampled image serves as a vertex in the terrain mesh area. The data of the vertex includes coordinate information and a height value, which is subsequently used for texture rendering of the vertex.
And the server acquires all vertex data of the terrain grid area, and then performs texture rendering on all vertices of the terrain grid area according to texture feature configuration required to be rendered.
In general, texture rendering needs to be performed on the whole terrain grid area. However, for larger terrains, hardware limitations mean the whole terrain grid area cannot be rendered at once, or cannot be rendered at once without occupying too much hardware. Therefore, the complete terrain grid area is divided into a plurality of rendering areas, which are then rendered in sequence according to a preset rule, so that the whole terrain grid area is rendered. Alternatively, instead of rendering the entire terrain mesh area at once, a specific rendering area may be rendered as needed; for example, the rendering area within a preset radius of the position of a designated virtual character may be used as the specific rendering area, reducing the hardware requirements for rendering.
The texture feature configuration is performed by using a topographic texture map as topographic texture data. Specifically, the topographic texture map is analyzed to obtain color distribution data of all pixel points in the topographic texture map, the color data are RGB data, and then the color distribution data are used as topographic texture data (including the number of the pixel points and the colors corresponding to the pixel points).
In one possible implementation, the terrain mesh region to be rendered is rendered entirely by closely paving terrain texture data in the terrain mesh region.
By tiling is meant tiling and filling the topographical texture data (or map) within the topographical grid area such that the entire area surface of the topographical grid area is completely covered by the map.
In one possible implementation, the precision of tiling is adjusted by configuring the tiling parameters. For example, suppose the pixel size of the topographic texture map is 20×20 and a rendering area of size 40×40 needs to be tiled. According to the required tiling precision, a ratio of 1:1 can be selected, in which case every pixel point in the 40×40 rendering area is covered and rendered one by one, and the number of tiled topographic texture maps is 4. When a ratio of 1:4 is selected, only one quarter of the pixels in the 40×40 rendering area are covered, i.e., one out of every 4 pixels is rendered.
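The 40×40 example works out as in the following sketch; treating the tiling ratio as a divisor of covered pixels is one plausible reading of the configuration above:

```python
import math

def tiled_map_count(region_w: int, region_h: int,
                    tex_w: int, tex_h: int, ratio: int = 1) -> int:
    # ratio = 1 means 1:1 tiling (every pixel covered);
    # ratio = 4 means 1:4 (one out of every 4 pixels rendered).
    covered_pixels = (region_w * region_h) // ratio
    per_map = tex_w * tex_h
    return math.ceil(covered_pixels / per_map)
```

At 1:1 a 40×40 region needs four 20×20 maps; at 1:4 a single map suffices, trading detail for rendering cost.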
In one possible implementation, boundaries between multiple rendering regions in a terrain mesh region are smoothed. Smoothing refers to processing boundaries between rendering regions to reduce discontinuities, jagged or noticeable transitions, making the rendering boundaries more natural and smooth. The manner of smoothing includes, but is not limited to, at least one of fusion seams, texture transitions, normal averaging, vertex scaling, or edge blurring.
Blending seams: for example, the texture, color, or other properties of adjacent rendering regions are blended, which may be accomplished by drawing a seam region at the boundary and performing texture interpolation, color blending, etc. within the seam region. The seam region may use fade and transition effects to gradually merge the features of adjacent rendering regions. Texture transitions: for example, multiple textures are blended to smooth the boundaries between rendering regions. Texture transition can be realized using techniques such as map fusion and weight blending, and the transition effect between different textures can be controlled by adjusting the transparency of the textures or using alpha maps. Normal averaging: for example, the normal vectors between adjacent rendering regions are smoothed. By calculating the normal directions of adjacent surfaces and averaging or interpolating them, normal differences are reduced and the boundary becomes smoother; using the smoothed normals produces a more realistic rendering effect. Vertex scaling: for example, the vertex positions on the boundaries of adjacent rendering regions are adjusted so that they transition gradually; this may be achieved by interpolation or weighting, making the height and shape changes of adjacent regions transition smoothly. Edge blurring: for example, a blurring effect is applied around rendering boundaries to reduce jagged and hard-edge artifacts; edge pixels may be blurred using algorithms such as Gaussian blur, smoothly blending the color of the edge pixels with surrounding pixels.
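Edge blurring at a seam can be sketched in one dimension as a simple box blur applied to the pixels around the boundary; this is a stand-in for the Gaussian blur mentioned above, and the window radius is an assumed parameter:

```python
def smooth_seam(colors, seam_index, radius=2):
    # Box-average the pixels within `radius` of the seam so the color of
    # edge pixels blends smoothly with surrounding pixels; pixels far from
    # the seam are left untouched.
    out = list(colors)
    for i in range(max(0, seam_index - radius),
                   min(len(colors), seam_index + radius + 1)):
        lo, hi = max(0, i - radius), min(len(colors), i + radius + 1)
        window = colors[lo:hi]
        out[i] = sum(window) / len(window)
    return out
```

Applying this across a hard 0→100 boundary replaces the step with a gradual ramp, which is the qualitative effect all five smoothing techniques aim for.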
In one possible implementation, the following steps a to C are performed before step S130 is performed.
And step A, determining the height value of each pixel point and the position information of each pixel point in the grid through a preset sequence array in the selected grid. And constructing a terrain grid area based on the height value of the pixel point and the position information of the pixel point.
A terrain mesh area is a discretized area used to represent terrain, typically for terrain rendering and simulation. In computer graphics, a terrain mesh area may be regarded as a two-dimensional or three-dimensional mesh consisting of a series of adjacent vertices and the edges connecting them. In the two-dimensional case, the terrain mesh area is a mesh plane composed of a series of vertices and edges. Each vertex represents a point on the terrain surface, and the edges represent the connections between adjacent points. Each vertex may be assigned a height value to simulate changes in terrain height, and an interpolation algorithm can infer the height of unknown points from known height values to form a smooth terrain model. In the three-dimensional case, the terrain mesh area is a mesh of connected triangles (or other polygons) that approximates the terrain surface. Each vertex represents a point on the terrain surface and each triangle represents a small patch of the terrain. Each vertex may be assigned a height value to simulate changes in terrain height, and a continuous terrain surface model may be obtained by interpolating and smoothing the height values over each triangle. The fineness of the terrain mesh area depends on the resolution of the mesh, i.e., the number of vertices. Higher resolution provides more detailed and realistic terrain representation but increases computational and rendering complexity; lower resolution improves performance but may reduce the fidelity of the terrain surface.
In one possible implementation, the terrain mesh data behaves like a grayscale map. For example, the color shading (gray level) in the height map is used to represent changes in terrain height. Specifically, the whiter a region in the height map is, the greater the height value of that region, i.e., the higher the terrain there; conversely, the darker a region in the height map is, the smaller its height value, i.e., the lower the terrain. For example, referring to fig. 4, fig. 4 is a schematic diagram of terrain mesh data according to an embodiment of the present application; by observing the shading of colors in fig. 4, the height variation of the terrain can be understood intuitively.
A sequence array refers to an array whose elements are sequences; in other words, each element in the sequence array is data of sequence type. The number of bits of each element (i.e., sequence) in the sequence array may be set according to accuracy requirements; for example, a sequence may be 8 bits or 16 bits. The greater the number of bits of the sequence, the finer the height value representation that can be provided. The value of the sequence corresponds to the height value, and the sequence is in binary format.
The array is stored in an ordered form: the number of sequence elements in each row and column of the array corresponds to the number of pixel points in the terrain grid. For example, the number of sequence elements per row of the array indicates the number of pixels per row in the terrain grid, and the number per column indicates the number of pixels per column. If the array has 1080 sequences per row and 960 sequences per column, the terrain grid contains 1080×960 pixels: 1080 per row and 960 per column. The horizontal distance between adjacent pixel points is the same, so each sequence value is used as the height value of its pixel point when rendering the height map.
And B, obtaining vertex data in the grid based on the coordinate information of the pixel points in the grid and the height values of the pixel points.
In one possible implementation, each pixel in the sampled image serves as a vertex in the terrain mesh area. The data of the vertex includes coordinate information and a height value, which is subsequently used for texture rendering of the vertex.
And C, acquiring all vertex data of the terrain grid area, and then performing texture rendering on all vertices of the terrain grid area according to texture feature configuration required to be rendered.
Optionally, the method shown in fig. 1 is performed by a computing device. Alternatively, the method shown in fig. 1 is performed cooperatively by a computing device cluster including a plurality of computing devices; for example, computing device A performs S110 in the method shown in fig. 1, and computing device B performs S120. The computing device is, for example, a terminal or a server. In one possible implementation, the method shown in fig. 1 is performed by a computing device by running an application program, for example browser software or client software; the execution subject of the method shown in fig. 1 is not limited in this embodiment.
Fig. 3 is a schematic structural diagram of a grassland rendering device according to an embodiment of the present application, and the device 200 shown in fig. 3 includes:
an obtaining module 210, configured to obtain rendering data in a unit area in a rendering area;
a determining module 220 for determining that the rendering data of the unit area satisfies the grassland texture feature;
and a rendering module 230 for inserting the grass blades into the unit area.
In one possible implementation, the rendering data includes color data of pixels in the unit area, and the determining module 220 is configured to determine, based on the color data of pixels in the unit area and a grassland color interval, a number of target pixels in the unit area, where the color data of the target pixels belongs to the grassland color interval; based on the number of target pixel points in the unit area, it is determined that the rendering data of the unit area satisfies the grassland texture feature.
In one possible implementation, the determining module 220 is configured to determine that the number of target pixel points in the unit area is greater than or equal to the number threshold; or determining that the ratio between the number of target pixel points in the unit area and the total number of pixel points in the unit area is greater than or equal to the first ratio threshold.
In one possible implementation, the rendering data includes color data of pixels in the unit area, and the determining module 220 is configured to obtain a statistical value of the color data of the pixels in the unit area; the ratio between the statistical value of the color data and the grass color threshold value is determined to be greater than or equal to a second ratio threshold value.
In one possible implementation, the obtaining module 210 is configured to perform at least one of:
acquiring an average value of color data of each pixel point in the unit area; or,
acquiring color data of a central pixel point in a unit area; or,
and obtaining the average value of the color data of the pixel points of each corner point in the unit area.
In one possible implementation, the determining module 220 is configured to determine the number of grass blades based on rendering data within the unit area; and a rendering module 230 for inserting a number of grass blades into the unit area.
In one possible implementation, the rendering data includes a duty cycle of the grass texture within the unit area, and the determining module 220 is configured to determine a number of grass blades based on the duty cycle of the grass texture within the unit area, the number of grass blades being positively correlated with the duty cycle of the grass texture.
In one possible implementation, the rendering data includes color data of pixels in a unit area, and the determining module 220 is configured to determine the number of grass blades based on a ratio between the color data of the pixels and a grass color threshold.
In one possible implementation, the determining module 220 is configured to determine the number of grass blades based on the ratio between the average value of the color data of each pixel point in the unit area and the grass color threshold; or based on the ratio between the color data of the central pixel point in the unit area and the grass color threshold; or based on the ratio between the average value of the color data of the corner pixel points in the unit area and the grass color threshold.
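The ratio-based variant can be sketched as follows: whichever statistic is chosen above is divided by the grass color threshold, and the resulting ratio scales the blade count. The threshold value, the cap, and the clamping are illustrative assumptions.

```python
# Sketch of the ratio-based blade count: the statistical value of the
# unit area's color data divided by a grass color threshold scales the
# number of blades. Threshold of 128 and cap of 32 are assumptions.

def blade_count_from_color(statistic, grass_color_threshold=128, max_blades=32):
    ratio = statistic / grass_color_threshold
    return round(max_blades * min(ratio, 1.0))  # clamp so bright areas do not overshoot

print(blade_count_from_color(64))   # ratio 0.5 -> 16 blades
print(blade_count_from_color(200))  # ratio clamped to 1.0 -> 32 blades
```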
In one possible implementation, the determining module 220 is further configured to determine the rendering area based on the location of the virtual object and a preset radius, where the distance between the boundary of the rendering area and the location of the virtual object is the preset radius.
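One way to realize this is to enumerate the unit areas whose centers fall within the preset radius of the virtual object. The circular shape, the unit-area size, and the radius value below are illustrative assumptions.

```python
# Sketch of deriving the rendering area from the virtual object's
# position and a preset radius: collect every unit area whose center
# lies within the radius. Unit size and radius are assumptions.
import math

def unit_areas_in_rendering_area(obj_x, obj_y, radius=50.0, unit=10.0):
    """Return (column, row) indices of unit areas inside the circular rendering area."""
    cells = []
    n = int(radius // unit) + 1
    cx, cy = int(obj_x // unit), int(obj_y // unit)
    for i in range(cx - n, cx + n + 1):
        for j in range(cy - n, cy + n + 1):
            # center of unit area (i, j)
            x, y = (i + 0.5) * unit, (j + 0.5) * unit
            if math.hypot(x - obj_x, y - obj_y) <= radius:
                cells.append((i, j))
    return cells

cells = unit_areas_in_rendering_area(0.0, 0.0, radius=15.0, unit=10.0)
print(len(cells))  # 4 unit areas surround the origin at this radius
```

Each returned cell would then be fed through the grass-texture check above before any blades are inserted.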
The grassland rendering device provided in the above embodiment is described, when rendering grassland, only in terms of the division of the above functional modules; in practical applications, the above functions may be allocated to different functional modules as needed, i.e., the internal structure of the grassland rendering device may be divided into different functional modules to complete all or part of the functions described above. In addition, the grassland rendering device and the grassland rendering method provided in the foregoing embodiments belong to the same concept; the specific implementation process of the device is detailed in the method embodiments and is not described here again.
Fig. 4 is a schematic structural diagram of a server according to an embodiment of the present application. The server 300 includes a processor 301 coupled to a memory 302; the memory 302 stores at least one computer program instruction, which is loaded and executed by the processor 301 to cause the server 300 to implement the method provided by the embodiment of Fig. 1.
In this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments.
"A refers to B" means that A is the same as B, or that A is a simple variation of B.
The terms "first", "second", and the like in the description and claims of embodiments of the application are used to distinguish between different objects, not to describe a particular order of the objects, and should not be interpreted as indicating or implying relative importance. For example, the first ratio threshold and the second ratio threshold are used to distinguish between different ratio thresholds, not to describe a particular order of ratio thresholds; nor should the first ratio threshold be understood to be more important than the second.
The above-described embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another, for example, by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)), etc.
The above embodiments are intended only to illustrate the technical solution of the present application, not to limit it. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.

Claims (13)

1. A method of grass rendering, the method comprising:
acquiring rendering data in a unit area in a rendering area;
the grass blades are inserted into the unit area in response to determining that the rendering data of the unit area satisfies a grass texture feature.
2. The method of claim 1, wherein the rendering data includes color data for pixels within the unit area, and wherein the determining that the rendering data for the unit area satisfies a grass texture feature comprises:
determining the number of target pixel points in the unit area based on the color data of the pixel points in the unit area and the grassland color interval, wherein the color data of the target pixel points belong to the grassland color interval;
and determining that the rendering data of the unit area satisfies the grassland texture feature based on the number of the target pixel points in the unit area.
3. The method of claim 2, wherein the determining that the rendering data of the unit area satisfies the grassland texture feature based on the number of target pixel points within the unit area comprises:
determining that the number of target pixel points in the unit area is greater than or equal to a number threshold; or,
and determining that the ratio between the number of the target pixel points in the unit area and the total number of the pixel points in the unit area is greater than or equal to a first ratio threshold.
4. The method of claim 1, wherein the rendering data includes color data for pixels within the unit area, and wherein the determining that the rendering data for the unit area satisfies a grass texture feature comprises:
acquiring the statistical value of the color data of the pixel points in the unit area;
determining that a ratio between the statistical value of the color data and a grass color threshold is greater than or equal to a second ratio threshold.
5. The method of claim 4, wherein the obtaining the statistics of the color data of the pixels in the unit area includes at least one of:
acquiring an average value of the color data of each pixel point in the unit area; or,
acquiring the color data of a central pixel point in the unit area; or,
obtaining the average value of the color data of the corner pixel points in the unit area.
6. The method of claim 1, wherein said inserting the grass blades into the unit area comprises:
determining the number of grass blades based on the rendering data within the unit area;
inserting said number of said grass blades into said unit area.
7. The method of claim 6, wherein the rendering data includes a proportion of grass texture within the unit area, and wherein the determining the number of grass blades based on the rendering data of the unit area includes:
determining the number of the grass blades based on the proportion of grass texture in the unit area, wherein the number of the grass blades is positively correlated with the proportion of grass texture.
8. The method of claim 6, wherein the rendering data includes color data for pixels within the unit area, and wherein the determining the number of grass blades based on the rendering data within the unit area includes:
determining the number of the grass blades based on the ratio between the color data of the pixel points and the grass color threshold.
9. The method of claim 8, wherein the determining the number of grass blades based on a ratio between color data of the pixel points and a grass color threshold comprises:
determining the number of the grass blades based on the ratio between the average value of the color data of each pixel point in the unit area and the grass color threshold; or,
determining the number of the grass blades based on the ratio between the color data of the central pixel point in the unit area and the grass color threshold; or,
determining the number of the grass blades based on the ratio between the average value of the color data of the corner pixel points in the unit area and the grass color threshold.
10. The method of claim 1, wherein prior to the obtaining the rendering data within a unit area in the rendering area, the method further comprises:
determining the rendering area based on the location of the virtual object and a preset radius, wherein the distance between the boundary of the rendering area and the location of the virtual object is the preset radius.
11. A lawn rendering device, comprising:
the acquisition module is used for acquiring the rendering data in the unit area in the rendering area;
and an insertion module for inserting the grass blades into the unit area in response to determining that the rendering data of the unit area satisfies the grass texture characteristics.
12. A server, comprising: a processor coupled to a memory, the memory having stored therein at least one computer program instruction that is loaded and executed by the processor to cause the server to implement the method of any of claims 1-10.
13. A computer readable storage medium, characterized in that at least one instruction is stored in the storage medium, which instructions, when run on a computer, cause the computer to perform the method according to any of claims 1-10.
CN202311172716.5A 2023-09-12 2023-09-12 Grassland rendering method and device Pending CN117197276A (en)


Publications (1)

Publication Number Publication Date
CN117197276A true CN117197276A (en) 2023-12-08

Family

ID=88986542



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination