CN108389245B - Animation scene rendering method and device, electronic equipment and readable storage medium

Publication number: CN108389245B (application CN201810149087.7A)
Authority: CN (China)
Prior art keywords: character, area, preset, rendering, target
Legal status: Active (application granted)
Other versions: CN108389245A (Chinese, zh)
Inventors: 马仕员, 雷洪, 李星彤, 曾贤成, 孙剑雄, 方剑斌
Current and original assignee: Jingcai Online Technology Dalian Co Ltd
Application filed by Jingcai Online Technology Dalian Co Ltd
Priority application: CN201810149087.7A (published as CN108389245A, granted as CN108389245B)
Related application: CN202211412521.9A (published as CN116091658A)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 - Animation
    • G06T13/20 - 3D [Three Dimensional] animation

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the invention provide an animation scene rendering method and apparatus, an electronic device, and a readable storage medium. The method comprises: obtaining the current coordinates of a character in the animation scene; selecting the levels within a first preset area centered on the character's current coordinates as the target levels, where the first preset area is a preset character-visible-range area; and rendering the objects in each target level to obtain the rendered picture for the character. When the technical solution provided by the embodiments of the invention is applied to animation rendering, animation rendering efficiency is improved.

Description

Animation scene rendering method and device, electronic equipment and readable storage medium
Technical Field
The present invention relates to the field of information processing technologies, and in particular, to a method and an apparatus for rendering an animation scene, an electronic device, and a computer-readable storage medium.
Background
With the development of internet services, three-dimensional animation technology is maturing and is widely applied in games, film, television, and other fields, and animation rendering methods have become a research hotspot.
For example, an animation scene may be a scene in a game that contains many buildings, vegetation, and the like. Such a scene is divided into several levels; a level may be, for instance, a virtual building cluster area within the animation scene. In this way, a user can enter different animation scenes, such as snow scenes, desert scenes, or forest scenes, in the identity of a character.
At present, animation rendering methods face several technical bottlenecks. To achieve the rendering effect of the animation scene described above, existing methods generally render all the levels in the scene. Because of the performance limits of the hardware device running the scene, the maximum number of characters the scene can accommodate is also limited, generally below 60, and the scene scale is small. The animation rendering efficiency of existing methods is therefore low, and a rendering method that improves efficiency is needed.
Disclosure of Invention
An object of the embodiments of the present invention is to provide a method and an apparatus for rendering an animation scene, an electronic device, and a computer-readable storage medium, so as to improve the efficiency of rendering an animation.
In a first aspect, an embodiment of the present invention provides a method for rendering an animation scene, where the method includes:
obtaining the current coordinates of a character in the animation scene;
selecting the levels within a first preset area centered on the character's current coordinates as the target levels, where the first preset area is a preset character-visible-range area;
and rendering the objects in each target level to obtain the rendered picture of the character.
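As an illustration only, the three steps of the first aspect can be sketched in Python. All names here are hypothetical (the patent prescribes no implementation), each level is assumed to be a record with a center point and a list of objects, and the center-point variant of the selection rule described later is used:

```python
import math

def select_target_levels(levels, character_pos, visible_radius):
    """Step 2: keep the levels whose center lies inside the first preset
    (spherical) area centered on the character's current coordinates."""
    cx, cy, cz = character_pos
    targets = []
    for level in levels:
        lx, ly, lz = level["center"]
        dist = math.sqrt((lx - cx) ** 2 + (ly - cy) ** 2 + (lz - cz) ** 2)
        if dist <= visible_radius:
            targets.append(level)
    return targets

def render_frame(levels, character_pos, visible_radius):
    """Steps 1 and 3: take the character's current coordinates, select the
    target levels, and render only the objects inside them."""
    targets = select_target_levels(levels, character_pos, visible_radius)
    frame = []
    for level in targets:
        frame.extend(level["objects"])  # stands in for the actual draw calls
    return frame
```

Only the levels near the character enter the frame; distant levels are never touched, which is the efficiency gain the claims describe.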
Optionally, after the levels within the first preset area centered on the character's current coordinates are selected as the target levels, the method further includes:
removing objects that meet a preset removal rule from each target level;
and the rendering of the objects in each target level to obtain the rendered picture of the character comprises:
rendering the objects that were not removed in each target level to obtain the rendered picture of the character.
Optionally, after the rendered picture of the character is obtained, the method further includes:
detecting the coordinates of the character, and taking the detected coordinates as the target coordinates of the character;
calculating the distance between the target coordinates and the current coordinates of the character as the target distance of the character;
judging whether the target distance of the character is smaller than a preset re-rendering distance;
and if it is smaller, displaying the rendered picture of the character.
Optionally, if the target distance of the character is not smaller than the preset re-rendering distance, the method further includes:
assigning the target coordinates of the character to the current coordinates of the character, and returning to the step of selecting the levels within the first preset area centered on the character's current coordinates as the target levels.
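The optional re-rendering check above can be sketched as follows (hypothetical names; the control flow in a real implementation may differ):

```python
import math

def needs_rerender(current_coords, target_coords, rerender_distance):
    """True when the character has moved at least the preset re-rendering
    distance away from the coordinates used for the last rendered frame."""
    return math.dist(current_coords, target_coords) >= rerender_distance

def update(current_coords, target_coords, rerender_distance):
    """Control flow described above: below the threshold, keep showing the
    cached frame; otherwise adopt the target coordinates as the new current
    coordinates and reselect the target levels."""
    if not needs_rerender(current_coords, target_coords, rerender_distance):
        return current_coords, "display cached frame"
    return target_coords, "reselect target levels and re-render"
```

Small movements thus reuse the already-rendered picture, and level selection is repeated only after the character has moved far enough.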
Optionally, after the levels within the first preset area centered on the character's current coordinates are selected as the target levels, the method further includes:
loading each target level, and unloading the levels outside a second preset area centered on the character's current coordinates, where the area outside the second preset area is a preset character-invisible-range area;
and the rendering of the objects in each target level comprises:
rendering each loaded target level.
Optionally, the first preset area is a spherical area whose radius is a first preset value, and the second preset area is a spherical area whose radius is a second preset value, where the difference between the second preset value and the first preset value equals the preset re-rendering distance.
Optionally, removing the objects that meet the preset removal rule from each target level comprises:
calculating the screen area ratio of each object in each target level, where the screen area ratio of an object represents the ratio of the object's area in the screen coordinate system to the area of the display screen, and the display screen is used to display the rendered picture of the character;
and removing from each target level the objects whose screen area ratio is smaller than a preset screen area ratio.
Optionally, the screen area ratio of each object in each target level is calculated as follows:
calculating the distance from the object to a preset part of the character, and the scaling coefficient at the character's current view angle;
calculating the area of the object's bounding sphere in the screen coordinate system from the distance, the scaling coefficient, and the object's preset bounding-sphere radius;
and obtaining the area of the display screen, calculating the ratio of the bounding-sphere area to the display-screen area, and taking the calculated ratio as the object's screen area ratio.
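The patent does not give the exact projection formula, so the following is one plausible reading (all names hypothetical): the bounding sphere's projected radius shrinks with distance and grows with the view's scaling coefficient, and the projected disc area is compared with the display area:

```python
import math

def screen_area_ratio(distance, scale, sphere_radius, screen_w, screen_h):
    """One plausible interpretation of the steps above: project the object's
    bounding sphere onto the screen and divide the projected disc area by
    the display-screen area."""
    projected_radius = sphere_radius * scale / distance
    sphere_screen_area = math.pi * projected_radius ** 2
    return sphere_screen_area / (screen_w * screen_h)

def cull(objects, threshold, screen_w, screen_h):
    """Keep only the objects whose screen area ratio meets the preset
    screen area ratio (the removal rule of the previous paragraph)."""
    return [o for o in objects
            if screen_area_ratio(o["distance"], o["scale"], o["radius"],
                                 screen_w, screen_h) >= threshold]
```

Objects that would occupy a negligible fraction of the screen are removed before rendering, which is the point of the removal rule.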
Optionally, rendering the objects in each target level comprises:
obtaining the coordinates of all same-type objects in all target levels, where same-type objects are objects belonging to the same preset object type;
and rendering the same-type objects at the coordinates of the same-type objects.
In a second aspect, an embodiment of the present invention provides an apparatus for rendering an animated scene, where the apparatus includes:
an obtaining module, configured to obtain the current coordinates of a character in the animation scene;
a selection module, configured to select the levels within a first preset area centered on the character's current coordinates as the target levels, where the first preset area is a preset character-visible-range area;
and a rendering module, configured to render the objects in each target level to obtain the rendered picture of the character.
Optionally, the apparatus further comprises:
a removal module, configured to remove objects that meet a preset removal rule from each target level after the levels within the first preset area centered on the character's current coordinates are selected as the target levels;
and the rendering module is specifically configured to:
render the objects that were not removed in each target level to obtain the rendered picture of the character.
Optionally, the apparatus further comprises:
a detection module, configured to detect the coordinates of the character after the rendered picture of the character is obtained, and take the detected coordinates as the target coordinates of the character;
a calculation module, configured to calculate the distance between the target coordinates and the current coordinates of the character as the target distance of the character;
a judging module, configured to judge whether the target distance of the character is smaller than a preset re-rendering distance;
and a display module, configured to display the rendered picture of the character when the judging module's result is yes.
Optionally, the apparatus further comprises:
a return module, configured to, when the judging module's result is no, assign the target coordinates of the character to the current coordinates of the character and return to selecting the levels within the first preset area centered on the character's current coordinates as the target levels.
Optionally, the apparatus further comprises:
a loading module, configured to, after the levels within the first preset area centered on the character's current coordinates are selected as the target levels, load each target level and unload the levels outside a second preset area centered on the character's current coordinates, where the area outside the second preset area is a preset character-invisible-range area;
and the rendering module is specifically configured to:
render each loaded target level.
Optionally, the first preset area is a spherical area whose radius is a first preset value, and the second preset area is a spherical area whose radius is a second preset value, where the difference between the second preset value and the first preset value equals the preset re-rendering distance.
Optionally, the removing module includes:
a calculation submodule, configured to calculate the screen area ratio of each object in each target level, where the screen area ratio of an object represents the ratio of the object's area in the screen coordinate system to the area of the display screen, and the display screen is used to display the rendered picture of the character;
and a removal submodule, configured to remove from each target level the objects whose screen area ratio is smaller than the preset screen area ratio.
Optionally, the calculation submodule calculates the screen area ratio of each object in each target level by:
calculating the distance from the object to a preset part of the character, and the scaling coefficient at the character's current view angle;
calculating the area of the object's bounding sphere in the screen coordinate system from the distance, the scaling coefficient, and the object's preset bounding-sphere radius;
and obtaining the area of the display screen, calculating the ratio of the bounding-sphere area to the display-screen area, and taking that ratio as the object's screen area ratio.
Optionally, the rendering module is specifically configured to:
obtain the coordinates of all same-type objects in all target levels, where same-type objects are objects belonging to the same preset object type;
and render the same-type objects at the coordinates of the same-type objects.
In a third aspect, an embodiment of the present invention provides an electronic device, where the electronic device includes: a processor and a memory, wherein,
the memory is used for storing a computer program;
the processor is configured to implement the steps of any one of the above animation scene rendering methods when executing the program stored in the memory.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program that, when executed by a processor, performs the steps of the animation scene rendering method described above.
In a fifth aspect, embodiments of the present invention provide a computer program product including instructions, which when run on a computer, cause the computer to perform any of the above methods for rendering an animated scene.
In a sixth aspect, an embodiment of the present invention provides a computer program, which when run on a computer, causes the computer to execute any one of the above-described methods for rendering an animated scene.
By applying the method provided by the embodiments of the present invention, the levels within a first preset area centered on the character's current coordinates are selected as the target levels, and the objects in each target level are rendered to obtain the rendered picture of the character.
Therefore, during animation rendering, only the objects in each target level are rendered rather than all levels, which improves animation rendering efficiency.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a rendering method for an animation scene according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an animation scene provided by an embodiment of the present invention;
fig. 3 is another schematic flowchart of a method for rendering an animation scene according to an embodiment of the present invention;
FIG. 4 is a schematic plan view of a sphere surrounding a house model according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an apparatus for rendering an animation scene according to an embodiment of the present invention;
fig. 6 is another schematic structural diagram of an apparatus for rendering an animation scene according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to improve the efficiency of animation rendering, embodiments of the present invention provide a rendering method and apparatus for an animation scene, an electronic device, and a computer-readable storage medium.
First, a method for rendering an animation scene according to an embodiment of the present invention is described below.
It should be noted that the animation scene rendering method provided in the embodiments of the present invention may be performed by an animation scene rendering apparatus; specifically, the apparatus may be located in an electronic device, and the electronic device may be a mobile terminal or a server.
The mobile terminal may be a terminal running the iOS operating system (iOS is a handheld-device operating system developed by Apple Inc.), the Android operating system (Android is a free and open-source operating system based on Linux), or the Windows Phone operating system (Windows Phone is a mobile operating system released by Microsoft); the server may be a server running a Linux operating system or a Windows operating system, which is not limited here.
Referring to fig. 1, a method for rendering an animation scene according to an embodiment of the present invention includes the following steps:
s101, obtaining the current coordinates of a role in an animation scene;
optionally, the step S101 may be to obtain or acquire current coordinates of the character in the animation scene.
One or more characters can be included in the animation scene, the character can be a cartoon image or a character image, for each character in the animation scene, the current coordinates of the character in the animation scene can be obtained, and for a certain character, the current coordinates of the character can be: the coordinates of the character in the animated scene are located prior to performing the step of selecting a level within the character's visibility range area. When selecting the level in the role visual range area, the current coordinate of the role needs to be called first, and the visual range area of the role is determined according to the current coordinate value.
In one implementation, a screen coordinate system may be established based on an animation scene, and thus, a current coordinate of a geometric center point of a character in the screen coordinate system may be obtained as the current coordinate of the character in the animation scene. In another mode, the current coordinates of a position point of the body of the character in the screen coordinate system may be used as the current coordinates of the character in the animation scene.
S102, selecting the levels within a first preset area centered on the character's current coordinates as the target levels.
The first preset area is a preset character-visible-range area.
In step S102, all the levels within the first preset area centered on the character's current coordinates may be selected as the target levels.
Objects of different types can be distributed across the areas of an animation scene; they can be divided into buildings, vegetation, stones, lawns, and the like, and buildings can include houses, enclosing walls, barriers, and so on. In general, objects are distributed irregularly: some areas may contain densely packed buildings in a simulated town layout, while others may contain only a large stretch of grass in a simulated uninhabited area. Loading, unloading, and rendering objects of different complexity or density consumes different amounts of physical memory and GPU (Graphics Processing Unit) resources. An area with concentrated buildings needs more physical memory and GPU resources and can therefore be defined as a level, while an area without buildings needs little and can be ignored and treated as non-level. The whole animation scene is thus divided into level areas and non-level areas, over which levels of different sizes, shapes, areas, and positions are distributed; levels may also cross and overlap one another.
It is understood that different threads, such as a loading thread, an unloading thread, and a rendering thread, may run for the GPU, and frequent loading, unloading, and rendering can make GPU performance consumption large. Dividing the animation scene into levels optimizes the loading step, but if the levels of the whole scene were loaded into the GPU together, the GPU would still be heavily consumed, even though rendering could then be performed at any time. The larger the animation scene, the more levels it contains; in particular, when a game has just started, loading every level of the whole scene would force the user to wait a long time for loading, and many clients would be lost during the wait. If the animation scene is a relatively large map, loading all the levels becomes impractical, so as a specific embodiment, only a subset of the levels is selected for loading.
Each level may be a rectangular area, a circular area around some center point, or an irregularly shaped area; different levels may differ in area and shape, and the regions covered by different levels may overlap. As a specific embodiment of the present invention, fig. 2 shows a map scene over which several levels are distributed; each level is rectangular, the levels' areas may be the same or different, and different levels may overlap.
The character-visible-range area is the region formed by the character's visible range, and the levels the character can see are the levels within that region. Since the character's visible-range area is predetermined and its size is fixed once set, the number of levels within the first preset area centered on the character's current coordinates is limited; that is, the number of target levels is limited, generally 3, 4, 5, 6, 7, and so on.
For the loading of levels, all the levels within the first preset area centered on the character's current coordinates may be loaded; alternatively, only the levels that pass a screening rule, that is, a subset of the levels within the first preset area, may be loaded.
For example, as a specific embodiment of the present invention, a level may be regarded as a rectangular area surrounding several buildings. In one implementation, when the rectangular area of a level falls completely within the first preset area centered on the character's current coordinates, the level may be selected as a target level.
Whether the rectangular area of a level falls within the first preset area can be judged from the overlap between the two regions. Let the smaller region be the smaller of the level's rectangular area and the first preset area centered on the character's current coordinates. When the overlap area equals the area of the smaller region, the level's rectangular area is considered to fall completely within the first preset area; when the overlap area is greater than zero but smaller than the area of the smaller region, the level's rectangular area is considered to fall partially within the first preset area.
For example, suppose the smaller region, the level's rectangular area, is 10 square centimeters. When the overlap between the level's rectangular area and the first preset area centered on the character's current coordinates is 10 square centimeters, the level's rectangular area is considered to fall completely within the first preset area; when the overlap is 5 square centimeters, it is considered to fall partially within it.
Alternatively, in another implementation, when the overlap area between the level's rectangular area and the first preset area centered on the character's current coordinates exceeds a preset multiple of the level's rectangular area, the level may be selected as a target level. The preset multiple can be set in advance according to user requirements and is not limited by the embodiments of the present invention; it may be, for example, 1/2, 3/4, or 4/5.
For example, if the preset multiple is 1/2, the overlap area between the level's rectangular area and the first preset area centered on the character's current coordinates is 10 square centimeters, and the level's rectangular area is 15 square centimeters, the level may be selected as a target level.
In yet another implementation, when the center point of the level's rectangular area falls within the first preset area centered on the character's current coordinates, the level may be selected as a target level.
In a special case, the character is located where several levels overlap; for example, the character's current coordinates fall within the areas of two or three levels. As a specific example, when part of a level's rectangular area falls within the first preset area, the level may be selected as a target level: for instance, when 1/3 of the level's rectangular area falls within the first preset area centered on the character's current coordinates, or when the coordinates of one of the rectangle's sides fall within that area. As another specific example, when the center point of the level's rectangular area falls within the first preset area, the level may be selected as a target level.
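Under the simplifying assumption of a 2-D rectangular approximation of the visible-range region (the patent's first preset area is spherical, and these function names are hypothetical), the overlap-based selection rule above can be sketched as:

```python
def overlap_area(rect_a, rect_b):
    """Axis-aligned overlap between two rectangles given as
    (x_min, y_min, x_max, y_max); zero when they do not intersect."""
    w = min(rect_a[2], rect_b[2]) - max(rect_a[0], rect_b[0])
    h = min(rect_a[3], rect_b[3]) - max(rect_a[1], rect_b[1])
    return max(w, 0) * max(h, 0)

def rect_area(rect):
    return (rect[2] - rect[0]) * (rect[3] - rect[1])

def is_target_level(level_rect, visible_rect, preset_multiple):
    """Select the level when its overlap with the visible-range region
    reaches the preset multiple of the level's own area (e.g. 1/2)."""
    return overlap_area(level_rect, visible_rect) >= preset_multiple * rect_area(level_rect)
```

With the patent's own numbers, a 15-square-centimeter level rectangle that overlaps the visible region by 10 square centimeters is selected when the preset multiple is 1/2, since 10 is at least half of 15.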
S103, rendering the objects in each target level to obtain the rendered picture of the character.
Objects within a level may include vegetation (such as trees, bushes, and grass), buildings, stones, lakes, and so on. Before rendering, each object in a level exists as a virtual model; after rendering, each object exists as a graphic, so a rendered picture composed of these graphics is obtained.
Specifically, one or more characters may exist in the animation scene, and for each character an LOD (Levels of Detail) rendering technique may be used to render the objects in each target level, yielding the rendered picture of that character.
After the levels within the first preset area centered on the character's current coordinates are selected, that is, after the target levels are selected, the overlapping regions between the target levels may be determined first and the objects in each overlapping region rendered only once, which avoids repeatedly rendering objects that lie in the overlap of different levels; the objects in the non-overlapping regions of each target level are then rendered.
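A minimal sketch of this render-once idea, assuming each target level is represented simply as a list of object identifiers (a hypothetical representation the patent does not specify):

```python
def render_once(target_levels):
    """Render each object a single time even when it sits in the overlap
    of several target levels, by tracking already-rendered object ids."""
    rendered = []
    seen = set()
    for level in target_levels:
        for obj_id in level:
            if obj_id not in seen:  # skip objects already drawn via another level
                seen.add(obj_id)
                rendered.append(obj_id)  # stands in for the actual draw call
    return rendered
```

An object shared by two overlapping levels is drawn only on its first appearance, so overlap between target levels costs nothing extra.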
In step S103, the objects in each target level are rendered, and the rendered picture of the character is obtained.
Therefore, by applying the technical solution provided by the embodiments of the present invention, some target levels are selected from the animation scene, and only these target levels are loaded and unloaded, instead of loading, unloading, and rendering all the levels in the scene, which reduces GPU, CPU, and memory consumption. When rendering the animation of the first preset area, not all levels need to be rendered, only the objects in each target level, so animation rendering efficiency is improved.
Specifically, to further improve rendering efficiency for the objects in the target levels, rendering the objects in each target level may include the following steps:
obtaining the coordinates of all same-type objects in all target levels;
and rendering the same-type objects at the coordinates of the same-type objects.
Here, same-type objects are objects belonging to the same preset object type.
For convenience of explanation, as a specific embodiment of the present invention, assume that the objects are vegetation and the object type is a vegetation type.
Vegetation types can be divided into coarse classes such as herbaceous, shrub, and arbor, or into finer classes such as osmanthus trees, apricot trees, cherry trees, and mimosa. Each vegetation type can be given a preset vegetation type identifier, so that whether plants belong to the same vegetation type can be determined by whether their identifiers are the same; alternatively, a vegetation characteristic value can be preset for each vegetation type, so that whether plants belong to the same type can be determined by whether their characteristic values are the same. The specific vegetation types, and the way of determining whether plants belong to the same type, can be designed in advance according to the designer's requirements and are not limited here.
In an embodiment, the coordinates and the vegetation type parameter of each plant may be stored in a database in advance, and the coordinates of all same-type vegetation in all target levels may then be obtained by reading the database. The vegetation type parameter may be any parameter that uniquely identifies a vegetation type, such as the vegetation type identifier or the vegetation characteristic value. The vegetation characteristic value can be a set of numerical parameters marked according to different attributes of the plant, for example arbor or shrub, temperate or tropical, yellow-leaved or green-leaved, deciduous or evergreen; the computer renders a specific plant shape according to the plant's characteristic values.
The coordinates of a plant can be the coordinates of its geometric center point in the screen coordinate system, or the coordinates of some other point on the plant in that coordinate system.
When the vegetation is rendered, after the coordinates of all same-type vegetation in all the target levels are obtained, the same-type vegetation can be rendered at those coordinates through only one rendering request, thereby reducing the number of rendering requests sent to the GPU.
For example, the vegetation types include mimosa and osmanthus fragrans; the coordinates of the geometric center points of the mimosa plants in the screen coordinate system are a1 and a2, and those of the osmanthus fragrans plants are b1 and b2; the mimosa plants are rendered synchronously at coordinates a1 and a2, and the osmanthus fragrans plants are rendered synchronously at coordinates b1 and b2. The specific value of a1 may be (21, 24, 33), meaning that the x-axis, y-axis and z-axis components of coordinate a1 in the screen coordinate system are 21, 24 and 33 unit values from the coordinate origin, respectively. The descriptions of a2, b1 and b2 are analogous to that of a1 and are not repeated here.
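As a minimal sketch of the batching described above (the `submit_draw_call` callback is a hypothetical stand-in for the actual GPU rendering request, and string coordinates merely label the example positions):

```python
from collections import defaultdict

def batch_render(objects, submit_draw_call):
    """Group objects by vegetation type and issue one rendering request
    per type, covering all coordinates of that type at once."""
    by_type = defaultdict(list)
    for type_id, coord in objects:
        by_type[type_id].append(coord)
    for type_id, coords in by_type.items():
        # A single request renders every instance of this type.
        submit_draw_call(type_id, coords)
    return len(by_type)  # number of rendering requests sent

calls = []
requests = batch_render(
    [("mimosa", "a1"), ("mimosa", "a2"),
     ("osmanthus", "b1"), ("osmanthus", "b2")],
    lambda t, cs: calls.append((t, cs)),
)
# Four plants, but only two rendering requests (one per vegetation type).
```

Grouping before submission is what saves the per-request overhead: the request count grows with the number of types, not the number of plants.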
In addition, in other embodiments, the vegetation may also be rendered in the following manner:
acquiring the total number of same-type plants in all the target levels and the coordinates of each such plant;
and rendering the same-type vegetation once for that total number of plants, placing one rendered instance at the coordinates of each same-type plant.
The total number of plants of each vegetation type, the coordinates of each plant and the vegetation type parameter can be stored correspondingly in a database in advance; the total number of same-type plants in all the target levels and the coordinates of each such plant can then be obtained by reading the database.
For example, the vegetation type is mimosa and the total number of plants is 2; the coordinates of the geometric center points of the two plants in the screen coordinate system are a1 and a2, and one rendered mimosa is placed at each of coordinates a1 and a2.
It should be noted that, in the embodiment of the present invention, vegetation is taken as an example for description only; the rendering manner of other objects (for example, buildings, stones, lakes and the like) may be the same as that of the plants and is not repeated here.
By sending a rendering request to a Graphics Processing Unit (GPU), object rendering can be achieved.
In order to further improve the rendering efficiency, referring to fig. 3, the embodiment shown in fig. 3 is based on the embodiment shown in fig. 1, and after S102, the method may further include:
S104, removing objects meeting preset culling rules from each target level;
As a specific embodiment of the present invention, when the areas where the target levels are located overlap, objects risk being counted repeatedly when all objects in the target levels are counted, so deleting repeated objects in advance saves time in the subsequent steps. Therefore, before objects meeting the preset culling rules are removed from each target level, the method may further comprise: counting the objects in each target level and deleting repeated objects. The step of removing objects meeting the preset culling rules from each target level then becomes: removing objects meeting the preset culling rules from the non-repetitive objects included in each target level.
In this case, S103 may specifically be:
S103A, rendering objects which are not removed in each target level to obtain a rendering picture of the role.
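The duplicate-deletion step mentioned above (counting objects across overlapping target levels and dropping repeats) can be sketched as follows, assuming each object carries a hashable identifier:

```python
def collect_unique_objects(target_levels):
    """Collect objects from all target levels, dropping the duplicates
    that arise when the areas of two target levels overlap."""
    seen = set()
    unique = []
    for level in target_levels:
        for obj in level:
            if obj not in seen:
                seen.add(obj)
                unique.append(obj)
    return unique

objs = collect_unique_objects([["wall1", "tree1"], ["tree1", "rock7"]])
# "tree1" lies in the overlap of the two levels but is counted only once.
```

The subsequent culling and rendering steps then operate on this deduplicated list instead of the raw per-level object lists.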
The preset culling rule can be preset according to the requirements of designers, and the embodiment of the present invention does not limit the specific rule; for example, the rule can be designed using one or a combination of view frustum culling, occlusion culling and distance culling.
Illustratively, when the culling rule is designed according to distance, removing objects meeting the preset culling rule from each target level may include the following steps:
acquiring current coordinates of a first class object and a second class object in a target level;
calculating the distance between the current coordinate of each first class object and the current coordinate of the role and the distance between the current coordinate of each second class object and the current coordinate of the role;
culling first-class objects whose first distance is within a first preset range and second-class objects whose second distance is within a second preset range;
Here, the first class of objects may be: all objects in the target levels that can occlude the character, such as enclosing walls, boulders, houses and the like. The coordinates of a first-class object may be the coordinates of its geometric center point in the screen coordinate system, or the coordinates of a certain position point of the object in the screen coordinate system.
The second class of objects may be: objects with decorative effects in the target levels, such as lamps, stickers, small ornaments and the like. The coordinates of a second-class object may be the coordinates of its geometric center point in the screen coordinate system, or the coordinates of a certain position point of the object in the screen coordinate system.
The method includes the steps of establishing a world coordinate system for an animation scene in advance, storing coordinates of objects (including a first class object and a second class object) in the animation scene in a database under the world coordinate system, further acquiring the coordinates of the objects in the world coordinate system by reading the database, converting the coordinates of the objects in the world coordinate system into the coordinates of the objects in a screen coordinate system, further acquiring the coordinates of the objects in the screen coordinate system, or directly storing the coordinates of the objects in the animation scene in the screen coordinate system in the database, further acquiring the coordinates of the objects in the screen coordinate system by reading the database.
It should be noted that, in the embodiment of the present invention, a dividing manner of the first class object and the second class object is not limited, that is, in different embodiments, specific objects included in the first class object and the second class object may be different from the embodiment of the present invention, for example, in other embodiments, the first class object, the second class object, and the third class object may also be divided according to a principle of dividing an area size of the object in a screen coordinate system.
Dividing the objects in the levels into the first class and the second class allows a plurality of similar objects to be culled at one time; compared with culling one object at a time, this improves the efficiency of object culling.
The first distance may be: the distance between the current coordinates of the first type of object and the current coordinates of the character, the second distance may be: the distance between the current coordinates of the second type object and the current coordinates of the character.
The first preset range and the second preset range can be preset according to the requirements of designers, and the starting point of the first preset range can be larger than or equal to the end point of the second preset range because the volume of the first object is usually larger than that of the second object.
For example, the first preset range may be [500, +∞), [600, +∞), [700, +∞) or the like, and the second preset range may be set analogously. The units may be "meters" in the world coordinate system; in this case, the first preset range and the second preset range may be converted from the world coordinate system to the screen coordinate system to obtain their values in the screen coordinate system, so as to determine whether the first distance is within the first preset range and whether the second distance is within the second preset range.
For a first type of object, when the first distance of the first type of object is within a first preset range, it can be considered that: the distance between the current coordinate of the first class of objects and the current coordinate of the role is long, the role cannot see the first class of objects clearly, and the first class of objects do not need to be rendered, so that the first class of objects can be removed; for a second type of object, when the second distance of the second type of object is within a second preset range, it can be considered that: the distance between the current coordinate of the second object and the current coordinate of the role is long, the role cannot see the second object clearly, the second object does not need to be rendered, and therefore the second object can be removed.
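The distance-based culling steps above can be sketched as follows; the range start values 500 and 300 are assumed examples, not values fixed by the patent:

```python
import math

def cull_by_distance(character, first_class, second_class,
                     first_start=500.0, second_start=300.0):
    """Keep only objects whose distance to the character is below the
    start of the corresponding preset culling range [start, +inf)."""
    keep_first = [o for o in first_class
                  if math.dist(o, character) < first_start]
    keep_second = [o for o in second_class
                   if math.dist(o, character) < second_start]
    return keep_first, keep_second

walls, lamps = cull_by_distance(
    (0.0, 0.0, 0.0),
    first_class=[(100.0, 0.0, 0.0), (800.0, 0.0, 0.0)],   # occluders
    second_class=[(100.0, 0.0, 0.0), (400.0, 0.0, 0.0)],  # decorations
)
# The occluder at distance 800 and the decoration at distance 400 are culled.
```

Because the decoration threshold starts lower than the occluder threshold, small decorative objects are dropped sooner than large occluding ones, matching the relation between the two preset ranges described above.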
By applying the embodiment of the invention, objects which are not removed in each target level are rendered, and compared with the way of rendering the objects in each target level, the number of the objects which need to be rendered can be reduced, so that the rendering efficiency is improved.
In general, when an object is far from the character's current coordinates, its area in the screen coordinate system is small. In some cases, however, an object far from the character's current coordinates still has a large area in the screen coordinate system; for example, a building whose area in the screen coordinate system is 100 square centimeters may still be visible to the character even if its distance from the character's current coordinates in the screen coordinate system is greater than 10 centimeters, and thus cannot be culled.
Conversely, an object close to the character's current coordinates may have a small area in the screen coordinate system; for example, some small parts (such as grass, a pendant on the character, a weapon pendant and the like) may be invisible to the character even if their distance from the current coordinates in the screen coordinate system is less than 5 centimeters, and thus may be culled.
In summary, if the objects are removed only according to the distance, an error may be caused, and in order to improve the accuracy of removing the objects, in one embodiment, the removing the objects meeting the preset removing rule from each target level may include the following steps:
Step A1: calculating the screen area proportion of each object within each target level, wherein the screen area proportion of an object represents the ratio of the object's area in the screen coordinate system to the area of the display screen, and the display screen is used for displaying the rendered picture of the character;
Step A2: culling, in each target level, the objects whose screen area proportion is smaller than the preset screen area proportion.
The preset screen area ratio can be preset according to the requirements of designers, and can be, for example: 2%, 3% or 4%, etc.
The area of the display screen may be: the area of the display screen under the screen coordinate system.
By applying the embodiment of the present invention, objects whose screen area proportion is smaller than the preset screen area proportion are culled in each target level; this avoids culling objects with a large area in the screen coordinate system while still culling those with a small area, fully considers how visible each object is to the character, and improves the accuracy of object culling.
in one implementation, the screen area ratio of each object within each target level may be calculated by:
First, calculating the distance from the object to a preset part of the character and the scaling coefficient of the character at the current view angle;
the preset part can be preset according to the requirement of a designer, and the embodiment of the invention is not limited to this, and for example, the preset part can be an eye part, or a part at a preset distance below the top of the head of the character, or a virtual camera is preset at a certain part of the body of the character, and in this case, the preset part can be a part where the virtual camera is located.
For example, the predetermined portion is an eye portion, and the distance from the object to the eye portion of the character may be: the distance from the geometric center point of the object to the binocular straight line under the screen coordinate system is as follows: a straight line connecting the center point of the left eye and the center point of the right eye; alternatively, the distance from the object to the eye part of the character may be: the distance from the geometric center point of the object to the midpoint of the binocular line segment under the screen coordinate system is as follows: the midpoint of a line segment connecting the center points of the left and right eyes.
In one implementation, calculating the scaling factor of the character at the current view angle may be: acquiring the number of pixels in the horizontal direction and the number of pixels in the vertical direction of the current view angle of the role in a screen coordinate system, and calculating a horizontal scaling coefficient and a vertical scaling coefficient by using the acquired number of pixels in the horizontal direction and the acquired number of pixels in the vertical direction; and taking the larger one of the horizontal scaling coefficient and the vertical scaling coefficient as the scaling coefficient of the character under the current view angle. Wherein, the current view of the role can be considered as: a view area formed by the current view range of the character.
The horizontal scaling factor may be: Width/2 × M[0][0]; the vertical scaling factor may be: Height/2 × M[1][1], where Width represents the number of pixels of the character's current view angle in the horizontal direction in the screen coordinate system, Height represents the number of pixels in the vertical direction, M[0][0] represents the element in row 0, column 0 of the projection matrix, and M[1][1] represents the element in row 1, column 1 of the projection matrix.
The projection matrix can be a matrix with 2 rows and 2 columns, the projection matrix can be used for converting each object in the animation scene into a two-dimensional plane from a three-dimensional space, the specific element size of the projection matrix can be preset according to the requirements of designers, and the size of each element is not limited in the embodiment of the invention.
Illustratively, Width = 10, M[0][0] = 1, Height = 10 and M[1][1] = 2; the horizontal scaling factor is Width/2.0f × M[0][0] = 10/2 × 1 = 5, and the vertical scaling factor is Height/2.0f × M[1][1] = 10/2 × 2 = 10, so the scaling coefficient of the character at the current view angle is 10. Here 2.0f denotes a value of 2.0 whose data type is single-precision floating point (float).
Secondly, calculating the bounding sphere area of the object in the screen coordinate system by using the distance, the scaling coefficient and the preset bounding sphere radius of the object;
It is understood that a model may be preset for each object in the animation scene, for example a house model, a wall model and the like, and the preset bounding sphere radius of an object may be: the radius of a sphere enclosing the preset model of the object in the screen coordinate system. The model of the object may be a three-dimensional model.
illustratively, the preset object model is a house model, as shown in fig. 4, and fig. 4 is a plan view of a sphere surrounding the house model in a screen coordinate system, the preset surrounding sphere of the object has a radius of 10 cm.
The radius of the sphere surrounding the preset object model in the world coordinate system can be stored in a database in advance, the radius of the sphere surrounding the preset object model in the world coordinate system can be obtained by reading the database, the read radius is converted into a screen coordinate system from the world coordinate system, and the radius of the preset surrounding sphere of the object can be obtained; alternatively, the radius of the sphere surrounding the preset object model in the screen coordinate system may be stored in the database in advance, and then, the radius of the sphere surrounding the preset object model in the screen coordinate system, that is, the preset surrounding sphere radius of the object, may be obtained by reading the database.
In one implementation, the bounding area of an object in the screen coordinate system can be calculated using the following expression:
S = π × R × R; R = k/d × r
where S is the bounding sphere area of the object in the screen coordinate system at the character's current view angle, R is the radius of the object's bounding sphere in the screen coordinate system, k is the scaling coefficient of the character at the current view angle, d is the distance from the object to the preset part of the character, r is the preset bounding sphere radius of the object, and π is the circumference ratio. It can be seen that the bounding area is the area of the circle corresponding to the sphere enclosing the graphic displayed by the object on the screen, not the surface area of the sphere.
For example, if the preset bounding sphere radius of the object is 2 centimeters, the distance from the object to the preset part of the character is 5 centimeters, and the scaling coefficient of the character at the current view angle is 10, then R = 10/5 × 2 = 4 centimeters and S = π × 4 × 4 = 16π square centimeters.
And thirdly, obtaining the area of the display screen, calculating the ratio of the area of the surrounding sphere to the area of the display screen, and taking the calculated ratio as the screen area ratio of the object.
For example, if the area of the display screen is 100 square centimeters and the area of the enclosing sphere is 40 square centimeters, the ratio of the screen area of the object is: the ratio of the enclosed spherical area to the display screen area is as follows: 40/100=0.4.
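Putting the three steps together, a minimal sketch of the screen-area-ratio culling test (the 3% threshold is an assumed example of the preset screen area proportion, not a value from the patent):

```python
import math

def scaling_coefficient(width, height, m00, m11):
    """Scaling coefficient at the current view angle: the larger of the
    horizontal factor Width/2 * M[0][0] and the vertical factor
    Height/2 * M[1][1]."""
    return max(width / 2.0 * m00, height / 2.0 * m11)

def screen_area_ratio(k, d, r, screen_area):
    """Screen area proportion of an object: the on-screen bounding
    circle area divided by the display screen area."""
    R = k / d * r           # R = k/d * r
    S = math.pi * R * R     # S = pi * R * R
    return S / screen_area

def should_cull(ratio, preset_ratio=0.03):
    # Cull when the proportion falls below the preset screen area proportion.
    return ratio < preset_ratio

k = scaling_coefficient(width=10, height=10, m00=1, m11=2)  # = 10
ratio = screen_area_ratio(k=k, d=5, r=2, screen_area=100)   # 16*pi/100
# The ratio is about 0.5, far above 3%, so this object is not culled.
```

The numbers reuse the worked examples above: the scaling coefficient evaluates to 10, and with d = 5 and r = 2 the on-screen radius is 4, giving an area of 16π.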
The coordinates of the character may change constantly while the character moves in the animated scene, for example by running, going up stairs or sneaking. When the character's coordinates change greatly, the character's visible range area can be considered to have changed significantly, so the rendering process can be executed again; when the character moves frequently but its coordinates change little, the character's visible range area can be considered substantially unchanged, and re-rendering need not be performed, which avoids frequently executing unnecessary rendering processes.
Further, after obtaining the rendered screen of the character, the method may further include the following steps, in order to enable the user to view the rendered screen:
b1, detecting the coordinates of the role, and taking the detected coordinates of the role as target coordinates of the role;
optionally, the coordinates of the character may be detected in real time or at a small time interval, and the target coordinate values of the character may be obtained. The target coordinate of the character can be a target coordinate of a geometric center point of the character in a screen coordinate system, and can also be a target coordinate of a certain position point of the character in the screen coordinate system.
B2, calculating the distance between the target coordinate of the role and the current coordinate of the role as the target distance of the role;
it will be appreciated that the target coordinates of the character may be: after a rendering picture of the role is obtained, coordinates of the role are detected; the current coordinates of the character may be: the obtained character coordinates before selecting the target level. Wherein, the detection mode can be as follows: the method comprises a real-time detection mode, or a mode of detecting once every fixed preset period, or a mode of detecting once every preset time point.
As a specific embodiment, in order to calculate the change situation of the character coordinate position, the current coordinate of the character and the target coordinate of the character are defined. The time of the current coordinate of the role is earlier than the target coordinate of the role, after the coordinate value of the role is detected, the coordinate value is recorded as the current coordinate of the role, the coordinate value of the role is continuously detected and then recorded as the target coordinate of the role, and then the target coordinate is compared with the current coordinate to calculate the target distance of the role.
B3, judging whether the target distance of the role is smaller than a preset re-rendering distance or not; if yes, executing step B4;
the preset re-rendering distance may be preset according to the requirement of a designer, and the specific numerical value of the preset re-rendering distance is not limited in the embodiment of the present invention, and may be, for example: 40. 50, 60, 70, etc., in units of "meters" in the world coordinate system, and may convert the preset re-rendering distance from the world coordinate system to the screen coordinate system, so as to determine whether the target distance of the character is less than the preset re-rendering distance.
And B4, displaying the rendering picture of the role.
It can be understood that, since the first preset area is a preset character visible range area, when the rendered screen of the character is displayed, the following may be performed: and displaying a rendering picture corresponding to an object in a first preset area with the target coordinate of the character as the center. Therefore, different contents in the rendered screen can be displayed at a time according to the different target coordinates.
It can be seen that the preset re-rendering distance plays a role of a buffer zone, and when the target distance of the character is smaller than the preset re-rendering distance, the target distance can be considered to be within the range of the buffer zone, and the movement range of the character is smaller, so that it can be considered that the object visible by the character is basically unchanged, the target level can not be re-determined, and the previously obtained rendering picture of the character is directly displayed, so that the user can view the rendering picture.
In one implementation, if the target distance of the character is not less than the preset re-rendering distance, the method may further include:
and step B5, assigning the target coordinates of the role to the current coordinates of the role, and returning to the step (S102) of executing and selecting the level in the first preset area with the current coordinates of the role as the center as each target level.
That is, if the target distance is not less than (greater than or equal to) the preset re-rendering distance, the moving range of the character may be considered to be large, and exceeds the range of the buffer zone, the target level may be re-determined, and the rendering picture of the character may be re-rendered.
Therefore, by applying the technical scheme provided by the embodiment of the invention, if the target distance is not less than the preset re-rendering distance, the target level is re-determined, and the rendering picture of the role is re-rendered; if the target distance is smaller than the preset re-rendering distance, displaying a rendering picture of the role; therefore, on the basis of ensuring the reliability of the rendered picture, unnecessary rendering times are avoided, and the rendering efficiency is further improved.
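Steps B1-B5 can be sketched as a small decision function; `re_render` and `display` are hypothetical callbacks standing in for re-running the target-level selection/rendering flow and for showing the cached picture, respectively:

```python
import math

def on_position_update(current, target, re_render_distance,
                       re_render, display):
    """If the character moved less than the preset re-rendering
    distance, redisplay the existing picture; otherwise re-render and
    let the target coordinate become the new current coordinate."""
    if math.dist(current, target) < re_render_distance:
        display()              # inside the buffer: reuse the picture
        return current
    re_render(target)          # outside the buffer: redo target levels
    return target              # assign target coordinate to current

events = []
cur = (0.0, 0.0, 0.0)
cur = on_position_update(cur, (10.0, 0.0, 0.0), 50.0,
                         lambda c: events.append("render"),
                         lambda: events.append("display"))
cur = on_position_update(cur, (80.0, 0.0, 0.0), 50.0,
                         lambda c: events.append("render"),
                         lambda: events.append("display"))
# A 10-unit move only redisplays; an 80-unit move triggers re-rendering.
```

Note that when the picture is merely redisplayed, the current coordinate is deliberately left unchanged, so small movements accumulate until they finally exceed the re-rendering distance.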
In one implementation, after selecting, as each target level, a level in a first preset area centered on a current coordinate of a character, the method may further include:
loading each target level and unloading the levels outside a second preset area centered on the current coordinates of the character, wherein the area outside the second preset area is a preset character-invisible range area;
As an embodiment of the present invention, the second preset area is larger than the first preset area, and the difference zone between the second preset area and the first preset area (the area outside the first preset area and inside the second preset area) serves as a buffer zone: levels inside the buffer zone are neither loaded nor unloaded, and only levels outside the second preset area are unloaded. The purpose is to handle the case where, over two successive moments, the character has not really moved away from a level: without the buffer zone, a level might be unloaded just after the character leaves it and then loaded again shortly afterwards when it becomes a target level once more, which is unnecessary; after the buffer zone is set, levels in the buffer zone are not unloaded.
In this case, rendering the objects in each target level may specifically be:
and rendering each loaded target level.
In one implementation, the steps of loading and unloading the target level may be performed by a loading thread, and then the step of rendering each loaded target level may be performed by a rendering thread.
Specifically, the first preset area may be: a spherical area with a first preset value as a radius; the second preset area is as follows: in the spherical area with the radius of the second preset value, the difference between the second preset value and the first preset value may be equal to the preset re-rendering distance.
The first preset value and the second preset value may be preset according to the requirements of designers; they may be, for example, 400 and 450, in units of meters in the world coordinate system, and the first preset value and the second preset value may be converted from the world coordinate system to the screen coordinate system when the first preset area and the second preset area are determined. The region between 400 and 450 is the region that functions as the buffer zone.
In other implementation manners, a difference between the second preset value and the first preset value may also not be equal to the preset re-rendering distance, for example, may be greater than or less than the preset re-rendering distance, and in addition, the first preset area and the second preset area may also be rectangular areas, elliptical areas, fan-shaped areas, irregular-shaped areas, and the like. And the rectangular area is more convenient to calculate.
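A minimal sketch of the load/keep/unload decision, using the example radii 400 and 450 from above and the simplifying assumption that each level is located by a single center point:

```python
import math

def classify_levels(character, level_centers, r1=400.0, r2=450.0):
    """Split levels into load / keep / unload sets; r1 and r2 are the
    radii of the first and second preset (spherical) areas."""
    load, keep, unload = [], [], []
    for name, center in sorted(level_centers.items()):
        d = math.dist(character, center)
        if d <= r1:
            load.append(name)     # target level: load it
        elif d <= r2:
            keep.append(name)     # buffer zone: neither load nor unload
        else:
            unload.append(name)   # outside the second preset area
    return load, keep, unload

load, keep, unload = classify_levels(
    (0.0, 0.0, 0.0),
    {"A": (100.0, 0.0, 0.0), "B": (420.0, 0.0, 0.0), "C": (600.0, 0.0, 0.0)},
)
# A is a target level, B sits in the buffer zone, C is unloaded.
```

In the described design the load/unload work would run on a loading thread while a rendering thread renders the loaded target levels, so a classification like this is what the loading thread would consume.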
Compared with the prior-art approach of loading all the levels in the animation scene to the GPU, the embodiment of the present invention selects only a part of the levels for processing, which can shorten the loading time of the target levels and improve loading efficiency.
Furthermore, the steps of loading and unloading the target level can be executed by using a loading thread, and then the step of rendering each loaded target level is executed by using a rendering thread, so that the burden of the rendering thread can be reduced, and the execution efficiency of the rendering thread is improved.
Corresponding to the embodiment of the rendering method of the animation scene, an embodiment of the present invention provides an apparatus for rendering an animation scene, as shown in fig. 5, corresponding to the flow shown in fig. 1, where the apparatus includes:
an obtaining module 301, configured to obtain current coordinates of a character in an animation scene;
a selecting module 302, configured to select a level in a first preset area with the current coordinate of the role as a center, as each target level; the first preset area is a preset role visible range area;
and a rendering module 303, configured to render the objects in the target level cards to obtain a rendered image of the role.
Therefore, by applying the technical scheme provided by the embodiment of the present invention, only the objects in each target level are rendered in the animation rendering process, without rendering all the levels, so that the animation rendering efficiency is improved.
Referring to fig. 6, fig. 6 is another schematic structural diagram of an animation scene rendering apparatus according to an embodiment of the present invention, which corresponds to the flow shown in fig. 3; in the embodiment shown in fig. 6, a culling module 304 is added on the basis of the embodiment shown in fig. 5.
a removing module 304, configured to remove, after the level in the first preset area with the current coordinate of the role as the center is selected as each target level, an object meeting a preset removing rule from each target level;
the rendering module 303 is specifically configured to:
rendering the objects which are not removed in each target level to obtain a rendering picture of the role.
By applying the embodiment of the invention, objects which are not removed in each target level are rendered, and compared with the way of rendering the objects in each target level, the number of the objects which need to be rendered can be reduced, so that the rendering efficiency is improved.
Optionally, the apparatus further comprises:
the detection module is used for detecting the coordinates of the role after the rendering picture of the role is obtained, and taking the detected coordinates of the role as the target coordinates of the role;
the calculation module is used for calculating the distance between the target coordinate of the role and the current coordinate of the role as the target distance of the role;
the judging module is used for judging whether the target distance of the role is smaller than a preset re-rendering distance or not;
and the display module is used for displaying the rendering picture of the role when the judgment result of the judgment module is yes.
Optionally, the apparatus further comprises:
and the return module is used for assigning the target coordinate of the role to the current coordinate of the role and returning to execute the selected level in a first preset area with the current coordinate of the role as the center to serve as each target level when the judgment result of the judgment module is negative.
Optionally, the apparatus further comprises:
a loading module, configured to load each target level after the levels in the first preset area centered on the character's current coordinates are selected as the target levels, and to unload the levels outside a second preset area centered on the character's current coordinates, where the area outside the second preset area is a preset invisible range of the character;
the rendering module 303 is specifically configured to:
render each loaded target level.
Optionally, the first preset area is a spherical area whose radius is a first preset value, and the second preset area is a spherical area whose radius is a second preset value, where the difference between the second preset value and the first preset value equals the preset re-rendering distance.
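The two concentric spheres above partition levels into three zones: rendered, loaded-but-not-rendered, and unloaded. A minimal sketch under the stated constraint (second radius = first radius + re-rendering distance); all names are illustrative.

```python
import math

def classify_level(level_center, char_coord, r_visible, rerender_distance):
    """Classify a level against the first and second preset areas.

    r_visible is the first preset value (visible-range radius); the second
    preset value is r_visible + rerender_distance, per the patent's stated
    constraint, which keeps recently visible levels loaded as a buffer.
    """
    r_outer = r_visible + rerender_distance  # second preset value
    d = math.dist(level_center, char_coord)
    if d <= r_visible:
        return "render"       # inside the first preset area: a target level
    if d <= r_outer:
        return "keep-loaded"  # between the spheres: loaded, not rendered
    return "unload"           # outside the second preset area: invisible range

print(classify_level((3, 0, 0), (0, 0, 0), r_visible=10, rerender_distance=5))   # render
print(classify_level((12, 0, 0), (0, 0, 0), r_visible=10, rerender_distance=5))  # keep-loaded
print(classify_level((20, 0, 0), (0, 0, 0), r_visible=10, rerender_distance=5))  # unload
```

Making the outer radius exceed the inner one by exactly the re-rendering distance guarantees a level cannot be unloaded and immediately needed again before the next re-render is triggered.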
Optionally, the removing module 304 includes:
a calculation submodule, configured to calculate the screen-area ratio of each object in each target level, where an object's screen-area ratio is the ratio of the object's area in the screen coordinate system to the area of the display screen, and the display screen displays the rendered picture of the character;
and a removing submodule, configured to remove, from each target level, the objects whose screen-area ratio is smaller than a preset screen-area ratio.
Optionally, the calculation submodule calculates each object's screen-area ratio in each target level by:
calculating the distance from the object to a preset part of the character, and the scaling coefficient under the character's current view angle;
calculating the area of the object's bounding sphere in the screen coordinate system using that distance, the scaling coefficient, and the object's preset bounding-sphere radius;
and obtaining the area of the display screen, calculating the ratio of the bounding-sphere area to the display-screen area, and taking that ratio as the object's screen-area ratio.
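The screen-area-ratio estimate above can be sketched as follows. This is a simplified model under explicit assumptions: the bounding (surrounding) sphere is taken to project to a disk whose radius shrinks linearly with distance and is scaled by the view's scaling coefficient; the patent's exact projection may differ, and all names are illustrative.

```python
import math

def screen_area_ratio(obj_pos, char_pos, bounding_radius, scale, screen_area):
    """Estimate the fraction of the screen an object's bounding sphere covers.

    Assumed model: projected radius = bounding_radius * scale / distance,
    so the projected bounding-sphere area is a disk in screen space.
    """
    distance = math.dist(obj_pos, char_pos)  # object-to-character distance
    projected_radius = bounding_radius * scale / max(distance, 1e-9)
    projected_area = math.pi * projected_radius ** 2
    return projected_area / screen_area

# An object whose ratio falls below the preset screen-area ratio is removed
# before rendering: a far-away object covers almost no pixels.
far_ratio = screen_area_ratio((0, 0, 50), (0, 0, 0), bounding_radius=1.0,
                              scale=800.0, screen_area=1920 * 1080)
print(far_ratio < 0.01)  # True: this object would be culled
```

The appeal of the bounding-sphere form is that it needs only one distance and one multiply per object, so the cull test is far cheaper than rendering the object it rejects.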
Optionally, the rendering module 303 is specifically configured to:
obtain the coordinates of all same-kind objects in the target levels, where same-kind objects are objects belonging to the same preset object type;
and render the same-kind objects at their respective coordinates.
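The same-kind rendering described above amounts to grouping objects by their preset type and issuing one draw per type, the idea behind instanced rendering. A minimal sketch with a hypothetical `draw_instanced` callback standing in for the actual GPU draw call.

```python
from collections import defaultdict

def render_by_type(objects, draw_instanced):
    """Group objects by preset object type and render each group together.

    objects is a sequence of (object_type, coordinate) pairs; all
    coordinates of one type (e.g. the same tree model) are collected,
    then the model is drawn once at every coordinate.
    """
    coords_by_type = defaultdict(list)
    for obj_type, coord in objects:
        coords_by_type[obj_type].append(coord)
    for obj_type, coords in coords_by_type.items():
        draw_instanced(obj_type, coords)  # one call renders every instance
    return coords_by_type

calls = []
groups = render_by_type(
    [("pine", (0, 0)), ("rock", (5, 1)), ("pine", (3, 2))],
    lambda t, cs: calls.append((t, len(cs))),
)
print(sorted(calls))  # [('pine', 2), ('rock', 1)]
```

Batching same-kind objects this way reduces per-object draw overhead, which is why the patent singles it out for things like vegetation that appear many times per scene.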
An embodiment of the present invention further provides an electronic device, as shown in fig. 7, the electronic device includes: a processor 501 and a memory 502, wherein,
a memory 502 for storing a computer program;
the processor 501 is configured to implement the animation scene rendering method of the embodiment of the present invention when executing the program stored in the memory 502.
The animation scene rendering method includes the following steps:
obtaining the current coordinates of the character in the animation scene;
selecting the levels in a first preset area centered on the character's current coordinates as the target levels, where the first preset area is a preset visible range of the character;
and rendering the objects in each target level to obtain a rendered picture of the character.
Thus, during rendering, only the objects in the target levels are rendered rather than all levels, which improves animation rendering efficiency.
It should be noted that other embodiments of the animation scene rendering method implemented when the processor 501 executes the program stored in the memory 502 are the same as the method embodiments described above and are not repeated here.
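The three steps above can be sketched end to end. This is an illustrative sketch, not the patent's implementation: the data layout (a sequence of `(center, objects)` pairs) and the `render_object` callback are assumptions.

```python
import math

def render_frame(char_coord, levels, r_visible, render_object):
    """Select target levels around the character and render their objects.

    levels is a sequence of (center_coordinate, objects) pairs; only levels
    whose center lies inside the first preset area (the character's visible
    range, a sphere of radius r_visible) become target levels.
    """
    target_levels = [
        objs for center, objs in levels
        if math.dist(center, char_coord) <= r_visible
    ]
    for objs in target_levels:
        for obj in objs:
            render_object(obj)  # only target-level objects are rendered
    return len(target_levels)

drawn = []
n = render_frame(
    (0, 0, 0),
    [((1, 0, 0), ["tree", "rock"]), ((99, 0, 0), ["castle"])],
    r_visible=10,
    render_object=drawn.append,
)
print(n, drawn)  # 1 ['tree', 'rock']
```

The distant level is never touched, which is the source of the claimed efficiency gain over rendering all levels every frame.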
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk storage. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the animation scene rendering method.
The animation scene rendering method includes the following steps:
obtaining the current coordinates of the character in the animation scene;
selecting the levels in a first preset area centered on the character's current coordinates as the target levels, where the first preset area is a preset visible range of the character;
and rendering the objects in each target level to obtain a rendered picture of the character.
Thus, during rendering, only the objects in the target levels are rendered rather than all levels, which improves animation rendering efficiency.
It should be noted that other embodiments of the animation scene rendering method implemented when the computer program is executed by the processor are the same as the method embodiments described above and are not repeated here.
An embodiment of the present invention further provides a computer program product containing instructions which, when run on a computer, cause the computer to execute the animation scene rendering method provided in the above embodiments.
An embodiment of the present invention further provides a computer program which, when run on a computer, causes the computer to execute the animation scene rendering method provided in the above embodiments.
It is noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus comprising a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises that element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus, the electronic device, the computer-readable storage medium, the computer program product containing the instructions, and the computer program embodiment, since they are substantially similar to the method embodiment, the description is relatively simple, and it is sufficient to refer to the partial description of the method embodiment for the relevant points.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (16)

1. A method of rendering an animated scene, the method comprising:
obtaining the current coordinates of a character in the animation scene;
selecting the levels in a first preset area centered on the character's current coordinates as target levels, the first preset area being a preset visible range of the character;
calculating the distance from an object to a preset part of the character, and the scaling coefficient under the character's current view angle;
calculating the area of the object's bounding sphere in a screen coordinate system using the distance, the scaling coefficient, and the object's preset bounding-sphere radius;
obtaining the area of the display screen, calculating the ratio of the bounding-sphere area to the display-screen area, and taking the calculated ratio as the object's screen-area ratio, the screen-area ratio representing the ratio of the object's area in the screen coordinate system to the area of the display screen, the display screen being used to display the rendered picture of the character;
removing, from each target level, the objects whose screen-area ratio is smaller than a preset screen-area ratio;
and rendering the objects in each target level to obtain a rendered picture of the character.
2. The method of claim 1, wherein rendering the objects in each target level to obtain a rendered picture of the character comprises:
rendering the objects that were not removed in each target level to obtain the rendered picture of the character.
3. The method of claim 1, wherein after the rendered picture of the character is obtained, the method further comprises:
detecting the character's coordinates and taking the detected coordinates as the character's target coordinates;
calculating the distance between the character's target coordinates and its current coordinates as the character's target distance;
judging whether the character's target distance is smaller than a preset re-rendering distance;
and if so, displaying the rendered picture of the character.
4. The method of claim 3, wherein if the character's target distance is not smaller than the preset re-rendering distance, the method further comprises:
assigning the character's target coordinates to its current coordinates, and returning to the step of selecting the levels in the first preset area centered on the character's current coordinates as target levels.
5. The method of claim 1, wherein after selecting, as target levels, the levels within the first preset area centered on the character's current coordinates, the method further comprises:
loading each target level and unloading the levels outside a second preset area centered on the character's current coordinates, the area outside the second preset area being a preset invisible range of the character;
and wherein rendering the objects in each target level comprises:
rendering each loaded target level.
6. The method of claim 5, wherein the first preset area is a spherical area whose radius is a first preset value, and the second preset area is a spherical area whose radius is a second preset value, the difference between the second preset value and the first preset value being equal to a preset re-rendering distance.
7. The method of claim 1, wherein rendering the objects in each target level comprises:
obtaining the coordinates of all same-kind objects in the target levels, the same-kind objects being objects belonging to the same preset object type;
and rendering the same-kind objects at their respective coordinates.
8. An apparatus for rendering an animated scene, the apparatus comprising:
an obtaining module, configured to obtain the current coordinates of a character in the animation scene;
a selection module, configured to select the levels in a first preset area centered on the character's current coordinates as target levels, the first preset area being a preset visible range of the character;
a first calculation module, configured to calculate the distance from an object to a preset part of the character, and the scaling coefficient under the character's current view angle;
a second calculation module, configured to calculate the area of the object's bounding sphere in a screen coordinate system using the distance, the scaling coefficient, and the object's preset bounding-sphere radius;
a third calculation module, configured to obtain the area of the display screen, calculate the ratio of the bounding-sphere area to the display-screen area, and take the calculated ratio as the object's screen-area ratio, the screen-area ratio representing the ratio of the object's area in the screen coordinate system to the area of the display screen, the display screen being used to display the rendered picture of the character;
a removing module, configured to remove, from each target level, the objects whose screen-area ratio is smaller than a preset screen-area ratio;
and a rendering module, configured to render the objects in each target level to obtain a rendered picture of the character.
9. The apparatus of claim 8, wherein the rendering module is specifically configured to:
render the objects that were not removed in each target level to obtain the rendered picture of the character.
10. The apparatus of claim 8, further comprising:
a detection module, configured to detect the character's coordinates after the rendered picture of the character is obtained, and take the detected coordinates as the character's target coordinates;
a calculation module, configured to calculate the distance between the character's target coordinates and its current coordinates as the character's target distance;
a judging module, configured to judge whether the character's target distance is smaller than a preset re-rendering distance;
and a display module, configured to display the rendered picture of the character when the judgment result of the judging module is yes.
11. The apparatus of claim 10, further comprising:
a return module, configured to, when the judgment result of the judging module is no, assign the character's target coordinates to its current coordinates and return to the step of selecting the levels in the first preset area centered on the character's current coordinates as target levels.
12. The apparatus of claim 8, further comprising:
a loading module, configured to load each target level after the levels in the first preset area centered on the character's current coordinates are selected as target levels, and to unload the levels outside a second preset area centered on the character's current coordinates, the area outside the second preset area being a preset invisible range of the character;
wherein the rendering module is specifically configured to:
render each loaded target level.
13. The apparatus of claim 12, wherein the first preset area is a spherical area whose radius is a first preset value, and the second preset area is a spherical area whose radius is a second preset value, the difference between the second preset value and the first preset value being equal to a preset re-rendering distance.
14. The apparatus of claim 8, wherein the rendering module is specifically configured to:
obtain the coordinates of all same-kind objects in the target levels, the same-kind objects being vegetation objects belonging to the same preset object type;
and render the same-kind objects at their respective coordinates.
15. An electronic device, comprising a processor and a memory, wherein:
the memory is configured to store a computer program;
and the processor is configured to implement the method steps of any one of claims 1-7 when executing the program stored in the memory.
16. A computer-readable storage medium having stored therein a computer program which, when executed by a processor, implements the method steps of any one of claims 1-7.
CN201810149087.7A 2018-02-13 2018-02-13 Animation scene rendering method and device, electronic equipment and readable storage medium Active CN108389245B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810149087.7A CN108389245B (en) 2018-02-13 2018-02-13 Animation scene rendering method and device, electronic equipment and readable storage medium
CN202211412521.9A CN116091658A (en) 2018-02-13 2018-02-13 Animation scene rendering method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810149087.7A CN108389245B (en) 2018-02-13 2018-02-13 Animation scene rendering method and device, electronic equipment and readable storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202211412521.9A Division CN116091658A (en) 2018-02-13 2018-02-13 Animation scene rendering method and device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN108389245A CN108389245A (en) 2018-08-10
CN108389245B true CN108389245B (en) 2022-11-04

Family

ID=63069598

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201810149087.7A Active CN108389245B (en) 2018-02-13 2018-02-13 Animation scene rendering method and device, electronic equipment and readable storage medium
CN202211412521.9A Pending CN116091658A (en) 2018-02-13 2018-02-13 Animation scene rendering method and device, electronic equipment and readable storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202211412521.9A Pending CN116091658A (en) 2018-02-13 2018-02-13 Animation scene rendering method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (2) CN108389245B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109448117A (en) * 2018-11-13 2019-03-08 北京旷视科技有限公司 Image rendering method, device and electronic equipment
CN112650896B (en) * 2019-10-12 2024-07-19 阿里巴巴集团控股有限公司 Data processing method, device, equipment and storage medium
CN112686981B (en) 2019-10-17 2024-04-12 华为终端有限公司 Picture rendering method and device, electronic equipment and storage medium
CN110838162B (en) * 2019-11-26 2023-11-28 网易(杭州)网络有限公司 Vegetation rendering method and device, storage medium and electronic equipment
CN111359204A (en) * 2020-03-08 2020-07-03 北京智明星通科技股份有限公司 Rendering method and device of mobile phone game scene and mobile terminal
CN111701238B (en) * 2020-06-24 2022-04-26 腾讯科技(深圳)有限公司 Virtual picture volume display method, device, equipment and storage medium
CN112231020B (en) * 2020-12-16 2021-04-20 成都完美时空网络技术有限公司 Model switching method and device, electronic equipment and storage medium
CN112587921A (en) * 2020-12-16 2021-04-02 成都完美时空网络技术有限公司 Model processing method and device, electronic equipment and storage medium
CN113316020B (en) * 2021-05-28 2023-09-15 上海曼恒数字技术股份有限公司 Rendering method, device, medium and equipment
CN114581573A (en) * 2021-12-13 2022-06-03 北京市建筑设计研究院有限公司 Local rendering method and device of three-dimensional scene, electronic equipment and storage medium
CN116309974B (en) * 2022-12-21 2023-11-28 四川聚川诚名网络科技有限公司 Animation scene rendering method, system, electronic equipment and medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9183667B2 (en) * 2011-07-15 2015-11-10 Kirill Garanzha Out-of-core ray tracing with memory-efficient page generation
JP6181917B2 (en) * 2011-11-07 2017-08-16 株式会社スクウェア・エニックス・ホールディングス Drawing system, drawing server, control method thereof, program, and recording medium
CN102831631B (en) * 2012-08-23 2015-03-11 上海创图网络科技发展有限公司 Rendering method and rendering device for large-scale three-dimensional animations
CN104182999B (en) * 2013-05-21 2019-02-12 百度在线网络技术(北京)有限公司 Animation jump method and system in a kind of panorama
US9519986B1 (en) * 2013-06-20 2016-12-13 Pixar Using stand-in camera to determine grid for rendering an image from a virtual camera
CN104867174B (en) * 2015-05-08 2018-02-23 腾讯科技(深圳)有限公司 A kind of three-dimensional map rendering indication method and system
CN105844694B (en) * 2015-08-24 2019-04-26 鲸彩在线科技(大连)有限公司 A kind of game data generates, method for uploading and device
CN107481312B (en) * 2016-06-08 2020-02-14 腾讯科技(深圳)有限公司 Image rendering method and device based on volume rendering
US10825129B2 (en) * 2016-06-12 2020-11-03 Apple Inc. Eliminating off screen passes using memoryless render target
CN106296786B (en) * 2016-08-09 2019-02-15 网易(杭州)网络有限公司 The determination method and device of scene of game visibility region
CN106910236A (en) * 2017-01-22 2017-06-30 北京微视酷科技有限责任公司 Rendering indication method and device in a kind of three-dimensional virtual environment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Application of Maya scene modeling technology in the three-dimensional modeling design of clay-puppet animation; Li Yanni; Modern Electronics Technique; 2017-10-01 (No. 19); full text *

Also Published As

Publication number Publication date
CN108389245A (en) 2018-08-10
CN116091658A (en) 2023-05-09

Similar Documents

Publication Publication Date Title
CN108389245B (en) Animation scene rendering method and device, electronic equipment and readable storage medium
US20220148278A1 (en) Method and device for a placement of a virtual object of an augmented or mixed reality application in a real-world 3d environment
CN109523621B (en) Object loading method and device, storage medium and electronic device
CN111957040B (en) Detection method and device for shielding position, processor and electronic device
CN106780709B (en) A kind of method and device of determining global illumination information
CN110990516B (en) Map data processing method, device and server
CN112581629A (en) Augmented reality display method and device, electronic equipment and storage medium
CN110969592B (en) Image fusion method, automatic driving control method, device and equipment
CN109819226B (en) Method of projecting on a convex body, projection device and computer-readable storage medium
CN111340960B (en) Image modeling method and device, storage medium and electronic equipment
CN113077548A (en) Collision detection method, device, equipment and storage medium for object
CN110262763B (en) Augmented reality-based display method and apparatus, storage medium, and electronic device
CN115719436A (en) Model training method, target detection method, device, equipment and storage medium
CN110458954B (en) Contour line generation method, device and equipment
CN114092663B (en) Three-dimensional reconstruction method, device, equipment and medium for urban information model building
KR100975128B1 (en) Method, system and computer-readable recording medium for providing information of object using viewing frustum
CN114241105A (en) Interface rendering method, device, equipment and computer readable storage medium
CN114529647A (en) Object rendering method, device and apparatus, electronic device and storage medium
US20200242819A1 (en) Polyline drawing device
CN110019596B (en) Method and device for determining tiles to be displayed and terminal equipment
CN113190150A (en) Display method, device and storage medium of covering
CN112614221A (en) High-precision map rendering method and device, electronic equipment and automatic driving vehicle
CN113870365B (en) Camera calibration method, device, equipment and storage medium
CN116049505B (en) Screen space tag collision detection method and device, computer equipment and storage medium
CN114821365B (en) Unmanned aerial vehicle photogrammetry overlapping degree calculation method and system considering surface relief

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant