CN112717390A - Virtual scene display method, device, equipment and storage medium - Google Patents

Virtual scene display method, device, equipment and storage medium

Info

Publication number
CN112717390A
CN112717390A
Authority
CN
China
Prior art keywords
area
virtual scene
virtual
shelter
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110037843.9A
Other languages
Chinese (zh)
Inventor
刘峰 (Liu Feng)
张明甫 (Zhang Mingfu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202110037843.9A
Publication of CN112717390A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene


Abstract

The embodiments of the present application disclose a virtual scene display method, apparatus, device, and storage medium, applicable to fields such as computer technology and cloud computing. The method comprises the following steps: displaying a first non-occluded area in a virtual scene, the first non-occluded area including at least one virtual object; in response to a change in an object attribute of a first virtual object in the first non-occluded area, displaying an occluding object change picture in the virtual scene, so as to display a second non-occluded area in the virtual scene after the occluding object change picture stops being displayed; the first non-occluded area and the second non-occluded area are the non-occluded areas in the virtual scene before and after the object attribute of the first virtual object changes, respectively. By adopting the embodiments of the present application, the display content of the virtual scene can be enriched, the display effect of the virtual scene is improved, and the applicability is high.

Description

Virtual scene display method, device, equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for displaying a virtual scene.
Background
To meet the varied demands of players in current virtual games, scene occlusion methods implemented by computer technology need to be introduced to enhance the display effect of the game picture. For example, a fog-of-war display mechanism is often used in the virtual scene of a game: the part of the game scene outside the player's field of view is covered by fog, and the appearance or dissipation of the fog is controlled either by a real-time refresh mechanism or by the player's own operations.
However, the fog-of-war display mechanisms in the prior art generally offer only a single display mode and cannot adapt well to different game mechanics. For example, in turn-based games, the conventional fog-of-war display mechanism cannot follow the actions and states of the characters in the game, and therefore cannot enrich the game scene.
How to improve the display effect of the virtual scene, and thereby improve the user experience, has therefore become an urgent problem to be solved.
Disclosure of Invention
The embodiments of the present application provide a virtual scene display method, apparatus, device, and storage medium, which can improve the display effect of a virtual scene and the user experience, and have high applicability.
In one aspect, an embodiment of the present application provides a method for displaying a virtual scene, where the method includes:
displaying a first non-occluded area in a virtual scene, the first non-occluded area including at least one virtual object;
in response to a change in an object attribute of a first virtual object in the first non-occluded area, displaying an occluding object change picture in the virtual scene, so as to display a second non-occluded area in the virtual scene after the display of the occluding object change picture is stopped;
wherein the first non-occluded area and the second non-occluded area are the non-occluded areas in the virtual scene before and after the change in the object attribute of the first virtual object, respectively.
In another aspect, an embodiment of the present application provides a virtual scene display apparatus, including:
a virtual scene display module, configured to display a first non-occluded area in a virtual scene, where the first non-occluded area includes at least one virtual object;
an occluding object change picture display module, configured to display an occluding object change picture in the virtual scene in response to a change in an object attribute of a first virtual object in the first non-occluded area, so as to display a second non-occluded area in the virtual scene after the display of the occluding object change picture is stopped;
wherein the first non-occluded area and the second non-occluded area are the non-occluded areas in the virtual scene before and after the change in the object attribute of the first virtual object, respectively.
In another aspect, an embodiment of the present application provides an electronic device, including a processor and a memory, where the processor and the memory are connected to each other;
the memory is used for storing computer programs;
the processor is configured to execute the virtual scene display method provided by the embodiment of the application when the computer program is called.
In another aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored, and the computer program is executed by a processor to implement the virtual scene display method provided in the embodiment of the present application.
In another aspect, embodiments of the present application provide a computer program product or a computer program, which includes computer instructions stored in a computer-readable storage medium. The processor of the electronic device reads the computer instructions from the computer-readable storage medium and executes them, so that the electronic device executes the virtual scene display method provided by the embodiments of the present application.
In the embodiments of the present application, a change in an object attribute of a virtual object in the virtual scene triggers the display of an occluding object change picture in the virtual scene, which enriches the display content of the virtual scene. At the same time, after the occluding object change picture stops being displayed, the non-occluded area in the virtual scene is switched from the area before the object attribute change to the area after it, which improves the display effect of the virtual scene and the user experience, with high applicability.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the embodiments are briefly described below. The drawings in the following description are only some embodiments of the present application; other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a virtual scene display method according to an embodiment of the present application;
Fig. 2a is a schematic diagram of a virtual scene according to an embodiment of the present application;
Fig. 2b is another schematic diagram of a virtual scene according to an embodiment of the present application;
Fig. 2c is another schematic diagram of a virtual scene according to an embodiment of the present application;
Fig. 2d is another schematic diagram of a virtual scene according to an embodiment of the present application;
Fig. 3a is a schematic diagram of an occluding object change picture according to an embodiment of the present application;
Fig. 3b is another schematic diagram of an occluding object change picture according to an embodiment of the present application;
Fig. 3c is another schematic diagram of an occluding object change picture according to an embodiment of the present application;
Fig. 3d is another schematic diagram of an occluding object change picture according to an embodiment of the present application;
Fig. 3e is another schematic diagram of an occluding object change picture according to an embodiment of the present application;
Fig. 3f is another schematic diagram of an occluding object change picture according to an embodiment of the present application;
Fig. 4 is a schematic diagram of an occluding object dissipation picture according to an embodiment of the present application;
Fig. 5a is a schematic diagram of a non-occluded area according to an embodiment of the present application;
Fig. 5b is another schematic diagram of a non-occluded area according to an embodiment of the present application;
Fig. 5c is a schematic diagram of determining an occluding object change picture according to an embodiment of the present application;
Fig. 6 is another schematic flowchart of a virtual scene display method according to an embodiment of the present application;
Fig. 7 is a schematic diagram of displaying a silhouette picture according to an embodiment of the present application;
Fig. 8 is a schematic flowchart of a game picture display method according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a virtual scene display apparatus according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. The described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
The virtual scene display method provided by the embodiments of the present application is applicable to fields such as computer technology and cloud technology, and can be executed by a terminal device with a display function, including but not limited to a smartphone, a tablet computer, a notebook computer, a desktop computer, a smart watch, and the like, which is not limited herein. Optionally, the virtual scene display method provided by the embodiments of the present application may be performed through interaction between the terminal device and a server. For example, the server performs the corresponding instruction response, related calculation, and the like, and sends the response result or calculation result to the terminal device, so that the terminal device completes the corresponding virtual scene display process. The server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server or server cluster providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), big data, and artificial intelligence platforms, which is not limited herein.
The virtual scene display method provided by the embodiments of the present application may be implemented by the terminal device alone, or by the terminal device and the server interactively; the possible implementations are described below and are not repeated here.
Referring to fig. 1, fig. 1 is a schematic flow chart of a virtual scene display method provided in the embodiment of the present application. As shown in fig. 1, a virtual scene display method provided in an embodiment of the present application includes the following steps:
step S11, displaying a first non-occluded area in the virtual scene, the first non-occluded area including at least one virtual object.
In some possible implementations, the virtual scene in the embodiment of the present application is a virtual scene displayed (or provided) when the application program runs on the device. The virtual scene may be a simulation scene of a real world, a semi-simulation semi-fictional scene, or a pure fictional scene, and may be determined based on the requirements of an actual application scene, which is not limited herein. The virtual scene in the embodiment of the present application may be any one or a combination of two-dimensional virtual scene, 2.5-dimensional virtual scene, and three-dimensional virtual scene, and may also be determined based on the requirements of the actual application scene, which is not limited herein.
Optionally, the virtual scene in the embodiment of the present application is also used for object interaction, movement, and the like of a plurality of virtual objects in the virtual scene. For example, the virtual scene may be a level-based strategy scene in games such as turn-based games or Multiplayer Online Battle Arena (MOBA) games, and based on the virtual scene, movement, battles, and the like of the virtual objects can be carried out.
In some possible embodiments, the virtual object in the embodiments of the present application refers to a movable object in a virtual scene. The movable object may be a virtual character, a virtual animal, an animation character, or other movable objects, and may be determined based on the actual application scene requirements, which is not limited herein. For example, the virtual object in the embodiment of the present application may be an animated character in an activity (such as a game battle) in a virtual scene.
Optionally, the virtual object in the embodiment of the present application may also be a three-dimensional model created based on skeletal animation technology. Each virtual object has its own shape and volume in a three-dimensional virtual scene and occupies part of the space in the virtual scene. Alternatively, the virtual object in the embodiments of the present application may hold a weapon, or may release skills to take part in a game battle.
Optionally, the virtual object in the embodiment of the present application is a part of virtual scene elements in a virtual scene, and the virtual scene in the embodiment of the present application may further include one or a combination of multiple virtual scene elements of a terrain element, an environment element for beautifying the virtual scene, and an auxiliary element for highlighting a position of the virtual object in the virtual scene, which is not limited herein.
For example, the terrain elements may indicate that the virtual scene is a desert scene, a jungle scene, a town scene, or the like; the environment elements include, but are not limited to, stones, wooden boxes, grass, oil drums, street lamps, and buildings; and the auxiliary elements may be shaped elements, such as squares, used to highlight the position of a virtual object in the virtual scene. It should be particularly noted that the virtual scene elements other than the virtual objects listed here are only examples, and may be determined based on the requirements of the actual application scene, which is not limited herein.
In some possible embodiments, the first non-occlusion region in the embodiment of the present application is a region in the virtual scene that is not occluded by the occluding object, that is, the first non-occlusion region is a region in the virtual scene where each virtual scene element can be clearly displayed. Optionally, at least one virtual object, such as a plurality of combat characters manipulated by a player in a game scene, is included in the first non-occluded area.
The occluding object may be fog, a cloud layer, vegetation such as trees, a painted overlay, a color coating, or any other element that achieves an occlusion effect, directly or indirectly, in the game scene, and may be determined based on the requirements of the actual application scene, which is not limited herein.
In some possible embodiments, while displaying the first non-occluded area in the virtual scene, a first occluded area in the virtual scene corresponding to the first non-occluded area may be displayed simultaneously. The first non-occlusion area and the first occlusion area constitute the virtual scene, that is, the complete virtual scene is displayed by displaying the first non-occlusion area and the first occlusion area.
The first occluded area is the area that is covered by the occluding object and in which virtual objects in the virtual scene cannot be clearly displayed, such as an area covered by fog or cloud in a game scene. For example, if the virtual scene is a battle scene of a turn-based game, the first non-occluded area is the player's own field of view, in which all virtual scene elements, including the player's virtual objects and the terrain, can be clearly displayed, while the first occluded area is the fog-covered area outside the player's field of view, such as an area containing game monsters or opposing virtual objects.
Referring to Fig. 2a, Fig. 2a is a schematic diagram of a virtual scene provided in the embodiment of the present application. Fig. 2a shows a virtual scene with battle characters in a game in standby, in which the gray part is the first non-occluded area, where the virtual objects corresponding to the player (i.e., the battle characters in the game) and the terrain (grid terrain) are clearly displayed. The white part in Fig. 2a is the first occluded area, that is, the area of the virtual scene covered by the cloud fog; when the first non-occluded area is displayed, only the cloud fog is displayed in the first occluded area, and virtual objects such as game monsters or battle characters within the first occluded area are not displayed.
Optionally, to further enrich the display of the virtual scene, the other virtual scene elements in the first occluded area except the virtual objects may be displayed while the first occluded area is displayed. Again taking the battle scene of a turn-based game as an example, the first non-occluded area is the player's own field of view, and the first occluded area is the area outside that field of view containing game monsters or opposing battle characters. In this case, for the area outside the player's field of view, only virtual scene elements such as the game terrain and environment elements may be displayed, while game monsters or opposing virtual objects are not displayed. This enriches the expressiveness of the virtual scene while keeping monsters or opposing battle characters (virtual objects) outside the player's field of view (the first occluded area) invisible to the player, improving the interest and playability of the game.
It should be noted that the display of the first non-occluded area and the first occluded area in step S11 may be implemented by the terminal device. Optionally, when the first occluded area is displayed, whether to display the other virtual scene elements in the occluded area except the virtual objects may be decided by the terminal device based on the actual settings of the virtual scene, or the server may send instruction information to the terminal device to instruct the terminal device to display the first occluded area accordingly.
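As an illustration only, and not part of the patent disclosure, the following sketch shows one way a terminal device might represent and render a first non-occluded area and a first occluded area over a grid-based virtual scene, hiding virtual objects under the occluding object while optionally keeping terrain visible. All names (Cell, Scene, render) are hypothetical assumptions.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Cell:
        terrain: str = "plain"            # terrain element, e.g. "plain", "grass", "stone"
        occupant: Optional[str] = None    # virtual object standing on this cell, if any

    @dataclass
    class Scene:
        width: int
        height: int
        cells: list = field(default_factory=list)
        unoccluded: set = field(default_factory=set)  # cells of the first non-occluded area

        def __post_init__(self):
            self.cells = [[Cell() for _ in range(self.width)] for _ in range(self.height)]

        def render(self, show_terrain_under_fog: bool = True) -> str:
            """Render the scene: occluded cells show the occluding object ('~');
            virtual objects are drawn only inside the non-occluded area."""
            rows = []
            for y in range(self.height):
                row = []
                for x in range(self.width):
                    cell = self.cells[y][x]
                    if (x, y) in self.unoccluded:
                        row.append(cell.occupant[0] if cell.occupant else ".")
                    elif show_terrain_under_fog and cell.terrain != "plain":
                        row.append(cell.terrain[0])   # terrain may stay visible under the fog
                    else:
                        row.append("~")               # occluding object (fog / cloud)
                rows.append("".join(row))
            return "\n".join(rows)

    # Minimal usage: one battle character with a small field of view; the monster
    # outside the first non-occluded area is not displayed.
    scene = Scene(8, 4)
    scene.cells[1][2].occupant = "Knight"
    scene.cells[2][6].occupant = "Monster"
    scene.unoccluded = {(x, y) for x in range(5) for y in range(3)}
    print(scene.render())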
Step S12, in response to the object property of the first virtual object changing in the first non-occluded area, displaying an occluding object changing picture in the virtual scene to display a second non-occluded area in the virtual scene after the occluding object changing picture stops being displayed.
In some possible embodiments, after displaying the first non-occlusion region in the virtual scene, if the object attribute of the first virtual object in the first non-occlusion region changes, the occlusion change screen in the virtual scene is displayed, so as to display the second non-occlusion region in the virtual scene after the occlusion change screen stops displaying.
The first and second non-occluded regions are non-occluded regions before and after the object attribute of the first virtual object changes in the virtual scene. In other words, the first non-occluded area in the virtual scene is displayed before the object attribute of the first virtual object is changed, and after the object attribute of the first virtual object is changed, the non-occluded area in the virtual scene is changed from the first non-occluded area to the second non-occluded area by displaying the occluding object change screen and after the display of the occluding object change screen is stopped.
In some possible embodiments, while displaying the second unmasked area in the virtual scene, a second masked area in the virtual scene corresponding to the second unmasked area may be displayed simultaneously. The second non-occlusion region and the second occlusion region constitute a virtual scene in which the object attribute of the first virtual object in the first non-occlusion region is changed, that is, the virtual scene in which the object attribute of the first virtual object in the first non-occlusion region is changed is displayed by displaying the second non-occlusion region and the second occlusion region.
The second shielding area is composed of the shielding object and cannot clearly display an area of a virtual object in a virtual scene, such as an area shielded by fog or cloud in a game scene.
Optionally, to further enrich the display of the virtual scene, for the second occluded area obtained after the object attributes of the first virtual object in the first non-occluded area have changed, the other virtual scene elements in the second occluded area except the virtual objects may be displayed while the second occluded area is displayed. Again taking the battle scene of a turn-based game as an example, the second non-occluded area is the player's own field of view, and the second occluded area is the area outside that field of view containing game monsters or opposing battle characters. In this case, for the area outside the player's field of view, only virtual scene elements such as the game terrain and environment elements may be displayed, while game monsters or opposing virtual objects are not displayed. This enriches the expressiveness of the virtual scene while keeping monsters or opposing battle characters (virtual objects) outside the player's field of view (the second occluded area) invisible to the player, improving the interest and playability of the game.
In some possible embodiments, when the object attributes of the first virtual object in the first non-occluded area change, the occluding object change picture in the virtual scene may be displayed based on the object attributes of every virtual object in the first non-occluded area after the change.
The first virtual object in the first non-occluded area is any one or more of the virtual objects in the first non-occluded area. In other words, when the object attributes of any one or more virtual objects in the first non-occluded area change, an occluding object change picture in the virtual scene may be displayed.
In some possible embodiments, the object attributes of a virtual object in the embodiment of the present application include one or a combination of its position in the virtual scene, its movement capability, and its skill radiation range, which is not limited herein.
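As a hedged sketch of the object attributes just listed (position, movement capability, skill radiation range), the snippet below shows how a change in any of them could be detected and used to trigger an occluding object change picture. The class and function names are illustrative assumptions, not the patent's implementation.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ObjectAttributes:
        position: tuple            # (x, y) grid position in the virtual scene
        move_capability: int       # e.g. stamina-based movement range, in cells
        skill_radius: int          # skill radiation range, in cells

    def attributes_changed(before: ObjectAttributes, after: ObjectAttributes) -> bool:
        """True if any object attribute of the first virtual object changed."""
        return before != after

    def on_turn_end(before: ObjectAttributes, after: ObjectAttributes) -> None:
        if attributes_changed(before, after):
            # In the method described above, this is where the occluding object
            # change picture would be displayed, after which the second
            # non-occluded area replaces the first one.
            print("display occluding object change picture")

    on_turn_end(ObjectAttributes((2, 1), 3, 2), ObjectAttributes((4, 1), 2, 2))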
As an example, when the first virtual object in the first non-occluded area moves to any position within the first occluded area, an occluding object change picture in the virtual scene may be displayed. For instance, if the first virtual object moves toward a second virtual object in the first occluded area and attacks it, the position of the first virtual object in the virtual scene changes, so an occluding object change picture in the virtual scene can be displayed in order to display the second non-occluded area after the occluding object change picture stops being displayed.
The movement capability of a virtual object describes the range within which the virtual object can move in the virtual scene, and is affected by, but not limited to, attributes such as the virtual object's physical strength value and the props it uses. The smaller the physical strength value of the virtual object, the smaller its maximum movement range in the virtual scene; the non-occluded area corresponding to the virtual object in the virtual scene can therefore be determined based on its physical strength value.
As an example, the movement capability of the first virtual object in the first non-occluded area may be determined by its movement attributes (e.g., physical strength value, agility, the sum of movement-related effects, etc.). After the movement capability of the first virtual object changes because it triggers a mechanism of the virtual scene, attacks other virtual objects, consumes physical strength, uses a virtual prop, or the like, an occluding object change picture in the virtual scene may be displayed, so as to display the second non-occluded area after the occluding object change picture stops being displayed.
The skill radiation range of a virtual object describes the range that the virtual object's skills can reach in the virtual scene, including but not limited to the skill effect radiation range, magic value, skill attack range, and the like. For example, the non-occluded area corresponding to the virtual object in the virtual scene may be determined based on the maximum attack range among the virtual object's skills.
As an example, when the first virtual object in the first non-occluded area releases a skill, the area covered by the skill's effect radiation range in the virtual scene can be treated as an area that no longer needs to be covered by the occluding object; that is, an occluding object change picture in the virtual scene needs to be displayed, so as to display the second non-occluded area after the occluding object change picture stops being displayed. For example, if the first virtual object releases a lighting skill, the part of the first occluded area that falls within the effect radiation range of the lighting skill is changed into a non-occluded area by displaying the occluding object change picture in the virtual scene, and all the non-occluded areas in the virtual scene together are regarded as the second non-occluded area.
Optionally, the first non-occluded area and the second non-occluded area are non-occluded areas before and after the object attribute of the first virtual object changes in the virtual scene, respectively. In other words, the first non-occluded area in the virtual scene is displayed before the object attribute of the first virtual object is changed, and after the object attribute of the first virtual object is changed, the non-occluded area in the virtual scene is changed from the first non-occluded area to the second non-occluded area by displaying the occluding object change screen and after the display of the occluding object change screen is stopped.
Optionally, each first virtual object in the virtual scene may respectively correspond to a respective non-occlusion region, and for each first virtual object, the non-occlusion region corresponding to the first virtual object may be determined by an object attribute of the first virtual object. That is, for each first virtual object, the object attribute of the first virtual object may determine an unmasked region corresponding to the virtual object, and the unmasked regions corresponding to the first virtual objects may be independent of each other or may be overlapped with each other, and may be specifically determined based on the actual object attribute of each first virtual object, which is not limited herein.
For example, if the movement capability of the first virtual object is enhanced, the range of its corresponding non-occluded area after the enhancement is larger than before the enhancement. For another example, if the position of the first virtual object in the virtual scene changes, the position of its corresponding non-occluded area in the virtual scene also changes. For another example, if the skill radiation range of the first virtual object becomes smaller, its corresponding non-occluded area also becomes smaller.
That is to say, the degree of change of the object attributes of the first virtual object is positively correlated with the size of the range of its corresponding non-occluded area, and the specific relationship between the object attributes of the first virtual object and the range of its corresponding non-occluded area can be determined based on the requirements of the actual application scene, which is not limited herein.
The position of any virtual object in the virtual scene can determine the position, relative to the virtual scene, of the non-occluded area corresponding to that virtual object. Optionally, the position of the virtual object in the virtual scene may also be used to determine whether the virtual object has stopped moving in the virtual scene (e.g., is in a standby state), or whether the virtual object has disappeared or is about to disappear from the virtual scene (e.g., the virtual object has died), and so on, which may be determined based on the requirements of the actual application scene and is not limited herein.
For example, in a turn-based game, when any battle character within the player's field of view enters a standby state after acting, the non-occluded area corresponding to each battle character controlled by the player can be determined from that character's movement capability, skill radiation range, and position, and the non-occluded area corresponding to each battle character is then displayed in the virtual scene.
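The following illustrative sketch (an assumed grid model with a Chebyshev-distance reveal radius; all names are hypothetical) computes a non-occluded area for each battle character from its position, movement capability, and skill radiation range, then takes the union of the per-character areas as the scene's non-occluded area, consistent with the description above.

    from dataclasses import dataclass

    @dataclass
    class BattleCharacter:
        name: str
        position: tuple      # (x, y)
        move_capability: int # movement range in cells
        skill_radius: int    # skill radiation range in cells

    def character_unoccluded_area(c: BattleCharacter, width: int, height: int) -> set:
        """Cells revealed by one character: everything within its movement range
        plus its skill radiation range (Chebyshev distance used as a simple stand-in)."""
        reach = c.move_capability + c.skill_radius
        cx, cy = c.position
        return {
            (x, y)
            for x in range(max(0, cx - reach), min(width, cx + reach + 1))
            for y in range(max(0, cy - reach), min(height, cy + reach + 1))
        }

    def scene_unoccluded_area(chars: list, width: int, height: int) -> set:
        """Union of the per-character areas; per-character areas may overlap."""
        area = set()
        for c in chars:
            area |= character_unoccluded_area(c, width, height)
        return area

    party = [
        BattleCharacter("knight", (2, 2), move_capability=2, skill_radius=1),
        BattleCharacter("archer", (7, 3), move_capability=1, skill_radius=3),
    ]
    first_unoccluded = scene_unoccluded_area(party, width=12, height=8)
    print(len(first_unoccluded), "cells are currently not covered by the occluding object")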
Referring to Fig. 2b, Fig. 2b is another schematic diagram of a virtual scene provided in the embodiment of the present application. Fig. 2b shows a virtual scene with battle characters in a game in standby, in which the gray part is a second non-occluded area, where the virtual objects corresponding to the player (i.e., the battle characters in the game) and the terrain (grid terrain) are clearly displayed. The white part in Fig. 2b is a second occluded area, that is, the area of the virtual scene covered by the cloud fog, and virtual objects such as game monsters and battle characters in the second occluded area are not displayed. Assuming that a virtual object moves from its position in Fig. 2a to the position shown in Fig. 2b, the position of that virtual object in the virtual scene changes, so its object attributes change; an occluding object change picture in the virtual scene is therefore displayed, so that the first occluded area and first non-occluded area in Fig. 2a change into the second occluded area and second non-occluded area in Fig. 2b.
Referring to Fig. 2c, Fig. 2c is another schematic diagram of a virtual scene provided in the embodiment of the present application. Fig. 2c likewise shows a virtual scene with a battle character in standby, with the gray part being a second non-occluded area, in which the virtual object corresponding to the player and the terrain (grid terrain) are clearly displayed, and the white part being a second occluded area, in which game monsters, battle characters, and other virtual objects are not displayed. Assuming that one of the virtual objects in Fig. 2a dies during the game and leaves the virtual scene, so that only one virtual object (the one shown in Fig. 2c) remains, the object attributes of a virtual object in the first non-occluded area of Fig. 2a have changed, and an occluding object change picture in the virtual scene may be displayed so that the first occluded area and first non-occluded area in Fig. 2a change into the second occluded area and second non-occluded area in Fig. 2c.
Referring to Fig. 2d, Fig. 2d is another schematic diagram of a virtual scene provided in the embodiment of the present application. Fig. 2d likewise shows a virtual scene with battle characters in standby, with the gray part being a second non-occluded area, in which the virtual objects corresponding to the player and the terrain (grid terrain) are clearly displayed, and the white part being a second occluded area, in which game monsters, battle characters, and other virtual objects are not displayed. Assuming that the movement capability or skill radiation range of each virtual object in Fig. 2a becomes smaller during the game, that is, the object attributes of each virtual object in the first non-occluded area of Fig. 2a change, an occluding object change picture in the virtual scene may be displayed so that the first occluded area and first non-occluded area in Fig. 2a change into the second occluded area and second non-occluded area in Fig. 2d.
In some possible embodiments, at least one of displaying the first occluded area and displaying the second occluded area is achieved through the presentation of the occluding object. Optionally, displaying the occluding object change picture in the virtual scene may include displaying at least one of an occluding object addition picture, an occluding object dissipation picture, and an occluding object movement picture in the virtual scene. As an example, in the embodiment of the present application, both the display of the first occluded area and the display of the second occluded area are implemented through the display form of the occluding object.
Optionally, the occluding object addition picture is a picture in which the occluding object is gradually displayed from the edge of a first region toward the center of the first region, where the first region is a region in the virtual scene to which the occluding object is to be added. For example, because the object attributes of the first virtual object in the first non-occluded area have changed, part of the first non-occluded area needs to become an occluded area; when the occluding object change picture is displayed, an occluding object addition picture is displayed in that part, which is the first region.
For example, referring to Fig. 3a, Fig. 3a is a schematic diagram of an occluding object change picture according to an embodiment of the present application. The first region shown in Fig. 3a is part of the first non-occluded area in the virtual scene; after the object attributes of the first virtual object change, this region no longer belongs to the non-occluded area corresponding to any virtual object in the first non-occluded area. Therefore, when the occluding object change picture is displayed in the virtual scene, an occluding object (e.g., cloud) addition picture is displayed in the first region: the occluding object is gradually displayed from the edge of the first region toward its center. As shown in Fig. 3b, another schematic diagram of an occluding object change picture according to an embodiment of the present application, the occluding object in Fig. 3a is gradually displayed from the edge of the first region toward its center to reach the display state of the first region and the occluding object shown in Fig. 3b. The occluding object then continues to be displayed toward the center of the first region on the basis of Fig. 3b, until the first region is completely covered by the occluding object, at which point the occluding object addition picture stops being displayed and the first region has changed from a non-occluded area into an occluded area.
When the occluding object addition picture in the virtual scene is displayed, the size and range of the area newly covered by the occluding object toward the center of the first region at each step can be determined based on the actual application scene, which is not limited herein.
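A possible way, purely illustrative and under the assumption of a grid-based first region with a one-layer-per-frame step, to realize the edge-to-center occluding object addition picture described above is to cover the first region layer by layer, starting from cells nearest the region's boundary:

    def addition_steps(first_region: set) -> list:
        """Group the cells of the first region into layers by distance from the
        region's edge; displaying one layer per frame covers the region from its
        edge toward its center."""
        remaining = set(first_region)
        layers = []
        while remaining:
            # A cell is on the current edge if one of its 4 neighbours lies outside
            # the still-uncovered part of the region.
            edge = {
                (x, y) for (x, y) in remaining
                if any(n not in remaining for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)))
            }
            layers.append(edge)
            remaining -= edge
        return layers

    # Usage: a 5x5 first region is covered in 3 layers (edge ring, inner ring, center).
    region = {(x, y) for x in range(5) for y in range(5)}
    for frame, layer in enumerate(addition_steps(region)):
        print(f"frame {frame}: cover {len(layer)} cells with the occluding object")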
In some possible embodiments, during the display of the occluding object change picture, the virtual scene elements corresponding to the change from the first non-occluded area to the second non-occluded area in the virtual scene may be displayed.
Optionally, the virtual scene elements corresponding to the first non-occluded area and the second non-occluded area are displayed during the display of the occluding object change picture, and the scene elements corresponding to the second non-occluded area are displayed after the occluding object change picture stops being displayed.
Optionally, when a picture in which the occluding object is displayed from the edge of the first region toward its center (the occluding object addition picture) is displayed in the virtual scene, virtual scene elements such as virtual objects or environment elements in the first region may be gradually covered as the occluding object is displayed. That is, during the display of the occluding object addition picture, the virtual scene elements in the first region are continuously refreshed as the occluding object keeps spreading, so that the first region changes from a non-occluded area into an occluded area through the occluding object addition picture. It should be particularly noted that, during the display of the occluding object addition picture, the virtual objects in the first region end up completely covered by the occluding object as it keeps growing, whereas whether the other virtual scene elements in the first region also need to be covered may be determined based on the requirements of the actual application scene, which is not limited herein.
Referring to Fig. 3c, Fig. 3c is another schematic diagram of an occluding object change picture according to an embodiment of the present application. The virtual scene elements in the first region shown in Fig. 3c are grass and stones. While the occluding object is gradually displayed from the edge of the first region toward its center, the grass and stones in the first region are progressively covered as the occluding object grows, so that after the occluding object addition picture stops being displayed, the first region is completely covered by the occluding object and no virtual scene element is displayed in it.
Optionally, the occluding object dissipation picture is a picture in which the occluding object is gradually removed from the center of a second region toward the edge of the second region, where the second region is a region in the virtual scene from which the occluding object is to be removed. For example, because the object attributes of the first virtual object in the first non-occluded area have changed, part of the first occluded area needs to become a non-occluded area; when the occluding object change picture is displayed, an occluding object dissipation picture is displayed in that part, which is the second region.
For example, referring to Fig. 3d, Fig. 3d is another schematic diagram of an occluding object change picture provided by an embodiment of the present application. The second region shown in Fig. 3d is part of the first occluded area in the virtual scene; after the object attributes of the first virtual object in the first non-occluded area change, this region belongs to the non-occluded area corresponding to a virtual object in the first non-occluded area. Therefore, when the occluding object change picture is displayed in the virtual scene, an occluding object (e.g., cloud) dissipation picture is displayed in the second region: the occluding object is gradually removed from the center of the second region toward its edge. As shown in Fig. 3e, another schematic diagram of an occluding object change picture provided in the embodiment of the present application, the occluding object in Fig. 3d is gradually removed from the center of the second region toward its edge to reach the display state of the second region and the occluding object shown in Fig. 3e. The occluding object then continues to be removed toward the edge of the second region on the basis of Fig. 3e, until no occluding object remains in the second region, at which point the occluding object dissipation picture stops being displayed and the second region has changed from an occluded area into a non-occluded area.
When the occluding object dissipation picture in the virtual scene is displayed, the size and range of the area from which the occluding object is removed toward the edge of the second region at each step can be determined based on the actual application scene, which is not limited herein.
For example, as an alternative, referring to Fig. 4, Fig. 4 is a schematic diagram of an occluding object dissipation picture provided by an embodiment of the present application. Assume that Fig. 4 corresponds to the second region described above, and that the area marked 0 in Fig. 4 is the center of the second region. When the occluding object dissipation picture is displayed in the virtual scene, the dissipation proceeds as follows: the occluding object over the area marked 0 is removed first, then the occluding object over the area marked 1, and so on; the occluding objects over the areas marked 2, 3, and 4 are removed in turn, spreading gradually toward the edge of the second region, so that the second region changes from an occluded area into a non-occluded area.
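Correspondingly, the center-to-edge dissipation illustrated by Fig. 4 (areas marked 0, 1, 2, ... removed in turn) can be sketched as a breadth-first expansion from the region's center. This is an assumption about one possible implementation, with hypothetical names, not the patent's own code; it assumes the second region is a connected set of grid cells.

    from collections import deque

    def dissipation_steps(second_region: set, center: tuple) -> list:
        """Label each cell of the second region with its ring index (0 at the
        center, growing outward) and return the rings in removal order, matching
        the 0, 1, 2, 3, 4 marking of Fig. 4. Assumes a connected region."""
        ring = {center: 0}
        queue = deque([center])
        while queue:
            x, y = queue.popleft()
            for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if n in second_region and n not in ring:
                    ring[n] = ring[(x, y)] + 1
                    queue.append(n)
        max_ring = max(ring.values())
        return [{cell for cell, r in ring.items() if r == i} for i in range(max_ring + 1)]

    # Usage: remove the occluding object from a 5x5 second region, ring by ring.
    region = {(x, y) for x in range(5) for y in range(5)}
    for frame, cells in enumerate(dissipation_steps(region, center=(2, 2))):
        print(f"frame {frame}: remove the occluding object from {len(cells)} cells")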
Optionally, the occluding object movement picture is a picture formed by the occluding object moving within the virtual scene; by displaying the occluding object movement picture, the first non-occluded area and the first occluded area in the virtual scene can be changed into the second non-occluded area and the second occluded area, respectively. For example, when part of the first non-occluded area needs to become an occluded area because the object attributes of the first virtual object have changed, an occluding object movement picture in the virtual scene is displayed so that this part changes from a non-occluded area into an occluded area. For another example, when part of the first occluded area needs to become a non-occluded area because the object attributes of the first virtual object have changed, an occluding object movement picture in the virtual scene is displayed so that this part changes from an occluded area into a non-occluded area.
Optionally, when a picture in which the occluding object is removed from the center of the second region toward its edge (the occluding object dissipation picture) is displayed in the virtual scene, virtual scene elements such as virtual objects or environment elements in the second region may be gradually displayed as the occluding object is removed. That is, during the display of the occluding object dissipation picture, the virtual scene elements in the second region are continuously refreshed as the occluding object keeps dissipating, so that the second region changes from an occluded area into a non-occluded area through the occluding object dissipation picture. It should be noted that, during the display of the occluding object dissipation picture, the virtual objects in the second region become fully displayed as the occluding object keeps dissipating.
Referring to Fig. 3f, Fig. 3f is another schematic diagram of an occluding object change picture according to an embodiment of the present application. The virtual scene elements in the second region shown in Fig. 3f are grass and stones. While the occluding object is gradually removed from the center of the second region toward its edge, the grass and stones in the second region are progressively revealed as the occluding object is removed, so that after the occluding object dissipation picture stops being displayed, the occluding object in the second region has been completely removed and all virtual scene elements are fully displayed.
Optionally, when the occluding object change picture in the virtual scene is displayed, either the occluding object addition picture or the occluding object movement picture may be displayed for each first region in the virtual scene, and either the occluding object dissipation picture or the occluding object movement picture may be displayed for each second region in the virtual scene. In other words, when the occluding object change picture in the virtual scene is displayed, one or a combination of the above kinds of occluding object change pictures may be displayed in the virtual scene.
It should be particularly noted that, for the occluded areas in the virtual scene other than the first regions and the second regions, the occluding object picture displayed during the occluding object change picture may be a static occluding object picture, or a dynamic occluding object picture such as rolling clouds. Alternatively, these areas may display occluding object movement pictures along with the pictures corresponding to the first regions and the second regions, which may be determined based on the requirements of the actual application scene and is not limited herein.
In some possible embodiments, the first regions to which the occluding object is to be added and the second regions from which the occluding object is to be removed may be determined based on the first non-occluded area and the second non-occluded area. That is, after the object attributes of the first virtual object in the first non-occluded area change, the second non-occluded area that needs to be displayed in the virtual scene after the change can be determined from the object attributes of all virtual objects in the first non-occluded area after the change; the first regions and the second regions can then be determined from the first non-occluded area and the second non-occluded area, so that different display forms of the occluding object are used for different regions when the occluding object change picture is displayed.
Specifically, at least one non-coincident region between the first non-occluded area and the second non-occluded area may be determined. For each non-coincident region, if the region is part of the first non-occluded area, the occluding object needs to be added to that region to change it from a non-occluded area into an occluded area, so that the non-occluded area in the virtual scene after the occluding object change picture stops being displayed is the same as the second non-occluded area determined from the object attributes of all virtual objects in the first non-occluded area after the object attributes of the first virtual object have changed.
For example, referring to Fig. 5a, Fig. 5a is a schematic diagram of a non-occluded area provided in the embodiment of the present application. In Fig. 5a, the first non-occluded area is the non-occluded area in the virtual scene before the object attributes of the first virtual object change, and the second non-occluded area is the non-occluded area determined from the object attributes of all virtual objects in the first non-occluded area after the change, that is, the non-occluded area that the virtual scene should have after the object attributes of the first virtual object change. Comparing the two, the second non-occluded area is smaller than the first non-occluded area; the non-coincident region between them is therefore a part of the first non-occluded area to which the occluding object needs to be added so that it changes from a non-occluded area into an occluded area, and this non-coincident region can be determined as a first region. On this basis, after the occluding object change picture in the virtual scene stops being displayed, the second non-occluded area in Fig. 5a is displayed in place of the first non-occluded area.
If the non-coincident region is part of the second non-occluded area, the occluding object needs to be removed from that region to change it from an occluded area into a non-occluded area, so that the non-occluded area in the virtual scene after the occluding object change picture stops being displayed is the same as the second non-occluded area determined from the object attributes of all virtual objects in the first non-occluded area after the object attributes of the first virtual object have changed.
For example, referring to Fig. 5b, Fig. 5b is another schematic diagram of a non-occluded area provided in the embodiment of the present application. In Fig. 5b, the first non-occluded area is the non-occluded area in the virtual scene before the object attributes of the first virtual object change, and the second non-occluded area is the non-occluded area determined from the object attributes of all virtual objects in the first non-occluded area after the change. Comparing the two, the second non-occluded area is larger than the first non-occluded area; the non-coincident region between them is therefore a part of the first occluded area (corresponding to the first non-occluded area) from which the occluding object needs to be removed so that it changes from an occluded area into a non-occluded area, and this non-coincident region can be determined as a second region. On this basis, after the occluding object change picture in the virtual scene stops being displayed, the second non-occluded area in Fig. 5b is displayed in place of the corresponding part of the first occluded area.
Optionally, because the first occluded area corresponding to the first non-occluded area is displayed along with the first non-occluded area, once the second non-occluded area determined from the object attributes of all virtual objects in the first non-occluded area after the change has been obtained, the second occluded area that needs to be displayed in the virtual scene after the change can also be determined; the display form of the occluding object during the occluding object change picture is then determined from the second occluded area and the first occluded area.
For another example, referring to Fig. 5c, Fig. 5c is a schematic diagram of determining an occluding object change picture according to an embodiment of the present application. Fig. 5c shows the first non-occluded area in the virtual scene and the second non-occluded area that needs to be displayed. For intuitive understanding, the first and second non-occluded areas are each represented as an area composed of squares. After comparing the first non-occluded area and the second non-occluded area, the part of their non-coincident region that belongs to the first non-occluded area can be taken as a first region, i.e., a region to which the occluding object is to be added, and the part of their non-coincident region that belongs to the second non-occluded area can be taken as a second region, i.e., a region from which the occluding object is to be removed. Different display forms of the occluding object are then used for the determined first region and second region when the occluding object change picture is displayed in the virtual scene.
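The comparison of Fig. 5a to Fig. 5c reduces to two set differences. The sketch below (hypothetical names; cells assumed to be grid coordinates) determines the region to which the occluding object is added and the region from which it is removed.

    def plan_occluder_change(first_unoccluded: set, second_unoccluded: set) -> tuple:
        """Return (first_region, second_region):
        - first_region: was non-occluded, no longer is -> display the occluding
          object addition picture here;
        - second_region: was occluded, becomes non-occluded -> display the
          occluding object dissipation picture here."""
        first_region = first_unoccluded - second_unoccluded
        second_region = second_unoccluded - first_unoccluded
        return first_region, second_region

    # Usage: the field of view shrinks on the right and grows downward.
    first_unoccluded = {(x, y) for x in range(6) for y in range(3)}
    second_unoccluded = {(x, y) for x in range(4) for y in range(4)}
    to_cover, to_reveal = plan_occluder_change(first_unoccluded, second_unoccluded)
    print(f"add occluding object over {len(to_cover)} cells, remove it from {len(to_reveal)} cells")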
In some possible embodiments, after the object attributes of the first virtual object in the first non-occluded area change, a fixed area within the first non-occluded area is taken as a first region to which the occluding object is to be added every preset time period, and/or a fixed area within the first occluded area is taken as a second region from which the occluding object is to be removed every preset time period. Different preset time periods may correspond to different fixed areas, and the length of the preset time period and the range of the fixed area can be determined based on the requirements of the actual scene, which is not limited herein. For example, when the virtual scene is a game scene and the object attributes of the first virtual object in the first non-occluded area change, it can be determined that the player has started to operate a game character. At this time, multiple first regions to which the occluding object is to be added may be determined gradually over time to continuously expand the coverage of the occluded area in the virtual scene, and/or multiple second regions from which the occluding object is to be removed may be determined gradually to continuously expand the coverage of the non-occluded area in the virtual scene, thereby improving the interactivity of the game scene. A sketch of this timed variant is given below.
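A minimal sketch of the timed variant described in the preceding paragraph, assuming a turn- or tick-driven loop and hypothetical names; after each elapsed preset period, one predefined fixed area becomes a first region to be covered by the occluding object.

    def periodic_first_regions(fixed_areas: list, elapsed_periods: int) -> set:
        """fixed_areas[i] is the fixed area scheduled for the i-th preset time
        period; after `elapsed_periods` periods, all scheduled areas so far are
        treated as first regions, so the occluded area keeps expanding."""
        covered = set()
        for area in fixed_areas[:elapsed_periods]:
            covered |= area
        return covered

    # Usage: every period, one more column of the map is swallowed by the fog.
    schedule = [
        {(9, y) for y in range(6)},
        {(8, y) for y in range(6)},
        {(7, y) for y in range(6)},
    ]
    for period in range(1, 4):
        newly = periodic_first_regions(schedule, period)
        print(f"after period {period}: {len(newly)} cells are scheduled to be occluded")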
In some possible embodiments, each second virtual object in the first occluded area may correspond to its own occlusion region, and, for each second virtual object, the occlusion region corresponding to it may be determined by the object attributes of that second virtual object. The occlusion regions corresponding to different second virtual objects may be independent of one another or may overlap one another, and may specifically be determined based on the actual object attributes of each second virtual object, which is not limited herein.
Further, a change in the object attribute of any second virtual object in the first occluded area includes, but is not limited to, a change in its skill radiation range, a change in its movement capability, a change in its physical strength value (hit points and the like), the second virtual object being hit by a skill attack related to occluding object elimination, and so on, and may be determined based on the actual application scenario, which is not limited herein. In other words, when the object attribute of any second virtual object in the first occluded area changes in a way that changes the occlusion region corresponding to that object, the first area where occluding objects are to be added and/or the second area from which occluding objects are to be removed may be determined based on the object attributes of that second virtual object before and after the change.
For example, if the physical strength value of a second virtual object decreases, the range of its corresponding occlusion region after the decrease is smaller than the range before the decrease. For another example, if the position of a second virtual object in the virtual scene changes, the position of the occlusion region corresponding to that second virtual object in the virtual scene changes as well. For another example, if the skill radiation range of a second virtual object increases, the range of the occlusion region corresponding to that second virtual object increases accordingly.
That is to say, the degree of change of the object attribute of the second virtual object in the first occlusion region is positively correlated with the size of the range of the corresponding occlusion region, and the specific determination manner of the object attribute of the second virtual object and the range of the corresponding occlusion region can be determined based on the actual application scene requirement, which is not limited herein.
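Purely as an illustration of one possible mapping (the embodiments leave the exact mapping to the actual application scene), the sketch below derives an occlusion-region radius from the attributes mentioned above; the attribute names, weights and units are assumptions.

from dataclasses import dataclass

@dataclass
class SecondObjectAttributes:
    skill_radiation_range: float   # in grid cells
    movement_capability: float     # in grid cells per turn
    physical_strength: float       # normalized hit-point ratio, 0..1

def occlusion_radius(attrs: SecondObjectAttributes) -> float:
    # Larger skill radiation range or movement capability and a higher
    # physical strength value give a larger corresponding occlusion region.
    base = max(attrs.skill_radiation_range, attrs.movement_capability)
    return base * (0.5 + 0.5 * attrs.physical_strength)

before = SecondObjectAttributes(4.0, 3.0, 1.0)
after = SecondObjectAttributes(4.0, 3.0, 0.4)   # physical strength decreased
print(occlusion_radius(before), occlusion_radius(after))  # 4.0 2.8: the region shrinks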
It should be particularly noted that each of the possible embodiments shown in step S12 can be implemented by the terminal device. Optionally, in step S12, displaying the occluding object change picture in the virtual scene, displaying its various presentation forms, displaying the virtual scene elements corresponding to the change from the first non-occluded area to the second non-occluded area, and displaying virtual scene elements other than virtual objects within the first occluded area or the second occluded area may all be executed by the terminal device as instructed by the server. In response to the change in the object attribute of the first virtual object in the first non-occluded area, the server may determine the second non-occluded area that needs to be displayed in the virtual scene according to the object attributes of each virtual object in the first non-occluded area after that change. Further, the server may determine the occluding object change picture based on the first and second non-occluded areas so as to instruct the terminal device to display the occluding object change picture in the virtual scene, and may also instruct the terminal device to display the virtual scene elements corresponding to the change from the first non-occluded area to the second non-occluded area and to display the virtual scene elements other than virtual objects within the first occluded area or the second occluded area.
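A rough, non-limiting server-side sketch of this division of work is given below; the visibility function, message fields and send callback are hypothetical placeholders rather than an actual protocol of the embodiments.

def recompute_second_non_occluded(objects, visible_cells_for):
    # Union of the cells made visible by every virtual object in the
    # first non-occluded area after the attribute change.
    area = set()
    for obj in objects:
        area |= visible_cells_for(obj)
    return area

def on_attribute_change(first_non_occluded, objects, visible_cells_for, send_to_terminal):
    second_non_occluded = recompute_second_non_occluded(objects, visible_cells_for)
    # Instruct the terminal device to play the occluding object change
    # picture and refresh the scene elements of the affected cells.
    send_to_terminal({
        "type": "occluder_change",
        "add_area": sorted(first_non_occluded - second_non_occluded),
        "remove_area": sorted(second_non_occluded - first_non_occluded),
    })
    return second_non_occluded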
In the embodiment of the application, through changes in different object attributes of the virtual objects in the virtual scene, a variety of occluding object change pictures, such as an occluding object adding picture, an occluding object dissipating picture and an occluding object moving picture, can be displayed in the virtual scene, which enriches the display content of the virtual scene and effectively adapts to the state changes of the virtual objects. Meanwhile, by means of the occluding object change picture, the non-occluded area displayed once the picture stops can switch from the non-occluded area before the object attribute of the virtual object changed to the one after the change, which improves the display effect of the virtual scene. In addition, displaying the corresponding virtual scene elements during the change from the first non-occluded area to the second non-occluded area, and displaying the virtual scene elements other than virtual objects within the occluded area, further improve the diversity of the virtual scene and the user experience.
Referring to fig. 6, fig. 6 is another schematic flow chart of a virtual scene display method provided in the embodiment of the present application. As shown in fig. 6, the virtual scene display method provided in the embodiment of the present application may include the following steps:
step S61, displaying a first non-occluded area and a corresponding first occluded area in the virtual scene, where the first non-occluded area includes at least one virtual object.
Step S62, in response to the object attribute of the first virtual object in the first non-occlusion region changing, displaying an occlusion change picture in the virtual scene, so as to display a second non-occlusion region and a corresponding second occlusion region in the virtual scene after the occlusion change picture stops being displayed.
In some possible embodiments, the detailed implementation of the steps S61 to S62 can refer to the implementation shown in steps S11 to S12 in fig. 1, and will not be described herein again.
Step S63, in response to an action instruction for a second virtual object in any occluded area, displaying an action picture, in that occluded area, of the virtual object corresponding to the action instruction.
In some possible embodiments, for an occluded area in the virtual scene, in response to an action instruction for a second virtual object in any occluded area, an action picture of the virtual object corresponding to the action instruction may be displayed in that occluded area. The second virtual object is any virtual object in the occluded area, and the virtual object corresponding to the action instruction is either the second virtual object itself or a first virtual object from the first non-occluded area that moves to the position of the second virtual object and executes the action instruction. That is, regardless of whether the first occluded area or the second occluded area is currently displayed in the virtual scene, if an action instruction for any second virtual object in an occluded area of the virtual scene is detected, the action picture of the virtual object corresponding to that action instruction can be displayed in the occluded area. For example, if a first virtual object in the first non-occluded area moves to the position of a second virtual object in the first occluded area and attacks that second virtual object, the action picture of the first virtual object in the second occluded area and/or the action picture of the attacked second virtual object may be displayed. For another example, in a turn-based game, when a second virtual object in an occluded area releases a skill based on an action instruction from the game mechanism, the action picture of that second virtual object releasing the skill can be displayed in the occluded area.
The action instruction may be an instruction for instructing the virtual object to perform a relevant virtual action such as skill release, attack, or movement, and may be specifically determined based on a requirement of an actual application scenario, which is not limited herein.
Based on the above implementation manner, when the virtual object is occluded by the occlusion region in the virtual scene, based on the action instruction for the occluded virtual object, the relevant action picture of the occluded virtual object corresponding to the action instruction can be displayed in the occlusion region, so as to improve the representation effect of the virtual scene.
Optionally, when the action picture of the virtual object corresponding to the action instruction is displayed in any occluded area, the preset action and skill performance of the virtual object corresponding to the action instruction may first be determined according to the action instruction, the corresponding silhouette picture may then be determined based on that preset action and skill performance, and the silhouette picture may be displayed in the occluded area.
The outline of the silhouette picture is completely consistent with the outline of the preset action and skill performance of the virtual object, so that the player can perceive, in an obscured manner through the occluded area, the preset action and skill performance of the virtual object.
The preset action and skill performance corresponding to each action instruction may be determined based on the requirements of the actual application scenario, which is not limited herein.
Referring to fig. 7, fig. 7 is a schematic view of a scene displaying a silhouette picture according to an embodiment of the present application. As shown in fig. 7, an occluded area contains a virtual object. After an action instruction for the virtual object is detected, it is determined from the action instruction that the preset action of the virtual object is a skill release action, that the skill release action is embodied as leaping into the air and raising a sword, and that the corresponding skill performance is a hexagonal light effect. Further, based on the skill release action and the skill performance corresponding to the preset action, the silhouette picture shown in fig. 7 can be presented.
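One possible way to obtain such a silhouette picture is sketched below, assuming the frame of the preset action and skill performance is available as an RGBA image: the frame's alpha outline is kept and its colors are flattened. The use of Pillow, the fill colour and the threshold are illustrative assumptions, not part of the embodiments.

from PIL import Image

def silhouette(frame: Image.Image, fill=(40, 40, 60, 200)) -> Image.Image:
    # Wherever the frame is opaque, paint a flat fill colour; elsewhere
    # stay transparent, so only the outline of the pose remains visible.
    frame = frame.convert("RGBA")
    mask = frame.getchannel("A").point(lambda a: 255 if a > 0 else 0)
    out = Image.new("RGBA", frame.size, (0, 0, 0, 0))
    out.paste(Image.new("RGBA", frame.size, fill), mask=mask)
    return out

# Usage: composite the silhouette over the tile of the occluded area, e.g.
# occluded_tile.alpha_composite(silhouette(skill_release_frame))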
It should be particularly noted that each of the possible embodiments shown in steps S61 to S63 can be implemented by the terminal device. Optionally, the server may, in response to an action instruction for a second virtual object in any occluded area, determine the preset action and skill performance of the second virtual object corresponding to the action instruction, determine the silhouette picture corresponding to the second virtual object according to that preset action and skill performance, and then instruct the terminal device to display the silhouette picture corresponding to the second virtual object in the occluded area.
In this embodiment of the application, when the terminal device or the server performs the related data calculations, for example determining the second non-occluded area of the virtual scene based on the object attributes of each virtual object in the first non-occluded area after the object attribute of the first virtual object has changed, or determining the first area where occluding objects are to be added and the second area from which occluding objects are to be removed based on the first and second non-occluded areas, the related data processing may be implemented based on cloud computing in the field of cloud technology.
Cloud computing is a computing mode that distributes computing tasks over a resource pool formed by a large number of computers, so that various application systems can acquire computing power, storage space and information services as needed. The network that provides the resources is called the "cloud". To the user, resources in the "cloud" appear to be infinitely expandable, and can be obtained at any time, used on demand, expanded at any time, and paid for according to use. Based on this, the data processing involved in the virtual scene display method provided by the embodiment of the application can be realized through cloud computing.
The virtual scene display method provided in the embodiments of the present application is explained below with a complete game example. For ease of understanding, the game picture is treated as the virtual scene. Referring to fig. 8, fig. 8 is a schematic flow chart of a game picture display method according to an embodiment of the present application. When the player's game character is within the player's field of view, for example when the character enters a standby state after completing a combat action, is passively displaced, or has its movement capability or skill range changed, the player's next field of view (the second non-occluded area) can be determined. If the next field of view differs from the current field of view, an occluding object dissipating picture can be displayed on the part of the game picture where the field of view is to be enlarged, and the virtual scene elements are refreshed while the dissipating picture is displayed, so that when the dissipating picture stops, the player's new field of view is displayed together with the refreshed virtual scene elements. Meanwhile, an occluding object adding picture is displayed on the part of the game picture where the field of view is to be reduced, and the virtual scene elements are refreshed while the adding picture is displayed, so that when the adding picture stops, the player's new field of view is displayed and the virtual scene elements within the reduced field of view are covered by occluding objects.
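The client-side flow of this game example can be summarised by the following non-limiting sketch; the animation and refresh callables are hypothetical placeholders standing in for the dissipating picture, the adding picture and the scene-element refresh described above.

def update_player_view(current_fov, next_fov,
                       play_dissipating_picture, play_adding_picture,
                       refresh_scene_elements):
    gained = next_fov - current_fov   # view grows: occluders dissipate here
    lost = current_fov - next_fov     # view shrinks: occluders are added here
    if gained:
        play_dissipating_picture(gained)
    if lost:
        play_adding_picture(lost)
    refresh_scene_elements(next_fov)  # refreshed while the pictures play
    return next_fov                   # shown once the pictures stop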
In the embodiment of the application, through changes in different object attributes of the virtual objects in the virtual scene, a variety of occluding object change pictures, such as an occluding object adding picture, an occluding object dissipating picture and an occluding object moving picture, can be displayed in the virtual scene, which enriches the display content of the virtual scene and effectively adapts to the state changes of the virtual objects. Meanwhile, by means of the occluding object change picture, the non-occluded area displayed once the picture stops can switch from the non-occluded area before the object attribute of the virtual object changed to the one after the change; the corresponding virtual scene elements can be displayed during the change from the first non-occluded area to the second non-occluded area; and the virtual scene elements other than virtual objects can be displayed within the occluded area, all of which improve the display effect of the virtual scene. Moreover, silhouette pictures of the virtual objects can be displayed in the occluded areas, which further enhances the appeal to users and the user experience, so the applicability is high.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a virtual scene display apparatus according to an embodiment of the present application. The device 1 provided by the embodiment of the application comprises:
a virtual scene display module 11, configured to display a first non-occluded area in a virtual scene, where the first non-occluded area includes at least one virtual object;
a shelter change picture display module 12, configured to display a shelter change picture in the virtual scene in response to a change in an object attribute of a first virtual object in the first non-shelter area, so as to display a second non-shelter area in the virtual scene after the display of the shelter change picture is stopped;
wherein the first and second non-occluded areas are non-occluded areas before and after the change in the object attribute of the first virtual object in the virtual scene, respectively.
In some possible embodiments, the virtual scene display module 11 is configured to display a first non-occlusion region and a corresponding first occlusion region in a virtual scene;
the shelter change picture display module 12 is configured to display a shelter change picture in the virtual scene, so as to display a second non-shelter area and a corresponding second shelter area in the virtual scene after the display of the shelter change picture is stopped.
In some possible embodiments, at least one of displaying the first shaded region or displaying the second shaded region is achieved by a representation of a shade.
In some possible embodiments, the shelter variation picture display module 12 is configured to:
displaying at least one of a shelter adding picture, a shelter dissipating picture and a shelter moving picture in the virtual scene;
the shelter adding picture is a picture for gradually displaying shelters from the edge of a first area to the center of the first area, and the first area is an area to be added with shelters in the virtual scene;
the shelter dissipating picture is a picture for gradually eliminating the shelters from the center of a second area to the edge of the second area, and the second area is an area of the virtual scene where the shelters are to be eliminated.
In some possible embodiments, the shelter alteration screen display module 12 is further configured to:
and displaying the virtual scene element which is changed from the first non-shielding area to the second non-shielding area in the virtual scene.
In some possible embodiments, the apparatus 1 further includes an action screen display module 13, and the action screen display module 13 is further configured to:
and responding to an action instruction aiming at a second virtual object in any shielding area, and displaying an action picture of the virtual object corresponding to the action instruction in any shielding area.
In some possible embodiments, the motion picture display module 13 is configured to:
displaying a silhouette picture of a virtual object corresponding to the action instruction in any one of the shaded areas;
the silhouette picture is determined based on the preset action and skill performance of the virtual object corresponding to the action command.
In some possible embodiments, the shelter variation picture display module 12 is configured to:
and displaying a mask change screen in the virtual scene based on the object attribute of each of the virtual objects in the first non-mask area after the object attribute of the first virtual object is changed.
In some possible embodiments, the shelter alteration screen display module 12 is further configured to:
and displaying other virtual scene elements except the virtual object in the first shielding area or the second shielding area.
In some possible embodiments, the object attribute includes at least one of:
a location in the virtual scene;
mobility capability;
a skill radiation range.
The virtual scene display device may be a computer program (including program code) running in a computer device; for example, the virtual scene display device is application software. The device can be used for executing the corresponding steps in the virtual scene display method provided by the embodiment of the application.
In some possible embodiments, the virtual scene display device provided in the embodiments of the present application may be implemented by combining hardware and software. By way of example, the virtual scene display device provided in the embodiments of the present application may be a processor in the form of a hardware decoding processor that is programmed to execute the virtual scene display method provided in the embodiments of the present application; for example, the processor in the form of a hardware decoding processor may employ one or more application-specific integrated circuits (ASICs), DSPs, programmable logic devices (PLDs), complex programmable logic devices (CPLDs), field-programmable gate arrays (FPGAs) or other electronic components.
In some feasible embodiments, the virtual scene display device provided in the embodiments of the present application may be implemented in a software manner, and in a specific implementation, the virtual scene display device provided in the embodiments of the present application may execute, through each built-in functional module thereof, the implementation manners provided in each step in fig. 1 and/or fig. 6 as described above, which may specifically refer to the implementation manners provided in each step, and no further description is given here.
Referring to fig. 10, fig. 10 is a schematic structural diagram of an electronic device provided in an embodiment of the present application. As shown in fig. 10, the electronic device 1000 in this embodiment may include a processor 1001, a network interface 1004 and a memory 1005, and the electronic device 1000 may further include a user interface 1003 and at least one communication bus 1002. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display screen (Display) and a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a standard wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory, for example at least one disk memory. The memory 1005 may optionally also be at least one storage device located remotely from the processor 1001. As shown in fig. 10, the memory 1005, which is a computer-readable storage medium, may include an operating system, a network communication module, a user interface module and a device control application program.
In the electronic device 1000 shown in fig. 10, the network interface 1004 may provide a network communication function; the user interface 1003 is an interface for providing a user with input; and the processor 1001 may be configured to call a device control application stored in the memory 1005, so as to implement the virtual scene display method provided in the embodiment of the present application.
It should be understood that in some possible embodiments, the processor 1001 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The memory may include a read-only memory and a random access memory, and provides instructions and data to the processor. A portion of the memory may also include a non-volatile random access memory. For example, the memory may also store device type information.
In a specific implementation, the electronic device 1000 may execute, through each built-in functional module thereof, the implementation manners provided in each step in fig. 1 and/or fig. 6, which may be referred to specifically for the implementation manners provided in each step, and are not described herein again.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and is executed by a processor to implement the method provided in each step in fig. 1 and/or fig. 6, which may specifically refer to the implementation manner provided in each step, and is not described herein again.
The computer-readable storage medium may be the virtual scene display apparatus or an internal storage unit of the electronic device, such as a hard disk or a memory of the electronic device. The computer-readable storage medium may also be an external storage device of the electronic device, such as a plug-in hard disk, a smart memory card (SMC), a secure digital (SD) card or a flash card provided on the electronic device. The computer-readable storage medium may further include a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), and the like. Further, the computer-readable storage medium may also include both an internal storage unit and an external storage device of the electronic device. The computer-readable storage medium is used for storing the computer program and other programs and data required by the electronic device, and may also be used to temporarily store data that has been output or is to be output.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the electronic device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the methods provided by the steps of fig. 1 and/or fig. 6.
The terms "first", "second", and the like in the claims and in the description and drawings of the present application are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or electronic device that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or electronic device. Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments. The term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of the two, and that the components and steps of the examples have been described above in general terms of their functions in order to clearly illustrate the interchangeability of hardware and software. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The above disclosure is only for the purpose of illustrating the preferred embodiments of the present application and is not intended to limit the scope of the present application, which is defined by the appended claims.

Claims (15)

1. A method for displaying a virtual scene, the method comprising:
displaying a first unobscured region in a virtual scene, the first unobscured region including at least one virtual object;
in response to an object attribute of a first virtual object in the first non-occluded area changing, displaying an occluding object changing picture in the virtual scene to display a second non-occluded area in the virtual scene after the occluding object changing picture stops being displayed;
the first and second non-occluded areas are non-occluded areas before and after the object attribute of the first virtual object in the virtual scene changes, respectively.
2. The method of claim 1, wherein displaying the first unobscured region in the virtual scene comprises:
displaying a first non-occlusion area and a corresponding first occlusion area in a virtual scene;
the displaying a shelter change picture in the virtual scene to display a second non-shelter area in the virtual scene after the shelter change picture stops being displayed comprises:
and displaying a shelter change picture in the virtual scene so as to display a second non-shelter area and a corresponding second shelter area in the virtual scene after the shelter change picture is stopped being displayed.
3. The method of claim 2, wherein at least one of displaying the first obscured area or displaying the second obscured area is achieved through a representation of an obscuration.
4. The method of claim 1, wherein said displaying an occlusion change picture in the virtual scene comprises:
displaying at least one of a shelter add screen, a shelter dissipate screen, and a shelter move screen in the virtual scene;
the shelter adding picture is a picture for gradually displaying shelters from the edge of a first area to the center of the first area, and the first area is an area to be added with shelters in the virtual scene;
the shelter dissipation picture is a picture for gradually eliminating shelters from the center of a second area to the edge of the second area, and the second area is an area of shelters to be eliminated in the virtual scene.
5. The method of claim 1, further comprising:
and displaying the virtual scene elements corresponding to the change from the first non-shielding area to the second non-shielding area in the virtual scene.
6. The method of claim 1, further comprising:
and responding to an action instruction aiming at a second virtual object in any shielding area, and displaying an action picture of the virtual object corresponding to the action instruction in any shielding area.
7. The method according to claim 6, wherein the displaying the action screen of the virtual object corresponding to the action instruction in any one of the shaded areas comprises:
displaying a silhouette picture of a virtual object corresponding to the action instruction in any one of the shading areas;
wherein the silhouette picture is determined based on the preset action and skill performance of the virtual object corresponding to the action instruction.
8. The method according to claim 1 or 2, wherein the displaying of the shelter-changing picture in the virtual scene comprises:
and displaying a shelter change picture in the virtual scene based on the object attribute of each virtual object in the first non-shelter area after the object attribute of the first virtual object is changed.
9. The method of claim 2, further comprising:
and displaying other virtual scene elements except the virtual object in the first shading area or the second shading area.
10. The method according to any one of claims 1 to 9, wherein the object properties comprise at least one of:
a location in the virtual scene;
mobility capability;
a skill radiation range.
11. A virtual scene display apparatus, comprising:
a virtual scene display module, configured to display a first non-occluded area in a virtual scene, where the first non-occluded area includes at least one virtual object;
a shelter change picture display module, configured to display a shelter change picture in the virtual scene in response to a change in an object attribute of a first virtual object in the first non-shelter area, so as to display a second non-shelter area in the virtual scene after the display of the shelter change picture is stopped;
the first and second non-occluded areas are non-occluded areas before and after the object attribute of the first virtual object in the virtual scene changes, respectively.
12. The apparatus of claim 11,
the virtual scene display module is used for displaying a first non-shielding area and a corresponding first shielding area in a virtual scene;
the shelter change picture display module is used for displaying a shelter change picture in the virtual scene so as to display a second non-shelter area and a corresponding second shelter area in the virtual scene after the shelter change picture stops being displayed.
13. The apparatus of claim 11, further comprising:
and the action picture display module is used for responding to an action instruction aiming at a second virtual object in any shielding area and displaying the action picture of the virtual object corresponding to the action instruction in any shielding area.
14. An electronic device comprising a processor and a memory, the processor and the memory being interconnected;
the memory is used for storing a computer program;
the processor is configured to perform the method of any of claims 1 to 10 when the computer program is invoked.
15. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which is executed by a processor to implement the method of any one of claims 1 to 10.
CN202110037843.9A 2021-01-12 2021-01-12 Virtual scene display method, device, equipment and storage medium Pending CN112717390A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110037843.9A CN112717390A (en) 2021-01-12 2021-01-12 Virtual scene display method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110037843.9A CN112717390A (en) 2021-01-12 2021-01-12 Virtual scene display method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112717390A true CN112717390A (en) 2021-04-30

Family

ID=75590661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110037843.9A Pending CN112717390A (en) 2021-01-12 2021-01-12 Virtual scene display method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112717390A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017068438A (en) * 2015-09-29 2017-04-06 株式会社コロプラ Computer program for generating silhouette, and computer implementing method
CN106296786A (en) * 2016-08-09 2017-01-04 网易(杭州)网络有限公司 The determination method and device of scene of game visibility region
CN107358579A (en) * 2017-06-05 2017-11-17 北京印刷学院 A kind of game war dense fog implementation method
CN107517360A (en) * 2017-08-01 2017-12-26 深圳英飞拓科技股份有限公司 A kind of image-region masking methods and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MTHOUTAI: "Smooth fog-of-war effect for 2D games", 博客园 (cnblogs) *
SCHIVAS: "High-performance fog-of-war *** implementation for Unity3D games", 博客园 (cnblogs) *
包包入侵SSR: "NetEase has released yet another battle-royale game! This time it is a fog survival shooter! The flashlight lighting is passable; it would be even better as a 3D version", BILIBILI *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7504261B1 (en) 2023-05-19 2024-06-21 エヌエイチエヌ コーポレーション Game Program

Similar Documents

Publication Publication Date Title
CN111767503B (en) Game data processing method, device, computer and readable storage medium
CN108619720B (en) Animation playing method and device, storage medium and electronic device
WO2022151946A1 (en) Virtual character control method and apparatus, and electronic device, computer-readable storage medium and computer program product
US11779845B2 (en) Information display method and apparatus in virtual scene, device, and computer-readable storage medium
JP7492611B2 (en) Method, apparatus, computer device and computer program for processing data in a virtual scene
KR102680014B1 (en) Method and apparatus for displaying pictures in a virtual environment, devices, and media
JP2022540277A (en) VIRTUAL OBJECT CONTROL METHOD, APPARATUS, TERMINAL AND COMPUTER PROGRAM
JP2023126292A (en) Information display method, device, instrument, and program
US20230033530A1 (en) Method and apparatus for acquiring position in virtual scene, device, medium and program product
JP2023164787A (en) Picture display method and apparatus for virtual environment, and device and computer program
CN113069761B (en) Method and device for displaying virtual characters in game scene and electronic equipment
CN112717390A (en) Virtual scene display method, device, equipment and storage medium
CN113018862A (en) Virtual object control method and device, electronic equipment and storage medium
JP2024506920A (en) Control methods, devices, equipment, and programs for virtual objects
CN111589115B (en) Visual field control method and device for virtual object, storage medium and computer equipment
CN114307150A (en) Interaction method, device, equipment, medium and program product between virtual objects
KR20190018816A (en) System for controlling skills of online game
CN111760279B (en) Picture display method, device, terminal and storage medium
WO2024012016A1 (en) Information display method and apparatus for virtual scenario, and electronic device, storage medium and computer program product
WO2023231557A1 (en) Interaction method for virtual objects, apparatus for virtual objects, and device, storage medium and program product
WO2024125092A1 (en) Interaction method and apparatus based on flyable prop, and electronic device and storage medium
US20230040506A1 (en) Method and apparatus for controlling virtual character to cast skill, device, medium, and program product
WO2023231544A1 (en) Virtual object control method and apparatus, device, and storage medium
CN117679743A (en) Information processing method and device in game, electronic equipment and readable storage medium
US10888781B2 (en) Game scene display control method and system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40042459

Country of ref document: HK