
CN111659117A - Virtual object display method and device, computer equipment and storage medium

Info

Publication number
CN111659117A
Authority
CN
China
Prior art keywords
virtual object
virtual
state
distance
target state
Prior art date
Legal status
Granted
Application number
CN202010653031.2A
Other languages
Chinese (zh)
Other versions
CN111659117B (en)
Inventor
黄晓权
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010653031.2A
Publication of CN111659117A
Application granted
Publication of CN111659117B
Legal status: Active
Anticipated expiration

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/55: Controlling game characters or game objects based on the game progress

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to a virtual object display method and apparatus, a computer device, and a storage medium, in the technical field of virtual scenes. The method includes: displaying a virtual scene picture; in response to a second virtual object being in a target state, obtaining an object distance; obtaining a first material of the second virtual object based on the object distance; and displaying the second virtual object in the virtual scene picture based on the first material of the second virtual object. From the perspective of an observer, the transparency of a virtual object in the target state is adaptively adjusted according to the distance between that virtual object and the virtual object corresponding to the observer, so that the display effect of the virtual object in the target state is closer to reality, its display is better matched with the virtual scene, and its display effect in the virtual scene is improved.

Description

Virtual object display method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of virtual scene technologies, and in particular, to a method and an apparatus for displaying a virtual object, a computer device, and a storage medium.
Background
At present, in many game applications that construct virtual scenes, a virtual object may enter a stealth state in the virtual scene through active or passive triggering.
In the related art, how a virtual object is displayed when it enters the stealth state in a virtual scene is usually fixed by a developer according to the game mechanism. For example, when a virtual object enters the stealth state, it is visible only to teammates or to its operator; alternatively, it is visible to all other virtual objects in the virtual scene with a fixed transparency.
However, because the display mode of a virtual object in the stealth state is fixed by the game mechanism, the stealthy virtual object does not match the virtual scene, and its display effect in the virtual scene is poor.
Disclosure of Invention
The embodiments of the application provide a virtual object display method and apparatus, a computer device, and a storage medium, in which the parameters used for displaying a virtual object can be determined according to the object distance. The technical scheme is as follows:
in one aspect, a method for displaying a virtual object is provided, the method comprising:
displaying a virtual scene picture, wherein the virtual scene picture is a picture of a virtual scene observed from the visual angle of a first virtual object;
in response to a second virtual object being in a target state, obtaining an object distance, the object distance being a distance between the first virtual object and the second virtual object;
acquiring a first material of the second virtual object based on the object distance; the transparency of the first material of the second virtual object is positively correlated with the object distance;
and displaying the second virtual object in the virtual scene picture based on the first material of the second virtual object.
In one aspect, an apparatus for presenting a virtual object is provided, the apparatus comprising:
the picture display module is used for displaying a virtual scene picture, wherein the virtual scene picture is a picture of a virtual scene observed from the visual angle of the first virtual object;
a distance acquisition module, configured to acquire an object distance in response to a second virtual object being in a target state, where the object distance is a distance between the first virtual object and the second virtual object;
a material obtaining module, configured to obtain a first material of the second virtual object based on the object distance; the transparency of the first material of the second virtual object is positively correlated with the object distance;
and the object display module is used for displaying the second virtual object in the virtual scene picture based on the first material of the second virtual object.
In one possible implementation manner, the material obtaining module includes:
the first material obtaining submodule is used for obtaining a first material of the second virtual object corresponding to the state entering duration based on the object distance; the state entry duration is a duration for which the second virtual object enters the target state.
In one possible implementation manner, in response to the state entry duration being within a first time interval, the transparency of the first material of the second virtual object is positively correlated with the state entry duration; the first time interval is a time interval in which the second virtual object starts to enter the target state;
in response to the state entry duration being within a second time interval, the transparency of the first material of the second virtual object is inversely correlated with the state entry duration; the second time interval is a time interval during which the second virtual object exits from the target state.
In one possible implementation, the apparatus further includes:
and the track display module is used for responding to the second virtual object in the target state and displaying the moving track of the second virtual object in the virtual scene picture.
In one possible implementation manner, the track display module includes:
the track display submodule is used for displaying the moving track of the second virtual object in the virtual scene picture in response to the state entry duration being within a specified duration interval; the state entry duration is a duration for which the second virtual object enters the target state.
In one possible implementation, the apparatus further includes:
the transparency reduction module is used for reducing the transparency of the material of a second virtual object in response to the second virtual object being in the target state and the second virtual object meeting a first condition;
the first condition includes at least one of the following conditions:
the second virtual object is in a first designated area in the virtual scene, the second virtual object performs a first designated action, and the second virtual object is hit by a first virtual prop.
In one possible implementation, the apparatus further includes:
the exit triggering module is used for triggering the second virtual object to exit the target state in response to the second virtual object being in the target state and the second virtual object meeting a second condition;
the second condition includes at least one of the following conditions:
the second virtual object is in a second designated area in the virtual scene, the second virtual object executes a second action, the second virtual object is hit by a second virtual prop, and the value of the designated attribute of the second virtual object is lower than a value threshold.
In one possible implementation, the apparatus further includes:
and the reminding display module is used for displaying a visual reminding effect in the virtual scene picture in response to the second virtual object and the first virtual object belonging to different teams and the object distance being smaller than a distance threshold.
In one possible implementation manner, the material obtaining module includes:
and the second material obtaining sub-module is used for responding to that the second virtual object and the first virtual object belong to different teams, and obtaining the first material of the second virtual object based on the object distance.
In a possible implementation manner, a state trigger control is displayed in the virtual scene picture in an overlapping manner; the device further comprises:
and the state triggering module is used for triggering the first virtual object to enter the target state in response to receiving the triggering operation of the state triggering control.
In one possible implementation, the apparatus further includes:
in response to the first virtual object meeting a third condition, the triggerable state trigger control is displayed in the virtual scene picture in an overlapping manner;
the third condition includes at least one of the following conditions:
the system comprises a virtual resource and a virtual prop, wherein the virtual resource is required for releasing a target skill, and the virtual prop is used for releasing the target skill;
and the target skill is the skill corresponding to the state trigger control.
In another aspect, a computer device is provided, which includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the above virtual object display method.
In yet another aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a code set, or an instruction set is stored, which is loaded and executed by a processor to implement the above virtual object display method.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the virtual object display method provided in the above aspect or in its various alternative implementations.
From the perspective of an observer, for a virtual object in the target state, the transparency of that virtual object is adaptively adjusted according to the distance between it and the virtual object corresponding to the observer, so that the display effect of the virtual object in the target state in the virtual scene is closer to reality, its display is better matched with the virtual scene, and its display effect in the virtual scene is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic structural diagram of a terminal according to an exemplary embodiment of the present application;
FIG. 2 is a schematic illustration of a display interface of a virtual scene provided by an exemplary embodiment of the present application;
FIG. 3 is a schematic illustration of a virtual object exposure flow provided by an exemplary embodiment of the present application;
FIG. 4 is a flowchart of a method for presenting a virtual object according to an exemplary embodiment of the present application;
FIG. 5 is a schematic view of a camera model to which the embodiment shown in FIG. 4 relates;
FIG. 6 is a schematic diagram illustrating a process of entering a stealth state for a virtual object according to the embodiment shown in FIG. 4;
FIG. 7 is a schematic illustration of a material replacement process according to the embodiment shown in FIG. 4;
FIG. 8 is a diagram illustrating a special effect of a moving track in a hiding process according to the embodiment shown in FIG. 4;
FIG. 9 is a schematic diagram illustrating a visual alert according to the embodiment shown in FIG. 4;
FIG. 10 is a schematic diagram illustrating the display of the target state from a first-person perspective according to the embodiment shown in FIG. 4;
FIG. 11 is a flow diagram of a target skill release process according to the embodiment shown in FIG. 4;
FIG. 12 is an interaction diagram of a virtual object state transition provided by an exemplary embodiment of the present application;
fig. 13 is a block diagram illustrating a structure of a virtual object presentation apparatus according to an exemplary embodiment of the present application;
fig. 14 is a block diagram of a computer device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
First, terms referred to in the embodiments of the present application are described:
Virtual environment: the virtual environment that is displayed (or provided) when an application is run on the terminal. The virtual environment may be a simulation of the real world, a semi-simulated semi-fictional environment, or a purely fictional environment. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment, which is not limited in this application. The following embodiments are illustrated with the virtual environment being a three-dimensional virtual environment.
Virtual object: refers to a movable object in a virtual environment. The movable object may be a virtual character, a virtual animal, an animation character, or the like, such as characters, animals, plants, oil drums, walls, and stones displayed in a three-dimensional virtual environment. Optionally, the virtual object is a three-dimensional volumetric model created based on skeletal animation techniques. Each virtual object has its own shape and volume in the three-dimensional virtual environment and occupies a portion of the space in the three-dimensional virtual environment.
Virtual props: tools that a virtual object can use in a virtual environment, including virtual weapons that can injure other virtual objects, such as pistols, rifles, sniper rifles, daggers, knives, swords, and axes; supply props such as bullets; virtual pendants that provide added attributes for a designated virtual weapon, such as quick cartridge clips, sighting telescopes, and silencers; and defensive props such as shields, armor, and armored vehicles.
First-person shooter game: a shooting game that a user plays from a first-person perspective, in which the picture of the virtual environment in the game is a picture of the virtual environment observed from the perspective of a first virtual object. In the game, at least two virtual objects fight in a single-round battle mode in the virtual environment; a virtual object survives in the virtual environment by avoiding injuries initiated by other virtual objects and dangers existing in the virtual environment (such as a poison circle or a swamp). When the life value of a virtual object in the virtual environment reaches zero, its life in the virtual environment ends, and the virtual objects that finally survive are the winners. Optionally, each client may control one or more virtual objects in the virtual environment, with a battle starting when the first client joins it and ending when the last client exits it. Optionally, the competitive modes of the battle may include a single-player battle mode, a two-player team battle mode, or a multi-player team battle mode, which is not limited in the embodiments of the present application.
User Interface (UI) controls: any visual control or element visible on the user interface of the application, such as picture, input box, text box, button, and tab controls, some of which respond to user operations.
The method provided in the present application may be applied to a virtual reality application program, a three-dimensional map program, a military simulation program, a first-person shooter game, a multiplayer online battle arena (MOBA) game, and the like. The following embodiments are exemplified by application in a first-person shooter game.
Referring to fig. 1, a schematic structural diagram of a terminal according to an exemplary embodiment of the present application is shown. As shown in fig. 1, the terminal includes a main board 110, an external input/output device 120, a memory 130, an external interface 140, a touch system 150, and a power supply 160.
The main board 110 has integrated therein processing elements such as a processor and a controller.
The external input/output device 120 may include a display component (e.g., a display screen), a sound playing component (e.g., a speaker), a sound collecting component (e.g., a microphone), various keys, and the like.
The memory 130 has program codes and data stored therein.
The external interface 140 may include a headset interface, a charging interface, a data interface, and the like.
The touch system 150 may be integrated into a display component or a key of the external input/output device 120, and the touch system 150 is used to detect a trigger operation performed by a user on the display component or the key.
The power supply 160 is used to power the various other components in the terminal.
In the embodiment of the present application, the processor in the main board 110 may generate a virtual scene by executing or calling the program code and data stored in the memory, and display the generated virtual scene through the external input/output device 120. In the process of displaying the virtual scene, the touch system 150 may detect a touch operation performed when the user interacts with the virtual scene.
The virtual scene may be a three-dimensional virtual scene, or the virtual scene may also be a two-dimensional virtual scene. Taking the example that the virtual scene is a three-dimensional virtual scene, please refer to fig. 2, which shows a schematic view of a display interface of the virtual scene according to an exemplary embodiment of the present application. As shown in fig. 2, the display interface of the virtual scene includes a scene screen 200, and the scene screen 200 includes a currently controlled virtual object 210, an environment screen 220 of the three-dimensional virtual scene, and a virtual object 240. The virtual object 240 may be a virtual object controlled by a user or a virtual object controlled by an application program corresponding to other terminals.
In fig. 2, the currently controlled virtual object 210 and the virtual object 240 are three-dimensional models in the three-dimensional virtual scene, and the environment picture of the three-dimensional virtual scene displayed in the scene picture 200 includes objects observed from the perspective of the currently controlled virtual object 210. For example, as shown in fig. 2, the environment picture 220 of the three-dimensional virtual scene displayed from the perspective of the currently controlled virtual object 210 includes the ground 224, the sky 225, the horizon 223, the hill 221, and the factory building 222.
Under the control of the user, the currently controlled virtual object 210 may release skills, use virtual props, move, and perform specified actions, and the virtual objects in the virtual scene may show different three-dimensional models. For example, the screen of the terminal supports touch operation, and the scene picture 200 of the virtual scene includes a virtual control; when the user touches the virtual control, the currently controlled virtual object 210 performs the specified action in the virtual scene and shows the corresponding three-dimensional model.
A terminal may display the three-dimensional model corresponding to a virtual object in a virtual scene through the virtual object display method. Please refer to fig. 3, which shows a schematic diagram of a virtual object display flow provided in an exemplary embodiment of the present application. The method may be executed by a computer device, where the computer device may be a terminal or a server, or may include the terminal and the server. As shown in fig. 3, a computer device may display a virtual object by performing the following steps.
Step 31, displaying a virtual scene picture, wherein the virtual scene picture is a picture of a virtual scene observed from the view angle of the first virtual object.
And step 32, responding to the second virtual object being in the target state, and acquiring the object distance, wherein the object distance is the distance between the first virtual object and the second virtual object.
Step 33, obtaining a first material of the second virtual object based on the object distance; the transparency of the first material of the second virtual object is positively correlated with the object distance.
Step 34, displaying the second virtual object in the virtual scene picture based on the first material of the second virtual object.
In a possible implementation manner, when the first virtual object is in the target state, the second material of the first virtual object is obtained according to the time length of the first virtual object entering the target state, the transparency of the second material of the first virtual object is related to the time length of the first virtual object entering the target state, and the first virtual object is displayed in the virtual scene picture according to the second material of the first virtual object.
In one possible implementation, a first virtual scene picture is displayed, and the first virtual scene picture is a picture of a virtual scene observed from the perspective of a first virtual object; in response to a second virtual object in the first virtual scene picture being in a target state, presenting the second virtual scene picture; the second virtual scene picture comprises a first virtual object and a second virtual object; the transparency of the second virtual object is positively correlated with the object distance; the object distance is the distance between the first virtual object and the second virtual object.
To sum up, in the scheme shown in the embodiment of the present application, from the perspective of an observer, for a virtual object in the target state, the transparency of that virtual object is adaptively adjusted according to the distance between it and the virtual object corresponding to the observer, so that the display effect of the virtual object in the target state in the virtual scene is closer to reality, its display is better matched with the virtual scene, and its display effect in the virtual scene is improved.
Referring to fig. 4, a flowchart of a method for displaying a virtual object according to an exemplary embodiment of the present application is shown. The method may be executed by a computer device, where the computer device may be a terminal or a server, or may include the terminal and the server. As shown in fig. 4, taking the computer device as a terminal as an example, the terminal may display the virtual object by performing the following steps.
Step 401, displaying a virtual scene picture.
In the embodiment of the present application, the virtual scene picture is a picture of the virtual scene observed from the perspective of a first virtual object, and it includes the first virtual object and a second virtual object. The first virtual object is the virtual object manipulated by the user of the terminal currently showing the virtual scene picture; the second virtual object is a virtual object controlled by a user on another terminal or by a computer device. The second virtual object may belong to the same camp or team as the first virtual object, to a neutral camp or team, or to a camp or team different from that of the first virtual object.
In a possible implementation manner, the virtual scene picture may be displayed in two manners: a virtual scene picture observed from the view angle of the first virtual object (for example, a first-person view angle), or a virtual scene picture observed from a view angle directly behind the first virtual object (for example, a third-person view angle).
Referring to fig. 5, a schematic view of a camera model according to an embodiment of the present application is shown. A point in the virtual object 51 is determined as the rotation center 52, around which the camera model rotates; optionally, the camera model is provided with an initial position, which is a position above and behind the virtual object (such as behind the head). Illustratively, as shown in fig. 5, the initial position is position 53; when the camera model rotates to position 54 or position 55, the viewing angle direction of the camera model changes with its rotation. The virtual scene picture obtained in this way is a third-person view angle picture.
The camera model can also be located at the head of the virtual object, so that the view angle direction of the camera model changes with the position and direction of the head of the virtual object; the virtual scene picture obtained in this way is a first-person view angle picture.
In the embodiment of the present application, the virtual object 51 may be a virtual vehicle in the virtual scene, or any other form of virtual object that can be controlled by a user, such as a virtual animal.
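As a rough illustration of the third-person camera model, a minimal sketch of orbiting the camera around the rotation center is shown below; the function and parameter names are assumptions, not part of the patent:

```python
import math

def orbit_camera_position(center, radius, yaw_deg, pitch_deg):
    # Place the camera on a sphere of the given radius around the
    # rotation center 52; yaw/pitch select positions such as 53-55.
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    cx, cy, cz = center
    return (cx + radius * math.cos(pitch) * math.sin(yaw),
            cy + radius * math.sin(pitch),
            cz + radius * math.cos(pitch) * math.cos(yaw))
```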
Step 402, in response to the second virtual object being in the target state, an object distance is obtained.
In the embodiment of the present application, when the second virtual object is in the target state, the current object distance is obtained.
Wherein the object distance may be a distance between the first virtual object and the second virtual object.
In one possible implementation, when the distance between the second virtual object and the first virtual object is less than or equal to a specified distance, a three-dimensional model of the second virtual object is presented in the virtual scene picture.
The specified distance may be configured by a developer as the maximum visual range within which a virtual object in the target state can be found.
In a possible implementation manner, the target state of the second virtual object includes the state of the second virtual object after it receives a specified instruction to release a skill, the state of the second virtual object when a specified virtual prop is equipped, or the state of the second virtual object after a specified virtual prop is used.
For example, the virtual object may have an active stealth skill, and when the user controls the virtual object to release the active skill, the virtual object changes from the normal state to the stealth state, where the stealth state is the target state; alternatively, a virtual prop for stealth may exist in the virtual scene, and when the virtual object holds or uses the virtual prop, it also changes from the normal state to the stealth state, where the stealth state is the target state.
For example, please refer to fig. 6, which is a schematic diagram illustrating a process of a virtual object entering the stealth state according to an embodiment of the present application. As shown in fig. 6, when the virtual object 61 enters the target state and the target state is the stealth state, the material and outer contour of the virtual object 61 gradually become transparent during the entry process, accompanied by the stealth-start special effect 62; when the transparency of the material and outer contour of the virtual object 61 reaches a certain value, the transparency stops changing, and the virtual object 63 in the stable stealth state is displayed in the virtual scene picture.
In addition, the target state may be a changed state, and the changed state may be a state in which a part or all of the material in the original three-dimensional model is replaced with another material and displayed.
For example, when the target state is the change state, the clothing pattern of the virtual object can be changed in the target state, for example, the clothing region of the model is changed from the horizontal stripe material to the vertical stripe material.
In one possible implementation manner, the coordinates of the first virtual object at the current moment in the world coordinate system of the virtual scene are acquired as the position point of the first virtual object; the coordinates of the second virtual object at the current moment in the world coordinate system of the virtual scene are acquired as the position point of the second virtual object; and the distance between the two position points is calculated to determine the current object distance.
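A minimal sketch of this distance calculation, assuming each object exposes a hypothetical world_position attribute holding its (x, y, z) coordinates:

```python
import math

def object_distance(first_obj, second_obj):
    # world_position: hypothetical (x, y, z) tuple in the scene's
    # world coordinate system at the current moment.
    return math.dist(first_obj.world_position, second_obj.world_position)
```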
Step 403, obtaining a first material of the second virtual object based on the object distance.
In the embodiment of the application, according to the acquired current object distance, the value of the current first material of the second virtual object is determined, and the corresponding first material is acquired.
The value of the first material may include a material type and a material transparency, and the transparency of the first material of the second virtual object is positively correlated with the object distance.
That is, the greater the object distance, the greater the transparency; the smaller the object distance, the smaller the transparency. In the virtual scene picture, the model of the second virtual object becomes more transparent as it moves farther from the first virtual object, and gradually less transparent as it moves closer.
In one possible implementation, the first material of the second virtual object corresponding to the duration of entering the state is obtained based on the object distance.
Wherein the state entry duration is a duration for which the second virtual object enters the target state.
In one possible implementation manner, the following two cases may be possible according to the duration of the second virtual object entering the target state:
1) In response to the state entry duration being within the first time interval, the transparency of the first material of the second virtual object is positively correlated with the state entry duration.
The first time interval is a time interval in which the second virtual object starts to enter the target state.
That is to say, when the second virtual object starts to enter the target state, there is a period of specified duration serving as the first time interval. The transparency of the first material of the model displayed by the second virtual object during the first time interval increases as the duration of entering the target state increases, until the current duration exceeds the first time interval; the transparency of the first material then stops increasing and, without considering the influence of other factors (such as distance), maintains its value unchanged.
2) In response to the state entry duration being within the second time interval, the transparency of the first material of the second virtual object is inversely correlated with the state entry duration.
The second time interval is a time interval during which the second virtual object exits from the target state.
That is to say, when the second virtual object is about to end the target state, there is a period of specified duration serving as the second time interval, during which the target state is gradually exited. The transparency of the first material of the model displayed by the second virtual object during the second time interval decreases as the duration of entering the target state increases, until the current duration exceeds the second time interval and, without considering the influence of other factors, the transparency of the first material becomes 0; the second virtual object then resumes the normal state.
For example, when the object distance is within 20 meters, the opacity may vary between 0.5 and 0.3; when the object distance is in the range of 20-50 meters, the opacity may vary between 0.3 and 0.1, in which case the outline of the second virtual object is in a semi-transparent state.
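A minimal sketch of such a distance-to-opacity mapping, using the illustrative bands above (linear interpolation within each band is an assumption):

```python
def stealth_opacity(distance_m):
    # 0.5 -> 0.3 within 20 m, 0.3 -> 0.1 from 20 m to 50 m, so the
    # transparency of the first material rises with the object distance.
    if distance_m <= 20.0:
        return 0.5 - 0.2 * (distance_m / 20.0)
    if distance_m <= 50.0:
        return 0.3 - 0.2 * ((distance_m - 20.0) / 30.0)
    return 0.1  # beyond 50 m the object may exceed the visual range
```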
In one possible implementation manner, in response to the second virtual object and the first virtual object belonging to different teams, the first material of the second virtual object is obtained based on the object distance.
When the second virtual object and the first virtual object belong to different teams, in addition to the second virtual object changing the transparency of the first material according to the state entry duration, the corresponding transparency of the first material is also determined based on the object distance when the second virtual object is displayed in the virtual scene picture. That is, the model material of the second virtual object finally displayed in the virtual scene picture may be determined jointly by the state entry duration and the object distance.
The influence ratio of the state entry duration and the influence ratio of the object distance are determined according to a specified proportion distribution, and the transparency-related parameters in the first material are determined accordingly from the state entry duration and the object distance.
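A sketch of one way to blend the two factors under an assumed 50/50 proportion distribution (all names and the 50 m normalization are assumptions):

```python
def combined_transparency(entry_t, fade_in, stay, fade_out,
                          distance_m, w_time=0.5, w_dist=0.5):
    # Time factor: ramps up during the first time interval, holds in
    # stable stealth, ramps down during the second (exit) interval.
    total = fade_in + stay + fade_out
    if entry_t < fade_in:
        time_factor = entry_t / fade_in
    elif entry_t < fade_in + stay:
        time_factor = 1.0
    elif entry_t < total:
        time_factor = (total - entry_t) / fade_out
    else:
        time_factor = 0.0  # target state over, back to the normal state
    # Distance factor: transparency positively correlated with distance.
    dist_factor = min(distance_m / 50.0, 1.0)
    return w_time * time_factor + w_dist * dist_factor
```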
Step 404, displaying the second virtual object in the virtual scene picture based on the first material of the second virtual object.
In the embodiment of the application, using the acquired first material of the second virtual object at the current time, the model of the second virtual object constructed from that first material is displayed in the virtual scene picture.
For example, please refer to fig. 7, which shows a schematic diagram of a material replacement process according to an embodiment of the present application. As shown in fig. 7, the material replacement process may be implemented by performing the following steps.
In step 71, the material file is loaded in a dynamic loading manner, so that no code needs to be changed when replacing the material; the effect of replacing the material can be obtained simply by updating the material file during a hot update.
In step 72, the invisible material and the playing time may be transmitted to the material management system during the implementation of the stealth special effect.
The material management system is a system that facilitates adding, searching, deleting, replacing, and presenting each material. After the stealth special effect is triggered, the identity of the related material resource and the playing time are passed as parameters to the material management system, so that the stealth material can be expressed on the virtual object.
In step 73, the materials of the virtual object are replaced with the stealth material by the material management system.
The stealth skill needs to transmit the effect parameters of the invisible material into the material management system, and the materials in all renderers (Renderers) of the model corresponding to the virtual object are replaced with the invisible material to express the stealth effect.
In step 74, the display and disappearance of the material is played over time.
In the material management system, the virtual object model uses the invisible material within the playing time of the invisible material; when the playing time is exceeded, the original materials of the virtual object are restored.
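A minimal sketch of such a material management system (class, method, and attribute names are hypothetical; the actual engine API is not specified in the text):

```python
import time

class MaterialManager:
    def __init__(self):
        self._saved = {}        # renderer -> original material list
        self._restore_at = None

    def apply_stealth(self, virtual_object, stealth_material, play_seconds):
        # Replace the materials in all renderers of the object's model
        # with the invisible (stealth) material.
        for renderer in virtual_object.renderers:
            self._saved[renderer] = renderer.materials
            renderer.materials = [stealth_material] * len(renderer.materials)
        self._restore_at = time.time() + play_seconds

    def update(self):
        # Called every frame; once the playing time is exceeded, the
        # original materials of the virtual object are restored.
        if self._restore_at is not None and time.time() >= self._restore_at:
            for renderer, originals in self._saved.items():
                renderer.materials = originals
            self._saved.clear()
            self._restore_at = None
```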
Step 405, responding to that the second virtual object is in the target state, displaying the moving track of the second virtual object in the virtual scene picture.
In this embodiment of the application, when the second virtual object is in the target state and the second virtual object moves in the virtual scene picture, the movement track may be displayed.
And displaying the moving track can be realized by adding a special effect animation on the moving path of the second virtual object.
For example, please refer to fig. 8, which shows a diagram illustrating a special effect of a moving track in a stealth process according to an embodiment of the present application. As shown in fig. 8, the virtual object 81 in the stealth state is at a position a in the virtual scene screen at a first time, and after a period of time, the virtual object 81 in the stealth state moves to a position B in the virtual scene screen at a second time, and a special effect animation 82 for displaying a trajectory from the position a to the position B may appear in the virtual scene screen at the second time.
In one possible implementation manner, in response to the state entry duration being within a specified duration interval, the movement track of the second virtual object is displayed in the virtual scene picture.
The state entry duration is a duration for which the second virtual object enters the target state. The specified duration interval is a specified duration region within the state entry duration.
That is, if the current time falls within the specified duration region of the second virtual object's state entry duration, a special-effect animation may be added to the moving path of the second virtual object, so as to display its movement track in the virtual scene picture.
For example, if the target state is the stealth state and the specified duration region covers the first 2 s after entering the stealth state, the movement trajectory of the second virtual object may be displayed in the virtual scene picture within those 2 s. For another example, if the target state is the stealth state and the specified duration region starts 6 s into the stealth state, the movement trajectory of the second virtual object is displayed in the virtual scene picture after the second virtual object has been in the stealth state for 6 s.
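A brief sketch of this check (the 2 s window follows the first example above; all names are illustrative):

```python
def maybe_show_trail(scene, second_obj, entry_t, interval=(0.0, 2.0)):
    # Show the movement-track special effect only while the state
    # entry duration falls inside the specified duration interval.
    if interval[0] <= entry_t <= interval[1] and second_obj.is_moving:
        scene.spawn_trail_effect(second_obj.recent_path)
```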
Step 406, in response to the second virtual object being in the target state and the second virtual object satisfying the first condition, reducing the transparency of the material of the second virtual object.
Wherein the first condition comprises at least one of the second virtual object being in a first designated area in the virtual scene, the second virtual object performing a first designated action, and the second virtual object being hit by the first virtual item.
In one possible implementation, when the second virtual object is in the target state and the second virtual object is in the first designated area in the virtual scene, the transparency of the material of the second virtual object is reduced.
Wherein the first designated area may be an area in the virtual scene that interferes with the virtual object being in the target state.
For example, the first designated area may be a stealth interference area, and when the virtual object in a stealth state enters the stealth interference area, the stealth effect of the virtual object is reduced.
In addition, the duration for which the virtual object in the target state stays in the first designated area may affect the degree of interference with the target state; that is, as the time in the first designated area increases, the transparency of the virtual object's material decreases at an accelerating rate.
In one possible implementation, when the second virtual object is in the target state, and the second virtual object executes the first specified action, the transparency of the material of the second virtual object is reduced.
The first specified action may be a specified action which, when performed by the virtual object in the target state, affects that state.
For example, the first designated action may be a running action; when the virtual object in the target state performs the running action and the target state is the stealth state, the stealth effect of the virtual object is reduced.
In a possible implementation manner, when the second virtual object is in the target state and the second virtual object is hit by the first virtual prop, the transparency of the material of the second virtual object is reduced.
The first virtual item may be a virtual item that inhibits the virtual object from being in the target state.
For example, if the first virtual prop is a virtual gun and the second virtual object is in the stealth state, the stealth effect of the second virtual object is reduced when it is hit by the virtual gun.
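A compact sketch of the first-condition check (any single clause suffices; the attribute names are illustrative):

```python
def meets_first_condition(obj, scene):
    # First condition: designated area, designated action, or being
    # hit by the first virtual prop, while in the target state.
    return obj.in_target_state and (
        scene.in_first_designated_area(obj)   # e.g. stealth interference area
        or obj.current_action == "run"        # first designated action
        or obj.was_hit_by_first_prop          # e.g. hit by a virtual gun
    )
```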
Step 407, in response to the second virtual object being in the target state and the second virtual object satisfying the second condition, triggering the second virtual object to exit the target state.
In this embodiment of the application, the second virtual object may exit the target state in two ways: active exit and passive exit. Active exit means that the user operates the second virtual object to end its target state; passive exit means that the target state of the second virtual object is interrupted by other factors.
The second condition comprises at least one of: the second virtual object is in a second designated area in the virtual scene, the second virtual object performs a second action, the second virtual object is hit by a second virtual prop, and the specified attribute value of the second virtual object is lower than a value threshold.
In one possible implementation, when the second virtual object is in the target state and the second virtual object is in a second designated area in the virtual scene, the second virtual object is triggered to exit the target state.
Wherein the second designated area may be a designated area that interrupts the target state of the second virtual object.
For example, the second designated area may be a visible area; when a virtual object in the stealth state enters the visible area, the virtual object ends the stealth state and returns to the normal state.
In one possible implementation, when the second virtual object is in the target state, and the second virtual object performs the second action, the second virtual object is triggered to exit the target state.
The second action may be a specified action that, when performed by the virtual object, interrupts its target state.
For example, the target state of the virtual object is interrupted when the second virtual object performs a gun-firing action, a grenade-throwing action, a falling-to-the-ground action, a teammate-rescue action, a wing-mounting action, or a vehicle-boarding action.
In a possible implementation manner, when the second virtual object is in the target state and the second virtual object is hit by the second virtual prop, the second virtual object is triggered to exit the target state.
The second virtual item may be a virtual item that interrupts the virtual object from being in the target state.
For example, the second virtual prop may include a flash bomb and a virtual firearm; when the second virtual object is hit by a flash bomb or a virtual firearm, its target state is interrupted.
In one possible implementation, when the second virtual object is in the target state and the value of the specified attribute of the second virtual object is lower than the value threshold, the second virtual object is triggered to exit the target state.
The specified attribute value may include at least one of a blood volume value, an endurance value, and an energy value of the second virtual object.
For example, when the blood volume value, endurance value, or energy value of the second virtual object is zero, the virtual object may fall down or die; when the second virtual object falls down or dies, its target state is interrupted.
In a possible implementation manner, in the process of exiting the target state, the transparency of the material of the second virtual object is gradually reduced until it reaches zero, at which point the target state is completely exited.
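A compact sketch of the second-condition check (passive exit; names are illustrative and the action set is taken from the examples above):

```python
INTERRUPT_ACTIONS = {"fire_gun", "throw_grenade", "rescue_teammate",
                     "mount_wings", "board_vehicle"}

def meets_second_condition(obj, scene, value_threshold=0.0):
    # Second condition: any single clause triggers exit from the
    # target state while the object is in it.
    return obj.in_target_state and (
        scene.in_second_designated_area(obj)           # e.g. visible area
        or obj.current_action in INTERRUPT_ACTIONS     # second action
        or obj.was_hit_by_second_prop                  # e.g. flash bomb
        or obj.specified_attribute <= value_threshold  # e.g. blood volume
    )
```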
Step 408, in response to that the second virtual object and the first virtual object belong to different teams and that the object distance is smaller than the distance threshold, displaying a visual reminding effect in the virtual scene picture.
The distance threshold is a preset threshold, and the distance threshold may be smaller than the visual distance of the first virtual object.
In one possible implementation manner, a small map display area exists on the virtual scene interface, and the small map display area is used for displaying the position distribution of each virtual object and each virtual building.
In one possible implementation, the position of the virtual object in the same team as the first virtual object is mapped on the small map for display, and the position of the virtual object in the visual range of the first virtual object and in a different team from the first virtual object is mapped on the small map for display.
In one possible implementation manner, when the second virtual object is in the target state, is located within the visual range, and belongs to a different team from the first virtual object, a visual reminding effect is shown on the virtual scene interface if the object distance is smaller than the specified threshold.
The visual reminding effect can be displaying a warning mark or playing a special sound effect.
For example, please refer to fig. 9, which shows a schematic diagram illustrating a visual reminding effect according to an embodiment of the present application. As shown in fig. 9, when a first virtual object 91 and a second virtual object 92 exist in the virtual scene picture and the second virtual object 92 is in the stealth state, whether the object distance 94 is less than or equal to the specified threshold is determined according to the acquired object distance 94; when it is, a visual reminding effect 93 is displayed on the virtual scene picture to notify the user corresponding to the first virtual object 91 that a stealthy second virtual object 92 is nearby.
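A short sketch of this proximity reminder (the HUD call is a placeholder for the warning mark or special sound effect):

```python
def maybe_show_alert(first_obj, second_obj, object_distance, threshold):
    # Remind only about a stealthy enemy within the distance threshold.
    if (second_obj.in_target_state
            and second_obj.team != first_obj.team
            and object_distance < threshold):
        first_obj.hud.show_warning_mark()  # or play a special sound effect
```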
Step 409, in response to the first virtual object meeting the third condition, displaying a triggerable state trigger control in the virtual scene picture in an overlapping manner.
The third condition comprises at least one of: the first virtual object has the virtual resource required for releasing the target skill, and the first virtual object holds a virtual prop capable of releasing the target skill.
And the target skill is the skill corresponding to the state trigger control.
In one possible implementation manner, in response to the first virtual object having the virtual resource required for releasing the target skill, the triggerable state trigger control is displayed in the virtual scene picture in an overlapping manner.
The virtual resource may be a time resource or an energy resource.
For example, when the skill cooldown time of the target skill has elapsed, or the energy of the virtual object reaches the amount required by the target skill, the triggerable state trigger control is displayed in the virtual scene picture in an overlapping manner.
In one possible implementation manner, in response to the first virtual object holding a virtual prop capable of releasing the target skill, the triggerable state trigger control is displayed in the virtual scene picture in an overlapping manner.
For example, when the virtual object acquires or is equipped with a virtual item that can enter a stealth state, a triggerable state trigger control is displayed in a virtual scene picture in an overlapping manner.
Step 410, a state trigger control is displayed in the virtual scene picture in an overlapped mode, and the first virtual object is triggered to enter the target state in response to receiving trigger operation on the state trigger control.
The second virtual object entering the target state may also be implemented in a manner similar to that in step 410, that is, a frame for observing the virtual scene from the perspective of the second virtual object is displayed in a terminal corresponding to the second virtual object, and a state trigger control is displayed in the frame in an overlapping manner, and the second virtual object is triggered to enter the target state in response to receiving a trigger operation on the state trigger control.
In a possible implementation manner, a triggerable state trigger control is displayed in a virtual scene picture in an overlapping manner, a trigger operation is required to be performed to trigger a target state corresponding to the state trigger control, and after the trigger operation is performed, a virtual object starts to enter a target state process.
The first virtual object can show the target state through the first person perspective or the third person perspective.
For example, please refer to fig. 10, which illustrates a schematic diagram of displaying the target state from a first-person perspective according to an embodiment of the present application. As shown in fig. 10, in the first-person perspective, the terminal can display the virtual weapon part of the virtual object itself on the virtual scene screen; in the stealth state, the virtual weapon 1001 is displayed in the stealth state, which can be realized by replacing the material of the local area.
In one possible implementation, the process of implementing the stealth skill as the target skill may be implemented as follows, please refer to fig. 11, which shows a flowchart of a target skill release process according to an embodiment of the present application.
In step 1101, an energy bar of the skill control is displayed on a display interface of the terminal, and the user waits for the energy bar of the skill control to be full.
The time required for the energy bar of the skill control to fill is configured by the game designers; the server updates the time value and issues it to the client, and the energy bar of the virtual control gradually fills in the display interface.
In step 1102, the user releases the stealth skills by clicking on the skill controls.
When the energy of the skill control is full, the stealth skill can be released by clicking.
In step 1103, the whole body of the virtual object (including the firearm and pendant) is replaced with a stealth material.
After releasing the stealth skill, the virtual object acquires all renderers (Renderers) on its body, replaces all materials on the renderers with the invisible material, and plays some stored special effects during the release process to show that the skill is being released.
In step 1104, the virtual object in the stealth state may attack an enemy virtual object.
The virtual object in the stealth state can move at will, and performing an attack action triggers the stealth effect to disappear.
In step 1105, the stealth effect of the virtual object disappears.
The stealth skill can be ended by the virtual object performing an attack action, actively canceling the stealth skill, or being knocked down; when the stealth effect disappears, the materials in all renderers on the virtual object are restored to the materials before the skill release.
In summary, from the perspective of an observer, for a virtual object in the target state, the transparency of that virtual object is adaptively adjusted according to the distance between it and the virtual object corresponding to the observer, so that the display effect of the virtual object in the target state in the virtual scene is closer to reality, its display is better matched with the virtual scene, and its display effect in the virtual scene is improved.
FIG. 12 is an interaction diagram illustrating state transitions of a virtual object, according to an example embodiment. As shown in fig. 12, the client and the server may interact by performing the following steps to expose the virtual object.
S1201, in response to a click on the skill control, the client sends a protocol for requesting release of the stealth skill to the server, wherein the protocol may include a protocol type, an identity of the skill, and an identity of the account.
S1202, after receiving the protocol requesting release of the stealth skill, the server checks whether the skill can be released, and transmits a protocol carrying the skill release result to the client, wherein the protocol comprises a protocol type, an identity of the skill, an identity of the account, and information indicating whether the release succeeded.
S1203, after the skill release succeeds, the server further issues a protocol carrying the stealth gain effect information to the client, wherein the protocol comprises a protocol type, an identity of the account, and an identity of the stealth gain effect. The gain effect may be an additional effect attached to the stealth effect; for example, in the stealth state, a 20% bonus to running and walking speed may be obtained until the stealth state ends.
S1204, after receiving the issued stealth gain effect protocol, the client performs material replacement, changing the materials into the stealth material and playing other related special effects.
S1205, when the playing duration of the material effect ends, or the playing process is terminated in advance, the server issues a protocol for ending the stealth skill, wherein the protocol comprises a protocol type, an identity of the skill, and an identity of the account.
S1206, after receiving the protocol issued by the server for ending the stealth skill, the client restores the character materials to the object materials in place before the skill was released, and closes the stealth effect.
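For illustration only, the protocols exchanged in S1201 to S1205 might be modeled as a single message schema with optional fields; the field names and enumeration values below are assumptions, not part of the embodiment:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class ProtocolType(Enum):
    RELEASE_SKILL_REQUEST = auto()   # S1201: client -> server
    RELEASE_SKILL_RESULT = auto()    # S1202: server -> client
    GAIN_EFFECT_NOTIFY = auto()      # S1203: server -> client
    END_SKILL_NOTIFY = auto()        # S1205: server -> client

@dataclass
class SkillProtocol:
    protocol_type: ProtocolType
    account_id: str
    skill_id: Optional[str] = None        # carried in S1201, S1202, S1205
    success: Optional[bool] = None        # carried only in S1202
    gain_effect_id: Optional[str] = None  # carried only in S1203

# Example: the request sent when the skill control is clicked (S1201).
request = SkillProtocol(
    protocol_type=ProtocolType.RELEASE_SKILL_REQUEST,
    account_id="account-001",
    skill_id="stealth",
)
```

Modeling all four messages with one schema keeps the protocol type as the discriminator, matching the protocol type field carried by every message above.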
In summary, from the perspective of an observer, the transparency of a virtual object in the target state is adaptively adjusted according to the distance between that virtual object and the virtual object corresponding to the observer, so that the display effect of the virtual object in the target state is closer to reality, its display is better matched with the virtual scene, and the display effect of the virtual object in the target state in the virtual scene is improved.
Fig. 13 is a block diagram illustrating a structure of a virtual object presentation apparatus according to an exemplary embodiment. The virtual object presentation apparatus can be used in a computer device to execute all or part of the steps in the methods shown in the embodiments corresponding to fig. 3 or fig. 4. The virtual object presentation apparatus may include:
a picture displaying module 1301, configured to display a virtual scene picture, where the virtual scene picture is a picture of a virtual scene observed from a viewing angle of a first virtual object;
a distance obtaining module 1302, configured to obtain an object distance in response to a second virtual object being in a target state, where the object distance is a distance between the first virtual object and the second virtual object;
a material obtaining module 1303, configured to obtain a first material of the second virtual object based on the object distance; the transparency of the first material of the second virtual object is positively correlated with the object distance;
an object display module 1304, configured to display the second virtual object in the virtual scene picture based on the first material of the second virtual object.
In a possible implementation manner, the material obtaining module 1303 includes:
the first material obtaining submodule is used for obtaining a first material of the second virtual object corresponding to the state entering duration based on the object distance; the state entry duration is a duration for which the second virtual object enters the target state.
In one possible implementation manner, in response to the entry state duration being within a first time interval, the transparency of the first material of the second virtual object is positively correlated with the entry state duration; the first time interval is a time interval in which the second virtual object starts to enter the target state;
in response to the entry state duration being within a second time interval, the transparency of the first material of the second virtual object is inversely related to the entry state duration; the second time interval is a time interval during which the second virtual object exits from the target state.
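A minimal sketch of how the material obtaining module 1303 might combine the two correlations is shown below, assuming transparency is normalized to [0, 1] (1 meaning fully invisible) and assuming simple linear ramps; the interval boundaries and maximum distance are illustrative values, not values from the embodiment:

```python
def stealth_transparency(object_distance: float,
                         entry_duration: float,
                         fade_in_end: float = 1.0,
                         fade_out_start: float = 9.0,
                         stealth_end: float = 10.0,
                         max_distance: float = 50.0) -> float:
    """Transparency in [0, 1]; 1.0 means fully transparent (invisible).

    Positively correlated with the object distance; rises with the state
    entry duration during the first time interval (entering the target
    state) and falls during the second time interval (exiting it).
    """
    # Distance term: a farther observer sees a more transparent object.
    distance_term = min(object_distance / max_distance, 1.0)

    if entry_duration <= fade_in_end:
        # First time interval: transparency grows with entry duration.
        time_term = entry_duration / fade_in_end
    elif entry_duration >= fade_out_start:
        # Second time interval: transparency falls as the state ends.
        time_term = max((stealth_end - entry_duration)
                        / (stealth_end - fade_out_start), 0.0)
    else:
        time_term = 1.0

    return distance_term * time_term
```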
In one possible implementation, the apparatus further includes:
and the track display module is used for responding to the second virtual object in the target state and displaying the moving track of the second virtual object in the virtual scene picture.
In one possible implementation manner, the track display module includes:
the track display submodule is used for displaying the moving track of the second virtual object in the virtual scene picture in response to the state entry duration being within a specified duration interval; the state entry duration is a duration for which the second virtual object enters the target state.
In one possible implementation, the apparatus further includes:
the transparency reduction module is used for reducing the transparency of the material of a second virtual object in response to the second virtual object being in the target state and the second virtual object meeting a first condition;
the first condition includes at least one of the following conditions:
the second virtual object is in a first designated area in the virtual scene, the second virtual object performs a first designated action, and the second virtual object is hit by a first virtual prop.
In one possible implementation, the apparatus further includes:
the exit triggering module is used for triggering the second virtual object to exit the target state in response to the second virtual object being in the target state and the second virtual object meeting a second condition;
the second condition includes at least one of the following conditions:
the second virtual object is in a second designated area in the virtual scene, the second virtual object executes a second action, the second virtual object is hit by a second virtual prop, and the value of the designated attribute of the second virtual object is lower than a value threshold.
In one possible implementation, the apparatus further includes:
and the reminding display module is used for responding that the second virtual object and the first virtual object belong to different teams, and the object distance is smaller than a distance threshold value, and displaying a visual reminding effect in the virtual scene picture.
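As a sketch of the reminder display module, under the assumption of a simple coordinate representation and an illustrative distance threshold value:

```python
import math
from dataclasses import dataclass

@dataclass
class VirtualObject:
    team_id: int
    x: float
    y: float
    z: float

def should_show_reminder(first: VirtualObject, second: VirtualObject,
                         distance_threshold: float = 15.0) -> bool:
    """A visual reminder is shown when the two objects belong to different
    teams and the object distance is below the threshold (value assumed)."""
    object_distance = math.dist((first.x, first.y, first.z),
                                (second.x, second.y, second.z))
    return first.team_id != second.team_id and object_distance < distance_threshold
```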
In a possible implementation manner, the material obtaining module 1303 includes:
and the second material obtaining sub-module is used for responding to that the second virtual object and the first virtual object belong to different teams, and obtaining the first material of the second virtual object based on the object distance.
In a possible implementation manner, a state trigger control is displayed in a virtual scene picture in an overlapped mode; the device further comprises:
and the state triggering module is used for triggering the first virtual object to enter the target state in response to receiving the triggering operation of the state triggering control.
In one possible implementation, the apparatus further includes:
in response to the first virtual object meeting a third condition, displaying the triggerable state trigger control in the virtual scene picture in an overlapping mode;
the third condition includes at least one of the following conditions:
the first virtual object possesses a virtual resource required for releasing a target skill, and the first virtual object possesses a virtual prop for releasing the target skill;
and the target skill is the skill corresponding to the state trigger control.
In summary, from the perspective of an observer, the transparency of a virtual object in the target state is adaptively adjusted according to the distance between that virtual object and the virtual object corresponding to the observer, so that the display effect of the virtual object in the target state is closer to reality, its display is better matched with the virtual scene, and the display effect of the virtual object in the target state in the virtual scene is improved.
FIG. 14 is a block diagram illustrating the structure of a computer device 1400 according to an exemplary embodiment. The computer device 1400 may be a user terminal, such as a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a laptop computer, or a desktop computer. The computer device 1400 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
Generally, computer device 1400 includes: a processor 1401, and a memory 1402.
Processor 1401 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 1401 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). Processor 1401 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1401 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, processor 1401 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1402 may include one or more computer-readable storage media, which may be non-transitory. Memory 1402 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1402 is used to store at least one instruction for execution by processor 1401 to implement all or part of the steps of a method provided by the method embodiments herein.
In some embodiments, computer device 1400 may also optionally include: a peripheral device interface 1403 and at least one peripheral device. The processor 1401, the memory 1402, and the peripheral device interface 1403 may be connected by buses or signal lines. Each peripheral device may be connected to the peripheral device interface 1403 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1404, a touch display 1405, a camera 1406, audio circuitry 1407, a positioning component 1408, and a power supply 1409.
The peripheral device interface 1403 can be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1401 and the memory 1402. In some embodiments, the processor 1401, memory 1402, and peripheral interface 1403 are integrated on the same chip or circuit board; in some other embodiments, any one or both of the processor 1401, the memory 1402, and the peripheral device interface 1403 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1404 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1404 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1404 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1404 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1404 may further include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 1405 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1405 is a touch display screen, the display screen 1405 also has the ability to capture touch signals at or above the surface of the display screen 1405. The touch signal may be input to the processor 1401 for processing as a control signal. At this point, the display 1405 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards. In some embodiments, the display 1405 may be one, providing the front panel of the computer device 1400; in other embodiments, the display 1405 may be at least two, respectively disposed on different surfaces of the computer device 1400 or in a folded design; in still other embodiments, the display 1405 may be a flexible display disposed on a curved surface or on a folded surface of the computer device 1400. Even further, the display 1405 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display 1405 can be made of LCD (Liquid Crystal Display), OLED (organic light-Emitting Diode), and the like.
The camera assembly 1406 is used to capture images or video. Optionally, camera assembly 1406 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1406 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 1407 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1401 for processing or inputting the electric signals to the radio frequency circuit 1404 to realize voice communication. For stereo capture or noise reduction purposes, the microphones may be multiple and located at different locations on the computer device 1400. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is then used to convert electrical signals from the processor 1401 or the radio frequency circuit 1404 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuit 1407 may also include a headphone jack.
The positioning component 1408 is operable to locate the current geographic location of the computer device 1400 to implement navigation or LBS (Location Based Service). The positioning component 1408 may be based on the Global Positioning System (GPS) of the United States, the BeiDou system of China, the Global Navigation Satellite System (GLONASS) of Russia, or the Galileo system of Europe.
The power supply 1409 is used to power the various components of the computer device 1400. The power source 1409 may be alternating current, direct current, disposable or rechargeable. When the power source 1409 comprises a rechargeable battery, the rechargeable battery can be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the computer device 1400 also includes one or more sensors 1410. The one or more sensors 1410 include, but are not limited to: an acceleration sensor 1411, a gyroscope sensor 1412, a pressure sensor 1413, a fingerprint sensor 1414, an optical sensor 1415, and a proximity sensor 1416.
The acceleration sensor 1411 may detect the magnitude of acceleration on three coordinate axes of a coordinate system established with the computer apparatus 1400. For example, the acceleration sensor 1411 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1401 can control the touch display 1405 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1411. The acceleration sensor 1411 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 1412 may detect a body direction and a rotation angle of the computer device 1400, and the gyro sensor 1412 may cooperate with the acceleration sensor 1411 to collect a 3D motion of the user on the computer device 1400. The processor 1401 can realize the following functions according to the data collected by the gyro sensor 1412: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensors 1413 may be disposed on the side bezel of the computer device 1400 and/or underneath the touch display 1405. When the pressure sensor 1413 is disposed on the side frame of the computer device 1400, the user's holding signal to the computer device 1400 can be detected, and the processor 1401 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1413. When the pressure sensor 1413 is disposed at the lower layer of the touch display 1405, the processor 1401 controls the operability control on the UI interface according to the pressure operation of the user on the touch display 1405. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1414 is used for collecting a fingerprint of a user, and the processor 1401 identifies the user according to the fingerprint collected by the fingerprint sensor 1414, or the fingerprint sensor 1414 itself identifies the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1401 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1414 may be disposed on the front, back, or side of the computer device 1400. When a physical key or vendor Logo is provided on the computer device 1400, the fingerprint sensor 1414 may be integrated with the physical key or vendor Logo.
The optical sensor 1415 is used to collect ambient light intensity. In one embodiment, processor 1401 can control the display brightness of touch display 1405 based on the ambient light intensity collected by optical sensor 1415. Specifically, when the ambient light intensity is high, the display luminance of the touch display 1405 is increased; when the ambient light intensity is low, the display brightness of the touch display 1405 is turned down. In another embodiment, the processor 1401 can also dynamically adjust the shooting parameters of the camera assembly 1406 according to the intensity of the ambient light collected by the optical sensor 1415.
A proximity sensor 1416, also known as a distance sensor, is typically provided on the front panel of the computer device 1400. The proximity sensor 1416 is used to capture the distance between the user and the front of the computer device 1400. In one embodiment, when the proximity sensor 1416 detects that the distance between the user and the front of the computer device 1400 gradually decreases, the processor 1401 controls the touch display 1405 to switch from a bright-screen state to a dark-screen state; when the proximity sensor 1416 detects that the distance gradually increases, the processor 1401 controls the touch display 1405 to switch from the dark-screen state to the bright-screen state.
Those skilled in the art will appreciate that the architecture shown in FIG. 14 is not intended to be limiting of the computer device 1400, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to execute the virtual object presentation method provided in the above aspect or in the various alternative implementations of the above aspect.
In an exemplary embodiment, a non-transitory computer readable storage medium including instructions, such as a memory including at least one instruction, at least one program, set of codes, or set of instructions, executable by a processor to perform all or part of the steps of the method illustrated in the corresponding embodiments of fig. 3 or fig. 5 or fig. 6 is also provided. For example, the non-transitory computer readable storage medium may be a ROM (Read-Only Memory), a Random Access Memory (RAM), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (15)

1. A virtual object display method, characterized in that the method comprises:
displaying a virtual scene picture, wherein the virtual scene picture is a picture of a virtual scene observed from the visual angle of a first virtual object;
in response to a second virtual object being in a target state, obtaining an object distance, the object distance being a distance between the first virtual object and the second virtual object;
acquiring a first material of the second virtual object based on the object distance; the transparency of the first material of the second virtual object is positively correlated with the object distance;
and displaying the second virtual object in the virtual scene picture based on the first material of the second virtual object.
2. The method of claim 1, wherein obtaining the first material of the second virtual object based on the object distance comprises:
acquiring a first material of the second virtual object corresponding to the state entering duration based on the object distance; the state entry duration is a duration for which the second virtual object enters the target state.
3. The method of claim 2,
in response to the entry state duration being within a first time interval, the transparency of the first material of the second virtual object is positively correlated with the entry state duration; the first time interval is a time interval in which the second virtual object starts to enter the target state;
in response to the entry state duration being within a second time interval, the transparency of the first material of the second virtual object is inversely related to the entry state duration; the second time interval is a time interval during which the second virtual object exits from the target state.
4. The method of claim 1, further comprising:
in response to the second virtual object being in the target state, showing a movement trajectory of the second virtual object in the virtual scene screen.
5. The method according to claim 4, wherein the presenting a movement trajectory of the second virtual object in the virtual scene screen in response to the second virtual object being in the target state comprises:
displaying a moving track of the second virtual object in the virtual scene picture in response to the entering state duration being in a specified duration interval; the state entry duration is a duration for which the second virtual object enters the target state.
6. The method of claim 1, further comprising:
in response to a second virtual object being in the target state and the second virtual object satisfying a first condition, reducing transparency of a material of the second virtual object;
the first condition includes at least one of the following conditions:
the second virtual object is in a first designated area in the virtual scene, the second virtual object performs a first designated action, and the second virtual object is hit by a first virtual prop.
7. The method of claim 1, further comprising:
in response to a second virtual object being in the target state and the second virtual object satisfying a second condition, triggering the second virtual object to exit the target state;
the second condition includes at least one of the following conditions:
the second virtual object is in a second designated area in the virtual scene, the second virtual object executes a second action, the second virtual object is hit by a second virtual prop, and the value of the designated attribute of the second virtual object is lower than a value threshold.
8. The method of claim 1, further comprising:
and displaying a visual reminding effect in the virtual scene picture in response to that the second virtual object and the first virtual object belong to different teams and the object distance is smaller than a distance threshold value.
9. The method of claim 1, wherein obtaining the first material of the second virtual object based on the object distance comprises:
and acquiring a first material of the second virtual object based on the object distance in response to the second virtual object and the first virtual object belonging to different teams.
10. The method according to claim 1, wherein a state trigger control is displayed in the virtual scene picture in an overlapping manner; the method further comprises the following steps:
and triggering the first virtual object to enter the target state in response to receiving a triggering operation on the state triggering control.
11. The method of claim 10, further comprising:
in response to the first virtual object meeting a third condition, displaying the triggerable state trigger control in the virtual scene picture in an overlapping mode;
the third condition includes at least one of the following conditions:
the first virtual object possesses a virtual resource required for releasing a target skill, and the first virtual object possesses a virtual prop for releasing the target skill;
and the target skill is the skill corresponding to the state trigger control.
12. A virtual object display method, characterized in that the method comprises:
displaying a first virtual scene picture, wherein the first virtual scene picture is a picture of a virtual scene observed from a visual angle of a first virtual object;
in response to a second virtual object in the first virtual scene picture being in a target state, presenting a second virtual scene picture; the second virtual scene picture contains the first virtual object and the second virtual object; the transparency of the second virtual object is positively correlated with the object distance; the object distance is a distance between the first virtual object and the second virtual object.
13. An apparatus for presenting a virtual object, the apparatus comprising:
the picture display module is used for displaying a virtual scene picture, wherein the virtual scene picture is a picture of a virtual scene observed from the visual angle of the first virtual object;
a distance acquisition module, configured to acquire an object distance in response to a second virtual object being in a target state, where the object distance is a distance between the first virtual object and the second virtual object;
a material obtaining module, configured to obtain a first material of the second virtual object based on the object distance; the transparency of the first material of the second virtual object is positively correlated with the object distance;
and the object display module is used for displaying the second virtual object in the virtual scene picture based on the first material of the second virtual object.
14. A computer device comprising a processor and a memory, said memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, said at least one instruction, said at least one program, said set of codes, or set of instructions being loaded and executed by said processor to implement a virtual object representation method according to any one of claims 1 to 11.
15. A computer-readable storage medium, having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the virtual object representation method according to any one of claims 1 to 11.
CN202010653031.2A 2020-07-08 2020-07-08 Virtual object display method and device, computer equipment and storage medium Active CN111659117B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010653031.2A CN111659117B (en) 2020-07-08 2020-07-08 Virtual object display method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010653031.2A CN111659117B (en) 2020-07-08 2020-07-08 Virtual object display method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111659117A true CN111659117A (en) 2020-09-15
CN111659117B CN111659117B (en) 2023-03-21

Family

ID=72391701

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010653031.2A Active CN111659117B (en) 2020-07-08 2020-07-08 Virtual object display method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111659117B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112245912A (en) * 2020-11-11 2021-01-22 腾讯科技(深圳)有限公司 Sound prompting method, device, equipment and storage medium in virtual scene
CN112295230A (en) * 2020-10-30 2021-02-02 腾讯科技(深圳)有限公司 Method, device, equipment and storage medium for activating virtual props in virtual scene
CN112717392A (en) * 2021-01-21 2021-04-30 腾讯科技(深圳)有限公司 Mark display method, device, terminal and storage medium
CN113101638A (en) * 2021-04-19 2021-07-13 网易(杭州)网络有限公司 Interactive data processing method and device in game
WO2022121653A1 (en) * 2020-12-08 2022-06-16 上海米哈游天命科技有限公司 Transparency determination method and apparatus, electronic device, and storage medium
CN116704843A (en) * 2023-06-07 2023-09-05 广西茜英信息技术有限公司 Virtual simulation training platform based on communication engineering investigation design

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120146993A1 (en) * 2010-12-10 2012-06-14 Nintendo Co., Ltd. Computer-readable storage medium having stored therein display control program, display control apparatus, display control method, and display control system
CN107430479A (en) * 2015-03-31 2017-12-01 索尼公司 Information processor, information processing method and program
CN107890673A (en) * 2017-09-30 2018-04-10 网易(杭州)网络有限公司 Visual display method and device, storage medium, the equipment of compensating sound information
CN108525300A (en) * 2018-04-27 2018-09-14 腾讯科技(深圳)有限公司 Position indication information display methods, device, electronic device and storage medium
JP6407460B1 (en) * 2018-02-16 2018-10-17 キヤノン株式会社 Image processing apparatus, image processing method, and program
EP3588249A1 (en) * 2018-06-26 2020-01-01 Koninklijke Philips N.V. Apparatus and method for generating images of a scene

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120146993A1 (en) * 2010-12-10 2012-06-14 Nintendo Co., Ltd. Computer-readable storage medium having stored therein display control program, display control apparatus, display control method, and display control system
CN107430479A (en) * 2015-03-31 2017-12-01 索尼公司 Information processor, information processing method and program
CN107890673A (en) * 2017-09-30 2018-04-10 网易(杭州)网络有限公司 Visual display method and device, storage medium, the equipment of compensating sound information
JP6407460B1 (en) * 2018-02-16 2018-10-17 キヤノン株式会社 Image processing apparatus, image processing method, and program
CN110720114A (en) * 2018-02-16 2020-01-21 佳能株式会社 Image processing apparatus, image processing method, and program
CN108525300A (en) * 2018-04-27 2018-09-14 腾讯科技(深圳)有限公司 Position indication information display methods, device, electronic device and storage medium
EP3588249A1 (en) * 2018-06-26 2020-01-01 Koninklijke Philips N.V. Apparatus and method for generating images of a scene

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112295230A (en) * 2020-10-30 2021-02-02 腾讯科技(深圳)有限公司 Method, device, equipment and storage medium for activating virtual props in virtual scene
CN112295230B (en) * 2020-10-30 2022-07-29 腾讯科技(深圳)有限公司 Method, device, equipment and storage medium for activating virtual props in virtual scene
CN112245912A (en) * 2020-11-11 2021-01-22 腾讯科技(深圳)有限公司 Sound prompting method, device, equipment and storage medium in virtual scene
CN112245912B (en) * 2020-11-11 2022-07-12 腾讯科技(深圳)有限公司 Sound prompting method, device, equipment and storage medium in virtual scene
WO2022121653A1 (en) * 2020-12-08 2022-06-16 上海米哈游天命科技有限公司 Transparency determination method and apparatus, electronic device, and storage medium
CN112717392A (en) * 2021-01-21 2021-04-30 腾讯科技(深圳)有限公司 Mark display method, device, terminal and storage medium
WO2022156504A1 (en) * 2021-01-21 2022-07-28 腾讯科技(深圳)有限公司 Mark processing method and apparatus, and computer device, storage medium and program product
TWI804153B (en) * 2021-01-21 2023-06-01 大陸商騰訊科技(深圳)有限公司 Mark processing method, apparatus, computing device, storage medium and program product
CN113101638A (en) * 2021-04-19 2021-07-13 网易(杭州)网络有限公司 Interactive data processing method and device in game
CN116704843A (en) * 2023-06-07 2023-09-05 广西茜英信息技术有限公司 Virtual simulation training platform based on communication engineering investigation design
CN116704843B (en) * 2023-06-07 2024-02-23 广西茜英信息技术有限公司 Virtual simulation training platform based on communication engineering investigation design

Also Published As

Publication number Publication date
CN111659117B (en) 2023-03-21

Similar Documents

Publication Publication Date Title
CN110694261B (en) Method, terminal and storage medium for controlling virtual object to attack
CN109529319B (en) Display method and device of interface control and storage medium
CN111659117B (en) Virtual object display method and device, computer equipment and storage medium
CN110755841B (en) Method, device and equipment for switching props in virtual environment and readable storage medium
CN110585710B (en) Interactive property control method, device, terminal and storage medium
WO2021143259A1 (en) Virtual object control method and apparatus, device, and readable storage medium
CN113398571B (en) Virtual item switching method, device, terminal and storage medium
CN111589130B (en) Virtual object control method, device, equipment and storage medium in virtual scene
CN111921197B (en) Method, device, terminal and storage medium for displaying game playback picture
CN111420402B (en) Virtual environment picture display method, device, terminal and storage medium
CN111589132A (en) Virtual item display method, computer equipment and storage medium
CN110917623B (en) Interactive information display method, device, terminal and storage medium
CN112870715B (en) Virtual item putting method, device, terminal and storage medium
CN111744184A (en) Control display method in virtual scene, computer equipment and storage medium
CN111672106B (en) Virtual scene display method and device, computer equipment and storage medium
CN110721469A (en) Method, terminal and medium for shielding virtual object in virtual environment
CN112221141A (en) Method and device for controlling virtual object to use virtual prop
CN113713382A (en) Virtual prop control method and device, computer equipment and storage medium
CN112138374B (en) Virtual object attribute value control method, computer device, and storage medium
CN112316421A (en) Equipment method, device, terminal and storage medium of virtual prop
CN113289331A (en) Display method and device of virtual prop, electronic equipment and storage medium
CN111475029A (en) Operation method, device, equipment and storage medium of virtual prop
CN113713383A (en) Throwing prop control method and device, computer equipment and storage medium
CN112330823A (en) Virtual item display method, device, equipment and readable storage medium
CN112221142A (en) Control method and device of virtual prop, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40028968

Country of ref document: HK

GR01 Patent grant
GR01 Patent grant