CN111389000A - Method, apparatus, device, and medium for using a virtual prop - Google Patents

Method, apparatus, device, and medium for using a virtual prop

Info

Publication number
CN111389000A
Authority
CN
China
Prior art keywords
virtual
prop
character
environment
virtual character
Prior art date
Legal status
Pending
Application number
CN202010188505.0A
Other languages
Chinese (zh)
Inventor
Yao Li (姚丽)
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010188505.0A
Publication of CN111389000A

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/837 Shooting of targets
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308 Details of the user interface
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8076 Shooting


Abstract

The application discloses a method, apparatus, device, and medium for using a virtual prop, and relates to the field of virtual environments. The method includes: displaying a first user interface, where the first user interface includes a virtual environment picture, and the virtual environment picture is a picture of the virtual environment observed from the perspective of a first virtual character belonging to a first camp; in response to a prop use instruction, setting a virtual prop in an inactivated state in the virtual environment, where the virtual prop is a trap prop that is invisible to virtual characters of a second camp when in the inactivated state and includes a virtual detection line for simulating a non-physical object; and in response to a second virtual character colliding with the virtual detection line, changing the virtual prop to an activated state and controlling the second virtual character to be subjected to a negative effect of the virtual prop, where the second virtual character is a virtual character belonging to the second camp, and the virtual prop is visible to the virtual characters of the second camp when in the activated state. The method can improve human-computer interaction efficiency.

Description

Method, apparatus, device, and medium for using a virtual prop
Technical Field
The embodiments of the application relate to the field of virtual environments, and in particular to a method, apparatus, device, and medium for using a virtual prop.
Background
In an application based on a three-dimensional virtual environment, such as a first-person shooter game, a user can control a virtual character in the virtual environment to walk, run, climb, shoot, fight, and so on, and multiple users can team up online to cooperatively complete a task in the same virtual environment. A virtual character needs to attack other virtual characters in the virtual environment with firearms and ammunition in order to eliminate them and win.
In the related art, after aiming at another virtual character, the user can control a virtual firearm to fire bullets by triggering a shooting control, or throw a grenade at the other virtual character by triggering a throwing control, in order to attack it.
These attack modes can only be triggered actively by the user. When a virtual character fights several other virtual characters at the same time, the user must continually switch aiming targets and continually trigger controls to attack. The operation is too complex, and the human-computer interaction efficiency is too low.
Disclosure of Invention
The embodiments of the application provide a method, apparatus, device, and medium for using a virtual prop, which can improve human-computer interaction efficiency. The technical solution is as follows:
In one aspect, a method for using a virtual prop is provided, the method including:
displaying a first user interface, where the first user interface includes a virtual environment picture, and the virtual environment picture is a picture of the virtual environment observed from the perspective of a first virtual character belonging to a first camp;
in response to a prop use instruction, setting a virtual prop in an inactivated state in the virtual environment, where the virtual prop is a trap prop that is invisible to virtual characters of a second camp when in the inactivated state, and the virtual prop includes a virtual detection line for simulating a non-physical object;
and in response to a second virtual character colliding with the virtual detection line, changing the virtual prop to an activated state and controlling the second virtual character to be subjected to a negative effect of the virtual prop, where the second virtual character is a virtual character belonging to the second camp, and the virtual prop is visible to the virtual characters of the second camp when in the activated state.
In another aspect, an apparatus for using a virtual prop is provided, the apparatus including:
a display module, configured to display a first user interface, where the first user interface includes a virtual environment picture, and the virtual environment picture is a picture of the virtual environment observed from the perspective of a first virtual character belonging to a first camp;
a setting module, configured to set, in response to a prop use instruction, a virtual prop in an inactivated state in the virtual environment, where the virtual prop is a trap prop that is invisible to virtual characters of a second camp when in the inactivated state, and the virtual prop includes a virtual detection line for simulating a non-physical object;
a collision module, configured to detect that a second virtual character collides with the virtual detection line, where the second virtual character is a virtual character belonging to the second camp;
an activation module, configured to change the virtual prop to an activated state in response to the second virtual character colliding with the virtual detection line, where the virtual prop is visible to the virtual characters of the second camp when in the activated state;
and a control module, configured to control the second virtual character to be subjected to a negative effect of the virtual prop.
In another aspect, a computer device is provided, including a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the method for using a virtual prop as described above.
In another aspect, a computer-readable storage medium is provided, where the storage medium stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the method for using a virtual prop as described above.
The beneficial effects brought by the technical solutions provided in the embodiments of the present application at least include:
By setting a virtual prop in the virtual environment, the virtual prop is triggered when an enemy virtual character passes through the virtual detection line of the virtual prop, producing a negative effect on the enemy virtual character. The user can attack enemy virtual characters by placing the virtual prop on a path the enemy virtual characters must take. The attack does not need to be triggered actively by the user; instead, it is triggered passively when the client detects the collision between the enemy virtual character and the virtual detection line. This simplifies the user's attack operation and improves human-computer interaction efficiency.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a terminal provided in an exemplary embodiment of the present application;
FIG. 2 is a block diagram of a computer system provided in an exemplary embodiment of the present application;
FIG. 3 is a flowchart of a method for using a virtual prop, provided in an exemplary embodiment of the present application;
FIG. 4 is a schematic view of a camera model corresponding to a perspective of a virtual object provided by an exemplary embodiment of the present application;
FIG. 5 is a schematic view of a user interface of a method of using a virtual prop provided in another exemplary embodiment of the present application;
FIG. 6 is a schematic user interface diagram of a method of using a virtual prop provided in accordance with another exemplary embodiment of the present application;
FIG. 7 is a schematic user interface diagram of a method of using a virtual prop provided in accordance with another exemplary embodiment of the present application;
FIG. 8 is a schematic user interface diagram of a method of using a virtual prop provided in accordance with another exemplary embodiment of the present application;
FIG. 9 is a schematic user interface diagram of a method of using a virtual prop provided in accordance with another exemplary embodiment of the present application;
FIG. 10 is a schematic user interface diagram of a method of using a virtual prop provided in accordance with another exemplary embodiment of the present application;
FIG. 11 is a flowchart of a method for using a virtual prop, provided in accordance with another exemplary embodiment of the present application;
FIG. 12 is a schematic view of a virtual environment of a method of using a virtual prop provided in accordance with another exemplary embodiment of the present application;
FIG. 13 is a flowchart of a method for using a virtual prop, provided in accordance with another exemplary embodiment of the present application;
FIG. 14 is a flowchart of a method for using a virtual prop, provided in accordance with another exemplary embodiment of the present application;
FIG. 15 is a schematic user interface diagram of a method of using a virtual prop provided in accordance with another exemplary embodiment of the present application;
FIG. 16 is a schematic view of a virtual environment of a method of using a virtual prop provided in accordance with another exemplary embodiment of the present application;
FIG. 17 is a flowchart of a method for using a virtual prop, provided in accordance with another exemplary embodiment of the present application;
FIG. 18 is a flowchart of a method for using a virtual prop, provided in accordance with another exemplary embodiment of the present application;
FIG. 19 is a schematic user interface diagram of a method of using a virtual prop provided in accordance with another exemplary embodiment of the present application;
FIG. 20 is a schematic user interface diagram of a method of using a virtual prop provided in accordance with another exemplary embodiment of the present application;
FIG. 21 is a flowchart of a method for using a virtual prop, provided in accordance with another exemplary embodiment of the present application;
FIG. 22 is a block diagram of an apparatus for using a virtual prop provided in accordance with another exemplary embodiment of the present application;
fig. 23 is a block diagram of a terminal provided in an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, embodiments of the present application are described in further detail below with reference to the accompanying drawings.
First, terms referred to in the embodiments of the present application are briefly described:
virtual environment: is a virtual environment that is displayed (or provided) when an application is run on the terminal. The virtual environment may be a simulated world of a real world, a semi-simulated semi-fictional world, or a purely fictional world. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment, which is not limited in this application. The following embodiments are illustrated with the virtual environment being a three-dimensional virtual environment.
Virtual character: a movable object in the virtual environment. The movable object may be a virtual person, a virtual animal, an anime character, etc., such as a person, animal, plant, oil drum, wall, or stone displayed in a three-dimensional virtual environment. Optionally, the virtual character is a three-dimensional model created based on skeletal animation technology. Each virtual character has its own shape and volume in the three-dimensional virtual environment and occupies part of the space in the three-dimensional virtual environment.
User Interface (UI) controls: any visual control or element that can be seen on the user interface of the application, such as pictures, input boxes, text boxes, buttons, and labels. Some UI controls respond to user operations; for example, a movement control lets the user control the virtual character to move within the virtual environment: the user triggers the movement control to make the virtual character move forward, backward, left, or right, climb, swim, jump, and so on. The UI controls referred to in the embodiments of the present application include, but are not limited to: a movement control and a jump control.
The method provided in the present application may be applied to any one of a Virtual Reality (VR) application, an Augmented Reality (AR) program, a three-dimensional map program, a military simulation program, a virtual reality game, an augmented reality game, a First-Person Shooter (FPS) game, a Third-Person Shooter (TPS) game, a Multiplayer Online Battle Arena (MOBA) game, and a Simulation Game (SLG).
Illustratively, a game in the virtual environment consists of one or more maps of the game world, and the virtual environment in the game simulates a real-world scene. The user can control a virtual character in the game to walk, run, jump, shoot, fight, drive, attack other virtual characters with virtual weapons, and so on in the virtual environment. The interactivity is strong, and multiple users can team up online for a competitive game.
The client may support at least one of the Windows operating system, the Apple operating system, the Android operating system, the iOS operating system, and the Linux operating system, and clients on different operating systems may be interconnected.
In some embodiments, the client is an application developed based on a three-dimensional engine, such as the three-dimensional engine being a Unity engine.
The terminal in the present application may be a desktop computer, a laptop computer, a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, etc. A client that supports a virtual environment, such as a client of an application supporting a three-dimensional virtual environment, is installed and running in the terminal.
Fig. 1 is a schematic structural diagram of a terminal according to an exemplary embodiment of the present application. As shown in fig. 1, the terminal includes a processor 101, a touch screen 102, and a memory 103.
The processor 101 may be at least one of a single-core processor, a multi-core processor, an embedded chip, and a processor having instruction execution capabilities.
The touch screen 102 is either an ordinary touch screen or a pressure-sensitive touch screen. An ordinary touch screen can sense a pressing operation or a sliding operation applied to the touch screen 102; a pressure-sensitive touch screen can also measure the degree of pressure exerted on the touch screen 102.
The memory 103 stores executable programs of the processor 101. Illustratively, the memory 103 stores a virtual environment program A, an application B, an application C, a touch-and-pressure sensing module 18, and a kernel layer 19 of an operating system. The virtual environment program A is an application developed based on the three-dimensional virtual engine 17. Optionally, the virtual environment program A includes, but is not limited to, at least one of a game program, a virtual reality program, a three-dimensional map program, and a three-dimensional presentation program developed with the three-dimensional virtual engine (also referred to as a virtual environment engine) 17. For example, when the operating system of the terminal is the Android operating system, the virtual environment program A is developed in the Java programming language and C#; when the operating system of the terminal is iOS, the virtual environment program A is developed in Objective-C and C#.
The three-dimensional virtual engine 17 is a three-dimensional interactive engine supporting multiple operating system platforms. Illustratively, the three-dimensional virtual engine may be used for program development in multiple fields, such as game development, Virtual Reality (VR), and three-dimensional maps. The specific type of the three-dimensional virtual engine 17 is not limited in the embodiments of the present application; the following embodiments take the three-dimensional virtual engine 17 being the Unity engine as an example.
The touch (and pressure) sensing module 18 is a module for receiving touch events (and pressure touch events) reported by the touch screen driver 191. Optionally, the touch sensing module may lack the pressure sensing function and not receive pressure touch events. A touch event includes the type of the touch event and coordinate values; the types of touch events include, but are not limited to: a touch start event, a touch move event, and a touch end event. A pressure touch event includes the pressure value and the coordinate values of the pressure touch event. The coordinate values indicate the touch position of the pressure touch operation on the display screen. Optionally, an abscissa axis is established in the horizontal direction of the display screen and an ordinate axis in the vertical direction to obtain a two-dimensional coordinate system. A minimal sketch of this data structure follows.
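For illustration only, the touch event data described above might be represented as in the following C# sketch; all type and field names are assumptions rather than part of the disclosed terminal.

    // Minimal sketch of the touch event data described above.
    // All names are illustrative assumptions, not the patent's implementation.
    public enum TouchEventType { TouchStart, TouchMove, TouchEnd }

    public struct TouchEvent
    {
        public TouchEventType Type; // the type of the touch event
        public float X;             // abscissa of the touch position on the display screen
        public float Y;             // ordinate of the touch position on the display screen
    }

    public struct PressureTouchEvent
    {
        public float Pressure;      // pressure value from a pressure-sensitive screen
        public float X;             // coordinates of the pressure touch event
        public float Y;
    }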
Illustratively, the kernel layer 19 includes a touch screen driver 191 and other drivers 192. The touch screen driver 191 is a module for detecting pressure touch events; when the touch screen driver 191 detects a pressure touch event, it passes the event to the touch (and pressure) sensing module 18.
Other drivers 192 may be drivers associated with the processor 101, drivers associated with the memory 103, drivers associated with network components, drivers associated with sound components, and the like.
Those skilled in the art will appreciate that the foregoing is merely a general illustration of the structure of the terminal. A terminal may have more or fewer components in different embodiments. For example, the terminal may further include a gravitational acceleration sensor, a gyro sensor, a power supply, and the like.
Fig. 2 shows a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 200 includes: terminal 210, server cluster 220.
A client 211 supporting a virtual environment is installed and running on the terminal 210; the client 211 may be an application supporting a virtual environment. When the terminal runs the client 211, a user interface of the client 211 is displayed on the screen of the terminal 210. The client may be any one of an FPS game, a TPS game, a military simulation program, a MOBA game, a tactical competition game, and an SLG game; in this embodiment the client is illustrated as an FPS game. The terminal 210 is a terminal used by a first user 212. The first user 212 uses the terminal 210 to control a first virtual character located in the virtual environment; the first virtual character may be referred to as the virtual character of the first user 212. The activities of the first virtual character include, but are not limited to, at least one of adjusting body posture, crawling, walking, running, riding, flying, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the first virtual character is, for example, a simulated person character or an anime character.
The device types of the terminal 210 include: at least one of a smartphone, a tablet, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, and a desktop computer.
Only one terminal is shown in fig. 2, but there may be multiple other terminals 240 in different embodiments. In some embodiments, at least one other terminal 240 corresponds to a developer: a development and editing platform for the client of the virtual environment is installed on the other terminal 240, the developer can edit and update the client on it and transmit the updated client installation package to the server cluster 220 through a wired or wireless network, and the terminal 210 can download the installation package from the server cluster 220 to update the client.
The terminal 210 and the other terminals 240 are connected to the server cluster 220 through a wireless network or a wired network.
The server cluster 220 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. Server cluster 220 is used to provide background services for clients that support a three-dimensional virtual environment. Optionally, the server cluster 220 undertakes primary computing work and the terminals undertake secondary computing work; or, the server cluster 220 undertakes the secondary computing work, and the terminal undertakes the primary computing work; or, the server cluster 220 and the terminal perform cooperative computing by using a distributed computing architecture.
Optionally, the terminal and the server are both computer devices.
In one illustrative example, the server cluster 220 includes servers 221 and 226, where the server 221 includes a processor 222, a user account database 223, a combat service module 224, and a user-facing Input/Output (I/O) interface 225. The processor 222 is configured to load instructions stored in the server 221 and to process the data in the user account database 223 and the combat service module 224. The user account database 223 stores the data of the user accounts used by the terminal 210 and the other terminals 240, such as the avatars, nicknames, and combat power indexes of the user accounts, and the service areas where the user accounts are located. The combat service module 224 provides multiple combat rooms in which users fight against each other. The user-facing I/O interface 225 establishes communication with the terminal 210 through a wireless or wired network to exchange data.
With reference to the above descriptions of the virtual environment and the implementation environment, the method for using a virtual prop provided in the embodiments of the present application is described below. The execution subject of the method is illustrated as the client running on the terminal shown in fig. 1. The terminal runs an application that supports the virtual environment.
The application provides an exemplary embodiment in which the method for using a virtual prop is applied to an FPS game.
When the virtual character enters the virtual environment and matching begins, a skill selection interface for this match can pop up on the user interface, and the user can select a skill from it to use in this match. Illustratively, virtual characters of the same camp cannot use the same skill: when a skill has been selected by another virtual character of the same camp, the user can no longer select it. For example, the method for using a virtual prop provided in the application can be used for the skill "trap chip". Illustratively, the virtual character selects "trap chip" as the skill for this match.
Illustratively, after the virtual character selects the skill, a use control for the skill is displayed on the right side of the user interface; illustratively, the "trap chip" skill corresponds to a trap placement control. Illustratively, after the user selects the "trap chip" skill, the skill enters a cooldown period during which it is unavailable; the trap placement control cannot be triggered and is displayed in a non-triggerable state, e.g. in gray. Illustratively, the cooldown period is 75 s: 75 s after the user selects the "trap chip" skill, the user can use the skill by triggering the trap placement control. Illustratively, after the user has used the "trap chip" skill once, the skill enters the cooldown period again, and the user can use the skill again only after the cooldown ends. A minimal sketch of this cooldown follows.
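For illustration, the cooldown behavior described above could be sketched in Unity C# as follows. The class and field names, and the use of a UnityEngine.UI.Button for the trap placement control, are assumptions; the 75 s figure comes from the example above.

    using UnityEngine;
    using UnityEngine.UI;

    // Minimal sketch of the "trap chip" cooldown described above (75 s by assumption).
    public class TrapSkillCooldown : MonoBehaviour
    {
        public Button trapPlacementControl;  // hypothetical UI control for the skill
        public float cooldownSeconds = 75f;
        private float remaining;

        void Start()
        {
            StartCooldown(); // the skill starts cooling down once it is selected
        }

        public void StartCooldown()
        {
            remaining = cooldownSeconds;
            trapPlacementControl.interactable = false; // shown grey, cannot be triggered
        }

        void Update()
        {
            if (remaining <= 0f) return;
            remaining -= Time.deltaTime;
            if (remaining <= 0f)
                trapPlacementControl.interactable = true; // usable again after cooldown
        }
    }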
The user adjusts the trap placement position by adjusting the location at which the aiming sight is aimed on the user interface; for example, the sight can be adjusted by a movement operation on a movement control and a view-rotation operation on a view control. After the user aims at the place where the trap is to be placed, the user triggers the trap placement control, and the trap is placed on the ground the sight is aimed at. For example, when the trap is placed, the client may detect whether the spot is suitable for placing the trap: whether it is too far from the virtual character's position, whether an obstacle blocks it, whether the ground is too inclined, and so on. If the spot is suitable, the trap is placed there; if not, the trap is not placed. A sketch of these checks follows.
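For illustration, the placement checks could look like the sketch below, assuming Unity physics; the distance and slope thresholds are invented for the example and are not specified by the application.

    using UnityEngine;

    // Sketch of the placement checks described above; thresholds are assumptions.
    public static class TrapPlacementValidator
    {
        const float MaxPlacementDistance = 10f;  // "too far from the character" (assumed)
        const float MaxGroundSlopeDegrees = 30f; // "too inclined" (assumed)

        public static bool CanPlace(Vector3 characterPos, Vector3 aimPoint, Vector3 groundNormal)
        {
            // Reject spots too far from the first virtual character.
            if (Vector3.Distance(characterPos, aimPoint) > MaxPlacementDistance) return false;

            // Reject ground that is too inclined.
            if (Vector3.Angle(groundNormal, Vector3.up) > MaxGroundSlopeDegrees) return false;

            // Reject spots with an obstacle between the character and the aim point
            // (a real check would ignore the ground surface itself).
            if (Physics.Linecast(characterPos, aimPoint)) return false;

            return true;
        }
    }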
Illustratively, the trap of the "trap chip" is a laser trap. After the trap is placed, two laser rods stand above the ground: the lower end of each rod is inserted into the ground, and the upper end carries a horizontal laser emitting port. The two rods are placed at the two ends of the trap and emit a laser beam between the two ports; the laser beam connects the two rods in a straight line.
For example, after the virtual character places the trap, other virtual characters of the same camp can see the trap, while virtual characters of the hostile camp cannot. Illustratively, the trap only acts on virtual characters of the hostile camp. When a virtual character of the hostile camp passes through the laser beam, the trap takes effect: the character's movement speed is reduced and its life value decreases continuously, and this effect lasts for a period of time. Illustratively, the trap reveals itself when triggered by a virtual character of the hostile camp, after which the trap is visible to all virtual characters of the hostile camp.
Illustratively, the client detects whether a virtual character of the hostile camp has triggered the trap by means of ray detection. The client controls at least one of the two laser emitting ports to periodically emit at least one detection ray. The maximum length of the detection ray equals the straight-line distance between the two laser emitting ports. When an emitted detection ray hits a three-dimensional virtual model, the client obtains a hit result containing information about the model that was hit; when that model is the three-dimensional model of a virtual character of the hostile camp, the client determines that the character has triggered the trap, and the trap takes effect on that character. A sketch of this detection follows.
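As a sketch of this ray detection under Unity, one laser port could periodically cast a ray toward the other and inspect what it hits; the component name, tag convention, and interval below are assumptions, not the application's actual code.

    using UnityEngine;

    // Sketch of the periodic detection ray described above.
    public class LaserTripwire : MonoBehaviour
    {
        public Transform emitterPort;      // laser emitting port on one rod
        public Transform receiverPort;     // laser emitting port on the other rod
        public float checkInterval = 0.1f; // emission period of the detection ray (assumed)
        private float timer;

        void Update()
        {
            timer += Time.deltaTime;
            if (timer < checkInterval) return;
            timer = 0f;

            Vector3 toReceiver = receiverPort.position - emitterPort.position;
            float maxLength = toReceiver.magnitude; // ray is no longer than the beam

            // Cast the detection ray; if it hits a model before reaching the far port,
            // inspect the hit result for a virtual character of the hostile camp.
            if (Physics.Raycast(emitterPort.position, toReceiver.normalized,
                                out RaycastHit hit, maxLength))
            {
                if (hit.collider.CompareTag("EnemyCharacter")) // assumed tag convention
                    OnTrapTriggered(hit.collider.gameObject);
            }
        }

        void OnTrapTriggered(GameObject enemy)
        {
            Debug.Log("Trap triggered by " + enemy.name);
            // activate the trap and apply the negative effect (see the later steps)
        }
    }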
Fig. 3 is a flowchart of a method for using a virtual prop according to an exemplary embodiment of the present application. The execution subject of the method is illustrated as the client running on the terminal shown in fig. 1, the client being a client supporting a virtual environment. The method includes at least the following steps.
Step 305, displaying a first user interface, where the first user interface includes a virtual environment picture, and the virtual environment picture is a picture of the virtual environment observed from the perspective of a first virtual character belonging to the first camp.
The client displays a first user interface.
Illustratively, the first user interface is the interface displayed on the client after the match has started. Illustratively, before the first user interface, the client may further display: a team-forming interface for forming a team with friends, a matching interface for matching the virtual character with other virtual characters, a loading interface for loading the information of this match, and the like.
Illustratively, the first virtual character in this embodiment is the virtual character controlled by this client, that is, the master virtual character of this client. The second virtual character is a virtual character controlled by another client, by the server, by artificial intelligence, or by a simple algorithm in the client; that is, the second virtual character may be the master virtual character of another client, or a bot virtual character not controlled by any user.
The first user interface includes a virtual environment picture, which is acquired by observing the virtual environment from the perspective of the first virtual character, the first virtual character being a virtual character belonging to the first camp.
Optionally, the virtual environment picture is a picture of the virtual environment observed from the perspective of the first virtual character. The perspective refers to the observation angle when the virtual character is observed in the virtual environment from a first-person or third-person perspective. Optionally, in the embodiments of the present application, the perspective is the angle at which the first virtual character is observed through a camera model in the virtual environment.
Optionally, the camera model automatically follows the virtual character in the virtual environment, that is, when the position of the virtual character in the virtual environment changes, the camera model changes while following the position of the virtual character in the virtual environment, and the camera model is always within the preset distance range of the virtual character in the virtual environment. Optionally, the relative positions of the camera model and the virtual character do not change during the automatic following process.
The camera model is a three-dimensional model located around the virtual character in the virtual environment. When a first-person perspective is used, the camera model is located near or at the head of the virtual character. When a third-person perspective is used, the camera model may be located behind the virtual character and bound to it, or at any position a preset distance from the virtual character; the virtual character in the virtual environment can be observed from different angles through the camera model. Optionally, when the third-person perspective is an over-the-shoulder perspective, the camera model is located behind the virtual character (for example, at the head and shoulders). Optionally, besides the first-person and third-person perspectives, other perspectives may be used, such as a top-down perspective, in which the camera model may be located above the head of the virtual character and the virtual environment is viewed from the air. Optionally, the camera model is not actually displayed in the virtual environment, i.e. it does not appear in the virtual environment picture displayed on the user interface.
Taking the camera model located at any position a preset distance from the virtual character as an example, optionally, one virtual character corresponds to one camera model, and the camera model may rotate with the virtual character as the rotation center, for example around any point of the virtual character. During rotation the camera model not only rotates angularly but also shifts in position, while the distance between the camera model and the rotation center stays constant; that is, the camera model moves on the surface of a sphere centered on the rotation center. Any point of the virtual character may serve as the rotation center, such as the head, the torso, or any point around the virtual character, which is not limited in the embodiments of the present application. Optionally, when the camera model observes the virtual character, the center of the camera model's view points from its position on the sphere toward the center of the sphere.
Optionally, the camera model may also observe the virtual character at a preset angle in different directions of the virtual character.
Referring to fig. 4, schematically, a point in the first virtual character 11 is determined as the rotation center 12, and the camera model rotates around the rotation center 12. Optionally, the camera model is configured with an initial position, which is a position above and behind the first virtual character (for example, behind the head). Illustratively, as shown in fig. 4, the initial position is position 13; when the camera model rotates to position 14 or position 15, its viewing direction changes with the rotation. A sketch of such an orbiting camera model follows.
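For illustration, a camera model that orbits the rotation center at a constant distance, as described above, could be sketched in Unity C# as follows; all names and default values are assumptions.

    using UnityEngine;

    // Sketch of the camera model: it orbits a point on the virtual character at a
    // fixed distance, i.e. it moves on the surface of a sphere around that point.
    public class OrbitCameraModel : MonoBehaviour
    {
        public Transform rotationCenter; // e.g. a point on the character's head or torso
        public float distance = 4f;      // kept constant while rotating
        public float yawDegrees;         // rotation around the vertical axis
        public float pitchDegrees = 30f; // elevation above the horizon

        void LateUpdate()
        {
            Quaternion rot = Quaternion.Euler(pitchDegrees, yawDegrees, 0f);
            // Position on a sphere of radius 'distance' centered on the rotation center.
            transform.position = rotationCenter.position - rot * Vector3.forward * distance;
            // The view direction points from the sphere surface toward the center.
            transform.LookAt(rotationCenter.position);
        }
    }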
Optionally, the virtual environment displayed by the virtual environment screen includes: at least one element selected from the group consisting of mountains, flat ground, rivers, lakes, oceans, deserts, swamps, quicksand, sky, plants, buildings, and vehicles.
Illustratively, at least two virtual characters participate in the match, and they belong to at least two camps; different camps are hostile to each other. In this embodiment, the first virtual character belongs to the first camp and the second virtual character belongs to the second camp, the first and second camps being hostile to each other. Optionally, the match may also include a third virtual character and a fourth virtual character, where the third virtual character belongs to a third camp and the fourth virtual character belongs to the first camp; the third camp is hostile to the first camp and also hostile to the second camp.
Illustratively, the virtual characters in the match are divided into at least one camp, each camp including at least one virtual character. For example, the match is divided into two camps of five virtual characters each: the first through fifth virtual characters belong to the first camp, and the sixth through tenth virtual characters belong to the second camp. As another example, each virtual character belongs to its own camp: ten virtual characters participate in the match, where the first virtual character belongs to the first camp, the second virtual character to the second camp, and so on, up to the tenth virtual character in the tenth camp.
Step 306, in response to a prop use instruction, setting the virtual prop in an inactivated state in the virtual environment, where the virtual prop is a trap prop that is invisible to virtual characters of the second camp when in the inactivated state, and the virtual prop includes a virtual detection line for simulating a non-physical object.
In response to the prop use instruction, the client sets the virtual prop in the inactivated state in the virtual environment.
The prop use instruction is an instruction for controlling the first virtual character to set the virtual prop in the virtual environment. The prop use instruction is generated after the client receives the user's prop use operation. The client may receive the prop use operation in a variety of ways: a touch operation through a touch display screen, a voice operation through a microphone, an action instruction recognized through a camera, a click operation through a mouse, a key operation through a gamepad, or a key operation through a keyboard. Illustratively, the touch operation received through the touch display screen is at least one of: click, double click, slide, drag, and press.
Taking a prop use control generating the prop use instruction as an example, as shown in fig. 5, a prop use control 402 is displayed on the first user interface 401, where the prop use control 402 is a UI control. For example, several prop use controls 402 may be displayed on the first user interface 401; in this embodiment, one prop use control is displayed on the first user interface 401. When the client receives the user's click on the prop use control 402, as shown in fig. 6, a special effect 403 for placing the virtual prop is displayed in the virtual environment picture; after the special effect 403 finishes playing, as shown in fig. 7, the virtual prop 404 is set in the virtual environment.
The virtual prop is a prop that simulates a real trap. Illustratively, the virtual prop is a trap prop, or a prop with a trap effect: after the first virtual character places it, the prop is triggered by other virtual characters (second virtual characters). The virtual prop is placed in the virtual environment and has a concealment effect: it is invisible, or hard to discover, for other virtual characters (second virtual characters). In use, the virtual prop is set at an arbitrary position in the virtual environment; when another virtual character meets the trigger condition of the virtual prop, the prop is triggered and takes effect on that character. Illustratively, the virtual prop provided in this embodiment has a virtual detection line, which is a detection line that simulates a non-physical object and does not deform under external force. Illustratively, the virtual detection line exists in an energy form, for example as light, electricity, or waves; illustratively, it is at least one of a laser line and a current line. The virtual detection line is used to trigger the virtual prop: when a virtual character of the hostile camp collides with the virtual detection line, the virtual prop is triggered and produces a negative effect on that character. Illustratively, when the virtual detection line is a laser line, the virtual prop simulates a real-world trap triggered by a laser sensor; when the virtual detection line is a current line, the virtual prop simulates an exposed live wire placed in the virtual environment, which produces an electric-shock effect when a virtual character touches it. Illustratively, the virtual detection line is a line segment of finite length.
Illustratively, the virtual prop has two states: an inactivated state and an activated state. These two states control whether the virtual prop is visible or invisible to the virtual characters of the second camp (the hostile camp). Illustratively, after the first virtual character places the virtual prop in the virtual environment, the prop is in the inactivated state and invisible to virtual characters of the hostile camp; that is, the prop is not displayed on the clients that control those characters. When the virtual prop is triggered for the first time, it becomes activated and visible to the virtual characters of the hostile camp; that is, it is displayed on the clients that control them. For example, only the first trigger changes the state of the virtual prop; the second and third triggers do not. For example, only a virtual character of a hostile camp (not the first camp) can trigger the virtual prop. For example, suppose the virtual prop is placed in the virtual environment by a first virtual character of the first camp; the prop is then in the inactivated state and cannot be seen by a second virtual character of the second camp or a third virtual character of a third camp. Then the second virtual character accidentally touches the virtual detection line; the prop is triggered and becomes activated, and now both the second and the third virtual character can see it. For example, other virtual characters of the first camp may gain a buff (gain effect) by hitting the virtual detection line, but they do not trigger the virtual prop. Illustratively, the virtual prop is always visible to the virtual characters of the first camp. A sketch of this state and visibility logic follows.
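A minimal sketch of this state and per-camp visibility logic, in Unity C#, might look like the following; the camp IDs, fields, and method names are assumptions.

    using UnityEngine;

    // Sketch of the two prop states and per-camp visibility described above.
    // Each client decides visibility for its own master virtual character.
    public class TrapProp : MonoBehaviour
    {
        public int ownerCampId;      // camp of the character that placed the prop
        public bool activated;       // false = inactivated, true = activated
        public Renderer[] renderers; // visual parts of the prop

        // Called by the local client with the camp of its master virtual character.
        public void RefreshVisibility(int viewerCampId)
        {
            bool visible = (viewerCampId == ownerCampId) || activated;
            foreach (Renderer r in renderers) r.enabled = visible;
        }

        // Only the first trigger by a hostile character changes the state.
        public void Trigger(int triggeringCampId)
        {
            if (triggeringCampId == ownerCampId || activated) return;
            activated = true; // from now on, visible to all hostile camps
        }
    }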
For example, fig. 8 shows a user interface in which the virtual environment is observed from the perspective of the first virtual character 405. The virtual prop 404 is placed in a first region 407 of the virtual environment; since it was placed by a virtual character of the first camp, the first virtual character 405 can see the virtual prop 404. Fig. 9 shows a user interface in which the virtual environment is observed from the perspective of the second virtual character 406. A virtual prop is placed in the first region 407 of the virtual environment; since it was placed by a virtual character of the first camp, the second virtual character 406 cannot see it, and the prop is not displayed in the first region 407.
Step 307, in response to the second virtual character colliding with the virtual detection line, changing the virtual prop to the activated state and controlling the second virtual character to be subjected to the negative effect of the virtual prop, where the second virtual character is a virtual character belonging to the second camp, and the virtual prop is visible to the virtual characters of the second camp when in the activated state.
And in response to the collision of the second virtual character with the virtual detection line, the client changes the virtual prop into an activated state, and controls the second virtual character to be subjected to the negative effect of the virtual prop.
When the three-dimensional virtual model of the second virtual character collides or intersects with the virtual detection line in the virtual environment, the second virtual character triggers the virtual prop. When the virtual prop is triggered for the first time by a virtual character of a hostile camp, it changes from the inactivated state to the activated state.
For example, as shown in fig. 9, the virtual prop is in the inactivated state and the second virtual character 406 cannot see it. When the second virtual character moves forward into the first region 407, as shown in fig. 10, the second virtual character 406 collides with the virtual detection line of the virtual prop 404, triggering the prop; the virtual prop 404 then becomes activated and is visible to the second virtual character 406.
The virtual character that triggers the virtual prop is subjected to its negative effect. A negative effect is one that adversely affects the character's activities in the virtual environment. For example, the negative effect includes lowering at least one state value of the second virtual character, the state value including at least one of: life value, movement speed, signal value, defense value, attack value, equipment durability, mana (the "blue" consumed by skills), economic value, and equipment quantity. Illustratively, the life value and the signal value determine the survival state of the virtual character in the virtual environment; when either is too low, the character dies. The defense value is used to calculate the damage the virtual character takes when attacked: under the same attack, the higher the defense value, the lower the damage. The attack value is used to calculate the damage other virtual characters take when this character attacks them: with the other characters' defense values equal, the higher the attack value, the greater the damage, where damage is the amount by which the life value or signal value is reduced. The mana consumed by skills is also called the blue bar: releasing a skill reduces the character's mana by a certain amount, and when the mana is too low the character cannot release skills. Equipment durability describes the usable state of equipment: it affects the equipment's bonus to the character's defense or attack value, or the equipment's precision in use, and when the durability is too low, the equipment cannot be used. The virtual character can exchange economic value for equipment or virtual props in the match. For example, after the second virtual character triggers the virtual prop, its life value may decrease at a first rate and its movement speed may be reduced for a period of time; for instance, within one minute after triggering, the second virtual character's life value decreases by 10 points per second and its movement speed drops from 3 m/s to 1 m/s. A sketch of such a timed effect follows.
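The timed effect in the example above (10 life points per second and a speed drop from 3 m/s to 1 m/s for one minute) could be sketched as a Unity coroutine; the CharacterState type and all names are hypothetical.

    using System.Collections;
    using UnityEngine;

    // Hypothetical state holder for a virtual character.
    public class CharacterState
    {
        public float LifeValue = 100f;
        public float MoveSpeed = 3f; // m/s
    }

    // Sketch of the negative effect described in the example above.
    public class TrapDebuff : MonoBehaviour
    {
        public float duration = 60f;          // the effect lasts one minute (assumed)
        public float lifeLossPerSecond = 10f; // life value drops 10 points per second
        public float slowedSpeed = 1f;        // movement speed reduced to 1 m/s

        public IEnumerator Apply(CharacterState target)
        {
            float originalSpeed = target.MoveSpeed; // e.g. 3 m/s
            target.MoveSpeed = slowedSpeed;

            float elapsed = 0f;
            while (elapsed < duration)
            {
                target.LifeValue -= lifeLossPerSecond * Time.deltaTime;
                elapsed += Time.deltaTime;
                yield return null; // wait one frame
            }
            target.MoveSpeed = originalSpeed; // the effect ends after the duration
        }
    }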
Illustratively, the negative effect may also be at least one of the following: randomly dropping a piece of the virtual character's equipment; preventing the virtual character from moving for a period of time; occluding the virtual character's user interface; causing the virtual character to die.
In summary, in the method provided in this embodiment, a virtual prop is set in the virtual environment, and when an enemy virtual character passes through the prop's virtual detection line, the prop is triggered and produces a negative effect on the enemy virtual character. The user can attack enemy virtual characters by placing the virtual prop on a path they must take. The attack does not need to be triggered actively by the user; it is triggered passively when the client detects the collision between the enemy virtual character and the virtual detection line, which simplifies the user's attack operation and improves human-computer interaction efficiency.
Next, an exemplary embodiment of setting the virtual prop in the virtual environment and an exemplary embodiment of using a ray to detect the collision of the second virtual character with the virtual detection line are given.
Fig. 11 is a flowchart of a method for using a virtual prop according to an exemplary embodiment of the present application. The execution subject of the method is illustrated as the client running on the terminal shown in fig. 1, the client being a client supporting a virtual environment. Based on the exemplary embodiment shown in fig. 3, step 306 includes steps 3061 to 3064, and step 307 includes step 3071.
Step 3061, in response to the prop use instruction, setting a first component and a second component of the virtual prop in the virtual environment.
In response to the prop use instruction, the client sets the first component and the second component of the virtual prop in the virtual environment.
The virtual prop further includes a first component and a second component. The first and second components fix the virtual prop in the virtual environment and also connect to the virtual detection line. Illustratively, the tops of the first and second components are connected to the two ends of the virtual detection line, and their bottoms are fixed in the virtual environment. Illustratively, the shapes of the first and second components are arbitrary, e.g. conical, cylindrical, cuboid, hemispherical, or spherical, and the two components may have the same or different shapes. Illustratively, each of the two components has a connection point, and the connection point is connected to one end of the virtual detection line. For example, when the virtual detection line is a laser line, the laser is emitted from the connection point of one component and reaches the connection point of the other, where one connection point carries a laser emitter and the other a laser receiver. Illustratively, the first and second components have the same shape, and their connection points are located at the same, or symmetric, positions on the two components.
For example, as shown in fig. 12, the virtual prop includes a cylindrical first component 408, a cylindrical second component 409, and a virtual detection line 410. The first component 408 and the second component 409 stand vertically on the ground, their bottoms connected to the ground and their tops connected to the two ends of the virtual detection line 410. The virtual detection line 410 is parallel to the ground at a first height above it, the first height being smaller than the height of the virtual character's three-dimensional model.
Illustratively, as shown in FIG. 13, step 3061 includes steps 3061-1 and 3061-2.
Step 3061-1, in response to a first prop use instruction, displaying an aiming sight on the virtual environment picture.
In response to the first prop use instruction, the client displays the aiming sight on the virtual environment picture.
Illustratively, an aiming sight may also be displayed on the user interface of the client; the aiming sight helps the user aim at a location in the virtual environment. Illustratively, the aiming sight is generally located at the center of the virtual environment picture; as the virtual character moves within the virtual environment and the user rotates the character's viewing direction, the location at which the aiming sight is aimed in the virtual environment changes. For example, as shown in fig. 7, a cross-shaped aiming sight 411 is displayed on the virtual environment picture.
For example, the aiming sight may be displayed on the virtual environment screen at all times, or may be displayed only after the client receives the first prop use instruction. The first prop use instruction is used for displaying the aiming sight. The first prop use instruction may also be referred to as an aiming instruction, which may be triggered by an aiming operation. Illustratively, the aiming operation may be triggering the prop use control for the first time, or triggering an aiming control. This embodiment takes triggering the prop use control as an example: when the user triggers the prop use control for the first time, a first prop use instruction is generated and an aiming sight is displayed on the virtual environment picture. The user may then adjust the aiming position of the aiming sight in the virtual environment by controlling the virtual character to move and by adjusting the virtual character's view angle. After the adjustment is completed, when the user triggers the prop use control again, a second prop use instruction is generated and the virtual prop is placed at the aiming position.
Illustratively, when the aiming sight is always displayed on the virtual environment screen, step 3061-1 may instead be that the client displays the aiming sight on the virtual environment screen. In this case, there is no first prop use instruction, and the client directly places the virtual prop at the aiming position according to the second prop use instruction.
Step 3061-2, in response to the second prop use instruction, placing the virtual prop at the aiming position of the aiming sight, so that the first component and the second component of the virtual prop are respectively located at two sides of the aiming position, the bottom of the first component and the bottom of the second component are fixed in the virtual environment, and the virtual detection line is perpendicular to the aiming direction of the aiming sight.
In response to the second prop use instruction, the client places a virtual prop at the aiming position of the aiming sight.
Illustratively, the client places the virtual prop in a single operation. The client takes the aiming position as the central position of the virtual prop and places the virtual prop crosswise, perpendicular to the aiming direction of the virtual character. That is, the first component and the second component of the virtual prop are symmetrically distributed on two sides of the aiming position, and the virtual detection line is perpendicular to the aiming direction of the first virtual character. Illustratively, the aiming direction of the first virtual character is the shooting direction of the camera model corresponding to the first virtual character. Illustratively, the aiming direction of the first virtual character is the direction pointing into the screen, perpendicular to the virtual environment picture. Illustratively, the aiming direction of the first virtual character is directly in front of the viewing direction of the virtual environment view. That is, when the virtual prop is placed, it is laid crosswise in front of the first virtual character with the aiming position as its central position.
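The placement computation described above may be sketched as follows (an illustrative Python sketch; the function name and the 2D ground-plane convention are assumptions introduced here, not part of this application):

```python
import math

def place_components(aim_pos, aim_dir_xz, half_span):
    """Place the two components symmetrically about the aiming position.

    aim_pos: (x, z) aiming position on the ground plane.
    aim_dir_xz: (x, z) horizontal aiming direction of the first character.
    half_span: half the length of the virtual detection line.
    """
    dx, dz = aim_dir_xz
    n = math.hypot(dx, dz)
    dx, dz = dx / n, dz / n
    # A horizontal direction perpendicular to the aiming direction: the
    # detection line runs along this axis, centered on the aiming position.
    px, pz = -dz, dx
    first = (aim_pos[0] + px * half_span, aim_pos[1] + pz * half_span)
    second = (aim_pos[0] - px * half_span, aim_pos[1] - pz * half_span)
    return first, second

# Aiming straight ahead along +z from the origin: the prop is laid
# crosswise in front of the character, components at (-1, 5) and (1, 5).
first, second = place_components((0.0, 5.0), (0.0, 1.0), 1.0)
```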
The second prop use instruction is used for placing the virtual prop in the virtual environment. Illustratively, the second prop use instruction is generated after the client receives a prop use operation. For example, the prop use operation may be triggering the prop use control, or triggering the prop use control a second time, where the operation of triggering the prop use control for the first time is the aiming operation.
Illustratively, as shown in FIG. 14, step 3061-2 includes steps 3061-21 through 3061-23.
Step 3061-21, in response to the second prop use instruction, emitting a first straight line from a first spatial point in the aiming direction, the first straight line intersecting the virtual environment at a second spatial point, the first spatial point being a point on the three-dimensional virtual model of the first virtual character.
In response to the second prop use instruction, the client emits a first straight line from the first spatial point in the aiming direction, and the first straight line intersects the virtual environment at the second spatial point.
Illustratively, in response to the second prop use instruction, the client emits a line extending in the aiming direction from a point on the three-dimensional virtual model of the first virtual character, and the line intersects the three-dimensional virtual model of the virtual environment at the second spatial point. For example, the first spatial point may be any point on the three-dimensional virtual model of the first virtual character, e.g., a point located at the eyes, head, or shoulders of the first virtual character. For example, the position of the first spatial point is determined according to the position of the camera model: if the camera model is arranged at the head of the first virtual character, the first spatial point is located at the head of the first virtual character; if the camera model is located on a shoulder of the first virtual character, the first spatial point is located on that shoulder.
Illustratively, the straight line emitted from the first spatial point collides with the three-dimensional virtual model of the virtual environment, and the client acquires collision information after the collision, wherein the collision information includes a collision point, which is the second spatial point.
As shown in fig. 12, the first virtual character 405 has a first spatial point 412 on its head, and a straight line is projected from the first spatial point 412 in the aiming direction 414 to intersect the virtual environment at a second spatial point 413.
For example, the second spatial point and the aiming position may be the same position point or different position points.
Illustratively, the collision information acquired by the client further includes information of the three-dimensional virtual model hit by the straight line. The client judges, according to this information, whether the hit model is a model of the virtual environment; if not, the client determines that the virtual prop cannot be placed and displays a prompt that the virtual prop cannot be placed. If the straight line hits the three-dimensional model of the virtual environment, the client detects whether an obstacle exists near the collision point. The detection method may be as follows: from a third spatial point directly above the collision point, two detection rays are emitted in the two directions perpendicular to the aiming direction, and the client judges whether the detection rays collide with any three-dimensional virtual model; if a collision occurs, an obstacle exists; if not, no obstacle exists. The height of the third spatial point above the second spatial point is determined according to the height of the virtual detection line above the ground, and the length of each detection ray is determined according to the length of the virtual detection line. If no obstacle exists, the virtual prop can be placed at the second spatial point; if an obstacle exists, the client prompts that the virtual prop cannot be placed.
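The placement validity check described above may be sketched as follows (an illustrative Python sketch; `client.raycast`, `client.raycast_limited`, `client.is_environment_model`, and `client.prompt` are hypothetical engine helpers assumed here for illustration, not an API named in this application):

```python
import math

def perpendicular_directions(aim_dir):
    # Two horizontal directions perpendicular to the aiming direction
    # (aim_dir is an (x, y, z) vector; y is the up axis in this sketch).
    dx, dz = aim_dir[0], aim_dir[2]
    n = math.hypot(dx, dz)
    px, pz = -dz / n, dx / n
    return (px, 0.0, pz), (-px, 0.0, -pz)

def can_place(client, first_spatial_point, aim_dir, line_height, half_len):
    # First straight line: emitted from the first spatial point in the
    # aiming direction; its collision point is the second spatial point.
    hit = client.raycast(first_spatial_point, aim_dir)  # hypothetical helper
    if hit is None:
        return False
    hit_point, hit_model = hit
    if not client.is_environment_model(hit_model):      # hypothetical helper
        client.prompt("The virtual prop cannot be placed.")
        return False
    # Third spatial point: directly above the collision point, at the
    # height of the virtual detection line above the ground.
    third = (hit_point[0], hit_point[1] + line_height, hit_point[2])
    for d in perpendicular_directions(aim_dir):
        # Each detection ray's length matches the virtual detection line.
        if client.raycast_limited(third, d, max_dist=half_len) is not None:
            client.prompt("The virtual prop cannot be placed.")
            return False
    return True
```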
For example, the client may further obtain, according to the collision point, the inclination angle of the three-dimensional virtual model of the virtual environment in the direction perpendicular to the aiming direction, and then adjust the inclination angle of the virtual prop in step 3061-23 accordingly, so that the virtual prop is placed parallel to the ground surface of the virtual environment.
Step 3061-22, acquiring a first position where the first virtual character is located.
The client acquires the first position where the first virtual character is located.
Illustratively, the first position is the position of the first virtual character in the virtual environment. For example, the first position may be the two-dimensional coordinates of the first virtual character on the horizontal plane of the virtual environment. Illustratively, the first position may also be the three-dimensional coordinates of the center of the three-dimensional virtual model of the first virtual character.
For example, as shown in fig. 12, the client acquires the first position 415 where the first virtual character 405 is located.
Step 3061-23, in response to the straight-line distance between the first position and the second spatial point being less than a distance threshold, placing the virtual prop at the aiming position of the aiming sight, so that the first component and the second component are respectively located at two sides of the aiming position, the bottom of the first component and the bottom of the second component are fixed in the virtual environment, and the virtual detection line is perpendicular to the aiming direction of the aiming sight.
In response to the straight-line distance between the first position and the second spatial point being less than the distance threshold, the client places the virtual prop at the aiming position of the aiming sight.
Illustratively, the client calculates the straight-line distance between the first position and the second spatial point. For example, as shown in fig. 12, the client calculates the straight-line distance from the first position 415 to the second spatial point 413. Illustratively, the straight-line distance is the distance between the two points in the horizontal direction, i.e., the distance from the projection of the first position on the horizontal plane to the projection of the second spatial point on the horizontal plane.
If the straight-line distance is smaller than the distance threshold, the first virtual character can place the virtual prop at the aiming position; if the straight-line distance is greater than the distance threshold, the first virtual character cannot place the virtual prop at the aiming position.
For example, the distance threshold defines the farthest distance at which the virtual prop may be placed: the first virtual character may place the virtual prop within 10 meters of itself, and if the aiming position is more than 10 meters away, the first virtual character needs to move closer to the target position before placing the virtual prop. Illustratively, the distance threshold may be set arbitrarily.
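The range check may be sketched as follows (an illustrative Python sketch; the coordinate convention and the 10-meter value are examples taken from the description above):

```python
import math

def within_placement_range(first_pos, second_spatial_point, threshold=10.0):
    """Horizontal straight-line distance check described above.

    Positions are (x, y, z) with y as the up axis; the distance is taken
    between the projections of both points onto the horizontal plane.
    The 10-meter threshold is only an example and may be set arbitrarily.
    """
    dx = first_pos[0] - second_spatial_point[0]
    dz = first_pos[2] - second_spatial_point[2]
    return math.hypot(dx, dz) < threshold

# If the check fails, the client displays a prompt such as
# "too far away and not placeable" instead of placing the prop.
print(within_placement_range((0, 1.7, 0), (6.0, 0, 8.0)))  # 10.0 m -> False
```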
Illustratively, as shown in FIG. 15, if the aiming sight 411 is aimed at a location that is relatively far from the first virtual character, a prompt 416 "too far away and not placeable" is displayed on the user interface when the user triggers the prop use control 402.
For example, after the virtual item is placed in the virtual environment, the client detects whether a virtual character of an enemy camp collides with the virtual detection line in a ray detection manner.
Step 3062, at least one virtual detection line is emitted from the first component to the second component.
The client transmits at least one virtual detection line from the first component to the second component.
Illustratively, the virtual detection lines in steps 3062 through 3064 are rays for ray detection that are not visible to the user. Illustratively, the virtual detection lines in steps 3062 through 3064 may also be rendered with special effects as user-visible lines.
Illustratively, at least one virtual detection line is periodically emitted between the first component and the second component of the virtual prop, and is used for detecting whether the second virtual character collides with the virtual detection line. For example, the virtual detection line may be emitted in a single direction, either from the first component to the second component or from the second component to the first component; or virtual detection lines may be emitted in both directions, from the first component to the second component and from the second component to the first component. For example, multiple virtual detection lines may be emitted in each direction.
Illustratively, the virtual detection lines are emitted periodically and continuously, that is, the components of the virtual prop emit a virtual detection line at regular intervals. In order to improve the detection accuracy of the virtual detection lines, the emission period may be set to a short period.
For example, as shown in fig. 16, the virtual prop emits two virtual detection lines from the first component 408 to the second component 409, and emits one virtual detection line from the second component 409 to the first component 408.
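One emission cycle may be sketched as follows (an illustrative Python sketch; `client.ray_hit` is a hypothetical helper assumed here for illustration, and `prop` follows the component sketch given earlier):

```python
def emit_detection_lines(prop, client, directions=("forward", "backward")):
    """Emit the virtual detection lines once; called every emission period.

    A shorter emission period improves detection accuracy at the cost of
    more frequent ray tests.
    """
    endpoints = {
        "forward": (prop.first.top, prop.second.top),   # first -> second
        "backward": (prop.second.top, prop.first.top),  # second -> first
    }
    hits = []
    for d in directions:
        origin, target = endpoints[d]
        # Hypothetical helper: returns the three-dimensional virtual model
        # hit between the two connection points, or None.
        hit = client.ray_hit(origin, target)
        if hit is not None:
            hits.append(hit)
    return hits
```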
Step 3063, at least one virtual detection line is emitted from the second component toward the first component.
The client transmits at least one virtual detection line from the second component to the first component.
Step 3064, at least one virtual detection line is emitted from the first component to the second component and at least one virtual detection line is emitted from the second component to the first component.
The client transmits at least one virtual detection line from the first component to the second component, and transmits at least one virtual detection line from the second component to the first component.
Step 3071, in response to the collision between the at least one virtual detection line and the three-dimensional virtual model of the second virtual character, the virtual prop is changed into an activated state, and the second virtual character is controlled to be subjected to the negative effect of the virtual prop.
In response to the collision of at least one virtual detection line with the three-dimensional virtual model of the second virtual character, the client changes the virtual prop into an activated state and controls the second virtual character to be subjected to the negative effect of the virtual prop.
For example, the virtual prop may be triggered by any virtual detection line colliding with the three-dimensional virtual model of the second virtual character. For example, as shown in fig. 16, a virtual detection line emitted from the second component 409 collides with the second virtual character 406, and the second virtual character 406 triggers the virtual prop.
For example, after a virtual detection line collides with a three-dimensional virtual model, collision information may also be acquired. The collision information includes information of the three-dimensional virtual model that was hit. The client judges, according to this information, whether a virtual character of the enemy camp collided with the virtual detection line; if so, the client triggers the virtual prop; if not, the virtual prop is not triggered.
For example, the virtual characters of the first camp, virtual vehicles, and the three-dimensional virtual model of the virtual prop itself do not trigger the virtual prop when colliding with the virtual detection line. For example, if the virtual detection line collides with a virtual character of the first camp, a gain effect may be exerted on that virtual character, for example, increasing a state value of the virtual character. For example, if a virtual vehicle carries a virtual character of the enemy camp, the three-dimensional virtual model of the virtual vehicle may or may not trigger the virtual prop. The client can obtain the current carrying state of the virtual vehicle according to the vehicle information in the collision information, and judge whether an enemy virtual character is riding the virtual vehicle.
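The trigger filtering described above may be sketched as follows (an illustrative Python sketch; `camp_of`, `is_vehicle`, `occupants`, `apply_gain`, and `trigger` are hypothetical helpers assumed here for illustration):

```python
def on_detection_hit(client, prop, hit_model):
    """Decide whether a detection-line hit triggers the prop (sketch).

    prop.owner_camp is an assumed attribute holding the camp of the
    character who placed the prop; the vehicle rule below is one of the
    two options the embodiment allows.
    """
    if hit_model is prop:
        return  # the prop's own three-dimensional model never triggers it
    if client.is_vehicle(hit_model):
        # Trigger only if an enemy character is riding the virtual vehicle.
        riders = client.occupants(hit_model)
        if any(client.camp_of(r) != prop.owner_camp for r in riders):
            client.trigger(prop)
        return
    if client.camp_of(hit_model) == prop.owner_camp:
        # A friendly hit may instead apply a gain effect, e.g. raise a
        # state value of the friendly virtual character.
        client.apply_gain(hit_model)
        return
    client.trigger(prop)  # enemy virtual character: activate the trap
```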
In summary, the method provided in this embodiment provides a way to place a virtual prop in a virtual environment: the placement position of the virtual prop is obtained by using the aiming sight, and the placement direction of the virtual prop is determined according to the aiming direction. The first virtual character can thus place the virtual prop in the virtual environment through a single operation, which simplifies user operation, improves the efficiency of placing the virtual prop, and improves human-computer interaction efficiency.
According to the method provided by the embodiment, the straight-line distance from the aiming position to the first virtual character position is detected, when the first virtual character is too far away from the aiming position, the user is prompted that the virtual prop cannot be placed, and the situation that the user places the virtual prop randomly in a virtual environment to influence normal activities of other virtual characters in the virtual environment is avoided.
According to the method provided by the embodiment, the ray detection mode is utilized to detect the collision between the virtual character of the enemy camp and the virtual detection line, so that the collision between the virtual character and the virtual detection line can be rapidly and efficiently detected, and the trigger sensitivity of the virtual prop is improved.
Illustratively, an exemplary embodiment is given in which a virtual item has a limited time of use.
Fig. 17 is a flowchart of a method for using a virtual item according to an exemplary embodiment of the present application. The execution subject of the method is exemplified by a client running on the terminal shown in fig. 1, the client being a client supporting a virtual environment. Based on the exemplary embodiment shown in fig. 3, the method further comprises step 308 and step 309.
Step 308, in response to the virtual prop being placed successfully, calculating the placed duration of the virtual prop.
In response to the virtual prop being placed successfully, the client calculates the placed duration of the virtual prop.
Illustratively, when a first virtual character successfully places a virtual item in the virtual environment, the client starts to calculate the placed duration of the virtual item. The placed duration is used to describe the length of time that the virtual item is placed in the virtual environment.
Illustratively, the virtual prop can be triggered multiple times to produce its effect multiple times, and the number of times it is triggered does not affect whether the virtual prop remains valid or fails.
Step 309, in response to the placed time length meeting the time threshold, determining that the virtual item is invalid.
In response to the placed duration satisfying the time threshold, the client determines that the virtual prop is invalid.
For example, if the time length of the virtual item placed in the virtual environment exceeds the time threshold, the virtual item may be disabled, and the virtual item may disappear from the virtual environment regardless of whether the virtual item is triggered. The time threshold may be any value, for example, the time threshold may be 5 minutes, that is, the virtual item exists in the virtual environment for at most 5 minutes, and the virtual item automatically disappears after 5 minutes.
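The timing and failure logic of steps 308 and 309 may be sketched as follows (an illustrative Python sketch; the 300-second threshold is the 5-minute example above):

```python
import time

class PlacedProp:
    """Lifetime bookkeeping for a placed virtual prop (illustrative sketch)."""

    def __init__(self, time_threshold=300.0):
        self.placed_at = time.monotonic()  # start timing on placement
        self.time_threshold = time_threshold

    def placed_duration(self) -> float:
        return time.monotonic() - self.placed_at

    def expired(self) -> bool:
        # The prop fails once its placed duration meets the threshold,
        # regardless of how many times it has been triggered; the client
        # then removes it from the virtual environment.
        return self.placed_duration() >= self.time_threshold
```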
In summary, in the method provided by this embodiment, if the placed duration of the virtual prop satisfies the time threshold, the virtual prop is controlled to fail and disappears from the virtual environment. On the one hand, the virtual prop can be triggered multiple times before it fails; on the other hand, this prevents too many virtual props from accumulating in the virtual environment and affecting the normal movement of virtual characters.
Illustratively, an exemplary embodiment is given in which the virtual prop is a skill that the user may select at the beginning of a round.
Fig. 18 is a flowchart of a method for using a virtual prop according to an exemplary embodiment of the present application. The execution subject of the method is exemplified by a client running on the terminal shown in fig. 1, the client being a client supporting a virtual environment. Based on the exemplary embodiment shown in fig. 3, the method further comprises steps 301 to 304.
Step 301, displaying a second user interface, where the second user interface includes at least one skill and at least one skill selection control, the at least one skill includes a virtual prop, and the at least one skill selection control includes a virtual prop selection control corresponding to the virtual prop.
The client displays a second user interface.
For example, after the virtual character enters the game, the client displays a second user interface (a skill selection interface) on which the user can select the skill to be used in the current round. The skill is only valid in the current round and must be selected again when the next round starts. Illustratively, at least one skill is provided in the second user interface, and multiple virtual characters of the same camp may not possess the same skill; for example, if a first skill has already been acquired by the first virtual character, the other virtual characters of the first camp may no longer acquire the first skill.
The skill selection control is used for receiving the user's selection operation and generating a selection instruction. Illustratively, each skill corresponds to one skill selection control.
For example, as shown in fig. 19, five skills are displayed on the second user interface 417: a hard chip, an interference chip, a confusion chip, a control chip, and a trap chip. The trap chip corresponds to the virtual prop; the user may select the trap chip 418 and then click the confirmation control 419 to generate a selection instruction for selecting the virtual prop.
Step 302, responding to a selection instruction generated by triggering a virtual item selection control, and sending a skill selection request to a server, wherein the skill selection request comprises a first virtual character and a virtual item.
And responding to a selection instruction generated by triggering the virtual prop selection control, and sending a skill selection request to the server by the client.
Illustratively, after the user triggers the virtual prop selection control, a selection instruction is generated, and the selection instruction includes information of the virtual prop selected by the first virtual character. The client sends a skill selection request to the server according to the selection instruction, and the skill selection request is used for requesting the server to grant the skill of the virtual prop to the first virtual character.
Illustratively, after receiving the skill selection request sent by the client, the server determines whether any virtual character in the first camp has already selected the virtual prop. If another virtual character has already selected the virtual prop, the first virtual character can no longer select it; if no virtual character has selected the virtual prop, the first virtual character acquires the skill of the virtual prop.
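The server-side uniqueness check may be sketched as follows (an illustrative Python sketch; the request and response names are assumptions introduced here, not part of this application):

```python
# Per camp, a given skill (here the virtual prop) may be granted to at
# most one virtual character in a round.
granted = {}  # (camp_id, skill_id) -> character_id

def handle_skill_selection(camp_id, character_id, skill_id):
    key = (camp_id, skill_id)
    if key in granted and granted[key] != character_id:
        return "selection_failure"  # someone in this camp already has it
    granted[key] = character_id
    return "selection_success"

assert handle_skill_selection(1, "alice", "trap_chip") == "selection_success"
assert handle_skill_selection(1, "bob", "trap_chip") == "selection_failure"
assert handle_skill_selection(2, "carol", "trap_chip") == "selection_success"
```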
Step 303, in response to receiving a selection failure instruction sent by the server, prompting that the virtual prop cannot be selected, where the selection failure instruction is sent after the server determines that the first virtual character is not the first virtual character in the first camp to select the virtual prop.
In response to receiving a selection failure instruction sent by the server, the client prompts that the virtual prop cannot be selected.
For example, if a virtual character in the first camp has already acquired the skill of the virtual prop, the server sends a selection failure instruction to the client, and the client prompts, according to the selection failure instruction, that the skill of the virtual prop cannot be selected.
Step 304, in response to receiving a selection success instruction sent by the server, determining the virtual prop as a skill of the first virtual character, where the selection success instruction is sent after the server determines that the first virtual character is the first virtual character in the first camp to select the virtual prop.
In response to receiving a selection success instruction sent by the server, the client determines the virtual item as a skill of the first virtual character.
For example, if no virtual character in the first camp has acquired the skill of the virtual prop, the server sends a selection success instruction to the client, and the client determines the virtual prop as a skill of the first virtual character.
For example, after the skill of the virtual prop is acquired, the skill has a cooling time, the first virtual character cannot use the virtual prop during the cooling time, and after the cooling time is over, the first virtual character can use the virtual prop.
For example, as shown in fig. 20, when the skill of the virtual item is just acquired by the first virtual character, item usage control 402 is displayed in gray, and at this time, item usage control 402 may not receive a trigger operation, that is, the virtual item cannot be used. After the cooling time is over, as shown in fig. 5, prop usage control 402 may be highlighted, and at this time, prop usage control 402 may receive a trigger operation, that is, the virtual prop may be used.
For example, after the virtual prop is used once, the virtual prop enters the cooling time again. For example, the duration of the cooling time may be set arbitrarily, and the cooling time may be 60 seconds, for example.
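The cooldown gating of the prop use control may be sketched as follows (an illustrative Python sketch; the 60-second duration is the example above, and starting the skill on cooldown when it is first acquired follows fig. 20):

```python
import time

class SkillCooldown:
    """Cooldown gating for the prop use control (illustrative sketch)."""

    def __init__(self, duration=60.0):
        self.duration = duration
        # The skill starts on cooldown when acquired (control grayed out).
        self.ready_at = time.monotonic() + duration

    def ready(self) -> bool:
        # True once the cooldown is over (control highlighted).
        return time.monotonic() >= self.ready_at

    def use(self) -> bool:
        if not self.ready():
            return False  # trigger operation not accepted during cooldown
        # Each use puts the virtual prop back on cooldown.
        self.ready_at = time.monotonic() + self.duration
        return True
```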
In summary, in the method provided by this embodiment, only one virtual character in the same camp can obtain the skill of the same virtual prop, so as to limit the number of virtual characters that can use the virtual prop in one round, reduce the frequency with which the virtual prop appears in the virtual environment, and prevent too many virtual props in the virtual environment from affecting the normal activities of virtual characters.
By way of example, an exemplary embodiment of a method of using a virtual item provided herein in a first person shooter game is presented.
Fig. 21 is a flowchart of a method for using a virtual item according to an exemplary embodiment of the present application. The execution subject of the method is exemplified by a client running on the terminal shown in fig. 1, the client being a client supporting a virtual environment. The method comprises the following steps:
in step 501, the game starts.
At step 502, the client determines whether the first virtual character is the first virtual character in the first camp to select the laser trap. If yes, go to step 503; otherwise, return to step 501.
Illustratively, the user selects the skill for the current round on the second user interface (skill selection interface). After the user selects the laser trap skill, the client determines whether the first virtual character is the first virtual character in the first camp to select the laser trap.
At step 503, the client determines that the first avatar acquires laser trap skills.
In step 504, the client determines whether the laser trap skill cooling time is over. If so, go to step 505; otherwise, return to step 503.
Step 505, the client highlights the skill.
Illustratively, the client highlights the prop use control corresponding to the laser trap.
In step 506, the client determines whether the user clicks on a laser trap. If yes, go to step 507; otherwise, return to step 505.
In step 507, the client determines whether the aiming position of the aiming sight is within the usable range of the skill. If yes, go to step 508; otherwise, return to step 506.
Step 508, the client releases the skill at the aiming position and places the laser trap.
In step 509, the client determines whether the enemy touches the laser trap. If yes, go to step 511; otherwise, go to step 510.
At step 510, the client waits for an enemy to trigger a laser trap.
In step 511, the client controls the laser trap to appear, and the enemy virtual character is damaged and slowed down.
Illustratively, the client changes the laser trap to an activated state, making it visible to the enemy.
Step 512, the client judges whether the use time of the laser trap is over. If so, go to step 513; otherwise, return to step 511.
Step 513, the client controls the laser trap to disappear.
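The flow of steps 501 to 513 may be summarized as the following state machine (an illustrative Python sketch; the state and event names are labels introduced here for the judgments in the flowchart):

```python
from enum import Enum, auto

class TrapState(Enum):
    COOLDOWN = auto()  # steps 503-504: skill acquired, cooling down
    READY = auto()     # step 505: prop use control highlighted
    PLACED = auto()    # steps 508-510: inactive, waiting for an enemy
    ACTIVE = auto()    # step 511: visible, damaging and slowing enemies
    EXPIRED = auto()   # step 513: the laser trap disappears

def step(state, event):
    """Advance the laser-trap flow by one flowchart judgment."""
    transitions = {
        (TrapState.COOLDOWN, "cooldown_over"): TrapState.READY,
        (TrapState.READY, "placed_in_range"): TrapState.PLACED,
        (TrapState.PLACED, "enemy_touches_line"): TrapState.ACTIVE,
        (TrapState.ACTIVE, "use_time_over"): TrapState.EXPIRED,
    }
    # Unrecognized events leave the state unchanged (the "otherwise,
    # return to step N" branches in the flowchart).
    return transitions.get((state, event), state)

s = TrapState.COOLDOWN
for e in ("cooldown_over", "placed_in_range",
          "enemy_touches_line", "use_time_over"):
    s = step(s, e)
assert s is TrapState.EXPIRED
```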
In summary, in this embodiment, the method provided by the present application is applied to a first-person shooting game, so that the user can use the virtual prop to attack enemy virtual characters: when an enemy virtual character touches the virtual prop, the virtual prop is triggered, causing damage and a deceleration effect to that character. The user can attack enemy virtual characters by placing the virtual prop on a route the enemy must take. The attack does not need to be triggered actively by the user; instead, it is triggered passively when the client detects a collision between the enemy virtual character and the virtual detection line, which simplifies the user's attack operation and improves human-computer interaction efficiency.
The above embodiments describe the above method based on the application scenario of the game, and the following describes the above method by way of example in the application scenario of military simulation.
The simulation technology is a model technology which reflects system behaviors or processes by simulating real world experiments by using software and hardware.
The military simulation program is a program specially constructed for military application by using a simulation technology, and is used for carrying out quantitative analysis on sea, land, air and other operational elements, weapon equipment performance, operational actions and the like, further accurately simulating a battlefield environment, presenting a battlefield situation and realizing the evaluation of an operational system and the assistance of decision making.
In one example, soldiers establish a virtual battlefield at a terminal where military simulation programs are located and fight in a team. The soldier controls a virtual object in the virtual battlefield environment to perform at least one operation of standing, squatting, sitting, lying on the back, lying on the stomach, lying on the side, walking, running, climbing, driving, shooting, throwing, attacking, injuring, reconnaissance, close combat and other actions in the virtual battlefield environment. The battlefield virtual environment comprises: at least one natural form of flat ground, mountains, plateaus, basins, deserts, rivers, lakes, oceans and vegetation, and site forms of buildings, vehicles, ruins, training fields and the like. The virtual object includes: virtual characters, virtual animals, cartoon characters, etc., each virtual object having its own shape and volume in the three-dimensional virtual environment occupies a part of the space in the three-dimensional virtual environment.
Based on the above, in one example, soldier A controls virtual object a, soldier B controls virtual object b, and soldier C controls virtual object c; soldiers A and B are soldiers of a first team, and soldier C is a soldier of a second team. Soldier A controls virtual object a to place the virtual prop in the virtual environment; the virtual prop is visible to soldier B and invisible to soldier C. When virtual object c triggers the virtual prop, virtual object c is subjected to the negative effect of the virtual prop, and the virtual prop is set to be visible to soldier C.
In summary, in this embodiment, the method for using the virtual prop is applied to a military simulation program: a soldier places the virtual prop in the virtual environment to simulate a real laser trap, thereby simulating an actual combat scene and providing better training for soldiers.
The following are apparatus embodiments of the present application. For details not described in the apparatus embodiments, reference may be made to the corresponding method embodiments above.
Fig. 22 is a block diagram of a device for using a virtual prop according to an exemplary embodiment of the present application. The device is applied to a terminal, an application program supporting the virtual environment runs in the terminal, and the device comprises:
a display module 601, configured to display a first user interface, where the first user interface includes a virtual environment picture, and the virtual environment picture is a picture obtained by observing a virtual environment from a view of a first virtual character belonging to a first camp;
a setting module 602, configured to set, in response to a prop use instruction, a virtual prop in an inactivated state in the virtual environment, where the virtual prop is a trap prop that is invisible to the virtual characters of a second camp when in the inactivated state, and the virtual prop includes a virtual detection line for simulating a non-physical object;
a collision module 603, configured to detect that a second virtual character collides with the virtual detection line, where the second virtual character is a virtual character belonging to the second camp;
an activation module 604, configured to change the virtual prop to an activated state in response to a collision between a second virtual character and the virtual detection line, where the virtual prop is visible to the virtual character of the second camp when in the activated state;
a control module 605, configured to control a negative effect of the second virtual character from the virtual prop.
In an optional embodiment, the setting module 602 is further configured to set a first component and a second component of the virtual prop in the virtual environment in response to the prop use instruction;
a ray module 606 is configured to emit at least one of the virtual detection lines from the first component to the second component;
or,
the setting module 602 is further configured to set a first component and a second component of the virtual prop in the virtual environment in response to the prop use instruction;
the ray module 606 is further configured to emit at least one of the virtual detection lines from the second component to the first component;
or,
the setting module 602 is further configured to set a first component and a second component of the virtual prop in the virtual environment in response to the prop use instruction;
the ray module 606 is further configured to emit at least one of the virtual detection lines from the first component to the second component, and emit at least one of the virtual detection lines from the second component to the first component.
In an optional embodiment, the collision module 603 is further configured to detect that at least one of the virtual detection lines collides with the three-dimensional virtual model of the second virtual character;
the activation module 604, further responsive to at least one of the virtual detection lines colliding with the three-dimensional virtual model of the second virtual character, changing the virtual prop to an activated state;
the control module 605 is configured to control a negative effect of the second virtual character from the virtual prop.
In an optional embodiment, the display module 601 is further configured to display an aiming sight on the virtual environment screen in response to a first prop use instruction;
the setting module 602 is further configured to, in response to a second prop use instruction, place the virtual prop at the aiming position of the aiming sight, so that the first component and the second component of the virtual prop are respectively located at two sides of the aiming position, the bottom of the first component and the bottom of the second component are fixed in the virtual environment, and the virtual detection line is perpendicular to the aiming direction of the aiming sight.
In an optional embodiment, the apparatus further comprises:
a ray module 606, configured to, in response to the second prop use instruction, emit a first straight line from a first spatial point in the aiming direction, the first straight line intersecting the virtual environment at a second spatial point, the first spatial point being a point on the three-dimensional virtual model of the first virtual character;
an obtaining module 607, configured to obtain a first position where the first virtual role is located;
the setting module 602 is further configured to, in response to the straight-line distance between the first position and the second spatial point being less than a distance threshold, place the virtual prop at the aiming position of the aiming sight, so that the first component and the second component are respectively located at two sides of the aiming position, the bottom of the first component and the bottom of the second component are fixed in the virtual environment, and the virtual detection line is perpendicular to the aiming direction of the aiming sight.
In an optional embodiment, the apparatus further comprises:
a timing module 608, configured to calculate the placed duration of the virtual prop in response to the virtual prop being placed successfully;
a failure module 609, configured to determine that the virtual prop fails in response to the placed duration satisfying a time threshold.
In an alternative embodiment, the first camp comprises at least one virtual character; the device further comprises:
the display module 601 is further configured to display a second user interface, where the second user interface includes at least one skill and at least one skill selection control, the at least one skill includes the virtual prop, and the at least one skill selection control includes a virtual prop selection control corresponding to the virtual prop;
the interaction module 610 is configured to trigger the virtual item selection control to generate a selection instruction;
the sending module 611, configured to send a skill selection request to a server in response to a selection instruction that triggers the virtual item selection control to generate, where the skill selection request includes the first virtual character and the virtual item;
a receiving module 612, configured to receive a selection success instruction sent by the server;
the skill module 613 is configured to determine the virtual prop as a skill of the first virtual character in response to receiving a selection success instruction sent by the server, where the selection success instruction is sent after the server determines that the first virtual character is the first virtual character in the first camp to select the virtual prop.
In an optional embodiment, the apparatus further comprises:
the receiving module 612 is further configured to receive a selection failure instruction sent by the server;
a prompt module 614, configured to prompt that the virtual prop cannot be selected in response to receiving a selection failure instruction sent by the server, where the selection failure instruction is sent after the server determines that the first virtual character is not the first virtual character in the first camp to select the virtual prop.
In an alternative embodiment, the negative effect comprises lowering at least one state value of the second virtual character, the state value comprising at least one of a health value, a movement speed, a signal value, a defense value, an attack value, equipment durability, skill mana cost, an economic value, and an equipment quantity.
It should be noted that the apparatus for using a virtual prop provided in the above embodiment is illustrated only by the division of the above functional modules. In practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus for using a virtual prop provided by the above embodiment belongs to the same concept as the method embodiments for using a virtual prop; the specific implementation process is detailed in the method embodiments and will not be repeated here.
Fig. 23 illustrates a block diagram of a terminal 3900 provided in an exemplary embodiment of the present application. The terminal 3900 may be a smartphone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer.
Generally, the terminal 3900 includes: a processor 3901 and a memory 3902.
The processor 3901 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 3901 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 3901 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the wake-up state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state. The processor 3901 may further be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that needs to be displayed on the display screen.
The memory 3902 may include one or more computer-readable storage media, which may be non-transitory. The memory 3902 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 3902 is used to store at least one instruction for execution by processor 3901 to implement a method of using a virtual prop provided by method embodiments herein.
In some embodiments, the terminal 3900 can also optionally include: a peripheral interface 3903 and at least one peripheral. Processor 3901, memory 3902, and peripheral interface 3903 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 3903 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 3904, touch display screen 3905, camera 3906, audio circuitry 3907, positioning component 3908, and power source 3909.
Peripheral interface 3903 can be used to connect at least one peripheral associated with I/O (Input/Output) to processor 3901 and memory 3902. In some embodiments, processor 3901, memory 3902, and peripheral device interface 3903 are integrated on the same chip or circuit board; in some other embodiments, any one or both of processor 3901, memory 3902, and peripheral device interface 3903 may be implemented on separate chips or circuit boards, which are not limited by the present embodiment.
The Radio Frequency circuit 3904 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuitry 3904 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 3904 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals. Optionally, the radio frequency circuitry 3904 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 3904 can communicate with other terminals through at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 3904 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 3905 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display screen 3905 is a touch display screen, it also has the capability to collect touch signals on or above its surface. The touch signals may be input to the processor 3901 as control signals for processing. In this case, the display screen 3905 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 3905, disposed on the front panel of the terminal 3900; in other embodiments, there may be at least two display screens 3905, respectively disposed on different surfaces of the terminal 3900 or in a folded design; in still other embodiments, the display screen 3905 may be a flexible display screen, disposed on a curved surface or a folded surface of the terminal 3900. The display screen 3905 may even be set in a non-rectangular irregular shape, that is, a shaped screen. The display screen 3905 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
Camera assembly 3906 is used to capture images or video. Optionally, camera assembly 3906 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 3906 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
Audio circuitry 3907 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 3901 for processing or inputting the electric signals to the radio frequency circuit 3904 for realizing voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of the terminal 3900. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 3901 or the radio frequency circuit 3904 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuit 3907 may also include a headphone jack.
The positioning component 3908 is configured to determine the current geographic location of the terminal 3900 to implement navigation or LBS (Location Based Service). The positioning component 3908 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
Power supply 3909 is used to provide power to the various components in terminal 3900. Power supply 3909 can be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When power supply 3909 includes a rechargeable battery, the rechargeable battery can be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 3900 also includes one or more sensors 3910. The one or more sensors 3910 include, but are not limited to: an acceleration sensor 3911, a gyro sensor 3912, a pressure sensor 3913, a fingerprint sensor 3914, an optical sensor 3915, and a proximity sensor 3916.
The acceleration sensor 3911 may detect the magnitude of acceleration on three coordinate axes of a coordinate system established with the terminal 3900. For example, the acceleration sensor 3911 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 3901 may control the touch display screen 3905 to display a user interface in a landscape view or a portrait view based on the gravitational acceleration signal collected by the acceleration sensor 3911. The acceleration sensor 3911 may also be used for acquisition of motion data of a game or a user.
The gyroscope sensor 3912 may detect a body direction and a rotation angle of the terminal 3900, and the gyroscope sensor 3912 may cooperate with the acceleration sensor 3911 to acquire a 3D motion of the user on the terminal 3900. From the data collected by the gyro sensor 3912, the processor 3901 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 3913 may be disposed on side frames of the terminal 3900 and/or underlying layers of the touch display screen 3905. When the pressure sensor 3913 is disposed on the side frame of the terminal 3900, a user's holding signal of the terminal 3900 can be detected, and the processor 3901 performs left-right hand recognition or shortcut operation according to the holding signal acquired by the pressure sensor 3913. When the pressure sensor 3913 is disposed at a lower layer of the touch display screen 3905, the processor 3901 controls the operability controls on the UI interface according to the pressure operation of the user on the touch display screen 3905. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 3914 is used to collect a user's fingerprint. The processor 3901 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 3914, or the fingerprint sensor 3914 identifies the user's identity according to the collected fingerprint. When the user's identity is identified as a trusted identity, the processor 3901 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 3914 may be disposed on the front, back, or side of the terminal 3900. When a physical button or a manufacturer Logo is disposed on the terminal 3900, the fingerprint sensor 3914 may be integrated with the physical button or the manufacturer Logo.
The optical sensor 3915 is used to collect the ambient light intensity. In one embodiment, the processor 3901 may control the display brightness of the touch display screen 3905 based on the intensity of ambient light collected by the optical sensor 3915. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 3905 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 3905 is turned down. In another embodiment, the processor 3901 may also dynamically adjust the shooting parameters of the camera assembly 3906 based on the intensity of ambient light collected by the optical sensor 3915.
A proximity sensor 3916, also known as a distance sensor, is typically disposed on the front panel of the terminal 3900. The proximity sensor 3916 is used to capture the distance between the user and the front face of the terminal 3900. In one embodiment, the touch display screen 3905 is controlled by the processor 3901 to switch from a bright screen state to a dark screen state when the proximity sensor 3916 detects that the distance between the user and the front face of the terminal 3900 gradually decreases; when the proximity sensor 3916 detects that the distance between the user and the front face of the terminal 3900 gradually becomes larger, the touch display screen 3905 is controlled by the processor 3901 to switch from a breath-screen state to a light-screen state.
Those skilled in the art will appreciate that the architecture shown in fig. 23 does not constitute a limitation of terminal 3900, and may include more or fewer components than those shown, or some of the components may be combined, or a different arrangement of components may be employed.
The present application further provides a computer device comprising a processor and a memory, wherein the memory stores at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by the processor to implement the method for using the virtual prop provided in any of the above exemplary embodiments.
The present application further provides a computer-readable storage medium having at least one instruction, at least one program, a set of codes, or a set of instructions stored therein, which is loaded and executed by the processor to implement the method for using the virtual prop provided in any of the above exemplary embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (12)

1. A method for using a virtual prop, the method comprising:
displaying a first user interface, wherein the first user interface comprises a virtual environment picture, and the virtual environment picture is a picture for observing a virtual environment from the view angle of a first virtual role belonging to a first camp;
setting a virtual prop in an inactivated state in the virtual environment in response to a prop use instruction, the virtual prop being a trap prop that is invisible to the virtual characters of a second camp when in the inactivated state, the virtual prop comprising a virtual detection line for simulating a non-physical object;
and in response to the collision of a second virtual character with the virtual detection line, changing the virtual prop into an activated state, and controlling the second virtual character to be subjected to the negative effect of the virtual prop, wherein the second virtual character is a virtual character belonging to the second camp, and the virtual prop is visible to the virtual character of the second camp when being in the activated state.
2. The method of claim 1, wherein said setting a virtual prop in an inactive state in the virtual environment in response to a prop use instruction comprises:
setting a first component and a second component of the virtual prop in the virtual environment in response to the prop use instruction, and emitting at least one of the virtual detection lines from the first component to the second component;
or,
setting a first component and a second component of the virtual prop in the virtual environment in response to the prop use instruction, and emitting at least one of the virtual detection lines from the second component to the first component;
or,
setting a first component and a second component of the virtual prop in the virtual environment in response to the prop use instruction, emitting at least one of the virtual detection lines from the first component to the second component, and emitting at least one of the virtual detection lines from the second component to the first component.
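A minimal sketch of the three alternative emission schemes of claim 2, under the assumption that components and detection lines are represented as simple point and segment records; Vec3 and DetectionLine are illustrative stand-ins for engine types:

```python
# Sketch of claim 2's alternatives: detection lines emitted from the first
# component to the second, from the second to the first, or in both directions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Vec3:
    x: float
    y: float
    z: float

@dataclass(frozen=True)
class DetectionLine:
    start: Vec3
    end: Vec3

def emit_detection_lines(first: Vec3, second: Vec3, mode: str) -> list[DetectionLine]:
    """Return the detection lines for one of the three claimed emission modes."""
    lines: list[DetectionLine] = []
    if mode in ("first_to_second", "both"):
        lines.append(DetectionLine(first, second))
    if mode in ("second_to_first", "both"):
        lines.append(DetectionLine(second, first))
    return lines
```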
3. The method of claim 2, wherein said changing the virtual prop to an activated state and controlling the second virtual character to suffer the negative effect of the virtual prop in response to a collision of a second virtual character with the virtual detection line comprises:
in response to at least one virtual detection line colliding with the three-dimensional virtual model of the second virtual character, changing the virtual prop to the activated state and controlling the second virtual character to suffer the negative effect of the virtual prop.
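As one possible, non-limiting reading of claim 3, the collision between a detection line and the character's three-dimensional model could be approximated by a segment-versus-bounding-box test; an actual engine would query its own colliders instead:

```python
# Illustrative collision test: a detection line (segment p0 -> p1) against a
# character model approximated by an axis-aligned bounding box (AABB).
def segment_hits_aabb(p0, p1, box_min, box_max) -> bool:
    """Slab test: does the segment from p0 to p1 intersect [box_min, box_max]?

    Points are (x, y, z) tuples; the segment parameter t is clamped to [0, 1].
    """
    t_enter, t_exit = 0.0, 1.0
    for axis in range(3):
        d = p1[axis] - p0[axis]
        if abs(d) < 1e-9:
            # Segment parallel to this slab: reject if outside it.
            if p0[axis] < box_min[axis] or p0[axis] > box_max[axis]:
                return False
            continue
        t0 = (box_min[axis] - p0[axis]) / d
        t1 = (box_max[axis] - p0[axis]) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_enter = max(t_enter, t0)
        t_exit = min(t_exit, t1)
        if t_enter > t_exit:
            return False
    return True
```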
4. A method according to claim 2 or 3, wherein said setting a first component and a second component of the virtual prop in the virtual environment in response to a prop use instruction comprises:
displaying an aiming sight on the virtual environment picture in response to a first prop use instruction;
in response to a second prop use instruction, placing the virtual prop at the aiming position of the aiming sight, wherein the first component and the second component of the virtual prop are respectively located on either side of the aiming position, the bottom of the first component and the bottom of the second component are fixed in the virtual environment, and the virtual detection line is perpendicular to the aiming direction of the aiming sight.
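Claim 4's geometry can be sketched as follows: the two components are offset from the aiming position along the ground-plane direction perpendicular to the aiming direction, which keeps the detection line perpendicular to where the player aims. Function and parameter names are hypothetical:

```python
# Sketch of the placement geometry of claim 4. Positions and directions are
# (x, y, z) tuples; half_width is the hypothetical offset of each component.
import math

def place_components(aim_pos, aim_dir, half_width: float):
    """Return (first, second) component positions on either side of the aim point."""
    # Project the aiming direction onto the ground plane and normalize it.
    dx, dz = aim_dir[0], aim_dir[2]
    length = math.hypot(dx, dz) or 1.0  # degenerate (straight up/down): no offset direction
    dx, dz = dx / length, dz / length
    # Perpendicular in the ground plane: rotate the direction by 90 degrees.
    px, pz = -dz, dx
    first = (aim_pos[0] + px * half_width, aim_pos[1], aim_pos[2] + pz * half_width)
    second = (aim_pos[0] - px * half_width, aim_pos[1], aim_pos[2] - pz * half_width)
    return first, second
```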
5. The method of claim 4, wherein said placing the virtual prop at the aiming position of the aiming sight in response to a second prop use instruction comprises:
emitting a first straight line from a first spatial point along the aiming direction in response to the second prop use instruction, the first straight line intersecting the virtual environment at a second spatial point, the first spatial point being a point located on the three-dimensional virtual model of the first virtual character;
acquiring a first position of the first virtual character;
in response to the straight-line distance between the first position and the second spatial point being less than a distance threshold, placing the virtual prop at the aiming position of the aiming sight, such that the first component and the second component are respectively located on either side of the aiming position, the bottom of the first component and the bottom of the second component are fixed in the virtual environment, and the virtual detection line is perpendicular to the aiming direction of the aiming sight.
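A minimal sketch of claim 5's placement check, assuming the engine exposes some ray-cast scene query; here it is passed in as a callable named raycast, a hypothetical stand-in:

```python
# Sketch of claim 5: ray-cast from a point on the character along the aiming
# direction to find the second spatial point, then only allow placement if
# that point lies within a distance threshold of the character's position.
import math

def try_place(first_point, aim_dir, character_pos, threshold: float, raycast):
    """Return the placement point (the ray hit) if within range, else None."""
    hit = raycast(first_point, aim_dir)  # second spatial point, or None if no hit
    if hit is None:
        return None
    return hit if math.dist(character_pos, hit) < threshold else None
```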
6. The method of claim 4, further comprising:
in response to successful placement of the virtual prop, timing the placed duration of the virtual prop;
determining that the virtual prop is invalid in response to the placed duration reaching a time threshold.
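Claim 6's lifetime rule amounts to a simple timer; a sketch, assuming a monotonic clock and a hypothetical PlacedProp wrapper:

```python
# Sketch of claim 6: time the prop from successful placement and treat it as
# invalid once the placed duration reaches the threshold.
import time

class PlacedProp:
    def __init__(self, lifetime_seconds: float) -> None:
        self.placed_at = time.monotonic()
        self.lifetime = lifetime_seconds

    def is_expired(self) -> bool:
        return time.monotonic() - self.placed_at >= self.lifetime
```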
7. A method according to any one of claims 1 to 3, wherein the first camp comprises at least one virtual character, the method further comprising:
displaying a second user interface, wherein the second user interface comprises at least one skill and at least one skill selection control, the at least one skill comprising the virtual prop, and the at least one skill selection control comprising a virtual prop selection control corresponding to the virtual prop;
sending a skill selection request to a server in response to a selection instruction triggered on the virtual prop selection control, wherein the skill selection request comprises the first virtual character and the virtual prop;
in response to receiving a selection success instruction sent by the server, determining the virtual prop as a skill of the first virtual character, wherein the selection success instruction is sent after the server determines that the first virtual character is the first virtual character in the first camp to select the virtual prop.
8. The method of claim 7, further comprising:
prompting that the virtual prop cannot be selected in response to receiving a selection failure instruction sent by the server, wherein the selection failure instruction is sent after the server determines that the first virtual character is not the first virtual character in the first camp to select the virtual prop.
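The server-side arbitration of claims 7 and 8 is, in effect, first-come-first-served per camp; a non-authoritative sketch, with hypothetical class and field names:

```python
# Sketch of claims 7-8: only the first character in a camp to request the trap
# prop gets it; later requests from the same camp fail.
class SkillSelectionServer:
    def __init__(self) -> None:
        self.taken: dict[tuple[int, str], str] = {}  # (camp, prop) -> character id

    def select(self, camp: int, prop: str, character_id: str) -> bool:
        key = (camp, prop)
        if key in self.taken and self.taken[key] != character_id:
            return False  # selection failure: someone in this camp chose it first
        self.taken[key] = character_id
        return True  # selection success
```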
9. The method of any of claims 1 to 3, wherein the negative effect comprises reducing at least one state value of the second virtual character, the state value comprising at least one of a health value, a movement speed, a signal value, a defense value, an attack value, an equipment durability, a skill mana amount, an economic value, and an equipment quantity.
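An illustrative sketch of applying the negative effect of claim 9, with a reduced, hypothetical set of state values:

```python
# Sketch of claim 9: reduce one or more state values of the hit character,
# clamped at zero. Field names are illustrative, not exhaustive.
from dataclasses import dataclass

@dataclass
class StateValues:
    health: float = 100.0
    movement_speed: float = 1.0
    signal: float = 100.0

def apply_negative_effect(state: StateValues, deltas: dict[str, float]) -> None:
    """Reduce each named state value by the given amount; unknown names are ignored."""
    for name, amount in deltas.items():
        if hasattr(state, name):
            setattr(state, name, max(0.0, getattr(state, name) - amount))
```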
10. An apparatus for using a virtual prop, the apparatus comprising:
a display module, configured to display a first user interface, wherein the first user interface comprises a virtual environment picture, and the virtual environment picture is a picture of a virtual environment observed from the perspective of a first virtual character belonging to a first camp;
a setting module, configured to set a virtual prop in an inactive state in the virtual environment in response to a prop use instruction, wherein the virtual prop is a trap prop that is invisible to virtual characters of a second camp when in the inactive state, the virtual prop comprising a virtual detection line for simulating a non-physical object;
a collision module, configured to detect that a second virtual character collides with the virtual detection line, wherein the second virtual character is a virtual character belonging to the second camp;
an activation module, configured to change the virtual prop to an activated state in response to the second virtual character colliding with the virtual detection line, the virtual prop being visible to the virtual characters of the second camp when in the activated state;
and a control module, configured to control the second virtual character to suffer the negative effect of the virtual prop.
11. A computer device, characterised in that it comprises a processor and a memory, wherein the memory stores at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the method for using a virtual prop according to any one of claims 1 to 9.
12. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the method for using a virtual prop according to any one of claims 1 to 9.
CN202010188505.0A (priority date 2020-03-17, filed 2020-03-17): Using method, device, equipment and medium of virtual prop. Status: Pending. Publication: CN111389000A (en).

Priority Applications (1)

Application Number: CN202010188505.0A; Publication: CN111389000A (en); Priority Date: 2020-03-17; Filing Date: 2020-03-17; Title: Using method, device, equipment and medium of virtual prop

Publications (1)

Publication Number: CN111389000A; Publication Date: 2020-07-10

Family ID: 71417230

Country Status (1): CN, CN111389000A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010072768A (en) * 2008-09-16 2010-04-02 Namco Bandai Games Inc Program, information storage medium, and game device
CN204944321U (en) * 2013-02-07 2016-01-06 马卡里 Laser shooting antagonism game infrared ray antitank grenade and laser shooting antagonism games system
CN106527702A (en) * 2016-11-03 2017-03-22 网易(杭州)网络有限公司 Virtual reality interaction method and apparatus
JP2018028920A (en) * 2017-09-26 2018-02-22 株式会社コロプラ Method for providing virtual space, program and recording medium
CN108804013A (en) * 2018-06-15 2018-11-13 网易(杭州)网络有限公司 Method, apparatus, electronic equipment and the storage medium of information alert
CN109917910A (en) * 2019-02-19 2019-06-21 腾讯科技(深圳)有限公司 Display methods, device, equipment and the storage medium of line style technical ability
CN110507993A (en) * 2019-08-23 2019-11-29 腾讯科技(深圳)有限公司 Control method, apparatus, equipment and the medium of virtual objects
CN110585712A (en) * 2019-09-20 2019-12-20 腾讯科技(深圳)有限公司 Method, device, terminal and medium for throwing virtual explosives in virtual environment
CN110681152A (en) * 2019-10-17 2020-01-14 腾讯科技(深圳)有限公司 Virtual object control method, device, terminal and storage medium
CN110711384A (en) * 2019-10-24 2020-01-21 网易(杭州)网络有限公司 Game history operation display method, device and equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
嗨我是KH: "《bilibili》", 14 March 2020 *
小昕君聊游戏: "《游戏达人》", 26 February 2020 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111888761A (en) * 2020-08-12 2020-11-06 腾讯科技(深圳)有限公司 Control method and device of virtual role, storage medium and electronic device
CN112402966A (en) * 2020-11-20 2021-02-26 腾讯科技(深圳)有限公司 Virtual object control method, device, terminal and storage medium
WO2022105480A1 (en) * 2020-11-20 2022-05-27 腾讯科技(深圳)有限公司 Virtual object control method, device, terminal, storage medium, and program product
CN113457168A (en) * 2021-07-21 2021-10-01 北京字跳网络技术有限公司 Interaction method, interaction device and computer storage medium

Similar Documents

Publication Publication Date Title
CN111589131B (en) Control method, device, equipment and medium of virtual role
CN111265869B (en) Virtual object detection method, device, terminal and storage medium
CN110433488B (en) Virtual character-based fight control method, device, equipment and medium
US20220152501A1 (en) Virtual object control method and apparatus, device, and readable storage medium
CN110917619B (en) Interactive property control method, device, terminal and storage medium
CN111408133B (en) Interactive property display method, device, terminal and storage medium
CN110585710B (en) Interactive property control method, device, terminal and storage medium
CN110917623B (en) Interactive information display method, device, terminal and storage medium
CN111589124A (en) Virtual object control method, device, terminal and storage medium
CN111389005B (en) Virtual object control method, device, equipment and storage medium
CN110465083B (en) Map area control method, apparatus, device and medium in virtual environment
CN111338534A (en) Virtual object game method, device, equipment and medium
CN111714893A (en) Method, device, terminal and storage medium for controlling virtual object to recover attribute value
CN112870715B (en) Virtual item putting method, device, terminal and storage medium
CN112076469A (en) Virtual object control method and device, storage medium and computer equipment
CN112316421B (en) Equipment method, device, terminal and storage medium of virtual item
CN111596838B (en) Service processing method and device, computer equipment and computer readable storage medium
CN111744186A (en) Virtual object control method, device, equipment and storage medium
CN112138384A (en) Using method, device, terminal and storage medium of virtual throwing prop
CN111389000A (en) Using method, device, equipment and medium of virtual prop
CN113041622A (en) Virtual throwing object throwing method in virtual environment, terminal and storage medium
CN113289331A (en) Display method and device of virtual prop, electronic equipment and storage medium
CN113117330A (en) Skill release method, device, equipment and medium for virtual object
CN112044073A (en) Using method, device, equipment and medium of virtual prop
CN111921190A (en) Method, device, terminal and storage medium for equipping props of virtual objects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (ref country code: HK; ref legal event code: DE; ref document number: 40025831)
RJ01 Rejection of invention patent application after publication (application publication date: 20200710)