CN111672118A - Virtual object aiming method, device, equipment and medium

Info

Publication number
CN111672118A
Authority
CN
China
Prior art keywords
virtual object
aiming
virtual
target
point
Prior art date
Legal status
Granted
Application number
CN202010507566.9A
Other languages
Chinese (zh)
Other versions
CN111672118B (en)
Inventor
万钰林
粟山东
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010507566.9A
Publication of CN111672118A
Application granted
Publication of CN111672118B
Legal status: Active

Classifications

    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 — Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/55 — Controlling game characters or game objects based on the game progress
    • A63F 13/56 — Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F 13/50 — Controlling the output signals based on the game progress
    • A63F 13/53 — Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F 13/537 — Controlling the output signals based on the game progress involving additional visual information provided to the game scene, using indicators, e.g. showing the condition of a game character on screen
    • A63F 2300/00 — Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/30 — Features of games using an electronically generated display having two or more dimensions, characterized by output arrangements for receiving control signals generated by the game device
    • A63F 2300/308 — Details of the user interface

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a virtual object aiming method, apparatus, device, and medium, and relates to the field of virtual environments. The method comprises the following steps: displaying a user interface, wherein the user interface comprises a virtual environment picture, and the virtual environment picture comprises a first virtual object and a second virtual object located in a virtual environment; in response to receiving a start instruction of an aiming operation at a first time, displaying a point skill indicator on the ground plane of the virtual environment; in response to receiving an end instruction of the aiming operation at a second time, controlling the first virtual object to aim at a target virtual object, wherein the target virtual object is one virtual object determined from the second virtual objects according to a first priority principle, and the first priority principle is that the priority of the virtual object type is higher than the priority of the straight-line distance between the second virtual object and the aiming point; and the difference between the second time and the first time is smaller than a time threshold. The method can improve the accuracy of the user's aiming.

Description

Virtual object aiming method, device, equipment and medium
Technical Field
The embodiments of the present application relate to the field of virtual environments, and in particular to a virtual object aiming method, apparatus, device, and medium.
Background
A battle game is a game in which multiple user accounts compete in the same scene. Optionally, the battle game may be a Multiplayer Online Battle Arena (MOBA) game.
In a typical MOBA game, a first virtual object controlled by the user possesses skills. When using a skill, the terminal may display a fan-shaped skill indicator comprising a sector area under the feet of the first virtual object, the axis of symmetry of the sector being the aiming line. The user can drag the fan-shaped skill indicator to rotate it around the first virtual object; the candidate virtual object located in the sector area and closest to the aiming line is determined as the aimed target virtual object, and the user controls the first virtual object to release the skill on that target virtual object.
The above technique determines the candidate virtual object closest to the aiming line as the aimed virtual object; when two candidate virtual objects both lie on the aiming line, the selected object is not necessarily the aiming target the user desires.
Disclosure of Invention
The embodiments of the present application provide a virtual object aiming method, apparatus, device, and medium, which can improve the accuracy of a user's aiming. The technical solution is as follows:
in one aspect, a method for aiming a virtual object is provided, the method comprising:
displaying a user interface, the user interface including a virtual environment screen, the virtual environment screen including a first virtual object and a second virtual object located in the virtual environment;
in response to receiving a start instruction of an aiming operation at a first time, displaying a point skill indicator on the ground plane of the virtual environment, the point skill indicator being used to indicate the aiming point selected by the aiming operation on the ground plane of the virtual environment;
in response to receiving an end instruction of the aiming operation at a second time, controlling the first virtual object to aim at a target virtual object, the target virtual object being one virtual object determined from the second virtual objects according to a first priority principle, the first priority principle being that the priority of the virtual object type is higher than the priority of the straight-line distance between the second virtual object and the aiming point;
wherein a difference between the second time and the first time is less than a time threshold.
In another aspect, there is provided a targeting apparatus for a virtual object, the apparatus comprising:
a display module for displaying a user interface, the user interface including a virtual environment screen, the virtual environment screen including a first virtual object and a second virtual object located in the virtual environment;
an interaction module, configured to receive an aiming operation and generate a start instruction at a first time;
a display module, configured to display a point skill indicator on the ground plane of the virtual environment in response to receiving the start instruction of the aiming operation at the first time, the point skill indicator being used to indicate the aiming point selected by the aiming operation on the ground plane of the virtual environment;
the interaction module being further configured to generate an end instruction in response to the aiming operation stopping at a second time;
an aiming module, configured to control the first virtual object to aim at a target virtual object in response to receiving the end instruction of the aiming operation at the second time, the target virtual object being one virtual object determined from the second virtual objects according to a first priority principle, the first priority principle being that the priority of the virtual object type is higher than the priority of the straight-line distance between the second virtual object and the aiming point;
wherein a difference between the second time and the first time is less than a time threshold.
In another aspect, there is provided a computer device comprising a processor and a memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions that is loaded and executed by the processor to implement a method of aiming a virtual object as described above.
In another aspect, there is provided a computer-readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the virtual object aiming method of the above aspect.
In another aspect, a computer program product is provided which, when run on a computer, causes the computer to perform the method of aiming a virtual object as described in the above aspect.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
When the user's aiming operation is received, the client aims at the target virtual object with the higher priority for the user according to the aiming point selected by the user and the virtual object type of each second virtual object. For example, a user will typically prefer to attack a hero, so the client preferentially aims at heroes for the user. This improves the client's ability to aim at virtual objects, reduces the operation difficulty required for the user to aim at a virtual object, and improves the human-computer interaction effect and the accuracy of the aiming operation. When a battle is intense, there may be many aimable virtual objects on the virtual environment picture, for example an enemy hero together with several minions. To gain an advantage, the user needs to aim quickly in order to control the first virtual object to attack the enemy hero; but with so many aimable virtual objects, most of them minions the user does not want to aim at, it is difficult for the user to aim at the enemy hero accurately. With the method provided by the embodiments of the present application, the user only needs to perform a quick aiming operation giving a rough aiming position, and the client preferentially aims at the hero rather than the minions according to the types of the aimable virtual objects, improving both the efficiency and the accuracy of the user's aiming operation.
Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a block diagram of a computer system provided in an exemplary embodiment of the present application;
FIG. 2 is a schematic user interface diagram of a method for targeting a virtual object provided by another exemplary embodiment of the present application;
FIG. 3 is an illustration of a dead zone of a method of targeting a virtual object provided by another exemplary embodiment of the present application;
FIG. 4 is a schematic view of a virtual environment for a method of targeting a virtual object provided by another exemplary embodiment of the present application;
FIG. 5 is a schematic view of a virtual environment for a method of targeting a virtual object provided by another exemplary embodiment of the present application;
FIG. 6 is a method flow diagram of a method of targeting a virtual object provided by another exemplary embodiment of the present application;
FIG. 7 is a schematic view of a camera model corresponding to a perspective of a virtual character provided in an exemplary embodiment of the present application;
FIG. 8 is a schematic user interface diagram of a method for targeting a virtual object as provided by another exemplary embodiment of the present application;
FIG. 9 is a schematic view of a virtual environment for a method of targeting a virtual object provided by another exemplary embodiment of the present application;
FIG. 10 is a schematic view of a virtual environment for a method of targeting a virtual object provided by another exemplary embodiment of the present application;
FIG. 11 is a method flow diagram of a method of targeting a virtual object provided by another exemplary embodiment of the present application;
FIG. 12 is a schematic view of a virtual environment for a method of targeting a virtual object provided by another exemplary embodiment of the present application;
FIG. 13 is a method flow diagram of a method of targeting a virtual object provided by another exemplary embodiment of the present application;
FIG. 14 is a diagram illustrating selected special effects of a method for targeting a virtual object according to another exemplary embodiment of the present application;
FIG. 15 is a method flow diagram of a method of targeting a virtual object provided by another exemplary embodiment of the present application;
FIG. 16 is a method flow diagram of a method of targeting a virtual object provided by another exemplary embodiment of the present application;
FIG. 17 is a method flow diagram of a method of targeting a virtual object provided by another exemplary embodiment of the present application;
FIG. 18 is a schematic user interface diagram of a method for targeting a virtual object as provided by another exemplary embodiment of the present application;
FIG. 19 is a timeline illustration of a method of targeting a virtual object provided by another exemplary embodiment of the present application;
FIG. 20 is a method flow diagram of a method of targeting a virtual object provided by another exemplary embodiment of the present application;
FIG. 21 is a method flow diagram of a method of targeting a virtual object provided by another exemplary embodiment of the present application;
FIG. 22 is a schematic diagram of a classifier for a method of targeting a virtual object provided by another exemplary embodiment of the present application;
FIG. 23 is a method flow diagram of a method of targeting a virtual object provided by another exemplary embodiment of the present application;
FIG. 24 is a schematic view of a virtual environment for a method of targeting virtual objects provided by another exemplary embodiment of the present application;
FIG. 25 is an apparatus block diagram of a targeting apparatus for a virtual object provided in another exemplary embodiment of the present application;
FIG. 26 is a block diagram of a terminal provided in another exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
First, terms referred to in the embodiments of the present application are briefly described:
Virtual environment: the virtual environment that is displayed (or provided) when an application runs on the terminal. The virtual environment may be a simulation of the real world, a semi-simulated and semi-fictional three-dimensional world, or a purely fictional three-dimensional world. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment. Optionally, the virtual environment is also used for a battle between at least two virtual objects, with virtual resources available for their use. Optionally, the virtual environment includes symmetric lower-left and upper-right regions; the virtual objects of two enemy camps each occupy one region, and destroying the target building/stronghold/base/crystal deep in the opposing region is the winning goal.
Virtual object: a movable or immovable object in the virtual environment. A movable object may be at least one of a virtual character, a virtual animal, and an animation character. An immovable object may be at least one of a virtual building, a virtual plant, and virtual terrain. Optionally, when the virtual environment is three-dimensional, each virtual object is a three-dimensional model with its own shape and volume, occupying part of the space in the three-dimensional virtual environment. Optionally, a virtual object is a three-dimensional character built on three-dimensional human skeleton technology, which takes on different appearances by wearing different skins. In some implementations, a virtual object may instead be implemented with a 2.5-dimensional or two-dimensional model, which is not limited in this application. For example, according to how they are controlled, virtual objects can be divided into user-controlled virtual objects and server-controlled virtual objects: a user-controlled virtual object is an object controlled through a client and movable in the virtual environment, while a server-controlled virtual object is controlled by an automatic control algorithm or an artificial intelligence program on the client or server. Server-controlled virtual objects include both movable and immovable objects in the virtual environment. For example, an immovable object may respond to or affect the activity of a movable object: a movable object may destroy an immovable object, or may enter a stealth state upon entering an immovable object. Illustratively, the first virtual object in this application is a virtual object controlled by a client, and the target virtual object may be a virtual object controlled by another client or by the server.
Multiplayer online tactical competition: in the virtual environment, different virtual teams belonging to at least two enemy camps occupy their respective map areas and compete with a certain winning condition as the goal. Such winning conditions include, but are not limited to: occupying strongholds or destroying the strongholds of the enemy camp, killing the virtual objects of the enemy camp, ensuring one's own survival in a specified scene and time, seizing a certain resource, or outscoring the opponent within a specified time. Tactical competitions can be carried out in rounds, and the map of each round may be the same or different. Each virtual team includes one or more virtual objects, such as 1, 2, 3, or 5.
MOBA game: a game in which several strongholds are provided in a virtual environment, and users in different camps control virtual objects to battle in the virtual environment, occupy strongholds, or destroy the strongholds of the enemy camp. For example, a MOBA game may divide users into two enemy camps and scatter the virtual objects they control across the virtual environment to compete with one another, with destroying or occupying all enemy strongholds as the winning condition. A MOBA game proceeds in rounds, and the duration of one round runs from the moment the game starts to the moment the winning condition is met.
User interface (UI) control: any visual control or element that can be seen on the user interface of an application, such as a picture, an input box, a text box, a button, or a label. Some UI controls respond to user operations to control the master virtual object to release a skill; for example, the user triggers a skill control to control the master virtual object to release the skill. The UI controls involved in the embodiments of this application include, but are not limited to: skill controls and movement controls.
FIG. 1 shows a block diagram of a computer system provided by an exemplary embodiment of the present application. The computer system 100 includes: a first terminal 110, a server 120, and a second terminal 130.
The first terminal 110 has a client 111 supporting a virtual environment installed and running on it; the client 111 may be a multiplayer online battle program. When the first terminal runs the client 111, a user interface of the client 111 is displayed on the screen of the first terminal 110. The client may be any one of a military simulation program, a battle royale shooting game, a Virtual Reality (VR) application, an Augmented Reality (AR) program, a three-dimensional map program, a virtual reality game, an augmented reality game, a First-Person Shooting game (FPS), a Third-Person Shooting game (TPS), a Multiplayer Online Battle Arena game (MOBA), and a strategy game (SLG). In this embodiment, the client is illustrated as a MOBA game. The first terminal 110 is a terminal used by the first user 112, who uses the first terminal 110 to control the activity of a first virtual object located in the virtual environment; the first virtual object may be referred to as the master virtual object of the first user 112. The activities of the first virtual object include, but are not limited to: adjusting body posture, crawling, walking, running, riding, flying, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the first virtual object is a first virtual character, such as a simulated character or an animation character.
The second terminal 130 has a client 131 supporting a virtual environment installed and running on it; the client 131 may be a multiplayer online battle program. When the second terminal 130 runs the client 131, a user interface of the client 131 is displayed on the screen of the second terminal 130. The client may be any one of a military simulation program, a battle royale shooting game, a VR application, an AR program, a three-dimensional map program, a virtual reality game, an augmented reality game, an FPS, a TPS, a MOBA, and an SLG; in this embodiment, the client is illustrated as a MOBA game. The second terminal 130 is a terminal used by the second user 113, who uses the second terminal 130 to control the activity of a target virtual object located in the virtual environment; the target virtual object may be referred to as the master virtual object of the second user 113. Illustratively, the target virtual object is a second virtual character, such as a simulated character or an animation character.
Optionally, the first virtual character and the second virtual character are in the same virtual environment. Optionally, the first virtual character and the second virtual character may belong to the same camp, the same team, or the same organization, have a friend relationship, or have temporary communication rights. Alternatively, the first virtual character and the second virtual character may belong to different camps, different teams, or different organizations, or have a hostile relationship.
Optionally, the clients installed on the first terminal 110 and the second terminal 130 are the same, or are the same type of client on different operating system platforms (Android or iOS). The first terminal 110 may generally refer to one of a plurality of terminals, and the second terminal 130 to another; this embodiment is illustrated with only the first terminal 110 and the second terminal 130. The device types of the first terminal 110 and the second terminal 130 are the same or different and include at least one of: a smartphone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, and a desktop computer.
Only two terminals are shown in FIG. 1, but in different embodiments there are a plurality of other terminals 140 that may access the server 120. Optionally, one or more of the terminals 140 correspond to a developer: a development and editing platform for the virtual environment client is installed on such a terminal 140, on which the developer can edit and update the client and transmit the updated client installation package to the server 120 through a wired or wireless network; the first terminal 110 and the second terminal 130 can then download the client installation package from the server 120 to update the client.
The first terminal 110, the second terminal 130, and the other terminals 140 are connected to the server 120 through a wireless network or a wired network.
The server 120 includes at least one of a single server, multiple servers, a cloud computing platform, and a virtualization center. The server 120 is used to provide background services for clients supporting a three-dimensional virtual environment. Optionally, the server 120 undertakes the primary computing work and the terminals the secondary computing work; alternatively, the server 120 undertakes the secondary computing work and the terminals the primary computing work; or the server 120 and the terminals perform cooperative computing using a distributed computing architecture.
In one illustrative example, the server 120 includes a processor 122, a user account database 123, a battle service module 124, and a user-oriented Input/Output Interface (I/O Interface) 125. The processor 122 is configured to load instructions stored in the server 120 and to process data in the user account database 123 and the battle service module 124; the user account database 123 is configured to store data of the user accounts used by the first terminal 110, the second terminal 130, and the other terminals 140, such as the avatar of a user account, its nickname, its combat effectiveness index, and the service area where it is located; the battle service module 124 is configured to provide multiple battle rooms, such as 1V1, 3V3, and 5V5 battles, for users to fight in; the user-oriented I/O interface 125 is configured to establish communication with the first terminal 110 and/or the second terminal 130 through a wireless or wired network to exchange data.
With reference to the above description of the virtual environment and of the implementation environment, the virtual object aiming method provided in the embodiments of the present application is now described; its execution subject is illustrated as a client running on a terminal shown in FIG. 1. The terminal runs an application program that supports a virtual environment.
For example, the virtual object aiming method provided by the present application is applied to a MOBA game.
In the MOBA game, the user can select an attack target for the first virtual object using an aiming control, where the aiming control may be a general attack control corresponding to a general attack or a skill control corresponding to a skill attack.
Illustratively, the aiming control is a wheel-type aiming control. As shown in FIG. 2, a circular aiming control 201 is displayed on the first user interface, and the user can call up the wheel-type virtual joystick 202 of the aiming control by pressing the aiming control 201. The wheel-type virtual joystick consists of a wheel region 203 and a joystick 204 (joystick button): the wheel region 203 is the operable range of the wheel-type virtual joystick, the joystick 204 marks the contact position on the wheel-type virtual joystick, and the user can slide the joystick 204 arbitrarily within the wheel region 203. Illustratively, as shown in FIG. 3, the wheel region 203 is further divided into a dead zone 2032 and an effective wheel region 2031: the dead zone 2032 is a circular area concentric with the wheel region 203 and of smaller radius, and the effective wheel region 2031 is the annular area of the wheel region 203 outside the dead zone 2032. The figure only illustrates a circular aiming control; the aiming control may also take other shapes, such as an ellipse, a square, a rectangle, a triangle, or a pentagon, and the shape of the aiming control is not limited in this embodiment.
Illustratively, the aiming control has three trigger modes: quick trigger, quick-drag trigger (proposed in this application), and drag trigger. Illustratively, all three trigger modes are continuous triggers. When the user triggers the control with a finger, a continuous trigger is one complete trigger operation lasting from the moment the finger presses the control to the moment the finger leaves it. Illustratively, when the user's finger presses the aiming control, the client records the activation point where the finger pressed down and periodically samples the contact as the finger moves on the aiming control; when the client does not detect a next contact within a period of time, it determines that the trigger operation has ended. For example, the client obtains the contact position of the finger on the aiming control every 0.01 ms; when it does not obtain the next contact position of the user's finger within 0.05 ms, it determines that the finger has left the aiming control and that this trigger operation has ended.
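A minimal sketch of this press/sample/timeout logic in Python follows. All identifiers are hypothetical, and the millisecond figures simply restate the example above; assume the input system calls on_contact once per sampled contact.

```python
import time

# Figures from the example above (the text gives them in milliseconds).
SAMPLE_INTERVAL_MS = 0.01  # how often the client samples the finger's contact
END_TIMEOUT_MS = 0.05      # no new contact within this window => trigger ended

class TriggerTracker:
    """Tracks one continuous trigger operation on the aiming control."""

    def __init__(self, activation_point):
        self.activation_point = activation_point  # contact where the finger pressed down
        self.contacts = [activation_point]        # drag path, one entry per sample
        self.last_sample_ms = time.monotonic() * 1000.0

    def on_contact(self, point):
        """Record a contact sampled on the drag path (every SAMPLE_INTERVAL_MS)."""
        self.contacts.append(point)
        self.last_sample_ms = time.monotonic() * 1000.0

    def ended(self):
        """True once no new contact has arrived within END_TIMEOUT_MS."""
        return time.monotonic() * 1000.0 - self.last_sample_ms > END_TIMEOUT_MS
```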
A quick trigger is a trigger operation in which the activation point of the aiming control lies in the dead zone and no contact of the current trigger operation leaves the dead zone. The activation point is the contact at which the aiming control is first triggered in a trigger operation, i.e., the contact when the user's finger presses the control. Illustratively, as shown in FIG. 3, a quick trigger is a trigger operation in which the user quickly clicks the aiming control with the click position located in the dead zone 2032. For example, the displayed position of the aiming control on the user interface may move within a certain range following the activation point: the client centers the aiming control on the activation point, i.e., each time the user activates the aiming control, the activation point becomes the center (circle center) of the aiming control.
A drag trigger is an operation in which the activation point of the aiming control lies in the dead zone, at least one contact leaves the dead zone, and the length of time the contact spends outside the dead zone is greater than a time threshold. Illustratively, as shown in FIG. 3, a drag trigger is an operation in which the user presses the aiming control in the dead zone 2032, slides the finger out of the dead zone 2032 into the effective wheel region 2031, aims, and lifts the finger after aiming.
A quick-drag trigger is an operation in which the activation point of the aiming control lies in the dead zone, at least one contact leaves the dead zone, and the length of time the contact spends outside the dead zone is less than the time threshold. Illustratively, a quick-drag trigger is an operation in which the user presses the aiming control, quickly slides the finger out of the dead zone, and lifts it. In other words, a quick-drag trigger is a drag trigger that takes less time.
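Taken together, the three trigger modes differ only in whether any contact leaves the dead zone and, if so, how long the finger stays out before lifting. A hedged Python sketch of that classification; the names are hypothetical, and contacts are assumed to be recorded relative to the activation point, which is also the dead-zone center:

```python
import math

def classify_trigger(contacts, times, dead_zone_radius, time_threshold):
    """Classify a finished trigger operation as 'quick', 'quick_drag', or 'drag'.

    contacts: (x, y) points relative to the activation point (the wheel center)
    times:    timestamp of each contact, parallel to `contacts`
    """
    outside = [i for i, (x, y) in enumerate(contacts)
               if math.hypot(x, y) > dead_zone_radius]
    if not outside:
        return "quick"  # no contact of this operation ever left the dead zone
    # Time from the first contact outside the dead zone to the finger lifting.
    duration = times[-1] - times[outside[0]]
    return "quick_drag" if duration < time_threshold else "drag"
```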
Illustratively, when the user quick-triggers the aiming control, the client automatically selects a candidate virtual object as the aiming target from within the aiming range or the target selection range of the first virtual object. The target selection range is the aiming range of the first virtual object plus an extended range: when the aiming range is a circle of radius R centered on the first virtual object, the extended range is an annular region of width Y outside that circle. For example, as shown in FIG. 4, when the user quick-triggers the aiming control, the client selects the virtual object B with the lowest blood volume as the aiming target from within the target selection range 301 of the first virtual object, the target selection range 301 being a circular range centered on the first virtual object.
When the user drag-triggers the aiming control, the client takes the contact at the moment the user lifts the finger from the aiming control as the aiming point, and the candidate virtual object within a reference range that is closest to the aiming point is determined as the aiming target, the reference range being determined from the aiming point (see the sketch below). For example, as shown in FIG. 5, when the user drag-triggers the aiming control, the client obtains the aiming point 302 corresponding to the drag operation, determines the reference range 303 centered on the aiming point 302, obtains the candidate virtual objects within the reference range 303, and determines the virtual object A closest to the aiming point 302 within that range as the aiming target.
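For the drag trigger, selection reduces to a nearest-neighbor search around the aiming point, restricted to the reference range. A minimal sketch with hypothetical names and 2D ground-plane coordinates:

```python
import math
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    position: tuple  # (x, y) on the ground plane of the virtual environment

def select_by_drag(aiming_point, candidates, reference_radius):
    """Drag trigger: among the candidates inside the reference range centered
    on the aiming point, pick the one closest to the aiming point."""
    in_range = [c for c in candidates
                if math.dist(c.position, aiming_point) <= reference_radius]
    return min(in_range,
               key=lambda c: math.dist(c.position, aiming_point),
               default=None)  # None when no candidate lies in the reference range
```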
When the user quick-drag-triggers the aiming control, the client determines the aiming target from the candidate virtual objects within the aiming range or the target selection range, according to the aiming point at the moment the user lifts the finger from the aiming control and the virtual object types of the candidate virtual objects.
For example, the method provided by this embodiment is applied to quickly selecting the aiming target when the aiming control is quick-drag-triggered.
For example, the selection rule for the aiming target is set according to the distance between each candidate virtual object and the aiming point and the virtual object type of each candidate virtual object. When the candidate virtual object types include virtual character, monster, and building, their priority may be: virtual character > monster > building. Timing starts when the user's finger, having pressed the aiming control in the dead zone, moves out of the dead zone; within the first n frames, the aiming target is determined preferentially according to virtual object type. The first n frames may be counted in logical frames or in presentation frames. Logical frames are set uniformly by the server; for example, the n frames may be 2 logical frames. Presentation frames are computed by each client from the terminal's frame rate and the logical frames: for example, if n is 2 logical frames and the server's logical rate is 15 frames/s, then on a terminal running at 30 frames/s those 2 logical frames correspond to 4 presentation frames, so the n frames may be 2 logical frames or 4 presentation frames (see the sketch below).
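The logical-to-presentation frame conversion in that example is simple proportional arithmetic, sketched below (the function name is hypothetical):

```python
def logical_to_presentation_frames(n_logical, server_fps, client_fps):
    """Convert a window of n logical frames (fixed by the server) into the
    equivalent number of presentation frames on this client."""
    return n_logical * client_fps // server_fps

# Example from the text: 2 logical frames at 15 logical frames/s on the server
# correspond to 4 presentation frames on a client rendering at 30 frames/s.
assert logical_to_presentation_frames(2, 15, 30) == 4
```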
If the user lifts the finger within the first n frames, the candidate virtual object whose virtual object type has the higher priority is determined as the aiming target.
For example, the method provided by this embodiment may also be applied to the other two trigger modes.
Illustratively, in terms of technical implementation, when the user performs a quick-drag operation, the client obtains the contact position within the effective wheel region of the aiming control and maps that position into the virtual environment to obtain the aiming point. The client then calls an enemy-search interface to store the candidate virtual objects within the target selection range centered on the first virtual object into a candidate virtual object list. Next, a filter is called to delete from the candidate virtual object list the candidate virtual objects that do not meet the aiming condition, for example those that the skill cannot act on, yielding a filtered candidate virtual object list. The filtered candidate virtual object list is then filtered and sorted using an enemy-search tree (enemy-search selector). The enemy-search tree is used to select the aiming target from the candidate virtual object list according to a preset priority rule. For example, the enemy-search tree deletes from the list the candidate virtual objects currently in a non-attackable state, sorts the remaining candidates by priority score, and determines the candidate virtual object with the highest or lowest priority score as the aiming target.
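That pipeline (enemy-search interface, then filter, then enemy-search tree) might be organized as in the sketch below. This is an illustration under assumed data structures, not the patent's actual code; priority_score stands in for the preset priority rule, and the first virtual object is assumed excluded from the candidate units.

```python
import math
from dataclasses import dataclass

@dataclass
class Unit:
    position: tuple             # (x, y) on the ground plane
    attackable: bool = True     # False while in a non-attackable state
    skill_can_act: bool = True  # False if the skill cannot act on this unit

def pick_aiming_target(first_unit, units, radius_r, width_y, priority_score):
    """Sketch of the pipeline above.

    1. Enemy-search interface: gather candidates in the target selection range
       (the aiming circle of radius R plus an annular extension of width Y).
    2. Filter: delete candidates that do not meet the aiming condition.
    3. Enemy-search tree: delete non-attackable candidates, then rank the rest
       by the preset priority rule and take the best-scoring one.
    """
    candidate_list = [u for u in units
                      if math.dist(first_unit.position, u.position) <= radius_r + width_y]
    filtered = [u for u in candidate_list if u.skill_can_act]
    attackable = [u for u in filtered if u.attackable]
    return max(attackable, key=priority_score, default=None)
```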
FIG. 6 shows a flowchart of a virtual object aiming method provided by an exemplary embodiment of the present application. The method may be performed by a client running on any of the terminals in FIG. 1 described above, the client being a client that supports a virtual environment. The method comprises the following steps:
step 601, displaying a user interface, where the user interface includes a virtual environment screen, and the virtual environment screen includes a first virtual object and a second virtual object located in a virtual environment.
Optionally, the virtual environment picture is a picture obtained by observing the virtual environment with the first virtual object as the observation center. The observation center is the position on which the camera model focuses when photographing the virtual environment. Illustratively, the observation center is located at the center of the virtual environment picture; that is, the first virtual object is located at the center of the virtual environment picture. Illustratively, taking the first virtual object as the observation center means observing the first virtual object from a third-person perspective to obtain the virtual environment picture. The perspective is the observation angle at which the virtual object is observed in the virtual environment from its third-person perspective. Optionally, in the embodiments of the present application, the perspective is the angle at which the virtual object is observed through a camera model in the virtual environment.
Optionally, the camera model automatically follows the virtual object in the virtual environment; that is, when the position of the virtual object in the virtual environment changes, the camera model moves with it, always remaining within a preset distance range of the virtual object in the virtual environment. Optionally, the relative positions of the camera model and the virtual object do not change during the automatic following.
The camera model is a three-dimensional model located around the virtual object in the virtual environment. When a third-person perspective is adopted, the camera model may be located behind the virtual object and bound to it, or may be located at any position a preset distance from the virtual object; through the camera model, the virtual object in the virtual environment can be observed from different angles. Optionally, besides the first-person and third-person perspectives, the perspective also includes other perspectives, such as a top-down perspective; when a top-down perspective is adopted, the camera model may be positioned above the head of the virtual object, giving a view of the virtual environment from the air looking down. Optionally, the camera model is not actually displayed in the virtual environment, i.e., it does not appear in the virtual environment displayed on the user interface.
For example, the camera model is located at any position a preset distance from the virtual object; optionally, one virtual object corresponds to one camera model. For example, in some applications supporting a virtual environment, the camera model may rotate around the virtual object as a rotation center, e.g., rotating with any point of the virtual object as the rotation center. During rotation the camera model not only turns but also shifts in position, while the distance between the camera model and the rotation center remains constant; that is, the camera model rotates on the surface of a sphere whose center is the rotation center. Here, any point of the virtual object may be a point on the head or trunk of the virtual object or any point around it, which is not limited in the embodiments of the present application. Optionally, when the camera model observes the virtual object, the center of its perspective points in the direction from the point on the sphere where the camera model is located toward the sphere center.
Optionally, the camera model may also observe the virtual object from different directions at a preset angle.
Referring schematically to FIG. 7, a point in the virtual object 11 is determined as the rotation center 12, and the camera model rotates around the rotation center 12. Optionally, the camera model is configured with an initial position, which is a position above and behind the virtual object (for example, behind the head). Illustratively, as shown in FIG. 7, the initial position is position 13; when the camera model rotates to position 14 or position 15, the direction of the camera model's perspective changes with the rotation.
Optionally, the virtual environment displayed in the virtual environment picture includes at least one of the following elements: mountains, flat ground, rivers, lakes, oceans, deserts, swamps, quicksand, sky, plants, buildings, vehicles, and characters.
The first virtual object is a virtual object controlled by the client executing the method. The client controls the activity of the first virtual object in the virtual environment according to the received user operations. Illustratively, the activities of the first virtual object in the virtual environment include: walking, running, jumping, climbing, lying down, shooting, general attack, releasing skills, picking up props, using props, and sending messages.
A skill is an ability used or released by a virtual object to attack a virtual object (including a second virtual object or itself), produce a debuff effect, or produce a gain effect. Skills include active skills and passive skills: an active skill is actively used and released by the virtual object, while a passive skill is triggered automatically when its passive condition is met. Illustratively, different virtual objects may have different skills.
The general attack is an attack mode of the virtual object, through which the virtual object can damage a second virtual object. Illustratively, a general attack by itself can only cause damage to the second virtual object and cannot produce a gain or debuff effect. All virtual objects can make general attacks, and different attack effects can be displayed according to the different attack speeds, attack powers, and special effects of different virtual objects.
For example, some activities require the user to aim at a virtual object; that is, the user needs to select or aim at a virtual object before performing the activity. For example, virtual objects need to be aimed at when releasing skills, making general attacks, using props, and acquiring a field of view. The method provided by this embodiment is used for such aiming at a target.
Illustratively, there is at least one second virtual object in the virtual environment. The second virtual object includes at least one of a client-controlled virtual object and a server-controlled virtual object. Illustratively, second virtual objects are divided into different virtual object types. For example, virtual objects may be classified according to their different characteristics: according to whether they can move, into movable objects and immovable objects; according to how they are controlled, into user-controlled virtual objects and server-controlled virtual objects; according to their model types, into virtual characters, virtual animals, virtual buildings, virtual plants, virtual terrain, and so on. Illustratively, according to their camps, virtual objects may be divided into a first camp, a second camp, and a neutral camp; according to their attack attributes, into a magic attack class and a physical attack class. The classification of virtual objects is not limited in this embodiment, so there may be many virtual object types, and one virtual object may also have multiple virtual object types.
Illustratively, the second virtual object is a virtual object that can be aimed at by the first virtual object. Illustratively, the virtual objects that can be aimed at include any three-dimensional model in the virtual environment, for example: virtual objects controlled by other clients (teammates or opponents), virtual objects controlled by the client or server (monsters, animals, plants, etc.), virtual buildings (defense towers, bases, barracks, crystals, etc.), and virtual vehicles (cars, planes, motorcycles, etc.). For example, the first virtual object may aim at virtual objects controlled by other clients to release skills on them or make general attacks. As another example, the first virtual object may aim at a server-controlled virtual object (a monster or a minion) to attack it. Alternatively, the first virtual object may aim at a virtual building or virtual plant to attack or destroy it, or aim at the virtual terrain to change or shape it, and so on.
Illustratively, the user interface may also include UI controls overlaid on the virtual environment picture; for example, skill controls, general attack controls, movement controls, signaling controls, setting controls, chat controls, and the like.
For example, FIG. 8 shows a user interface 801 on which a virtual environment picture containing a first virtual object 802 and two second virtual objects 803 is displayed. Illustratively, a movement control 804, a skill control 805, and a general attack control 806 are also displayed on the user interface 801. The movement control 804 is used to control the first virtual object 802 to move in the virtual environment; the skill control 805 is used to control the first virtual object 802 to release a skill; and the general attack control 806 is used to control the first virtual object 802 to make a general attack.
Step 602, in response to receiving a start instruction of the aiming operation at a first time, displaying a point skill indicator on the ground plane of the virtual environment, where the point skill indicator is used to indicate the aiming point selected by the aiming operation on the ground plane of the virtual environment.
The aiming operation is a user operation received by the client. For example, the user operation may be a trigger operation on a UI control, a voice operation, an action operation, a text operation, a mouse operation, a keyboard operation, or a joystick operation. For example, the aiming operation may be a drag operation on a UI control received by the client; the aiming operation may also be an operation in which the user issues a voice, action, or text instruction and the client obtains the user's intention by recognizing the user's voice, action, or text.
Illustratively, the aiming operation is an operation of selecting an aimed point in the virtual environment. The user selects a position point in the virtual environment through the aiming operation, and that position point is determined as the aiming point. The client can subsequently determine the aimed virtual object from the aiming point.
Illustratively, the aiming operation is a continuous operation having a start time and an end time. For example, the client may continuously receive aiming instructions for the aiming operation throughout its duration from start to end. In this embodiment, the aiming instruction is illustrated as triggered by a drag operation that exceeds the dead zone on the wheel-type aiming control; the "aiming operation" is then a drag operation beyond the dead zone on the wheel-type aiming control. Illustratively, the start instruction is the aiming instruction corresponding to the first contact beyond the dead zone. Illustratively, the end instruction is the aiming instruction corresponding to the last contact on the effective wheel region before the user releases the wheel-type aiming control.
For example, when the aiming operation is a drag or slide operation on a UI control: when the user presses the UI control, the client determines that an aiming operation has been received and generates a start instruction at the first time, the start instruction containing the activation point where the user pressed the UI control; as the user keeps pressing and drags the UI control, the client periodically samples contacts along the user's drag path to generate a drag track, each aiming instruction containing one contact on the drag path; when the user releases the UI control, the client can no longer detect a contact on the UI control, determines that the aiming operation has ended, determines the last contact as the offset point of this operation, and takes the time corresponding to that offset point as the end time.
For another example, since a UI control may receive various trigger operations, such as a click, a drag, or a quick drag, and the aiming operation in this embodiment is a quick aiming operation, the start time of the aiming operation may be determined as follows in order to distinguish it from a click operation, and the start instruction generated accordingly. Illustratively, as shown in FIG. 2, taking the UI control as the aiming control 201 as an example: when the aiming control 201 is triggered, the wheel-type virtual joystick 202 of the aiming control 201 is displayed, and the wheel-type virtual joystick 202 includes a joystick 204 (joystick button) and a wheel region 203. As shown in FIG. 3, the wheel region 203 in turn includes a dead zone 2032 and an effective wheel region 2031. Illustratively, when the user presses the aiming control 201, the client displays the wheel-type virtual joystick 202 of the aiming control 201 centered on the pressed position, i.e., the activation point pressed by the user is necessarily located in the dead zone 2032. At the same time, the joystick 204 is displayed at the pressed position, and the joystick 204 moves following the user's contact. For example, after the user presses the aiming control 201 and performs a drag operation, when a contact of the drag operation exceeds the range of the dead zone 2032, the operation is determined to be an aiming operation, the time at which the contact exceeds the dead zone is determined as the first time, and the start instruction is generated. Illustratively, the contact in the start instruction at this point is not the activation point; the activation point is always the point at which the user pressed the UI control.
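The start condition above, a drag contact crossing out of the dead zone centered on the activation point, can be expressed as a small check (hypothetical names):

```python
import math

def detect_aiming_start(activation_point, contact, dead_zone_radius, now):
    """Return a start instruction the moment a drag contact leaves the dead
    zone; otherwise return None. The instruction carries the current contact,
    not the activation point (which stays fixed at the press position)."""
    dx = contact[0] - activation_point[0]
    dy = contact[1] - activation_point[1]
    if math.hypot(dx, dy) > dead_zone_radius:
        return {"first_time": now, "contact": contact}
    return None
```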
For example, in the above embodiment the aiming operation is a drag operation on a UI control; the aiming operation may also be an operation of the user dragging a mouse, a long-press operation by the user, several continuous click operations by the user, and the like.
In another alternative embodiment, the aiming operation may also be a segmented operation. For example, the user first performs a trigger operation to start the aiming operation, and then performs another trigger operation to end it. For example, the aiming operation may also be: the user clicks a UI control to enter the aiming state, then clicks any point in the virtual environment, or any point on the UI control, to select the aiming point and finish the aiming operation.
The point skill indicator is an indicator displayed in the virtual environment to assist aiming, so that the user can observe the aiming point selected in the virtual environment by the aiming operation. Illustratively, the point skill indicator is the indicator corresponding to a point-target-type skill, or the indicator corresponding to a general attack performed on a specified target. Illustratively, the point skill indicator includes the aiming point displayed in the virtual environment. Illustratively, the point skill indicator includes an aiming range and the aiming point. Illustratively, the aiming range is a range determined from the first map point where the first virtual object is located; for example, the aiming range is a circle whose center is the point where the first virtual object is located and whose radius is the aiming radius. The aiming point is a point mapped into the virtual environment from the contacts of the aiming operation. Illustratively, a reference range may also be displayed on the point skill indicator. The reference range is a range determined from the aiming point; for example, the reference range is a circle whose center is the aiming point and whose radius is the reference radius. For example, a reference line may be displayed on the point skill indicator, the reference line being the line connecting the position of the first virtual object and the aiming point.
For example, FIG. 9 shows a point skill indicator 901 comprising an aiming range 902, an aiming point 903, a reference range 904, and a reference line 905. As another example, FIG. 10 shows a point skill indicator 901 comprising an aiming range 902, an aiming point 903, and a reference range 904.
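Grouped as a data structure, the components of the point skill indicator described above might look like the sketch below; the field names are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class PointSkillIndicator:
    """Geometry drawn on the ground plane while aiming (cf. FIGS. 9 and 10)."""
    aiming_point: tuple       # point mapped into the virtual environment
    aiming_radius: float      # circle of this radius around the first virtual object
    reference_radius: float   # circle of this radius around the aiming point
    show_reference_line: bool = True  # line from the first virtual object to the aiming point
```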
Step 603, in response to receiving an end instruction of the aiming operation at a second time, controlling the first virtual object to aim at a target virtual object, where the target virtual object is a virtual object determined from the second virtual objects according to a first priority principle, the first priority principle gives the virtual object type a higher priority than the straight-line distance between the second virtual object and the aiming point, and the difference between the second time and the first time is smaller than a time threshold.
Illustratively, the second time is the end time of the aiming operation. For example, when the client does not receive the next contact of the aiming operation within a period of time, the last contact point is determined as the offset point (end point) of the current aiming operation, and the time corresponding to the offset point (the second time) is determined as the end time. The end instruction includes the offset point.
For example, the client determines the offset direction (aiming direction) of the aiming point relative to the position of the first virtual object according to the offset direction of the offset point relative to the activation point; it determines the ratio of the aiming distance (from the aiming point to the position of the first virtual object) to the aiming radius according to the ratio of the offset distance (from the offset point to the activation point) to the wheel radius, and then calculates the coordinate position of the aiming point in the virtual environment. The wheel radius is the radius of the wheel area, and the aiming radius is the maximum distance at which the first virtual object aims.
For example, the client calculates the aiming point according to the contact points (offset points) stored in the ending instruction, selects a target virtual object from the second virtual objects according to the straight-line distance between the second virtual object and the aiming point and the virtual object type of the second virtual object, and aims the first virtual object at the target virtual object.
Illustratively, when the duration of the aiming operation (the difference between the second time and the first time) is less than the time threshold, the client selects the target virtual object according to the first priority principle. The first priority principle is a principle that considers the virtual object type first and the straight-line distance second; that is, the first virtual object preferentially aims at virtual objects of the specified type. For example, when the aiming operation is a quick aiming operation (quick drag operation), the client preferentially aims at heroes.
For example, the first priority principle is a principle set according to at least the straight-line distance and the virtual object type; in other alternative embodiments, the first priority principle may also be set according to other parameters or conditions, for example, the distance between the second virtual object and the first virtual object, the blood volume (life value) of the second virtual object, a defense value, an attack power, a class (occupation), a gender, an attack attribute (physical attack or magic attack), and the like.
For example, the client may set up a classification tree according to the first priority principle, determining the classification order of each parameter according to its priority, so as to determine the aiming target. For example, the client classifies the second virtual objects according to virtual object type, acquires the second virtual objects of the first type, then sorts the second virtual objects of the first type by straight-line distance, and determines the virtual object with the smallest straight-line distance as the aiming target (target virtual object).
For example, the client may set the weight scores of the parameters or conditions according to a first priority rule, then calculate the priority score of the second virtual object, and determine the virtual object with the highest or lowest priority score as the targeting target (target virtual object).
The time threshold is a criterion set by the server for judging whether the aiming operation is a quick aiming operation. When the duration of the aiming operation is greater than the time threshold, the client determines the aiming operation to be a normal aiming operation (drag operation); when the duration of the aiming operation is less than the time threshold, the client determines the aiming operation to be a quick aiming operation (quick drag operation). The client can then determine the aiming target according to the different priority principles corresponding to the different aiming operations.
Illustratively, the time threshold may be a duration, e.g., 0.5 s or 1 s. For example, the time threshold may also be a logical time (logical frames) of the match, for example, 2 logical frames. Illustratively, when the match starts, all the clients participating in the match and the server providing the logical computation service for the match synchronize the logical time of the match and periodically calibrate it, so as to ensure that the logical times of all the clients in the match are consistent. The correspondence between logical frames and presentation frames is detailed in the above embodiment; that is, the time threshold may also be expressed in presentation frames of the client, for example, 4 presentation frames.
Illustratively, the aiming in this embodiment includes at least one of normal aiming and locking aiming. Normal aiming: when the position of the aiming target (target virtual object) changes, the aiming is automatically cancelled. Locking aiming: when the position of the aiming target (target virtual object) changes, the aiming is not cancelled.
Illustratively, after the first virtual object aims at the target virtual object by normal aiming, if the target virtual object moves and its position changes, the first virtual object no longer aims at the target virtual object and does not release a skill or launch a normal attack on it; if the user wants to continue aiming at the target virtual object, the aiming operation needs to be performed again.
For example, after the first virtual object aims at the target virtual object by locking aiming, the first virtual object can continuously aim at the target virtual object to release skills or launch normal attacks. In one embodiment, after the first virtual object aims at the target virtual object by locking aiming, when the position of the target virtual object changes and moves beyond the attack range (aiming range) of the first virtual object, the client also automatically controls the first virtual object to follow the target virtual object so as to keep aiming at it for attack. Exemplary ways of ending locking aiming include the following: stopping the locking aiming after the aiming duration reaches a preset duration; stopping the aiming after the target virtual object moves out of the aiming range of the first virtual object; stopping the aiming after the target virtual object or the first virtual object dies; and stopping the aiming at the target virtual object when the user performs the aiming operation again and aims at another second virtual object.
In summary, in the method provided by this embodiment, when the aiming operation of the user is received, a target virtual object of a higher-priority type is aimed at for the user according to the aiming point selected by the user and the virtual object types of the second virtual objects. For example, a user typically prefers to attack heroes, so the client preferentially aims at a hero for the user. This improves the client's ability to aim at virtual objects, reduces the operation difficulty required for the user to aim at a virtual object, and improves the human-computer interaction effect and the accuracy of the aiming operation. When a battle is intense, many aimable virtual objects may exist on the virtual environment picture; for example, an enemy hero and multiple soldiers may be present at the same time. To gain an advantage, the user needs to aim quickly to control the first virtual object to attack the enemy hero; but because there are too many aimable virtual objects, most of them soldiers the user does not want to aim at, the user has difficulty aiming accurately at the enemy hero. With the method provided by the embodiment of the present application, the user only needs to perform the aiming operation quickly to give a rough aiming position, and the client will preferentially aim at a hero rather than a soldier according to the types of the aimable virtual objects, thereby improving the efficiency and accuracy of the user's aiming operation.
Exemplary embodiments for determining the target virtual object according to the first priority principle are given below.
FIG. 11 illustrates a flow chart of a method for targeting a virtual object provided by an exemplary embodiment of the present application. The method may be performed by a client running on any of the terminals in fig. 1 described above, the client being a virtual environment enabled client. Based on the exemplary embodiment shown in fig. 6, step 603 includes steps 6031 to 6034, and step 605 is further included after step 603.
In step 6031, in response to receiving an end instruction of the aiming operation at the second time, at least two second virtual objects located in a target selection range are determined as candidate virtual objects, the target selection range being a range determined according to at least one of the first map point where the first virtual object is located and the aiming point.
Illustratively, the aiming operation corresponds to a target selection range, and the user can aim at a target (virtual object) in the target selection range through the aiming operation. The target selection range is used to select candidate virtual objects, from which the aiming target (target virtual object) is further selected. For example, the target selection range may be determined according to the position of the first virtual object, or according to the aiming point.
When the target selection range is determined according to the position of the first virtual object, the target selection range may be the aiming range in which the first virtual object currently aims. The aiming range may be a circle centered on the point at which the first virtual object is located, with the aiming radius as its radius. The target selection range may also be the aiming range plus an extension range, the extension range being an extension of the aiming range; for example, the extension range is a ring of width Y outside the aiming range.
For example, the client invokes an enemy-search interface to search for second virtual objects in the target selection range. Within a circular range determined by taking the first map point where the first virtual object is located as the center and (X + Y) as the radius, the search interface adds all the second virtual objects located in the circular range to the candidate virtual object list, determining them as candidate virtual objects. Here, X is the radius of the maximum aiming range, Y is the difference between the radius of the pre-aiming range and the radius of the maximum aiming range, and the pre-aiming range is a ring-shaped range around the outside of the maximum aiming range.
for example, as shown in fig. 12, the target selection range includes a circular aiming range 1201 centered on the first virtual object and a ring-shaped extended range 1202. The aiming point can be arbitrarily moved within the aiming range 1201, but cannot reach the extended range 1202 beyond the aiming range 1201.
The client determines the second virtual objects located in the target selection range according to the coordinate positions of all the second virtual objects in the virtual environment, and determines them as candidate virtual objects. For example, the client may also calculate the distance between each second virtual object and the first virtual object, and determine the second virtual objects whose distance is smaller than the radius of the target selection range as candidate virtual objects.
For example, the client may generate a candidate virtual object list from the candidate virtual objects, and filter or sort the candidate virtual object list to obtain the targeting target (the target virtual object).
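A minimal sketch of this candidate-collection step, assuming plain 2D map coordinates; the function and field names are hypothetical and only illustrate the (X + Y) circular-range test described above.

```python
import math

def collect_candidates(first_pos, second_objects, max_aim_radius_x, pre_aim_y):
    """Return the second virtual objects inside the circular target
    selection range of radius (X + Y) centered on the first virtual object."""
    selection_radius = max_aim_radius_x + pre_aim_y
    return [obj for obj in second_objects
            if math.hypot(obj["pos"][0] - first_pos[0],
                          obj["pos"][1] - first_pos[1]) <= selection_radius]
```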
Step 6032, obtain virtual object types of at least two candidate virtual objects, the virtual object types including a first type and a second type.
The client obtains the virtual object type of each candidate virtual object. The virtual object types include at least a first type and a second type; illustratively, the first type is a virtual object that participates in the match in the virtual environment (for example, a hero controlled by a user), and the second type is a virtual object preset in the virtual environment. For example, virtual objects of the second type include: monsters, soldiers, defense towers, crystals, barracks, and the like. Illustratively, since the targets a user normally wants to attack are virtual objects of the first type, the first type has a higher priority than the second type.
Step 6033, in response to the priority of the first type being higher than the priority of the second type, determines the candidate virtual object of the first type as the target virtual object.
When only one candidate virtual object of the first type exists in the candidate virtual objects, the client determines the candidate virtual object as a target virtual object.
When a plurality of candidate virtual objects of the first type exist in the candidate virtual objects, the client further screens the target virtual object according to the straight-line distance.
As shown in FIG. 13, step 6033 also includes step 6033-1 and step 6033-2.
Step 6033-1, in response to the priority of the first type being higher than the priority of the second type and there being at least two candidate virtual objects of the first type, acquiring straight-line distances of the at least two candidate virtual objects of the first type from the aiming point.
When a plurality of first-type candidate virtual objects exist, the client calculates the linear distance between each first-type candidate virtual object and the aiming point, performs sorting according to the linear distance, and determines the candidate virtual object closest to the aiming point as the aiming target (target virtual object).
In step 6033-2, the candidate virtual object of the first type having the smallest straight-line distance is determined as the target virtual object.
Step 6034, control the first virtual object to aim at the target virtual object.
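The selection of steps 6032 to 6034 can be sketched as follows; a minimal Python illustration assuming each candidate carries a type tag and a 2D position, with all names hypothetical rather than taken from the disclosed embodiment.

```python
import math

def pick_target_first_priority(candidates, aim_point):
    """Type first, then straight-line distance to the aiming point."""
    first_type = [c for c in candidates if c["type"] == "first"]  # e.g. heroes
    pool = first_type if first_type else candidates
    if not pool:
        return None
    return min(pool, key=lambda c: math.hypot(c["pos"][0] - aim_point[0],
                                              c["pos"][1] - aim_point[1]))
```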
Step 605, displaying a selected special effect on the target virtual object, where the selected special effect comprises at least one of the following: displaying a first selected identifier on the second map point where the target virtual object is located, and displaying a second selected identifier above the target virtual object.
Illustratively, since the aiming operation is a continuous operation, the client calculates a corresponding aiming point for each touch point in the aiming operation and controls the aiming point to move as the touch point moves. The client may calculate different aiming targets (target virtual objects) for different aiming points.
For example, when the aiming operation is a quick aiming operation, its duration is short; to prevent the aiming target from changing many times in a short time, which would leave the user unable to tell which target is aimed at, the client does not display the selected special effect on the aiming target while the quick aiming operation is in progress. Only after the quick aiming operation ends and the client determines the finally selected aiming target does the client display the selected special effect on the aiming target, thereby informing the user that the first virtual object is aiming at that target and releasing a skill or launching a normal attack on it.
That is, while the quick aiming operation is in progress, the aiming point of the point skill indicator, which changes in real time with the operation, is displayed on the user interface, and the selected special effect is not displayed on the current aiming target. When the quick aiming operation ends, the client no longer displays the point skill indicator, but displays the selected special effect on the aiming target.
For example, when the duration of the aiming operation exceeds the time threshold, the aiming operation is a normal aiming operation. After the duration of the normal aiming operation exceeds the time threshold, the client determines an aiming target (the third virtual object) in real time according to the current aiming point of the normal aiming operation, and displays the selected special effect on that aiming target. While the duration of the normal aiming operation is still less than the time threshold, the client does not display the selected special effect. When the normal aiming operation ends, the selected special effect stays on the last aimed target, and the first virtual object is controlled to release a skill or launch a normal attack on that target.
That is, the selected special effect is displayed after the duration of the aiming operation exceeds the time threshold, and its display stops when the first virtual object stops releasing the skill or stops the normal attack.
Illustratively, the selected special effect refers to displaying the aimed virtual object (target virtual object) distinctively in the virtual environment, for example: changing the model color, nickname color or blood volume bar color of the virtual object; highlighting the model, nickname or blood volume bar of the virtual object; thickening the outline of the model, the nickname characters or the blood volume bar; or displaying a selected icon, a selected light pillar, a selected animation or the like around the virtual object.
For example, as shown in fig. 14, the selected special effect may be an aiming icon 1401 displayed above the target virtual object, a light pillar 1402 displayed above the head of the target virtual object, a changed color of the three-dimensional virtual model 1403 of the target virtual object, and the like.
In summary, in the method provided by this embodiment, when the aiming operation of the user is received, the second virtual objects located in the target selection range are obtained, a candidate virtual object list is generated, and the target virtual object is then selected from the candidate virtual object list according to the first priority principle. The client only needs to perform calculations for the second virtual objects located in the target selection range, which reduces the computation load of the client.
In the method provided by this embodiment, by obtaining the virtual object type of each candidate virtual object and its straight-line distance to the aiming point, the candidate virtual objects of the higher-priority type are selected first according to the virtual object type, and the candidate virtual object of that type closest to the aiming point is then determined as the target virtual object. In this way, the virtual object the user is more inclined to aim at is selected by jointly considering the virtual object type and the straight-line distance to the aiming point, making the target virtual object selected by the client more accurate.
For example, when the duration of the aiming operation is greater than the time threshold, the client determines the aiming target according to a second priority principle, so that the user can still precisely select the aiming target through careful aiming.
FIG. 15 illustrates a flow chart of a method for targeting a virtual object provided by an exemplary embodiment of the present application. The method may be performed by a client running on any of the terminals in fig. 1 described above, the client being a virtual environment enabled client. Step 603 is followed by step 604, in accordance with the exemplary embodiment shown in fig. 6.
Step 604, in response to not receiving an end instruction of the aiming operation at a third time, controlling the first virtual object to aim at a third virtual object, where the third virtual object is a virtual object determined from the second virtual objects according to a second priority principle, the second priority principle gives the virtual object type a lower priority than the straight-line distance between the second virtual object and the aiming point, and the difference between the third time and the first time is equal to the time threshold.
For example, if the aiming operation has not ended at the third time, the current aiming operation is not a quick aiming operation, and the client determines the aiming target of the current aiming operation according to the operation logic of the normal aiming operation.
Illustratively, when the targeting operation is a normal targeting operation (drag operation), the client determines the targeting target (third virtual object) according to the second priority principle.
The second priority principle is a principle that gives priority to the straight-line distance; that is, the first virtual object preferentially aims at the virtual object closest to the aiming point. For example, when the aiming operation is a normal aiming operation (drag operation), the client preferentially aims at the virtual character closest to the aiming point.
For example, the second priority principle is set according to at least the straight-line distance; in other alternative embodiments, the second priority principle may also be set according to other parameters or conditions, such as the virtual object type, the distance between the second virtual object and the first virtual object, the blood volume (life value) of the second virtual object, a defense value, an attack power, a class (occupation), a gender, an attack attribute (physical attack or magic attack), and the like.
Illustratively, as shown in fig. 16, step 604 further includes steps 6041 through 6044.
In step 6041, in response to not receiving an end instruction of the aiming operation at the third time, at least two second virtual objects located in a target selection range are determined as candidate virtual objects, the target selection range being a range determined according to at least one of the first map point where the first virtual object is located and the aiming point.
For example, the target selection range in step 6041 may be the same as or different from the target selection range in step 6031.
Illustratively, the target selection range in step 6041 is the same as the target selection range in step 6031, see the description of the target selection range in step 6031.
Step 6042, obtain the linear distance between at least two candidate virtual objects and the aiming point.
Illustratively, after obtaining a candidate virtual object list of the candidate virtual objects, the client calculates linear distances between the candidate virtual objects in the list and the aiming point, and sorts the candidate virtual objects according to the linear distances.
In step 6043, the candidate virtual object with the smallest straight-line distance is determined as the third virtual object.
Step 6044, control the first virtual object to aim at the third virtual object.
For example, at the third time the aiming operation has not yet ended, that is, the aiming operation is still in progress; at this time, the client calculates the corresponding aiming target in real time according to the current contact point of the aiming operation, and displays the selected special effect on it in real time, so that the user can observe which virtual object the current aiming operation is aimed at.
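The distance-only selection of steps 6042 and 6043 can be sketched in the same hypothetical candidate representation used above:

```python
import math

def pick_target_second_priority(candidates, aim_point):
    """Distance only: the candidate nearest to the aiming point wins."""
    if not candidates:
        return None
    return min(candidates, key=lambda c: math.hypot(c["pos"][0] - aim_point[0],
                                                    c["pos"][1] - aim_point[1]))
```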
In summary, in the method provided by this embodiment, when a quick drag operation is performed, the aiming target (target virtual object) is selected from the second virtual objects using the first priority principle; during a drag operation, the aiming target (the third virtual object) is selected according to the original second priority principle. The user can thus have the target unit selected according to different priority principles through different operations, which improves the accuracy of the aiming operation and the efficiency of human-computer interaction.
For example, after the candidate virtual object list is generated according to the target selection range, the candidate virtual objects in the list are further screened, and the aiming target is selected from the screened list according to the first priority principle or the second priority principle.
For example, after the client determines at least two second virtual objects located in the target selection range as candidate virtual objects, the method further includes:
the client screens the candidate virtual objects according to at least one of an aiming condition and a priority condition, where the aiming condition includes conditions that a virtual object selectable by the aiming operation must meet, and the priority condition includes at least one of the following:
preferentially selecting a candidate virtual object closest to the aiming point;
preferentially selecting the candidate virtual object with the least blood volume percentage;
the candidate virtual object with the least absolute blood volume is preferentially selected.
The aiming condition includes a condition corresponding to the aiming operation performed by the user. For example, when the aiming operation controls the first virtual object to release a skill, the aiming condition is the release condition of the skill; for example, the skill can only act on enemy heroes, or only on virtual buildings. When the aiming operation controls the first virtual object to launch a normal attack, the aiming condition is the attack condition of the normal attack; for example, the first virtual object can only launch a normal attack on heroes, or only on defense towers.
Illustratively, the aiming condition further includes other refined conditions corresponding to the aiming operation, for example: a second virtual object in an invincible or unselectable state cannot be selected; a second virtual object with a certain gain or debuff effect cannot be selected; a second virtual object that has been attacked by the same skill within a period of time cannot be selected; a second virtual object located in a first area of the target selection range is preferentially selected; and when no second virtual object exists in the first area, a second virtual object located in a second area of the target selection range is selected instead.
The priority condition includes at least one of the following conditions:
1. a distance priority condition;
the candidate virtual object closest to the targeted point is preferentially selected. As shown in fig. 5, a virtual object a and a virtual object B exist simultaneously in the candidate virtual objects, a straight-line distance between the virtual object a and the aiming point 302 is a first distance, and a straight-line distance between the virtual object B and the aiming point 302 is a second distance. When the first distance is less than the second distance, the virtual object a is preferentially selected as the targeting target.
2. A blood volume percentage priority condition;
the candidate virtual object with the smallest blood volume percentage is preferentially selected. As shown in fig. 4, a virtual object A and a virtual object B exist simultaneously among the candidate virtual objects; the blood volume percentage of virtual object A is 100% and that of virtual object B is 80%, so virtual object B is preferentially selected as the aiming target.
3. A blood volume absolute value priority condition;
the candidate virtual object with the least absolute blood volume is preferentially selected. Such as: the virtual object A and the virtual object B exist in the candidate virtual objects at the same time, the blood volume of the virtual object A is 1200 points, the blood volume of the virtual object B is 801 points, and the virtual object B is preferentially selected as the aiming target.
4. A type priority condition;
the candidate virtual object with the highest priority virtual object type is preferentially selected. Such as: the candidate virtual objects simultaneously have a virtual object A and a virtual object B, the type of the virtual object A is hero, the type of the virtual object B is soldier, the priority of hero is greater than the priority of soldier, and the virtual object A is preferentially selected as the aiming target.
Illustratively, the client screens candidate virtual objects from the candidate virtual object list according to the priority condition; the screened candidate virtual object list has at least one candidate virtual object, and the aiming target is then determined from the screened list according to the first priority principle or the second priority principle.
Alternatively, when the priority conditions include at least two different priority conditions, a primary priority condition and a secondary priority condition may be set; when selection by the primary priority condition yields no result or more than one result, the secondary priority condition is used. For example, selection is performed according to the distance priority condition, and when two candidate virtual objects have the same distance from the aiming point, selection is performed according to the type priority condition, so that a screened candidate virtual object list is obtained.
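The screening described above can be sketched as follows, treating the aiming condition as a predicate and the primary and secondary priority conditions as sort keys; all names are illustrative assumptions rather than the disclosed implementation.

```python
def screen_candidates(candidates, aiming_ok, primary_key, secondary_key):
    """Drop candidates that fail the aiming condition, then sort by the
    primary priority condition, breaking ties with the secondary one."""
    legal = [c for c in candidates if aiming_ok(c)]
    return sorted(legal, key=lambda c: (primary_key(c), secondary_key(c)))

# Usage sketch: distance first, type priority as the tie-breaker.
# ranked = screen_candidates(
#     candidates,
#     aiming_ok=lambda c: c["camp"] == "enemy" and not c["invincible"],
#     primary_key=lambda c: c["dist_to_aim"],
#     secondary_key=lambda c: c["type_rank"])
```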
In summary, in the method provided by this embodiment, the candidate virtual objects that do not meet the conditions are deleted from the candidate virtual object list according to the aiming condition and the priority condition, and the remaining candidates are then sorted according to the priority principle, so that the finally obtained aiming target meets the attack conditions of the skill or normal attack and is a virtual object that the first virtual object can hit.
An exemplary embodiment of calculating the aiming point is given below, together with an exemplary embodiment of selecting the aiming target from the candidate virtual object list based on a priority score.
Fig. 17 illustrates a flow chart of a method for targeting a virtual object provided by an exemplary embodiment of the present application. The method may be performed by a client running on any of the terminals in fig. 1 described above, the client being a virtual environment enabled client. In accordance with the exemplary embodiment shown in FIG. 6, step 602 includes steps 6021 through 6024 and step 603 includes steps 6035 through 6039.
The aiming operation is an operation that triggers a wheel aiming control (a wheel-type virtual rocker control). The user interface displayed by the client further includes the wheel aiming control displayed superimposed on the virtual environment picture. The wheel aiming control includes a rocker button and a wheel area; the aiming operation is a drag operation that triggers the wheel aiming control so that the rocker button moves from an activation point to an offset point, the activation point being the center position of the wheel area.
Step 6021, in response to receiving a starting instruction of the aiming operation at a first time, calculating an offset vector of an activation point and an offset point of the aiming operation;
optionally, the drag operation (aiming operation) causes the touch screen in the terminal to report a series of touch instructions to the Central Processing Unit (CPU), including but not limited to: a touch start instruction, at least one touch move instruction and a touch end instruction, each carrying the real-time touch coordinates (contact point) of the user's finger on the touch screen. For example, once the contact points exceed the dead zone of the wheel area, the client treats each contact point as an offset point and calculates the corresponding aiming point in real time.
Referring to fig. 18, the activation point 91 refers to the center position of the wheel area. In some embodiments, the center position of the wheel area is fixed; in that case the activation point is that fixed center position. In other embodiments, the center position of the wheel area changes dynamically: when the user's finger (for example, the right thumb) touches down, the finger-down position detected by the touch screen is set as the center position of the wheel area. When the center position of the wheel area changes dynamically, the activation point is the user's finger-down position, i.e., the center position of the wheel area.
When the user's finger drags the rocker button in the wheel area, the rocker position shifts from the activation point 91 to the offset point 92. The client records the first coordinate of the activation point 91 and the second coordinate of the offset point 92, and calculates the offset vector from the second coordinate and the first coordinate.
The offset vector is a vector pointing from the activation point 91 to the offset point 92, the first coordinate and the second coordinate are both coordinates (two-dimensional coordinates) of the plane of the touch screen, and the offset vector is a vector located on the plane of the touch screen.
Step 6022, calculating an aiming vector according to the offset vector, where the aiming vector is a vector pointing from the position of the first virtual object to the aiming point, the ratio of the offset vector to the wheel radius is equal to the ratio of the aiming vector to the aiming radius, the wheel radius is the radius of the wheel area, and the aiming radius is the maximum distance at which the first virtual object aims;
the aiming vector is a vector pointing from the first map point where the first virtual object is located to the aiming point.
Alternatively, as shown in fig. 18, the ratio of the length L1 of the offset vector to the wheel radius R1 is equal to the ratio of the length L2 of the aiming vector to the aiming radius R2. The wheel radius R1 is the radius of the wheel area, and the aiming radius R2 is the maximum aiming distance of the first virtual object. In some embodiments, the aiming radius R2 is equal to the maximum range distance X of the skill (or normal attack). In other embodiments, the aiming vector may instead be calculated based on the selection radius of the target selection range, the selection radius being equal to the sum of the maximum range distance X of the skill (or normal attack) and the pre-aiming distance Y. This embodiment is described using the former.
Alternatively, as shown in fig. 18, the offset angle α1 is equal to the aiming angle α2, where α1 is the angle of the offset vector with respect to the horizontal direction of the screen, and α2 is the angle of the aiming vector with respect to the x-axis in the virtual environment.
The aiming vector is a vector in the virtual environment. When the virtual environment is a three-dimensional virtual environment, the aiming vector is a vector on a plane in the virtual environment.
Illustratively, the client maps the offset vector into the virtual environment through a mapping relationship to obtain the aiming vector. The mapping relation is a projection relation of the virtual environment picture to the virtual environment.
Step 6023, calculating an aiming point according to the aiming vector and the position of the first virtual object;
and the client adds the position of the first virtual object and the aiming vector, and calculates to obtain an aiming point. Optionally, the aimed point is a point located on the ground plane of the virtual environment.
Step 6024, displaying a point skill indicator on the ground plane of the virtual environment according to the targeted point.
Step 6035, in response to receiving the end instruction of the aiming operation, recording the end time at which the end instruction is received.
When the aiming operation is a quick aiming operation, that is, when the end time is earlier than the third time, the end time is recorded as the second time. When the aiming operation is a normal aiming operation, that is, when the end time is later than the third time, the end time is recorded as the fourth time.
For example, as shown in fig. 19, the user starts the aiming operation at time t1 and ends it at time t3; time t3, at which the user releases the skill key, is the end time. Here t1 is the first time, t4 is the third time, and t4 - t1 is the time threshold. When the aiming operation is a quick aiming operation, its duration is shorter than the time threshold, that is, the end time falls before time t4, and t3 is the second time (end time); when the aiming operation is a normal aiming operation, its duration is longer than the time threshold, that is, the end time falls after time t4. At any time t2 between t1 and t3, the client calculates in real time the target aimed at by the aiming operation; after the user releases the skill key, the first virtual object releases the skill on the aimed target at time t5.
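On this timeline, the quick/normal decision reduces to a single duration comparison; a minimal sketch with a hypothetical threshold value follows.

```python
TIME_THRESHOLD = 0.5  # hypothetical value; could also be counted in logical frames

def classify_aiming(first_time, end_time, threshold=TIME_THRESHOLD):
    """Quick aiming uses the first priority principle, normal aiming the second."""
    return "quick" if (end_time - first_time) < threshold else "normal"
```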
Step 6036, at least two second virtual objects located in a target selection range are determined as candidate virtual objects, the target selection range being a range determined according to at least one of the first map point and the aiming point where the first virtual object is located.
Step 6037, calculating a priority score for each candidate virtual object, where the priority score is calculated according to at least one of the type weight of the virtual object type, the straight-line distance, the time threshold, and the duration of the aiming operation, the duration being the difference between the end time and the first time.
Illustratively, different virtual object types correspond to different type weights. The client calculates the priority score of the candidate virtual object according to at least one of the type weight, the straight-line distance, the time threshold and the duration of the aiming operation of the candidate virtual object.
Illustratively, as shown in FIG. 20, step 6037 further includes steps 6037-1 through 6037-4.
Illustratively, the client computes the priority scores of the candidate virtual objects according to the following equation.
curWeight = distance + weight * Max((MaxFrame - aimFrames), 0) / MaxFrame
Here, curWeight is the priority score, distance is the straight-line distance between the candidate virtual object and the aiming point, weight is the type weight of the candidate virtual object, MaxFrame is the time threshold, aimFrames is the duration of the aiming operation, (MaxFrame - aimFrames) is the difference between the time threshold and the duration, and Max((MaxFrame - aimFrames), 0) is the larger of that difference and 0: when the duration is greater than the time threshold, Max((MaxFrame - aimFrames), 0) is 0, and when the duration is less than the time threshold, Max((MaxFrame - aimFrames), 0) is (MaxFrame - aimFrames).
Step 6037-1, obtain the type weight corresponding to the virtual object type of the candidate virtual object.
Illustratively, the client obtains the type weight of the virtual object of each candidate virtual object.
And step 6037-2, calculating the linear distance between the candidate virtual object and the aiming point.
Illustratively, the client calculates the straight-line distance between the position of the candidate virtual object and the aiming point.
Step 6037-3, calculating a time weight corresponding to the duration of the aiming operation, where the time weight is 0 in response to the duration being greater than the time threshold, and is not 0 in response to the duration being less than the time threshold.
Illustratively, the client first calculates the duration aimFrames of the aiming operation from the end time and the first time (the end time minus the first time). Then the difference (MaxFrame - aimFrames) between the time threshold and the duration is calculated, and the time weight is obtained as Max((MaxFrame - aimFrames), 0).
Step 6037-4, determining the sum of the straight-line distance and the product of the type weight and the time weight as the priority score of the candidate virtual object.
Illustratively, the client substitutes the parameters calculated in the above steps into the priority score formula to calculate the priority score of each candidate virtual object.
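A direct transcription of the formula above into code, as a sketch only; the snake_case parameter names mirror curWeight's operands.

```python
def priority_score(distance, type_weight, max_frame, aim_frames):
    # curWeight = distance + weight * Max((MaxFrame - aimFrames), 0) / MaxFrame
    # Type weights are negative, and more negative for higher-priority types,
    # so a smaller score marks a better target. Once aimFrames reaches
    # MaxFrame the type term vanishes and only the distance remains,
    # which reproduces the second priority principle for normal aiming.
    time_weight = max(max_frame - aim_frames, 0)
    return distance + type_weight * time_weight / max_frame
```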
Step 6038, a target virtual object is determined from the candidate virtual objects according to the priority scores.
For example, the client determines the candidate virtual object with the highest priority score or the lowest priority score as the target virtual object.
Illustratively, the type weight is a negative number, and the higher the priority of the virtual object type, the smaller (more negative) the type weight; the client then determines the candidate virtual object with the smallest priority score as the target virtual object.
Step 6039, control the first virtual object to aim at the target virtual object.
In summary, in the method provided by this embodiment, the priority score of each candidate virtual object is calculated from the type weight corresponding to its virtual object type, its straight-line distance to the aiming point, the time threshold, and the duration of the aiming operation, and the aiming target is determined according to the priority score. Through the priority score, the client can select the aiming target according to the first priority principle for a quick aiming operation and according to the second priority principle for a normal aiming operation, which reduces the client's judgment and calculation steps and improves the efficiency and accuracy of determining the aiming target.
Taking the first virtual object as an example to release the skill to the target virtual object, as shown in fig. 21, the method for aiming at the virtual object includes:
step 2101, the rocker button of the wheel aiming control is pressed and dragged;
when the rocker button is pressed down, the touch screen reports a touch start event to the CPU, and the client records a first coordinate in the touch start event as an activation point DownPos.
When the rocker button is dragged, the touch screen reports touch movement events to the CPU according to the sampling frequency, and the client records a second coordinate in the latest touch movement event as an offset point DragPos.
Step 2102, calculating a corresponding aiming point FocusPoint of the dragged rocker button in the virtual environment;
let the radius of the wheel in the wheel aiming control (the maximum drag range) be MaxDragRadius, let the first map point where the first hero controlled by the user is located in the virtual environment be HeroPos, and let the maximum range radius of the directional skill be X. The offset position of the aiming point relative to the first map point is calculated using the following proportional relation:
|DragPos - DownPos| / MaxDragRadius = |FocusPoint - HeroPos| / X;
In addition, the orientation of the aiming point FocusPoint with respect to the first map point HeroPos needs to be calculated. Illustratively, the screen center point (0,0) is first mapped to the position ScreenCenter2ScenePos in the three-dimensional virtual environment, which is also the observation center of the camera model; then the position of the screen center point (0,0) plus the offset vector (DragPos - DownPos) is mapped to a reference point ScreenDrag2ScenePos. The orientation of the reference point ScreenDrag2ScenePos relative to the observation center ScreenCenter2ScenePos in the three-dimensional virtual environment is the orientation of the aiming point FocusPoint relative to the first map point HeroPos. Combining the above yields the following formula:
FocusPoint = HeroPos + (|DragPos - DownPos| / MaxDragRadius) * X * Normalize(ScreenDrag2ScenePos - ScreenCenter2ScenePos).
Here, Normalize(ScreenDrag2ScenePos - ScreenCenter2ScenePos) denotes the normalized unit aiming vector.
Step 2103, calling the enemy-search interface with the skill information (parameters such as the skill tree ID, the aiming point, the maximum range, the pre-aiming range outside the maximum range, and the like);
Here, the skill tree ID is the identification of the directional skill. The maximum range is the maximum range of the directional skill, typically a circular range; optionally, the maximum range is represented by the maximum range radius X described above. The pre-aiming range outside the maximum range is denoted by Y, and Y can be configured individually for each directional skill by the planner.
Step 2104, acquiring a second virtual object around the first hero (maximum range + pre-aiming range) and storing the second virtual object into a candidate virtual object list;
and the search enemy interface adds all other heros belonging to the circular range into the target list in the circular range determined by taking the first map point where the first hero is positioned as the center and taking (X + Y) as the radius. Wherein, X is the radius of the maximum range scope of skill, Y is the difference of the radius of the pre-aiming scope and the radius of the maximum range scope, and the pre-aiming scope is a circular ring-shaped scope sleeved outside the maximum range scope.
Step 2105, traversing the candidate virtual object list, and deleting objects which do not conform to the filter;
the plan assigns a filter ID to each directional skill, which is a legal condition that needs to be satisfied by the release target of the directional skill, such as a virtual object belonging to a different camp from the first hero, a virtual object that cannot be of a specific type (e.g., building, size dragon, eye), a virtual object that cannot be in a specific state (stealth, unselectable), and the like.
The client traverses the candidate virtual objects in the candidate virtual object list, judges whether each candidate meets the filter rules, and deletes the candidates that do not meet the filter from the list.
Step 2106, call the search tree to find the appropriate second hero.
The structure of the search tree is shown in fig. 22. First, all nodes in the search tree inherit from the BaseSelector node, which mainly has two methods: Configure and BattleActorSelect, where BattleActor refers to a candidate virtual object. Specifically:
Configure is used to initialize the Selector subclass's own data according to the table data configured by the planner. For example, a BranchSelector node needs to configure several branches, so its configuration data is the ids of several branch selectors; a shape filter node needs to configure the shape of the target selection range, such as a circle or a sector, along with parameters such as the radius of the circle or the angle of the sector.
The input parameter of the BattleActorSelect function is the candidate virtual object List<BattleActor>, and the return value is a filtered candidate virtual object BattleActor; the actual behavior of BattleActorSelect differs according to the Selector subclass that implements it.
The BaseSelector node has three core derived subclasses: LinkedSelector, BranchSelector and PrioritySelector.
LinkedSelector: its core is a next parameter representing the next required filter, forming a chain structure. It has many subclasses, which are essentially filters: in the Select function, the candidate virtual objects that do not meet the legal rule are deleted, and the List<BattleActor> with those candidates removed is passed to the next Select, thus implementing a filter. For example, the ShapeSelector corresponding to the target selection range configures the needed shape and parameters in Configure; its Select function judges one by one whether the candidate virtual objects in List<BattleActor> are within the shape range corresponding to the target selection range, and deletes from List<BattleActor> the candidates that are not. The other filter types work the same way; for example, a BuffFilter deletes candidate virtual objects carrying a certain type of additional-effect buff, and an IDSelector deletes candidate virtual objects with a certain id, which is used to handle skills that cannot hit the same enemy a second time.
BranchSelector: its main function is to handle the priorities of multiple rules. Several selector IDs can be configured in the configuration table, and the member variable selectors is initialized in the Configure function according to the configured selector IDs. In the Select function, the parameter List<BattleActor> is temporarily stored; then the sub-selectors in selectors are called one by one with the stored List<BattleActor> as the parameter, checking whether a candidate virtual object BattleActor is returned. If one is returned, a candidate that meets the target rule has been found and the subsequent selectors do not need to be traversed; if none is returned, the next sub-selector is called.
PrioritySelector: the planner uses this Selector to sort the filtered List<BattleActor> and select the appropriate target virtual object BattleActor. The planner needs to configure a priority rule in the table, such as blood volume priority, distance priority or blood volume percentage priority; in the Select function, List<BattleActor> is sorted according to the configured priority rule and the first element in the list is returned; if the list is empty, NULL is returned.
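The three derived subclasses can be sketched as follows. This is a minimal Python illustration of the chain, branch and priority behaviors; the table-driven Configure step is replaced by constructor arguments, and every detail here is an assumption rather than the disclosed implementation.

```python
from typing import Callable, List, Optional

class BattleActor:  # stand-in for a candidate virtual object
    def __init__(self, actor_id, pos, type_weight):
        self.actor_id, self.pos, self.type_weight = actor_id, pos, type_weight

class BaseSelector:
    def select(self, actors: List[BattleActor]) -> Optional[BattleActor]:
        raise NotImplementedError

class LinkedSelector(BaseSelector):
    """A filter holding a 'next' link, forming a chain."""
    def __init__(self, predicate: Callable[[BattleActor], bool],
                 next_selector: Optional[BaseSelector]):
        self.predicate, self.next = predicate, next_selector
    def select(self, actors):
        remaining = [a for a in actors if self.predicate(a)]
        return self.next.select(remaining) if self.next else None

class BranchSelector(BaseSelector):
    """Tries sub-selectors in priority order; the first hit wins."""
    def __init__(self, selectors: List[BaseSelector]):
        self.selectors = selectors
    def select(self, actors):
        for sel in self.selectors:
            result = sel.select(list(actors))  # each branch gets its own copy
            if result is not None:
                return result
        return None

class PrioritySelector(BaseSelector):
    """Sorts by the configured rule and returns the first element."""
    def __init__(self, sort_key: Callable[[BattleActor], float]):
        self.sort_key = sort_key
    def select(self, actors):
        return min(actors, key=self.sort_key) if actors else None
```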
Through the combined use of the above selectors, very complex search logic can be realized. Schematically, when the target is determined according to the first priority principle for a quick aiming operation and according to the second priority principle for a normal aiming operation, the structure of the whole search tree is as shown in fig. 23.
Step 2301, the client initializes parameters, setting minWeight to 0 and minWeightTarget to NULL.
In step 2302, the client traverses the candidate virtual objects in the candidate virtual object list and selects the current target currTarget to perform the following steps.
Step 2303, the client calculates the distance between the current target and the aiming point.
In step 2304, the client obtains the type weight of the current target.
At step 2305, the client calculates the priority score of the current target.
curWeight = distance + weight * Max((MaxFrame - aimFrames), 0) / MaxFrame
In step 2306, the client determines whether the priority score of the current target is less than minWeight. If less than, go to step 2307, otherwise go to step 2308.
In step 2307, the client assigns the priority score of the current target to minWeight, and records the current target as minWeightTarget.
In step 2308, the client determines whether the candidate virtual object list is traversed, if so, step 2309 is performed, otherwise, step 2302 is performed.
Step 2309, the client outputs minWeightTarget, which is the target with the minimum priority score in the candidate virtual object list.
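Steps 2301 to 2309 amount to a single minimum-tracking traversal. A sketch follows, assuming each candidate stores its position and (negative) type weight under hypothetical names; note that because minWeight starts at 0 as in the flowchart, only candidates whose negative type term outweighs their distance can be selected.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def select_min_weight_target(candidates, aim_point, max_frame, aim_frames):
    min_weight = 0.0            # step 2301: initialize
    min_weight_target = None
    for cur in candidates:      # step 2302: traverse the candidate list
        distance = dist(cur["pos"], aim_point)   # step 2303
        weight = cur["type_weight"]              # step 2304, a negative value
        cur_weight = distance + weight * max(max_frame - aim_frames, 0) / max_frame  # step 2305
        if cur_weight < min_weight:              # step 2306
            min_weight = cur_weight              # step 2307
            min_weight_target = cur
    return min_weight_target    # step 2309: the smallest-score target, or None
```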
In summary, in the method provided by this embodiment, when the aiming operation of the user is received, the target virtual object with the highest priority is selected from the plurality of second virtual objects according to the aiming point selected by the user and the virtual object types of the second virtual objects, and the first virtual object is controlled to aim at the target virtual object. The basis on which the client determines the aimed virtual object is not limited to the distance between the virtual object and the aiming point; according to the virtual object type, the client also preferentially aims at the target the user is more likely to want to attack, for example, under normal circumstances a second virtual object controlled by another client. This improves the client's ability to aim at virtual objects, reduces the operation difficulty required for the user to aim at a virtual object, and improves the human-computer interaction effect and the accuracy of the aiming operation.
The above embodiments describe the above method based on the application scenario of the game, and the following describes the above method by way of example in the application scenario of military simulation.
The simulation technology is a model technology which reflects system behaviors or processes by applying software and hardware to simulate real-world experiments.
The military simulation program is a program specially constructed for military application by using a simulation technology, and is used for carrying out quantitative analysis on sea, land, air and other operational elements, weapon equipment performance, operational actions and the like, further accurately simulating a battlefield environment, presenting a battlefield situation and realizing the evaluation of an operational system and the assistance of decision making.
In one example, soldiers establish a virtual battlefield at a terminal where military simulation programs are located and fight in a team. The soldier controls a virtual object in the virtual battlefield environment to perform at least one operation of standing, squatting, sitting, lying on the back, lying on the stomach, lying on the side, walking, running, climbing, driving, shooting, throwing, attacking, injuring, detecting, close combat and other actions in the virtual battlefield environment. The battlefield virtual environment comprises: at least one natural form of flat ground, mountains, plateaus, basins, deserts, rivers, lakes, oceans and vegetation, and site forms of buildings, vehicles, ruins, training fields and the like. The virtual object includes: virtual characters, virtual animals, cartoon characters, etc., each virtual object having its own shape and volume in the three-dimensional virtual environment occupies a part of the space in the three-dimensional virtual environment.
Based on the above situation, in one example, soldier A controls a first virtual object to move in the virtual environment. When soldier A performs a quick aiming operation, the client determines a virtual object that is controlled by another soldier and located near the aiming point as the aiming target according to the aiming point selected by soldier A, which prevents animals, plants, buildings and the like that happen to be closer to the aiming point from being selected as the aiming target. This simplifies a soldier's operation of aiming at other soldiers and improves aiming efficiency.
For example, as shown in FIG. 24, when soldier A performs a quick aiming operation, the client determines a virtual object 2501 controlled by another soldier and located near the aiming point as the aiming target according to the aiming point selected by soldier A, thereby avoiding selecting an animal 2502 that is closer to the aiming point as the aiming target.
The following are apparatus embodiments of the present application, and reference may be made to the above-described method embodiments for details not described in detail in the apparatus embodiments.
FIG. 25 is a block diagram of an aiming device for a virtual object provided in an exemplary embodiment of the present application. The device comprises:
a display module 2401, configured to display a user interface, where the user interface includes a virtual environment screen, and the virtual environment screen includes a first virtual object and a second virtual object located in the virtual environment;
an interaction module 2402, configured to receive, at a first time, a start instruction generated by an aiming operation;
a display module 2401, configured to display, in response to receiving the start instruction of the aiming operation at the first time, a point skill indicator on a ground plane of the virtual environment, the point skill indicator being configured to indicate a selected aiming point of the aiming operation on the ground plane of the virtual environment;
the interaction module 2402 is further configured to generate an end instruction in response to the aiming operation stopping at a second time;
an aiming module 2403, configured to control the first virtual object to aim at a target virtual object in response to receiving the end instruction of the aiming operation at the second time, the target virtual object being one virtual object determined from the second virtual objects according to a first priority rule, the first priority rule including that the priority of the virtual object type is higher than the priority of the straight-line distance between the second virtual object and the aiming point;
wherein a difference between the second time and the first time is less than a time threshold.
In an optional embodiment, the apparatus further comprises:
a determining module 2404, configured to determine, in response to receiving an end instruction of the aiming operation at a second time, at least two of the second virtual objects located in a target selection range as candidate virtual objects, the target selection range being a range determined according to at least one of a first map point where the first virtual object is located and the aiming point;
an obtaining module 2405, configured to obtain the virtual object types of at least two candidate virtual objects, where the virtual object types include a first type and a second type;
the determining module 2404, further configured to determine the candidate virtual object of the first type as the target virtual object in response to the priority of the first type being higher than the priority of the second type;
the aiming module 2403 is further configured to control the first virtual object to aim at the target virtual object.
In an optional embodiment, the obtaining module 2405 is further configured to obtain linear distances between at least two candidate virtual objects of the first type and the aiming point, in response to that the priority of the first type is higher than the priority of the second type and that at least two candidate virtual objects of the first type exist;
the determining module 2404 is further configured to determine the candidate virtual object of the first type with the smallest linear distance as the target virtual object.
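By way of illustration, the selection logic of this embodiment can be sketched as follows (Python; the function name pick_target_first_rule, the type labels, and the candidate data layout are hypothetical, not taken from this application). Type priority outranks distance: first-type candidates are kept whenever any exist, and the straight-line distance to the aiming point only decides among them.

    import math

    def pick_target_first_rule(candidates, aim_point):
        # candidates: dicts with "type" ("first" or "second") and "pos" = (x, y).
        first_type = [c for c in candidates if c["type"] == "first"]
        pool = first_type if first_type else candidates  # fall back to second type
        # Within the retained type, the smallest straight-line distance wins.
        return min(pool, key=lambda c: math.dist(c["pos"], aim_point))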
In an alternative embodiment, the aiming module 2403 is further configured to control the first virtual object to aim at a third virtual object in response to not receiving an instruction to end the aiming operation at a third time, where the third virtual object is one of the virtual objects determined from the second virtual objects according to a second priority rule, and the second priority rule includes a priority of the type of the virtual object being lower than a priority of the straight-line distance of the second virtual object from the aiming point;
wherein a difference between the third time and the first time is equal to the time threshold.
In an optional embodiment, the apparatus further comprises:
a determining module 2404, configured to determine, in response to not receiving the end instruction of the aiming operation at the third time, at least two of the second virtual objects located in a target selection range as candidate virtual objects, where the target selection range is a range determined according to at least one of the first map point where the first virtual object is located and the aiming point;
an obtaining module 2405, configured to obtain the linear distances between the at least two candidate virtual objects and the aiming point;
the determining module 2404 is further configured to determine the candidate virtual object with the smallest linear distance as the third virtual object;
the aiming module 2403 is further configured to control the first virtual object to aim at the third virtual object.
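Under the same assumptions as the sketch above, the second priority rule collapses to a pure nearest-candidate search, since the type of the virtual object no longer outranks distance once the aiming operation has been held past the time threshold:

    import math

    def pick_target_second_rule(candidates, aim_point):
        # Distance outranks type: take the candidate nearest the aiming point.
        return min(candidates, key=lambda c: math.dist(c["pos"], aim_point))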
In an optional embodiment, the apparatus further comprises:
the interaction module 2402 is further configured to generate the end instruction in response to the aiming operation stopping;
a timing module 2406, configured to record, in response to receiving the end instruction of the aiming operation, the end time at which the end instruction is received;
a determining module 2404, configured to determine at least two of the second virtual objects located in a target selection range as candidate virtual objects, where the target selection range is determined according to at least one of a first map point where the first virtual object is located and the aiming point;
a calculating module 2407, configured to calculate a priority score of the candidate virtual object, where the priority score is calculated according to at least one of a type weight of the virtual object type, the linear distance, the time threshold, and a duration of the aiming operation, and the duration is a difference between the end time and the first time;
the determining module 2404, further configured to determine the target virtual object from the candidate virtual objects according to the priority score;
the aiming module 2403 is further configured to control the first virtual object to aim at the target virtual object.
In an optional embodiment, the apparatus further comprises:
an obtaining module 2405, configured to obtain the type weight corresponding to the virtual object type of the candidate virtual object;
the calculating module 2407 is further configured to calculate the linear distance between the candidate virtual object and the aiming point;
the calculating module 2407 is further configured to calculate a time weight corresponding to the duration of the aiming operation, where the time weight is 0 in response to the duration being greater than the time threshold, and the time weight is not 0 in response to the duration being less than the time threshold;
the calculating module 2407 is further configured to determine, as the priority score of the candidate virtual object, the sum of the straight-line distance and the product of the type weight and the time weight.
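The two priority rules can be expressed through this single score. The sketch below assumes illustrative weight values (the TYPE_WEIGHTS table and the time weight of 1.0 are assumptions, not values disclosed in this application); the lowest score wins, matching the minWeightTarget output of step 2309 above.

    import math

    TYPE_WEIGHTS = {"first": 0.0, "second": 1000.0}  # assumed: preferred types weigh less

    def priority_score(candidate, aim_point, duration, time_threshold):
        # The time weight drops to 0 once the aiming operation outlasts the
        # threshold, so the score degenerates to pure straight-line distance.
        time_weight = 0.0 if duration > time_threshold else 1.0
        distance = math.dist(candidate["pos"], aim_point)
        return TYPE_WEIGHTS[candidate["type"]] * time_weight + distance

    def pick_min_weight_target(candidates, aim_point, duration, time_threshold):
        return min(candidates, key=lambda c: priority_score(
            c, aim_point, duration, time_threshold))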
In an optional embodiment, the apparatus further comprises:
a filtering module 2408, configured to filter the candidate virtual objects according to at least one of an aiming condition and a priority condition, where the aiming condition includes conditions that a virtual object selectable by the aiming operation should meet, and the priority condition includes at least one of the following conditions (a selection sketch follows this list):
preferentially selecting the candidate virtual object closest to the aiming point;
preferentially selecting the candidate virtual object with the lowest blood volume percentage;
preferentially selecting the candidate virtual object with the lowest absolute blood volume.
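A sketch of this screening step, under the assumption that the aiming condition is simply "targetable and alive" (the function name filter_candidates and the field names are hypothetical):

    import math

    def filter_candidates(candidates, aim_point, priority="closest"):
        # Aiming condition (assumed example): targetable and not yet defeated.
        pool = [c for c in candidates if c.get("targetable", True) and c["hp"] > 0]
        # Priority condition: order by one of the three criteria listed above.
        keys = {
            "closest": lambda c: math.dist(c["pos"], aim_point),
            "hp_percent": lambda c: c["hp"] / c["max_hp"],
            "hp_absolute": lambda c: c["hp"],
        }
        return sorted(pool, key=keys[priority])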
In an alternative embodiment, the user interface includes a wheel aiming control including a rocker button and a wheel region; the aiming operation is a drag operation that triggers the wheel aiming control to move the rocker button from an activation point to an offset point, the activation point being the center position of the wheel region. The device further comprises:
a calculating module 2407, configured to calculate, in response to receiving the start instruction of the aiming operation at the first time, an offset vector from the activation point to the offset point of the aiming operation;
the calculating module 2407 is further configured to calculate an aiming vector according to the offset vector, where the aiming vector is a vector pointing from the position of the first virtual object to the aiming point, and the ratio of the offset vector to the wheel radius is equal to the ratio of the aiming vector to the aiming radius, the wheel radius being the radius of the wheel region and the aiming radius being the maximum distance at which the first virtual object can aim;
the calculating module 2407 is further configured to calculate the aiming point according to the aiming vector and a position where the first virtual object is located;
the display module 2401 is further configured to display the point skill indicator on the ground plane of the virtual environment according to the aiming point.
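The mapping from the wheel control to the aiming point thus reduces to scaling the offset vector by the ratio of the aiming radius to the wheel radius. The sketch below simplifies by assuming the screen axes and the ground-plane axes are aligned; compute_aim_point and its parameters are hypothetical names:

    def compute_aim_point(activation, offset, hero_pos, wheel_radius, aim_radius):
        # Offset vector of the drag on the wheel control (screen coordinates).
        dx, dy = offset[0] - activation[0], offset[1] - activation[1]
        # |aiming vector| / |offset vector| = aim_radius / wheel_radius.
        scale = aim_radius / wheel_radius
        # Aiming point = position of the first virtual object + aiming vector.
        return (hero_pos[0] + dx * scale, hero_pos[1] + dy * scale)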
In an optional embodiment, the display module 2401 is further configured to display a selected special effect on the target virtual object, where the selected special effect includes at least one of the following: displaying a first selected identification on a second map point where the target virtual object is located, and displaying a second selected identification above the target virtual object.
It should be noted that the aiming device for a virtual object provided in the above embodiment is illustrated only by the division of the above functional modules; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the virtual object aiming device provided in the above embodiments belongs to the same concept as the virtual object aiming method embodiments; for its specific implementation process, refer to the method embodiments, which are not described herein again.
The application also provides a terminal, which comprises a processor and a memory, wherein at least one instruction is stored in the memory, and the at least one instruction is loaded and executed by the processor to realize the aiming method of the virtual object provided by the various method embodiments. It should be noted that the terminal may be a terminal as provided in fig. 26 below.
Fig. 26 is a block diagram of a terminal 2900 according to an exemplary embodiment of the present application. The terminal 2900 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 2900 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and so forth.
Generally, the terminal 2900 includes: a processor 2901, and a memory 2902.
The processor 2901 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 2901 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 2901 may also include a main processor and a coprocessor: the main processor, also referred to as a CPU, processes data in the awake state; the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 2901 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 2901 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 2902 may include one or more computer-readable storage media, which may be non-transitory. Memory 2902 can also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 2902 is used to store at least one instruction for execution by processor 2901 to implement the method of targeting a virtual object provided by method embodiments herein.
In some embodiments, the terminal 2900 may also optionally include: a peripheral interface 2903 and at least one peripheral. The processor 2901, memory 2902, and peripheral interface 2903 may be connected by bus or signal lines. Various peripheral devices may be connected to peripheral interface 2903 by buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of a radio frequency circuit 2904, a touch display 2905, a camera 2906, an audio circuit 2907, a positioning component 2908, and a power source 2909.
The peripheral interface 2903 may be used to connect at least one I/O (Input/Output)-related peripheral to the processor 2901 and the memory 2902. In some embodiments, the processor 2901, the memory 2902, and the peripheral interface 2903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 2901, the memory 2902, and the peripheral interface 2903 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The radio frequency circuit 2904 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 2904 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 2904 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 2904 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of each generation (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 2904 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 2905 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 2905 is a touch display, the display 2905 also has the ability to capture touch signals on or over the surface of the display 2905. The touch signal may be input to the processor 2901 as a control signal for processing. At this point, display 2905 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards. In some embodiments, the display 2905 may be one, providing the front panel of the terminal 2900; in other embodiments, the display 2905 may be at least two, each disposed on a different surface of the terminal 2900 or in a folded design; in still other embodiments, the display 2905 may be a flexible display disposed on a curved surface or on a folded surface of the terminal 2900. Even further, the display 2905 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display 2905 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 2906 is used to capture images or video. Optionally, the camera assembly 2906 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera can be fused with the depth-of-field camera to realize a background blurring function, or fused with the wide-angle camera to realize panoramic shooting, a VR (Virtual Reality) shooting function, or other fusion shooting functions. In some embodiments, the camera assembly 2906 may also include a flash. The flash may be a monochrome-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuitry 2907 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 2901 for processing, or inputting the electric signals to the radio frequency circuit 2904 for realizing voice communication. The microphones may be provided in a plurality, respectively, at different locations of the terminal 2900 for stereo sound acquisition or noise reduction purposes. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 2901 or the radio frequency circuit 2904 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 2907 may also include a headphone jack.
The positioning component 2908 is used to locate the current geographic location of the terminal 2900 for navigation or LBS (Location Based Service). The positioning component 2908 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
A power supply 2909 is used to power the various components within the terminal 2900. The power source 2909 may be ac, dc, disposable, or rechargeable. When the power source 2909 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 2900 also includes one or more sensors 2910. The one or more sensors 2910 include, but are not limited to: an acceleration sensor 2911, a gyro sensor 2912, a pressure sensor 2913, a fingerprint sensor 2914, an optical sensor 2915, and a proximity sensor 2916.
The acceleration sensor 2911 can detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 2900. For example, the acceleration sensor 2911 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 2901 may control the touch display 2905 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 2911. The acceleration sensor 2911 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 2912 may detect a body direction and a rotation angle of the terminal 2900, and the gyro sensor 2912 may collect a 3D motion of the user with respect to the terminal 2900 in cooperation with the acceleration sensor 2911. The processor 2901, based on the data collected by the gyro sensor 2912, may perform the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 2913 may be disposed on a side bezel of the terminal 2900 and/or on a lower layer of the touch display 2905. When the pressure sensor 2913 is disposed on the side bezel of the terminal 2900, a user's grip signal on the terminal 2900 can be detected, and the processor 2901 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 2913. When the pressure sensor 2913 is disposed on the lower layer of the touch display 2905, the processor 2901 controls the operability controls on the UI according to the user's pressure operation on the touch display 2905. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 2914 is configured to collect a user's fingerprint, and the processor 2901 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 2914, or the fingerprint sensor 2914 identifies the user's identity according to the collected fingerprint. Upon recognizing that the user's identity is trusted, the processor 2901 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 2914 may be provided on the front, back, or side of the terminal 2900. When a physical key or vendor logo is provided on the terminal 2900, the fingerprint sensor 2914 may be integrated with the physical key or vendor logo.
The optical sensor 2915 is used to collect the ambient light intensity. In one embodiment, the processor 2901 may control the display brightness of the touch display 2905 based on the ambient light intensity collected by the optical sensor 2915. Specifically, when the ambient light intensity is high, the display luminance of the touch display 2905 is turned up; when the ambient light intensity is low, the display brightness of touch display 2905 is turned down. In another embodiment, the processor 2901 may also dynamically adjust the shooting parameters of the camera assembly 2906 based on the ambient light intensity collected by the optical sensor 2915.
The proximity sensor 2916, also called a distance sensor, is generally provided on the front panel of the terminal 2900 and is used to collect the distance between the user and the front of the terminal 2900. In one embodiment, when the proximity sensor 2916 detects that the distance between the user and the front of the terminal 2900 gradually decreases, the processor 2901 controls the touch display 2905 to switch from the bright screen state to the dark screen state; when the proximity sensor 2916 detects that the distance between the user and the front of the terminal 2900 gradually increases, the processor 2901 controls the touch display 2905 to switch from the dark screen state to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 26 is not intended to be limiting of terminal 2900, and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components may be employed.
The memory further includes one or more programs stored in the memory, and the one or more programs include instructions for performing the virtual object aiming method provided by the embodiments of the present application.
The present application provides a computer-readable storage medium having stored therein at least one instruction, which is loaded and executed by a processor to implement the virtual object aiming method provided by the various method embodiments described above.
The present application also provides a computer program product which, when run on a computer, causes the computer to perform the method of targeting a virtual object as provided by the various method embodiments described above.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A method of aiming a virtual object, the method comprising:
displaying a user interface, the user interface including a virtual environment screen, the virtual environment screen including a first virtual object and a second virtual object located in the virtual environment;
in response to receiving a start instruction of an aiming operation at a first time, displaying a point skill indicator on a ground plane of the virtual environment, the point skill indicator being used for indicating a selected aiming point of the aiming operation on the ground plane of the virtual environment;
in response to receiving an end instruction of the aiming operation at a second time, controlling the first virtual object to aim at a target virtual object, the target virtual object being one virtual object determined from the second virtual objects according to a first priority rule, the first priority rule comprising: the priority of the virtual object type is higher than the priority of the straight-line distance between the second virtual object and the aiming point;
wherein a difference between the second time and the first time is less than a time threshold.
2. The method of claim 1, wherein the controlling the first virtual object to aim at a target virtual object in response to receiving an end instruction of the aiming operation at a second time comprises:
in response to receiving an end instruction of the aiming operation at the second time, determining at least two of the second virtual objects located in a target selection range as candidate virtual objects, the target selection range being a range determined according to at least one of a first map point where the first virtual object is located and the aiming point;
acquiring the virtual object types of at least two candidate virtual objects, wherein the virtual object types comprise a first type and a second type;
determining the candidate virtual object of the first type as the target virtual object in response to the priority of the first type being higher than the priority of the second type;
controlling the first virtual object to aim at the target virtual object.
3. The method of claim 2, wherein the determining the candidate virtual object of the first type as the target virtual object in response to the first type having a higher priority than the second type comprises:
in response to the priority of the first type being higher than the priority of the second type and there being at least two of the candidate virtual objects of the first type, obtaining the linear distances of the at least two of the candidate virtual objects of the first type from the aiming point;
determining the candidate virtual object of the first type with the smallest straight-line distance as the target virtual object.
4. The method of any of claims 1 to 3, further comprising:
in response to not receiving an end instruction of the aiming operation at a third time, controlling the first virtual object to aim at a third virtual object, the third virtual object being one virtual object determined from the second virtual objects according to a second priority rule, the second priority rule comprising that the priority of the virtual object type is lower than the priority of the straight-line distance between the second virtual object and the aiming point;
wherein a difference between the third time and the first time is equal to the time threshold.
5. The method of claim 4, wherein the controlling the first virtual object to aim at a third virtual object in response to not receiving an end instruction of the aiming operation at a third time comprises:
in response to not receiving an end instruction of the aiming operation at the third time, determining at least two of the second virtual objects located in a target selection range as candidate virtual objects, wherein the target selection range is a range determined according to at least one of the first map point where the first virtual object is located and the aiming point;
acquiring the linear distances between at least two candidate virtual objects and the aiming point;
determining the candidate virtual object with the minimum straight-line distance as the third virtual object;
controlling the first virtual object to aim at the third virtual object.
6. The method of any of claims 1 to 3, wherein the controlling the first virtual object to aim at a target virtual object in response to receiving an end instruction of the aiming operation at a second time comprises:
in response to receiving the end instruction of the aiming operation, recording an end time when the end instruction is received;
determining at least two of the second virtual objects located in a target selection range as candidate virtual objects, the target selection range being a range determined according to at least one of a first map point where the first virtual object is located and the aiming point;
calculating a priority score of the candidate virtual object, wherein the priority score is calculated according to at least one of a type weight of the virtual object type, the straight-line distance, the time threshold and a duration of the aiming operation, and the duration is a difference between the ending time and the first time;
determining the target virtual object from the candidate virtual objects according to the priority score;
controlling the first virtual object to aim at the target virtual object.
7. The method of claim 6, wherein the calculating the priority score of the candidate virtual object comprises:
obtaining the type weight corresponding to the virtual object type of the candidate virtual object;
calculating the linear distance between the candidate virtual object and the aiming point;
calculating a time weight corresponding to the duration of the aiming operation, wherein the time weight is 0 in response to the duration being greater than the time threshold; in response to the duration being less than the time threshold, the time weight is not 0;
determining, as the priority score of the candidate virtual object, the sum of the straight-line distance and the product of the type weight and the time weight.
8. The method of claim 2, wherein after the determining at least two of the second virtual objects located in the target selection range as candidate virtual objects, the method further comprises:
screening the candidate virtual objects according to at least one of an aiming condition and a priority condition, wherein the aiming condition comprises conditions that a virtual object selectable by the aiming operation should meet, and the priority condition comprises at least one of the following conditions:
preferentially selecting the candidate virtual object closest to the aiming point;
preferentially selecting the candidate virtual object with the lowest blood volume percentage;
preferentially selecting the candidate virtual object with the lowest absolute blood volume.
9. The method of any of claims 1 to 3, wherein the user interface comprises a wheel aiming control comprising a rocker button and a wheel region, the aiming operation is a drag operation that triggers the wheel aiming control to move the rocker button from an activation point to an offset point, the activation point being a center position of the wheel region;
the displaying a point skill indicator on a ground plane of the virtual environment in response to receiving a start instruction of an aiming operation at a first time comprises:
in response to receiving a start instruction of an aiming operation at a first time, calculating an offset vector from the activation point to the offset point of the aiming operation;
calculating an aiming vector from the offset vector, the aiming vector being a vector pointing from the position of the first virtual object to the aiming point, the ratio of the offset vector to the wheel radius being equal to the ratio of the aiming vector to the aiming radius, the wheel radius being the radius of the wheel region, and the aiming radius being the maximum distance at which the first virtual object can aim;
calculating the aiming point according to the aiming vector and the position of the first virtual object;
displaying the point skill indicator on the ground plane of the virtual environment according to the targeted point.
10. The method of any of claims 1 to 3, wherein after the controlling the first virtual object to aim at a target virtual object in response to receiving an end instruction of the aiming operation at a second time, the method further comprises:
displaying a selected special effect on the target virtual object, the selected special effect comprising at least one of the following: displaying a first selected identification on a second map point where the target virtual object is located, and displaying a second selected identification above the target virtual object.
11. An apparatus for aiming a virtual object, the apparatus comprising:
a display module, configured to display a user interface, where the user interface includes a virtual environment screen, and the virtual environment screen includes a first virtual object and a second virtual object located in the virtual environment;
the interaction module is used for receiving, at a first time, a start instruction generated by an aiming operation;
a display module for displaying a point skill indicator on a ground plane of the virtual environment in response to receiving the start instruction of the aiming operation at the first time, the point skill indicator being used for indicating a selected aiming point of the aiming operation on the ground plane of the virtual environment;
the interaction module is further used for generating an end instruction in response to the aiming operation stopping at a second time;
an aiming module for controlling the first virtual object to aim at a target virtual object in response to receiving the end instruction of the aiming operation at the second time, the target virtual object being one virtual object determined from the second virtual objects according to a first priority rule, the first priority rule comprising that the priority of the virtual object type is higher than the priority of the linear distance between the second virtual object and the aiming point;
wherein a difference between the second time and the first time is less than a time threshold.
12. The apparatus of claim 11, further comprising:
a determination module, configured to determine, in response to receiving an end instruction of the aiming operation at a second time, at least two of the second virtual objects located in a target selection range as candidate virtual objects, where the target selection range is a range determined according to at least one of a first map point where the first virtual object is located and the aiming point;
an obtaining module, configured to obtain the virtual object types of at least two candidate virtual objects, where the virtual object types include a first type and a second type;
the determining module is further configured to determine the candidate virtual object of the first type as the target virtual object in response to the priority of the first type being higher than the priority of the second type;
the aiming module is also used for controlling the first virtual object to aim at the target virtual object.
13. The apparatus of claim 12, wherein the obtaining module is further configured to obtain linear distances between at least two of the candidate virtual objects of the first type and the aiming point in response to the first type having a higher priority than the second type and at least two of the candidate virtual objects of the first type being present;
the determining module is further configured to determine the candidate virtual object of the first type with the smallest straight-line distance as the target virtual object.
14. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement a method of aiming a virtual object according to any one of claims 1 to 10.
15. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement a method of aiming a virtual object according to any one of claims 1 to 10.
CN202010507566.9A 2020-06-05 2020-06-05 Virtual object aiming method, device, equipment and medium Active CN111672118B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010507566.9A CN111672118B (en) 2020-06-05 2020-06-05 Virtual object aiming method, device, equipment and medium


Publications (2)

Publication Number Publication Date
CN111672118A true CN111672118A (en) 2020-09-18
CN111672118B CN111672118B (en) 2022-02-18

Family

ID=72454307

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010507566.9A Active CN111672118B (en) 2020-06-05 2020-06-05 Virtual object aiming method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN111672118B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016043067A * 2014-08-22 2016-04-04 Paon Co., Ltd. (株式会社パオン) Game program and game device
CN105194873A * 2015-10-10 2015-12-30 Tencent Technology (Shenzhen) Co., Ltd. Information-processing method, terminal and computer storage medium
CN105597325A * 2015-10-30 2016-05-25 Guangzhou Yinhan Technology Co., Ltd. Method and system for assisting in aiming
CN107837529A * 2017-11-15 2018-03-27 Tencent Technology (Shanghai) Co., Ltd. Object selection method, device, terminal and storage medium
CN108310772A * 2018-01-22 2018-07-24 Tencent Technology (Shenzhen) Co., Ltd. Attack operation execution method and apparatus, storage medium, and electronic device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Internet: "How to Use the Wheel in Honor of Kings (王者荣耀)", HTTPS://JINGYAN.BAIDU.COM/ARTICLE/17BD8E524B54A785AB2BB8C1.HTML *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112121438A * 2020-09-29 2020-12-25 Tencent Technology (Shenzhen) Co., Ltd. Operation prompting method, device, terminal and storage medium
CN113110782A * 2021-03-22 2021-07-13 Baidu Online Network Technology (Beijing) Co., Ltd. Image recognition method and device, computer equipment and storage medium
CN113209624A * 2021-06-02 2021-08-06 Beijing Zitiao Network Technology Co., Ltd. Target selection method, terminal, electronic device and storage medium
CN113350792A * 2021-06-16 2021-09-07 NetEase (Hangzhou) Network Co., Ltd. Contour processing method and device for virtual model, computer equipment and storage medium
CN113350792B * 2021-06-16 2024-04-09 NetEase (Hangzhou) Network Co., Ltd. Contour processing method and device for virtual model, computer equipment and storage medium

Also Published As

Publication number Publication date
CN111672118B (en) 2022-02-18

Similar Documents

Publication Publication Date Title
CN111589131B (en) Control method, device, equipment and medium of virtual role
CN110115838B (en) Method, device, equipment and storage medium for generating mark information in virtual environment
CN111672119B (en) Method, apparatus, device and medium for aiming virtual object
CN111589133B (en) Virtual object control method, device, equipment and storage medium
CN111672118B (en) Virtual object aiming method, device, equipment and medium
CN110585710B (en) Interactive property control method, device, terminal and storage medium
CN111467802B (en) Method, device, equipment and medium for displaying picture of virtual environment
CN109529356B (en) Battle result determining method, device and storage medium
CN110694273A (en) Method, device, terminal and storage medium for controlling virtual object to use prop
CN111659119B (en) Virtual object control method, device, equipment and storage medium
CN111672114B (en) Target virtual object determination method, device, terminal and storage medium
CN111481934B (en) Virtual environment picture display method, device, equipment and storage medium
CN112569600B (en) Path information sending method in virtual scene, computer device and storage medium
CN113289331B (en) Display method and device of virtual prop, electronic equipment and storage medium
CN112076467A (en) Method, device, terminal and medium for controlling virtual object to use virtual prop
CN110478904B (en) Virtual object control method, device, equipment and storage medium in virtual environment
CN111596838B (en) Service processing method and device, computer equipment and computer readable storage medium
CN111672106B (en) Virtual scene display method and device, computer equipment and storage medium
CN111282266B (en) Skill aiming method, device, terminal and storage medium in three-dimensional virtual environment
CN111744186A (en) Virtual object control method, device, equipment and storage medium
CN113398571A (en) Virtual item switching method, device, terminal and storage medium
CN111760278A (en) Skill control display method, device, equipment and medium
CN111672102A (en) Virtual object control method, device, equipment and storage medium in virtual scene
CN112691370A (en) Method, device, equipment and storage medium for displaying voting result in virtual game
CN112870699A (en) Information display method, device, equipment and medium in virtual environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (ref country code: HK; ref legal event code: DE; ref document number: 40028504; country of ref document: HK)
GR01 Patent grant