CN111589131A - Control method, apparatus, device and medium for a virtual character

Info

Publication number: CN111589131A
Application number: CN202010333800.0A
Authority: CN (China)
Prior art keywords: control, virtual, skill, state, virtual character
Legal status: Granted; active
Other languages: Chinese (zh)
Other versions: CN111589131B (en)
Inventors: 刘忠斌, 粟山东, 胡勋, 付源, 王振法
Current assignee: Tencent Technology Shenzhen Co Ltd
Original assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Publication of CN111589131A; application granted and published as CN111589131B

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, using indicators, e.g. showing the condition of a game character on screen
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 Features of games using an electronically generated display having two or more dimensions, characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308 Details of the user interface

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a control method, apparatus, device, and medium for a virtual character, and relates to the field of virtual environments. The method comprises the following steps: displaying a first user interface, where the first user interface comprises a first virtual environment screen and a skill control in a first state, the first virtual environment screen comprises a master virtual character, and the skill control in the first state is used for controlling the master virtual character to summon a summoned virtual character; in response to the skill control in the first state receiving a skill release operation, displaying a second user interface, where the second user interface comprises a second virtual environment screen and the skill control in a second state, the second virtual environment screen comprises the master virtual character and the summoned virtual character, and the skill control in the second state is used for controlling the summoned virtual character; and in response to the skill control in the second state receiving a selection operation, controlling the summoned virtual character to act in the virtual environment with respect to a target object. The method enables accurate control of the summoned virtual character on a touch terminal.

Description

Control method, apparatus, device and medium for a virtual character
Technical Field
The embodiments of the application relate to the field of virtual environments, and in particular to a method, an apparatus, a device, and a medium for controlling a virtual character.
Background
In virtual-environment applications such as MOBA (Multiplayer Online Battle Arena) games, users can control the activities of virtual characters in a virtual environment.
In a MOBA game, a virtual character may use a skill to call out a summoned virtual character in the virtual environment; the summoned virtual character may be a virtual character, a virtual animal, a puppet, a phantom, a monster, or the like, and is directed to attack enemy virtual characters. In a MOBA game on a personal computer, the user controls a virtual character to move in the virtual environment with the right mouse button and, after the summoned virtual character has been called out, directs it to attack a target by holding the Alt key and clicking the right mouse button.
In a MOBA game on a touch terminal such as a mobile phone or a tablet, the user controls a virtual character to move in the virtual environment through a set of UI (User Interface) controls. Because the screen of a touch terminal is usually small, there is no room to display additional UI controls for controlling the summoned virtual character. Moreover, on such a small screen, if the user controls the summoned virtual character through tap-to-select operations, mis-touches are likely, and the user cannot accurately select a target or a position.
Disclosure of Invention
The embodiments of the application provide a method, an apparatus, a device, and a medium for controlling a virtual character, which enable accurate control of a summoned virtual character on a touch terminal. The technical scheme is as follows:
In one aspect, a method for controlling a virtual character is provided, where the method includes:
displaying a first user interface, where the first user interface includes a first virtual environment screen and a skill control in a first state, the first virtual environment screen is a screen obtained by observing a virtual environment from the perspective of a master virtual character, the first virtual environment screen includes the master virtual character, and the skill control in the first state is used for controlling the master virtual character to summon a summoned virtual character;
in response to the skill control in the first state receiving a skill release operation, displaying a second user interface, the second user interface including a second virtual environment screen and the skill control in a second state, the second virtual environment screen being a screen obtained by observing the virtual environment from the perspective of the master virtual character, the second virtual environment screen including the master virtual character and the summoned virtual character, the skill control in the second state being used to control the summoned virtual character;
in response to the skill control in the second state receiving a selection operation, controlling the summoned virtual character to move in the virtual environment toward a target object, the target object being the object selected by the selection operation.
In another aspect, an apparatus for controlling a virtual character is provided, the apparatus including:
a display module, configured to display a first user interface, where the first user interface includes a first virtual environment screen and a skill control in a first state, the first virtual environment screen is a screen obtained by observing the virtual environment from the perspective of a master virtual character, the first virtual environment screen includes the master virtual character, and the skill control in the first state is used for controlling the master virtual character to summon a summoned virtual character;
an interaction module, configured to receive a skill release operation on the skill control in the first state;
the display module being further configured to display a second user interface in response to the skill control in the first state receiving the skill release operation, where the second user interface includes a second virtual environment screen and the skill control in a second state, the second virtual environment screen is a screen obtained by observing the virtual environment from the perspective of the master virtual character, the second virtual environment screen includes the master virtual character and the summoned virtual character, and the skill control in the second state is used to control the summoned virtual character;
the interaction module being further configured to receive a selection operation on the skill control in the second state;
and a control module, configured to control, in response to the skill control in the second state receiving the selection operation, the summoned virtual character to move in the virtual environment toward a target object, where the target object is the object selected by the selection operation.
In another aspect, a computer device is provided, comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the virtual character control method described above.
In another aspect, a computer-readable storage medium is provided, storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the virtual character control method described above.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
the original UI control used for controlling the main control virtual role on the user interface is set to be in two states, and the skill controls in the two states are respectively used for controlling the main control virtual role and calling the virtual role. When the main control virtual role calls the calling virtual role by using the skill control in the first state, the skill control is switched to the second state, and the user can continue to use the skill control in the second state to designate a target object for the calling virtual role, so that the calling virtual role is controlled to move in a virtual environment according to the target object. The two virtual objects are controlled on the touch terminal, the UI control cannot be newly added on the user interface, and the shielding of the UI control on the picture is reduced.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application; those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a schematic structural diagram of a terminal provided in an exemplary embodiment of the present application;
FIG. 2 is a block diagram of a computer system provided in an exemplary embodiment of the present application;
FIG. 3 is a schematic user interface diagram of a method for controlling a virtual character according to an exemplary embodiment of the present application;
FIG. 4 is a flowchart illustrating a method for controlling a virtual character according to an exemplary embodiment of the present application;
FIG. 5 is a schematic view of a camera model of a control method for a virtual character provided in another exemplary embodiment of the present application;
FIG. 6 is a schematic user interface diagram of a method for controlling a virtual character according to another exemplary embodiment of the present application;
FIG. 7 is a schematic user interface diagram of a method for controlling a virtual character according to another exemplary embodiment of the present application;
FIG. 8 is a schematic illustration of a skill indicator of a method of controlling a virtual character provided in another exemplary embodiment of the present application;
FIG. 9 is a flowchart of a method for controlling a virtual character according to another exemplary embodiment of the present application;
FIG. 10 is a schematic diagram of a skills control of a method for controlling a virtual character according to another exemplary embodiment of the present application;
FIG. 11 is a schematic user interface diagram of a method for controlling a virtual character according to another exemplary embodiment of the present application;
FIG. 12 is a schematic user interface diagram of a method for controlling a virtual character according to another exemplary embodiment of the present application;
FIG. 13 is a flowchart illustrating a method for controlling a virtual character according to another exemplary embodiment of the present application;
FIG. 14 is a flowchart of a method for controlling a virtual character according to another exemplary embodiment of the present application;
FIG. 15 is a flowchart of a method for controlling a virtual character according to another exemplary embodiment of the present application;
FIG. 16 is a flowchart of a method for controlling a virtual character according to another exemplary embodiment of the present application;
FIG. 17 is a block diagram of a control apparatus of a virtual character according to another exemplary embodiment of the present application;
FIG. 18 is a block diagram of a terminal provided in an exemplary embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, embodiments of the present application are described in further detail below with reference to the accompanying drawings.
First, terms referred to in the embodiments of the present application are briefly described:
virtual environment: is a virtual environment that is displayed (or provided) when an application is run on the terminal. The virtual environment may be a simulated world of a real world, a semi-simulated semi-fictional world, or a purely fictional world. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment, which is not limited in this application. The following embodiments are illustrated with the virtual environment being a three-dimensional virtual environment.
Virtual roles: refers to a movable object in a virtual environment. The movable object may be a virtual character, a virtual animal, an animation character, etc., such as: characters, animals, plants, oil drums, walls, stones, etc. displayed in a three-dimensional virtual environment. Optionally, the virtual character is a three-dimensional volumetric model created based on animated skeletal techniques. Each virtual character has its own shape and volume in the three-dimensional virtual environment, occupying a portion of the space in the three-dimensional virtual environment. A master virtual character refers to a virtual character that is controlled by a user (client) to be active in a virtual environment. Illustratively, the virtual character may also be referred to as a summons operator.
Calling the virtual role: are moveable objects that are controlled by the master virtual character. Illustratively, a calling avatar is an active object that the master avatar calls out in the virtual environment by releasing skills, or an existing active object in the virtual environment that the master avatar has gained control. The summoning avatar is controlled by the master avatar summoning it, i.e., by the user. The calling of the virtual role under the control of the master virtual role comprises the following steps: the control of the master virtual character is used for moving in the virtual environment, or the control of the master virtual character is used for starting attack in the virtual environment. For example, the calling avatar may be partially controlled by the master avatar, e.g., the calling avatar automatically moves in the virtual environment according to a preset algorithm, but the master avatar may control the calling avatar to attack the target. Illustratively, a summoning avatar may also be referred to as a summoning.
The MOBA game: the game is a game which provides a plurality of base points in a virtual environment, and users in different camps control virtual characters to fight in the virtual environment, take the base points or destroy enemy camp base points. For example, the MOBA game may divide the user into two enemy paradigms, and disperse the virtual characters controlled by the user in the virtual environment to compete with each other to destroy or dominate all the points of the enemy as winning conditions. The MOBA game is in the unit of a game, and the duration of the game is from the time of starting the game to the time of reaching a winning condition.
UI (User Interface) control: any visual control or element visible on the user interface of the application, such as pictures, input boxes, text boxes, buttons, and tabs. Some UI controls respond to user operations; for example, a movement control lets the user control the movement of the virtual character within the virtual environment. The UI controls referred to in the embodiments of this application include, but are not limited to, movement controls and skill controls. Illustratively, a skill control is a button icon displayed in the user interface; when the user clicks it, the virtual character is controlled to release a corresponding skill effect, and each virtual character has several skill controls. Illustratively, skill controls have different states, and the skill controls in different states are presented differently, including but not limited to: a colored icon, a gray icon, the button being clickable, the button being non-clickable, and clicks on the button having no effect.
Operation mode: on the user interface, different trigger operations applied to the same UI control produce different feedback effects. Trigger operations include clicking, double clicking, quick clicking, long-press dragging, sliding, and so on.
Quick click: the user clicks the skill control and releases it immediately. The user does not select a target through this trigger operation; the client selects the target automatically according to a preset rule. For example, when the user quickly clicks a direction-type skill control, the virtual character is controlled to release the skill in its current facing direction. Quick-click release lets the user release a skill quickly with fewer operations.
Long-press drag: the user long-presses the skill control to call up its wheel-disc virtual joystick, and drags the joystick part of the virtual joystick to select a direction or a position on the wheel. Depending on the skill type, the user can select different skill release parameters; for example, with a direction-type skill the user selects the release direction, and with a target-type skill the user selects a target located in that direction.
Blue amount (mana): in a MOBA game, the resource that some virtual characters consume when releasing skills.
Cooldown: in a MOBA game, after some virtual characters release a skill, the skill cannot be released again for a period of time; this period is the cooldown.
Automatic attack state: a state of the summoned virtual character in which it holds a target. When the target is within the summoned virtual character's normal attack range, the summoned virtual character automatically performs normal attacks on it; if the target is outside that range, the summoned virtual character moves toward the target until it can attack it with a normal attack.
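Expressed as per-tick logic, the automatic attack state might look like the following C# sketch. All type and member names, the range, and the speed values are illustrative assumptions, not taken from the patent:

```csharp
using System.Numerics;

// Illustrative sketch of the automatic attack state described above.
// All type and member names here are assumptions, not from the patent.
public class Unit
{
    public Vector2 Position;
    public bool IsDead;
    public float NormalAttackRange = 3f;
    public float MoveSpeed = 5f;

    public void NormalAttack(Unit target)
    {
        // Play the attack animation and apply damage (omitted).
    }

    public void MoveTowards(Vector2 destination, float deltaTime)
    {
        Vector2 delta = destination - Position;
        if (delta == Vector2.Zero) return;
        Position += Vector2.Normalize(delta) * MoveSpeed * deltaTime;
    }
}

public static class AutoAttackState
{
    // Called once per game tick while the summoned character holds a target.
    public static void Tick(Unit summon, Unit target, float deltaTime)
    {
        if (target == null || target.IsDead) return;

        if (Vector2.Distance(summon.Position, target.Position) <= summon.NormalAttackRange)
            summon.NormalAttack(target);                    // in normal-attack range: attack
        else
            summon.MoveTowards(target.Position, deltaTime); // otherwise close the distance
    }
}
```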
The method provided in this application can be applied to an application program that supports a virtual environment. Illustratively, such an application is one in which a user can play a game using a user account. By way of example, the methods provided herein may be applied to any of: a Virtual Reality (VR) application, an Augmented Reality (AR) application, a three-dimensional map program, a military simulation program, a virtual reality game, an augmented reality game, a First-Person Shooter (FPS) game, a Third-Person Shooter (TPS) game, a Multiplayer Online Battle Arena (MOBA) game, and a strategy game (SLG).
In some embodiments, the application may be a shooting game, a racing game, a role-playing game, an adventure game, a sandbox game, a tactical competition game, a military simulation program, or the like. The client can support at least one of the Windows, macOS, Android, iOS, and Linux operating systems, and clients on different operating systems can interconnect and intercommunicate. In some embodiments, the client is a program adapted to a mobile terminal with a touch screen.
In some embodiments, the client is an application developed based on a three-dimensional engine.
The terminal in this application has a touch function or can simulate one. The terminal may be a desktop computer, a laptop computer, a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, and so on. A client supporting a virtual environment, such as a client of an application supporting a three-dimensional virtual environment, is installed and runs on the terminal. The application program may be any of a Battle Royale (BR) game, a virtual reality application, an augmented reality program, a three-dimensional map program, a military simulation program, a third-person shooter game, a first-person shooter game, or a multiplayer online battle arena game. Alternatively, the application may be a stand-alone application, such as a stand-alone 3D game program, or a network online application.
FIG. 1 is a schematic structural diagram of a terminal according to an exemplary embodiment of the present application. As shown in FIG. 1, the terminal includes a processor 101, a touch screen 102, and a memory 103.
The processor 101 may be at least one of a single-core processor, a multi-core processor, an embedded chip, and a processor having instruction execution capabilities.
The touch screen 102 is a general touch screen or a pressure-sensitive touch screen. A general touch screen can register a pressing or sliding operation applied to the touch screen 102; a pressure-sensitive touch screen can also measure the degree of pressure exerted on the touch screen 102.
The memory 103 stores executable programs for the processor 101. Illustratively, the memory 103 stores a virtual game program A, an application program B, an application program C, a touch pressure sensing module 18, and a kernel layer 19 of an operating system. The virtual game program A is an application developed based on a three-dimensional virtual environment module 17. Optionally, the virtual game program A includes, but is not limited to, at least one of a game program, a virtual reality program, and a three-dimensional presentation program developed with the three-dimensional virtual environment module (also referred to as a virtual environment module) 17. For example, when the terminal's operating system is Android, the virtual game program A is developed in the Java programming language and C#; when the operating system is iOS, the virtual game program A is developed in Objective-C and C#.
The three-dimensional virtual environment module 17 is a module that supports multiple operating-system platforms. Schematically, it can be used for program development in multiple fields, such as game development, Virtual Reality (VR), and three-dimensional maps. The embodiments of this application do not limit the specific type of the three-dimensional virtual environment module 17; in the following embodiments, the three-dimensional virtual environment module 17 is illustrated as a module developed with the Unity engine.
The touch (and pressure) sensing module 18 receives touch events (and pressure touch events) reported by the touch screen driver 191. Optionally, the touch sensing module may lack a pressure sensing function and not receive pressure touch events. A touch event includes the type of the event and coordinate values; the types include, but are not limited to, a touch start event, a touch move event, and a touch end event. A pressure touch event includes a pressure value and coordinate values, where the coordinate values indicate the touch position of the pressure touch operation on the display screen. Optionally, an abscissa axis is established in the horizontal direction of the display screen and an ordinate axis in the vertical direction, yielding a two-dimensional coordinate system.
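Assuming the two-dimensional screen coordinate system just described, the reported events could be modeled roughly as below. These are hypothetical shapes for illustration, not the actual driver interface:

```csharp
// Hypothetical event shapes for illustration; not the actual driver interface.
public enum TouchEventType { TouchStart, TouchMove, TouchEnd }

public readonly struct TouchEvent
{
    public TouchEventType Type { get; }
    public float X { get; }  // abscissa of the touch position on the display screen
    public float Y { get; }  // ordinate of the touch position on the display screen

    public TouchEvent(TouchEventType type, float x, float y)
    {
        Type = type; X = x; Y = y;
    }
}

public readonly struct PressureTouchEvent
{
    public float Pressure { get; }  // pressure value, from a pressure-sensitive screen
    public float X { get; }
    public float Y { get; }

    public PressureTouchEvent(float pressure, float x, float y)
    {
        Pressure = pressure; X = x; Y = y;
    }
}
```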
Illustratively, the kernel layer 19 includes a touch screen driver 191 and other drivers 192. The touch screen driver 191 is a module for detecting pressure touch events; when the touch screen driver 191 detects a pressure touch event, it transmits the event to the touch pressure sensing module 18.
Other drivers 192 may be drivers associated with the processor 101, drivers associated with the memory 103, drivers associated with network components, drivers associated with sound components, and the like.
Those skilled in the art will appreciate that the foregoing is merely a general illustration of the structure of the terminal. A terminal may have more or fewer components in different embodiments; for example, the terminal may further include a gravitational acceleration sensor, a gyroscope sensor, a power supply, and the like.
FIG. 2 shows a block diagram of a computer system provided in an exemplary embodiment of the present application. The computer system 200 includes a terminal 210 and a server cluster 220.
The terminal 210 has a client 211 supporting a virtual environment installed and running; the client 211 may be an application supporting a virtual environment. When the terminal runs the client 211, a user interface of the client 211 is displayed on the screen of the terminal 210. The client can be any of an FPS game, a TPS game, a military simulation program, a MOBA game, a tactical competition game, or an SLG game; in this embodiment, the client is illustrated as a MOBA game. The terminal 210 is the terminal used by the first user 212, who uses it to control a first virtual character participating in the match.
The device types of the terminal 210 include: at least one of a smartphone, a tablet, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, and a desktop computer.
Only one terminal is shown in FIG. 2, but there are multiple other terminals 240 in different embodiments. In some embodiments, at least one other terminal 240 corresponds to a developer: a development and editing platform for the client supporting the virtual environment is installed on the other terminal 240, the developer can edit and update the client there and transmit the updated client installation package to the server cluster 220 through a wired or wireless network, and the terminal 210 can download the installation package from the server cluster 220 to update the client.
The terminal 210 and the other terminals 240 are connected to the server cluster 220 through a wireless network or a wired network.
The server cluster 220 includes at least one of a server, multiple servers, a cloud computing platform, or a virtualization center. The server cluster 220 provides background services for clients that support virtual environments. Optionally, the server cluster 220 undertakes the primary computing work and the terminals undertake the secondary computing work; or the server cluster 220 undertakes the secondary computing work and the terminal undertakes the primary computing work; or the server cluster 220 and the terminal compute cooperatively using a distributed computing architecture.
Optionally, the terminal and the server are both computer devices.
In one illustrative example, the server cluster 220 includes servers 221 and 226, where the server 221 includes a processor 222, a user account database 223, a combat service module 224, and a user-oriented Input/Output Interface (I/O Interface) 225. The processor 222 is configured to load instructions stored in the server 221 and to process data in the user account database 223 and the combat service module 224; the user account database 223 stores data for the user accounts used by the terminal 210 and the other terminals 240, such as the avatars of the user accounts, their nicknames, their combat-capability indices, and the service areas where they are located; the combat service module 224 provides multiple combat rooms for users to fight in; and the user-facing I/O interface 225 establishes communication with the terminal 210 through a wireless or wired network to exchange data.
With the above descriptions of the virtual environment and the implementation environment in mind, the virtual character control method provided in the embodiments of this application is now described, with the execution subject illustrated as a client running on the terminal shown in FIG. 1. The terminal runs an application program that supports a virtual environment.
This application provides an exemplary embodiment in which the virtual character control method is applied to a MOBA game.
Illustratively, in a MOBA game each virtual character has its own set of skills. For example, as shown in FIG. 3, the master virtual character 301 has skill one 302, skill two 303, and skill three 304: with skill one 302 the master virtual character 301 can launch an attack in a specified direction, with skill two 303 it can stun a specified target, and with skill three 304 it can be controlled to move to a specified position. Illustratively, the master virtual character in this embodiment has a summoning skill, with which it can call out a summoned virtual character at a user-specified position or a system default position; the summoned virtual character automatically attacks enemy units (enemy virtual characters, monsters, enemy cannon carts, enemy minions, enemy defense towers, and enemy bases) within its attack range. Illustratively, after the master virtual character calls out the summoned virtual character: if the master virtual character attacks an enemy unit, the summoned virtual character attacks that same enemy unit; if the master virtual character is not attacking any enemy unit, the summoned virtual character automatically searches its attack range for an enemy unit to attack; and if there is no enemy unit within its attack range, the summoned virtual character follows the master virtual character through the virtual environment. Besides this automatic target finding, the master virtual character may also designate an enemy unit for the summoned virtual character: after summoning it, the master virtual character uses a UI control (skill control) on the user interface to designate an enemy unit and direct the summoned virtual character to attack that unit. A sketch of this target-priority rule follows.
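Read as pseudocode, the priority rule above amounts to three ordered checks. The C# sketch below is an assumption-laden illustration; the Unit type and all member names are invented for the example:

```csharp
using System.Collections.Generic;

// Sketch of the summon's target-priority rule described above.
// The Unit type and all member names are invented for illustration.
public class Unit
{
    public bool IsDead;
    public Unit AttackTarget;  // the unit this character is currently attacking, if any
    public Unit FollowTarget;  // the unit this character is currently following, if any
}

public static class SummonTargeting
{
    public static void ChooseAction(Unit summon, Unit master, IReadOnlyList<Unit> enemiesInRange)
    {
        if (master.AttackTarget != null && !master.AttackTarget.IsDead)
        {
            // 1) The master character is attacking a unit: attack the same unit.
            summon.AttackTarget = master.AttackTarget;
        }
        else if (enemiesInRange.Count > 0)
        {
            // 2) Otherwise search the attack range for an enemy unit to attack.
            summon.AttackTarget = enemiesInRange[0];
        }
        else
        {
            // 3) No enemy unit in range: follow the master character.
            summon.AttackTarget = null;
            summon.FollowTarget = master;
        }
    }
}
```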
In an exemplary embodiment of the method, the user operates two different states of one UI control to perform two control operations: controlling the master virtual character and controlling the summoned virtual character. Illustratively, the UI control is the skill control corresponding to skill three of the master virtual character, where skill three is the skill that summons the summoned virtual character. The skill control has two states: a first state and a second state.
The skill control in the first state is used to control the master virtual character to call out the summoned virtual character at a specified position in the virtual environment. Illustratively, the skill control in the first state is a position-type control: the user uses it to select an area in the virtual environment, and the summoned virtual character is called out within that area.
When the user calls out the summoned virtual character using the skill control in the first state, the skill control automatically switches to the second state. The skill control in the second state is used to control the summoned virtual character and is a target-type control. For example, the skill control in the second state has two operation modes. In the first, the user long-presses the skill control to call up its wheel-disc virtual joystick and drags the joystick part to select a direction; if an attack target exists in that direction, releasing the control selects that target, and the summoned virtual character is directed to attack it. In the second, the user quickly clicks the skill control to make the summoned virtual character return to the master virtual character.
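One plausible way to distinguish the two operation modes is by press duration and drag distance. The thresholds and names in this C# sketch are assumptions, not values from the patent:

```csharp
using System.Numerics;

// Distinguishing a quick click from a long-press drag on the second-state
// skill control. Thresholds and member names are illustrative assumptions.
public class SkillControlInput
{
    const float LongPressSeconds = 0.3f; // assumed long-press threshold
    const float DragDeadZone = 10f;      // assumed drag dead zone, in pixels

    Vector2 downPosition;
    float heldSeconds;
    bool dragging;

    public void OnTouchStart(Vector2 position)
    {
        downPosition = position;
        heldSeconds = 0f;
        dragging = false;
    }

    public void OnTouchMove(Vector2 position, float deltaTime)
    {
        heldSeconds += deltaTime;
        if (heldSeconds >= LongPressSeconds ||
            Vector2.Distance(position, downPosition) > DragDeadZone)
        {
            dragging = true; // call up the wheel-disc virtual joystick here
        }
    }

    public void OnTouchEnd(Vector2 position)
    {
        if (dragging)
            ReleaseTowards(position - downPosition); // direction chosen with the joystick
        else
            QuickClick();                            // summon returns to the master character
    }

    void QuickClick() { /* control the summoned character to return to the master */ }
    void ReleaseTowards(Vector2 direction) { /* select the attack target in this direction */ }
}
```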
Illustratively, the summoned virtual character has a maximum survival time. If it is killed, or its survival time reaches the maximum, the skill control switches from the second state back to the first state, and the user can use the skill control in the first state to call out the summoned virtual character again.
Illustratively, the skill control in the second state also carries a ring-shaped progress bar that times the summoned virtual character's survival. When the progress bar fills, the survival time has reached the maximum, and the skill control switches from the second state back to the first state.
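A minimal sketch of that timing, assuming a fixed maximum survival time and a per-tick update; the 30-second value and all names are invented for illustration:

```csharp
using System;

// Times the summoned character's survival and drives the ring-shaped
// progress bar; switches the skill control back to the first state when
// the summon dies or its survival time runs out. Names are illustrative.
public enum SkillControlState { First, Second }

public class SummonLifetime
{
    public const float MaxSurvivalSeconds = 30f; // assumed value for illustration
    float elapsed;

    // Fill fraction of the circular progress bar (0 = empty, 1 = full).
    public float Progress => Math.Min(elapsed / MaxSurvivalSeconds, 1f);

    public SkillControlState Tick(float deltaTime, bool summonKilled)
    {
        elapsed += deltaTime;
        if (summonKilled || elapsed >= MaxSurvivalSeconds)
            return SkillControlState.First; // switch the control back to the first state
        return SkillControlState.Second;    // summon still alive: keep the second state
    }
}
```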
Illustratively, the summoned virtual character may instead be one that applies gain (buff) effects to friendly units. For example, after the master virtual character summons it and designates a target object, the summoned virtual character moves toward the designated target object and, on approaching it, applies a gain effect to it. Gain effects include: increasing movement speed, increasing defense, increasing health, increasing mana, increasing health regeneration, increasing mana regeneration, adding a shield, increasing attack power, increasing attack speed, sharing damage dealt to the target object, absorbing damage dealt to the target object, increasing critical hit rate, and enlarging the field of view.
Illustratively, in the technical implementation, the client includes a presentation layer and a logic layer. The logic layer cannot perceive the presentation layer, i.e., it cannot access any presentation-layer data. The presentation layer can freely read logic-layer data but cannot modify it: any modification to the logic layer requires sending a message to the server, which forwards the request to all clients in the current match so that the logic layers of all clients are modified uniformly, ensuring logical consistency across clients.
When the user selects a position and releases the skill using the skill control in the first state, the client collects data such as the skill ID (Identity Document) used and the position selected, and sends a skill release request to the server. On receiving the request, the server processes it, judges whether the release is legal, and, if so, forwards the skill release data to all clients in the current match. The logic layer of each client receives the skill release data; the presentation layer displays the summoned virtual character in the virtual environment according to that data; the logic layer switches the skill control from the first state to the second state and marks the skill control in the second state as controlling the summoned virtual character. Following the logic layer's change to the control's state, the presentation layer changes the skill control's icon to the second-state icon and displays a progress bar on the control. When the user triggers the skill control in the second state to select a target object for the summoned virtual character, the presentation layer collects data such as the skill ID, the target object, and the action position, and sends a skill release request to the server. The server processes the request and forwards the skill release data to all clients in the match. On receiving the data, the client's logic layer judges whether the user quickly clicked the skill control in the second state. If so, the logic layer further checks whether a controllable summoned virtual character exists; if one exists, it sets the summoned virtual character to take the master virtual character as its follow target; if not, the skill release is invalid. If the user did not quickly click the skill control, the logic layer likewise checks whether a controllable summoned virtual character exists; if one exists, it judges whether the release selected a target object: if a target object was selected, it puts the summoned virtual character into the automatic attack state with the target object as its attack object; if not, it sets the summoned virtual character to automatically follow the master virtual character. A sketch of this branching follows.
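The branching in that last step can be summarized as follows. This is a reading of the paragraph above into C#, with every type and member name assumed:

```csharp
// Sketch of the logic layer's branching when second-state skill release
// data arrives from the server. All type and member names are assumed.
public class SkillReleaseData
{
    public bool IsQuickClick;
    public bool HasTargetObject;
    public object TargetObject;
}

public class SummonLogicLayer
{
    public object MasterCharacter;
    public object Summon;  // null when no controllable summoned character exists

    public void OnSkillReleaseData(SkillReleaseData data)
    {
        bool hasSummon = Summon != null;

        if (data.IsQuickClick)
        {
            if (!hasSummon) return;        // no controllable summon: release is invalid
            Follow(MasterCharacter);       // take the master character as the follow target
            return;
        }

        if (!hasSummon) return;

        if (data.HasTargetObject)
            AutoAttack(data.TargetObject); // enter the automatic attack state on the target
        else
            Follow(MasterCharacter);       // no target selected: follow the master
    }

    void Follow(object target) { /* set the summon's follow target */ }
    void AutoAttack(object target) { /* set the summon's attack object */ }
}
```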
FIG. 4 is a flowchart of a virtual character control method provided in an exemplary embodiment of the present application. The execution subject of the method is illustrated as a client running on the terminal shown in FIG. 1, where the client supports a virtual environment. The method includes at least the following steps.
Step 401: display a first user interface, where the first user interface includes a first virtual environment screen and a skill control in a first state, the first virtual environment screen is a screen obtained by observing the virtual environment from the perspective of a master virtual character and includes the master virtual character, and the skill control in the first state is used to control the master virtual character to call out a summoned virtual character.
Illustratively, the first user interface is the interface the client displays after a match has started. Before the first user interface, the client may also display, for example, a team-forming interface for forming teams with friends, a matching interface for matching the master user account with other user accounts, or a loading interface for loading the information of the current match.
Illustratively, the master virtual character in this embodiment is the virtual character controlled by this client. The match may also include other virtual characters, for example a second virtual character, which may be a master virtual character controlled by another client or a computer-controlled virtual character not controlled by any user.
The first user interface includes a first virtual environment screen, which is acquired by observing the virtual environment from the perspective of the master virtual character.
Optionally, the virtual environment screen is a screen obtained by observing the virtual environment from the perspective of the master virtual character. The perspective is the observation angle at which the master virtual character is observed in the virtual environment from a first-person or third-person view. Optionally, in the embodiments of this application, the perspective is the angle at which the master virtual character is observed through a camera model in the virtual environment.
Optionally, the camera model automatically follows the master virtual character in the virtual environment: when the position of the master virtual character changes, the camera model moves with it, always staying within a preset distance of the master virtual character. Optionally, the relative positions of the camera model and the master virtual character do not change during automatic following.
The camera model is a three-dimensional model located around the master virtual character in the virtual environment. With a first-person perspective, the camera model is located near or at the head of the master virtual character. With a third-person perspective, the camera model may be located behind the master virtual character and bound to it, or at any position a preset distance from it; through the camera model, the master virtual character can be observed from different angles in the virtual environment. Optionally, when the third-person perspective is a first-person over-the-shoulder perspective, the camera model is located behind the master virtual character (for example, at the character's head and shoulders). Besides the first-person and third-person perspectives, other perspectives may be used, such as a top-down perspective, in which the camera model may be located above the head of the master virtual character and views the virtual environment from overhead. Optionally, the camera model is not actually displayed in the virtual environment, i.e., it does not appear in the virtual environment screen shown on the user interface.
For example, the camera model is located at any position a preset distance from the master virtual character. Optionally, one virtual character corresponds to one camera model, and the camera model may rotate with the virtual character as the rotation center, for example around any point of the virtual character. During rotation the camera model not only turns but also shifts in displacement, while the distance between the camera model and the rotation center stays constant; that is, the camera model moves over the surface of a sphere centered on the rotation center. The rotation center may be any point of the virtual character, such as the head or the torso, or any point around the character, which is not limited in the embodiments of this application. Optionally, when the camera model observes the virtual character, the center of its view points from its position on the spherical surface toward the sphere's center.
Optionally, the camera model may also observe the virtual character at a preset angle from different directions around it.
Referring to FIG. 5, schematically, a point in the master virtual character 11 is chosen as the rotation center 12, and the camera model rotates around it. Optionally, the camera model has an initial position above and behind the master virtual character (for example, behind the head). Illustratively, as shown in FIG. 5, the initial position is position 13; when the camera model rotates to position 14 or position 15, its viewing direction changes accordingly.
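The constant-radius rotation described above is just spherical-coordinate math. A sketch, with a yaw/pitch parameterization chosen for the example:

```csharp
using System;
using System.Numerics;

// Places the camera model on a sphere of fixed radius around the rotation
// center, and aims it back at the center, as described above.
public static class CameraRig
{
    // yaw and pitch are in radians; radius is the constant camera-to-center distance.
    public static Vector3 CameraPosition(Vector3 rotationCenter, float yaw, float pitch, float radius)
    {
        var offset = new Vector3(
            radius * MathF.Cos(pitch) * MathF.Sin(yaw),
            radius * MathF.Sin(pitch),
            radius * MathF.Cos(pitch) * MathF.Cos(yaw));
        return rotationCenter + offset;
    }

    // The view direction points from the camera's point on the sphere toward the center.
    public static Vector3 ViewDirection(Vector3 cameraPosition, Vector3 rotationCenter)
        => Vector3.Normalize(rotationCenter - cameraPosition);
}
```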
Optionally, the virtual environment displayed by the virtual environment screen includes at least one of the following elements: mountains, flat ground, rivers, lakes, oceans, deserts, swamps, quicksand, sky, plants, buildings, and vehicles.
Illustratively, the skill control has at least two states. The skill controls in different states are displayed at the same position on the user interface but presented differently, where the presentation includes at least one of icon, color, special effect, size, and shape. The objects controlled by the skill control differ by state: the skill control in the first state controls the master virtual character, and the skill control in the second state controls the summoned virtual character. The operation mode also differs by state and is at least one of a target-type control, a direction-type control, and a position-type control. A target-type control selects the target the skill effect acts on; a direction-type control selects the direction of skill release in the virtual environment; and a position-type control selects the area of skill release in the virtual environment.
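The state and targeting taxonomy above maps naturally onto a pair of enums. A sketch, with names invented for the example:

```csharp
// The two-state, three-mode taxonomy described above, expressed as data.
// All names are invented for the example.
public enum SkillControlState { First, Second }

public enum TargetingMode
{
    TargetType,    // selects the target the skill effect acts on
    DirectionType, // selects the direction of skill release in the virtual environment
    PositionType   // selects the area of skill release in the virtual environment
}

public class SkillControl
{
    public SkillControlState State;
    public TargetingMode Mode; // may differ between the two states
    public string Icon;        // different states display different icons
    public float ScreenX;      // both states are displayed at the same
    public float ScreenY;      // position on the user interface
}
```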
For example, the skill control has two states; the skill control in each state is illustrated below.
As shown in FIG. 3, taking the skill control of skill three 304 as an example, the skill control 501 in the first state is displayed in FIG. 3, and the skill control 502 in the second state is displayed in FIG. 6. The skill control 501 in the first state and the skill control 502 in the second state are displayed at the same position on the user interface but with different icons.
Exemplary ways for the user to trigger the skill control include at least one of clicking, double clicking, long pressing, and dragging.
For example, as shown in FIG. 7, the user can achieve a control effect by directly clicking the skill control 501 in the first state.
The user can also call up the wheel-disc virtual joystick 505 by long-pressing the skill control 501 in the first state and achieve a control effect by dragging the virtual joystick 505. The wheel-disc virtual joystick 505 includes a wheel part 503 and a rocker part 504, and the rocker part 504 can move freely within the area of the wheel part 503. For a direction-type or target-type control, the client derives the direction of the user's operation from the offset of the rocker part 504 from the center point 506 of the wheel part 503. For a position-type control, the client derives the position of the user's operation from the relative position of the rocker part 504 within the wheel part 503. The sketch below illustrates both derivations.
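Both derivations reduce to simple vector arithmetic. A C# sketch under assumed names; the wheel radius, cast range, and the mapping rule are illustrative:

```csharp
using System.Numerics;

// Deriving a direction or a position from the wheel-disc virtual joystick,
// per the description above. Names and the mapping rule are assumptions.
public static class WheelJoystick
{
    // Direction-type / target-type controls: the operation's direction is the
    // offset of the rocker part from the wheel part's center point.
    public static Vector2 Direction(Vector2 wheelCenter, Vector2 rockerPosition)
    {
        Vector2 offset = rockerPosition - wheelCenter;
        return offset == Vector2.Zero ? Vector2.Zero : Vector2.Normalize(offset);
    }

    // Position-type controls: the rocker's position relative to the wheel,
    // scaled onto the skill's cast range around the character.
    public static Vector2 WorldPosition(Vector2 wheelCenter, Vector2 rockerPosition,
                                        float wheelRadius, Vector2 characterPosition,
                                        float castRange)
    {
        Vector2 relative = (rockerPosition - wheelCenter) / wheelRadius; // within the unit disc
        return characterPosition + relative * castRange;
    }
}
```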
Illustratively, when the user calls up the wheel-disc virtual joystick by pressing the skill control, a corresponding skill indicator appears in the virtual environment. The skill indicator displays the user's current aiming state: the indicator of a direction-type control shows a direction in the virtual environment, and the indicator of a position-type control circles a position range in it. As shown in FIG. 8, skill indicators for direction-type, target-type, and position-type controls are given. Parts (1) and (2) of FIG. 8 show the indicators of two direction-type controls: in (1), a directional arrow with the farthest action distance starts at the position of the master virtual character or the summoned virtual character and points in the direction indicated by the rocker; in (2), the arrow likewise starts at the position of the master virtual character or the summoned virtual character and points in the direction indicated by the rocker. Parts (3) and (4) show the indicators of two target-type controls: in (3), a straight line with the farthest action distance starts at the position of the master virtual character or the summoned virtual character and points in the direction indicated by the rocker part; an object located on the line is the target of the skill release. In (4), a fan-shaped range near the line is added to (3); an object located within the fan-shaped range is the target of the skill release. Parts (5) and (6) show two position-type indicators: in (5), a circular area is selected in the virtual environment, its position determined by the relative positions of the rocker part and the wheel part; in (6), a fan-shaped area is selected, its origin at the position of the master virtual character or the summoned virtual character and pointing in the direction indicated by the rocker part. A position-type control can also select a range of other shapes and sizes in the virtual environment in which to release the skill. The sketch below gives a target test for the fan-shaped case.
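For the fan-shaped indicators in (4) and (6), a candidate-target test could look like this; the radius, angle, and all parameter names are assumptions:

```csharp
using System;
using System.Numerics;

// Tests whether an object falls inside a fan-shaped (sector) skill indicator:
// within the farthest action distance and within half the fan angle of the
// aimed direction. All parameter names are assumptions.
public static class SectorIndicator
{
    public static bool Covers(Vector2 origin, Vector2 aimDirection,
                              float radius, float fanAngleRadians, Vector2 objectPosition)
    {
        Vector2 toObject = objectPosition - origin;
        if (toObject == Vector2.Zero) return true;    // object at the origin
        if (toObject.Length() > radius) return false; // beyond the farthest action distance

        float cos = Vector2.Dot(Vector2.Normalize(toObject), Vector2.Normalize(aimDirection));
        float angle = MathF.Acos(Math.Clamp(cos, -1f, 1f)); // angle off the aimed direction
        return angle <= fanAngleRadians / 2f;
    }
}
```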
Illustratively, the skill control in the first state may be any one of a position-type control, a target-type control, and a direction-type control.
Illustratively, the skill control in the first state is a position-type control. The user can select a default position in the virtual environment by quickly clicking it; the default position is a position the client selects automatically from the virtual environment, for example by identifying an enemy unit in the current screen and taking that unit's position as the default. The user can also call up the wheel-disc virtual joystick by long-pressing and dragging, and manually select a position in the virtual environment.
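One plausible selection rule for that default position is the nearest enemy unit visible on the current screen. The patent does not prescribe this rule, so the sketch below is purely illustrative:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Numerics;

// Picks a default position for a quick click: the position of the nearest
// visible enemy unit, falling back to the master character's own position.
// This selection rule is an assumption for illustration.
public static class DefaultPosition
{
    public static Vector2 Pick(Vector2 masterPosition, IEnumerable<Vector2> visibleEnemyPositions)
    {
        return visibleEnemyPositions
            .DefaultIfEmpty(masterPosition)
            .OrderBy(p => Vector2.DistanceSquared(p, masterPosition))
            .First();
    }
}
```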
Illustratively, the skill control in the first state is used to control the master virtual character; illustratively, it is the skill control corresponding to one of the master virtual character's skills. Illustratively, the user controls the master virtual character to summon the summoned virtual character in the virtual environment by triggering the skill control in the first state.
The summoned virtual character is at least one virtual character controlled by the master virtual character. Illustratively, the summoned virtual character may be at least one of a monster, a robot, a soldier, a phantom of the master virtual character, an animal, or a plant. Illustratively, the summoned virtual character is a virtual character that the master virtual character summons, or a virtual character that the master virtual character gains control of after releasing a skill. For example, the user summons a summoned virtual character at a first position by triggering the skill control in the first state; or the user selects a monster in the virtual environment by triggering the skill control in the first state, and the monster is converted into a summoned virtual character, so that the master virtual character can control it.
Illustratively, as shown in FIG. 3, the first user interface 507 is one first user interface provided by this embodiment. The first user interface 507 includes a first virtual environment screen and the skill control 501 in the first state, and the first virtual environment screen includes the master virtual character 301.
Step 403: in response to the skill control in the first state receiving a skill release operation, displaying a second user interface, where the second user interface includes a second virtual environment picture and the skill control in a second state, the second virtual environment picture is a picture of the virtual environment observed from the perspective of the master virtual character, the second virtual environment picture includes the master virtual character and the summoned virtual character, and the skill control in the second state is used to control the summoned virtual character.
The skill release operation is a trigger operation by the user on the skill control in the first state. The skill release operation may be any one of a click, a double click, a long press, a slide, and a long-press drag. When the skill control in the first state is a position-type control, the skill release operation selects a position area in the virtual environment, at which the client summons the summoned virtual character. When the skill control in the first state is a direction-type or target-type control, the skill release operation selects a direction or a target in the virtual environment, and the client summons the summoned virtual character in that direction or at that target.
Illustratively, after receiving the skill release operation, the client summons the summoned virtual character in the virtual environment according to the skill release operation, and controls the skill control to switch from the first state to the second state.
For example, the switch of the skill control from the first state to the second state may occur automatically after the summoned virtual character is summoned, or may be performed manually by the user through a trigger operation. For example, the client controls the skill control to switch from the first state to the second state in response to receiving a double click by the user on the skill control in the first state.
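Illustratively, this two-state behavior can be viewed as a small state machine. The following Python code is a non-limiting sketch of that idea only; the class and method names are assumptions introduced here for illustration and do not appear in the embodiments:

```python
from enum import Enum

class SkillControlState(Enum):
    FIRST = 1   # controls the master virtual character (the summoning skill)
    SECOND = 2  # controls the summoned virtual character

class SkillControl:
    def __init__(self) -> None:
        self.state = SkillControlState.FIRST

    def on_skill_release(self) -> None:
        # Automatic switch: after the summon succeeds, the control drives the summon.
        if self.state is SkillControlState.FIRST:
            self.state = SkillControlState.SECOND

    def on_double_click(self) -> None:
        # Manual switch: a double click toggles between the two states.
        if self.state is SkillControlState.FIRST:
            self.state = SkillControlState.SECOND
        else:
            self.state = SkillControlState.FIRST

control = SkillControl()
control.on_skill_release()
assert control.state is SkillControlState.SECOND
```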
The skill control in the second state is used to control the summoned virtual character. For example, the skill control in the second state may control at least one of the summoned virtual character's movement in the virtual environment, attack, defense, and skill release. Illustratively, the skill control in the second state is a target-type control for selecting a follow target or an attack target for the summoned virtual character.
Illustratively, as shown in fig. 6, the second user interface 510 is one example of the second user interface provided in this embodiment. The second user interface 510 includes a second virtual environment picture and a skill control 502 in the second state; the second virtual environment picture includes the master virtual character 301 and the summoned virtual character 509 summoned by the master virtual character at a first position 508, where the first position 508 is determined according to the skill release operation.
Step 405: in response to the skill control in the second state receiving a selection operation, controlling the summoned virtual character to move toward a target object in the virtual environment, where the target object is the object selected by the selection operation.
The selection operation is a trigger operation by the user on the skill control in the second state. The selection operation is at least one of a click, a double click, a long press, a slide, and a long-press drag. Illustratively, the selection operation is used to select the target object. When the skill control in the second state is a target-type control, the target object is the target selected by the control. When the skill control in the second state is a direction-type or position-type control, the target object is a target in the selected direction or at the selected position.
The target object is the reference object for the summoned virtual character's activity in the virtual environment. The summoned virtual character moves, attacks, defends, or releases skills in the virtual environment according to the target object. For example, the summoned virtual character follows the target object through the virtual environment, launches an attack on the target object, blocks damage for the target object, releases skills on the target object, and so on.
In summary, in the method provided in this embodiment, the UI control originally used on the user interface to control the master virtual character is given two states, and the skill control in these two states is used to control the master virtual character and the summoned virtual character respectively. When the master virtual character summons the summoned virtual character using the skill control in the first state, the skill control switches to the second state, and the user can continue to use the skill control in the second state to designate a target object for the summoned virtual character, thereby controlling the summoned virtual character to act in the virtual environment according to that target object. Two virtual objects are thus controlled on a touch terminal without adding new UI controls to the user interface, reducing the occlusion of the picture by UI controls.
Illustratively, as shown in fig. 9, the present application also provides an exemplary embodiment of controlling the summoned virtual character using different trigger operations. Illustratively, the skill control switches from the first state to the second state automatically when the summoned virtual character is summoned, and the skill control in the second state is also used to time the survival time of the summoned virtual character.
Fig. 9 is a flowchart of a method for controlling a virtual character according to an exemplary embodiment of the present application. The execution subject of the method is exemplified by a client running on the terminal shown in fig. 1, the client is a client supporting a virtual environment, and based on the exemplary embodiment shown in fig. 4, step 402 is further included before step 403, step 406 is further included after step 405, and step 405 further includes steps 4051 and 4052.
Step 402: in response to the skill control in the first state receiving the skill release operation, controlling the skill control to switch from the first state to the second state, and timing the survival time of the summoned virtual character.
Illustratively, the summoned virtual character is an automatically attacking virtual character. That is, after the master virtual character summons it into the virtual environment, the summoned virtual character automatically searches for enemy units in the virtual environment to attack. Enemy units include at least one of: other virtual characters in the camp opposing the master virtual character, soldiers, siege vehicles, defense towers, and bases of the enemy camp, and neutral resources (monsters, animals, and plants) in the virtual environment. For example, the automatically attacking summoned virtual character searches for enemy units to attack within the field of view of the master virtual character. For example, when the master virtual character is attacking an enemy unit, the summoned virtual character preferentially attacks that enemy unit. For example, the user may control the summoned virtual character to stop the automatic attack, or may designate an attack target for it, by triggering the skill control in the second state. Illustratively, the summoned virtual character may also be an automatically defending virtual character; that is, it automatically finds friend units in the virtual environment to protect, and, for example, preferentially protects the master virtual character. Exemplary attack modes of the summoned virtual character include normal attacks and skill attacks. Exemplary defense modes of the summoned virtual character include blocking damage for the target object and defensive skills.
Illustratively, the summoned virtual character may also be a virtual character controlled by another client, i.e., by another user. For example, when a virtual character in the master virtual character's own camp (a teammate) is idle (its user has temporarily stopped controlling it), the master virtual character may acquire control of that teammate; the user can then switch the skill control to the second state by double-clicking the skill control in the first state and use the skill control in the second state to control the teammate's activity in the virtual environment.
Illustratively, the summoning avatar has a maximum survival time.
Illustratively, when the summoned virtual character is a virtual character which the master virtual character summons to the virtual environment by releasing skills, the summoned virtual character has the maximum survival time in the virtual environment, for example, the summoned virtual character survives in the virtual environment for 2 minutes at most, and after 2 minutes, the summoned virtual character automatically disappears. Illustratively, the summoning avatar also has a life value, and the summoning avatar also disappears from the virtual environment when the life value of the summoning avatar is below a threshold. The maximum survival time is the maximum time that a summoned avatar can survive in the virtual environment.
When the summoning avatar appears in the virtual environment, the client may begin calculating the survival time of the summoning avatar in the virtual environment to determine if the survival time of the summoning avatar reaches a maximum survival time.
Illustratively, when the client receives the skill release operation, it summons the summoned virtual character in the virtual environment and controls the skill control to switch from the first state to the second state. Illustratively, the skill control in the second state includes a progress control for timing the survival time of the summoned virtual character.
Illustratively, the progress control may be any one of a progress bar, a timer, and a special effect; for example, the survival time of the summoned virtual character may be indicated by a filling progress bar, by a countdown timer, or by playing a fixed special-effect animation.
For example, as shown in fig. 10, part (1) of fig. 10 shows the skill control in the first state, and part (2) of fig. 10 shows the skill control in the second state, where the skill control in the second state has a ring-shaped progress control 511; when the progress bar in the progress control 511 is full, the survival time of the summoned virtual character has reached the maximum survival time.
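Illustratively, the fill fraction of such a ring-shaped progress control can be derived from the elapsed survival time. The sketch below is a non-limiting example that reuses the 2-minute (120-second) maximum survival time mentioned above; the function name is hypothetical:

```python
def progress_fraction(elapsed_seconds: float, max_survival_seconds: float) -> float:
    """Fill fraction of the progress control; 1.0 means the summoned
    virtual character has reached its maximum survival time."""
    if max_survival_seconds <= 0:
        return 1.0
    return min(elapsed_seconds / max_survival_seconds, 1.0)

# 30 seconds into a 120-second maximum survival time: the ring is a quarter full.
assert progress_fraction(30.0, 120.0) == 0.25
```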
Step 406: in response to the survival time of the summoned virtual character reaching the maximum survival time, or in response to the life value of the summoned virtual character being less than the life threshold, controlling the summoned virtual character to disappear from the virtual environment, and controlling the skill control to switch from the second state to the first state.
Illustratively, when the survival time of the summoned virtual character reaches the maximum survival time, or the summoned virtual character dies, the client controls the summoned virtual character to disappear from the virtual environment and controls the skill control to switch from the second state to the first state.
For example, in addition to automatically controlling the switching of the states of the skill control, the user may also manually control the switching of the states of the skill control by other means, such as by double-clicking the skill control in the second state to switch the skill control from the second state to the first state.
After the skill control switches back to the first state, the user can continue to summon the summoned virtual character using the skill control in the first state. Illustratively, the skill control has a cooldown time: once used, the skill control enters cooldown, during which it cannot be triggered, and it can be triggered again only after the cooldown ends. For example, the skill control in the first state and the skill control in the second state may each have their own cooldown time, or may share the same cooldown time. For example, if the cooldown of the skill control in the first state is 30 seconds and the cooldown of the skill control in the second state is 6 seconds, then after the user summons the summoned virtual character using the skill control in the first state, that skill enters cooldown and the control switches to the second state; the user can continue to use the skill control in the second state to direct the summoned virtual character to attack targets. However, if the summoned virtual character dies within 30 seconds and the skill control switches back to the first state, the skill control in the first state is still on cooldown, and the user cannot immediately summon the summoned virtual character again.
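Illustratively, the per-state cooldowns described above can be tracked as shown in the following non-limiting Python sketch; the class name and API are assumptions, and the 30-second / 6-second values merely mirror the example in the preceding paragraph:

```python
class CooldownTracker:
    """Tracks an independent cooldown for each of the two skill-control states."""

    def __init__(self, first_cd: float = 30.0, second_cd: float = 6.0) -> None:
        self.cooldowns = {"first": first_cd, "second": second_cd}
        self.ready_at = {"first": 0.0, "second": 0.0}  # game time in seconds

    def can_trigger(self, state: str, now: float) -> bool:
        return now >= self.ready_at[state]

    def trigger(self, state: str, now: float) -> bool:
        # Start the cooldown for this state if it is currently ready.
        if not self.can_trigger(state, now):
            return False
        self.ready_at[state] = now + self.cooldowns[state]
        return True

tracker = CooldownTracker()
assert tracker.trigger("first", now=0.0)          # summon released at t = 0
assert not tracker.can_trigger("first", now=10.0) # summon died early: still cooling down
```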
Step 4051: in response to the skill control in the second state receiving the first selection operation, controlling the summoned virtual character to follow the master virtual character in the virtual environment.
For example, the skill control in the second state may receive different trigger operations, and the different trigger operations have different control effects. For example, the user may use a different trigger action to trigger a skill control in the second state to control activities such as summoning movement of the virtual character in the virtual environment, attacking, defending, and releasing skills. For example, different trigger operations may select different target objects.
Illustratively, the selection operation includes a first selection operation, and the target object of the first selection operation is the master virtual character. The first selection operation is an operation of clicking the skill control in the second state.
Illustratively, the first selection operation is a quick click by the user on the skill control in the second state. When the user quickly clicks the skill control in the second state, the client determines the target object to be the master virtual character, switches the summoned virtual character into the automatic following state, and controls the summoned virtual character to automatically follow the master virtual character in the virtual environment. For example, in response to the distance between the summoned virtual character and the master virtual character exceeding a distance threshold, the client controls the summoned virtual character to move toward the position of the master virtual character until the distance between them is less than or equal to the distance threshold.
Illustratively, the first selection operation may also be at least one of a double-click, a long-press, and a slide operation.
Illustratively, as shown in fig. 11, after the master virtual character summons the summoned virtual character in the virtual environment, a pointing indicator 512 is displayed near the master virtual character 301; the pointing indicator 512 indicates the orientation of the summoned virtual character relative to the master virtual character 301. When the user performs the first selection operation, a special-effect circle 513 and a special-effect line 514 are displayed at the position of the master virtual character 301, prompting that the target object of the summoned virtual character is the master virtual character 301.
Step 4052: in response to the skill control in the second state receiving the second selection operation, controlling the summoned virtual character to follow the target object and produce an additional effect on the target object, where the additional effect includes at least one of a gain effect, a reduction effect, and an attack effect.
Illustratively, the selection operation includes a second selection operation, and the target object of the second selection operation includes: at least one of an enemy unit of the master virtual character and a friend unit of the master virtual character.
An enemy unit is a unit in the camp opposing the master virtual character. A friend unit is a unit in the same camp as the master virtual character. Illustratively, enemy units are units that the summoned virtual character can attack, and friend units are units that the summoned virtual character cannot attack. Illustratively, enemy units also include neutral resources (monsters, animals, plants) in the virtual environment. Units include: virtual characters, soldiers, siege vehicles, defense towers, bases, and the like.
The second selection operation is a pointing operation on the wheel-style virtual joystick, where the wheel-style virtual joystick of the skill control is called up by long-pressing the skill control in the second state.
Illustratively, the second selection operation is an operation in which the user selects a target using the wheel-style virtual joystick; that is, the target object of the second selection operation is selected manually by the user.
Illustratively, as shown in fig. 12, the second selection operation is an operation in which the user triggers the wheel-style virtual joystick 505 to select a direction; the client determines another virtual character 515 located in that direction as the target object and controls the summoned virtual character 509 to move toward it. Illustratively, a special-effect circle 513 and a special-effect line 514 are displayed at the position of the other virtual character 515, prompting that the target object of the summoned virtual character is the other virtual character 515.
Illustratively, when the target object is a friend unit, the summoned virtual character produces a gain effect on the target object, the gain effect including at least one of: increasing movement speed, increasing defense value, increasing life value, increasing mana, increasing life recovery speed, increasing mana recovery speed, adding a shield, increasing attack power, increasing attack speed, sharing damage taken by the target object, blocking damage for the target object, increasing critical hit rate, and enlarging the field of view.
Illustratively, when the target object is an enemy unit, the summoned virtual character attacks the target object, or produces a reduction effect on the target object. The reduction effect includes reducing at least one of: movement speed, defense value, life value, mana, life recovery speed, mana recovery speed, shield, attack power, attack speed, critical hit rate, and field of view.
Illustratively, the summoned virtual character can produce an additional effect on the target object only when the target object is within the action range of the summoned virtual character. If the target object is not within the action range, the summoned virtual character automatically moves toward the target object. When the target object moves out of the action range, the summoned virtual character follows the target object through the virtual environment.
Exemplary embodiments of controlling the summoned virtual character to follow the target object and produce additional effects on the target object are also presented. As shown in fig. 13, step 4052 further includes steps 4052-1 through 4052-3.
Step 4052-1: in response to the skill control in the second state receiving the second selection operation, acquiring the target position where the target object is located.
Illustratively, when the user selects the target object in the virtual environment through the second selection operation, the client acquires the position coordinates of the target object in the virtual environment and determines, from those coordinates, whether the target object is within the action range of the summoned virtual character.
Step 4052-2: in response to the target position being outside the action range of the summoned virtual character, controlling the summoned virtual character to move toward the target position.
Illustratively, the action range is a range centered on the summoned virtual character. Illustratively, the action range is a circular area centered on the summoned virtual character; for example, a circle with a two-meter radius around the summoned virtual character.
When the user selects a target object for the summoned virtual character, the client first determines whether the target object is within the summoned virtual character's current action range. If the target object is outside the action range, the client controls the summoned virtual character to move toward it; if the target object is within the action range, the client controls the summoned virtual character to produce an additional effect on it.
Step 4052-3: in response to the target position being within the action range of the summoned virtual character, controlling the summoned virtual character to produce an additional effect on the target object.
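Illustratively, steps 4052-1 through 4052-3 reduce to a single distance check. The following non-limiting Python sketch assumes 2D coordinates and the two-meter action range given as an example above; the function name and returned labels are hypothetical:

```python
import math

def on_second_selection(summon_pos: tuple, target_pos: tuple,
                        action_range: float = 2.0) -> str:
    """Decide the summoned virtual character's reaction to the second selection."""
    distance = math.hypot(target_pos[0] - summon_pos[0],
                          target_pos[1] - summon_pos[1])
    if distance > action_range:
        return "move_towards_target"      # step 4052-2
    return "apply_additional_effect"      # step 4052-3

assert on_second_selection((0, 0), (5, 0)) == "move_towards_target"
assert on_second_selection((0, 0), (1, 1)) == "apply_additional_effect"
```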
In summary, in the method provided in this embodiment, triggering the skill control in the second state with different operation modes produces different control effects on the summoned virtual character, so that the user can control the summoned virtual character in multiple different ways through a single skill control, without adding UI controls to the user interface, thereby reducing the occlusion of the screen by UI controls. When the user quickly clicks the skill control in the second state, the summoned virtual character is controlled to follow the master virtual character in the virtual environment, achieving the effect of recalling the summoned virtual character to the position of the master virtual character and allowing the user to control the movement of the summoned virtual character in the virtual environment.
Further, when the user drags the skill control in the second state to select a target object, the summoned virtual character is controlled to move toward the target object, and when the target object is within the action range of the summoned virtual character, the summoned virtual character is controlled to produce an additional effect on the target object: a reduction effect or an attack effect on an enemy unit, or a gain effect on a friend unit. The user can thereby control the attack and defense of the summoned virtual character in the virtual environment.
In the method provided by this embodiment, when the user selects a target object for the summoned virtual character by triggering the skill control in the second state, the client determines whether the target object is within the action range of the summoned virtual character. If the target object is not within the action range, the summoned virtual character is controlled to move toward it automatically; if the target object is within the action range, the summoned virtual character is controlled to produce its effect on the target object. This achieves the goal of directing the summoned virtual character to attack, or to buff, the target designated by the user.
In the method provided by this embodiment, when the summoned virtual character dies, or its survival time reaches the maximum survival time, the summoned virtual character is controlled to disappear from the virtual environment, and the skill control is simultaneously switched from the second state back to the first state, so that the user can continue to use the skill control in the first state to control the master virtual character to summon a new summoned virtual character.
In the method provided by this embodiment, when the skill control switches to the second state, a progress control is displayed to show the survival time of the summoned virtual character; when the progress control fills completely, the survival time of the summoned virtual character has reached the maximum survival time, and the skill control is switched back to the first state. The user can thus determine the current state of the skill control by observing the progress control, and better use the skill control to control the master virtual character or the summoned virtual character.
Illustratively, an exemplary embodiment involving interaction among multiple parties is also presented. Fig. 14 is a flowchart of a method for controlling a virtual character according to an exemplary embodiment of the present application. Here, the player is the user controlling the client. The client comprises a presentation layer and a logic layer. The logic layer cannot perceive the presentation layer; that is, the logic layer cannot access any presentation-layer data. The presentation layer can freely read logic-layer data but cannot modify it: any modification to the logic layer must first be sent as a message to the server, and the server then forwards the request to all clients in the match, so that the logic layers of all clients are modified uniformly and remain logically consistent. The method comprises the following steps:
Step 601: the player drags the skill control to release the ultimate skill.
Step 602: the presentation layer of the client sends a skill release request to the server according to data such as the ID of the released skill and the release position.
Step 603: after receiving the skill release request, the server verifies the validity of the request and forwards the skill data in the request to all clients in the current match.
Step 604: the logic layer of the client receives the skill data forwarded by the server, releases the ultimate skill according to the skill data, and summons a summoned object (the summoned virtual character) in the virtual environment.
Step 605: the logic layer of the client replaces the original skill with the variant skill, where the variant skill corresponds to the skill control in the second state and is used to control the summoned object, and the original skill corresponds to the skill control in the first state and is used to control the master virtual character.
Step 606: the client sets the variant skill to control the summoned object.
Step 607: the presentation layer of the client changes the icon of the skill control according to the change in the logic-layer data, switches the skill control from the first state (original skill) to the second state (variant skill), and adds a countdown control to the skill control.
Step 608: the player drags or quickly clicks the variant skill.
Step 609: the presentation layer of the client sends a skill release request to the server according to data such as the skill ID, position, and target.
Step 610: after receiving the skill release request, the server verifies its validity and forwards the skill data in the request to all clients in the current match.
Step 611: the logic layer of the client receives the skill data forwarded by the server and determines from the skill data whether the user quickly clicked the variant skill. If the user quickly clicked the variant skill, go to step 612; otherwise, go to step 614.
Step 612: the logic layer of the client determines whether there is a summoned object that the master virtual character can control; if so, go to step 615, otherwise go to step 613.
Step 613: the logic layer of the client determines that the skill release is invalid.
Step 614: the logic layer of the client determines whether there is a summoned object that the master virtual character can control; if so, go to step 616, otherwise go to step 613.
Step 615: the logic layer of the client sets the summoned object to automatically follow the master virtual character.
Step 616: the logic layer of the client determines whether the user has selected a target object. If a target object is selected, go to step 617; otherwise, go to step 615.
Step 617: the logic layer of the client sets the summoned object to the automatic attack state, with the attack target being the target object.
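Illustratively, the branching in steps 611 through 617 can be summarized as a small dispatch function on the logic layer. The following Python sketch is non-limiting; the parameter names and returned labels are assumptions:

```python
def handle_variant_skill(is_quick_click: bool,
                         has_controllable_summon: bool,
                         selected_target: object = None) -> str:
    """Return the logic-layer outcome: 'invalid', 'auto_follow', or 'auto_attack'."""
    if not has_controllable_summon:
        return "invalid"            # steps 612/614 -> 613
    if is_quick_click:
        return "auto_follow"        # step 612 -> 615
    if selected_target is None:
        return "auto_follow"        # step 616 -> 615
    return "auto_attack"            # step 616 -> 617

assert handle_variant_skill(True, True) == "auto_follow"
assert handle_variant_skill(False, True, selected_target="enemy") == "auto_attack"
assert handle_variant_skill(False, False) == "invalid"
```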
The present application further provides an exemplary embodiment in which the logic layer of the client determines whether the summoned object attacks automatically. Fig. 15 is a flowchart of a method for controlling a virtual character according to an exemplary embodiment of the present application. The execution subject of the method is exemplified by a client running on the terminal shown in fig. 1, the client being a client supporting a virtual environment.
Step 801: the logic layer of the client sets the summoned object to the automatic attack state.
Step 802: the logic layer of the client determines whether the target object exists and is legal. If yes, go to step 803; otherwise, go to step 806.
Step 803: the logic layer of the client determines whether the target object is within the normal attack range of the summoned object. If yes, go to step 804; otherwise, go to step 805.
Step 804: the logic layer of the client controls the summoned object to attack the target object.
Step 805: the logic layer of the client controls the summoned object to move toward the target object.
Step 806: the logic layer of the client ends the automatic attack state of the summoned object.
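Illustratively, one tick of the automatic attack state in steps 802 through 806 might look as follows; this Python sketch is non-limiting, and the names are assumptions:

```python
def auto_attack_tick(target: object, in_attack_range: bool) -> str:
    """One decision tick of the automatic attack state; target is None
    when no legal target object exists."""
    if target is None:              # step 802: target missing or illegal
        return "end_auto_attack"    # step 806
    if in_attack_range:             # step 803
        return "attack_target"      # step 804
    return "move_towards_target"    # step 805

assert auto_attack_tick(None, False) == "end_auto_attack"
assert auto_attack_tick("tower", True) == "attack_target"
```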
The present application further provides an exemplary embodiment in which the logic layer of the client determines whether the summoned object follows automatically. Fig. 16 is a flowchart of a method for controlling a virtual character according to an exemplary embodiment of the present application. The execution subject of the method is exemplified by a client running on the terminal shown in fig. 1, the client being a client supporting a virtual environment.
Step 901: the logic layer of the client sets the summoned object to the automatic following state.
Step 902: the logic layer of the client determines whether the target object is within the following range of the summoned object. If yes, go to step 903; otherwise, go to step 904.
Illustratively, the following range is used to determine whether the target object is near the summoned object: if the summoned object is already close to the target object, it does not need to move; if the target object is far away, the summoned object needs to move toward it.
Illustratively, the summoned object cannot stray far from the master virtual character; if the client determines that the summoned object is too far from the master virtual character, it controls the summoned object to stop attacking the target object and move toward the master virtual character.
Step 903: the logic layer of the client controls the summoned object to stop moving.
Step 904: the logic layer of the client controls the summoned object to move toward the target object.
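Illustratively, the automatic following state of steps 902 through 904, together with the rule that the summoned object must not stray far from the master virtual character, can be sketched as below; this Python code is non-limiting, the parameter names are assumptions, and the ranges are purely illustrative:

```python
def auto_follow_tick(dist_to_target: float, follow_range: float,
                     dist_to_master: float, leash_range: float) -> str:
    """One decision tick of the automatic following state."""
    if dist_to_master > leash_range:
        return "return_to_master"    # too far from the master: go back
    if dist_to_target <= follow_range:
        return "stop_moving"         # step 903
    return "move_towards_target"     # step 904

assert auto_follow_tick(1.0, 2.0, 3.0, 10.0) == "stop_moving"
assert auto_follow_tick(5.0, 2.0, 3.0, 10.0) == "move_towards_target"
assert auto_follow_tick(5.0, 2.0, 20.0, 10.0) == "return_to_master"
```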
In summary, in the method provided by this embodiment, by dividing the client into a logic layer and a presentation layer, the presentation layer can only read logic-layer data and cannot change it, and the logic layer can only be changed through the server. This ensures the integrity of the logic-layer data and keeps the logic-layer data of all clients in the match consistent.
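Illustratively, the read-only relationship between the presentation layer and the logic layer can be sketched as follows; this Python code is a non-limiting illustration, and the class names and the single skill-state field are assumptions chosen to keep the example small:

```python
class LogicLayer:
    """Authoritative game state; mutated only via server-forwarded messages."""

    def __init__(self) -> None:
        self._skill_state = "first"

    @property
    def skill_state(self) -> str:
        return self._skill_state          # the presentation layer may read...

    def apply_server_message(self, new_state: str) -> None:
        self._skill_state = new_state     # ...but only the server path writes

class PresentationLayer:
    """Renders from logic-layer data; never writes it directly."""

    def __init__(self, logic: LogicLayer) -> None:
        self.logic = logic

    def render_skill_icon(self) -> str:
        return f"icon_{self.logic.skill_state}"

logic = LogicLayer()
ui = PresentationLayer(logic)
logic.apply_server_message("second")      # a server broadcast reaching every client
assert ui.render_skill_icon() == "icon_second"
```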
The above embodiments describe the method in a game application scenario; the following describes the method by way of example in a military simulation application scenario.
The simulation technology is a model technology which reflects system behaviors or processes by simulating real world experiments by using software and hardware.
The military simulation program is a program specially constructed for military application by using a simulation technology, and is used for carrying out quantitative analysis on sea, land, air and other operational elements, weapon equipment performance, operational actions and the like, further accurately simulating a battlefield environment, presenting a battlefield situation and realizing the evaluation of an operational system and the assistance of decision making.
In one example, soldiers establish a virtual battlefield at the terminal where the military simulation program is located and combat in teams. A soldier controls a virtual object in the virtual battlefield environment to perform at least one of the following actions: standing, squatting, sitting, lying on the back, lying prone, lying on the side, walking, running, climbing, driving, shooting, throwing, attacking, being injured, reconnaissance, and close combat. The virtual battlefield environment comprises at least one natural form among flat ground, mountains, plateaus, basins, deserts, rivers, lakes, oceans, and vegetation, as well as site forms such as buildings, vehicles, ruins, and training grounds. Virtual objects include virtual characters, virtual animals, cartoon characters, and the like; each virtual object has its own shape and volume in the three-dimensional virtual environment and occupies part of the space therein.
Based on the above, in one example, soldier A uses the skill control in the first state to control a first virtual character to release a robot (the summoned virtual character) into the virtual environment, while the client controls the skill control to switch to the second state. Soldier A then uses the skill control in the second state to control the robot to attack a second virtual character, or to control the robot to follow the first virtual character.
In summary, in this embodiment, the control method for a virtual character is applied to a military simulation program, so that a soldier can control both the first virtual character and the robot using the same skill control, realizing a simulation of controlling a robot on the battlefield and enabling better training of soldiers.
The following are apparatus embodiments of the present application; for details not described in detail in the apparatus embodiments, reference may be made to the method embodiments described above.
Fig. 17 is a block diagram of a control device of a virtual character according to an exemplary embodiment of the present application. The device is applied to a terminal, wherein an application program supporting a virtual environment runs in the terminal, and the device comprises:
a display module 701, configured to display a first user interface, where the first user interface includes a first virtual environment picture and a skill control in a first state, the first virtual environment picture is a picture of a virtual environment observed from the perspective of a master virtual character, the first virtual environment picture includes the master virtual character, and the skill control in the first state is used to control the master virtual character to summon a summoned virtual character;
an interaction module 702, configured to receive a skill release operation on the skill control in the first state;
the display module 701 is further configured to, in response to the skill control in the first state receiving the skill release operation, display a second user interface, where the second user interface includes a second virtual environment picture and the skill control in a second state, the second virtual environment picture is a picture of the virtual environment observed from the perspective of the master virtual character, the second virtual environment picture includes the master virtual character and the summoned virtual character, and the skill control in the second state is used to control the summoned virtual character;
the interaction module 702 is further configured to receive a selection operation on the skill control in the second state;
a control module 703, configured to, in response to the skill control in the second state receiving the selection operation, control the summoned virtual character to move toward a target object in the virtual environment, where the target object is the object selected by the selection operation.
In an optional exemplary embodiment, the selection operation includes a first selection operation, and the target object of the first selection operation is the master virtual character;
the interaction module 702 is further configured to receive the first selection operation on the skill control in the second state;
the control module 703 is further configured to, in response to the skill control in the second state receiving the first selection operation, control the summoned virtual character to follow the master virtual character in the virtual environment.
In an alternative exemplary embodiment, the first selection operation is an operation of clicking the skill control in the second state.
In an optional exemplary embodiment, the selection operation includes a second selection operation, and the target object of the second selection operation includes: at least one of an enemy unit of the master virtual character and a friend unit of the master virtual character;
the interaction module 702 is further configured to receive the second selection operation on the skill control in the second state;
the control module 703 is further configured to, in response to the skill control in the second state receiving the second selection operation, control the summoned virtual character to follow the target object and produce an additional effect on the target object, where the additional effect includes at least one of a gain effect, a reduction effect, and an attack effect.
In an optional exemplary embodiment, the apparatus further comprises:
an obtaining module 704, configured to, in response to the skill control in the second state receiving the second selection operation, acquire the target position where the target object is located;
the control module 703 is further configured to control the summoned virtual character to move toward the target position in response to the target position being outside the action range of the summoned virtual character;
the control module 703 is further configured to control the summoned virtual character to produce the additional effect on the target object in response to the target position being within the action range of the summoned virtual character.
In an alternative exemplary embodiment, the second selection operation is an operation of long-pressing the skill control in the second state to call up the wheel-style virtual joystick of the skill control and performing a pointing operation on the wheel-style virtual joystick.
In an alternative exemplary embodiment, the summoned virtual character has a maximum survival time. The apparatus further comprises:
the control module 703 is further configured to, in response to the skill control in the first state receiving a skill release operation, control the skill control to switch from the first state to the second state;
a timing module 705, configured to time the survival time of the summoned virtual character;
the control module 703 is further configured to control the summoned virtual character to disappear from the virtual environment and control the skill control to switch from the second state to the first state, in response to the survival time of the summoned virtual character reaching the maximum survival time or in response to the life value of the summoned virtual character being less than a life threshold.
In an alternative exemplary embodiment, the skill control in the second state includes a progress control for timing the survival time of the summoned virtual character.
It should be noted that: the control device of the virtual character provided in the above embodiment is only illustrated by the division of the above functional modules, and in practical applications, the functions may be allocated to different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the control device of the virtual character provided in the above embodiments and the control method embodiment of the virtual character belong to the same concept, and the specific implementation process thereof is described in detail in the method embodiment and is not described herein again.
Fig. 18 shows a block diagram of a terminal 1500 according to an exemplary embodiment of the present application. The terminal 1500 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio layer iii, motion video Experts compression standard Audio layer 3), an MP4 player (Moving Picture Experts Group Audio layer IV, motion video Experts compression standard Audio layer 4), a notebook computer, or a desktop computer. Terminal 1500 may also be referred to as user equipment, a portable terminal, a laptop terminal, a desktop terminal, or other names.
In general, terminal 1500 includes: a processor 1501 and memory 1502.
Processor 1501 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1501 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). Processor 1501 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, processor 1501 may also include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
The memory 1502 may include one or more computer-readable storage media, which may be non-transitory. The memory 1502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1502 is used to store at least one instruction for execution by processor 1501 to implement the method of controlling a virtual character provided by method embodiments herein.
In some embodiments, the terminal 1500 may further include: a peripheral interface 1503 and at least one peripheral. The processor 1501, memory 1502, and peripheral interface 1503 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1503 via buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1504, touch screen display 1505, camera 1506, audio circuitry 1507, positioning assembly 1508, and power supply 1509.
The peripheral interface 1503 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 1501 and the memory 1502. In some embodiments, the processor 1501, memory 1502, and peripheral interface 1503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1501, the memory 1502, and the peripheral interface 1503 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 1504 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuitry 1504 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1504 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1504 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1504 can communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1504 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1505 is a touch display screen, the display screen 1505 also has the ability to capture touch signals on or over the surface of the display screen 1505. The touch signal may be input to the processor 1501 as a control signal for processing. In this case, the display screen 1505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, display 1505 may be one, providing the front panel of terminal 1500; in other embodiments, display 1505 may be at least two, each disposed on a different surface of terminal 1500 or in a folded design; in still other embodiments, display 1505 may be a flexible display disposed on a curved surface or a folded surface of terminal 1500. Even further, the display 1505 may be configured in a non-rectangular irregular pattern, i.e., a shaped screen. The Display 1505 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials.
The camera assembly 1506 is used to capture images or video. Optionally, the camera assembly 1506 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1506 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1507 may include a microphone and speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1501 for processing or inputting the electric signals to the radio frequency circuit 1504 to realize voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of the terminal 1500. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1501 or the radio frequency circuit 1504 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1507 may also include a headphone jack.
The positioning component 1508 is used to locate a current geographic position of the terminal 1500 to implement navigation or LBS (location based Service). The positioning component 1508 may be a positioning component based on the GPS (global positioning System) in the united states, the beidou System in china, or the galileo System in russia.
Power supply 1509 is used to power the various components in terminal 1500. The power supply 1509 may be alternating current, direct current, disposable or rechargeable. When the power supply 1509 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 1500 also includes one or more sensors 1510. The one or more sensors 1510 include, but are not limited to: acceleration sensor 1511, gyro sensor 1512, pressure sensor 1513, fingerprint sensor 1514, optical sensor 1515, and proximity sensor 1516.
The acceleration sensor 1511 may detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 1500. For example, the acceleration sensor 1511 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1501 may control the touch screen display 1505 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1511. The acceleration sensor 1511 may also be used for acquisition of motion data of a game or a user.
The gyroscope sensor 1512 can detect the body direction and the rotation angle of the terminal 1500, and the gyroscope sensor 1512 and the acceleration sensor 1511 cooperate to collect the 3D motion of the user on the terminal 1500. The processor 1501 may implement the following functions according to the data collected by the gyro sensor 1512: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensor 1513 may be disposed on a side bezel of terminal 1500 and/or underneath touch display 1505. When the pressure sensor 1513 is disposed on the side frame of the terminal 1500, the holding signal of the user to the terminal 1500 may be detected, and the processor 1501 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1513. When the pressure sensor 1513 is disposed at a lower layer of the touch display 1505, the processor 1501 controls the operability control on the UI interface according to the pressure operation of the user on the touch display 1505. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1514 is configured to capture a fingerprint of the user, and the processor 1501 identifies the user based on the fingerprint captured by the fingerprint sensor 1514, or the fingerprint sensor 1514 identifies the user based on the captured fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1501 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 1514 may be disposed on the front, back, or side of the terminal 1500. When a physical key or vendor Logo is provided on the terminal 1500, the fingerprint sensor 1514 may be integrated with the physical key or vendor Logo.
The optical sensor 1515 is used to collect ambient light intensity. In one embodiment, processor 1501 may control the brightness of the display on touch screen 1505 based on the intensity of ambient light collected by optical sensor 1515. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1505 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1505 is turned down. In another embodiment, the processor 1501 may also dynamically adjust the shooting parameters of the camera assembly 1506 based on the ambient light intensity collected by the optical sensor 1515.
A proximity sensor 1516, also known as a distance sensor, is typically provided on the front panel of the terminal 1500. The proximity sensor 1516 is used to collect the distance between the user and the front surface of the terminal 1500. In one embodiment, when the proximity sensor 1516 detects that the distance between the user and the front surface of the terminal 1500 gradually decreases, the processor 1501 controls the touch display 1505 to switch from the bright screen state to the dark screen state; when the proximity sensor 1516 detects that the distance between the user and the front surface of the terminal 1500 gradually becomes larger, the processor 1501 controls the touch display 1505 to switch from the breath screen state to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 18 does not constitute a limitation of terminal 1500 and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components may be employed.
The present application further provides a computer device, which includes a processor and a memory, where at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the control method for virtual roles provided in any of the above exemplary embodiments.
The present application further provides a computer-readable storage medium, in which at least one instruction, at least one program, a code set, or a set of instructions is stored, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement the control method of the virtual character provided in any of the above exemplary embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A method for controlling a virtual character, the method comprising:
displaying a first user interface, wherein the first user interface comprises a first virtual environment picture and a skill control in a first state, the first virtual environment picture is a picture of a virtual environment observed at a first moment, the first virtual environment picture comprises a master virtual character, and the skill control in the first state is used for controlling the master virtual character to summon a summoned virtual character;
in response to the skill control in the first state receiving a skill release operation, displaying a second user interface, wherein the second user interface comprises a second virtual environment picture and the skill control in a second state, the second virtual environment picture is a picture of the virtual environment observed at a second moment, the second virtual environment picture comprises the master virtual character and the summoned virtual character, and the skill control in the second state is used for controlling the summoned virtual character;
in response to the skill control in the second state receiving a selection operation, controlling the summoned virtual character to move toward a target object in the virtual environment, the target object being the object selected by the selection operation.
2. The method of claim 1, wherein the selection operation comprises a first selection operation, and wherein the target object of the first selection operation is the master virtual character;
the controlling the summoned virtual character to move toward a target object in the virtual environment in response to the skill control in the second state receiving a selection operation comprises:
in response to the skill control in the second state receiving the first selection operation, controlling the summoned virtual character to follow the master virtual character in the virtual environment.
3. The method of claim 2, wherein the first selection operation is an operation of clicking the skill control in the second state.
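Claims 2 and 3 make a plain click on the second-state control an order for the summoned character to follow the master. A per-frame follow step might look like the sketch below, where FOLLOW_DISTANCE and speed are invented tuning values rather than anything the claims prescribe.

```typescript
// Sketch of claims 2-3: clicking the second-state control selects the
// master virtual character itself as the target, i.e. "follow me".

interface GameObject {
  position: { x: number; y: number };
}

const FOLLOW_DISTANCE = 2.0; // assumed: stop this close to the master

function updateFollow(
  summoned: GameObject,
  master: GameObject,
  dt: number,
  speed = 4.0, // assumed movement speed, units per second
): void {
  const dx = master.position.x - summoned.position.x;
  const dy = master.position.y - summoned.position.y;
  const dist = Math.hypot(dx, dy);
  if (dist <= FOLLOW_DISTANCE) return; // close enough: idle beside the master
  const step = Math.min(speed * dt, dist - FOLLOW_DISTANCE);
  summoned.position.x += (dx / dist) * step;
  summoned.position.y += (dy / dist) * step;
}
```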
4. The method of claim 1, wherein the selection operation comprises a second selection operation, and wherein the target object of the second selection operation comprises at least one of an enemy unit of the master virtual character and a friendly unit of the master virtual character;
wherein the controlling the summoned virtual character to move in the virtual environment toward the target object in response to the skill control in the second state receiving the selection operation comprises:
in response to the skill control in the second state receiving the second selection operation, controlling the summoned virtual character to follow the target object and apply an additional effect to the target object, wherein the additional effect comprises at least one of a buff effect, a debuff effect, and an attack effect.
5. The method of claim 4, wherein the controlling the summoned virtual character to follow the target object and apply the additional effect to the target object in response to the skill control in the second state receiving the second selection operation comprises:
in response to the skill control in the second state receiving the second selection operation, acquiring a target position at which the target object is located;
in response to the target position being outside an action range of the summoned virtual character, controlling the summoned virtual character to move toward the target position; and
in response to the target position being within the action range of the summoned virtual character, controlling the summoned virtual character to apply the additional effect to the target object.
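Claim 5 resolves the second selection operation with a distance test: chase the target while it is outside the summon's action range, and apply the additional effect once it is inside. A sketch of that branch follows; actionRange, tickCommand, and applyAdditionalEffect are invented names, and the effect dispatch is a placeholder.

```typescript
// Sketch of claim 5: move toward the target while it is outside the
// summon's action range; apply the additional effect once inside.

interface GameObject {
  position: { x: number; y: number };
}

interface Summon extends GameObject {
  actionRange: number; // radius of the claimed "action range"
}

type AdditionalEffect = "buff" | "debuff" | "attack"; // claim 4's three kinds

function tickCommand(
  summon: Summon,
  target: GameObject,
  effect: AdditionalEffect,
  dt: number,
  speed = 4.0, // assumed movement speed
): void {
  const dx = target.position.x - summon.position.x;
  const dy = target.position.y - summon.position.y;
  const dist = Math.hypot(dx, dy);
  if (dist > summon.actionRange) {
    // Outside the action range: keep moving toward the target position.
    const step = Math.min(speed * dt, dist);
    summon.position.x += (dx / dist) * step;
    summon.position.y += (dy / dist) * step;
  } else {
    // Inside the action range: apply the additional effect.
    applyAdditionalEffect(summon, target, effect);
  }
}

function applyAdditionalEffect(
  source: Summon,
  target: GameObject,
  effect: AdditionalEffect,
): void {
  // Placeholder: a real client would dispatch to its combat/status systems.
  console.log(`apply ${effect} to target at`, target.position);
}
```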
6. The method of claim 4 or 5, wherein the second selection operation is an operation of long-pressing the skill control in the second state to call up a wheel-style virtual joystick of the skill control and performing a pointing operation on the wheel-style virtual joystick.
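The compound gesture in claim 6 (hold until a wheel-style virtual joystick appears, then drag to aim) reduces to a long-press detector plus a drag-to-direction mapping. A minimal sketch is below; LONG_PRESS_MS is an assumed threshold that the patent does not specify.

```typescript
// Sketch of claim 6's compound gesture: long-press the second-state
// control to call up a wheel-style virtual joystick, then drag on the
// wheel to aim at a unit.

const LONG_PRESS_MS = 400; // assumed long-press threshold

class WheelJoystickGesture {
  private pressedAt: number | null = null;
  joystickVisible = false;

  onPointerDown(now: number): void {
    this.pressedAt = now;
  }

  // Called while the pointer is held; evokes the wheel once the
  // press has lasted long enough.
  onPointerHold(now: number): void {
    if (this.pressedAt !== null && now - this.pressedAt >= LONG_PRESS_MS) {
      this.joystickVisible = true;
    }
  }

  // The pointing operation: map the drag vector on the wheel to a unit
  // direction used to pick the target.
  onDrag(dx: number, dy: number): { x: number; y: number } | null {
    if (!this.joystickVisible) return null;
    const len = Math.hypot(dx, dy) || 1;
    return { x: dx / len, y: dy / len };
  }

  // Releasing confirms the selection and dismisses the wheel.
  onPointerUp(): void {
    this.pressedAt = null;
    this.joystickVisible = false;
  }
}
```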
7. The method of any one of claims 1 to 5, wherein the summoned virtual character has a maximum survival time, and the method further comprises:
in response to the skill control in the first state receiving the skill release operation, controlling the skill control to switch from the first state to the second state, and timing a survival time of the summoned virtual character; and
in response to the survival time of the summoned virtual character reaching the maximum survival time, or in response to a life value of the summoned virtual character being less than a life threshold, controlling the summoned virtual character to disappear from the virtual environment and controlling the skill control to switch from the second state to the first state.
8. The method of claim 7, wherein the skill control in the second state comprises a progress control for timing the survival time of the summoned virtual character.
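Claims 7 and 8 bound the summon's lifetime and surface the countdown on the control itself. The bookkeeping could be as small as the sketch below, where maxSurvivalMs, lifeValue, and the 0-to-1 progress figure are all illustrative assumptions; progress() would back the ring or bar drawn on the second-state control.

```typescript
// Sketch of claims 7-8: the summoned character lives at most
// maxSurvivalMs; when the timer expires or its life value drops below
// the threshold, it disappears and the control reverts to the first state.

class SummonLifetime {
  elapsedMs = 0;

  constructor(
    readonly maxSurvivalMs: number,
    public lifeValue: number,
    readonly lifeThreshold = 0, // assumed default threshold
  ) {}

  // Advance the timer; returns true while the summon should stay alive.
  tick(dtMs: number): boolean {
    this.elapsedMs += dtMs;
    return (
      this.elapsedMs < this.maxSurvivalMs &&
      this.lifeValue >= this.lifeThreshold
    );
  }

  // Fraction of the lifetime consumed (0..1), backing the progress
  // control drawn on the second-state skill button.
  progress(): number {
    return Math.min(this.elapsedMs / this.maxSurvivalMs, 1);
  }
}
```

On a tick that returns false, the client would remove the summoned character from the virtual environment and flip the skill control back to its first state.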
9. An apparatus for controlling a virtual character, the apparatus comprising:
a display module, configured to display a first user interface, wherein the first user interface comprises a first virtual environment picture and a skill control in a first state, the first virtual environment picture is a picture of a virtual environment observed from the perspective of a master virtual character and comprises the master virtual character, and the skill control in the first state is used for controlling the master virtual character to summon a summoned virtual character;
an interaction module, configured to receive a skill release operation on the skill control in the first state;
wherein the display module is further configured to display a second user interface in response to the skill control in the first state receiving the skill release operation, the second user interface comprises a second virtual environment picture and the skill control in a second state, the second virtual environment picture is a picture of the virtual environment observed from the perspective of the master virtual character and comprises the master virtual character and the summoned virtual character, and the skill control in the second state is used for controlling the summoned virtual character;
the interaction module is further configured to receive a selection operation on the skill control in the second state; and
a control module, configured to control, in response to the skill control in the second state receiving the selection operation, the summoned virtual character to move in the virtual environment toward a target object, the target object being the object selected by the selection operation.
10. The apparatus of claim 9, wherein the selection operation comprises a first selection operation, and wherein the target object of the first selection operation is the master virtual character;
the interaction module is further configured to receive the first selection operation on the skill control in the second state; and
the control module is further configured to control, in response to the skill control in the second state receiving the first selection operation, the summoned virtual character to follow the master virtual character as the master virtual character moves in the virtual environment.
11. The apparatus of claim 9, wherein the selection operation comprises a second selection operation, and wherein the target object of the second selection operation comprises at least one of an enemy unit of the master virtual character and a friendly unit of the master virtual character;
the interaction module is further configured to receive the second selection operation on the skill control in the second state; and
the control module is further configured to control, in response to the skill control in the second state receiving the second selection operation, the summoned virtual character to follow the target object and apply an additional effect to the target object, wherein the additional effect comprises at least one of a buff effect, a debuff effect, and an attack effect.
12. The apparatus of claim 11, further comprising:
an acquisition module, configured to acquire, in response to the skill control in the second state receiving the second selection operation, a target position at which the target object is located;
wherein the control module is further configured to control the summoned virtual character to move toward the target position in response to the target position being outside an action range of the summoned virtual character; and
the control module is further configured to control the summoned virtual character to apply the additional effect to the target object in response to the target position being within the action range of the summoned virtual character.
13. The apparatus of any one of claims 9 to 12, wherein the summoned virtual character has a maximum survival time, and the apparatus further comprises:
wherein the control module is further configured to control the skill control to switch from the first state to the second state in response to the skill control in the first state receiving the skill release operation;
a timing module, configured to time the survival time of the summoned virtual character; and
the control module is further configured to control the summoned virtual character to disappear from the virtual environment and control the skill control to switch from the second state to the first state in response to the survival time of the summoned virtual character reaching the maximum survival time or in response to a life value of the summoned virtual character being less than a life threshold.
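The apparatus claims recast the method as cooperating modules. The interfaces below sketch that decomposition only; the names, signatures, and wiring are assumptions, since the claims describe responsibilities rather than an API.

```typescript
// Structural sketch of the module split in claims 9-13. Each interface
// mirrors one claimed module; all identifiers are illustrative.

interface GameObject {
  position: { x: number; y: number };
}

interface DisplayModule {
  showFirstUI(): void;  // first picture + skill control in the first state
  showSecondUI(): void; // second picture + skill control in the second state
}

interface InteractionModule {
  onSkillRelease(handler: () => void): void; // first-state input
  onSelection(handler: (target: GameObject) => void): void; // second-state input
}

interface ControlModule {
  summon(): void;                          // switch the control to the second state
  commandToward(target: GameObject): void; // move/follow/apply effect
  despawnAndReset(): void;                 // claim 13: disappear + revert state
}

interface AcquisitionModule {
  acquireTargetPosition(target: GameObject): { x: number; y: number }; // claim 12
}

interface TimingModule {
  startLifetimeTimer(maxSurvivalMs: number, onExpire: () => void): void; // claim 13
}
```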
14. A computer device, comprising a processor and a memory, wherein the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the method for controlling a virtual character according to any one of claims 1 to 8.
15. A computer-readable storage medium, wherein the storage medium stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the method for controlling a virtual character according to any one of claims 1 to 8.
CN202010333800.0A 2020-04-24 2020-04-24 Control method, device, equipment and medium of virtual role Active CN111589131B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010333800.0A CN111589131B (en) 2020-04-24 2020-04-24 Control method, device, equipment and medium of virtual role

Publications (2)

Publication Number Publication Date
CN111589131A 2020-08-28
CN111589131B 2022-02-22

Family

ID=72187684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010333800.0A Active CN111589131B (en) 2020-04-24 2020-04-24 Control method, device, equipment and medium of virtual role

Country Status (1)

Country Link
CN (1) CN111589131B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5180047B2 (en) * 2008-12-09 2013-04-10 株式会社スクウェア・エニックス Video game processing apparatus, video game processing method, and video game processing program
CN102886140A (en) * 2011-10-04 2013-01-23 微软公司 Game controller on touch-enabled mobile device
CN107529442A (en) * 2017-08-03 2018-01-02 腾讯科技(深圳)有限公司 Virtual object control method, device, computer equipment and computer-readable storage medium
CN110559658A (en) * 2019-09-04 2019-12-13 腾讯科技(深圳)有限公司 Information interaction method, device, terminal and storage medium
CN110743166A (en) * 2019-10-22 2020-02-04 腾讯科技(深圳)有限公司 Skill button switching method and device, storage medium and electronic device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
木瓜游戏解说: "Who has played the first-generation Angela? Her summoned bear could pin a little Luban with a single sit; the reason she was reworked is quite odd" (in Chinese), 《HTTPS://WWW.BILIBILI.COM/VIDEO/BV12J411T7LX?FROM=SEARCH&SEID=5575412089028043623》 *
游戏晓小新: "Honor of Kings: Yao takes Mengqi abroad to guest-star in AOV? The new hero Yixia looks just like the two of them" (in Chinese), 《HTTPS://BAIJIAHAO.BAIDU.COM/S?ID=1635365890427714426&WFR=SPIDER&FOR=PC》 *

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112076468A (en) * 2020-09-17 2020-12-15 腾讯科技(深圳)有限公司 Virtual environment picture display method, device, equipment and storage medium
CN112076468B (en) * 2020-09-17 2022-07-22 腾讯科技(深圳)有限公司 Virtual environment picture display method, device, equipment and storage medium
CN112190930A (en) * 2020-10-26 2021-01-08 网易(杭州)网络有限公司 Control method and device for game role
CN112190930B (en) * 2020-10-26 2024-03-22 网易(杭州)网络有限公司 Game role control method and device
CN112318513A (en) * 2020-11-05 2021-02-05 达闼机器人有限公司 Robot skill debugging method and device, storage medium and electronic equipment
CN112274927A (en) * 2020-11-18 2021-01-29 网易(杭州)网络有限公司 Game interaction method and device and electronic equipment
CN112416196A (en) * 2020-11-19 2021-02-26 腾讯科技(深圳)有限公司 Virtual object control method, device, equipment and computer readable storage medium
CN112416196B (en) * 2020-11-19 2022-08-30 腾讯科技(深圳)有限公司 Virtual object control method, device, equipment and computer readable storage medium
CN112337096A (en) * 2020-11-25 2021-02-09 腾讯科技(深圳)有限公司 Control method and device of virtual role, electronic equipment and storage medium
CN112337096B (en) * 2020-11-25 2022-08-30 深圳市腾讯计算机系统有限公司 Control method and device of virtual role, electronic equipment and storage medium
CN112546638A (en) * 2020-12-18 2021-03-26 网易(杭州)网络有限公司 Virtual role switching method and device, electronic equipment and storage medium
CN112546638B (en) * 2020-12-18 2024-05-10 网易(杭州)网络有限公司 Virtual role switching method and device, electronic equipment and storage medium
CN112675549A (en) * 2020-12-25 2021-04-20 珠海西山居移动游戏科技有限公司 Skill cooperation execution control method and device
JP7501977B2 2024-06-18 騰訊科技(深セン)有限公司 Method, apparatus, device, medium and computer program for selecting a virtual object interaction mode
WO2022142622A1 (en) * 2020-12-30 2022-07-07 腾讯科技(深圳)有限公司 Method and apparatus for selecting virtual object interaction mode, device, medium, and product
CN113069771A (en) * 2021-04-09 2021-07-06 网易(杭州)网络有限公司 Control method and device of virtual object and electronic equipment
CN113069767A (en) * 2021-04-09 2021-07-06 腾讯科技(深圳)有限公司 Virtual interaction method, device, terminal and storage medium
CN113069771B (en) * 2021-04-09 2024-05-28 网易(杭州)网络有限公司 Virtual object control method and device and electronic equipment
CN113332724B (en) * 2021-05-24 2024-04-30 网易(杭州)网络有限公司 Virtual character control method, device, terminal and storage medium
CN113332724A (en) * 2021-05-24 2021-09-03 网易(杭州)网络有限公司 Control method, device, terminal and storage medium of virtual role
WO2022252911A1 (en) * 2021-05-31 2022-12-08 腾讯科技(深圳)有限公司 Method and apparatus for controlling called object in virtual scene, and device, storage medium and program product
CN113426121A (en) * 2021-06-21 2021-09-24 网易(杭州)网络有限公司 Game control method, game control device, storage medium and computer equipment
CN113426121B (en) * 2021-06-21 2024-03-12 网易(杭州)网络有限公司 Game control method, game control device, storage medium and computer equipment
WO2023284470A1 (en) * 2021-07-16 2023-01-19 腾讯科技(深圳)有限公司 Method and apparatus for controlling virtual character, and device and storage medium
CN113521724A (en) * 2021-07-16 2021-10-22 腾讯科技(上海)有限公司 Method, device, equipment and storage medium for controlling virtual role
CN113521724B (en) * 2021-07-16 2023-11-07 腾讯科技(上海)有限公司 Method, device, equipment and storage medium for controlling virtual character
CN113577780A (en) * 2021-07-27 2021-11-02 网易(杭州)网络有限公司 Method and device for controlling virtual character in game and electronic equipment
CN113633984B (en) * 2021-08-17 2024-02-02 腾讯科技(深圳)有限公司 Game object control method, device, equipment and medium
CN113633984A (en) * 2021-08-17 2021-11-12 腾讯科技(深圳)有限公司 Game object control method, device, equipment and medium
CN113750518A (en) * 2021-09-10 2021-12-07 网易(杭州)网络有限公司 Skill button control method and device, electronic equipment and computer readable medium
CN113750531A (en) * 2021-09-18 2021-12-07 腾讯科技(深圳)有限公司 Prop control method, device, equipment and storage medium in virtual scene
CN113750531B (en) * 2021-09-18 2023-06-16 腾讯科技(深圳)有限公司 Prop control method, device, equipment and storage medium in virtual scene
CN114612553A (en) * 2022-03-07 2022-06-10 北京字跳网络技术有限公司 Virtual object control method and device, computer equipment and storage medium
WO2024093941A1 (en) * 2022-10-31 2024-05-10 不鸣科技(杭州)有限公司 Method and apparatus for controlling virtual object in virtual scene, device, and product

Also Published As

Publication number Publication date
CN111589131B (en) 2022-02-22

Similar Documents

Publication Publication Date Title
CN111589131B (en) Control method, device, equipment and medium of virtual role
CN110694261B (en) Method, terminal and storage medium for controlling virtual object to attack
CN110413171B (en) Method, device, equipment and medium for controlling virtual object to perform shortcut operation
CN110755841B (en) Method, device and equipment for switching props in virtual environment and readable storage medium
CN111249730B (en) Virtual object control method, device, equipment and readable storage medium
CN111589124B (en) Virtual object control method, device, terminal and storage medium
CN110613938B (en) Method, terminal and storage medium for controlling virtual object to use virtual prop
CN111589130B (en) Virtual object control method, device, equipment and storage medium in virtual scene
CN110694273A (en) Method, device, terminal and storage medium for controlling virtual object to use prop
CN113398571B (en) Virtual item switching method, device, terminal and storage medium
CN111589133A (en) Virtual object control method, device, equipment and storage medium
CN113289331B (en) Display method and device of virtual prop, electronic equipment and storage medium
CN111589136B (en) Virtual object control method and device, computer equipment and storage medium
CN112494955A (en) Skill release method and device for virtual object, terminal and storage medium
CN111672106B (en) Virtual scene display method and device, computer equipment and storage medium
CN111589140A (en) Virtual object control method, device, terminal and storage medium
CN111596838B (en) Service processing method and device, computer equipment and computer readable storage medium
CN111282266B (en) Skill aiming method, device, terminal and storage medium in three-dimensional virtual environment
CN113577765B (en) User interface display method, device, equipment and storage medium
CN111589127A (en) Control method, device and equipment of virtual role and storage medium
CN111921194A (en) Virtual environment picture display method, device, equipment and storage medium
CN111672104A (en) Virtual scene display method, device, terminal and storage medium
CN111672102A (en) Virtual object control method, device, equipment and storage medium in virtual scene
CN111672118A (en) Virtual object aiming method, device, equipment and medium
CN113398572A (en) Virtual item switching method, skill switching method and virtual object switching method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40027389

Country of ref document: HK

GR01 Patent grant