CN110743168B - Virtual object control method in virtual scene, computer equipment and storage medium


Info

Publication number
CN110743168B
Authority
CN
China
Prior art keywords
virtual object
virtual
scene
control
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910999297.XA
Other languages
Chinese (zh)
Other versions
CN110743168A (en)
Inventor
刘智洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910999297.XA priority Critical patent/CN110743168B/en
Publication of CN110743168A publication Critical patent/CN110743168A/en
Application granted granted Critical
Publication of CN110743168B publication Critical patent/CN110743168B/en


Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/448 Execution paradigms, e.g. implementations of programming paradigms
    • G06F9/4488 Object-oriented
    • G06F9/451 Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to a virtual object control method, a computer device, and a storage medium in a virtual scene, in the technical field of virtual scenes. The method includes: displaying a display interface of an application program, in which a first trigger control is displayed corresponding to each candidate virtual object among the first virtual objects; determining a target virtual object among the candidate virtual objects according to trigger operations on the first trigger controls displayed for the candidate virtual objects; and controlling the second virtual object to automatically move to a specified relative position of the target virtual object. The virtual object is thus automatically controlled, in response to a user operation on the scene interface of the virtual scene, to move toward another virtual object, which improves the flexibility of virtual object control, improves the movement efficiency of virtual objects in the virtual scene, and in turn saves the power and data traffic consumed by the terminal.

Description

Virtual object control method in virtual scene, computer equipment and storage medium
Technical Field
The present application relates to the technical field of virtual scenes, and in particular to a virtual object control method in a virtual scene, a computer device, and a storage medium.
Background
Many current game applications that construct virtual scenes provide a function for controlling the movement of virtual objects.
In the related art, when a terminal displays a scene picture of a virtual scene, a movement control, such as a virtual joystick, is superimposed on the scene picture. The user can perform touch operations on the virtual joystick to control the virtual object corresponding to the terminal to move in the virtual scene, where the direction of the virtual joystick indicates the moving direction of the virtual object. While the virtual object moves toward a target position, its entire movement path is manually controlled by the user through the virtual joystick.
The movement efficiency of virtual objects in a virtual scene is an important factor affecting the duration of a game. In the related art, controlling the virtual object to move in the virtual scene only through the movement control results in low movement efficiency, which in turn lengthens the time a game requires and heavily consumes the terminal's power and data traffic.
Disclosure of Invention
The embodiments of the application provide a virtual object control method in a virtual scene, a computer device, and a storage medium, which can improve the movement efficiency of a virtual object in a virtual scene and thereby save the power and data traffic consumed by a terminal. The technical solution is as follows:
In one aspect, a virtual object control method in a virtual scene is provided, the method comprising:
displaying a display interface of an application program, where the display interface includes a scene picture of a virtual scene, and the scene picture includes a first virtual object; the first virtual object is at least one virtual object other than a second virtual object controlled by the terminal;
displaying, in the display interface, a first trigger control corresponding to each candidate virtual object among the first virtual objects, the candidate virtual objects being all or part of the first virtual objects;
determining a target virtual object among the candidate virtual objects according to trigger operations on the first trigger controls respectively displayed for the candidate virtual objects;
and controlling the second virtual object to automatically move to a specified relative position of the target virtual object.
In one aspect, a virtual object control apparatus in a virtual scene is provided, the apparatus including:
an interface display module, configured to display a display interface of an application program, where the display interface includes a scene picture of a virtual scene, and the scene picture includes a first virtual object; the first virtual object is at least one virtual object other than a second virtual object controlled by the terminal;
a control display module, configured to display, in the display interface, a first trigger control corresponding to each candidate virtual object among the first virtual objects; the candidate virtual objects are all or part of the first virtual objects;
a target determining module, configured to determine a target virtual object among the candidate virtual objects according to trigger operations on the first trigger controls respectively displayed for the candidate virtual objects;
and an object moving module, configured to control the second virtual object to automatically move to a specified relative position of the target virtual object.
Optionally, the control display module includes:
a second control display sub-module, configured to display a second trigger control superimposed on the display interface;
and a first control display sub-module, configured to display, in the display interface, one first trigger control corresponding to each candidate virtual object when a trigger operation on the second trigger control is received.
Optionally, the second control display sub-module is configured to superimpose and display the second trigger control on the display interface when the second virtual object has the specified virtual prop.
Optionally, the apparatus further includes:
and the candidate object determining module is used for determining the virtual objects, of which the distances between the first virtual object and the second virtual object are smaller than a specified distance threshold, as the candidate virtual objects before the control display module displays a first trigger control corresponding to each candidate virtual object in the first virtual object in a display interface.
Optionally, the apparatus further includes:
a target re-determination module, configured to determine a new target virtual object if the number of times the second virtual object has moved this time does not reach a count threshold, where the new target virtual object is a virtual object within a preset range around the second virtual object;
the object moving module is further configured to control the second virtual object to automatically move to a specified relative position of the new target virtual object.
Optionally, the target virtual object includes at least two virtual objects, the at least two virtual objects being the candidate virtual objects corresponding to at least two first trigger controls that each received a trigger operation; the object moving module is configured to control the second virtual object to move to the specified relative positions of the at least two virtual objects in sequence.
Optionally, the apparatus further includes:
and the operation control module is used for controlling the second virtual object to execute the appointed operation on the target virtual object when controlling the second virtual object to automatically move to the appointed relative position of the target virtual object.
In another aspect, a computer device is provided, the computer device including a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the virtual object control method in a virtual scene described above.
In yet another aspect, a computer-readable storage medium is provided, having stored therein at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the virtual object control method in a virtual scene described above.
The technical scheme provided by the embodiment of the application has the beneficial effects that at least:
By displaying a display interface of an application program, displaying in the display interface a first trigger control corresponding to each candidate virtual object among the first virtual objects, determining a target virtual object among the candidate virtual objects according to trigger operations on the first trigger controls respectively displayed for the candidate virtual objects, and then controlling the second virtual object to automatically move to a specified relative position of the target virtual object, the virtual object is automatically controlled to move toward another virtual object according to the user's operation on the scene interface of the virtual scene. This improves the flexibility of virtual object control, improves the movement efficiency of virtual objects in the virtual scene, and in turn saves the power and data traffic consumed by the terminal.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic structural view of a terminal according to an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram of a display interface of a virtual scene provided by an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of a virtual object control flow in a virtual scene provided by an exemplary embodiment of the present application;
FIG. 4 is a schematic diagram of a display interface of a virtual scene according to the embodiment shown in FIG. 3;
FIG. 5 is a method flow diagram of a virtual object control method in a virtual scene provided by an exemplary embodiment of the application;
FIG. 6 is a schematic diagram of a display interface according to the embodiment of FIG. 5;
FIG. 7 is a schematic diagram of determining candidate virtual objects according to the embodiment shown in FIG. 5;
FIG. 8 is a schematic diagram of a second trigger control display interface involved in the embodiment of FIG. 5;
FIG. 9 is a schematic diagram of a first trigger control display interface involved in the embodiment of FIG. 5;
FIG. 10 is a schematic diagram of determining a target virtual object according to the embodiment shown in FIG. 5;
FIG. 11 is a schematic view of a camera model related to the embodiment shown in FIG. 5;
FIG. 12 is a flowchart of controlling a virtual object to move and perform attack actions in a virtual scene provided by an exemplary embodiment of the present application;
fig. 13 is a block diagram illustrating a configuration of a virtual object control apparatus in a virtual scene according to an exemplary embodiment of the present application;
fig. 14 is a block diagram of a computer device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the appended claims.
Virtual scene: a virtual scene that an application program displays (or provides) while running on a terminal. The virtual scene may be a simulated environment of the real world, a semi-simulated, semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene; the following embodiments take a three-dimensional virtual scene as an example, but are not limited thereto. Optionally, the virtual scene may also be used for a battle between at least two virtual characters. Optionally, the virtual scene may also be used for a battle between at least two virtual characters using virtual firearms. Optionally, the virtual scene may further be used for a battle between at least two virtual characters using virtual firearms within a target area that keeps shrinking over time in the virtual scene.
Virtual object: a movable object in a virtual scene. The movable object may be at least one of a virtual character, a virtual animal, and a virtual vehicle. Optionally, when the virtual scene is a three-dimensional virtual scene, the virtual object is a three-dimensional model created based on a skeletal animation technique. Each virtual object has its own shape, volume, and orientation in the three-dimensional virtual scene and occupies part of the space in the three-dimensional virtual scene.
A virtual scene is typically generated by an application program in a computer device such as a terminal and presented based on hardware (such as a screen) in the terminal. The terminal may be a mobile terminal such as a smart phone, a tablet computer, or an e-book reader; alternatively, the terminal may be a notebook computer or a stationary desktop computer.
Referring to fig. 1, a schematic structural diagram of a terminal according to an exemplary embodiment of the present application is shown. As shown in fig. 1, the terminal includes a main board 110, an external input/output device 120, a memory 130, an external interface 140, a touch system 150, and a power supply 160.
Wherein, the motherboard 110 has integrated therein processing elements such as a processor and a controller.
The external input/output device 120 may include a display component (such as a display screen), a sound playing component (such as a speaker), a sound collecting component (such as a microphone), various types of keys, and the like.
The memory 130 has stored therein program codes and data.
The external interface 140 may include a headset interface, a charging interface, a data interface, and the like.
The touch system 150 may be integrated in a display component or key of the external input/output device 120, and the touch system 150 is used to detect a touch operation performed by a user on the display component or key.
The power supply 160 is used to power the other various components in the terminal.
In an embodiment of the present application, the processor in the motherboard 110 may generate a virtual scene by executing or calling the program codes and data stored in the memory, and present the generated virtual scene through the external input/output device 120. During the display of the virtual scene, the touch system 150 can detect the touch operations performed when the user interacts with the virtual scene.
The virtual scene may be a three-dimensional virtual scene, or it may be a two-dimensional virtual scene. Taking a three-dimensional virtual scene as an example, please refer to FIG. 2, which illustrates a schematic diagram of a display interface of the virtual scene provided in an exemplary embodiment of the present application. As shown in FIG. 2, the display interface of the virtual scene includes a scene picture 200, and the scene picture 200 includes a currently controlled virtual object 210, an environment picture 220 of the three-dimensional virtual scene, and a virtual object 240. The virtual object 240 may be a virtual object controlled by a user of another terminal or a virtual object controlled by the application program.
In FIG. 2, the currently controlled virtual object 210 and the virtual object 240 are three-dimensional models in the three-dimensional virtual scene, and the environment picture displayed in the scene picture 200 consists of objects observed from the perspective of the currently controlled virtual object 210. For example, as shown in FIG. 2, the environment picture 220 displayed from the perspective of the currently controlled virtual object 210 includes the ground 224, the sky 225, the horizon 223, a hill 221, and a factory building 222.
The currently controlled virtual object 210 can move in real time under the user's control. For example, if the terminal screen supports touch operation and the scene picture 200 of the virtual scene contains a virtual control, then when the user touches that virtual control, the currently controlled virtual object 210 can move in the virtual scene.
In the embodiments of the application, the currently controlled virtual object can move under the control of the terminal, perform specified actions, and so on.
For example, please refer to fig. 3, which illustrates a schematic diagram of a virtual object control flow in a virtual scene according to an exemplary embodiment of the present application. As shown in fig. 3, a terminal running an application program corresponding to the virtual scene (such as the terminal shown in fig. 1) may control the virtual object in the virtual scene to move by performing the following steps.
Step 31, displaying a display interface of an application program, where the display interface includes a scene picture of a virtual scene, and the scene picture includes a first virtual object; the first virtual object is at least one virtual object other than the second virtual object controlled by the terminal.
Step 32, in the display interface, displaying a first trigger control corresponding to each candidate virtual object among the first virtual objects; the candidate virtual objects are all or part of the first virtual objects.
Step 33, determining a target virtual object among the candidate virtual objects according to the trigger operations on the first trigger controls respectively displayed for the candidate virtual objects.
Step 34, controlling the second virtual object to automatically move to the specified relative position of the target virtual object (a condensed sketch of steps 31 to 34 follows below).
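The four steps condense into a small control loop. The sketch below is a minimal, hypothetical illustration; the class and method names are assumptions and do not come from the patent:

```python
# Minimal, hypothetical sketch of steps 31-34; all names are
# illustrative assumptions, not taken from the patent text.
from dataclasses import dataclass

@dataclass
class VirtualObject:
    name: str
    position: tuple  # (x, y) scene coordinates

class Scene:
    def __init__(self, controlled: VirtualObject, others: list):
        self.controlled = controlled  # the "second virtual object"
        self.others = others          # the "first virtual object(s)"

    def show_trigger_controls(self, candidates):
        # Step 32: one first trigger control per candidate virtual object.
        for obj in candidates:
            print(f"first trigger control shown above {obj.name}")

    def on_trigger(self, target: VirtualObject, offset=(0.0, -1.0)):
        # Steps 33-34: the tapped control identifies the target, and the
        # controlled object auto-moves to a position relative to it
        # (here a hypothetical "behind the target" offset).
        self.controlled.position = (target.position[0] + offset[0],
                                    target.position[1] + offset[1])
```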
Alternatively, the specified relative position may be a certain position with respect to the target virtual object. For example, the specified relative position of the target virtual object may be the back or side of the target virtual object, and so on.
In the embodiments of the application, when the terminal displays the scene picture of the virtual scene, trigger controls can be displayed corresponding to the other virtual objects in the scene picture. When the user taps a trigger control, the terminal can determine the target virtual object according to the user's trigger operation and automatically control the current virtual object (that is, the second virtual object) to move to the vicinity of the target virtual object, for example to the back of the target virtual object. While the current virtual object is automatically moved to the vicinity of the target virtual object, no other control operation is required from the user.
Alternatively, controlling the second virtual object to automatically move to the specified relative position of the target virtual object may mean controlling the second virtual object to appear directly at the specified relative position.
Optionally, the second virtual object is controlled to move to the specified relative position of the target virtual object at a target speed, where the target speed is greater than the moving speed of the second virtual object under the control of the virtual joystick.
The virtual joystick is a virtual control for receiving a manual control operation, the manual control operation being an operation that controls the moving direction of the second virtual object.
For example, please refer to FIG. 4, which illustrates a schematic diagram of a display interface of a virtual scene according to an embodiment of the present application. As shown in FIG. 4, the virtual scene display interface 40 includes a scene picture 41 and at least one main virtual control 42 (two are shown in FIG. 4). The scene picture 41 includes a currently controlled virtual object 41a and a virtual object 41b controlled by another terminal or by artificial intelligence. The user can control the currently controlled virtual object 41a to move in the virtual scene or perform specified actions through the main virtual controls 42. When the user controls the virtual object 41a to move through the main virtual control 42, the virtual object 41a moves at a first speed; when the user makes the virtual object 41a automatically move toward the virtual object 41b through the first trigger control displayed corresponding to the virtual object 41b, the terminal controls the virtual object 41a to move to the side of the virtual object 41b at a second speed, where the second speed is greater than the first speed.
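The passage above constrains only the relationship between the two speeds (the second is greater than the first). A per-frame movement update under that assumption might look as follows; the concrete speed values and the 2D representation are illustrative, not from the patent:

```python
import math

JOYSTICK_SPEED = 3.5    # first speed: manual movement via the virtual joystick
AUTO_MOVE_SPEED = 12.0  # second speed: only "greater than the first" is required

def step(position, joystick_dir, target, auto_move, dt):
    """Advance the controlled object's 2D position by one frame."""
    if auto_move and target is not None:
        dx, dy = target[0] - position[0], target[1] - position[1]
        dist = math.hypot(dx, dy)
        if dist == 0.0:
            return position
        travel = min(AUTO_MOVE_SPEED * dt, dist)  # do not overshoot the target
        return (position[0] + dx / dist * travel,
                position[1] + dy / dist * travel)
    # Manual movement: joystick direction scaled by the first speed.
    return (position[0] + joystick_dir[0] * JOYSTICK_SPEED * dt,
            position[1] + joystick_dir[1] * JOYSTICK_SPEED * dt)
```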
In summary, in the solution shown in the embodiments of the application, a display interface of an application program is displayed; in the display interface, a first trigger control is displayed corresponding to each candidate virtual object among the first virtual objects; a target virtual object among the candidate virtual objects is determined according to trigger operations on the first trigger controls displayed for the candidate virtual objects; and the second virtual object is then controlled to automatically move to a specified relative position of the target virtual object. The virtual object is thus automatically controlled to move toward another virtual object according to the user's operation on the scene interface of the virtual scene, which improves the flexibility of virtual object control, improves the movement efficiency of virtual objects in the virtual scene, and in turn saves the power and data traffic consumed by the terminal.
Referring to FIG. 5, a method flowchart of a virtual object control method in a virtual scene according to an exemplary embodiment of the application is shown. As shown in FIG. 5, a terminal running an application program corresponding to the virtual scene (such as the terminal shown in FIG. 1) can control a virtual object in the virtual scene to move, perform specified actions, and so on by performing the following steps.
Step 501, a terminal displays a display interface of an application program, where the display interface includes a scene picture of a virtual scene, and the scene picture includes a first virtual object; the first virtual object is at least one virtual object other than the second virtual object controlled by the terminal.
For example, please refer to FIG. 6, which illustrates a display interface diagram according to an embodiment of the present application. As shown in FIG. 6, the virtual scene display interface 60 includes a scene picture 61, a first virtual object 62, and a second virtual object 63. The first virtual object 62 may be a virtual object controlled by another user or by the application program (such as artificial intelligence), and there may be one or more first virtual objects 62. The second virtual object 63 may be a virtual object controlled by the current user; it can be controlled by the user to move to a specified position, and it can perform operations such as attacking the first virtual object 62.
Step 502, the terminal determines, among the first virtual objects, the virtual objects whose distance to the second virtual object is smaller than a specified distance threshold as the candidate virtual objects.
In the embodiments of the application, the specified distance threshold may be a preset fixed distance value. The terminal draws a circle centered on the second virtual object controlled by the current user, with the specified distance threshold as the radius; the first virtual objects inside or on the circle may be determined as candidate virtual objects.
For example, please refer to FIG. 7, which illustrates a schematic diagram of determining candidate virtual objects according to an embodiment of the present application. As shown in FIG. 7, a circular area range 73 is obtained with the second virtual object 71 as the center and the specified distance threshold 72 as the radius, and the terminal detects four first virtual objects in the scene, namely an A virtual object 74a, a B virtual object 74b, a C virtual object 74c, and a D virtual object 74d. The C virtual object 74c and the D virtual object 74d are within the circular area range 73, while the A virtual object 74a and the B virtual object 74b are outside it, so the terminal can determine that the C virtual object 74c and the D virtual object 74d within the circular area range 73 are the candidate virtual objects.
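The candidate determination in step 502 and the FIG. 7 example reduce to a plain distance test. A minimal sketch, assuming 2D coordinates and a dictionary of object positions (both assumptions):

```python
import math

def candidate_objects(second_obj_pos, first_objects, distance_threshold):
    """Determine candidates: the first virtual objects inside or on the
    circle centered on the controlled (second) virtual object."""
    return [obj_id for obj_id, pos in first_objects.items()
            if math.dist(second_obj_pos, pos) <= distance_threshold]

# Mirroring FIG. 7: C and D fall within the range, A and B do not.
objects = {"A": (30.0, 0.0), "B": (0.0, 28.0),
           "C": (5.0, 5.0), "D": (-8.0, 3.0)}
print(candidate_objects((0.0, 0.0), objects, 15.0))  # -> ['C', 'D']
```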
Step 503, the terminal displays a second trigger control superimposed on the display interface.
In the embodiments of the application, the second trigger control is a virtual control used to switch the movement and attack mode of the second virtual object. There may be one or more second trigger controls.
The second trigger control may be displayed according to a preset condition. For example, the second trigger control is initially not superimposed on the display interface, and is displayed when the terminal detects that the currently controlled second virtual object satisfies the preset condition.
Optionally, when the second virtual object has the specified virtual prop, the second trigger control is displayed in an overlaid manner in the display interface.
For example, in one possible implementation, when the second virtual object has a specified virtual prop, the terminal superimposes and displays a second trigger control in the display interface, where the second trigger control is used to switch the movement mode of the second virtual object.
For example, please refer to FIG. 8, which illustrates a second trigger control display interface according to an embodiment of the present application. As shown in FIG. 8, when the second virtual object 83 controlled by the user obtains the specified virtual prop, here a virtual dagger, a second trigger control 84 is added to the display interface 80 of the current virtual scene, in addition to the scene picture 81, the first virtual object 82, and the second virtual object 83. The second trigger control 84 is used to switch the movement mode of the second virtual object 83 and to perform attack actions on targets using the dagger.
Step 504, when a trigger operation on the second trigger control is received, one first trigger control is displayed in the display interface corresponding to each candidate virtual object.
The execution sequence of the steps 502 and 503 is not limited in the embodiment of the present application. For example, the above step 502 may be performed before step 503, that is, the terminal may continuously determine and update the candidate virtual object; alternatively, the step 502 may be performed after the step 503, that is, the terminal may determine the candidate virtual object after displaying the second trigger control; alternatively, the step 502 may be performed after receiving the triggering operation of the second triggering control and before displaying the first triggering control.
In the embodiment of the application, the first trigger control may be located above each candidate virtual object, and the first trigger control is used for selecting the target virtual object.
For example, please refer to FIG. 9, which illustrates a first trigger control display interface according to an embodiment of the present application. As shown in FIG. 9, when the terminal receives the user's trigger operation on the second trigger control 94, it can determine, according to the specified distance threshold, that the A virtual object 92a and the B virtual object 92b are not within the circular area range 95 and that the C virtual object 92c and the D virtual object 92d are candidate virtual objects, and display one first trigger control 96 above each of the C virtual object 92c and the D virtual object 92d.
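Steps 503 and 504 chain the two controls together: a tap on the second trigger control computes the candidate set and draws one first trigger control per candidate, as in FIG. 9. A hedged sketch; the `terminal` attributes and method names are assumed:

```python
import math

def on_second_control_tap(terminal):
    """Step 504 sketch: on a tap of the second trigger control, apply
    step 502's distance test and show one first trigger control per
    candidate. All `terminal` members are hypothetical."""
    for obj_id, pos in terminal.first_objects.items():
        if math.dist(terminal.player_pos, pos) <= terminal.distance_threshold:
            terminal.show_first_trigger_control(obj_id)  # drawn above the object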
Step 505, the terminal determines the target virtual object among the candidate virtual objects according to the trigger operations on the first trigger controls respectively displayed for the candidate virtual objects.
In the embodiments of the application, the target virtual object may be a single virtual object, or the target virtual object may be at least two virtual objects.
Alternatively, when the target virtual object is a single virtual object, it may be the candidate virtual object corresponding to the one first trigger control that received the trigger operation.
The user can select one of the first trigger controls displayed by the terminal to trigger; accordingly, the terminal takes the candidate virtual object corresponding to that first trigger control as the target virtual object.
For example, in FIG. 9, if the user selects only the first trigger control corresponding to the C virtual object 92c, the terminal determines the C virtual object 92c as the target virtual object.
Optionally, the target virtual object includes at least two virtual objects, the at least two virtual objects being the candidate virtual objects corresponding to the at least two first trigger controls that each received a trigger operation.
For example, the user may perform trigger operations on several of the first trigger controls displayed by the terminal; accordingly, the terminal takes the candidate virtual objects corresponding to those first trigger controls as the target virtual objects.
For example, in FIG. 9, if the user selects the first trigger controls corresponding to the C virtual object 92c and the D virtual object 92d respectively, the terminal determines both the C virtual object 92c and the D virtual object 92d as target virtual objects.
Step 506, controlling the second virtual object to automatically move to the specified relative position of the target virtual object.
In the embodiment of the application, after the user selects the target virtual object, the terminal controls the second virtual object to move to the position where the target virtual object is located.
When the target virtual object is a single candidate virtual object, the terminal controls the second virtual object to move directly to its position; when the target virtual object includes at least two candidate virtual objects, the second virtual object can be moved to them sequentially, in the order in which the user triggered their first trigger controls.
For example, as shown in FIG. 9, if the user performs a trigger operation on the first trigger control 96 corresponding to the D virtual object 92d and then on the first trigger control 96 corresponding to the C virtual object 92c, the terminal controls the second virtual object 93 to move first to the position corresponding to the D virtual object 92d and then to the position corresponding to the C virtual object 92c. If the user performs a trigger operation only on the first trigger control 96 corresponding to the C virtual object 92c, the second virtual object moves directly to the position corresponding to the C virtual object 92c.
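Since step 506 visits multiple targets in the order in which their first trigger controls were tapped, a FIFO queue captures the behavior. A minimal sketch with assumed names:

```python
from collections import deque

class AutoMoveQueue:
    """Visit selected targets in the order their first trigger
    controls were tapped (step 506); names are illustrative."""
    def __init__(self):
        self._targets = deque()

    def on_first_control_tap(self, target_id):
        self._targets.append(target_id)  # selection order is preserved

    def next_target(self):
        return self._targets.popleft() if self._targets else None

queue = AutoMoveQueue()
queue.on_first_control_tap("D")  # the user taps D's control first...
queue.on_first_control_tap("C")  # ...then C's, as in the FIG. 9 example
assert queue.next_target() == "D" and queue.next_target() == "C"
```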
Optionally, when controlling the second virtual object to automatically move to the specified relative position of the target virtual object, the terminal further controls the second virtual object to perform a specified operation on the target virtual object.
The specified operation may be an attack operation or the like.
Step 507, if the number of times the second virtual object has moved this time does not reach a count threshold, a new target virtual object is determined, where the new target virtual object is a virtual object within a preset range around the second virtual object.
The number of moves refers to the number of times the terminal has controlled the second virtual object to move since the user performed the trigger operation on the first trigger control; each time the terminal controls the second virtual object to move to the specified relative position of one target virtual object counts as one move.
In the embodiments of the application, the count threshold may be preset by the application program, and it specifies the maximum number of times the second virtual object can move using the virtual prop.
Alternatively, the count threshold may be determined by the kind of virtual prop used by the second virtual object.
For example, when the virtual prop is a virtual dagger, the count threshold may be set to 4, and when the virtual prop is a virtual short sword, the count threshold may be set to 3.
For example, suppose the second virtual object 71 shown in FIG. 7 moves in sequence to the positions corresponding to the D virtual object 74d and the C virtual object 74c, attacks the D virtual object 74d and the C virtual object 74c using the virtual prop, and finally stays at the position corresponding to the C virtual object 74c; the second virtual object has then moved twice in total. If the virtual prop used is a virtual dagger, for which the count threshold is set to 4, the two moves this time are fewer than the count threshold, and the terminal can then determine a new target virtual object. Referring to FIG. 10, a schematic diagram of determining a target virtual object according to an embodiment of the application is shown. As shown in FIG. 10, the first virtual objects within a preset range around the second virtual object 1001 are detected: a circular area range 1003 is obtained centered on the second virtual object 1001 with the specified distance threshold 1002 as the radius, the B virtual object 1004b is detected within the circular area range 1003, and the new target virtual object is determined to be the B virtual object 1004b.
Step 508, controlling the second virtual object to automatically move to the specified relative position of the new target virtual object.
In the embodiments of the application, when the new target virtual object is at least two virtual objects, the second virtual object is controlled to move to the specified relative positions of the at least two virtual objects in sequence.
Optionally, when the second virtual object moves to the new target virtual objects, it may move in order of their distance from the second virtual object, from near to far, or in the order in which the user selected the first trigger controls corresponding to the target virtual objects.
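Steps 506 to 508 together form a loop: move through the selected targets, then keep re-targeting nearby objects until the prop's move limit is reached or no target remains. The sketch below is an illustration under stated assumptions: the limits table reflects the dagger/short-sword example above, and the callables stand in for engine hooks the patent does not name:

```python
PROP_MOVE_LIMITS = {"virtual_dagger": 4, "virtual_short_sword": 3}

def auto_attack_chain(initial_targets, prop, find_in_range,
                      distance_to, move_and_attack):
    """Move through the selected targets, then re-target until the move
    count reaches the prop's limit or nothing is in range (steps 506-508)."""
    limit = PROP_MOVE_LIMITS[prop]
    moves = 0
    pending = list(initial_targets)      # selection order, per step 506
    while pending and moves < limit:
        target = pending.pop(0)
        move_and_attack(target)
        moves += 1
        if not pending and moves < limit:
            # Step 507: look for new targets around the current position,
            # here ordered nearest-first (one of the two orders mentioned).
            pending = sorted(find_in_range(), key=distance_to)
    return moves
```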
Optionally, when the second virtual object moves to the position of a corresponding target virtual object, the viewing-angle direction of the display interface is the viewing-angle direction of the second virtual object.
The viewing-angle direction is the direction in which the virtual object is observed by a camera model located above and behind the virtual object in the virtual scene.
The scene picture in the display interface may be the picture obtained when a virtual object in the virtual scene (such as a virtual character or a virtual vehicle) is observed from a certain viewing-angle direction, the viewing-angle direction being the direction in which the virtual object is observed through the camera model in the virtual scene.
The camera model is a three-dimensional model located around the virtual object in the virtual scene. When a first-person perspective is adopted, the camera model is located at the observation position of the virtual object, for example near or inside the cockpit when the virtual object is a virtual vehicle, and near or inside the head when the virtual object is a virtual character. When a third-person perspective is adopted, the camera model can be located behind the virtual object and bound to it, or at an arbitrary position at a preset distance from the virtual object; through the camera model, the virtual object in the virtual scene can be observed from different angles. Optionally, the camera model is not actually displayed in the virtual scene, that is, it cannot be seen in the virtual scene displayed by the user interface.
Taking the case where the camera model is located at an arbitrary position at a preset distance from the virtual object as an example: optionally, one virtual object corresponds to one camera model, and the camera model can rotate with the virtual object as the rotation center, for example around any point of the virtual object. During rotation, the camera model not only rotates in angle but is also displaced, while the distance between the camera model and the rotation center remains unchanged; that is, the camera model rotates on the surface of a sphere whose center is the rotation center. Here, any point of the virtual object may be a point of the virtual object itself or any point around it, for example the cockpit, a front seat, or a rear seat of a virtual vehicle, any point around the center of the virtual object, or the head or trunk of a virtual character or any point around the virtual character; the embodiments of the application do not limit this. Optionally, when the camera model observes the virtual object, the viewing-angle direction of the camera model is the direction in which the perpendicular to the tangent plane of the sphere at the camera model's position points toward the virtual object.
Optionally, the camera model may also observe the virtual object at a preset angle in different directions of the virtual object.
Referring to FIG. 11, a schematic diagram of a camera model according to an embodiment of the present application is shown. A point in the virtual object 111 is determined as the rotation center 112, and the camera model rotates around this rotation center; optionally, the camera model is configured with an initial position above and behind the virtual object (for example, a position behind the head). Schematically, as shown in FIG. 11, the initial position is position 113; when the camera model rotates to position 114 or position 115, the viewing-angle direction of the camera model changes with the rotation.
In FIG. 11 the virtual object is exemplified as a virtual character; in the embodiments of the application, the virtual object 111 may also be a virtual vehicle in the virtual scene, a virtual animal, or any other virtual object that can be controlled by the user.
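The camera behavior described above amounts to a standard third-person orbit camera: a fixed radius around the rotation center, with the view direction pointing back at the center. A sketch assuming a spherical-coordinate parameterization, which the patent does not itself specify:

```python
import math

def camera_position(center, radius, yaw, pitch):
    """Camera on a sphere of fixed radius around the rotation center;
    rotating changes both angle and displacement, as described above."""
    cx, cy, cz = center
    return (cx + radius * math.cos(pitch) * math.sin(yaw),
            cy + radius * math.sin(pitch),       # height above the center
            cz + radius * math.cos(pitch) * math.cos(yaw))

def view_direction(camera_pos, center):
    """Unit vector from the camera toward the rotation center (the
    sphere-tangent-plane normal mentioned in the description)."""
    dx, dy, dz = (c - p for c, p in zip(center, camera_pos))
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / norm, dy / norm, dz / norm)

# Initial position above and behind the character, as in FIG. 11.
pos = camera_position((0.0, 1.7, 0.0), radius=4.0, yaw=math.pi, pitch=0.4)
```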
In summary, in the solution shown in the embodiments of the application, a display interface of an application program is displayed; in the display interface, a first trigger control is displayed corresponding to each candidate virtual object among the first virtual objects; a target virtual object among the candidate virtual objects is determined according to trigger operations on the first trigger controls displayed for the candidate virtual objects; and the second virtual object is then controlled to automatically move to a specified relative position of the target virtual object. The virtual object is thus automatically controlled to move toward another virtual object according to the user's operation on the scene interface of the virtual scene, which improves the flexibility of virtual object control, improves the movement efficiency of virtual objects in the virtual scene, and in turn saves the power and data traffic consumed by the terminal.
Referring to FIG. 12, a flowchart of controlling a virtual object to move and perform attack actions in a virtual scene according to an exemplary embodiment of the present application is shown. As shown in FIG. 12, the flow of controlling the virtual object to move and perform attack actions in the virtual scene may be as follows:
s121, when the current weapon of the virtual object controlled by the user has instantaneous attack skills, the user uses the instantaneous attack skills through the trigger control.
And S122, after the user triggers the control to use the instantaneous attack skill, the terminal judges whether a target object exists in the detection range of the virtual object controlled by the current user.
And S123, if judging that the target object does not exist in the detection range of the virtual object controlled by the current user, canceling the use of the instantaneous attack skill by the terminal.
And S124, if the target object exists in the detection range of the virtual object controlled by the current user, displaying a control which can be triggered and selected at the position of the terminal on the target object head in the detection range.
S125, the terminal judges whether the user triggers any one of the position display controls on the target object head in the detection range. If the trigger is not generated, waiting for the user to perform the trigger operation, and then executing the next step.
And S126, after the user triggers the control displayed at the position on the head of the target object, under the control of the terminal, the virtual object controlled by the user instantaneously moves to the position of the target object corresponding to the control triggered first according to the sequence of triggering the control, and attacks the target object.
S127, under the control of the terminal, the virtual objects controlled by the user sequentially and instantly move to the positions of the target objects in sequence and attack the target objects.
And S128, the terminal judges whether the current attack quantity of the target object reaches the preset attack quantity upper limit, if so, the transient attack is stopped and the virtual object controlled by the user stays at the current position.
S129, if the current attack quantity of the target object does not reach the preset upper limit of the attack quantity, the terminal judges whether the target object can be selected in the detection range of the position of the virtual object controlled by the current user. If the fact that the target object can be selected in the detection range of the position of the virtual object controlled by the current user is judged, the transient attack operation is continued until the fact that no target object can be selected in the detection range of the position of the virtual object controlled by the current user is judged.
S1210, if the terminal judges that no target object can be selected in the detection range of the position of the virtual object controlled by the current user, the transient attack action is stopped, and the virtual object controlled by the user stays at the position of the target object of the final transient attack.
S1211, the terminal ends the use of the transient attack skill, and the virtual object controlled by the user is restored to a state before the use of the skill.
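Condensed into code, the S121 to S1211 flow becomes a single routine with two exits: cancellation when no target is in range at activation, and termination when the attack limit is reached or no further target can be selected. All terminal hooks below are hypothetical:

```python
def use_instant_attack_skill(terminal):
    """Condensed sketch of the S121-S1211 flow; all hooks are assumed."""
    targets = terminal.targets_in_detection_range()       # S122
    if not targets:
        return "skill cancelled"                          # S123
    terminal.show_head_controls(targets)                  # S124
    selected = terminal.wait_for_control_taps()           # S125, tap order
    attacked = 0
    while selected and attacked < terminal.attack_limit:  # S126-S128
        terminal.instant_move_and_attack(selected.pop(0))
        attacked += 1
        if not selected and attacked < terminal.attack_limit:
            selected = terminal.targets_in_detection_range()  # S129
    terminal.restore_pre_skill_state()                    # S1210-S1211
    return f"attacked {attacked} target(s)"
```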
According to the solution provided by the embodiments of the application, in the virtual scene of a game, the user can, by selecting the trigger controls above other virtual objects, trigger the current virtual object to move instantly or quickly to the vicinity of those virtual objects and perform attack actions. This greatly improves the efficiency of controlling virtual object movement and attacks, can significantly shorten game duration, and saves the power and data traffic consumed by the terminal.
In summary, in the solution shown in the embodiments of the application, a display interface of an application program is displayed; in the display interface, a first trigger control is displayed corresponding to each candidate virtual object among the first virtual objects; a target virtual object among the candidate virtual objects is determined according to trigger operations on the first trigger controls displayed for the candidate virtual objects; and the second virtual object is then controlled to automatically move to a specified relative position of the target virtual object. The virtual object is thus automatically controlled to move toward another virtual object according to the user's operation on the scene interface of the virtual scene, which improves the flexibility of virtual object control, improves the movement efficiency of virtual objects in the virtual scene, and in turn saves the power and data traffic consumed by the terminal.
Fig. 13 is a block diagram showing the structure of a virtual object control apparatus in a virtual scene according to an exemplary embodiment. The virtual object control apparatus in the virtual scene may be used in a terminal to perform all or part of the steps performed by the terminal in the methods shown in the embodiments corresponding to FIG. 3 or FIG. 5. The virtual object control apparatus in the virtual scene may include:
an interface display module 1301, configured to display a display interface of an application program, where the display interface includes a scene picture of a virtual scene, and the scene picture includes a first virtual object; the first virtual object is at least one virtual object other than a second virtual object controlled by the terminal;
a control display module 1302, configured to display, in the display interface, a first trigger control corresponding to each candidate virtual object among the first virtual objects; the candidate virtual objects are all or part of the first virtual objects;
a target determining module 1303, configured to determine a target virtual object among the candidate virtual objects according to trigger operations on the first trigger controls respectively displayed for the candidate virtual objects;
An object moving module 1304 is configured to control the second virtual object to automatically move to a specified relative position of the target virtual object.
Optionally, the control display module 1302 includes:
a second control display sub-module, configured to display a second trigger control superimposed on the display interface;
and a first control display sub-module, configured to display, in the display interface, one first trigger control corresponding to each candidate virtual object when a trigger operation on the second trigger control is received.
Optionally, the second control display sub-module is configured to superimpose and display the second trigger control on the display interface when the second virtual object has the specified virtual prop.
Optionally, the apparatus further includes:
the candidate object determining module is configured to determine, as each candidate virtual object, a virtual object, of the first virtual objects, having a distance with the second virtual object smaller than a specified distance threshold, before the control display module 1302 displays, in a display interface, a first trigger control corresponding to each candidate virtual object in the first virtual objects.
Optionally, the apparatus further includes:
the target redetermination module is used for determining a new target virtual object if the number of times of the current movement of the second virtual object does not reach a number threshold, wherein the new target virtual object is a virtual object in a preset range around the second virtual object;
the object moving module 1304 is further configured to control the second virtual object to automatically move to a specified relative position of the new target virtual object.
Optionally, the target virtual object includes at least two virtual objects, the at least two virtual objects being the candidate virtual objects corresponding to at least two first trigger controls that each received a trigger operation; the object moving module 1304 is configured to control the second virtual object to move to the specified relative positions of the at least two virtual objects in sequence.
Optionally, the apparatus further includes:
an operation control module, configured to control the second virtual object to perform a specified operation on the target virtual object when controlling the second virtual object to automatically move to the specified relative position of the target virtual object.
In summary, in the solution shown in the embodiments of the application, a display interface of an application program is displayed; in the display interface, a first trigger control is displayed corresponding to each candidate virtual object among the first virtual objects; a target virtual object among the candidate virtual objects is determined according to trigger operations on the first trigger controls displayed for the candidate virtual objects; and the second virtual object is then controlled to automatically move to a specified relative position of the target virtual object. The virtual object is thus automatically controlled to move toward another virtual object according to the user's operation on the scene interface of the virtual scene, which improves the flexibility of virtual object control, improves the movement efficiency of virtual objects in the virtual scene, and in turn saves the power and data traffic consumed by the terminal.
Fig. 14 is a block diagram illustrating a computer device 1400 according to an exemplary embodiment. The computer device 1400 may be a user terminal such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The computer device 1400 may also be referred to as a user device, a portable terminal, a laptop terminal, a desktop terminal, and the like.
In general, the computer device 1400 includes: a processor 1401 and a memory 1402.
The processor 1401 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1401 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 1401 may also include a main processor and a coprocessor. The main processor, also called a CPU (Central Processing Unit), is a processor for processing data in the awake state; the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1401 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1401 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1402 may include one or more computer-readable storage media, which may be non-transitory. The memory 1402 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1402 is used to store at least one instruction, which is executed by the processor 1401 to implement all or part of the steps of the methods provided by the method embodiments of the present application.
In some embodiments, the computer device 1400 may also optionally include: a peripheral interface 1403 and at least one peripheral. The processor 1401, memory 1402, and peripheral interface 1403 may be connected by a bus or signal lines. The individual peripheral devices may be connected to the peripheral device interface 1403 via buses, signal lines or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1404, a touch display screen 1405, a camera 1406, audio circuitry 1407, and a power source 1409.
The peripheral interface 1403 may be used to connect at least one Input/Output (I/O)-related peripheral to the processor 1401 and the memory 1402. In some embodiments, the processor 1401, the memory 1402, and the peripheral interface 1403 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1401, the memory 1402, and the peripheral interface 1403 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1404 is configured to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1404 communicates with a communication network and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission and converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1404 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The radio frequency circuit 1404 may communicate with other terminals via at least one wireless communication protocol, including, but not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1404 may also include NFC (Near Field Communication) related circuits, which is not limited in the present application.
The display screen 1405 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1405 is a touch display screen, the display screen 1405 also has the ability to collect touch signals on or above its surface. A touch signal may be input to the processor 1401 as a control signal for processing; in that case, the display screen 1405 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1405, provided on the front panel of the computer device 1400; in other embodiments, there may be at least two display screens 1405, disposed on different surfaces of the computer device 1400 or in a folded design; in still other embodiments, the display screen 1405 may be a flexible display disposed on a curved surface or a folded surface of the computer device 1400. The display screen 1405 may even be arranged in a non-rectangular irregular pattern, that is, an irregularly shaped screen. The display screen 1405 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1406 is used to capture images or video. Optionally, the camera assembly 1406 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so as to implement a background blurring function by fusing the main camera and the depth-of-field camera, panoramic shooting and VR (Virtual Reality) shooting by fusing the main camera and the wide-angle camera, or other fusion shooting functions. In some embodiments, the camera assembly 1406 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
The audio circuit 1407 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment and convert them into electrical signals, which are input to the processor 1401 for processing or to the radio frequency circuit 1404 for voice communication. For stereo acquisition or noise reduction, there may be multiple microphones, each disposed at a different location of the computer device 1400. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1401 or the radio frequency circuit 1404 into sound waves. The speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert electrical signals not only into sound waves audible to humans but also into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 1407 may also include a headphone jack.
The power supply 1409 is used to power the various components in the computer device 1400. The power supply 1409 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 1409 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the computer device 1400 further includes one or more sensors 1410. The one or more sensors 1410 include, but are not limited to: an acceleration sensor 1411, a gyro sensor 1412, a pressure sensor 1413, an optical sensor 1415, and a proximity sensor 1416.
The acceleration sensor 1411 may detect the magnitudes of acceleration on the three coordinate axes of a coordinate system established with the computer device 1400. For example, the acceleration sensor 1411 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 1401 may control the touch display screen 1405 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1411. The acceleration sensor 1411 may also be used to collect motion data of a game or of the user.
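As an illustration of how such a gravity reading could drive the landscape/portrait decision, here is a minimal Python sketch; the axis convention, threshold-free comparison, and function name are assumptions for the example, not the device's actual logic.

def choose_orientation(gx: float, gy: float) -> str:
    # gx, gy: gravity components (m/s^2) along the device's short and long edges.
    # Held upright, gravity acts mostly along y; turned on its side, it shifts to x.
    return "landscape" if abs(gx) > abs(gy) else "portrait"

print(choose_orientation(gx=9.5, gy=1.2))  # landscape
print(choose_orientation(gx=0.8, gy=9.6))  # portrait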
The gyro sensor 1412 may detect the body direction and rotation angle of the computer device 1400, and may cooperate with the acceleration sensor 1411 to collect the user's 3D actions on the computer device 1400. Based on the data collected by the gyro sensor 1412, the processor 1401 may implement functions such as motion sensing (for example, changing the UI according to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1413 may be disposed on a side frame of the computer device 1400 and/or on an underlayer of the touch display screen 1405. When the pressure sensor 1413 is disposed on a side frame of the computer device 1400, it can detect the user's grip signal on the computer device 1400, and the processor 1401 performs left-right hand recognition or quick operations according to the grip signal collected by the pressure sensor 1413. When the pressure sensor 1413 is disposed on the underlayer of the touch display screen 1405, the processor 1401 controls the operability controls on the UI according to the user's pressure operation on the touch display screen 1405. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 1415 is used to collect the ambient light intensity. In one embodiment, the processor 1401 may control the display brightness of the touch display screen 1405 based on the ambient light intensity collected by the optical sensor 1415: when the ambient light intensity is high, the display brightness of the touch display screen 1405 is turned up; when the ambient light intensity is low, the display brightness of the touch display screen 1405 is turned down. In another embodiment, the processor 1401 may also dynamically adjust the shooting parameters of the camera assembly 1406 based on the ambient light intensity collected by the optical sensor 1415.
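A minimal sketch of such a brightness policy, assuming a simple linear mapping from ambient lux to a normalized brightness level (the constants and function name are illustrative and not taken from the embodiment):

def adjust_brightness(ambient_lux: float,
                      min_level: float = 0.1,
                      max_level: float = 1.0,
                      full_bright_lux: float = 1000.0) -> float:
    # Map an ambient-light reading (lux) to a brightness level in
    # [min_level, max_level] by linear interpolation, clamped at the top.
    ratio = min(ambient_lux / full_bright_lux, 1.0)
    return min_level + ratio * (max_level - min_level)

print(adjust_brightness(50))    # dim room  -> low brightness
print(adjust_brightness(1200))  # sunlight  -> full brightness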
The proximity sensor 1416, also called a distance sensor, is typically disposed on the front panel of the computer device 1400 and is used to collect the distance between the user and the front of the computer device 1400. In one embodiment, when the proximity sensor 1416 detects that the distance between the user and the front of the computer device 1400 gradually decreases, the processor 1401 controls the touch display screen 1405 to switch from the screen-on state to the screen-off state; when the proximity sensor 1416 detects that the distance between the user and the front of the computer device 1400 gradually increases, the processor 1401 controls the touch display screen 1405 to switch from the screen-off state to the screen-on state.
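For illustration, this screen-state logic can be sketched as a pure function over successive distance readings; the 5 cm threshold and the function name are assumptions made for the example only.

def screen_state(prev_distance: float, distance: float,
                 currently_on: bool, near_cm: float = 5.0) -> bool:
    # Return whether the screen should be on, given two successive
    # proximity readings (cm): blank as the user approaches, relight
    # as the user moves away.
    if currently_on and distance < prev_distance and distance < near_cm:
        return False  # face approaching the front panel: turn screen off
    if not currently_on and distance > prev_distance and distance >= near_cm:
        return True   # user moved away: turn screen back on
    return currently_on

state = True
state = screen_state(20.0, 3.0, state)   # call answered, held to the ear
print(state)                             # False
state = screen_state(3.0, 25.0, state)   # phone lowered again
print(state)                             # True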
Those skilled in the art will appreciate that the structure shown in Fig. 14 does not limit the computer device 1400, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, for example, a memory including at least one instruction, at least one program, a code set, or an instruction set, which is executable by a processor to perform all or part of the steps of the methods shown in the embodiments corresponding to Fig. 3 or Fig. 5. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (9)

1. A virtual object control method in a virtual scene, the method being performed by a terminal, the method comprising:
displaying a display interface of an application program, wherein the display interface comprises a scene picture of a virtual scene, and the scene picture comprises a first virtual object; the first virtual object is at least one virtual object other than a second virtual object controlled by the terminal;
displaying, in the display interface, one first trigger control corresponding to each candidate virtual object in the first virtual object; the candidate virtual objects are all or some of the first virtual object;
determining a target virtual object among the candidate virtual objects according to trigger operations on the first trigger controls respectively displayed for the candidate virtual objects; the target virtual object comprises at least two virtual objects, and the at least two virtual objects are the candidate virtual objects respectively corresponding to at least two first trigger controls that have received a trigger operation;
and controlling the second virtual object to move to the specified relative positions of the at least two virtual objects according to the order in which the first trigger controls corresponding to the at least two virtual objects were triggered.
2. The method according to claim 1, wherein displaying, in the display interface, one first trigger control corresponding to each candidate virtual object in the first virtual object comprises:
displaying a second trigger control superimposed in the display interface;
and when a trigger operation on the second trigger control is received, displaying, in the display interface, one first trigger control corresponding to each candidate virtual object.
3. The method of claim 2, wherein displaying the second trigger control superimposed in the display interface comprises:
displaying the second trigger control superimposed in the display interface when the second virtual object possesses a specified virtual prop.
4. The method of claim 1, wherein before displaying, in the display interface, one first trigger control corresponding to each candidate virtual object in the first virtual object, the method further comprises:
determining, in the first virtual object, the virtual objects whose distance from the second virtual object is smaller than a specified distance threshold as the candidate virtual objects.
5. The method of claim 1, wherein after controlling the second virtual object to automatically move to the specified relative position of the target virtual object, the method further comprises:
if the number of times the second virtual object has moved does not reach a count threshold, determining a new target virtual object, the new target virtual object being a virtual object within a preset range around the second virtual object;
and controlling the second virtual object to automatically move to the specified relative position of the new target virtual object.
6. The method according to any one of claims 1 to 5, further comprising:
and when the second virtual object is controlled to automatically move to the specified relative position of the target virtual object, controlling the second virtual object to perform a specified operation on the target virtual object.
7. A virtual object control apparatus in a virtual scene, the apparatus being used in a terminal, the apparatus comprising:
an interface display module, used for displaying a display interface of an application program, wherein the display interface comprises a scene picture of a virtual scene, and the scene picture comprises a first virtual object; the first virtual object is at least one virtual object other than a second virtual object controlled by the terminal;
a control display module, used for displaying, in the display interface, one first trigger control corresponding to each candidate virtual object in the first virtual object; the candidate virtual objects are all or some of the first virtual object;
a target determining module, used for determining a target virtual object among the candidate virtual objects according to trigger operations on the first trigger controls respectively displayed for the candidate virtual objects; the target virtual object comprises at least two virtual objects, and the at least two virtual objects are the candidate virtual objects respectively corresponding to at least two first trigger controls that have received a trigger operation;
and an object moving module, used for controlling the second virtual object to move to the specified relative positions of the at least two virtual objects according to the order in which the first trigger controls corresponding to the at least two virtual objects were triggered.
8. A computer device comprising a processor and a memory, wherein the memory has stored therein at least one program that is loaded and executed by the processor to implement the virtual object control method in a virtual scene as claimed in any one of claims 1 to 6.
9. A computer-readable storage medium, wherein at least one program is stored in the storage medium, and the at least one program is loaded and executed by a processor to implement the virtual object control method in a virtual scene as claimed in any one of claims 1 to 6.
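Taken together, method claims 1 to 6 describe a control loop: filter candidates by distance (claim 4), visit triggered targets in trigger order (claim 1), perform a specified operation on arrival (claim 6), and keep selecting new nearby targets until a movement-count threshold is reached (claim 5). The following Python sketch illustrates that loop only; it is not part of the claims, and all names and constants are hypothetical.

import math
from dataclasses import dataclass

@dataclass
class Obj:
    name: str
    x: float
    y: float

def dist(a: Obj, b: Obj) -> float:
    return math.hypot(a.x - b.x, a.y - b.y)

# Claim 4: candidates are the first virtual objects closer than a threshold.
def candidates(first_objs, me, threshold=30.0):
    return [o for o in first_objs if dist(me, o) < threshold]

# Claims 1, 5, and 6: visit triggered targets in trigger order, perform the
# specified operation on arrival, and keep appending new nearby targets
# until the movement-count threshold is reached.
def run(me, triggered, nearby, count_threshold=3):
    moves = 0
    queue = list(triggered)
    while queue and moves < count_threshold:
        t = queue.pop(0)
        me.x, me.y = t.x + 1.0, t.y           # specified relative position
        print(f"{me.name}: reached {t.name}, performing specified operation")
        moves += 1
        if not queue and nearby:
            queue.append(nearby.pop(0))       # claim 5: new target in range

me = Obj("second", 0, 0)
others = [Obj("a", 5, 0), Obj("b", 12, 3), Obj("c", 50, 50)]
cands = candidates(others, me)                # c is filtered out by distance
run(me, triggered=[cands[1], cands[0]], nearby=[])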
CN201910999297.XA 2019-10-21 2019-10-21 Virtual object control method in virtual scene, computer equipment and storage medium Active CN110743168B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910999297.XA CN110743168B (en) 2019-10-21 2019-10-21 Virtual object control method in virtual scene, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910999297.XA CN110743168B (en) 2019-10-21 2019-10-21 Virtual object control method in virtual scene, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110743168A CN110743168A (en) 2020-02-04
CN110743168B true CN110743168B (en) 2023-10-20

Family

ID=69279053

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910999297.XA Active CN110743168B (en) 2019-10-21 2019-10-21 Virtual object control method in virtual scene, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110743168B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112181530A (en) * 2020-03-10 2021-01-05 简吉波 Virtual reality scene loading method based on cloud platform and virtual reality system
CN111589128B (en) * 2020-04-23 2022-02-18 腾讯科技(深圳)有限公司 Operation control display method and device based on virtual scene
CN111589126B (en) * 2020-04-23 2023-07-04 腾讯科技(深圳)有限公司 Virtual object control method, device, equipment and storage medium
CN111744184B (en) * 2020-07-28 2023-08-22 腾讯科技(深圳)有限公司 Control showing method in virtual scene, computer equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106598438A (en) * 2016-12-22 2017-04-26 腾讯科技(深圳)有限公司 Scene switching method based on mobile terminal, and mobile terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8556720B2 (en) * 2008-01-14 2013-10-15 Disney Enterprises, Inc. System and method for touchscreen video game combat

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106598438A (en) * 2016-12-22 2017-04-26 腾讯科技(深圳)有限公司 Scene switching method based on mobile terminal, and mobile terminal

Also Published As

Publication number Publication date
CN110743168A (en) 2020-02-04

Similar Documents

Publication Publication Date Title
US11224810B2 (en) Method and terminal for displaying distance information in virtual scene
CN110743168B (en) Virtual object control method in virtual scene, computer equipment and storage medium
CN109712224B (en) Virtual scene rendering method and device and intelligent device
CN108710525B (en) Map display method, device, equipment and storage medium in virtual scene
CN109917910B (en) Method, device and equipment for displaying linear skills and storage medium
CN111589125B (en) Virtual object control method and device, computer equipment and storage medium
US20220164159A1 (en) Method for playing audio, terminal and computer-readable storage medium
US11845007B2 (en) Perspective rotation method and apparatus, device, and storage medium
CN111589132A (en) Virtual item display method, computer equipment and storage medium
US11878240B2 (en) Method, apparatus, device, and storage medium for perspective rotation
US11675488B2 (en) Method and apparatus for constructing building in virtual environment, device, and storage medium
CN111013137B (en) Movement control method, device, equipment and storage medium in virtual scene
CN110738738B (en) Virtual object marking method, equipment and storage medium in three-dimensional virtual scene
CN110052030B (en) Image setting method and device of virtual character and storage medium
CN111389015A (en) Method and device for determining game props and storage medium
CN111013136A (en) Movement control method, device, equipment and storage medium in virtual scene
CN112274936B (en) Method, device, equipment and storage medium for supplementing sub-props of virtual props
CN112121438B (en) Operation prompting method, device, terminal and storage medium
CN110597389B (en) Virtual object control method in virtual scene, computer device and storage medium
CN110992268B (en) Background setting method, device, terminal and storage medium
CN113559494B (en) Virtual prop display method, device, terminal and storage medium
CN113509729B (en) Virtual prop control method and device, computer equipment and storage medium
CN116954356A (en) Interaction method, device, equipment and storage medium based on gestures
CN114377395A (en) Virtual carrier and virtual object control method, device, equipment and medium
CN113633976A (en) Operation control method, device, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code; Ref country code: HK; Ref legal event code: DE; Ref document number: 40021548; Country of ref document: HK
SE01 Entry into force of request for substantive examination
GR01 Patent grant