CN116617652A - Game control method, apparatus, device, storage medium, and computer program product - Google Patents


Info

Publication number
CN116617652A
CN116617652A
Authority
CN
China
Prior art keywords
target
character
role
virtual
touch point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310427922.XA
Other languages
Chinese (zh)
Inventor
王辰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202310427922.XA priority Critical patent/CN116617652A/en
Publication of CN116617652A publication Critical patent/CN116617652A/en
Pending legal-status Critical Current


Classifications

    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 — Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/426 — Processing input control signals of video game devices by mapping the input signals into game commands, involving on-screen location information, e.g. screen coordinates of an area at which the player is aiming with a light gun
    • A63F 13/5372 — Controlling the output signals based on the game progress involving additional visual information provided to the game scene, using indicators for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
    • A63F 13/822 — Special adaptations for executing a specific game genre or game mode: strategy games; role-playing games
    • A63F 2300/306 — Features of games using an electronically generated display, characterized by output arrangements for displaying additional data, e.g. a marker associated to an object or location in the game field
    • A63F 2300/807 — Features of games using an electronically generated display, specially adapted for executing a specific type of game: role playing or strategy games

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a game control method, apparatus, device, storage medium, and computer program product. A graphical user interface is provided through a terminal device, wherein a virtual scene and a target skill control are displayed in the graphical user interface, and the virtual scene includes a target virtual character. The method includes the following steps: in response to receiving a first trigger instruction for the target skill control, determining whether a touch point corresponding to the first trigger instruction maintains a target state in a monitoring area for a preset time; in response to determining that the touch point maintains the target state in the monitoring area for the preset time, displaying a character identifier corresponding to the target virtual character at a target position corresponding to the touch point, wherein the target virtual character is a virtual character determined in the virtual scene according to the target position; and in response to receiving a second trigger instruction for the target character identifier, releasing the target skill corresponding to the target skill control on the target virtual character corresponding to the target character identifier.

Description

Game control method, apparatus, device, storage medium, and computer program product
Technical Field
The present application relates to the field of computer technology, and in particular, to a game control method, apparatus, device, storage medium, and computer program product.
Background
In the related art, when a player selects a skill in a game scene and then taps the target character on which the skill is to be released, the large number of characters or the overlap between character models may prevent the target character from being selected accurately.
Disclosure of Invention
In view of the above, the present application aims to provide a game control method, apparatus, device, storage medium, and computer program product.
Based on the above object, in a first aspect, the present application provides a game control method, wherein a graphical user interface is provided through a terminal device, a virtual scene and a target skill control are displayed in the graphical user interface, and the virtual scene includes a target virtual character; the method comprises the following steps:
in response to receiving a first trigger instruction for the target skill control, determining whether a touch point corresponding to the first trigger instruction maintains a target state in a monitoring area for a preset time;
in response to determining that the touch point maintains the target state in the monitoring area for the preset time, displaying a character identifier corresponding to the target virtual character at a target position corresponding to the touch point, wherein the target virtual character is a virtual character determined in the virtual scene according to the target position; and
in response to receiving a second trigger instruction for the target character identifier, releasing the target skill corresponding to the target skill control on the target virtual character corresponding to the target character identifier.
In a second aspect, the present application provides a game control apparatus, wherein a graphical user interface is provided through a terminal device, a virtual scene and a target skill control are displayed in the graphical user interface, and the virtual scene includes a target virtual character; the apparatus comprises:
a first determining module configured to, in response to receiving a first trigger instruction for the target skill control, determine whether a touch point corresponding to the first trigger instruction maintains a target state in a monitoring area for a preset time;
a second determining module configured to, in response to determining that the touch point maintains the target state in the monitoring area for the preset time, display a character identifier corresponding to the target virtual character at a target position corresponding to the touch point, wherein the target virtual character is a virtual character determined in the virtual scene according to the target position; and
a control module configured to, in response to receiving a second trigger instruction for the target character identifier, release the target skill corresponding to the target skill control on the target virtual character corresponding to the target character identifier.
In a third aspect, the present application provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the game control method according to the first aspect when executing the program.
In a fourth aspect, the present application provides a computer-readable storage medium storing computer instructions for causing a computer to execute the game control method according to the first aspect.
In a fifth aspect, the present application provides a computer program product comprising computer program instructions which, when run on a computer, cause the computer to perform the game control method according to the first aspect.
As can be seen from the foregoing, the game control method, apparatus, device, storage medium, and computer program product provided by the present application provide a graphical user interface through a terminal device, where a virtual scene and a target skill control are displayed in the graphical user interface and the virtual scene includes a target virtual character. When a first trigger instruction for the target skill control is received, a touch point corresponding to the first trigger instruction may be determined, and it is determined whether the duration for which the touch point maintains a target state in a monitoring area reaches a preset time. Further, when it is determined that this duration reaches the preset time, a character identifier corresponding to the target virtual character can be displayed at a target position corresponding to the touch point, wherein the target virtual character is a virtual character determined in the virtual scene according to the target position. Still further, when a second trigger instruction for the target character identifier is received, the target skill corresponding to the target skill control may be released on the target virtual character corresponding to the target character identifier.
By monitoring, within the monitoring area, the state of the touch point corresponding to the first trigger instruction for the target skill control, and by generating and displaying the character identifiers of the selectable target virtual characters once the touch point has maintained the target state in the monitoring area for the preset time, selection via the character identifiers effectively avoids the difficulty of selecting characters caused by a large number of characters or mutually overlapping character models, thereby achieving accurate selection of the skill release object.
Drawings
In order to more clearly illustrate the technical solutions of the present application or the related art, the drawings required for describing the embodiments or the related art are briefly introduced below. It is apparent that the drawings in the following description show only embodiments of the present application, and that those of ordinary skill in the art may obtain other drawings from them without inventive effort.
FIG. 1 shows a schematic diagram of overlapping game character models in the related art.
Fig. 2 is a schematic flow chart of an exemplary game control method according to an embodiment of the present application.
Fig. 3 shows an exemplary schematic diagram of a monitoring area according to an embodiment of the present application.
Fig. 4 shows an exemplary schematic diagram of a scenario of selecting character identifiers according to an embodiment of the present application.
Fig. 5 is a schematic diagram showing an exemplary configuration of a game control apparatus according to an embodiment of the present application.
Fig. 6 shows an exemplary structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application more apparent, the present application is further described in detail below with reference to specific embodiments and the accompanying drawings.
It should be noted that unless otherwise defined, technical or scientific terms used in the embodiments of the present application shall have the ordinary meaning as understood by one of ordinary skill in the art to which the present application belongs. The terms "first," "second," and the like, as used in embodiments of the present application, do not denote any order, quantity, or importance, but are used only to distinguish one element from another. The word "comprising" or "comprises", and the like, means that the element or item preceding the word encompasses the elements or items listed after the word and equivalents thereof, without excluding other elements or items. The terms "connected" or "coupled," and the like, are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, which may change when the absolute position of the described object changes.
As described in the Background section, when a player selects a skill in a game scene and then taps the target character on which the skill is to be released, the large number of characters or the overlap between character models may prevent the target character from being selected accurately.
FIG. 1 shows a schematic diagram of a related art game character model overlapping each other.
According to the inventor's research, in the related art, referring to FIG. 1, a scene may include character A, character B, and character C. When a player selects a skill release target, a halo representing the selected state (for example, the dotted circle corresponding to character C in FIG. 1) appears around the selected object model when the finger contacts the screen. However, when multiple models occlude one another severely, this still cannot help the player quickly and accurately judge which release target is currently selected. Therefore, in the related art, while the player selects a target with a finger, the finger's own occlusion makes it impossible to determine which target is selected, resulting in misoperation when selecting the object of a skill release.
To this end, the present application provides a game control method, apparatus, device, storage medium, and computer program product. A graphical user interface is provided through a terminal device, in which a virtual scene including a target virtual character and a target skill control are displayed. When a first trigger instruction for the target skill control is received, a touch point corresponding to the first trigger instruction may be determined, and it is determined whether the duration for which the touch point maintains a target state in a monitoring area reaches a preset time. Further, when it is determined that this duration reaches the preset time, a character identifier corresponding to the target virtual character can be displayed at a target position corresponding to the touch point, wherein the target virtual character is a virtual character determined in the virtual scene according to the target position. Still further, when a second trigger instruction for the target character identifier is received, the target skill corresponding to the target skill control may be released on the target virtual character corresponding to the target character identifier. By monitoring, within the monitoring area, the state of the touch point corresponding to the first trigger instruction for the target skill control, and by generating and displaying the character identifiers of the selectable target virtual characters once the touch point has maintained the target state in the monitoring area for the preset time, selection via the character identifiers effectively avoids the difficulty of selecting characters caused by a large number of characters or mutually overlapping character models, thereby achieving accurate selection of the skill release object.
The game control method provided by the embodiments of the present application is described in detail below through specific embodiments.
Fig. 2 is a schematic flow chart of an exemplary game control method according to an embodiment of the present application.
Referring to fig. 2, in the game control method provided by the embodiments of the present application, a graphical user interface may be provided through a terminal device, wherein a virtual scene and a target skill control are displayed in the graphical user interface, and the virtual scene includes a target virtual character; the method specifically includes the following steps:
s202: and in response to receiving a first trigger instruction for the target skill control, determining whether a touch point corresponding to the first trigger instruction keeps a target state in a monitoring area for a preset time.
S204: and in response to determining that the touch point keeps a target state in the monitoring area for a preset time, displaying a role identifier corresponding to the target virtual role at a target position corresponding to the touch point, wherein the target virtual role is a virtual role determined in the virtual scene according to the target position.
S206: and responding to receiving a second trigger instruction aiming at the target character identification, and releasing target skills corresponding to the target skill control to a target virtual character corresponding to the target character identification.
Fig. 3 shows an exemplary schematic diagram of a monitoring area according to an embodiment of the present application.
In some embodiments, referring to fig. 3, a monitoring area may be created in the area corresponding to the target virtual character so that the monitoring area covers the target virtual character. Specifically, whether a target virtual character exists in the current virtual scene may be monitored; if one exists, a monitoring area may be created to cover it, wherein the monitoring area is used to monitor the current state of the touch point. For example, touch events within the monitoring area that affect a touch operation on the target virtual character may be detected.
Further, the virtual character currently being controlled, i.e., the virtual character to which the target skill control belongs, may be determined. When creating the monitoring area, the target virtual character may be a friendly character of the current virtual character; for example, a virtual character belonging to the same camp as the current virtual character may be determined as a first target virtual character. The target virtual character may also be a hostile character of the current virtual character; for example, a virtual character belonging to a different camp from the current virtual character may be determined as a second target virtual character.
Still further, a single monitoring area may be created that covers both the first target virtual character and the second target virtual character. Alternatively, to make the monitoring of target virtual characters of different camps more targeted, a first monitoring area may be created in the area containing the first target virtual character so that it covers the first target virtual character, and, similarly, a second monitoring area may be created in the area containing the second target virtual character so that it covers the second target virtual character. Touch events made for the first target virtual character are then monitored in the first monitoring area, and touch events made for the second target virtual character are monitored in the second monitoring area.
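One way to realize per-camp monitoring areas, sketched as a padded axis-aligned bounding box per camp (purely illustrative; the patent does not specify the area's shape, and all names are assumptions):

```python
def bounding_area(characters, padding=30.0):
    """Smallest axis-aligned box covering all character positions,
    grown by `padding` so the area fully covers the models."""
    xs = [c[0] for c in characters]
    ys = [c[1] for c in characters]
    return (min(xs) - padding, min(ys) - padding,
            max(xs) + padding, max(ys) + padding)

def contains(area, point):
    """True when the touch point lies inside the monitoring area."""
    x0, y0, x1, y1 = area
    px, py = point
    return x0 <= px <= x1 and y0 <= py <= y1

friendly = [(100, 100), (140, 120)]   # first target virtual characters
enemy = [(400, 300), (420, 340)]      # second target virtual characters
first_area = bounding_area(friendly)  # monitors touches on friendly characters
second_area = bounding_area(enemy)    # monitors touches on enemy characters

print(contains(first_area, (120, 110)))   # True
print(contains(second_area, (120, 110)))  # False
```

Separate areas mean a touch event can be routed to the correct camp's listener without first resolving which character model was hit.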
In some embodiments, when a first trigger instruction for the target skill control is received, the monitoring area may be activated to enter a monitoring state. Further, the maximum instantaneous speed of the touch point corresponding to the first trigger instruction within a preset time period in the monitoring area may be determined, and it may be determined whether the maximum instantaneous speed exceeds a preset speed threshold. The first trigger instruction may include a key instruction signal generated by performing a first trigger operation on a target key of a target peripheral, where the target key may be a single key or a combination of multiple keys on the target peripheral. For example, a key instruction signal may be generated by striking the "K" key on a keyboard; alternatively, by simultaneously pressing the "X" and "Y" keys on a gamepad; or, alternatively, by moving a mouse to hover over the target skill control and clicking. When the first trigger operation is performed on the target skill control through the mouse, the specific operation includes, but is not limited to, a single-click operation, a double-click operation, a long-press operation, and the like.
Further, if it is determined that the maximum instantaneous speed does not exceed the preset speed threshold within the preset time, it is determined that the touch point maintains the target state in the monitoring area for the preset time. The target state may be a hovering state, that is, the touch point maintains the hovering state in the monitoring area for the preset time. It should be noted that if the maximum instantaneous speed does not exceed the preset speed threshold within the preset time, the touch point may be determined to be relatively stationary with respect to the monitoring area during that time; it can therefore be determined that the touch point has remained hovering in the monitoring area for the preset time, that is, that the touch point has maintained the target state in the monitoring area for the preset time.
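The speed-based hover test above can be sketched as follows, given timestamped touch samples (a minimal illustration; the sample format, threshold, and function names are assumptions, not taken from the patent):

```python
def max_instantaneous_speed(samples):
    """Largest speed between consecutive (t, x, y) touch samples."""
    best = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        best = max(best, dist / dt)
    return best

def is_hovering(samples, preset_time=0.5, speed_threshold=40.0):
    """Target state held: the touch lasted at least `preset_time` seconds
    and the max instantaneous speed never exceeded the threshold."""
    if len(samples) < 2:
        return False
    duration = samples[-1][0] - samples[0][0]
    return (duration >= preset_time
            and max_instantaneous_speed(samples) <= speed_threshold)

still = [(0.0, 10, 10), (0.3, 12, 10), (0.6, 13, 11)]   # small drift: hovering
swipe = [(0.0, 10, 10), (0.1, 80, 10), (0.2, 150, 10)]  # fast move: not hovering
print(is_hovering(still))   # True
print(is_hovering(swipe))   # False
```

Using the *maximum* instantaneous speed rather than the average means a single fast flick anywhere in the window disqualifies the hover, which matches the "does not exceed the threshold within the preset time" condition.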
Still further, when it is determined that the touch point maintains the target state in the monitoring area for the preset time, a character identifier corresponding to the target virtual character may be generated, and the character identifier may be displayed at the target position corresponding to the touch point.
In some embodiments, if it is determined that the touch point does not maintain the target state in the monitoring area for the preset time, it may further be determined whether the touch point remains within the monitoring area for the duration of the first trigger instruction, that is, whether the touch point has left the monitoring area. When it is determined that the touch point remains within the monitoring area for the duration of the first trigger instruction, a trigger area is displayed at the position corresponding to each virtual character in the monitoring area; for example, a ring is displayed under each virtual character's feet in the monitoring area, the ring being that character's trigger area. When the touch point is located in any trigger area, it can be determined that the target skill corresponding to the target skill control is to be released on the virtual character corresponding to that trigger area. The trigger area may be an area enclosed by a circle, a square, or any polygon centered on the centroid of the virtual character.
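For the circular form of the trigger area, the hit test reduces to a point-in-circle check (an illustrative sketch; the radius and names are assumptions):

```python
import math

def in_trigger_area(touch, character_pos, ring_radius=50.0):
    """True when the touch point falls inside the circular trigger area
    (the ring displayed under a character's feet)."""
    return math.dist(touch, character_pos) <= ring_radius

rings = {"A": (100, 100), "B": (300, 100)}  # character -> ring center
touch = (110, 95)
hit = [name for name, pos in rings.items() if in_trigger_area(touch, pos)]
print(hit)   # ['A']
```

A square or polygonal trigger area would only change this predicate; the routing from trigger area to skill-release target stays the same.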
Still further, if it is determined that the touch point has left the monitoring area during the duration of the first trigger instruction, a first point in time at which the touch point left the monitoring area may be determined, and the first trigger instruction may be terminated at that first point in time.
Fig. 4 shows an exemplary schematic diagram of a scenario of selecting character identifiers according to an embodiment of the present application.
Specifically, referring to fig. 4, the character identifiers corresponding to the target virtual characters may be determined; for example, when there are multiple target virtual characters, the character identifiers may include a first character identifier, a second character identifier, a third character identifier, and so on. Further, the relative distance between each target virtual character and the touch point may be determined; for example, target virtual character A is closest to the touch point, target virtual character B is second closest, and target virtual character C is farthest. The character identifiers may then be displayed sequentially at the target position corresponding to the touch point in order of relative distance from near to far; for example, the first character identifier corresponding to target virtual character A, the second character identifier corresponding to target virtual character B, and the third character identifier corresponding to target virtual character C are displayed in sequence at the target position.
The arrangement order of the character identifiers at the target position is not limited to near-to-far; it may also be far-to-near. Preferably, the character identifiers are arranged in order of relative distance from near to far, so that the character identifier of the target virtual character closest to the touch point appears first, which is convenient for selection.
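The near-to-far ordering described above is a plain sort on Euclidean distance (an illustrative sketch; the data layout and names are assumptions):

```python
import math

def order_ids_by_distance(characters, touch):
    """Character identifiers ordered nearest-first relative to the touch
    point. `characters` maps identifier -> (x, y) position."""
    return sorted(characters, key=lambda cid: math.dist(characters[cid], touch))

chars = {"A": (10, 10), "B": (60, 60), "C": (200, 200)}
print(order_ids_by_distance(chars, (0, 0)))   # ['A', 'B', 'C']
```

Reversing the sort (far-to-near) covers the alternative ordering the paragraph mentions.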
In some embodiments, the display position of the character identifiers may also be determined based on the current health (HP) information of the target virtual characters. For example, the character identifier corresponding to each target virtual character and its corresponding HP information may be determined, and the character identifiers may then be displayed sequentially at the target position corresponding to the touch point in order of HP from low to high. For example, the current HP of target virtual character A is lowest, that of target virtual character B is second lowest, and that of target virtual character C is highest. The character identifiers may be displayed sequentially at the target position in order of HP from low to high; for example, the first character identifier corresponding to target virtual character A, the second character identifier corresponding to target virtual character B, and the third character identifier corresponding to target virtual character C are displayed in sequence at the target position.
The arrangement order of the character identifiers at the target position is not limited to HP from low to high; it may also be from high to low. Preferably, the low-to-high order is adopted, because when a player operates the target skill control and the target of the released skill is an enemy character, the enemy character with the lowest remaining HP can be finished off first, inflicting casualties on the enemy camp and improving the player's side's chances of winning.
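The HP-based ordering is the same sort with a different key (an illustrative sketch; the mapping and names are assumptions):

```python
def order_ids_by_hp(characters):
    """Character identifiers ordered lowest HP first, so the most
    finishable enemy target appears at the front of the list."""
    return sorted(characters, key=characters.get)

hp = {"A": 120, "B": 430, "C": 75}   # identifier -> current HP
print(order_ids_by_hp(hp))           # ['C', 'A', 'B']
```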
In some embodiments, upon receiving the second trigger instruction for the target character identifier, it may be determined whether the target skill corresponding to the target skill control is a gain-type (buff) skill. The second trigger instruction may include a key instruction signal generated by performing a second trigger operation on a target key of the target peripheral, where the target key may be a single key or a combination of multiple keys on the target peripheral. For example, a key instruction signal may be generated by striking the "K" key on a keyboard; alternatively, by simultaneously pressing the "X" and "Y" keys on a gamepad; or, alternatively, by moving a mouse to hover over the target character identifier and clicking. When the second trigger operation is performed on the target character identifier through the mouse, the specific operation includes, but is not limited to, a single-click operation, a double-click operation, a long-press operation, and the like.
Further, if the target skill is a gain-type skill, it may be released to a target virtual character of the friendly camp, so it may be determined whether the target virtual character corresponding to the target character identifier is a first target virtual character, where the first target virtual character is a target virtual character belonging to the same camp as the virtual character currently releasing the skill. When the target virtual character corresponding to the target character identifier is determined to be the first target virtual character, that is, the target virtual character selected through the second trigger instruction is a virtual character of the friendly camp, the target skill can be released to that target virtual character and a gain effect applied to it.
In some embodiments, upon receiving a second trigger instruction for the target character identifier, it may be determined whether the target skill corresponding to the target skill control is a reduction-type skill. If the target skill is a reduction-type skill, it may be released to a target virtual character of an enemy camp, so it may be determined whether the target virtual character corresponding to the target character identifier is a second target virtual character, where the second target virtual character is a target virtual character belonging to a different camp from the virtual character currently releasing the skill. When the target virtual character corresponding to the target character identifier is determined to be the second target virtual character, that is, when the target virtual character selected through the second trigger instruction is a virtual character of the enemy camp, the target skill can be released and a reduction effect exerted on that target virtual character.
When a gain-type skill would be released to the second target virtual character, or a reduction-type skill to the first target virtual character, a prompt message may be generated to inform the player that the skill cannot be released to that target virtual character, and the release of the target skill is prohibited.
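As an illustrative sketch of the camp check described above (the function and string values are hypothetical, not from the specification), the decision of whether a skill may be released to the selected target could look like this:

```python
# Hypothetical sketch of the camp/skill-type validation: gain-type skills
# may only target the friendly camp, reduction-type skills only the enemy
# camp; any other combination prompts the player and prohibits the release.
def try_release(skill_type, caster_camp, target_camp):
    same_camp = caster_camp == target_camp
    if skill_type == "gain" and same_camp:
        return "release"          # buff a first (friendly) target character
    if skill_type == "reduction" and not same_camp:
        return "release"          # debuff a second (enemy) target character
    return "prompt_and_prohibit"  # invalid target: notify the player

print(try_release("gain", "blue", "blue"))  # → release
print(try_release("gain", "blue", "red"))   # → prompt_and_prohibit
```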
In some embodiments, when there are at least two target skill release targets, the same number of target virtual characters as release targets can be selected through the second trigger instruction, and the target skill is then released to the selected target virtual characters.
In some embodiments, when it is determined that no second trigger instruction for the target character identifier has been received within the target time, the character identifiers may be hidden, that is, the listening state may be restored; when a first trigger instruction for the target skill control is received again, steps S202, S204, and S206 of the present application are re-executed.
In some embodiments, a terminating interaction control may further be provided in the graphical user interface. When a third trigger instruction for the terminating interaction control is received, the trigger operation on the target skill control may be terminated and the character identifiers hidden, that is, the listening state may be restored; when a first trigger instruction for the target skill control is received again, steps S204, S206, and S208 of the present application are re-executed. The third trigger instruction may include a key instruction signal generated by performing a third trigger operation on a target key of the target peripheral, where the target key may be one key or a combination of multiple keys on the target peripheral. For example, the key instruction signal may be generated by striking the "K" key on a keyboard; alternatively, by simultaneously pressing the "X" and "Y" keys on a gamepad; or by moving a mouse to hover over the terminating interaction control and clicking. When the third trigger operation is performed on the terminating interaction control with a mouse, the specific operation includes, but is not limited to, a single-click operation, a double-click operation, a long-press operation, and the like.
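The timeout-hiding and termination behaviors described above can be sketched as a small state holder (all names and the 3-second value are hypothetical; the specification only requires that the identifiers be hidden and the listening state restored):

```python
import time

class IdentifierPanel:
    """Hypothetical sketch: character identifiers are shown on a qualifying
    press, then hidden again (restoring the listening state) if no second
    trigger instruction arrives within `timeout` seconds, or immediately if
    the terminating interaction control receives a third trigger instruction."""

    def __init__(self, timeout=3.0):
        self.timeout = timeout
        self.visible = False
        self.shown_at = None

    def show(self, now=None):
        self.visible = True
        self.shown_at = time.monotonic() if now is None else now

    def tick(self, now):
        # Hide once the target time elapses without a second trigger.
        if self.visible and now - self.shown_at > self.timeout:
            self.visible = False

    def terminate(self):
        # Third trigger instruction on the terminating interaction control.
        self.visible = False

panel = IdentifierPanel(timeout=3.0)
panel.show(now=0.0)
panel.tick(now=2.0)   # within the target time: still visible
panel.tick(now=3.5)   # target time exceeded: hidden, listening restored
```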
As can be seen from the foregoing, the game control method, apparatus, device, storage medium, and computer program product provided by the present application provide a graphical user interface through a terminal device, where a virtual scene and a target skill control are displayed in the graphical user interface and the virtual scene includes a target virtual character. When a first trigger instruction for the target skill control is received, a touch point corresponding to the first trigger instruction may be determined, and it is determined whether the duration for which the touch point maintains the target state in the listening area reaches a preset time. Further, when it is determined that this duration reaches the preset time, a character identifier corresponding to a target virtual character can be generated and displayed at a target position corresponding to the touch point, where the target virtual character is a virtual character determined in the virtual scene according to the target position. Still further, when a second trigger instruction for the target character identifier is received, the target skill corresponding to the target skill control may be released to the target virtual character corresponding to the target character identifier. By monitoring, in the listening area, the state of the touch point corresponding to the first trigger instruction for the target skill control, displaying the character identifiers of the selectable target virtual characters once the touch point has maintained the target state for the preset time, and selecting the skill release object via its character identifier, the difficulty of selecting a character when the number of characters is large or the character models overlap one another is effectively avoided, achieving accurate selection of the skill release object.
It should be noted that the method of the embodiments of the present application may be performed by a single device, such as a computer or a server. The method of the embodiments may also be applied in a distributed scenario, completed by a plurality of devices cooperating with one another. In such a distributed scenario, one of the devices may perform only one or more steps of the method of an embodiment of the present application, with the devices interacting with each other to complete the method.
It should be noted that the foregoing describes some embodiments of the present application. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments described above and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Fig. 5 is a schematic diagram showing an exemplary configuration of a game control apparatus according to an embodiment of the present application.
Based on the same inventive concept, the application also provides a game control device corresponding to the method of any embodiment.
Referring to fig. 5, the game control apparatus provides a graphical user interface through a terminal device, wherein a virtual scene and a target skill control are displayed in the graphical user interface, and the virtual scene comprises a target virtual character. The apparatus comprises a first determining module, a second determining module, and a control module; wherein:
the first determining module is configured to determine whether a touch point corresponding to a first trigger instruction keeps a target state in a monitoring area for a preset time or not in response to receiving the first trigger instruction for the target skill control;
the second determining module is configured to display a role identifier corresponding to the target virtual role at a target position corresponding to the touch point in response to determining that the touch point keeps a target state in the monitoring area for a preset time, wherein the target virtual role is a virtual role determined in the virtual scene according to the target position;
and the control module is configured to respond to receiving a second trigger instruction aiming at the target role identification, and then release the target skills corresponding to the target skill control to the target virtual roles corresponding to the target role identification.
In one possible implementation manner, the apparatus further includes: creating a module;
the creation module is configured to:
detecting whether the target virtual role exists in the current virtual scene;
if the target virtual role exists in the current virtual scene, a monitoring area is created to cover the target virtual role; the monitoring area is used for monitoring the current state of the touch point.
In one possible implementation, the first determining module is further configured to:
in response to determining that the touch point has not maintained the target state in the monitoring area for the preset time, determining whether the touch point remains within the monitoring area for the duration of the first trigger instruction;
responding to the fact that the touch point is always located in the monitoring area within the duration time of the first trigger instruction, and displaying a trigger area at a position corresponding to the virtual character in the monitoring area;
and responding to the touch point being in any triggering area, releasing the target skills corresponding to the target skill control to the virtual roles corresponding to the triggering area.
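The fallback path above (hold time not reached, but the touch point stays inside the monitoring area, so per-character trigger areas are shown and the release lands on the area containing the touch point) can be sketched as follows; the circular-area geometry and all names are hypothetical, since the specification does not fix the shape of a trigger area:

```python
# Hypothetical sketch: map a touch point to the virtual character whose
# trigger area contains it, or to no character at all.
def pick_character_by_trigger_area(touch, trigger_areas):
    """touch: (x, y) touch-point coordinates.
    trigger_areas: {character_name: (cx, cy, radius)} circular areas.
    Returns the name of the character whose trigger area contains the
    touch point, or None if the touch point is in no trigger area."""
    x, y = touch
    for name, (cx, cy, r) in trigger_areas.items():
        if (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
            return name
    return None

areas = {"A": (100, 100, 30), "B": (200, 100, 30)}
print(pick_character_by_trigger_area((110, 95), areas))   # → A
print(pick_character_by_trigger_area((300, 300), areas))  # → None
```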
In one possible implementation, the first determining module is further configured to:
Determining a first time point when the touch point leaves the monitoring area within the duration time of the first trigger instruction, and terminating the first trigger instruction at the first time point.
In one possible implementation, the first determining module is further configured to:
in response to receiving a first trigger instruction for the target skill control, determining a maximum instantaneous speed of a touch point corresponding to the first trigger instruction in the monitoring area within the preset time, and determining whether the maximum instantaneous speed exceeds a preset speed; the first triggering instruction comprises a key instruction signal generated by performing a first triggering operation on a target key of a target peripheral; the target key is one key or a combination of a plurality of keys on the target peripheral;
and if the maximum instantaneous speed does not exceed the preset speed within the preset time, determining that the target state of the touch point in the monitoring area is kept for the preset time.
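The maximum-instantaneous-speed test above can be sketched by sampling the touch point over the preset time window and comparing consecutive samples; the sampling format and thresholds below are hypothetical, not prescribed by the specification:

```python
import math

# Hypothetical sketch: the touch point "maintains the target state" when its
# maximum instantaneous speed over the window never exceeds a preset speed.
def max_instantaneous_speed(samples):
    """samples: list of (t, x, y) touch-point samples within the window."""
    best = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt > 0:
            best = max(best, math.hypot(x1 - x0, y1 - y0) / dt)
    return best

def holds_target_state(samples, preset_speed):
    return max_instantaneous_speed(samples) <= preset_speed

samples = [(0.0, 10, 10), (0.1, 11, 10), (0.2, 11, 11)]
print(holds_target_state(samples, preset_speed=50.0))  # → True
```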
In one possible implementation, the second determining module is further configured to:
determining a role identifier corresponding to the target virtual role, and determining a relative distance between the target virtual role and the touch point;
And displaying the character identifiers at the target positions corresponding to the touch points in sequence according to the sequence from the near to the far of the relative distances.
In one possible implementation, the second determining module is further configured to:
determining a character identifier corresponding to the target virtual character and blood volume information corresponding to the target virtual character;
and displaying the character identifiers at target positions corresponding to the touch points in sequence according to the sequence from low to high of the blood volume information.
In one possible implementation, the target virtual character includes: a first target virtual role subordinate to the same camping as the current virtual role to which the target skill control belongs;
the control module is further configured to:
in response to receiving a second trigger instruction for a target role identification, determining whether a target skill corresponding to the target skill control is a gain class skill; the second triggering instruction comprises a key instruction signal generated by performing a second triggering operation on a target key of the target peripheral; the target key is one key or a combination of a plurality of keys on the target peripheral;
if the target skill is a gain skill, determining whether a target virtual character corresponding to the target character identifier is the first target virtual character;
And if the target virtual role corresponding to the target role identifier is the first target virtual role, releasing the target skill to the first target virtual role.
In one possible implementation, the target virtual character includes: a second target virtual role subordinate to a different camp with the current virtual role to which the target skill control belongs;
the control module is further configured to:
determining whether a target skill corresponding to the target skill control is a reduction-type skill;
if the target skill is a reduction-type skill, determining whether a target virtual character corresponding to the target character identifier is the second target virtual character;
and if the target virtual role corresponding to the target role identifier is the second target virtual role, releasing the target skill to the second target virtual role.
In one possible implementation manner, the apparatus further includes: a hiding module;
the concealment module is configured to:
and in response to the fact that the second trigger instruction for the target role identification is not received beyond the target time, hiding the role identification at the target position.
In one possible implementation, the graphical user interface further includes: terminating the interactive control;
The concealment module is further configured to:
and responding to the third trigger instruction for terminating the interaction control, terminating the trigger operation of the target skill control, and hiding the character identification at the target position.
For convenience of description, the above apparatus is described as being divided into various modules by function. Of course, when implementing the present application, the functions of the modules may be implemented in one or more pieces of software and/or hardware.
The device of the foregoing embodiment is used to implement the corresponding game control method in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which is not described herein.
Fig. 6 shows an exemplary structural diagram of an electronic device according to an embodiment of the present application.
Based on the same inventive concept, the application also provides an electronic device corresponding to the method of any embodiment, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the game control method of any embodiment when executing the program. Fig. 6 shows a more specific hardware architecture of an electronic device according to this embodiment, where the device may include: processor 610, memory 620, input/output interface 630, communication interface 640, and bus 650. Wherein processor 610, memory 620, input/output interface 630, and communication interface 640 enable communication connections among each other within the device via bus 650.
The processor 610 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits, for executing relevant programs to implement the technical solutions provided in the embodiments of the present disclosure.
The memory 620 may be implemented in the form of ROM (Read Only Memory), RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 620 may store an operating system and other application programs; when the technical solutions provided by the embodiments of the present specification are implemented in software or firmware, the relevant program code is stored in the memory 620 and invoked for execution by the processor 610.
The input/output interface 630 is used for connecting with an input/output module to realize information input and output. The input/output module may be configured as a component in a device (not shown in the figure) or may be external to the device to provide corresponding functionality. Wherein the input devices may include a keyboard, mouse, touch screen, microphone, various types of sensors, etc., and the output devices may include a display, speaker, vibrator, indicator lights, etc.
The communication interface 640 is used to connect a communication module (not shown in the figure) to enable communication interaction between the present device and other devices. The communication module may implement communication through a wired manner (such as USB, network cable, etc.), or may implement communication through a wireless manner (such as mobile network, WIFI, bluetooth, etc.).
Bus 650 includes a path to transfer information between components of the device (e.g., processor 610, memory 620, input/output interface 630, and communication interface 640).
It should be noted that although the above device only shows the processor 610, the memory 620, the input/output interface 630, the communication interface 640, and the bus 650, in the implementation, the device may further include other components necessary for achieving normal operation. Furthermore, it will be understood by those skilled in the art that the above-described apparatus may include only the components necessary to implement the embodiments of the present description, and not all the components shown in the drawings.
The electronic device of the foregoing embodiment is configured to implement the corresponding game control method in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which is not described herein.
Based on the same inventive concept, the present application also provides a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the game control method according to any of the above embodiments, corresponding to the method according to any of the above embodiments.
The computer readable media of the present embodiments, including both permanent and non-permanent, removable and non-removable media, may be used to implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device.
The storage medium of the above embodiment stores computer instructions for causing the computer to execute the game control method according to any one of the above embodiments, and has the advantages of the corresponding method embodiments, which are not described herein.
Based on the same inventive concept, the present disclosure also provides a computer program product corresponding to the game control method described in any of the above embodiments, which includes computer program instructions. In some embodiments, the computer program instructions may be executed by one or more processors of a computer to cause the computer and/or the processor to perform the game control method. Corresponding to the execution subject corresponding to each step in each embodiment of the game control method, the processor for executing the corresponding step may belong to the corresponding execution subject.
The computer program product of the above embodiment is configured to enable the computer and/or the processor to perform the game control method according to any one of the above embodiments, and has the advantages of the corresponding method embodiments, which are not described herein.
Those of ordinary skill in the art will appreciate that the discussion of any of the embodiments above is merely exemplary and is not intended to suggest that the scope of the application (including the claims) is limited to these examples. Within the idea of the application, the technical features of the above embodiments, or of different embodiments, may also be combined, the steps may be implemented in any order, and there are many other variations of the different aspects of the embodiments of the application as described above, which are not provided in detail for the sake of brevity.
Additionally, well-known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown within the provided figures, in order to simplify the illustration and discussion, and so as not to obscure the embodiments of the present application. Furthermore, the devices may be shown in block diagram form in order to avoid obscuring the embodiments of the present application, and also in view of the fact that specifics with respect to implementation of such block diagram devices are highly dependent upon the platform within which the embodiments of the present application are to be implemented (i.e., such specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the application, it should be apparent to one skilled in the art that embodiments of the application can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative in nature and not as restrictive.
While the application has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of those embodiments will be apparent to those skilled in the art in light of the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the embodiments discussed.
The present embodiments are intended to embrace all such alternatives, modifications and variances which fall within the broad scope of the appended claims. Therefore, any omissions, modifications, equivalent substitutions, improvements, and the like, which are within the spirit and principles of the embodiments of the application, are intended to be included within the scope of the application.

Claims (15)

1. A game control method is characterized in that a graphical user interface is provided through a terminal device, wherein a virtual scene and a target skill control are displayed in the graphical user interface, and the virtual scene comprises a target virtual role; the method comprises the following steps:
in response to receiving a first trigger instruction for the target skill control, determining whether a touch point corresponding to the first trigger instruction maintains a target state in a monitoring area for a preset time;
in response to determining that the touch point keeps a target state in the monitoring area for a preset time, displaying a role identifier corresponding to the target virtual role at a target position corresponding to the touch point, wherein the target virtual role is a virtual role determined in the virtual scene according to the target position;
And responding to receiving a second trigger instruction aiming at the target character identification, and releasing target skills corresponding to the target skill control to a target virtual character corresponding to the target character identification.
2. The method of claim 1, wherein in response to receiving a first trigger instruction for the target skill control, determining whether a touch point corresponding to the first trigger instruction remains in a target state within a listening area for a preset time, further comprising:
detecting whether the target virtual role exists in the current virtual scene;
if the target virtual role exists in the current virtual scene, a monitoring area is created to cover the target virtual role; the monitoring area is used for monitoring the current state of the touch point.
3. The method of claim 1, wherein the determining whether the touch point corresponding to the first trigger instruction remains in the target state for a preset time in the listening area further comprises:
in response to determining that the touch point has not maintained the target state in the monitoring area for the preset time, determining whether the touch point remains within the monitoring area for the duration of the first trigger instruction;
Responding to the fact that the touch point is always located in the monitoring area within the duration time of the first trigger instruction, and displaying a trigger area at a position corresponding to the virtual character in the monitoring area;
and responding to the touch point being in any triggering area, releasing the target skills corresponding to the target skill control to the virtual roles corresponding to the triggering area.
4. The method of claim 3, wherein the determining whether the touch point is always within the listening area for the duration of the first trigger instruction further comprises:
determining a first time point when the touch point leaves the monitoring area within the duration time of the first trigger instruction, and terminating the first trigger instruction at the first time point.
5. The method of claim 1, wherein the determining, in response to receiving a first trigger instruction for the target skill control, whether a touch point corresponding to the first trigger instruction maintains the target state in the listening area for a preset time comprises:
in response to receiving a first trigger instruction for the target skill control, determining a maximum instantaneous speed of a touch point corresponding to the first trigger instruction in the monitoring area within the preset time, and determining whether the maximum instantaneous speed exceeds a preset speed; the first triggering instruction comprises a key instruction signal generated by performing a first triggering operation on a target key of a target peripheral; the target key is one key or a combination of a plurality of keys on the target peripheral;
And if the maximum instantaneous speed does not exceed the preset speed within the preset time, determining that the target state of the touch point in the monitoring area is kept for the preset time.
6. The method of claim 1, wherein displaying the character identifier corresponding to the target virtual character at the target location corresponding to the touch point comprises:
determining a role identifier corresponding to the target virtual role, and determining a relative distance between the target virtual role and the touch point;
and displaying the character identifiers at the target positions corresponding to the touch points in sequence according to the sequence from the near to the far of the relative distances.
7. The method of claim 1, wherein displaying the character identifier corresponding to the target virtual character at the target location corresponding to the touch point comprises:
determining a character identifier corresponding to the target virtual character and blood volume information corresponding to the target virtual character;
and displaying the character identifiers at target positions corresponding to the touch points in sequence according to the sequence from low to high of the blood volume information.
8. The method of claim 1, wherein the target avatar comprises: a first target virtual role subordinate to the same camping as the current virtual role to which the target skill control belongs;
The responding to the second trigger instruction for the target character identification, the releasing the target skill corresponding to the target skill control to the target virtual character corresponding to the target character identification comprises the following steps:
in response to receiving a second trigger instruction for a target role identification, determining whether a target skill corresponding to the target skill control is a gain class skill; the second triggering instruction comprises a key instruction signal generated by performing a second triggering operation on a target key of the target peripheral; the target key is one key or a combination of a plurality of keys on the target peripheral;
if the target skill is a gain skill, determining whether a target virtual character corresponding to the target character identifier is the first target virtual character;
and if the target virtual role corresponding to the target role identifier is the first target virtual role, releasing the target skill to the first target virtual role.
9. The method of claim 8, wherein the target avatar comprises: a second target virtual role subordinate to a different camp with the current virtual role to which the target skill control belongs;
The responding to the second trigger instruction for the target role identification further comprises:
determining whether a target skill corresponding to the target skill control is a reduction-type skill;
if the target skill is a reduction-type skill, determining whether a target virtual character corresponding to the target character identifier is the second target virtual character;
and if the target virtual role corresponding to the target role identifier is the second target virtual role, releasing the target skill to the second target virtual role.
10. The method of claim 1, wherein after displaying the character identifier corresponding to the target virtual character at the target position corresponding to the touch point, further comprising:
and in response to the fact that the second trigger instruction for the target role identification is not received beyond the target time, hiding the role identification at the target position.
11. The method of claim 1, wherein the graphical user interface further comprises: terminating the interactive control;
the method further comprises the steps of:
and responding to the third trigger instruction for terminating the interaction control, terminating the trigger operation of the target skill control, and hiding the character identification at the target position.
12. A game control apparatus, wherein a graphical user interface is provided through a terminal device, a virtual scene and a target skill control are displayed in the graphical user interface, and the virtual scene comprises a target virtual character; the apparatus comprises:
a first determining module configured to, in response to receiving a first trigger instruction for the target skill control, determine whether a touch point corresponding to the first trigger instruction keeps a target state in a monitoring area for a preset time;
a second determining module configured to, in response to determining that the touch point keeps the target state in the monitoring area for the preset time, display a character identifier corresponding to the target virtual character at a target position corresponding to the touch point, wherein the target virtual character is a virtual character determined in the virtual scene according to the target position;
and a control module configured to, in response to receiving a second trigger instruction for the target character identifier, release a target skill corresponding to the target skill control to the target virtual character corresponding to the target character identifier.
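The three modules of claim 12 form a simple pipeline: hold detection, identifier display, then skill release. A minimal sketch of that pipeline, with all class and method names assumed rather than taken from the patent, might look like:

```python
class GameControlDevice:
    """Toy wiring of claim 12's three modules into one controller."""

    def __init__(self, hold_time: float):
        self.hold_time = hold_time  # the "preset time" for the target state
        self.identifier = None      # character identifier shown at the target position

    # First determining module: has the touch point stayed in the
    # monitoring area, in the target state, for the preset time?
    def check_hold(self, held_for: float, in_monitor_area: bool) -> bool:
        return in_monitor_area and held_for >= self.hold_time

    # Second determining module: show the identifier for the character
    # determined in the virtual scene from the touch point's target position.
    def show_identifier(self, character_at_position: str) -> None:
        self.identifier = character_at_position

    # Control module: on the second trigger instruction, release the target
    # skill on the identified character and clear the identifier.
    def on_second_trigger(self, release) -> None:
        if self.identifier is not None:
            release(self.identifier)
            self.identifier = None
```

In use, the touch handler would call `check_hold` while the press is held, `show_identifier` once the hold qualifies, and `on_second_trigger` when the player confirms the target.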
13. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 11 when executing the program.
14. A computer-readable storage medium storing computer instructions which, when executed, cause a computer to perform the method of any one of claims 1 to 11.
15. A computer program product comprising computer program instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1 to 11.
CN202310427922.XA 2023-04-18 2023-04-18 Game control method, apparatus, device, storage medium, and computer program product Pending CN116617652A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310427922.XA CN116617652A (en) 2023-04-18 2023-04-18 Game control method, apparatus, device, storage medium, and computer program product

Publications (1)

Publication Number Publication Date
CN116617652A 2023-08-22

Family

ID=87637302

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310427922.XA Pending CN116617652A (en) 2023-04-18 2023-04-18 Game control method, apparatus, device, storage medium, and computer program product

Country Status (1)

Country Link
CN (1) CN116617652A (en)

Similar Documents

Publication Publication Date Title
WO2017054464A1 (en) Information processing method, terminal and computer storage medium
CN110559658B (en) Information interaction method, device, terminal and storage medium
CN111760274B (en) Skill control method, skill control device, storage medium and computer equipment
JP2023171885A (en) Virtual object control method and related apparatus
KR20180004783A (en) Information processing method, terminal, and computer storage medium
CN112569611B (en) Interactive information display method, device, terminal and storage medium
US10528247B2 (en) Operation system having touch operation enabling use of large screen area, operation control method, and operation control program
CN113101652A (en) Information display method and device, computer equipment and storage medium
CN111265872B (en) Virtual object control method, device, terminal and storage medium
CN112870718B (en) Prop using method, prop using device, storage medium and computer equipment
CN113398590A (en) Sound processing method, sound processing device, computer equipment and storage medium
CN106984044B (en) Method and equipment for starting preset process
CN115040873A (en) Game grouping processing method and device, computer equipment and storage medium
CN113332719B (en) Virtual article marking method, device, terminal and storage medium
CN116617652A (en) Game control method, apparatus, device, storage medium, and computer program product
CN115040867A (en) Game card control method and device, computer equipment and storage medium
CN112221123B (en) Virtual object switching method and device, computer equipment and storage medium
CN114849238A (en) Animation execution method, device, equipment and medium
CN113867873A (en) Page display method and device, computer equipment and storage medium
CN116474366A (en) Display control method, display control device, electronic apparatus, storage medium, and program product
CN117150166A (en) Page interaction method, page interaction device, electronic equipment and computer readable storage medium
CN115337641A (en) Switching method and device of game props, computer equipment and storage medium
CN116262176A (en) Information interaction method, device, electronic equipment and storage medium
CN116920384A (en) Information display method and device in game, computer equipment and storage medium
CN116920390A (en) Control method and device of virtual weapon, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination