WO2021043000A1 - Information interaction method and related apparatus - Google Patents

Information interaction method and related apparatus

Info

Publication number
WO2021043000A1
Authority
WO
WIPO (PCT)
Prior art keywords
skill
target
effect model
game screen
joystick
Prior art date
Application number
PCT/CN2020/110199
Other languages
English (en)
French (fr)
Inventor
俞耀
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority to SG11202108571VA priority Critical patent/SG11202108571VA/en
Priority to JP2021550032A priority patent/JP7242121B2/ja
Priority to KR1020217026753A priority patent/KR102602113B1/ko
Priority to EP20861591.4A priority patent/EP3919145A4/en
Priority to US17/156,087 priority patent/US11684858B2/en
Publication of WO2021043000A1 publication Critical patent/WO2021043000A1/zh
Priority to US18/314,299 priority patent/US20230271091A1/en

Classifications

    • A63F 13/42 — Processing input control signals of video game devices by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F 13/2145 — Input arrangements for locating contacts on a surface, the surface being also a display device, e.g. touch screens
    • A63F 13/426 — Mapping the input signals into game commands involving on-screen location information, e.g. screen coordinates of an area at which the player is aiming
    • A63F 13/45 — Controlling the progress of the video game
    • A63F 13/533 — Additional visual information provided to the game scene for prompting the player, e.g. by displaying a game menu
    • A63F 13/5372 — Using indicators for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
    • A63F 13/58 — Controlling game characters or game objects by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
    • A63F 13/798 — Game management involving player-related data for assessing skills or for ranking players, e.g. for generating a hall of fame
    • A63F 13/822 — Strategy games; Role-playing games
    • A63F 13/92 — Video game devices specially adapted to be hand-held while playing
    • G06F 3/0338 — Pointing devices with detection of limited linear or angular displacement of an operating part from a neutral position, e.g. isotonic or isometric joysticks
    • G06F 3/038 — Control and interface arrangements for pointing devices, e.g. drivers or device-embedded control circuitry
    • G06F 3/04883 — Touch-screen or digitiser interaction techniques for inputting data by handwriting, e.g. gesture or text
    • G06F 3/04886 — Touch-screen interaction techniques by partitioning the display area into independently controllable areas, e.g. virtual keyboards or menus
    • A63F 2300/10 — Input arrangements for converting player-generated signals into game device control signals
    • A63F 2300/1075 — Input arrangements specially adapted to detect the point of contact of the player using a touch screen
    • A63F 2300/308 — Details of the user interface

Definitions

  • This application relates to the computer field, specifically to information interaction.
  • Game skills can achieve specific effects for specific people, objects, areas, etc. in the game at a specific time under certain game conditions.
  • The user can control virtual characters in the game through the client to cast various game skills in the virtual scene.
  • The embodiments of the present application provide an information interaction method and related devices, which can improve the accuracy of information interaction.
  • An embodiment of the present application provides an information interaction method; the method is executed by a terminal and includes:
  • A skill effect model of the target skill is generated at the at least one skill generation position.
  • An embodiment of the present application also provides an information interaction device, including:
  • a screen unit, configured to display a game screen, the game screen including a candidate skill area;
  • a skill unit, configured to determine a target skill based on a skill selection operation on the candidate skill area;
  • a joystick unit, configured to display a virtual joystick object on the game screen;
  • a position unit, configured to calculate at least one skill generation position of the target skill based on a movement operation on the virtual joystick object when the movement operation is detected;
  • a generating unit, configured to generate a skill effect model of the target skill at the at least one skill generation position when a casting operation on the virtual joystick object is detected.
  • An embodiment of the present application also provides a storage medium; the storage medium stores a computer program, and the computer program is used to execute the information interaction method in the above aspect.
  • An embodiment of the present application also provides a terminal, including a memory storing a plurality of instructions and a processor that loads the instructions from the memory to execute the information interaction method in the above aspect.
  • The embodiments of the present application also provide a computer program product including instructions which, when run on a computer, cause the computer to execute the information interaction method in the above aspects.
  • The embodiment of the application can display a game screen that includes a candidate skill area; determine a target skill based on a skill selection operation on the candidate skill area; display a virtual joystick object on the game screen; when a movement operation on the virtual joystick object is detected, calculate at least one skill generation position of the target skill based on the movement operation; and when a casting operation on the virtual joystick object is detected, generate a skill effect model of the target skill at the at least one skill generation position.
  • In this way, the user can control and adjust the generation positions of multiple skill effect models of a game skill through the virtual joystick object in the game screen; thus, the solution can improve the accuracy of information interaction.
  • FIG. 1a is a schematic diagram of a scenario of an information interaction method provided by an embodiment of the present application.
  • FIG. 1b is a schematic flowchart of an information interaction method provided by an embodiment of the present application.
  • FIG. 1c is a schematic structural diagram of a virtual joystick object provided by an embodiment of the present application.
  • FIG. 1d is a schematic diagram of the effect of overlay display provided by an embodiment of the present application.
  • FIG. 1e is a schematic diagram of the relationship between the skill generation position and the skill generation area provided by an embodiment of the present application.
  • FIG. 1f is a schematic diagram of the mapping relationship between the joystick position and the skill generation position provided by an embodiment of the present application.
  • FIG. 1g is a schematic diagram of changes in the skill effect distribution trajectory provided by an embodiment of the present application.
  • FIG. 2a is a schematic diagram of the orientation of the first skill effect model provided by an embodiment of the present application.
  • FIG. 2b is a schematic diagram of the orientation of the first skill effect model provided by an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of an information interaction device provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a network device provided by an embodiment of the present application.
  • The embodiments of the present application provide an information interaction method and related devices.
  • The information interaction device may be integrated in a terminal, and the terminal may be a mobile phone, a tablet computer, a smart Bluetooth device, a notebook computer, a personal computer (PC), or another such device.
  • For example, the information interaction device may be integrated in a smartphone.
  • The smartphone can be installed with game software.
  • The smartphone can run the game software and display a game screen.
  • The game screen may include a candidate skill area; based on the user's skill selection operation on the candidate skill area, the target skill can be determined; a virtual joystick object is displayed on the game screen; when the user's movement operation on the virtual joystick object is detected, at least one skill generation position of the target skill is calculated based on the movement operation; and when the user's casting operation on the virtual joystick object is detected, a skill effect model of the target skill is generated at the at least one skill generation position.
  • An information interaction method is provided. As shown in FIG. 1b, the specific process of the information interaction method may be as follows:
  • The game screen may include the game scene picture of the game software and the user interface (UI).
  • The game scene picture may display the scenes, virtual characters, etc. in the game.
  • The user interface may include game buttons, text, windows, and other game design elements that have direct or indirect contact with game users.
  • The user can interact with the game content through the game screen.
  • The game screen may include a candidate skill area, and the candidate skill area may contain skill information of at least one candidate skill, where the skill information may be information such as a skill name, a skill control, and the like.
  • Game skills can be a series of virtual events in an electronic game; these virtual events can trigger specific effects for specific people, objects, areas, etc. in the game at a specific time, provided that certain conditions are met.
  • A game skill includes the skill's trigger mechanism (the way in which the skill's life cycle is started), skill events (atomic information describing the occurrence of the skill), and skill effects (changes caused to the current game environment).
  • The specific skills and their effects can be formulated by those skilled in the art according to their needs.
  • A game skill can include its effect model and its numerical model.
  • The game skill is realized by generating the effect model of the game skill in the game scene and applying its numerical model to the corresponding target object, thereby achieving the skill effect.
  • The effect model of a game skill can be of multiple types, for example, a scene model, an architectural model, a character model, an animation model, a particle effect model, and so on.
  • For example, if the game effect of game skill A is to summon 5 virtual followers at the same time around the skill cast object of game skill A, then the implementation is to generate 5 virtual follower models around the model of the skill cast object of game skill A.
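  • The follower-summoning effect above can be sketched as follows; the function name and the even circular layout are assumptions for illustration, not part of the application:

```python
import math

def follower_positions(center, count=5, radius=2.0):
    """Evenly distribute `count` follower spawn points on a circle of
    `radius` around the cast object's position `center` (2D coords)."""
    cx, cy = center
    positions = []
    for i in range(count):
        angle = 2 * math.pi * i / count
        positions.append((cx + radius * math.cos(angle),
                          cy + radius * math.sin(angle)))
    return positions

# Five virtual follower models generated around a cast object at (10, 10).
spawn_points = follower_positions((10.0, 10.0))
```

  • The engine would then instantiate one follower model at each returned point.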
  • The user can perform the skill selection operation on the candidate skill area in various ways.
  • For example, the user can select a skill in the candidate skill area by dragging, clicking, swiping, pressing, or touching.
  • The skill selection operation can also be generated by simulating a user operation; for example, an operation on a designated position in the candidate skill area can be simulated to generate a skill selection operation.
  • Other operations mentioned in this application, for example movement operations and casting (release) operations, can also be generated by simulation.
  • The candidate skill area may include skill information of at least one candidate skill; the skill information may be a skill control, and the control may be expressed in the form of icons, symbols, buttons, and the like.
  • When a skill selection operation on the candidate skill area is detected, the target skill can be determined among the candidate skills.
  • In some embodiments, the candidate skill area may include a skill control of at least one candidate skill, so step 102 may specifically include the following steps:
  • the target skill is determined among the at least one candidate skill.
  • For example, the candidate skill area may include skill icons of multiple candidate skills, and the user can select one candidate skill among the skill icons and click it, that is, determine that candidate skill as the target skill.
  • For another example, the candidate skill area may include the skill icon of a single candidate skill. The user can swipe down on the skill icon in the candidate skill area to switch the candidate skill area to display the skill icon of the next candidate skill, and determine the next candidate skill as the target skill; likewise, the user can swipe up on the skill icon so that the candidate skill area switches to display the skill icon of the previous candidate skill, and the previous candidate skill is determined as the target skill.
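  • The single-icon swipe switching described above can be sketched as a small carousel; the class name and skill names here are illustrative assumptions:

```python
class CandidateSkillArea:
    """Minimal sketch of a single-icon candidate skill area: swiping
    cycles through the candidate skills, and the one displayed after
    the swipe becomes the target skill."""

    def __init__(self, skills):
        self.skills = list(skills)   # candidate skill names
        self.index = 0               # index of the displayed skill icon

    def swipe_down(self):
        # Show the next candidate skill; it becomes the target skill.
        self.index = (self.index + 1) % len(self.skills)
        return self.skills[self.index]

    def swipe_up(self):
        # Show the previous candidate skill; it becomes the target skill.
        self.index = (self.index - 1) % len(self.skills)
        return self.skills[self.index]

area = CandidateSkillArea(["Summon Thundercloud", "Fireball", "Heal"])
target = area.swipe_down()   # "Fireball" becomes the target skill
```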
  • The virtual joystick object is a virtual component that can be used for human-computer interaction.
  • The user inputs information to the terminal by controlling the movement of the virtual joystick object.
  • The virtual joystick object may include a preset joystick control, a preset joystick movement range, and the range center axis of the preset joystick movement range.
  • The user can move the preset joystick control arbitrarily within the preset joystick movement range.
  • The relative distance and direction between the preset joystick control and the range center axis of the preset joystick movement range can be used as the information the user inputs to interact with the game.
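  • The joystick reading described above (the relative distance and direction of the control from the range center axis, clamped to the movement range) can be sketched as follows; the function name and coordinate conventions are assumptions:

```python
import math

def joystick_input(control_pos, center_axis, max_range):
    """Return the joystick reading as (distance, direction) of the
    joystick control relative to the range center axis, clamping the
    control to the preset joystick movement range (a circle)."""
    dx = control_pos[0] - center_axis[0]
    dy = control_pos[1] - center_axis[1]
    distance = math.hypot(dx, dy)
    if distance > max_range:          # keep the control inside the range
        scale = max_range / distance
        dx, dy = dx * scale, dy * scale
        distance = max_range
    direction = math.atan2(dy, dx)    # radians; 0 = positive x axis
    return distance, direction

dist, ang = joystick_input((3.0, 4.0), (0.0, 0.0), max_range=4.0)
```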
  • In some embodiments, step 103 may include the following steps:
  • Overlay display means that the virtual joystick object and the skill control are layered with the virtual joystick object placed above the skill control, and then both the virtual joystick object and the skill control are displayed.
  • The coverage can be full coverage, partial coverage, etc.
  • For example, when the target skill is skill A and the skill control of skill A is expressed as an icon, the virtual joystick object is overlaid on the skill A icon of the target skill; that is, the virtual joystick object and the skill A icon are stacked with the virtual joystick object above the skill A icon, and then the virtual joystick object and the skill A icon are displayed.
  • In some embodiments, the game screen may also include a cancel-casting control, so after step 103, the following steps may specifically be included:
  • The virtual joystick object stops being displayed on the game screen.
  • In addition, the skill preview effect model and the skill generation area of the target skill can also stop being displayed on the game screen; for the specific introduction of the skill preview effect model and the skill generation area, refer to the description in step 104, which will not be repeated here.
  • The user can move the virtual joystick control of the virtual joystick object by dragging, swiping, clicking, and so on.
  • The skill generation position refers to the position where the skill effect model is actually generated in the game scene when the game skill is triggered.
  • There can be one skill generation position or multiple skill generation positions.
  • For example, suppose the skill effect model of skill X is two thunderclouds, generated at the easternmost and westernmost ends of the game scene; then the skill generation positions of skill X are the easternmost end (100, 0) and the westernmost end (-100, 0).
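  • The skill-X example above can be sketched as a small skill table; the dictionary layout and the list-based scene are illustrative assumptions:

```python
# Hypothetical skill table: each skill maps to the generation positions
# of its effect models (skill X spawns two thunderclouds, east and west).
SKILLS = {
    "X": {
        "effect_model": "thundercloud",
        "generation_positions": [(100, 0), (-100, 0)],
    },
}

def cast(skill_name, scene):
    """Generate the skill's effect model at each skill generation position."""
    skill = SKILLS[skill_name]
    for pos in skill["generation_positions"]:
        scene.append((skill["effect_model"], pos))
    return scene

scene = cast("X", [])
```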
  • The virtual joystick object may include a virtual joystick control and a preset joystick movement range.
  • In some embodiments, step 104 may include the following steps:
  • The skill cast object refers to the virtual object on which the skill effect model of the game skill takes effect; the virtual object may be a character virtual object, an item virtual object, a scene virtual object, and so on.
  • For example, the skill cast object of game skill X is the virtual character that casts the skill in the game scene; the skill cast object of game skill Y is another virtual character selected by the user in the game scene; and the skill cast object of game skill Z is a certain virtual item, a certain virtual place, etc. selected by the user in the game scene.
  • The current position of the skill cast object in the game screen refers to the position of the skill cast object in the game.
  • For example, if the skill cast object is a tree in the game scene, the current position of the tree in a certain scene of the game may be (299, 5, 41).
  • The preset skill casting range refers to the maximum casting range of the game skill, which can be a range distance, range radius, range area, range volume, etc.
  • The preset skill casting range can be set by the game developer to improve gameplay, maintain game balance, and so on.
  • For example, the skill cast object of the game skill "Summon Thundercloud" is a place in the game scene specified by the user, and its preset skill casting range is a circle with a radius of 8 meters.
  • the skill generation area is displayed on the game screen.
  • the skill generation area refers to the area where the skill effect model of the game skill can be generated in the game scene, that is, the area where the skill generation position is located.
  • the skill generation area of the target skill in the game screen can be based on the preset skill casting range based on the current position of the target of the skill cast.
  • the skill generation area can be a circle, sector, sphere, etc., centered on the current position with the preset skill cast range as the radius; for another example, the skill generation area can be a square, diamond, pyramid, etc., with the current position as the center and the preset skill cast range as the side length.
  • the skill generation area can be displayed in the game scene.
  • the skill generation area is a circular area with the current position as the center of the circle and the preset skill casting range d as the radius, where the user can control the specific skill generation position (x, y) of the target skill within the skill generation area through the virtual joystick object.
  • the skill generation area may be displayed in the game scene in a highlighted form.
  • the edge of the skill generation area may be displayed in blue
  • the skill generation area may be filled in gray
  • the filled skill generation area may be displayed, and so on.
  • the preset joystick control can move within the movement range of the joystick.
  • the joystick coordinate system is established with the center of the movement range as the origin, and the position (x, y) of the preset joystick control in the joystick coordinate system is the joystick position.
  • the movement range of the joystick is the maximum range within which the joystick can move.
  • the range can be range distance, range radius, range area, range volume, etc.
  • the joystick movement range can be set by the game developer or adjusted by the user.
  • the number of skill generation positions can be defined in the game skills.
  • the skill generation position in the circular area with the preset skill cast range as the radius (i.e., the skill generation area) can be obtained by mapping the joystick position in the circular area with the preset joystick movement range as the radius (i.e., the preset joystick movement area).
  • the step of "calculating at least one skill generation position of the target skill based on the preset joystick movement range, joystick position, preset skill cast range, and current position" may specifically include the following steps:
  • the relative position is the relative position between the skill generation position and the skill cast object
  • the preset joystick movement range is radius r
  • the user moves the preset joystick control of the virtual joystick object to the joystick position (x, y)
  • the preset skill cast range of the target skill is radius R
  • the current position of the skill cast object is (a, b)
  • the calculation method of the skill generation position (X, Y) is as follows:
  • the interaction range ratio is R/r, that is, the mapping ratio between the preset joystick movement range and the preset skill casting range;
  • the relative position includes an x-axis component and a y-axis component: the x-axis component is x*R/r, and the y-axis component is y*R/r.
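The mapping described above (the interaction range ratio R/r applied to the joystick offset, then shifted by the caster's current position) can be sketched as follows; the function name and the worked numbers are illustrative, not taken from the patent:

```python
# Hypothetical sketch of the joystick-to-world mapping described above.
# Assumed parameters (names illustrative, not from the patent):
#   (x, y)  joystick position inside the preset joystick movement range r
#   R       preset skill cast range
#   (a, b)  current position of the skill cast object in the game screen

def skill_generation_position(x, y, r, R, a, b):
    """Map a joystick offset inside radius r to a skill generation
    position inside the cast range R around the caster at (a, b)."""
    ratio = R / r                        # interaction range ratio R/r
    rel_x, rel_y = x * ratio, y * ratio  # relative position to the caster
    return (a + rel_x, b + rel_y)        # absolute position in the scene

# With r = 3.6 and R = 7.2 (the "Summoning: Warrior" figures used later),
# a joystick offset (1, 2) and a caster at (10, 10) map to (12.0, 14.0):
print(skill_generation_position(1, 2, 3.6, 7.2, 10, 10))
```

Because the ratio is fixed, a small thumb movement on the joystick scales linearly to a larger movement of the skill generation position inside the cast range.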
  • the effect direction of the skill effect model can be controlled according to the relative position between the skill generation position and the skill cast object.
  • the skill effect of the skill "Summon Follower" is to summon virtual followers; the relative directions of the virtual followers are determined according to the relative position between the summon location controlled by the user (that is, the skill generation location) and the skill cast object, so that the fronts of the followers face these relative directions.
  • the relative direction of the skill effect model relative to the skill cast object is calculated according to the relative position.
  • if the relative position is Cartesian coordinates (A, B), the relative direction is arctan(B/A); and if the relative position is polar coordinates (ρ, θ), the relative direction is θ.
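The direction rule above can be sketched with `math.atan2`, which extends arctan(B/A) to all four quadrants; the function name is illustrative:

```python
import math

# Sketch of the direction rule above. For a Cartesian relative position
# (A, B), arctan(B/A) gives the relative direction; math.atan2(B, A) is
# used here because it handles all four quadrants and the A == 0 case,
# which a bare arctan(B/A) would not. The function name is illustrative.

def relative_direction(A, B):
    return math.atan2(B, A)  # radians, measured from the positive x-axis

print(math.degrees(relative_direction(1, 1)))  # a (1, 1) offset faces 45 degrees
```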
  • the target skill can generate skill effect models at multiple skill generation locations.
  • the skill "Summon Followers" can generate game models of 3 virtual followers at the skill generation location specified by the user, in order to further enhance the accuracy of information interaction and reduce operational complexity.
  • This embodiment provides the concept of the skill effect model distribution trajectory.
  • the skill generation positions of the target skill are all on the skill effect model distribution trajectory. Step C may include steps a, b, c, d, and e, as follows:
  • the number of skill generation positions can be pre-determined in the game skill. For example, the number of skill generation positions for the skill “Summon Followers" is 3. After the skill is triggered, 3 virtual followers can be generated at the skill generation positions specified by the user.
  • the distribution trajectory of the skill effect model of the target skill can be calculated.
  • step b may specifically include the following steps:
  • the relative distance is weighted and summed according to the preset coefficient, and the distribution radius of the skill effect model of the target skill is obtained.
  • the preset coefficient K can be preset by the game developer, and the calculation formula of the relative distance d is as follows:
  • the relative position is (x, y).
  • the large circle is the preset skill cast range
  • the small circle is the skill effect model distribution trajectory with the skill effect model distribution radius as the radius
  • the triangle is the relative position relative to the skill cast object; the farther the relative position (i.e., the larger the relative distance), the larger the small circle.
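A minimal sketch of the distribution-radius calculation above, assuming the relative distance is d = √(x² + y²) and the weighting reduces to a single coefficient K (the patent only says the relative distance is weighted by a preset coefficient), with a clamp to the minimum distribution radius min_r mentioned later; all names and values are illustrative:

```python
import math

# Illustrative sketch: the relative distance d = sqrt(x^2 + y^2) is scaled
# by a developer-chosen coefficient K, and the result is clamped so it never
# falls below the minimum distribution radius min_r (which keeps summoned
# models from overlapping). K and min_r are assumed parameters, not values
# taken from the patent.

def distribution_radius(x, y, K, min_r):
    d = math.hypot(x, y)      # relative distance to the skill cast object
    return max(K * d, min_r)  # farther relative position -> larger circle

print(distribution_radius(3, 4, 0.5, 1.0))  # d = 5, radius = 2.5
```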
  • step c may include the following steps:
  • the relative position is the center of the circle and the skill effect model distribution trajectory of the target skill is determined based on the distribution radius of the skill effect model;
  • the distribution trajectory of the skill effect model of the target skill is determined by taking the relative position as the center of the circle and based on the minimum skill effect model distribution radius.
  • the preset distribution volume of the skill effect model is the game model volume of the skill effect model.
  • a game model of a virtual tree can be generated at each of the 4 skill generation positions.
  • the relative position is the center of the circle, and the skill effect model distribution trajectory of the target skill is determined based on the skill effect model distribution radius;
  • the relative position is taken as the center of the circle and the skill effect model distribution trajectory of the target skill is determined based on 12 meters.
  • the skill effect model distribution trajectory is equally divided by the number of skill generation positions, and each division point is used as a distribution point of the skill effect model.
  • the effect direction of the skill effect model can be controlled according to the vertical direction of the distribution point of the skill effect model.
  • the skill effect of the skill "Summon Followers” is to summon 3 virtual characters.
  • the vertical direction of the distribution points is defined as the relative directions of these virtual followers, so that the fronts of these followers face these relative directions.
  • step d may also include the following steps:
  • the vertical direction of the distribution point of the skill effect model is determined.
  • for the skill effect model distribution point (m, n), its position in the game screen, that is, the skill generation position, is (m+a, n+b).
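Steps c–e above (place the trajectory circle around the relative position, divide it equally by the number of skill generation positions, and shift each point by the caster's current position (a, b)) can be sketched as follows; all parameter names are illustrative:

```python
import math

# Sketch of steps c-e: build the distribution trajectory as a circle of the
# computed radius around the relative position, divide it into `count` equal
# parts, and shift each distribution point (m, n) by the caster's current
# position (a, b) to obtain world-space skill generation positions (m+a, n+b).
# All parameter names are illustrative, not from the patent.

def skill_generation_positions(rel_x, rel_y, radius, count, a, b):
    positions = []
    for i in range(count):
        angle = 2 * math.pi * i / count       # equal division of the circle
        m = rel_x + radius * math.cos(angle)  # distribution point (m, n)
        n = rel_y + radius * math.sin(angle)
        positions.append((m + a, n + b))      # skill generation position
    return positions

# Three followers on a unit circle around relative position (0, 0),
# caster at (10, 10):
for p in skill_generation_positions(0, 0, 1, 3, 10, 10):
    print(p)
```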
  • the skill effect of the skill "Summon Followers” is to summon 3 virtual followers.
  • a virtual follower model is generated at the 3 skill generation positions in the game scene.
  • step 105 may include the following steps:
  • the direction of the skill effect model of the target skill is modified based on the relative direction, and the target skill with the modified direction is obtained;
  • the skill effect model of the target skill with the modified direction is generated at the skill generating position.
  • a virtual follower model is generated.
  • the model direction of the virtual follower model is ⁇ , and the relative direction obtained in step B of step 104 is ⁇ , then the model direction of the virtual follower model is modified It is ⁇ + ⁇ .
  • step 105 may include the following steps:
  • the direction of the skill effect model of the target skill is modified based on the vertical direction, and the target skill with the modified direction is obtained;
  • the skill effect model of the target skill with the modified direction is generated at the skill generating position.
  • step 105 may specifically include the following steps:
  • the skill effect model of the target skill is generated at the skill generation position.
  • in order to enable the user to intuitively observe the cast position of the target skill while controlling the virtual joystick object before casting the skill, so that the user can adjust the skill generation position while observing, before step 105 the method may further include the following steps:
  • the skill preview effect model is an effect model for users to preview.
  • when the skill preview effect model is generated in the game scene, the game skill often has not yet taken effect; the game skill officially takes effect when the skill effect model is generated in the game scene.
  • the user can grasp the accuracy of the game cast.
  • the generation of the skill preview effect model of the target skill can be stopped at the at least one skill generation position, and the skill effect model of the target skill can be generated instead.
  • the skill effect of the game skill "Summon Followers" is to summon 3 virtual followers around the skill cast object, and the virtual followers can carry out long-range attacks on nearby enemies. When the user adjusts the skill generation position through the virtual joystick object, a skill preview effect model of the game skill, such as a virtual shadow model of a virtual follower, is generated at the skill generation position. When the user performs the casting operation, the virtual shadow model of the follower is removed from the skill generation position, and the skill effect model of the game skill, such as the game model of the virtual follower, is generated, after which the virtual follower can perform long-range attacks on nearby enemies.
  • the game screen can be displayed by the method provided in the embodiments of this application, where the game screen includes a candidate skill area; the target skill is determined based on the user's skill selection operation for the candidate skill area; the virtual joystick object is displayed on the game screen; when the user's movement operation on the virtual joystick object is detected, at least one skill generation position of the target skill is calculated based on the movement operation; and when the user's release operation on the virtual joystick object is detected, the skill effect model of the target skill is generated at the at least one skill generation position.
  • this solution can control the generation positions of multiple skill effect models of the game skill through the virtual joystick object, so that the game skill is cast more accurately, thereby improving the accuracy of information interaction.
  • the application of the information interaction method in a mobile phone game with a smart phone as a terminal is taken as an example, and the method in the embodiment of the present application will be described in detail.
  • game skills have multiple types, such as summoning types and spell types.
  • the summoning type game skill is a model of generating one or more summoning units in the game scene
  • the spell type game skill is a model of generating a spell special effect in the game scene.
  • the flow of the information interaction method is as follows:
  • the game screen is displayed, and the game screen includes candidate skill areas.
  • the game screen may include candidate skill areas.
  • the game screen may also include the character information of the player's own virtual character, such as a nickname, the blood volume of the own virtual character, the gain effect of the own virtual character, and so on.
  • the game screen may also include a battle time control, which can be used to display the duration of the player battle.
  • the game screen may also include a second virtual joystick object used to control the movement of its own virtual character.
  • a second virtual joystick object may be included in the lower left corner of the game screen.
  • the candidate skill area may include a plurality of candidate skill controls, and the candidate skill controls may be candidate skill icons.
  • the candidate skill area may include 3 candidate skill icons, which are the skill icon for the skill “Light Ball”, the skill icon for the skill “Summoning: Warrior”, and the skill icon for the skill “Summoning: Archer”.
  • the game screen includes a second virtual joystick object, and the second virtual joystick object can be used to control the movement of its own virtual character.
  • the game screen includes a first virtual joystick object.
  • the virtual joystick object can be used to control the casting of game skills.
  • when the skill type of the target skill is a spell type, a conventional skill casting method can be used for information interaction; when the skill type of the target skill is a summoning type, the information interaction method of this embodiment is used for information interaction.
  • the target skill is the skill "Summoning: Warrior”
  • the skill effect of the skill "Summoning: Warrior" is: summon a fighter unit within 7.2 meters of the virtual character, and the fighter unit performs melee attacks on nearby hostile virtual characters.
  • the preset skill cast range of the skill "Summoning: Warrior” is 7.2 meters.
  • the preset joystick movement range is 3.6 meters, and the player casts the skill by dragging the virtual joystick object.
  • the method of calculating the skill generation position (X, Y) of the target skill based on the drag operation is as follows:
  • the target skill is the skill "Summoning: Archer”
  • the skill effect of the skill "Summoning: Archer" is: summon 3 archer follower units within 10 meters of the virtual character, and the archer follower units conduct long-range attacks on nearby hostile virtual characters.
  • the preset skill cast range of the skill “Summoning: Archer” is 10 meters, and the number of skill generating positions for the skill “Summoning: Archer” is 3.
  • the preset joystick movement range is 4 meters.
  • the method of calculating the skill generation position (X, Y) of the target skill based on the drag operation is as follows:
  • the model volume of the archer follower unit of the skill "Summoning: Archer” is 1 meter * 1 meter.
  • to prevent the models of the 3 archer follower units from overlapping, the distribution radius r of the skill effect model of the target skill is set to be not less than the minimum skill effect model distribution radius min_r, so that the radius of the distribution trajectory of the skill effect model is not less than min_r.
  • the five-pointed star is its own virtual character
  • the large circle is the preset skill cast range
  • the small circle is the distribution trajectory of the skill effect model
  • the arrow points to the relative position (1.25, 3.75)
  • the distribution trajectory of the skill effect model is a circle with a center of (1.25, 3.75) and a radius r of 93.75 meters.
  • three archer entourage units can be evenly distributed on the semicircle at the end farther from the virtual character. That is, the semicircle of the skill effect model distribution trajectory at the end farther from the current position (1, 2) is divided into the number of skill generation positions + 1 equal parts (i.e., 4 equal parts), and the distribution points of the skill effect model are the small black dots on the distribution trajectory.
  • the distribution points of the skill effect model are (1, 1), (1.5, 0), (1, -1), and the current position of the skill cast object in the game screen is (10, 10), then the skill generation position is (10, 10), (10.5, 10), (10, 9).
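The semicircle placement described above can be sketched as follows, under the assumption that the far half of the trajectory is centred on the caster-to-relative-position direction and split into (count + 1) equal arcs, with the count interior division points used as distribution points; the names and the facing convention are illustrative:

```python
import math

# Sketch of the semicircle variant above: the half of the distribution
# trajectory farther from the caster is split into (count + 1) equal arcs,
# and the `count` interior division points become distribution points.
# `facing_angle` (the direction from caster to relative position) and the
# other names are assumptions for illustration, not from the patent.

def semicircle_points(cx, cy, radius, facing_angle, count):
    points = []
    for i in range(1, count + 1):  # interior division points only
        # sweep a half circle centred on the facing direction
        angle = facing_angle - math.pi / 2 + math.pi * i / (count + 1)
        points.append((cx + radius * math.cos(angle),
                       cy + radius * math.sin(angle)))
    return points

# Three points on the far semicircle of a unit circle at (0, 0), facing +x;
# the middle point lies exactly in the facing direction:
for p in semicircle_points(0, 0, 1, 0.0, 3):
    print(p)
```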
  • the skill preview model of the skill "Summoning: Archer” is the preview model of the archer follower unit.
  • the preview model of the archer follower unit is a triangle, and its orientation is the direction indicated by the dotted arrow; that is, its orientation can be calculated from the relative position and the skill generation position.
  • the preview model of the archer follower unit is a triangle, and its orientation is the direction indicated by the dotted arrow; that is, its orientation can be calculated from the current position and the skill generation position.
  • the orientation of the skill preview model needs to be modified to the orientation calculated above.
  • three archer follower unit models can be generated at the above three skill generation positions according to the modified orientation.
  • the player can accurately cast the summoning skill through the virtual joystick object.
  • the player can also use the traditional wheel (roulette) method to cast the spell skill.
  • the operation consistency of the summoning skill and the spell skill can be improved.
  • this solution can also adjust the orientation and distribution density of the skill effect models at the same time when the game skill can generate multiple skill effect models.
  • this solution can display the game screen, where the game screen includes a candidate skill area; determine the target skill based on the user's skill selection operation for the candidate skill area; display the virtual joystick object on the game screen; when the skill type of the target skill is a summoning type and the user's drag operation on the virtual joystick object is detected, calculate at least one skill generation position of the target skill based on the drag operation; and when the user's release operation on the virtual joystick object is detected, generate the skill effect model of the target skill at the at least one skill generation position.
  • this solution can simultaneously control the casting position and direction of the game skill effect models through the virtual joystick object, and automatically control the casting density of the game skill effect models, thereby reducing operational complexity and improving the accuracy of information interaction.
  • an embodiment of the present application also provides an information interaction device.
  • the information interaction device may be specifically integrated in a terminal.
  • the terminal may be a smart phone, a tablet computer, a smart Bluetooth device, a notebook computer, a personal computer, or other such devices.
  • the integration of the information interaction device in a smart phone will be taken as an example to describe the method of the embodiment of the present application in detail.
  • the information interaction apparatus may include a screen unit 301, a skill unit 302, a joystick unit 303, a position unit 304, and a generating unit 305 as follows:
  • the screen unit 301 may be used to display a game screen, where the game screen may include a candidate skill area.
  • the skill unit 302 may be used to determine the target skill based on the skill selection operation for the candidate skill area.
  • the candidate skill area includes at least one skill control of the candidate skill, and the skill unit 302 may be specifically used for:
  • the target skill is determined in at least one candidate skill.
  • the joystick unit 303 may be used to display virtual joystick objects on the game screen.
  • the game screen further includes a cancel cast control
  • the joystick unit 303 can also be used to:
  • stop displaying the virtual joystick object on the game screen.
  • the position unit 304 may be used to calculate at least one skill generation position of the target skill based on the movement operation.
  • the virtual joystick object may include a virtual joystick control and a preset joystick movement range
  • the position unit 304 may include a current position subunit, a joystick subunit, and a position subunit, as follows:
  • the current location subunit can be used to determine the target skill's skill cast object and preset skill cast range, and obtain the current position of the skill cast object in the game screen;
  • the current location subunit can also be used to:
  • the skill generation area is displayed on the game screen.
  • the joystick subunit can be used to obtain the joystick position of the virtual joystick control in the preset joystick movement range when the user's movement operation on the virtual joystick object is detected;
  • the position subunit may be used to calculate at least one skill generation position of the target skill based on the preset joystick movement range, joystick position, preset skill cast range, and current position.
  • the position sub-unit may include a proportion sub-module, a relative position sub-module, and a generating position sub-module, as follows:
  • the proportion submodule can be used to determine the interaction range ratio between the preset joystick movement range and the preset skill cast range;
  • the relative position submodule can be used to determine the relative position according to the ratio of the interaction range and the position of the joystick.
  • the relative position is the relative position between the skill generation position and the skill cast object;
  • the relative position submodule can also be used to:
  • Generate the skill effect model of the target skill in at least one skill generation position including:
  • the skill effect model of the target skill with the modified direction is generated at the skill generating position.
  • the generating position sub-module may be used to determine the relative position of the skill generating position in the game screen according to the current position of the skill casting object in the game screen.
  • the generating location submodule can be used to:
  • the generating unit 305 may be used to generate a skill effect model of the target skill at at least one skill generating position.
  • the generating unit 305 may be specifically used to:
  • the generating unit 305 may be specifically used to:
  • the skill effect model of the target skill is generated at the skill generation position.
  • the generating unit 305 may be used to:
  • the generating unit 305 may also be used to:
  • each of the above units can be implemented as an independent entity, or can be arbitrarily combined and implemented as the same entity or several entities.
  • for the specific implementation of each of the above units, please refer to the previous method embodiments, which will not be repeated here.
  • the information interaction device of this embodiment displays the game screen through the screen unit, where the game screen includes the candidate skill area; the skill unit determines the target skill based on the user's skill selection operation for the candidate skill area; the joystick unit displays the virtual joystick object on the game screen; when the user's movement operation on the virtual joystick object is detected, the position unit calculates at least one skill generation position of the target skill based on the movement operation; and when the user's release operation on the virtual joystick object is detected, the generating unit generates the skill effect model of the target skill at the at least one skill generation position.
  • the embodiment of the present application also provides a terminal, which may be a mobile phone, a tablet computer, a smart Bluetooth device, a notebook computer, or a personal computer and other devices.
  • the terminal may be a node in a distributed system, where the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by connecting the multiple nodes through network communication.
  • nodes can form a peer-to-peer (P2P, Peer To Peer) network, and any form of computing equipment, such as servers, terminals, and other electronic devices, can become a node in the blockchain system by joining the peer-to-peer network.
  • in this embodiment, the terminal is described in detail by taking a smart phone as an example.
  • FIG. 4 shows a schematic diagram of the structure of the terminal involved in the embodiment of the present application, specifically:
  • the terminal may include a processor 401 with one or more processing cores, a memory 402 with one or more computer-readable storage media, a power supply 403, an input module 404, a communication module 405, and other components.
  • the structure shown in FIG. 4 does not constitute a limitation on the terminal, which may include more or fewer components than those shown in the figure, combine some components, or use a different arrangement of components. Among them:
  • the processor 401 is the control center of the terminal. It uses various interfaces and lines to connect the various parts of the entire terminal and, by running or executing software programs and/or modules stored in the memory 402 and calling data stored in the memory 402, performs the various functions of the terminal and processes data, thereby monitoring the terminal as a whole.
  • the processor 401 may include one or more processing cores; in some embodiments, the processor 401 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, and application programs, and the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may alternatively not be integrated into the processor 401.
  • the memory 402 may be used to store software programs and modules.
  • the processor 401 executes various functional applications and data processing by running the software programs and modules stored in the memory 402.
  • the memory 402 may mainly include a program storage area and a data storage area.
  • the program storage area may store an operating system, an application program required by at least one function (such as a sound playback function or an image playback function), and the like; the data storage area may store data created according to the use of the terminal, and the like.
  • the memory 402 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
  • the memory 402 may further include a memory controller to provide the processor 401 with access to the memory 402.
  • the terminal also includes a power supply 403 for supplying power to various components.
  • the power supply 403 may be logically connected to the processor 401 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
  • the power supply 403 may also include any components such as one or more DC or AC power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, and a power status indicator.
  • the terminal may also include an input module 404, which can be used to receive input digital or character information and to generate keyboard, mouse, joystick, optical, or trackball signal input related to user settings and function control.
  • the terminal may also include a communication module 405.
  • the communication module 405 may include a wireless module.
  • the terminal may perform short-distance wireless transmission through the wireless module of the communication module 405, thereby providing users with wireless broadband Internet access.
  • the communication module 405 can be used to help users send and receive emails, browse webpages, and access streaming media.
  • the terminal may also include a display unit, etc., which will not be repeated here.
  • the processor 401 in the terminal loads the executable files corresponding to the processes of one or more applications into the memory 402 according to the following instructions, and the processor 401 runs the applications stored in the memory 402, thereby implementing various functions as follows:
  • the game screen includes candidate skill areas
  • a skill effect model of the target skill is generated at at least one skill generation position.
  • an embodiment of the present application also provides a storage medium, where the storage medium is used to store a computer program, and the computer program is used to execute the method provided in the foregoing embodiment.
  • the instructions can perform the following steps:
  • the game screen includes candidate skill areas
  • a skill effect model of the target skill is generated at at least one skill generation position.
  • the storage medium may include: read only memory (ROM, Read Only Memory), random access memory (RAM, Random Access Memory), magnetic disk or optical disk, etc.
  • since the instructions stored in the storage medium can execute the steps in any information interaction method provided in the embodiments of the present application, they can achieve the beneficial effects achievable by any information interaction method provided in the embodiments of the present application; for details, see the previous embodiments, which will not be repeated here.
  • the embodiments of the present application also provide a computer program product including instructions, which when run on a computer, cause the computer to execute the method provided in the above-mentioned embodiments.


Abstract

The embodiments of this application disclose an information interaction method and related apparatus. The embodiments of this application can display a game screen, the game screen including a candidate skill area; determine a target skill based on a skill selection operation for the candidate skill area; display a virtual joystick object on the game screen; when a movement operation on the virtual joystick object is detected, calculate at least one skill generation position of the target skill based on the movement operation; and when a casting operation on the virtual joystick object is detected, generate a skill effect model of the target skill at the at least one skill generation position. In the embodiments of this application, the user can control the generation positions of multiple skill effect models of a game skill through the virtual joystick object; thus, this solution can improve the accuracy of information interaction.

Description

Information Interaction Method and Related Apparatus
This application claims priority to Chinese Patent Application No. 201910833875.2, entitled "Information interaction method, apparatus, terminal, and storage medium", filed with the China National Intellectual Property Administration on September 4, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computers, and in particular to information interaction.
Background
When certain game conditions are met, game skills can produce specific effects on specific characters, objects, areas, etc. in a game at specific times. Through a client, a user can control a virtual character in the game to cast various game skills within a virtual scene.
With the advent of touch screens, players can perform human-computer interaction by touching the display screen with a finger or another object, thereby controlling a virtual character in the game to cast game skills.
发明内容
本申请实施例提供一种信息交互方法和相关装置,可以提升信息交互的精确度。
一方面,本申请实施例提供一种信息交互方法,所述方法由终端执行,所述方法包括:
显示游戏画面,所述游戏画面包括候选技能区域;
基于针对所述候选技能区域的技能选取操作,确定目标技能;
在所述游戏画面上显示虚拟摇杆对象;
当检测到针对所述虚拟摇杆对象的移动操作时,基于所述移动操作计算所述目标技能的至少一个技能生成位置;
当检测到针对所述虚拟摇杆对象的施放操作时,在所述至少一个技能生成位置生成所述目标技能的技能效果模型。
另一方面,本申请实施例还提供一种信息交互装置,包括:
画面单元,用于显示游戏画面,所述游戏画面包括候选技能区域;
技能单元,用于基于针对所述候选技能区域的技能选取操作,确定目标技能;
摇杆单元,用于在所述游戏画面上显示虚拟摇杆对象;
位置单元,用于当检测到针对所述虚拟摇杆对象的移动操作时,基于所述移动操作计算所述目标技能的至少一个技能生成位置;
生成单元,用于当检测到针对所述虚拟摇杆对象的施放操作时,在所述至少一个技能生成位置生成所述目标技能的技能效果模型。
另一方面,本申请实施例还提供一种存储介质,所述存储介质用于存储计算机程序,所述计算机程序用于执行以上方面的信息交互方法。
另一方面，本申请实施例还提供一种终端，包括处理器和存储器，所述存储器存储有多条指令；所述处理器从所述存储器中加载指令，以执行以上方面的信息交互方法。
另一方面,本申请实施例提供了一种包括指令的计算机程序产品,当其在计算机上运行时,使得所述计算机执行以上方面的信息交互方法。
本申请实施例可以显示游戏画面,游戏画面包括候选技能区域;基于针对候选技能区域的技能选取操作,确定目标技能;在游戏画面上显示虚拟摇杆对象;当检测到针对虚拟摇杆对象的移动操作时,基于移动操作计算目标技能的至少一个技能生成位置;当检测到针对虚拟摇杆对象的施放操作时,在至少一个技能生成位置生成目标技能的技能效果模型。
在本申请中,用户可以通过游戏画面中的虚拟摇杆对象,来对游戏技能的多个技能效果模型生成位置进行控制与调整,由此,该方案可以提升信息交互的精确度。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1a是本申请实施例提供的信息交互方法的场景示意图;
图1b是本申请实施例提供的信息交互方法的流程示意图;
图1c是本申请实施例提供的虚拟摇杆对象的结构示意图;
图1d是本申请实施例提供的覆盖显示的效果示意图;
图1e是本申请实施例提供的技能生成位置与技能生成区域之间的关系示意图;
图1f是本申请实施例提供的摇杆位置和技能生成位置之间的映射关系示意图;
图1g是本申请实施例提供的技能效果分布轨迹的变化示意图;
图2a是本申请实施例提供的第一种技能效果模型的朝向示意图;
图2b是本申请实施例提供的第一种技能效果模型的朝向示意图;
图3是本申请实施例提供的信息交互装置的结构示意图;
图4是本申请实施例提供的网络设备的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请实施例提供一种信息交互方法和相关装置。
其中,该信息交互装置具体可以集成在终端中,该终端可以为手机、平板电脑、智能蓝牙设备、笔记本电脑、或者个人电脑(Personal Computer,PC)等设备。
参考图1a,该信息交互装置具体可以集成在智能手机中,智能手机中可以安装游戏软件,该智能手机可以运行该游戏软件,运行游戏软件时,该智能手机可以显示游戏画面,其中,该游戏画面可以包括候选技能区域;基于用户针对该候选技能区域的技能选取操作,可以确定目标技能;在游戏画面上显示虚拟摇杆对象;当检测到用户针对虚拟摇杆对象的移动操作时,基于移动操作计算目标技能的至少一个技能生成位置;当检测到用户针对虚拟摇杆对象的施放操作时,在至少一个技能生成位置生成目标技能的技能效果模型。
以下分别进行详细说明。需说明的是,以下实施例的序号不作为对实施例优选顺序的限定。
在本实施例中,提供了一种信息交互方法,如图1b所示,该信息交互方法的具体流程可以如下:
101、显示游戏画面,该游戏画面包括候选技能区域。
其中，游戏画面可以包括游戏软件的游戏场景画面和用户界面(User Interface,UI)，其中，游戏场景画面可以展现游戏内的场景、虚拟角色等内容，用户界面可以包括游戏按钮、文字、窗口等与游戏用户直接或间接接触的游戏设计元素。
用户可以通过游戏画面来与游戏内容进行互动。
参考图1a,游戏画面可以包括候选技能区域,该候选技能区域中可以包含至少一个候选技能的技能信息,其中,技能信息可以是技能名称、技能控件等信息。
其中,游戏技能可以是电子游戏中的一系列虚拟事件,在满足一定条件的前提下,这些虚拟事件可以在特定时间针对游戏中特定的人、物、区域等触发特定的效果。
游戏技能的基本要素包括该游戏技能的触发机制（何时以某种方式开启技能的生命周期）、技能事件（描述技能发生的原子信息）以及技能效果（对当前游戏环境造成改变）。游戏技能具体的技能效果可以由本领域技术人员根据需求制定。
一般来说,游戏技能可以包括其效果模型、其数值模型,游戏技能的实现方式是在游戏场景中生成该游戏技能的效果模型,以及将其数值模型套用到对应的目标对象上,从而达到技能效果。
其中,游戏技能的效果模型可以具有多种类型,比如,场景模型、建筑模型、人物模型、动画模型、粒子效果模型,等等。
例如,在一些实施例中,游戏技能A的游戏效果是:在该游戏技能A的技能施放对象周围同时召唤5个虚拟随从,则其实现方式为:在该游戏技能A的技能施放对象模型周围生成5个虚拟随从模型。
102、基于针对候选技能区域的技能选取操作,确定目标技能。
用户针对候选技能区域的技能选取操作的操作方式具有多种,比如,用户可以通过拖拽、点击、划动、按压、触摸等方式来在候选技能区域进行技能选取。除此之外,技能选取操作也可以是通过模拟用户操作的方式生成,例如模拟针对候选技能区域中指定位置的操作来生成技能选取操作。除了技能选取操作,本申请提及的其他操作(例如移动操作、释放操作等各类操作)也均可通过模拟的方式生成。
候选技能区域中可以包括至少一个候选技能的技能信息,该技能信息可以是技能控件,该控件可以以图标、符号、按钮等形式表现。通过用户针对候选技能区域的技能选取操作,可以在这些候选技能中确定目标技能。
在一些实施例中,为了使得技能施放更加直观、优化在多技能情况下的技能施放,从而进一步提升信息交互的精确度,候选技能区域可以包括至少一个候选技能的技能控件,故步骤102具体可以包括以下步骤:
基于针对技能控件的选取操作,在至少一个候选技能中确定目标技能。
例如,候选技能区域中可以包括多个候选技能的技能图标,用户可以在多个候选技能的技能图标中选取一个候选技能进行点击,即将该候选技能确定为目标技能。
再例如,候选技能区域中可以包括一个候选技能的技能图标,用户可以在候选技能区域中向下划动该候选技能的技能图标,使得候选技能区域切换显示到下一候选技能的技能图标,并将该下一候选技能确定为目标技能;以及,用户可以在候选技能区域中向上划动该候选技能的技能图标,使得候选技能区域切换显示到上一候选技能的技能图标,并将该上一候选技能确定为目标技能。
103、在游戏画面上显示虚拟摇杆对象。
虚拟摇杆对象是一种可以用于人机交互的虚拟构件,用户通过控制虚拟摇杆对象的运动变化来向终端输入信息。
在一些实施例中,参考图1c,虚拟摇杆对象可以包括预设摇杆控件、预设摇杆移动范围,以及该预设摇杆移动范围的范围中心轴。
用户可以控制预设摇杆控件在预设摇杆移动范围内任意移动,当用户控制该预设摇杆控件进行移动时,该预设摇杆控件和预设摇杆移动范围的范围中心轴之间的相对距离、方向即可作为用户输入的信息与游戏进行交互。
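上述“以预设摇杆控件与范围中心轴之间的相对距离、方向作为输入信息”的过程，可用如下Python代码示意（函数名、参数名均为本文举例所设，并非本申请限定的实现；其中“超出移动范围时截断到边界”为本文补充的假设处理方式）：

```python
import math

def joystick_input(control_pos, center, max_range):
    """将预设摇杆控件相对范围中心轴的位移转换为输入信息：
    返回（相对距离，方向角（弧度）），并在超出预设摇杆移动范围时
    将位移截断到范围边界。"""
    dx = control_pos[0] - center[0]
    dy = control_pos[1] - center[1]
    dist = math.hypot(dx, dy)
    if dist > max_range:  # 超出移动范围时截断到边界（本文补充的假设）
        dx, dy = dx * max_range / dist, dy * max_range / dist
        dist = max_range
    return dist, math.atan2(dy, dx)
```

例如控件位于(3,4)、中心轴为原点、移动范围10时，相对距离为5；控件被拖到(6,8)而移动范围仅为5时，位移被截断，相对距离为5。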
在一些实施例中,具体地,步骤103可以包括以下步骤:
在目标技能的技能控件上覆盖显示虚拟摇杆对象。
覆盖显示是指将虚拟摇杆对象与技能控件层叠,且使得虚拟摇杆对象处于技能控件上方之后,再对虚拟摇杆对象和技能控件进行显示。
其中,覆盖可以是全部覆盖、部分覆盖等覆盖方式。
比如，参考图1d，当目标技能为技能A，技能A的技能控件的表现形式为图标时，则在目标技能的技能A图标上覆盖显示虚拟摇杆对象，即将虚拟摇杆对象与目标技能的技能A图标层叠，且使得虚拟摇杆对象处于技能A图标上方之后，再对虚拟摇杆对象和技能A图标进行显示。
在一些实施例中,为了便于用户随时停止施放游戏技能,游戏画面还可以包括取消施放控件,故步骤103之后具体还可以包括以下步骤:
当检测到针对取消施放控件的取消施放操作时,在游戏画面中停止显示虚拟摇杆对象。
例如,参考图1a,当检测到针对取消施放控件的取消施放操作时,在游戏画面中停止显示虚拟摇杆对象。
以及,在一些实施例中,当检测到针对取消施放控件的取消施放操作时,还可以在游戏画面中停止显示目标技能的技能预览效果模型和技能生成区域,其中,技能预览效果模型和技能生成区域的具体介绍可以参考步骤104中的叙述,在此不做赘述。
104、当检测到针对虚拟摇杆对象的移动操作时,基于移动操作计算目标技能的至少一个技能生成位置。
其中,用户可以通过拖拽、划动、点击等来对虚拟摇杆对象的虚拟摇杆控件进行移动。
技能生成位置是指游戏技能触发时,技能效果模型在游戏场景中实际生成的位置,该技能生成位置可以为一个,也可以为多个。
比如,触发技能X时,可以在游戏场景的场景最东端和场景最西端生成两团雷云,则技能X的技能效果模型为生成两团雷云,技能X的技能生成位置为最东端(100,0)和最西端(-100,0)。
在一些实施例中,虚拟摇杆对象可以包括虚拟摇杆控件、预设摇杆移动范围,步骤104可以包括以下步骤:
(1)确定目标技能的技能施放对象和预设技能施放范围,获取技能施放对象在游戏画面中的当前位置。
其中，技能施放对象是指该游戏技能的技能效果模型生效的虚拟对象，其中，该虚拟对象可以是角色虚拟对象、物品虚拟对象、场景虚拟对象等等。比如，游戏技能X的技能施放对象是游戏场景中施放该技能的虚拟角色；再比如，游戏技能Y的技能施放对象是游戏场景中用户选择的其他虚拟角色；再比如，游戏技能Z的技能施放对象是游戏场景中用户选择的某一虚拟物品、某一虚拟地点，等等。
技能施放对象在游戏画面中的当前位置指技能施放对象在游戏内的位置,比如,技能施放对象为游戏场景内的一颗树,该树在游戏某场景中的当前位置为(299,5,41)。
预设技能施放范围是指游戏技能最大的施放范围,可以为范围距离、范围半径、范围面积、范围体积等等,该预设技能施放范围可以由游戏开发人员设置,用于提高游戏性、保持游戏平衡性等等。
例如,游戏技能“召唤雷云”的技能施放对象为用户指定的游戏场景中的某一地点,其预设技能施放范围为半径为8米的圆形。
在一些实施例中,为了使得用户直观地观察到可以施放技能的区域、优化用户体验,在步骤“确定目标技能的预设技能施放范围和技能施放对象,获取技能施放对象在游戏画面中的当前位置”之后,还可以包括以下步骤:
以技能施放对象的当前位置为中心,基于预设技能施放范围确定目标技能在游戏画面中的技能生成区域;
在游戏画面上显示技能生成区域。
其中,技能生成区域是指游戏技能的技能效果模型在游戏场景中可生成的区域,即,技能生成位置所在的区域。
以技能施放对象的当前位置为中心,基于预设技能施放范围确定目标技能在游戏画面中的技能生成区域的方式具有多种,比如,技能生成区域可以是以当前位置为圆心、预设技能施放范围为半径的圆形、扇形、球型,等等;再比如,技能生成区域可以是以当前位置为中心、预设技能施放范围为边长的正方形、菱形、金字塔型,等等。
例如，参考图1e，当用户已经选出了目标技能后，可以在游戏场景中显示技能生成区域，该技能生成区域为以技能施放对象的当前位置为圆心、预设技能施放范围d为半径的一个圆形区域，用户可以通过虚拟摇杆对象来控制目标技能的技能生成位置(x,y)在该技能生成区域中的具体位置。
在一些实施例中，技能生成区域可以以高亮的形式在游戏场景中显示，比如，可以以蓝色显示该技能生成区域的边缘、以灰色填充该技能生成区域，并显示该填充后的技能生成区域，等等。
(2)当检测到针对虚拟摇杆对象的移动操作时,获取虚拟摇杆控件在预设摇杆移动范围中的摇杆位置。
当用户针对虚拟摇杆对象进行移动操作时,预设摇杆控件可以在摇杆移动范围中移动。
参考图1c,以范围中心轴为坐标轴零点建立摇杆坐标系,预设摇杆控件在该摇杆坐标系中的位置(x,y)即摇杆位置。
(3)基于预设摇杆移动范围、摇杆位置、预设技能施放范围、当前位置,计算目标技能的至少一个技能生成位置。
在一些实施例中，摇杆移动范围为摇杆可移动的最大范围，该范围的类型具有多种，比如，可以为范围距离、范围半径、范围面积、范围体积等等；该预设摇杆移动范围可以由游戏开发人员设置，用于提高游戏性、保持游戏平衡性等等，也可以由用户自行调整。
技能生成位置的数量可以在游戏技能中定义,在以预设技能施放范围为半径的圆形区域(即,技能生成区域)中的技能生成位置,可以与在以预设摇杆移动范围为半径的圆形区域(即,预设摇杆移动区域)中的摇杆位置映射。
在一些实施例中,步骤“基于预设摇杆移动范围、摇杆位置、预设技能施放范围、当前位置,计算目标技能的至少一个技能生成位置”具体可以包括以下步骤:
A.确定预设摇杆移动范围和预设技能施放范围之间的交互范围比例;
B.根据交互范围比例和摇杆位置确定相对位置,相对位置为技能生成位置和技能施放对象之间相对的位置;
C.根据技能施放对象在游戏画面中的当前位置,确定相对位置在游戏画面中的技能生成位置。
比如，参考图1f，预设摇杆移动范围为半径r，用户将虚拟摇杆对象的预设摇杆控件移动到了摇杆位置(x,y)、目标技能的预设技能施放范围为半径R，技能施放对象的当前位置为(a,b)，则技能生成位置(X,Y)的计算方法如下：
X=x*R/r+a，Y=y*R/r+b
其中，交互范围比例为R/r，即预设摇杆移动范围和预设技能施放范围之间的映射比例；相对位置包括x轴相对位置和y轴相对位置，x轴相对位置为x*R/r，y轴相对位置为y*R/r。
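上述映射计算可用如下Python代码示意（函数名与参数名均为本文举例所设，并非本申请限定的实现；示例取值与下文“召唤术：战士”的算例一致）：

```python
def skill_position(joystick_pos, joystick_range, cast_range, current_pos):
    """根据摇杆位置(x,y)计算技能生成位置(X,Y)：
    交互范围比例 = 预设技能施放范围R / 预设摇杆移动范围r，
    相对位置 = 摇杆位置 × 比例，
    技能生成位置 = 技能施放对象当前位置(a,b) + 相对位置。"""
    x, y = joystick_pos
    a, b = current_pos
    ratio = cast_range / joystick_range
    return ratio * x + a, ratio * y + b

# 取下文“召唤术：战士”算例的数值：R=7.2、r=3.6、
# 当前位置(10,10)、摇杆位置(0.3,-0.4)，结果约为(10.6, 9.2)
print(skill_position((0.3, -0.4), 3.6, 7.2, (10, 10)))
```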
在一些实施例中,可以在生成目标技能的技能效果模型时,根据技能生成位置与技能施放对象之间的相对位置来控制技能效果模型的效果朝向,比如技能“召唤随从”的技能效果是召唤3个虚拟随从,则根据用户控制的召唤地点(即,技能生成位置)与技能施放对象之间的相对位置,来确定这些虚拟随从的相对方向,使得这些随从的正面朝向这些相对方向。
故步骤“根据交互范围比例和摇杆位置确定相对位置”之后，还可以包括以下步骤：
根据相对位置计算技能效果模型相对于所述技能施放对象的相对方向。
比如，相对位置为(A,B)，则相对方向为arctan(B/A)；再比如相对位置为极坐标(ρ,θ)，则相对方向为θ。
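上述相对方向的计算可用如下Python代码示意（函数名为本文举例所设；此处用atan2代替arctan(B/A)以正确处理A为0或为负的象限情形，这一处理方式为本文补充的假设）：

```python
import math

def relative_direction(rel_pos):
    """由相对位置(A,B)计算技能效果模型相对于技能施放对象的相对方向（弧度）。
    文中为arctan(B/A)；atan2(B,A)在A为0或负数时也能给出正确方向。"""
    A, B = rel_pos
    return math.atan2(B, A)

# 相对位置(1,1)对应的相对方向约为π/4
print(relative_direction((1.0, 1.0)))
```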
在一些实施例中,目标技能可以在多个技能生成位置生成技能效果模型,比如,技能“召唤随从”可以在用户指定的技能生成位置生成3个虚拟随从的游戏模型,为了进一步提升信息交互的精确度、降低操作复杂度,本实施例提供技能效果模型分布轨迹的概念,目标技能的技能生成位置均在该技能效果模型分布轨迹上,步骤C可以包括步骤a、b、c、d、e,如下:
a.确定目标技能的技能生成位置数量。
b.根据相对位置计算目标技能的技能效果模型分布半径。
技能生成位置数量可以在游戏技能中预先制定,比如,技能“召唤随从”的技能生成位置数量为3,触发该技能后可以在用户指定的技能生成位置生成3个虚拟随从的游戏模型。
根据技能效果模型分布半径可以计算出该目标技能的技能效果模型分布轨迹。
根据相对位置计算目标技能的技能效果模型分布半径的方式具有多种，比如，相对位置越远，技能效果模型分布半径越小；再比如，相对位置越近，技能效果模型分布半径越小，等等。
在一些实施例中,步骤b具体可以包括以下步骤:
获取预设系数;
根据相对位置计算相对距离;
根据预设系数对相对距离进行加权求和,得到目标技能的技能效果模型分布半径。
技能效果模型分布半径R的计算公式如下：
R=K*d
其中，预设系数K可以由游戏开发人员预先设定，相对距离d的计算公式如下：
d=x²+y²
其中，相对位置为(x,y)。
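上述分布半径的计算可用如下Python代码示意（函数名与K的取值均为本文举例所设）：

```python
def distribution_radius(rel_pos, K):
    """按上式计算技能效果模型分布半径：
    相对距离 d = x² + y²（文中即按平方和定义，未开方），
    分布半径 R = K × d，K为游戏开发人员预设的系数。"""
    x, y = rel_pos
    d = x * x + y * y
    return K * d
```

以下文算例的数值验证：相对位置为(1.25,3.75)、K=6时，d=15.625，分布半径为93.75。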
c.以相对位置为圆心,基于技能效果模型分布半径确定目标技能的技能效果模型分布轨迹。
比如,参考图1g,其中,大圈为预设技能施放范围,小圈为以技能效果模型分布半径为半径的技能效果模型分布轨迹,三角形为技能施放对象的相对位置;相对位置越远(即,相对距离越大),则小圈越来越大。
在一些实施例中,由于游戏技能的技能效果模型具有体积,则当一个游戏技能具有多个技能生成位置时,要保证在这些技能生成位置上生成的技能效果模型不会相互碰撞、交叉,故参考图1g的多个小圈中,最左边的4个小圈都保持同样的大小,使得在相对位置小于一定数值时,技能效果模型分布轨迹的技能效果模型分布半径不再减小,步骤c具体可以包括以下步骤:
获取目标技能的技能效果模型预设分布体积;
对技能效果模型预设分布体积和技能生成位置数量进行相乘计算,得到最小技能效果模型分布半径;
当目标技能的技能效果模型分布半径不小于最小技能效果模型分布半径时,以相对位置为圆心,基于技能效果模型分布半径确定目标技能的技能效果模型分布轨迹;
当目标技能的技能效果模型分布半径小于最小技能效果模型分布半径时, 以相对位置为圆心、基于最小技能效果模型分布半径确定目标技能的技能效果模型分布轨迹。
其中,技能效果模型预设分布体积是技能效果模型的游戏模型体积,比如,触发技能“种树”时,可以在4个技能生成位置每个位置生成1棵虚拟树的游戏模型,该虚拟树的模型体积为3米*3米,则最小技能效果模型分布半径为3*4=12米。
当技能“种树”的技能效果模型分布半径不小于12米时,以相对位置为圆心,基于技能效果模型分布半径确定目标技能的技能效果模型分布轨迹;
当技能“种树”的技能效果模型分布半径小于12米时,以相对位置为圆心、基于12米确定目标技能的技能效果模型分布轨迹。
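上述最小分布半径的下限处理可用如下Python代码示意（函数名为本文举例所设；以模型边长近似“预设分布体积”，与“种树”示例中3米的用法一致）：

```python
def clamped_distribution_radius(radius, unit_size, count):
    """当分布半径小于最小技能效果模型分布半径时取下限：
    最小分布半径 = 技能效果模型预设分布尺寸 × 技能生成位置数量，
    以防止多个技能效果模型相互碰撞、交叉。"""
    min_radius = unit_size * count
    return max(radius, min_radius)

# “种树”示例：边长3米、4个生成位置，最小分布半径为3*4=12米
print(clamped_distribution_radius(8, 3, 4))   # 半径8米被抬升到12米
print(clamped_distribution_radius(20, 3, 4))  # 半径20米保持不变
```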
d.基于至少一个技能生成位置数量,在技能效果模型分布轨迹上确定多个技能效果模型分布点。
基于技能生成位置数量,在技能效果模型分布轨迹上确定多个技能效果模型分布点的方法具有多种,比如,以技能生成位置数量将技能效果模型分布轨迹均分,将每个均分点作为技能效果模型分布点。
在一些实施例中,可以在生成目标技能的技能效果模型时,根据技能效果模型分布点的垂线方向来控制技能效果模型的效果朝向,比如技能“召唤随从”的技能效果是召唤3个虚拟随从,则将分布点的垂线方向规定为这些虚拟随从的相对方向,使得这些随从的正面朝向这些相对方向。
故步骤d之后,还可以包括以下步骤:
基于技能效果模型分布半径,确定技能效果模型分布点的垂线方向。
e.根据技能施放对象在游戏画面中的当前位置,确定技能效果模型分布点在游戏画面中的位置作为技能生成位置。
比如,技能施放对象在游戏画面中的当前位置为(a,b),则技能效果模型分布点(m,n)在游戏画面中的位置,即技能生成位置为(m+a,n+b)。
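步骤d、e的一种实现方式可用如下Python代码示意：将分布轨迹圆周均分后，再加上技能施放对象的当前位置得到各技能生成位置（函数名与均分的起始角度均为本文举例所设）：

```python
import math

def distribution_points(rel_pos, radius, count, current_pos):
    """以相对位置为圆心、分布半径为半径，将圆周均分为count份，
    得到技能效果模型分布点(m,n)；再加上技能施放对象的当前位置(a,b)，
    得到各技能生成位置(m+a,n+b)。"""
    cx, cy = rel_pos
    a, b = current_pos
    positions = []
    for i in range(count):
        theta = 2 * math.pi * i / count  # 均分角度，起始角取0仅为举例
        m = cx + radius * math.cos(theta)
        n = cy + radius * math.sin(theta)
        positions.append((m + a, n + b))
    return positions
```

例如圆心为相对位置(0,0)、半径1、4个生成位置、当前位置(10,10)时，4个技能生成位置均匀分布在以(10,10)为圆心的单位圆上。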
105、当检测到用户针对虚拟摇杆对象的施放操作时,在至少一个技能生成位置生成目标技能的技能效果模型。
比如，技能“召唤随从”的技能效果是召唤3个虚拟随从，则当检测到用户针对虚拟摇杆对象的施放操作时，在游戏场景内的3个技能生成位置生成虚拟随从模型。
在一些实施例中,为了使得技能效果模型正面朝向相对方向,基于步骤104的步骤B中得到的相对方向,步骤105可以包括以下步骤:
当检测到用户针对虚拟摇杆对象的施放操作时,基于相对方向修改目标技能的技能效果模型方向,得到方向修改后的目标技能;
在技能生成位置生成方向修改后的目标技能的技能效果模型。
比如，触发技能“召唤随从”时要生成3个虚拟随从模型，虚拟随从模型的模型方向均为ω，在步骤104的步骤B中得到的相对方向为θ，则将虚拟随从模型的模型方向修改为ω+θ。
类似地,在一些实施例中,技能效果模型正面朝向垂线方向,根据步骤104的步骤d中得到的垂线方向,步骤105可以包括以下步骤:
当检测到用户针对虚拟摇杆对象的施放操作时,基于垂线方向修改目标技能的技能效果模型方向,得到方向修改后的目标技能;
在技能生成位置生成方向修改后的目标技能的技能效果模型。
在一些实施例中,为了防止用户施放的游戏技能超出技能生成区域,从而进一步提升信息交互精度,根据步骤104的步骤(1)中得到的技能生成区域,步骤105具体可以包括以下步骤:
当技能生成位置属于技能生成区域时,在技能生成位置生成目标技能的技能效果模型。
在一些实施例中，为了在施放技能前，使得用户在控制虚拟摇杆对象的同时可以直观地观察到目标技能的施放位置，使得用户可以一边观察一边调整技能生成位置，故步骤104之后还可以包括以下步骤：
在至少一个技能生成位置生成目标技能的技能预览效果模型。
其中,技能预览效果模型是供用户进行预览的效果模型,在游戏场景中生成技能预览效果模型时游戏技能往往还未生效,当在游戏场景中生成技能效果模型时,该游戏技能才正式生效。
通过生成目标技能的技能预览效果模型,可以使得用户把握游戏施放的精度。
在一些实施例中，当检测到用户针对虚拟摇杆对象的施放操作时，可以在至少一个技能生成位置停止生成目标技能的技能预览效果模型，并生成目标技能的技能效果模型。
例如,游戏技能“召唤随从”的技能效果是:在技能施放对象周围召唤3个虚拟随从,虚拟随从可以对附近的敌人进行远程攻击。则在用户通过虚拟摇杆对象调整技能生成位置时,在该技能生成位置会生成该游戏技能的技能预览效果模型,比如虚拟随从的虚影模型。直到用户进行施放操作,在技能生成位置移除虚拟随从的虚影模型,并生成该游戏技能的技能效果模型,比如虚拟随从的游戏模型,且该虚拟随从可以对附近的敌人进行远程攻击。
由上可知,通过本申请实施例提供的方法可以显示游戏画面,游戏画面包括候选技能区域;基于用户针对候选技能区域的技能选取操作,确定目标技能;在游戏画面上显示虚拟摇杆对象;当检测到用户针对虚拟摇杆对象的移动操作时,基于移动操作计算目标技能的至少一个技能生成位置;当检测到用户针对虚拟摇杆对象的施放操作时,在至少一个技能生成位置生成目标技能的技能效果模型。由此,本方案可以通过虚拟摇杆对象来控制游戏技能的多个技能效果模型生成位置,使得游戏技能施放地更加精确,从而提升信息交互的精确度。
根据上述实施例所描述的方法,以下将作进一步详细说明。
在本实施例中，将以信息交互方法应用在以智能手机为终端的系统中为例，对本申请实施例的方法进行详细说明。
在该系统中，玩家可以通过操作自身虚拟角色施放游戏技能来与其他玩家操作的敌对虚拟角色进行游戏对战，在该系统中，游戏技能具有多种类型，比如，召唤类型和法术类型。
其中,召唤类型的游戏技能是在游戏场景中生成一个或多个召唤单位的模型;法术类型的游戏技能是在游戏场景中生成一个法术特效的模型。
在本实施例中,信息交互方法的流程如下:
(一)显示游戏画面,游戏画面包括候选技能区域。
其中,游戏画面可以包括候选技能区域。
游戏画面还可以包括玩家自身虚拟角色的角色信息,比如昵称、自身虚拟角色血量、自身虚拟角色的增益效果,等等。
游戏画面还可以包括对战时间控件，该对战时间控件可以用于显示玩家对战的持续时间。
除此之外,游戏画面还可以包括用于控制自身虚拟角色移动的第二虚拟摇杆对象,比如,在游戏画面的左下角可以包括第二虚拟摇杆对象。
(二)基于针对候选技能区域的技能选取操作,确定目标技能。
候选技能区域可以包括多个候选技能控件,该候选技能控件可以是候选技能图标。
比如,候选技能区域可以包括3个候选技能图标,分别是技能“光球术”的技能图标、技能“召唤术:战士”的技能图标、技能“召唤术:弓箭手”的技能图标。
当玩家点击技能“召唤术:战士”的技能图标时,将技能“召唤术:战士”确定为目标技能;当玩家点击技能“召唤术:弓箭手”的技能图标时,将技能“召唤术:弓箭手”确定为目标技能;当玩家点击技能“光球术”的技能图标时,将技能“光球术”确定为目标技能。
(三)在游戏画面上显示虚拟摇杆对象。
在一些实施例中,游戏画面中包括第二虚拟摇杆对象,该第二虚拟摇杆对象可以用于控制自身虚拟角色的移动,同时,游戏画面中包括第一虚拟摇杆对象,该第一虚拟摇杆对象可以用于控制游戏技能的施放。
(四)当该目标技能的技能类型为召唤类型时,且当检测到用户针对虚拟摇杆对象的拖动操作时,基于拖动操作计算目标技能的至少一个技能生成位置。
在一些实施例中,当该目标技能的技能类型为法术类型时,可以采用常规的技能施放方式进行信息交互;当该目标技能的技能类型为召唤类型时,采用本信息交互方法进行信息交互。由此,保证了玩家在施放法术技能和召唤技能时操作的一致性,降低了操作复杂度,从而进一步提高了操作精度。
在一些实施例中,假设目标技能是技能“召唤术:战士”,技能“召唤术:战士”的技能效果是:在自身虚拟角色周围7.2米内召唤一个战士随从单位,战士随从单位对附近的敌对虚拟角色进行近战攻击。
则该技能“召唤术：战士”的预设技能施放范围为7.2米，当玩家自身虚拟角色所在的当前位置为(10,10)，预设摇杆移动范围为3.6米，玩家将虚拟摇杆控件移动到摇杆位置(0.3,-0.4)时，基于拖动操作计算目标技能的技能生成位置(X,Y)的方式如下：
X=7.2/3.6*0.3+10=0.6+10=10.6
Y=7.2/3.6*(-0.4)+10=9.2
则技能“召唤术:战士”的技能生成位置为(10.6,9.2)。
在另一些实施例中,假设目标技能是技能“召唤术:弓箭手”,技能“召唤术:弓箭手”的技能效果是:在自身虚拟角色周围10米内召唤3个弓箭手随从单位,弓箭手随从单位对附近的敌对虚拟角色进行远程攻击。
则该技能“召唤术:弓箭手”的预设技能施放范围为10米,技能“召唤术:弓箭手”的技能生成位置数量为3,当玩家自身虚拟角色所在的当前位置为(1,2),预设摇杆移动范围为4米,玩家将虚拟摇杆控件移动到摇杆位置(0.5,1.5)时,基于拖动操作计算目标技能的技能生成位置(X,Y)的方式如下:
(1)根据相对位置计算目标技能的技能效果模型分布半径。
技能“召唤术:弓箭手”的弓箭手随从单位的模型体积为1米*1米,为了防止玩家指定的技能生成位置过于接近玩家自身虚拟角色从而导致的3个弓箭手随从单位出现模型重叠、挤压等现象,在此可以设定最小技能效果模型分布半径min_r=3*1米=3米。
首先,计算相对位置(x,y):
x=10/4*0.5=1.25
y=10/4*1.5=3.75
再计算相对距离d：
d=1.25²+3.75²=15.625
当预设系数K=6时，根据相对距离计算目标技能的技能效果模型分布半径r的计算方法如下：
r=6*15.625=93.75
(2)以相对位置为圆心,基于技能效果模型分布半径确定目标技能的技能效果模型分布轨迹。
此时，目标技能的技能效果模型分布半径r不小于最小技能效果模型分布半径min_r，则此时以相对位置(1.25,3.75)为圆心，基于技能效果模型分布半径r=93.75米，确定目标技能的技能效果模型分布轨迹。
(3)基于技能生成位置数量，在技能效果模型分布轨迹上确定多个技能效果模型分布点。
参考图2a和图2b，五角星为自身虚拟角色，大圈为预设技能施放范围，小圈为技能效果模型分布轨迹，箭头指向的是相对位置(1.25,3.75)，该技能效果模型分布轨迹为圆心为(1.25,3.75)、半径r为93.75米的圆。
在本实施例中，可以将3个弓箭手随从单位平均分布在离自身虚拟角色较远一端的半圆上。即，将距离当前位置(1,2)较远一端的技能效果模型分布轨迹的半圆进行技能生成位置数量+1次等分（即，4等分），技能效果模型分布点为技能效果模型分布轨迹上的小黑点。
(4)根据技能施放对象在游戏画面中的当前位置，确定技能效果模型分布点在游戏画面中的位置，得到技能生成位置，并在技能生成位置生成目标技能的技能预览模型。
比如，技能效果模型分布点分别为(1,1)、(1.5,0)、(1,-1)，技能施放对象在游戏画面中的当前位置为(10,10)，则技能生成位置为(11,11)、(11.5,10)、(11,9)。
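上述“较远一端半圆count+1次等分”的计算可用如下Python代码示意（函数名为本文举例所设；以技能施放对象为局部坐标原点表示相对位置，仅为对本实施例分布方式的一种示意性实现）：

```python
import math

def far_semicircle_points(rel_pos, radius, count):
    """将离技能施放对象较远一端的半圆进行count+1次等分，
    取count个内部等分点作为技能效果模型分布点。
    rel_pos为分布轨迹圆心（相对施放对象的局部坐标）。"""
    cx, cy = rel_pos
    away = math.atan2(cy, cx)  # 从施放对象指向圆心的方向，即较远一端的朝向
    points = []
    for k in range(1, count + 1):
        theta = away - math.pi / 2 + math.pi * k / (count + 1)
        points.append((cx + radius * math.cos(theta),
                       cy + radius * math.sin(theta)))
    return points

# 例：圆心(0,1)、半径1、3个弓箭手时，中间分布点位于半圆最远端(0,2)附近
print(far_semicircle_points((0.0, 1.0), 1.0, 3))
```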
在本实施例中,技能“召唤术:弓箭手”的技能预览模型为弓箭手随从单位预览模型。
参考图2a,在一些实施例中,弓箭手随从单位预览模型为三角形,其朝向为虚线箭头指向方向,即,其朝向可以由相对位置和技能生成位置计算得到。
参考图2b,在另一些实施例中,弓箭手随从单位预览模型为三角形,其朝向为虚线箭头指向方向,即,其朝向可以由当前位置和技能生成位置计算得到。
在技能生成位置生成目标技能的技能预览模型之前，需要将技能预览模型的朝向修改为上述计算得到的朝向。
(五)当检测到用户针对虚拟摇杆对象的施放操作时,在至少一个技能生成位置生成目标技能的技能效果。
类似于上述生成目标技能的技能预览模型,当检测到用户针对虚拟摇杆对象的施放操作时,可以在上述3个技能生成位置,按照修改后的朝向生成3个弓箭手随从单位模型。
通过上述方法,玩家可以通过虚拟摇杆对象来准确地施放召唤技能,同时,玩家还可以通过传统轮盘的方式施放法术技能,此时,可以提高施放召唤技能和施放法术技能的操作一致性。
此外,本方案还可以在游戏技能可以生成多个技能效果模型的情况下,同时调整技能效果模型的朝向、分布密度等。
故对传统轮盘的操作方式、拖拽技能的操作方式以及本方案的操作方式进行对比，其效果如表1所示：

| 方案 | 位置 | 多单位 | 多单位密度 | 操作一致性 |
| --- | --- | --- | --- | --- |
| 传统轮盘 | 仅控制角度 | 不能 | 不能 | 一致 |
| 拖拽 | 能 | 能 | 不能 | 不一致 |
| 本设计方案 | 能 | 能 | 能 | 一致 |

表1
由上可知,本方案可以显示游戏画面,游戏画面包括候选技能区域;基于用户针对候选技能区域的技能选取操作,确定目标技能;在游戏画面上显示虚拟摇杆对象;当该目标技能的技能类型为召唤类型时,且当检测到用户针对虚拟摇杆对象的拖动操作时,基于拖动操作计算目标技能的至少一个技能生成位置;当检测到用户针对虚拟摇杆对象的施放操作时,在至少一个技能生成位置生成目标技能的技能效果。由此,本方案可以通过虚拟摇杆对象来同时控制游戏技能效果模型的施放位置、施放方向,以及自动地控制游戏技能效果模型的施放密度,从而在降低操作复杂度的同时,提升了信息交互的精确度。
为了更好地实施以上方法,本申请实施例还提供一种信息交互装置,该信息交互装置具体可以集成在终端中,该终端可以为智能手机、平板电脑、智能蓝牙设备、笔记本电脑、或者个人电脑等设备。
比如,在本实施例中,将以信息交互装置集成在智能手机中为例,对本申请实施例的方法进行详细说明。
例如,如图3所示,该信息交互装置可以包括画面单元301、技能单元302、摇杆单元303、位置单元304以及生成单元305如下:
(一)画面单元301:
画面单元301可以用于显示游戏画面,其中,该游戏画面可以包括候选技能区域。
(二)技能单元302:
技能单元302可以用于基于针对候选技能区域的技能选取操作,确定目标技能。
在一些实施例中,候选技能区域包括至少一个候选技能的技能控件,技能单元302具体可以用于:
基于针对技能控件的选取操作,在至少一个候选技能中确定目标技能。
(三)摇杆单元303:
摇杆单元303可以用于在游戏画面上显示虚拟摇杆对象。
在一些实施例中,游戏画面还包括取消施放控件,摇杆单元303还可以用于:
当检测到针对取消施放控件的取消施放操作时,在游戏画面中停止显示虚拟摇杆对象。
(四)位置单元304:
当检测到针对虚拟摇杆对象的移动操作时,位置单元304可以用于基于移动操作计算目标技能的至少一个技能生成位置。
在一些实施例中,虚拟摇杆对象可以包括虚拟摇杆控件、预设摇杆移动范围,位置单元304可以包括当前位置子单元、摇杆子单元和位置子单元,如下:
(1)当前位置子单元
当前位置子单元可以用于确定目标技能的技能施放对象和预设技能施放范围,获取技能施放对象在游戏画面中的当前位置;
在一些实施例中,当前位置子单元还可以用于:
以技能施放对象的当前位置为中心,基于预设技能施放范围确定目标技能在游戏画面中的技能生成区域;
在游戏画面上显示技能生成区域。
(2)摇杆子单元:
摇杆子单元可以用于当检测到用户针对虚拟摇杆对象的移动操作时,获取虚拟摇杆控件在预设摇杆移动范围中的摇杆位置;
(3)位置子单元:
位置子单元可以用于基于预设摇杆移动范围、摇杆位置、预设技能施放范围、当前位置,计算目标技能的至少一个技能生成位置。
在一些实施例中,位置子单元可以包括比例子模块、相对位置子模块和生成位置子模块,如下:
A.比例子模块:
比例子模块可以用于确定预设摇杆移动范围和预设技能施放范围之间的交互范围比例;
B.相对位置子模块:
相对位置子模块可以用于根据交互范围比例和摇杆位置确定相对位置,相对位置为技能生成位置和技能施放对象之间相对的位置;
在一些实施例中,相对位置子模块还可以用于:
根据相对位置计算技能效果模型的相对方向;
在至少一个技能生成位置生成目标技能的技能效果模型,包括:
基于相对方向修改目标技能的技能效果模型方向,得到方向修改后的目标技能;
在技能生成位置生成方向修改后的目标技能的技能效果模型。
C.生成位置子模块:
生成位置子模块可以用于根据技能施放对象在游戏画面中的当前位置,确定相对位置在游戏画面中的技能生成位置。
在一些实施例中,生成位置子模块可以用于:
确定目标技能的技能生成位置数量;
根据相对位置计算目标技能的技能效果模型分布半径;
以相对位置为圆心,基于技能效果模型分布半径确定目标技能的技能效果模型分布轨迹;
基于技能生成位置数量,在技能效果模型分布轨迹上确定多个技能效果模型分布点;
根据技能施放对象在游戏画面中的当前位置,确定技能效果模型分布点在游戏画面中的位置作为技能生成位置。
(五)生成单元305:
当检测到用户针对虚拟摇杆对象的施放操作时,生成单元305可以用于在至少一个技能生成位置生成目标技能的技能效果模型。
在一些实施例中，摇杆单元303具体可以用于：
在目标技能的技能控件上覆盖显示虚拟摇杆对象。
在一些实施例中,生成单元305具体可以用于:
当技能生成位置属于技能生成区域时,在技能生成位置生成目标技能的技能效果模型。
在一些实施例中,生成单元305可以用于:
当检测到用户针对虚拟摇杆对象的施放操作时,在至少一个技能生成位置停止生成目标技能的技能预览效果模型,并生成目标技能的技能效果模型。
在一些实施例中,生成单元305还可以用于:
在至少一个技能生成位置生成目标技能的技能预览效果模型。
具体实施时，以上各个单元可以作为独立的实体来实现，也可以进行任意组合，作为同一或若干个实体来实现，以上各个单元的具体实施可参见前面的方法实施例，在此不再赘述。
由上可知,本实施例的信息交互装置由画面单元显示游戏画面,游戏画面包括候选技能区域;由技能单元基于用户针对候选技能区域的技能选取操作,确定目标技能;由摇杆单元在游戏画面上显示虚拟摇杆对象;当检测到用户针对虚拟摇杆对象的移动操作时,由位置单元基于移动操作计算目标技能的至少一个技能生成位置;当检测到用户针对虚拟摇杆对象的施放操作时,由生成单元在至少一个技能生成位置生成目标技能的技能效果模型。由此,本方案可以提升信息交互的精确度。
本申请实施例还提供一种终端,该终端可以为手机、平板电脑、智能蓝牙设备、笔记本电脑、或者个人电脑等设备。
在一些实施例中，该终端可以是一个分布式系统中的一个节点，其中，该分布式系统可以为区块链系统，该区块链系统可以是由该多个节点通过网络通信的形式连接形成的分布式系统。其中，节点之间可以组成点对点(P2P,Peer To Peer)网络，任意形式的计算设备，比如服务器、终端等电子设备都可以通过加入该点对点网络而成为该区块链系统中的一个节点。
在本实施例中,将以本实施例的终端是智能手机为例进行详细描述,比如,如图4所示,其示出了本申请实施例所涉及的终端的结构示意图,具体来讲:
该终端可以包括一个或者一个以上处理核心的处理器401、一个或一个以上计算机可读存储介质的存储器402、电源403、输入模块404以及通信模块405等部件。本领域技术人员可以理解,图4中示出的终端结构并不构成对终端的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。其中:
处理器401是该终端的控制中心，利用各种接口和线路连接整个终端的各个部分，通过运行或执行存储在存储器402内的软件程序和/或模块，以及调用存储在存储器402内的数据，执行终端的各种功能和处理数据，从而对终端进行整体监控。在一些实施例中，处理器401可包括一个或多个处理核心；在一些实施例中，处理器401可集成应用处理器和调制解调处理器，其中，应用处理器主要处理操作系统、用户界面和应用程序等，调制解调处理器主要处理无线通信。可以理解的是，上述调制解调处理器也可以不集成到处理器401中。
存储器402可用于存储软件程序以及模块，处理器401通过运行存储在存储器402的软件程序以及模块，从而执行各种功能应用以及数据处理。存储器402可主要包括存储程序区和存储数据区，其中，存储程序区可存储操作系统、至少一个功能所需的应用程序（比如声音播放功能、图像播放功能等）等；存储数据区可存储根据终端的使用所创建的数据等。此外，存储器402可以包括高速随机存取存储器，还可以包括非易失性存储器，例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。相应地，存储器402还可以包括存储器控制器，以提供处理器401对存储器402的访问。
终端还包括给各个部件供电的电源403，在一些实施例中，电源403可以通过电源管理系统与处理器401逻辑相连，从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。电源403还可以包括一个或一个以上的直流或交流电源、再充电系统、电源故障检测电路、电源转换器或者逆变器、电源状态指示器等任意组件。
该终端还可包括输入模块404，该输入模块404可用于接收输入的数字或字符信息，以及产生与用户设置以及功能控制有关的键盘、鼠标、操作杆、光学或者轨迹球信号输入。
该终端还可包括通信模块405,在一些实施例中通信模块405可以包括无线模块,终端可以通过该通信模块405的无线模块进行短距离无线传输,从而为用户提供了无线的宽带互联网访问。比如,该通信模块405可以用于帮助用户收发电子邮件、浏览网页和访问流式媒体等。
尽管未示出，终端还可以包括显示单元等，在此不再赘述。具体在本实施例中，终端中的处理器401会按照如下的指令，将一个或一个以上的应用程序的进程对应的可执行文件加载到存储器402中，并由处理器401来运行存储在存储器402中的应用程序，从而实现各种功能，如下：
显示游戏画面,游戏画面包括候选技能区域;
基于针对候选技能区域的技能选取操作,确定目标技能;
在游戏画面上显示虚拟摇杆对象;
当检测到针对虚拟摇杆对象的移动操作时,基于移动操作计算目标技能的至少一个技能生成位置;
当检测到针对虚拟摇杆对象的施放操作时,在至少一个技能生成位置生成目标技能的技能效果模型。
以上各个操作的具体实施可参见前面的实施例,在此不再赘述。
本领域普通技术人员可以理解,上述实施例的各种方法中的全部或部分步骤可以通过指令来完成,或通过指令控制相关的硬件来完成,该指令可以存储于一计算机可读存储介质中,并由处理器进行加载和执行。
另外,本申请实施例还提供了一种存储介质,所述存储介质用于存储计算机程序,所述计算机程序用于执行上述实施例提供的方法。例如,该指令可以执行如下步骤:
显示游戏画面,游戏画面包括候选技能区域;
基于针对候选技能区域的技能选取操作,确定目标技能;
在游戏画面上显示虚拟摇杆对象;
当检测到针对虚拟摇杆对象的移动操作时,基于移动操作计算目标技能的至少一个技能生成位置;
当检测到针对虚拟摇杆对象的施放操作时,在至少一个技能生成位置生成目标技能的技能效果模型。
其中,该存储介质可以包括:只读存储器(ROM,Read Only Memory)、随机存取记忆体(RAM,Random Access Memory)、磁盘或光盘等。
由于该存储介质中所存储的指令,可以执行本申请实施例所提供的任一种信息交互方法中的步骤,因此,可以实现本申请实施例所提供的任一种信息交互方法所能实现的有益效果,详见前面的实施例,在此不再赘述。
本申请实施例还提供了一种包括指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述实施例提供的方法。
以上对本申请实施例所提供的一种信息交互方法、装置、终端以及计算机可读存储介质进行了详细介绍,本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想;同时,对于本领域的技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本申请的限制。

Claims (16)

  1. 一种信息交互方法,所述方法由终端执行,所述方法包括:
    显示游戏画面,所述游戏画面包括候选技能区域;
    基于针对所述候选技能区域的技能选取操作,确定目标技能;
    在所述游戏画面上显示虚拟摇杆对象;
    当检测到针对所述虚拟摇杆对象的移动操作时,基于所述移动操作计算所述目标技能的至少一个技能生成位置;
    当检测到针对所述虚拟摇杆对象的施放操作时,在所述至少一个技能生成位置生成所述目标技能的技能效果模型。
  2. 如权利要求1所述的信息交互方法,所述虚拟摇杆对象包括虚拟摇杆控件和预设摇杆移动范围;所述当检测到针对所述虚拟摇杆对象的移动操作时,基于所述移动操作计算所述目标技能的至少一个技能生成位置,包括:
    确定所述目标技能的技能施放对象和预设技能施放范围,获取所述技能施放对象在游戏画面中的当前位置;
    当检测到针对所述虚拟摇杆对象的移动操作时,获取所述虚拟摇杆控件在所述预设摇杆移动范围中的摇杆位置;
    基于所述预设摇杆移动范围、摇杆位置、预设技能施放范围和当前位置,计算所述目标技能的至少一个技能生成位置。
  3. 如权利要求2所述的信息交互方法,所述确定所述目标技能的预设技能施放范围和技能施放对象,获取所述技能施放对象在游戏画面中的当前位置之后,还包括:
    以所述技能施放对象的当前位置为中心,基于所述预设技能施放范围确定所述目标技能在所述游戏画面中的技能生成区域;
    在所述游戏画面上显示所述技能生成区域;
    在所述至少一个技能生成位置生成所述目标技能的技能效果模型,包括:
    当所述至少一个技能生成位置属于所述技能生成区域时,在所述至少一个技能生成位置生成所述目标技能的技能效果模型。
  4. 如权利要求2所述的信息交互方法，所述基于所述预设摇杆移动范围、摇杆位置、预设技能施放范围和当前位置，计算所述目标技能的至少一个技能生成位置，包括：
    确定所述预设摇杆移动范围和预设技能施放范围之间的交互范围比例;
    根据所述交互范围比例和所述摇杆位置确定相对位置,所述相对位置为技能生成位置和技能施放对象之间相对的位置;
    根据所述技能施放对象在游戏画面中的当前位置,确定所述相对位置在游戏画面中的技能生成位置。
  5. 如权利要求4所述的信息交互方法,所述根据所述交互范围比例和摇杆位置确定相对位置之后,还包括:
    根据所述相对位置计算技能效果模型相对于所述技能施放对象的相对方向;
    所述在所述至少一个技能生成位置生成所述目标技能的技能效果模型,包括:
    基于所述相对方向修改所述目标技能的技能效果模型方向,得到方向修改后的目标技能;
    在所述至少一个技能生成位置生成所述方向修改后的目标技能的技能效果模型。
  6. 如权利要求4所述的信息交互方法,所述根据所述技能施放对象在游戏画面中的当前位置,确定所述相对位置在游戏画面中的技能生成位置,包括:
    确定所述目标技能的技能生成位置数量;
    根据所述相对位置计算所述目标技能的技能效果模型分布半径;
    以所述相对位置为圆心,基于所述技能效果模型分布半径确定所述目标技能的技能效果模型分布轨迹;
    基于所述至少一个技能生成位置数量,在所述技能效果模型分布轨迹上确定多个技能效果模型分布点;
    根据所述技能施放对象在游戏画面中的当前位置,确定所述技能效果模型分布点在游戏画面中的位置作为所述技能生成位置。
  7. 如权利要求6所述的信息交互方法,基于所述至少一个技能生成位置数量,在所述技能效果模型分布轨迹上确定多个技能效果模型分布点之后,还包括:
    基于所述技能效果模型分布半径,确定所述技能效果模型分布点的垂线方向;
    所述在所述至少一个技能生成位置生成所述目标技能的技能效果模型,包括:
    基于所述垂线方向修改所述目标技能的技能效果模型方向,得到方向修改后的目标技能;
    在所述至少一个技能生成位置生成所述方向修改后的目标技能的技能效果模型。
  8. 如权利要求6所述的信息交互方法,所述根据所述相对位置计算所述目标技能的技能效果模型分布半径,包括:
    获取预设系数;
    根据所述相对位置计算相对距离;
    根据所述预设系数对所述相对距离进行加权求和,得到所述目标技能的技能效果模型分布半径。
  9. 如权利要求6所述的信息交互方法,所述以所述相对位置为圆心,基于所述技能效果模型分布半径确定所述目标技能的技能效果模型分布轨迹,包括:
    获取所述目标技能的技能效果模型预设分布体积;
    对所述技能效果模型预设分布体积和技能生成位置数量进行相乘计算,得到最小技能效果模型分布半径;
    当所述目标技能的技能效果模型分布半径不小于所述最小技能效果模型分布半径时,以所述相对位置为圆心,基于所述技能效果模型分布半径确定所述目标技能的技能效果模型分布轨迹;
    当所述目标技能的技能效果模型分布半径小于所述最小技能效果模型分布半径时,以所述相对位置为圆心、基于所述最小技能效果模型分布半径确定所述目标技能的技能效果模型分布轨迹。
  10. 如权利要求1所述的信息交互方法,所述当检测到针对所述虚拟摇杆对象的移动操作时,基于所述移动操作计算所述目标技能的至少一个技能生成位置之后,还包括:
    在所述至少一个技能生成位置生成所述目标技能的技能预览效果模型;
    所述当检测到针对所述虚拟摇杆对象的施放操作时,在所述至少一个技能生成位置生成所述目标技能的技能效果模型,包括:
    当检测到用户针对所述虚拟摇杆对象的施放操作时,在所述至少一个技能生成位置停止生成所述目标技能的技能预览效果模型,并生成所述目标技能的技能效果模型。
  11. 如权利要求1所述的信息交互方法,所述候选技能区域包括至少一个候选技能的技能控件;所述基于针对所述候选技能区域的技能选取操作,确定目标技能,包括:
    基于针对所述技能控件的选取操作,在至少一个候选技能中确定目标技能;
    在所述游戏画面上显示虚拟摇杆对象,包括:
    在所述目标技能的技能控件上覆盖显示虚拟摇杆对象。
  12. 如权利要求1所述的信息交互方法,所述游戏画面还包括取消施放控件;所述在所述游戏画面上显示虚拟摇杆对象之后,还包括:
    当检测到针对所述取消施放控件的取消施放操作时,在所述游戏画面中停止显示所述虚拟摇杆对象。
  13. 一种信息交互装置,包括:
    画面单元,用于显示游戏画面,所述游戏画面包括候选技能区域;
    技能单元,用于基于针对所述候选技能区域的技能选取操作,确定目标技能;
    摇杆单元,用于在所述游戏画面上显示虚拟摇杆对象;
    位置单元,用于当检测到针对所述虚拟摇杆对象的移动操作时,基于所述移动操作计算所述目标技能的至少一个技能生成位置;
    生成单元,用于当检测到针对所述虚拟摇杆对象的施放操作时,在所述至少一个技能生成位置生成所述目标技能的技能效果模型。
  14. 一种存储介质,所述存储介质用于存储计算机程序,所述计算机程序用于执行权利要求1~12任一项所述的信息交互方法。
  15. 一种终端，包括处理器和存储器，所述存储器存储有多条指令；所述处理器从所述存储器中加载指令，以执行如权利要求1~12任一项所述的信息交互方法。
  16. 一种包括指令的计算机程序产品,当其在计算机上运行时,使得所述计算机执行如权利要求1~12任一项所述的信息交互方法。
PCT/CN2020/110199 2019-09-04 2020-08-20 信息交互方法和相关装置 WO2021043000A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
SG11202108571VA SG11202108571VA (en) 2019-09-04 2020-08-20 Information interaction method and related device
JP2021550032A JP7242121B2 (ja) 2019-09-04 2020-08-20 情報インタラクション方法及び関連装置
KR1020217026753A KR102602113B1 (ko) 2019-09-04 2020-08-20 정보 상호작용 방법 및 관련 장치
EP20861591.4A EP3919145A4 (en) 2019-09-04 2020-08-20 INFORMATION INTERACTION METHOD AND RELATED DEVICE
US17/156,087 US11684858B2 (en) 2019-09-04 2021-01-22 Supplemental casting control with direction and magnitude
US18/314,299 US20230271091A1 (en) 2019-09-04 2023-05-09 Information exchange method and related apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910833875.2A CN110559658B (zh) 2019-09-04 2019-09-04 信息交互方法、装置、终端以及存储介质
CN201910833875.2 2019-09-04

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/156,087 Continuation US11684858B2 (en) 2019-09-04 2021-01-22 Supplemental casting control with direction and magnitude

Publications (1)

Publication Number Publication Date
WO2021043000A1 true WO2021043000A1 (zh) 2021-03-11

Family

ID=68777795

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/110199 WO2021043000A1 (zh) 2019-09-04 2020-08-20 信息交互方法和相关装置

Country Status (7)

Country Link
US (2) US11684858B2 (zh)
EP (1) EP3919145A4 (zh)
JP (1) JP7242121B2 (zh)
KR (1) KR102602113B1 (zh)
CN (1) CN110559658B (zh)
SG (1) SG11202108571VA (zh)
WO (1) WO2021043000A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113769391A (zh) * 2021-09-27 2021-12-10 腾讯科技(深圳)有限公司 在虚拟环境中获取技能的方法、装置、设备及介质
JP2023528119A (ja) * 2021-05-14 2023-07-04 ▲騰▼▲訊▼科技(深▲セン▼)有限公司 仮想オブジェクトの制御方法、装置、機器及びコンピュータプログラム

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110559658B (zh) * 2019-09-04 2020-07-28 腾讯科技(深圳)有限公司 信息交互方法、装置、终端以及存储介质
CN111228805B (zh) * 2020-01-08 2023-07-14 腾讯科技(深圳)有限公司 虚拟操作对象的控制方法和装置、存储介质及电子装置
CN111530075B (zh) 2020-04-20 2022-04-05 腾讯科技(深圳)有限公司 虚拟环境的画面显示方法、装置、设备及介质
CN111589131B (zh) * 2020-04-24 2022-02-22 腾讯科技(深圳)有限公司 虚拟角色的控制方法、装置、设备及介质
CN111589134A (zh) * 2020-04-28 2020-08-28 腾讯科技(深圳)有限公司 虚拟环境画面的显示方法、装置、设备及存储介质
CN111760287B (zh) * 2020-06-30 2024-02-02 网易(杭州)网络有限公司 游戏技能的控制方法、装置、电子设备及计算机可读介质
CN111760283B (zh) * 2020-08-06 2023-08-08 腾讯科技(深圳)有限公司 虚拟对象的技能施放方法、装置、终端及可读存储介质
CN113750518A (zh) * 2021-09-10 2021-12-07 网易(杭州)网络有限公司 技能按钮的控制方法、装置、电子设备及计算机可读介质
CN114296597A (zh) * 2021-12-01 2022-04-08 腾讯科技(深圳)有限公司 虚拟场景中的对象交互方法、装置、设备及存储介质
CN114860148B (zh) * 2022-04-19 2024-01-16 北京字跳网络技术有限公司 一种交互方法、装置、计算机设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140302900A1 (en) * 2011-12-29 2014-10-09 Neowiz Games Corporation Method and apparatus for manipulating character of soccer game
CN106033340A (zh) * 2015-03-16 2016-10-19 广州四三九九信息科技有限公司 手游战斗技能的可视化编辑方法及系统
CN107168611A (zh) * 2017-06-16 2017-09-15 网易(杭州)网络有限公司 信息处理方法、装置、电子设备及存储介质
CN109550241A (zh) * 2018-09-20 2019-04-02 厦门吉比特网络技术股份有限公司 一种单摇杆控制方法和系统
CN110559658A (zh) * 2019-09-04 2019-12-13 腾讯科技(深圳)有限公司 信息交互方法、装置、终端以及存储介质

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9262073B2 (en) * 2010-05-20 2016-02-16 John W. Howard Touch screen with virtual joystick and methods for use therewith
CN105194873B (zh) * 2015-10-10 2019-01-04 腾讯科技(成都)有限公司 一种信息处理方法、终端及计算机存储介质
JP6143934B1 (ja) 2016-11-10 2017-06-07 株式会社Cygames 情報処理プログラム、情報処理方法、及び情報処理装置
KR20180111397A (ko) * 2017-04-02 2018-10-11 둘툰 주식회사 외부 입력장치를 이용한 게임 가상 컨트롤러 생성 및 매핑 방법
CN107661630A (zh) 2017-08-28 2018-02-06 网易(杭州)网络有限公司 一种射击游戏的控制方法及装置、存储介质、处理器、终端
CN108509139B (zh) * 2018-03-30 2019-09-10 腾讯科技(深圳)有限公司 虚拟对象的移动控制方法、装置、电子装置及存储介质
CN108771869B (zh) * 2018-06-04 2022-02-08 腾讯科技(深圳)有限公司 性能测试方法和装置、存储介质及电子装置
CN109011572B (zh) 2018-08-27 2022-09-16 广州要玩娱乐网络技术股份有限公司 游戏魔法技能处理方法及存储介质、计算机设备
CN109550240A (zh) * 2018-09-20 2019-04-02 厦门吉比特网络技术股份有限公司 一种游戏的技能释放方法和装置
CN109568938B (zh) * 2018-11-30 2020-08-28 广州要玩娱乐网络技术股份有限公司 多资源游戏触控操作方法、装置、存储介质和终端
CN109745698A (zh) 2018-12-28 2019-05-14 北京金山安全软件有限公司 一种取消技能释放的方法、装置及电子设备

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2023528119A (ja) * 2021-05-14 2023-07-04 ▲騰▼▲訊▼科技(深▲セン▼)有限公司 仮想オブジェクトの制御方法、装置、機器及びコンピュータプログラム
US11865449B2 (en) 2021-05-14 2024-01-09 Tencent Technology (Shenzhen) Company Limited Virtual object control method, apparatus, device, and computer-readable storage medium
JP7413563B2 (ja) 2021-05-14 2024-01-15 ▲騰▼▲訊▼科技(深▲セン▼)有限公司 仮想オブジェクトの制御方法、装置、機器及びコンピュータプログラム
CN113769391A (zh) * 2021-09-27 2021-12-10 腾讯科技(深圳)有限公司 在虚拟环境中获取技能的方法、装置、设备及介质
CN113769391B (zh) * 2021-09-27 2023-06-27 腾讯科技(深圳)有限公司 在虚拟环境中获取技能的方法、装置、设备及介质

Also Published As

Publication number Publication date
CN110559658B (zh) 2020-07-28
SG11202108571VA (en) 2021-09-29
EP3919145A4 (en) 2022-05-25
US11684858B2 (en) 2023-06-27
US20230271091A1 (en) 2023-08-31
EP3919145A1 (en) 2021-12-08
CN110559658A (zh) 2019-12-13
KR20210117329A (ko) 2021-09-28
KR102602113B1 (ko) 2023-11-13
JP7242121B2 (ja) 2023-03-20
JP2022522443A (ja) 2022-04-19
US20210138351A1 (en) 2021-05-13

Similar Documents

Publication Publication Date Title
WO2021043000A1 (zh) 信息交互方法和相关装置
WO2021036581A1 (zh) 虚拟对象的控制方法和相关装置
KR102050934B1 (ko) 정보 처리 방법, 단말, 및 컴퓨터 저장 매체
EP3939681A1 (en) Virtual object control method and apparatus, device, and storage medium
WO2022017094A1 (zh) 界面显示方法、装置、终端及存储介质
JP2023162233A (ja) 仮想オブジェクトの制御方法、装置、端末及び記憶媒体
WO2021244306A1 (zh) 虚拟对象的选择方法、装置、设备及存储介质
WO2021227684A1 (en) Method for selecting virtual objects, apparatus, terminal and storage medium
JP2024519880A (ja) 仮想環境画面の表示方法、装置、端末及びコンピュータプログラム
WO2023138192A1 (zh) 控制虚拟对象拾取虚拟道具的方法、终端及存储介质
CN113546419A (zh) 游戏地图显示方法、装置、终端及存储介质
WO2024007606A1 (zh) 虚拟物品的展示方法、装置、计算机设备及存储介质
WO2024045528A1 (zh) 一种游戏控制方法、装置、计算机设备及存储介质
CN113426115A (zh) 游戏角色的展示方法、装置和终端
Mei et al. Sightx: A 3d selection technique for xr
US12017141B2 (en) Virtual object control method and apparatus, device, and storage medium
US11978152B2 (en) Computer-assisted graphical development tools
CN113082712B (zh) 虚拟角色的控制方法、装置、计算机设备和存储介质
WO2024051414A1 (zh) 热区的调整方法、装置、设备、存储介质及程序产品
WO2024060895A1 (zh) 用于虚拟场景的群组建立方法、装置、设备及存储介质
CN113082712A (zh) 虚拟角色的控制方法、装置、计算机设备和存储介质
CN115564916A (zh) 虚拟场景的编辑方法、装置、计算机设备及存储介质
CN115193062A (zh) 游戏的控制方法、装置、存储介质及计算机设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20861591

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20217026753

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2021550032

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2020861591

Country of ref document: EP

Effective date: 20210903

NENP Non-entry into the national phase

Ref country code: DE