WO2017054452A1 - Information processing method, terminal, and computer storage medium - Google Patents
Information processing method, terminal, and computer storage medium
- Publication number
- WO2017054452A1 (PCT/CN2016/081051)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- role
- character
- terminal
- user interface
- graphical user
- Prior art date
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/53—Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
- A63F13/533—Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/214—Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads
- A63F13/2145—Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads the surface being also a display device, e.g. touch screens
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/30—Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
- A63F13/33—Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/45—Controlling the progress of the video game
- A63F13/49—Saving the game status; Pausing or ending the game
- A63F13/497—Partially or entirely replaying previous game actions
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/53—Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
- A63F13/537—Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
- A63F13/5372—Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/55—Controlling game characters or game objects based on the game progress
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/55—Controlling game characters or game objects based on the game progress
- A63F13/58—Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/80—Special adaptations for executing a specific game genre or game mode
- A63F13/847—Cooperative playing, e.g. requiring coordinated actions from several players to achieve a common goal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/30—Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
- A63F13/33—Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections
- A63F13/335—Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/30—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
- A63F2300/308—Details of the user interface
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/50—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
- A63F2300/55—Details of game data or player data management
- A63F2300/5526—Game data structure
- A63F2300/5533—Game data structure using program state or machine event data, e.g. server keeps track of the state of multiple players on in a multiple player game
Definitions
- the present invention relates to information processing technologies, and in particular, to an information processing method, a terminal, and a computer storage medium.
- GUI graphical user interface
- In the related art, the graphical user interface rendered on the display of a terminal often shows only a part of the virtual area in which the virtual character operated by the user is located. As a result, the graphical user interface may not contain the target object manipulated by a group member belonging to the same group as the user.
- In this case, when the user wants to obtain that group member's view, multiple operations (such as sliding operations) are required to move the character until the character moves to the vicinity of the target object, so that the image presented in the graphical user interface controlled by the group member, that is, the field of view of the group member, can be obtained in the current graphical user interface.
- This process takes a long manipulation time and cannot meet the quickness requirements of information interaction.
- For this problem, there is currently no effective solution in the related art.
- the embodiments of the present invention are intended to provide an information processing method, a terminal, and a computer storage medium, which can quickly obtain a view image of a group member in the process of information interaction, thereby improving the user experience.
- An embodiment of the present invention provides an information processing method, implemented by executing a software application on a processor of a terminal and performing rendering on a display of the terminal to obtain a graphical user interface; the processor, the graphical user interface, and the software application are implemented on a game system. The method includes:
- rendering at least one virtual resource object on the graphical user interface; deploying, in at least one role selection area of the graphical user interface, at least one role container object that includes at least one window position; and
- when a view-acquisition gesture on at least one role operation object in the role container object is detected, rendering, on the graphical user interface, a view image captured by the virtual lens associated with the at least one role operation object.
- An embodiment of the present invention further provides a terminal, where the terminal includes: a rendering processing unit, a deployment unit, a detecting unit, and an operation executing unit;
- the rendering processing unit is configured to execute a software application and perform rendering to obtain a graphical user interface, to render at least one virtual resource object on the graphical user interface, and further to render, on the graphical user interface, the view image obtained by the operation executing unit and captured by the virtual lens associated with the at least one role operation object;
- the deployment unit is configured to deploy, in at least one role selection area of the graphical user interface, at least one role container object that includes at least one window position;
- the detecting unit is configured to detect a view-acquisition gesture on at least one role operation object in the role container object;
- the operation executing unit is configured to, when the detecting unit detects the view-acquisition gesture on the at least one role operation object in the role container object, obtain the view image captured by the virtual lens associated with the at least one role operation object.
- An embodiment of the present invention further provides a terminal, where the terminal includes a processor and a display; the processor is configured to execute a software application and perform rendering on the display to obtain a graphical user interface; the processor, the graphical user interface, and the software application are implemented on a game system;
- the processor is configured to render at least one virtual resource object on the graphical user interface, and to deploy, in at least one role selection area of the graphical user interface, at least one role container object that includes at least one window position; and
- when a view-acquisition gesture on at least one role operation object is detected, a view image captured by the virtual lens associated with the at least one role operation object is rendered on the graphical user interface.
- An embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores computer-executable instructions used to perform the information processing method according to the embodiments of the present invention.
- With the information processing method, the terminal, and the computer storage medium of the embodiments of the present invention, the role operation object associated with a second role object belonging to the same group as the user role object is rendered in a corresponding window position of the role container object deployed in the role selection area of the graphical user interface,
- so that the user can quickly obtain the view image of the corresponding second role object through a view-acquisition gesture on the role operation object, thereby greatly improving the user's operation experience in the interaction process.
- FIG. 1 is a schematic diagram of an application architecture of an information processing method according to an embodiment of the present invention.
- FIG. 2 is a schematic flowchart of an information processing method according to Embodiment 1 of the present invention.
- FIG. 3 is a first schematic diagram of a graphical user interface of an information processing method according to an embodiment of the present invention.
- FIG. 4 is a schematic flowchart of an information processing method according to Embodiment 2 of the present invention.
- FIG. 5 is a schematic flowchart of an information processing method according to Embodiment 3 of the present invention.
- FIG. 6 is a second schematic diagram of a graphical user interface of an information processing method according to an embodiment of the present invention.
- FIG. 7 is a schematic flowchart of an information processing method according to Embodiment 4 of the present invention.
- FIG. 8 is a third schematic diagram of a graphical user interface of an information processing method according to an embodiment of the present invention.
- FIG. 9 is a schematic diagram of an interaction application of an information processing method according to an embodiment of the present invention.
- FIG. 10 is a fourth schematic diagram of a graphical user interface in an information processing method according to an embodiment of the present invention.
- FIG. 11 is a schematic structural diagram of a terminal according to Embodiment 5 of the present invention.
- FIG. 12 is a schematic structural diagram of a terminal according to Embodiment 6 of the present invention.
- FIG. 13 is a schematic structural diagram of a terminal according to Embodiment 7 of the present invention.
- FIG. 1 is a schematic diagram of an application architecture of an information processing method according to an embodiment of the present invention.
- the application architecture includes a server 101 and at least one terminal, where the at least one terminal includes: the terminal 102, the terminal 103, the terminal 104, the terminal 105, and the terminal 106; the at least one terminal can establish a connection with the server 101 through a network 100, such as a wired network or a wireless network.
- the terminal includes a mobile phone, a desktop computer, a PC, an all-in-one machine, and the like.
- the processor of the terminal is capable of executing a software application and performing rendering on a display of the terminal to obtain a graphical user interface; the processor, the graphical user interface, and the software application are implemented on the game system.
- the at least one terminal may perform information interaction with the server 101 through a wired network or a wireless network.
- In an actual application, the game system may support a one-to-one or a many-to-many (e.g., three-to-three, five-to-five) application-mode scenario.
- in the one-to-one application scenario, the virtual resource object in the graphical user interface rendered by a terminal may perform information interaction with a virtual resource object preset in the game system (which can be understood as a human-machine battle), that is, information interaction between the terminal and the server; the one-to-one application scenario may also be information interaction between the virtual resource object in the graphical user interface rendered by one terminal and the virtual resource object in the graphical user interface rendered by another terminal,
- for example, the virtual resource object in the graphical user interface rendered by the terminal 102 interacts with the virtual resource object in the graphical user interface rendered by the terminal 103.
- Taking a three-to-three scenario as an example of the many-to-many application-mode scenario, the virtual resource objects in the graphical user interfaces respectively rendered by the terminal 1, the terminal 2, and the terminal 3 form a first group, and the virtual resource objects in the graphical user interfaces respectively rendered by the terminal 4, the terminal 5, and the terminal 6 form a second group; information is exchanged between the group members of the first group and the group members of the second group.
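The grouping described above can be sketched as follows. This is an illustrative model only; the class and field names (`Terminal`, `group_id`, `form_groups`) are hypothetical and not taken from the patent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Terminal:
    """A terminal whose rendered user role object belongs to one group."""
    terminal_id: int
    group_id: int

def form_groups(terminals):
    """Partition terminals into groups keyed by group_id."""
    groups = {}
    for t in terminals:
        groups.setdefault(t.group_id, []).append(t.terminal_id)
    return groups

# Three-to-three scenario: terminals 1-3 form the first group,
# terminals 4-6 form the second group.
terminals = [Terminal(i, 1 if i <= 3 else 2) for i in range(1, 7)]
groups = form_groups(terminals)  # {1: [1, 2, 3], 2: [4, 5, 6]}
```

Information interaction then takes place between the members of `groups[1]` and `groups[2]` via the server.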
- It should be noted that FIG. 1 is only an example of an application architecture for implementing the embodiments of the present invention;
- the embodiments of the present invention are not limited to the application architecture described in FIG. 1 above, and the various embodiments of the present invention are proposed based on this application architecture.
- FIG. 2 is a schematic flowchart of an information processing method according to Embodiment 1 of the present invention.
- the information processing method is applied to a terminal, by executing a software application on a processor of the terminal and performing rendering on a display of the terminal to obtain a graphical user interface; the processor, the graphical user interface, and the software application are implemented on a game system. As shown in FIG. 2, the method includes:
- Step 201: Render at least one virtual resource object on the graphical user interface, where at least one of the virtual resource objects is configured as a user role object that performs a first virtual operation according to an input first user command.
- Step 202: Deploy, in at least one role selection area of the graphical user interface, at least one role container object that includes at least one window position.
- Step 203: When a view-acquisition gesture on at least one role operation object in the role container object is detected, render, on the graphical user interface, a view image captured by the virtual lens associated with the at least one role operation object.
- In this embodiment, the graphical user interface includes at least one role selection area, the role selection area includes at least one role container object, and the role container object includes at least one window position, where at least some of the window positions carry corresponding role operation objects;
- a role operation object is represented in the graphical user interface by an identifier (which may be an avatar) of the role object associated with that role operation object; here, the role object associated with the role operation object belongs to the same group as the user role object.
- the rendering manner of the role container object in the role selection area includes, but is not limited to, a bar shape and a ring shape; that is, the role container object can be characterized by a role selection bar object or by a role selection disk object.
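The container-and-window structure described above can be sketched as a minimal data model. All names here (`RoleContainer`, `WindowPosition`, the avatar file names) are hypothetical illustrations, not identifiers from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RoleOperationObject:
    """Rendered as the avatar of the associated same-group role object."""
    role_name: str
    avatar: str

@dataclass
class WindowPosition:
    """A slot in the role container; may carry one role operation object."""
    occupant: Optional[RoleOperationObject] = None

class RoleContainer:
    """Bar- or ring-shaped container deployed in the role selection area."""
    def __init__(self, slots: int, shape: str = "bar"):
        assert shape in ("bar", "ring")  # the two rendering manners named above
        self.shape = shape
        self.windows = [WindowPosition() for _ in range(slots)]

    def deploy(self, teammates):
        """Carry each teammate's role operation object in a window position."""
        for window, mate in zip(self.windows, teammates):
            window.occupant = mate

# Four same-group teammates in a five-to-five scenario.
container = RoleContainer(slots=4, shape="bar")
container.deploy([RoleOperationObject(f"b1{i}", f"avatar_b1{i}.png")
                  for i in range(1, 5)])
```

A ring-shaped (disk) container would differ only in layout, not in this data model.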
- FIG. 3 is a first schematic diagram of a graphical user interface of an information processing method according to an embodiment of the present invention
- a graphical user interface 800 rendered on a display of the terminal includes at least one virtual resource object;
- the virtual resource object includes at least one user role object a10, and the user of the terminal can perform information interaction through the graphical user interface, that is, input a user command; the user role object a10 can perform a first virtual operation based on the
- first user command detected by the terminal; the first virtual operation includes, but is not limited to, a move operation, a physical attack operation, a skill attack operation, and the like.
- the user role object a10 is a character object manipulated by the user of the terminal; in the game system, the user role object a10 can perform corresponding actions in the graphical user interface based on the operations of the user.
- the graphical user interface 800 further includes at least one skill object 803, and the user can control the user role object a10 to perform a corresponding skill release operation by a skill release operation.
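The mapping from a detected first user command to the first virtual operation performed by a10 can be sketched as a simple dispatcher. This is an assumed illustration; the command tuples and method names are invented for the example.

```python
class UserRoleObject:
    """Character object manipulated by the user of the terminal."""
    def __init__(self, name):
        self.name = name
        self.log = []  # actions performed in the graphical user interface

    def move(self, dx, dy):
        self.log.append(("move", dx, dy))

    def physical_attack(self, target):
        self.log.append(("physical_attack", target))

    def skill_attack(self, skill_id, target):
        self.log.append(("skill_attack", skill_id, target))

def perform_virtual_operation(role, command):
    """Execute the first virtual operation named by the first user command."""
    kind, *args = command
    handlers = {
        "move": role.move,
        "physical_attack": role.physical_attack,
        "skill_attack": role.skill_attack,
    }
    handlers[kind](*args)

a10 = UserRoleObject("a10")
perform_virtual_operation(a10, ("move", 1, 0))
perform_virtual_operation(a10, ("skill_attack", 3, "b21"))
```

A skill release operation on a skill object 803 would route through the same dispatch with a `skill_attack` command.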
- the graphical user interface has a role selection area 802, and a role container object is deployed in the role selection area 802.
- in this embodiment, the role container object is characterized by a role selection bar object (that is, the role container object presents a bar-shaped display effect).
- the role container object includes at least one window position, and the role operation object associated with a second role object belonging to the same group as the user role object is rendered in a corresponding window position;
- specifically, the role selection area 802 includes at least one avatar, and the at least one avatar respectively corresponds to at least one second role object of the same group as the user role object.
- As shown in FIG. 3, in a five-to-five application scenario, four role objects belong to the same group as the user role object a10, and correspondingly four role operation objects are rendered in the role selection area 802.
- This embodiment can be applied to an application scenario involving a multiplayer battle with at least two group members.
- In this embodiment, the mutual positional relationship of at least two role operation objects in the role selection area 802 is determined according to the chronological order in which the associated role objects enter the game system. As shown in FIG. 3, the role object associated with the role operation object a11 enters the game system earlier than the role objects associated with the role operation object a12, the role operation object a13, and the role operation object a14, and so on; details are not described herein again.
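The ordering rule above amounts to sorting the role operation objects by their associated role objects' entry timestamps, earliest first. The timestamps below are made up purely for illustration.

```python
# (role operation object, entry timestamp into the game system)
entries = [
    ("a12", 105.0),
    ("a11", 101.5),
    ("a14", 130.2),
    ("a13", 112.8),
]

# Positions in the role selection area 802, earliest entrant first.
ordered = [name for name, entered_at in sorted(entries, key=lambda e: e[1])]
```

With these timestamps, a11 occupies the first window position, matching the FIG. 3 description.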
- When a view-acquisition gesture on at least one role operation object in the role container object is detected, a view image captured by the virtual lens associated with the at least one role operation object is rendered on the graphical user interface;
- the view-acquisition gesture may be a long-press gesture, a double-tap gesture, or the like, and is not limited to the above gestures.
- In an embodiment, the method further includes: when the view-acquisition gesture on the at least one role operation object in the role container object is detected, generating and transmitting a first instruction, where the first instruction is used to invoke the virtual lens associated with the at least one role operation object and to control the virtual lens to capture a view image; and obtaining, during the detection of the view-acquisition gesture, the view image captured by the virtual lens.
- Specifically, when the terminal detects a long-press gesture on a role operation object in the role selection area 802 (such as the role operation object a11 shown in FIG. 3), the terminal generates a first instruction; based on the first instruction, the terminal establishes a network link with the other terminal corresponding to the role object associated with the role operation object, and,
- over the network link, sends the first instruction to that other terminal, so as to control the other terminal to invoke its virtual lens based on the first instruction and capture a view image through the virtual lens;
- the terminal obtains the view image sent by the other terminal in real time and renders the view image on the graphical user interface, as shown in the view image display area 801 and the enlarged view 801a of the view image display area 801 in FIG. 3;
- in FIG. 3, the view image shows the character object b11 performing a release operation of a skill object. It can be understood that, through the view-acquisition gesture (such as a long-press gesture), the terminal can quickly switch to the view image of the corresponding other terminal, so that the user of the terminal can quickly obtain the view image of a teammate.
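The gesture-to-view-image flow can be sketched end to end. The network link is modeled as a direct object reference here purely for illustration; class names and the string returned for the view image are invented, not part of the patent.

```python
class TeammateTerminal:
    """The other terminal; invokes its virtual lens on a first instruction."""
    def __init__(self, role_name):
        self.role_name = role_name

    def handle_first_instruction(self):
        # Invoke the virtual lens and capture the current view image.
        return f"view_image_of_{self.role_name}"

class LocalTerminal:
    def __init__(self):
        self.rendered = None
        self.links = {}  # role operation object -> network link (teammate)

    def connect(self, role_op, teammate):
        """Establish the network link for a role operation object."""
        self.links[role_op] = teammate

    def on_view_acquisition_gesture(self, role_op):
        """Long-press on a role operation object: send the first instruction
        over the network link and render the returned view image."""
        teammate = self.links[role_op]
        view_image = teammate.handle_first_instruction()
        self.rendered = view_image  # shown in view image display area 801

local = LocalTerminal()
local.connect("a11", TeammateTerminal("b11"))
local.on_view_acquisition_gesture("a11")
```

In a real deployment the link would be a network connection through the server 101, and the view image would arrive as a stream rather than a single return value.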
- With the technical solution of this embodiment, the role operation object associated with a second role object belonging to the same group as the user role object is rendered in a corresponding window position of the role container object deployed in the role selection area of the graphical user interface,
- so that the user can quickly obtain the view image of the corresponding second role object through a view-acquisition gesture on the role operation object, thereby greatly improving the user's operation experience in the interaction process.
- FIG. 4 is a schematic flowchart of an information processing method according to Embodiment 2 of the present invention.
- the information processing method is applied to a terminal, by executing a software application on a processor of the terminal and performing rendering on a display of the terminal to obtain a graphical user interface; the processor, the graphical user interface, and the software application are implemented on a game system. As shown in FIG. 4, the method includes:
- Step 301: Render at least one virtual resource object on the graphical user interface.
- Step 302: Deploy, in at least one role selection area of the graphical user interface, at least one role container object that includes at least one window position.
- In this embodiment, the graphical user interface includes at least one role selection area, the role selection area includes at least one role container object, and the role container object includes at least one window position, where at least some of the window positions carry corresponding role operation objects;
- a role operation object is represented in the graphical user interface by an identifier (which may be an avatar) of the role object associated with that role operation object; here, the role object associated with the role operation object belongs to the same group as the user role object.
- the rendering manner of the role container object in the role selection area includes, but is not limited to, a bar shape and a ring shape; that is, the role container object can be characterized by a role selection bar object or by a role selection disk object.
- At least one virtual resource object is included in the graphical user interface 800 rendered on the display of the terminal, where the virtual resource object includes at least one user role object a10;
- the user of the terminal can perform information interaction through the graphical user interface, that is, input a user command; the user role object a10 can perform a first virtual operation based on the first user command detected by the terminal; the first virtual operation includes, but is not limited to, a move operation, a physical attack operation, a skill attack operation, and the like.
- the user role object a10 is a character object manipulated by the user of the terminal; in the game system, the user role object a10 can perform corresponding actions in the graphical user interface based on the operations of the user.
- the graphical user interface 800 further includes at least one skill object 803, and the user can control the user role object a10 to perform a corresponding skill release operation by a skill release operation.
- the graphical user interface has a role selection area 802; a role selector object is deployed in the role selection area 802. In this example the role selector object is characterized as a role selection bar object and presents a strip-shaped display effect.
- the role selector object includes at least one window bit, and the role operation object associated with a second role object belonging to the same group as the user role object is rendered in a corresponding window bit;
- the role selection area 802 includes at least one avatar, the at least one avatar respectively corresponding to at least one second role object belonging to the same group as the user role object. As shown in FIG. 3, the five-versus-five application scenario includes four role objects that belong to the same group as the user role object a10, and four corresponding role operation objects are rendered in the role selection area 802.
- This embodiment can be applied to an application scenario involving a multiplayer battle with at least two group members.
- the mutual positional relationship of at least two role operation objects in the role selection area 802 is determined according to the chronological order in which the at least two role operation objects enter the game system. As shown in FIG. 3, the role object associated with the role operation object a11 enters the game system earlier than the role objects associated with the role operation object a12, the role operation object a13, and the role operation object a14, and so on; details are not described herein again.
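The positional rule described above (window bits ordered by the time each teammate's role object entered the game system) can be sketched as follows; the dictionary layout and field names are assumptions for illustration:

```python
# Each teammate's window bit position follows the chronological order in
# which their role objects entered the game system (earliest first).
teammates = [
    {"op": "a13", "joined_at": 3.2},
    {"op": "a11", "joined_at": 1.0},
    {"op": "a14", "joined_at": 4.5},
    {"op": "a12", "joined_at": 2.1},
]
# Window-bit layout, left to right: earliest entrant first.
layout = [t["op"] for t in sorted(teammates, key=lambda t: t["joined_at"])]
```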
- Step 303 When a visual field acquisition gesture on at least one role operation object in the role selector object is detected, generate and transmit a first instruction, and obtain, during detection of the visual field acquisition gesture, a field of view image acquired by the virtual lens; the first instruction is used to invoke the virtual lens associated with the at least one role operation object and control the virtual lens to acquire a field of view image, so that the field of view image captured by the virtual lens associated with the at least one role operation object is rendered on the graphical user interface.
- when detecting a long press gesture on a role operation object in the role selection area 802 (such as the role operation object a11 shown in FIG. 3), the terminal generates a first instruction; based on the first instruction, the terminal establishes a network link to another terminal corresponding to the role object associated with the role operation object, and, based on the network link, sends the first instruction to the other terminal so as to control the other terminal to invoke its virtual lens based on the first instruction and collect a field of view image by using the virtual lens;
- the terminal obtains the field of view image sent by the other terminal in real time and renders the field of view image on the graphical user interface, as shown in the visual field image display area 801 and the enlarged view 801a of the visual field image display area 801 in the figure;
- the field of view image of the role object b11 on which a release operation of a skill object is being performed can be seen in the figure. It can be understood that, through the visual field acquisition gesture (such as a long press gesture), the terminal can quickly switch to the view image of the corresponding other terminal, so that the user of the terminal can quickly obtain the view image of a teammate.
- Step 304 When the visual field acquisition gesture is terminated, generate a second instruction, so as to terminate, based on the second instruction, the call to the virtual lens associated with the at least one role operation object.
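Steps 303 and 304 together describe a gesture-scoped lifecycle: a first instruction invokes the teammate's virtual lens while the gesture persists, and a second instruction terminates the call when the gesture ends. A minimal local sketch of that lifecycle, with all names hypothetical and the network link elided:

```python
class ViewShareSession:
    """Tracks the first/second instruction lifecycle for one role
    operation object. A real client would send the instructions over an
    actual network link to the teammate's terminal."""

    def __init__(self, target_role_id: str):
        self.target_role_id = target_role_id
        self.lens_active = False
        self.sent = []  # record of instructions issued

    def on_gesture_start(self):
        # First instruction: invoke the teammate's virtual lens and begin
        # pulling its field of view images.
        self.sent.append(("first_instruction", self.target_role_id))
        self.lens_active = True

    def frame(self) -> str:
        # While the gesture persists, render the remote view image;
        # otherwise fall back to the local view.
        if self.lens_active:
            return f"view_image_from:{self.target_role_id}"
        return "local_view"

    def on_gesture_end(self):
        # Second instruction: terminate the call to the remote virtual lens.
        self.sent.append(("second_instruction", self.target_role_id))
        self.lens_active = False

s = ViewShareSession("a11")
s.on_gesture_start()
mid_frame = s.frame()
s.on_gesture_end()
```

Scoping the lens call to the gesture keeps the teammate view strictly transient: releasing the long press always restores the user's own view.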
- through the window bits in the role selector object of the role selection area deployed in the graphical user interface, the role operation object associated with the second role object belonging to the same group as the user role object is rendered in the corresponding window bit, so that the user can quickly obtain the visual field image of the corresponding second role object through the visual field acquisition gesture on the role operation object, thereby greatly improving the operation experience of the user in the interaction process.
- FIG. 5 is a schematic flowchart diagram of an information processing method according to Embodiment 3 of the present invention.
- the information processing method is applied to a terminal; a graphical user interface is obtained by executing a software application on a processor of the terminal and rendering on a display of the terminal, and the processor, the graphical user interface, and the software application are implemented in a game system; as shown in FIG. 5, the method includes:
- Step 401 Render at least one virtual resource object on the graphical user interface.
- Step 402 The at least one role selector object deployed in the at least one role selection area of the graphical user interface includes at least one window bit.
- the graphical user interface includes at least one role selection area; the role selection area includes at least one role selector object, and the role selector object includes at least one window bit, at least some of the window bits carrying corresponding role operation objects. Each role operation object is represented in the graphical user interface by an identifier of the role object associated with that role operation object (the identifier may be an avatar); here, the role object associated with the role operation object belongs to the same group as the user role object.
- the rendering manner of the role selector object in the role selection area includes, but is not limited to, a strip shape and a ring shape; that is, the role selector object can be characterized as a role selection bar object or a role selection disk object.
- the graphical user interface 800 includes at least one virtual resource object, the virtual resource object including at least one user role object a10; the user of the terminal can perform information interaction through the graphical user interface, that is, input a user command;
- the user role object a10 can perform a first virtual operation based on the first user command detected by the terminal; the first virtual operation includes, but is not limited to, a mobile operation, a physical attack operation, a skill attack operation, and the like.
- the user role object a10 is a character object manipulated by the user of the terminal; in the game system, the user role object a10 can perform corresponding actions in the graphical user interface based on the operation of the user.
- the graphical user interface 800 further includes at least one skill object 803; by operating a skill object, the user can control the user role object a10 to perform a corresponding skill release operation.
- the graphical user interface has a role selection area 802; a role selector object is deployed in the role selection area 802. In this example the role selector object is characterized as a role selection bar object and presents a strip-shaped display effect.
- the role selector object includes at least one window bit, and the role operation object associated with a second role object belonging to the same group as the user role object is rendered in a corresponding window bit;
- the role selection area 802 includes at least one avatar, the at least one avatar respectively corresponding to at least one second role object belonging to the same group as the user role object. As shown in FIG. 3, the five-versus-five application scenario includes four role objects that belong to the same group as the user role object a10, and four corresponding role operation objects are rendered in the role selection area 802.
- This embodiment can be applied to an application scenario involving a multiplayer battle with at least two group members.
- the mutual positional relationship of at least two role operation objects in the role selection area 802 is determined according to the chronological order in which the at least two role operation objects enter the game system. As shown in FIG. 3, the role object associated with the role operation object a11 enters the game system earlier than the role objects associated with the role operation object a12, the role operation object a13, and the role operation object a14, and so on; details are not described herein again.
- Step 403 When a visual field acquisition gesture on at least one role operation object in the role selector object is detected, render, on the graphical user interface, the field of view image captured by the virtual lens associated with the at least one role operation object.
- the method further includes: when the visual field acquisition gesture on the at least one role operation object in the role selector object is detected, generating and transmitting a first instruction, where the first instruction is used to invoke the virtual lens associated with the at least one role operation object and control the virtual lens to acquire a field of view image; and, during the detection of the visual field acquisition gesture, obtaining the field of view image acquired by the virtual lens.
- when detecting a long press gesture on a role operation object in the role selection area 802 (such as the role operation object a11 shown in FIG. 3), the terminal generates a first instruction; based on the first instruction, the terminal establishes a network link to another terminal corresponding to the role object associated with the role operation object, and, based on the network link, sends the first instruction to the other terminal so as to control the other terminal to invoke its virtual lens based on the first instruction and collect a field of view image by using the virtual lens;
- the terminal obtains the field of view image sent by the other terminal in real time and renders the field of view image on the graphical user interface, as shown in the visual field image display area 801 and the enlarged view 801a of the visual field image display area 801 in the figure;
- the role object c11 associated with the role operation object a11 is currently performing a release operation of a skill object toward another role object b11; the visual field image display area 801 of the graphical user interface 800 displays the field of view image in which the role object c11 associated with the role operation object a11 is performing the release operation of the skill object toward the other role object b11, as shown in the figure. It can be understood that, through the visual field acquisition gesture (such as a long press gesture), the terminal can quickly switch to the view image of the corresponding other terminal, so that the user of the terminal can quickly obtain the view image of a teammate.
- when the visual field acquisition gesture is terminated, a second instruction is generated to terminate, based on the second instruction, the call to the virtual lens associated with the at least one role operation object.
- Step 404 Continuously record the change of the state attribute of the user role object in the graphical user interface, generate state attribute information of the user role object, and synchronously update the state attribute information to the server.
- Step 405 Obtain, from the server, state attribute information of the at least one role object associated with the at least one role operation object, and render the state attribute information, in a first preset display manner, in at least one window bit corresponding to the associated role operation object.
- the terminal continuously records changes of the state attribute of the user role object in the graphical user interface; that is, in the process of information interaction between the user role object and other role objects, the terminal records in real time the changes of the state attribute of the user role object, thereby obtaining state attribute information of the user role object; the state attribute information includes, but is not limited to, a blood volume value, a health value, or skill attribute information of the user role object.
- the terminal synchronizes the obtained state attribute information of the user role object to the server in real time.
- for at least one second role object belonging to the same group as the user role object, the terminal corresponding to the second role object also synchronizes the state attribute information of the second role object to the server in real time.
- the terminal obtains, from the server, the state attribute information of the at least one second role object synchronized by the other terminal, that is, obtains the state attribute information of the at least one role object associated with at least one role operation object in the role selector object in the graphical user interface. This may be understood as follows: the terminal obtains the state attribute information of the second role object belonging to the same group as the user role object, and the state attribute information of the second role object is rendered, in a first preset display manner, in at least one window bit corresponding to the associated role operation object.
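The synchronization path described above (each terminal pushes its own role object's state attributes to the server; a teammate's terminal pulls them before rendering the window bit) can be sketched with a dictionary standing in for the server; the function names and payload keys are assumptions:

```python
# A plain dict stands in for the server (server 5 in FIG. 9).
server_state = {}

def sync_to_server(role_id: str, state: dict) -> None:
    # Each terminal pushes its own role object's latest state attributes.
    server_state[role_id] = dict(state)

def fetch_from_server(role_id: str) -> dict:
    # Any terminal in the group pulls a teammate's latest state.
    return dict(server_state.get(role_id, {}))

# The teammate's terminal pushes in real time…
sync_to_server("b_teammate", {"blood_volume": 62, "health": 62})
# …and this terminal pulls before rendering the avatar's window bit.
teammate_state = fetch_from_server("b_teammate")
```

Routing through the server rather than peer-to-peer means each terminal only ever reports its own role object, and stale teammates simply read back their last pushed snapshot.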
- FIG. 6 is a second schematic diagram of a graphical user interface of an information processing method according to an embodiment of the present invention; as shown in FIG. 6, taking the case where the state attribute information is a blood volume value as an example, the outer ring region of the role operation object a21 in the role selection area 802 serves as a blood cell display region a211, and the current blood volume value of the corresponding second role object is represented by the proportion of the blood volume filled in the blood cell display region a211.
- the manner in which the state attribute information is rendered on the role operation object associated with the second role object in the corresponding window bit in this embodiment of the present invention is not limited to that shown in FIG. 6.
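One plausible way to compute the ring fill implied by FIG. 6 (the blood cell display region showing the current blood volume as a proportion) is a simple clamped ratio; this helper is an illustrative assumption, not the patent's rendering code:

```python
def blood_ring_fraction(current: float, maximum: float) -> float:
    """Fraction of the blood cell display region (outer ring) to fill,
    clamped to [0, 1] so malformed inputs never overdraw the ring."""
    if maximum <= 0:
        return 0.0
    return max(0.0, min(1.0, current / maximum))

fill = blood_ring_fraction(450, 1000)  # teammate at 450 of 1000 blood volume
```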
- through the window bits in the role selector object of the role selection area deployed in the graphical user interface, the role operation object associated with the second role object belonging to the same group as the user role object is rendered in the corresponding window bit, so that the user can quickly obtain the visual field image of the corresponding second role object through the visual field acquisition gesture on the role operation object, thereby greatly improving the operation experience of the user in the interaction process.
- in addition, by synchronizing the state attribute information of the second role objects (that is, teammates) belonging to the same group, the state attribute information of the second role object associated with each role operation object in the role selector object is obtained, and the state attribute information is rendered in the corresponding window bit in a specific manner; that is, the state attribute information of the second role object (that is, the teammate) is reflected on the corresponding role operation object (the UI avatar), so that the user can quickly learn the state attribute information of the second role object (that is, the teammate), which improves the user's operation experience in the information interaction process.
- FIG. 7 is a schematic flowchart diagram of an information processing method according to Embodiment 4 of the present invention.
- the information processing method is applied to a terminal; a graphical user interface is obtained by executing a software application on a processor of the terminal and rendering on a display of the terminal, and the processor, the graphical user interface, and the software application are implemented in a game system; as shown in FIG. 7, the method includes:
- Step 501 Render at least one virtual resource object on the graphical user interface.
- Step 502 The at least one role selector object deployed in the at least one role selection area of the graphical user interface includes at least one window bit.
- the graphical user interface includes at least one role selection area; the role selection area includes at least one role selector object, and the role selector object includes at least one window bit, at least some of the window bits carrying corresponding role operation objects. Each role operation object is represented in the graphical user interface by an identifier of the role object associated with that role operation object (the identifier may be an avatar); here, the role object associated with the role operation object belongs to the same group as the user role object.
- the rendering manner of the role selector object in the role selection area includes, but is not limited to, a strip shape and a ring shape; that is, the role selector object can be characterized as a role selection bar object or a role selection disk object.
- at least one virtual resource object is included in the graphical user interface 800 rendered on the display of the terminal, the virtual resource object including at least one user role object a10. The user of the terminal can perform information interaction through the graphical user interface, that is, input a user command; the user role object a10 can perform a first virtual operation based on the first user command detected by the terminal; the first virtual operation includes, but is not limited to, a moving operation, a physical attack operation, a skill attack operation, and so on.
- the user role object a10 is a character object manipulated by the user of the terminal; in the game system, the user role object a10 is capable of performing a corresponding action in the graphical user interface based on the user's operation.
- the graphical user interface 800 further includes at least one skill object 803; by operating a skill object, the user can control the user role object a10 to perform a corresponding skill release operation.
- the graphical user interface has a role selection area 802; a role selector object is deployed in the role selection area 802. In this example the role selector object is characterized as a role selection bar object and presents a strip-shaped display effect.
- the role selector object includes at least one window bit, and the role operation object associated with a second role object belonging to the same group as the user role object is rendered in a corresponding window bit;
- the role selection area 802 includes at least one avatar, the at least one avatar respectively corresponding to at least one second role object belonging to the same group as the user role object. As shown in FIG. 3, the five-versus-five application scenario includes four role objects that belong to the same group as the user role object a10, and four corresponding role operation objects are rendered in the role selection area 802.
- This embodiment can be applied to an application scenario involving a multiplayer battle with at least two group members.
- the mutual positional relationship of at least two role operation objects in the role selection area 802 is determined according to the chronological order in which the at least two role operation objects enter the game system. As shown in FIG. 3, the role object associated with the role operation object a11 enters the game system earlier than the role objects associated with the role operation object a12, the role operation object a13, and the role operation object a14, and so on; details are not described herein again.
- Step 503 When a visual field acquisition gesture on at least one role operation object in the role selector object is detected, render, on the graphical user interface, the field of view image captured by the virtual lens associated with the at least one role operation object.
- the method further includes: when the visual field acquisition gesture on the at least one role operation object in the role selector object is detected, generating and transmitting a first instruction, where the first instruction is used to invoke the virtual lens associated with the at least one role operation object and control the virtual lens to acquire a field of view image; and, during the detection of the visual field acquisition gesture, obtaining the field of view image acquired by the virtual lens.
- when detecting a long press gesture on a role operation object in the role selection area 802 (such as the role operation object a11 shown in FIG. 3), the terminal generates a first instruction; based on the first instruction, the terminal establishes a network link to another terminal corresponding to the role object associated with the role operation object, and, based on the network link, sends the first instruction to the other terminal so as to control the other terminal to invoke its virtual lens based on the first instruction and collect a field of view image by using the virtual lens;
- the terminal obtains the field of view image sent by the other terminal in real time and renders the field of view image on the graphical user interface, as shown in the visual field image display area 801 and the enlarged view 801a of the visual field image display area 801 in the figure;
- the field of view image of the role object b11 on which a release operation of a skill object is being performed can be seen in the figure. It can be understood that, through the visual field acquisition gesture (such as a long press gesture), the terminal can quickly switch to the view image of the corresponding other terminal, so that the user of the terminal can quickly obtain the view image of a teammate.
- when the visual field acquisition gesture is terminated, a second instruction is generated to terminate, based on the second instruction, the call to the virtual lens associated with the at least one role operation object.
- Step 504 Continuously record changes of the state attribute of the user role object in the graphical user interface to generate state attribute information of the user role object; continuously record changes of the skill attribute of the user role object in the graphical user interface and, upon determining that the skill attribute of the user role object reaches a preset condition, generate skill attribute information of the user role object; and synchronously update the state attribute information and the skill attribute information to the server.
- the terminal continuously records the change of the state attribute of the user role object in the graphical user interface, that is, in the process of performing information interaction between the user role object and other role objects,
- the terminal records the change of the state attribute of the user role object in real time, thereby obtaining state attribute information of the user role object;
- the state attribute information includes, but is not limited to, a blood volume value, a health value, or skill attribute information of the user role object.
- the terminal synchronizes the obtained state attribute information of the user role object to the server in real time.
- the terminal corresponding to the second role object also synchronizes the state attribute information of the second role object to the server in real time.
- the terminal also continuously records changes of the skill attribute of the user role object in the graphical user interface; that is, in the process of information interaction between the user role object and other role objects, the terminal records in real time the changes of the skill attribute of the user role object. Since the user role object needs a recovery period after releasing a skill object, the skill object can be released again only after that period of time has elapsed;
- the terminal records the changes of the skill attribute of the user role object in real time and, upon determining that at least one skill object can be released, determines that the skill attribute of the user role object reaches the preset condition and generates the skill attribute information of the user role object, the skill attribute information representing that the user role object is capable of releasing at least one skill object.
- the terminal synchronizes the acquired skill attribute information of the user role object to the server in real time.
- the terminal corresponding to the second role object also synchronizes the skill attribute information of the second role object to the server in real time.
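The preset condition described above (the skill attribute qualifies once at least one skill object has finished its recovery period and can be released again) can be sketched as a time comparison; the tuple layout and the sample cooldown values are illustrative assumptions:

```python
def skill_ready(last_release_time: float, cooldown: float, now: float) -> bool:
    # A skill object can be released again once its recovery period
    # (cooldown) has elapsed since its last release.
    return now - last_release_time >= cooldown

def reaches_preset_condition(skills, now: float) -> bool:
    # Preset condition from Step 504: at least one skill object of the
    # role object can currently be released.
    return any(skill_ready(t, cd, now) for t, cd in skills)

# (last release time, cooldown in seconds) for each skill object
skills = [(0.0, 8.0), (2.0, 12.0)]
```

At `now = 9.0` the first skill (released at 0.0, 8 s cooldown) has recovered, so the condition holds; at `now = 5.0` neither has, so no skill attribute information would be generated.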
- Step 505 Obtain, from the server, state attribute information and skill attribute information of the at least one role object associated with the at least one role operation object; render the state attribute information, in a first preset display manner, in at least one window bit corresponding to the associated role operation object, and render the skill attribute information, in a second preset display manner, in at least one window bit corresponding to the associated role operation object.
- the terminal obtains, from the server, the state attribute information of the at least one second role object synchronized by the other terminal, that is, obtains the state attribute information of the at least one role object associated with at least one role operation object in the role selector object in the graphical user interface. This may be understood as follows: the terminal obtains the state attribute information of the second role object belonging to the same group as the user role object, and the state attribute information of the second role object is rendered, in a first preset display manner, in at least one window bit corresponding to the associated role operation object.
- FIG. 8 is a third schematic diagram of a graphical user interface of an information processing method according to an embodiment of the present invention; as shown in FIG. 8, taking the case where the state attribute information is a blood volume value as an example, the outer ring region of the role operation object a31 in the role selection area 802 serves as a blood cell display region a311, and the current blood volume value of the corresponding second role object is represented by the proportion of the blood volume filled in the blood cell display region a311.
- the manner in which the state attribute information is rendered on the role operation object associated with the second role object in the corresponding window bit in this embodiment of the present invention is not limited to that shown in FIG. 8.
- the terminal obtains, from the server, the skill attribute information of the at least one second role object synchronized by the other terminal, that is, obtains the skill attribute information of the at least one role object associated with at least one role operation object in the role selector object in the graphical user interface. This may be understood as follows: the terminal obtains the skill attribute information of the second role object belonging to the same group as the user role object, and the skill attribute information of the second role object is rendered, in a second preset display manner, in at least one window bit corresponding to the associated role operation object; the skill attribute information is displayed on the role operation object to indicate that the corresponding second role object is currently able to release at least one skill object. Referring to FIG. 8,
- the skill attribute information is represented by a circular identifier a312 in the upper right corner of the role operation object a31 in the role selector object; when the role operation object displays the circular identifier a312, it indicates that the second role object associated with the role operation object is currently capable of releasing at least one skill object; when the role operation object does not display the circular identifier, it indicates that the second role object associated with the role operation object is currently not capable of releasing any skill object.
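The two preset display manners on a teammate's window bit (the blood cell ring as the first manner, the circular identifier a312 for skill readiness as the second) can be summarized as a small overlay description; the structure and key names are assumptions for illustration, not the patent's rendering API:

```python
def window_bit_overlays(blood_fraction: float, can_release_skill: bool) -> dict:
    """Overlays drawn on a teammate's avatar in its window bit:
    - "ring_fill": proportion of the blood cell display region to fill
      (first preset display manner);
    - "circular_identifier": whether the a312-style badge is shown in the
      upper right corner (second preset display manner)."""
    return {
        "ring_fill": max(0.0, min(1.0, blood_fraction)),
        "circular_identifier": can_release_skill,
    }

shown = window_bit_overlays(0.7, True)    # teammate ready to release a skill
hidden = window_bit_overlays(0.7, False)  # all skills still recovering
```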
- likewise, the manner in which the skill attribute information is rendered on the role operation object associated with the second role object in the corresponding window bit in this embodiment of the present invention is not limited to that shown in FIG. 8.
- through the window bits in the role selector object of the role selection area deployed in the graphical user interface, the role operation object associated with the second role object belonging to the same group as the user role object is rendered in the corresponding window bit, so that the user can quickly obtain the visual field image of the corresponding second role object through the visual field acquisition gesture on the role operation object, thereby greatly improving the operation experience of the user in the interaction process.
- in addition, by synchronizing the state attribute information and the skill attribute information of the second role objects (that is, teammates) belonging to the same group, the state attribute information and skill attribute information of the second role object associated with each role operation object in the role selector object are obtained, and the state attribute information and the skill attribute information are rendered in the corresponding window bit in a specific manner; that is, the state attribute information and skill attribute information of the second role object (that is, the teammate) are reflected on the corresponding role operation object (the UI avatar), so that the user can quickly learn the state attribute information and the skill attribute information of the second role object (that is, the teammate), which enhances the user's operation experience in the information interaction process.
- FIG. 9 is a schematic diagram of an interaction application of an information processing method according to an embodiment of the present invention. As shown in FIG. 9, the application scenario includes the terminal 1, the terminal 2, the terminal 3, the terminal 4, and the server 5; the terminal 1 is operated by the user 1 through triggering operations, the terminal 2 by the user 2, the terminal 3 by the user 3, and the terminal 4 by the user 4; the method includes:
- Step 11 The user 1 triggers the game system and enters authentication information; the authentication information may be a username and a password.
- Step 12 The terminal 1 transmits the obtained authentication information to the server 5, and the server 5 performs identity verification; after the identity verification is passed, the server 5 returns a first graphical user interface to the terminal 1;
- the first graphical user interface includes a first role object capable of performing a virtual operation based on a triggering operation of the user 1; the virtual operation includes a moving operation of the first role object, an attack operation or a skill release operation of the first role object on other role objects, and so on.
- Step 21 The user 2 triggers the game system and enters authentication information; the authentication information may be a username and a password.
- Step 22 The terminal 2 transmits the obtained authentication information to the server 5, and the server 5 performs identity verification; after the identity verification is passed, the server 5 returns a second graphical user interface to the terminal 2;
- the second graphical user interface includes a second role object capable of performing a virtual operation based on a triggering operation of the user 2; the virtual operation includes a moving operation of the second role object, an attack operation or a skill release operation of the second role object on other role objects, and so on.
- the first role object rendered in the terminal 1 and the second role object rendered in the terminal 2 belong to the same group, and a window bit of the role selector object of the first graphical user interface in the terminal 1 includes a role operation object associated with the second role object; when the role operation object is operated by a visual field acquisition gesture (such as a long press gesture), the terminal 1 can invoke the virtual lens of the terminal 2, so that the visual field image of the terminal 2 can be obtained through the virtual lens, and the terminal 1 displays the visual field image for as long as the visual field acquisition gesture (such as a long press gesture) continues.
- the window bit of the character device object of the second graphical user interface in the terminal 2 includes a role operation object associated with the first role object, which is similar to the terminal 1 in the terminal 2
- the terminal 2 can invoke the virtual lens of the terminal 1 so that the view image of the terminal 1 can be obtained through the virtual lens, and no longer Narration.
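- The view-sharing mechanism described above can be sketched as follows; the class and method names are hypothetical and not part of the embodiment, which specifies only that a long press invokes a teammate terminal's virtual lens and streams its field-of-view image while the gesture continues.

```python
# Hypothetical sketch of the long-press view-sharing flow between two
# terminals in the same group; all names are illustrative only.

class VirtualLens:
    """Stand-in for a terminal's game camera."""
    def __init__(self, owner):
        self.owner = owner

    def capture(self):
        # A real client would render the owner's current scene here.
        return f"view image of {self.owner}"

class Terminal:
    def __init__(self, name):
        self.name = name
        self.lens = VirtualLens(name)
        self.displayed = None

    def on_view_gesture_start(self, teammate):
        # First instruction: invoke the teammate's virtual lens.
        self.displayed = teammate.lens.capture()

    def on_view_gesture_end(self):
        # Second instruction: stop the invocation.
        self.displayed = None

t1, t2 = Terminal("terminal 2"), Terminal("terminal 1")
t1.on_view_gesture_start(t2)   # long press on the teammate's avatar
print(t1.displayed)            # terminal 1's field-of-view image is shown
t1.on_view_gesture_end()       # releasing the press restores local display
```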
- Step 31: The user 3 triggers the game system on the terminal 3 and enters authentication information, which may be a username and a password.
- Step 32: The terminal 3 transmits the obtained authentication information to the server 5; the server 5 performs identity verification and, after the verification passes, returns the third graphical user interface to the terminal 3. The third graphical user interface includes a third character object capable of performing a virtual operation based on a triggering operation of the user 3; the virtual operation includes a moving operation of the third character object, an attack operation or a skill release operation of the third character object on another character object, and the like.
- Step 41: The user 4 triggers the game system on the terminal 4 and enters authentication information, which may be a username and a password.
- Step 42: The terminal 4 transmits the obtained authentication information to the server 5; the server 5 performs identity verification and, after the verification passes, returns the fourth graphical user interface to the terminal 4. The fourth graphical user interface includes a fourth character object capable of performing a virtual operation based on a triggering operation of the user 4; the virtual operation includes a moving operation of the fourth character object, an attack operation or a skill release operation of the fourth character object on another character object, and the like.
- Like the terminal 1 and the terminal 2, both the terminal 3 and the terminal 4 render character operation objects associated with the other character objects belonging to the same group; when a field-of-view acquisition gesture (such as a long-press gesture) on a character operation object is detected, the field-of-view image of the character object associated with that character operation object is obtained, and details are not described herein again.
- In this application scenario, the character objects in the first group can, based on a triggering operation, take the character objects in the second group as objects of information interaction.
- Step 13: The user 1 performs a triggering operation on the first graphical user interface presented by the terminal 1. The triggering operation may be directed at any virtual resource object in the first graphical user interface, including a skill release operation on any skill object, an information interaction operation on any character object (which may be understood as a physical attack operation), a moving operation of the first character object, and the like. In this embodiment, the triggering operation further includes a field-of-view acquisition gesture on a character operation object in the character container object of the first graphical user interface.
- Step 14: When the terminal 1 detects the triggering operation, it identifies the instruction corresponding to the triggering gesture and executes the instruction, for example, executing a skill release instruction on the corresponding operation object, executing an information interaction instruction (e.g., a physical attack instruction) on the corresponding character object, executing a moving instruction, and so on. In the process of executing the instruction, the change of the corresponding data is recorded.
- Step 15: The changed data is synchronized to the server 5 as the first data corresponding to the terminal 1.
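- Steps 13 to 15 (and the parallel steps at the other terminals) amount to a gesture-to-instruction dispatch loop; a minimal sketch follows, with hypothetical gesture and instruction names that are not taken from the embodiment.

```python
# Hypothetical dispatch of a recognized trigger gesture to an instruction,
# recording the resulting data change for later synchronization.

def identify_instruction(gesture):
    # Map a recognized gesture to an instruction name.
    table = {
        "tap_skill": "skill_release",
        "tap_enemy": "physical_attack",
        "drag": "move",
        "long_press_avatar": "acquire_view",
    }
    return table.get(gesture)

def execute(gesture, changed_data):
    instruction = identify_instruction(gesture)
    if instruction is None:
        return None
    # Record the change of the corresponding data while executing.
    changed_data.append(instruction)
    return instruction

changes = []
execute("tap_skill", changes)
execute("drag", changes)
print(changes)   # the data to be synchronized to the server as "first data"
```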
- Step 23: The user 2 performs a triggering operation on the second graphical user interface presented by the terminal 2. The triggering operation may be directed at any virtual resource object in the second graphical user interface, including a skill release operation on any skill object, an information interaction operation on any character object (which may be understood as a physical attack operation), a moving operation of the second character object, and the like. In this embodiment, the triggering operation further includes a field-of-view acquisition gesture on a character operation object in the character container object of the second graphical user interface.
- Step 24: When the terminal 2 detects the triggering operation, it identifies the instruction corresponding to the triggering gesture and executes the instruction, for example, executing a skill release instruction on the corresponding operation object, executing an information interaction instruction (e.g., a physical attack instruction) on the corresponding character object, executing a moving instruction, and so on. In the process of executing the instruction, the change of the corresponding data is recorded.
- Step 25: The changed data is synchronized to the server 5 as the second data corresponding to the terminal 2.
- Step 33: The user 3 performs a triggering operation on the third graphical user interface presented by the terminal 3. The triggering operation may be directed at any virtual resource object in the third graphical user interface, including a skill release operation on any skill object, an information interaction operation on any character object (which may be understood as a physical attack operation), a moving operation of the third character object, and the like. In this embodiment, the triggering operation further includes a field-of-view acquisition gesture on a character operation object in the character container object of the third graphical user interface.
- Step 34: When the terminal 3 detects the triggering operation, it identifies the instruction corresponding to the triggering gesture and executes the instruction, for example, executing a skill release instruction on the corresponding operation object, executing an information interaction instruction (e.g., a physical attack instruction) on the corresponding character object, executing a moving instruction, and so on. In the process of executing the instruction, the change of the corresponding data is recorded.
- Step 35: The changed data is synchronized to the server 5 as the third data corresponding to the terminal 3.
- Step 43: The user 4 performs a triggering operation on the fourth graphical user interface presented by the terminal 4. The triggering operation may be directed at any virtual resource object in the fourth graphical user interface, including a skill release operation on any skill object, an information interaction operation on any character object (which may be understood as a physical attack operation), a moving operation of the fourth character object, and the like. In this embodiment, the triggering operation further includes a field-of-view acquisition gesture on a character operation object in the character container object of the fourth graphical user interface.
- Step 44: When the terminal 4 detects the triggering operation, it identifies the instruction corresponding to the triggering gesture and executes the instruction, for example, executing a skill release instruction on the corresponding operation object, executing an information interaction instruction (e.g., a physical attack instruction) on the corresponding character object, executing a moving instruction, and so on. In the process of executing the instruction, the change of the corresponding data is recorded.
- Step 45: The changed data is synchronized to the server 5 as the fourth data corresponding to the terminal 4.
- Step 50: Based on the first data synchronized by the terminal 1, the second data synchronized by the terminal 2, the third data synchronized by the terminal 3, and the fourth data synchronized by the terminal 4, the server 5 updates its data and synchronizes the updated data to the terminal 1, the terminal 2, the terminal 3, and the terminal 4, respectively.
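- Step 50 is a merge-and-broadcast cycle on the server. The sketch below uses hypothetical names and a trivial per-terminal merge, since the embodiment does not specify the merge rule.

```python
# Hypothetical server-side merge-and-broadcast of the per-terminal data.

class Server:
    def __init__(self):
        self.state = {}        # authoritative game state
        self.terminals = []

    def register(self, terminal):
        self.terminals.append(terminal)

    def synchronize(self, terminal_id, changed_data):
        # Merge one terminal's changed data into the authoritative state.
        self.state[terminal_id] = changed_data
        # Broadcast the updated state to every connected terminal.
        for t in self.terminals:
            t.updated_state = dict(self.state)

class Client:
    def __init__(self, terminal_id):
        self.terminal_id = terminal_id
        self.updated_state = {}

server = Server()
clients = [Client(i) for i in (1, 2, 3, 4)]
for c in clients:
    server.register(c)

server.synchronize(1, ["skill_release"])   # first data, from terminal 1
server.synchronize(2, ["move"])            # second data, from terminal 2
print(clients[3].updated_state)            # terminal 4 receives the merged state
```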
- Here, the application scenario relates to a multiplayer online battle arena (MOBA) game. The technical terms involved in a MOBA game are: 1) UI layer: the icons in the graphical user interface; 2) skill indicator: a special effect, aperture, or operation used to assist skill release; 3) virtual lens: can be understood as the camera in the game; 4) mini map: a reduced version of the big map, which can be understood as a radar map on which the information and locations of the enemy are displayed.
- FIG. 10 is a fourth schematic diagram of a graphical user interface in an information processing method according to an embodiment of the present invention; this embodiment is based on an application scenario of an actual interaction process.
- The graphical user interface 90 rendered in this embodiment includes a character selection area 92, and the character selection area 92 includes a character container object; in this illustration the character container object includes four window positions, each of which renders a character operation object: a character operation object 921, a character operation object 922, a character operation object 923, and a character operation object 924. Each character operation object is associated with a character object, and the four associated character objects belong to the same group as the user character object.
- The graphical user interface 90 further includes an area 91. When no field-of-view acquisition gesture on any character operation object in the character selection area 92 is detected, the area 91 renders a mini map showing the deployment of both sides (see FIG. 10). When a field-of-view acquisition gesture (such as a long-press gesture) on any character operation object (such as the character operation object 921) in the character selection area 92 is detected, the terminal invokes the virtual lens corresponding to the character object associated with the character operation object 921, controls the virtual lens to collect a field-of-view image, and returns the image to the graphical user interface 90; the area 91 then renders the field-of-view image of the character object associated with the character operation object 921 (not shown in FIG. 10). In this way, the user can quickly obtain the field-of-view image of the corresponding second character object through the field-of-view acquisition gesture on the character operation object, which greatly improves the user's operation experience in the interaction process.
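- The behavior of the area 91 is a simple toggle between the mini map and a teammate's view; a hypothetical sketch, with names that are not taken from the embodiment:

```python
# Hypothetical rendering decision for area 91: show the mini map by
# default, or a teammate's field-of-view image while a long press is held.

def render_area_91(pressed_avatar, capture_view):
    if pressed_avatar is None:
        return "mini map"
    # Invoke the virtual lens of the character associated with the avatar.
    return capture_view(pressed_avatar)

views = {"921": "view image of character c921"}
print(render_area_91(None, views.get))    # no gesture: the mini map
print(render_area_91("921", views.get))   # long press on avatar 921
```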
- the embodiment of the invention further provides a terminal.
- FIG. 11 is a schematic structural diagram of a terminal according to Embodiment 5 of the present invention. As shown in FIG. 11, the terminal includes: a rendering processing unit 61, a deployment unit 62, a detecting unit 63, and an operation executing unit 64.
- The rendering processing unit 61 is configured to execute a software application and perform rendering to obtain a graphical user interface, and to render at least one virtual resource object on the graphical user interface; it is further configured to render, on the graphical user interface, the field-of-view image captured by the virtual lens associated with at least one character operation object, the image being obtained by the operation executing unit 64.
- The deployment unit 62 is configured to deploy, in at least one character selection area of the graphical user interface, at least one character container object that includes at least one window position.
- The detecting unit 63 is configured to detect a field-of-view acquisition gesture on at least one character operation object in the character container object.
- The operation executing unit 64 is configured to obtain, when the detecting unit 63 detects the field-of-view acquisition gesture on at least one character operation object in the character container object, the field-of-view image captured by the virtual lens associated with the at least one character operation object.
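- The four units above can be wired together as in this hypothetical sketch; the embodiment defines only their responsibilities, not any concrete API.

```python
# Hypothetical wiring of the terminal's four processing units
# (rendering 61, deployment 62, detection 63, operation execution 64).

class DeploymentUnit:
    def deploy(self, teammates):
        # One window position (slot) per teammate's character operation object.
        return list(teammates)

class DetectingUnit:
    def detect(self, touch_event):
        # A long press on an avatar counts as a field-of-view acquisition gesture.
        return touch_event.get("type") == "long_press"

class OperationExecutingUnit:
    def obtain_view_image(self, avatar):
        return f"view image for {avatar}"

class RenderingProcessingUnit:
    def render(self, image):
        return f"rendered: {image}"

slots = DeploymentUnit().deploy(["a11", "a12", "a13", "a14"])
event = {"type": "long_press", "avatar": "a11"}
if DetectingUnit().detect(event):
    image = OperationExecutingUnit().obtain_view_image(event["avatar"])
    print(RenderingProcessingUnit().render(image))
```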
- In this embodiment, the graphical user interface includes at least one character selection area, the character selection area includes at least one character container object, and the character container object includes at least one window position, at least part of which carries a corresponding character operation object. The character operation object is represented in the graphical user interface by an identifier of the character object associated with it (the identifier may be an avatar); here, the character object associated with the character operation object belongs to the same group as the user character object.
- The rendering manner of the character container object in the character selection area includes, but is not limited to, a strip shape and a ring shape; that is, the character container object may be represented by a character selection bar object or a character selection disk object.
- The graphical user interface 800 rendered by the rendering processing unit 61 includes at least one virtual resource object, wherein the virtual resource object includes at least one user character object a10. The user of the terminal can perform information interaction through the graphical user interface, that is, input a user command; the user character object a10 can perform a first virtual operation based on the first user command detected by the terminal, the first virtual operation including, but not limited to, a moving operation, a physical attack operation, a skill attack operation, and so on.
- Here, the user character object a10 is a character object manipulated by the user of the terminal; in the game system, the user character object a10 can perform corresponding actions in the graphical user interface based on the user's operations.
- The graphical user interface 800 further includes at least one skill object 803; the user can control the user character object a10 to release a corresponding skill through a skill release operation.
- The deployment unit 62 deploys a character selection area 802 in the graphical user interface, and a character container object is deployed in the character selection area 802. In this embodiment, the character container object is represented by a character selection bar object (that is, the character container object presents a strip-shaped display effect). The character container object includes at least one window position, and a character operation object associated with a second character object belonging to the same group as the user character object is rendered in the corresponding window position. In this embodiment, the character operation object is represented by an avatar as an example; that is, the character selection area 802 includes at least one avatar, and the at least one avatar is in one-to-one correspondence with the at least one second character object of the same group as the user character object.
- In the five-versus-five application scenario, four character objects belong to the same group as the user character object a10, and four character operation objects are correspondingly included in the character selection area 802. This embodiment can also be applied to application scenarios involving a multiplayer battle with at least two group members.
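- The one-to-one mapping between avatars and same-group second character objects can be sketched as follows; the function and identifier names are hypothetical.

```python
# Hypothetical construction of the character selection area: one avatar
# (character operation object) per teammate of the user character object.

def build_selection_area(user, all_characters, groups):
    group = groups[user]
    # Teammates: same group, excluding the user character object itself.
    teammates = [c for c in all_characters
                 if groups[c] == group and c != user]
    return [f"avatar_of_{c}" for c in teammates]

groups = {"a10": 1, "c11": 1, "c12": 1, "c13": 1, "c14": 1,
          "b11": 2, "b12": 2, "b13": 2, "b14": 2, "b15": 2}
area = build_selection_area("a10", list(groups), groups)
print(len(area))   # four window positions in a five-versus-five scenario
```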
- In this embodiment, the mutual positional relationship of at least two character operation objects in the character selection area 802 is determined according to the chronological order in which the associated character objects enter the game system. As shown in FIG. 3, the character object associated with the character operation object a11 enters the game system earlier than the character objects associated with the character operation object a12, the character operation object a13, and the character operation object a14, and details are not described herein again.
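- Ordering the window positions by entry time reduces to a sort on join timestamps; a hypothetical sketch:

```python
# Hypothetical ordering of character operation objects by the time at
# which their associated character objects entered the game system.

def order_slots(join_times):
    # Earlier entry -> earlier (leftmost) window position.
    return sorted(join_times, key=join_times.get)

join_times = {"a12": 20.0, "a11": 5.0, "a14": 42.0, "a13": 31.5}
print(order_slots(join_times))   # a11 comes first: it joined earliest
```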
- In an implementation, the operation executing unit 64 is configured to generate and send a first instruction when the detecting unit 63 detects a field-of-view acquisition gesture on at least one character operation object in the character container object, the first instruction being used to invoke the virtual lens associated with the at least one character operation object and control the virtual lens to acquire a field-of-view image; and, while the detecting unit 63 continues to detect the field-of-view acquisition gesture, to obtain the field-of-view image acquired by the virtual lens.
- Specifically, when the detecting unit 63 detects a long-press gesture on a character operation object in the character selection area 802 (such as the character operation object a11 shown in FIG. 3), the operation executing unit 64 generates a first instruction, establishes, based on the first instruction, a network link to the other terminal corresponding to the character object associated with the character operation object, and sends the first instruction over the network link to that terminal, so as to control it to invoke its virtual lens based on the first instruction and collect a field-of-view image through the virtual lens.
- While the detecting unit 63 continuously detects the long-press gesture on the character operation object a11, the operation executing unit 64 obtains the field-of-view image sent by the other terminal in real time and renders it on the graphical user interface, as shown in the field-of-view image display area 801 and its enlarged view 801a in FIG. 3, in which the field-of-view image corresponding to the character operation object a11 is displayed.
- The field-of-view image is the image that the manipulating user of the character object associated with the character operation object a11 can browse. For example, if the character object c11 associated with the character operation object a11 is currently performing a release operation of a skill object on another character object b11, the field-of-view image display area 801 of the graphical user interface 800 displays a field-of-view image in which the character object c11 associated with the character operation object a11 performs the release operation of the skill object on the other character object b11, see FIG. 3. It can be understood that, through the field-of-view acquisition gesture (such as a long-press gesture), the terminal can quickly switch to the field-of-view image of the corresponding other terminal, so that the user of the terminal can quickly obtain the field-of-view image of a teammate.
- The operation executing unit 64 is further configured to generate a second instruction when the detecting unit 63 detects that the field-of-view acquisition gesture is terminated, and to terminate, based on the second instruction, the invocation of the virtual lens associated with the at least one character operation object. Specifically, when the detecting unit 63 detects that the long-press gesture is terminated, the operation executing unit 64 generates the second instruction, terminates the invocation of the virtual lens associated with the at least one character operation object based on the second instruction, and terminates the network link between the terminal and the other terminal.
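- The first-instruction/second-instruction lifecycle, including the network link to the teammate's terminal, can be sketched as follows; the names are hypothetical.

```python
# Hypothetical lifecycle of a view-sharing session: a first instruction
# opens a network link and starts the remote virtual lens; a second
# instruction stops the lens and tears the link down.

class ViewSession:
    def __init__(self):
        self.link_open = False
        self.lens_active = False

    def first_instruction(self):
        self.link_open = True     # network link to the other terminal
        self.lens_active = True   # remote virtual lens collecting images

    def frame(self):
        return "field-of-view image" if self.lens_active else None

    def second_instruction(self):
        self.lens_active = False  # stop invoking the virtual lens
        self.link_open = False    # terminate the network link

s = ViewSession()
s.first_instruction()    # long press begins
print(s.frame())         # images stream while the press is held
s.second_instruction()   # long press ends
print(s.frame())         # no further images; link is closed
```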
- The functions of the processing units in the terminal of this embodiment of the present invention can be understood by referring to the related description of the foregoing information processing method; the processing units in the information processing terminal according to the embodiments of the present invention can be implemented by analog circuits that perform the functions described in the embodiments, or by running, on an intelligent terminal, software that performs the functions described in the embodiments.
- an embodiment of the present invention further provides a terminal.
- FIG. 12 is a schematic structural diagram of a terminal according to Embodiment 6 of the present invention. As shown in FIG. 12, the terminal includes: a rendering processing unit 61, a deployment unit 62, a detecting unit 63, an operation executing unit 64, and a communication unit 65.
- The rendering processing unit 61 is configured to execute a software application and perform rendering to obtain a graphical user interface, and to render at least one virtual resource object on the graphical user interface; it is further configured to render, on the graphical user interface, the field-of-view image captured by the virtual lens associated with at least one character operation object, the image being obtained by the operation executing unit 64; it is further configured to render the state attribute information obtained by the operation executing unit 64 in at least one window position corresponding to the associated character operation object in a first preset display manner.
- The deployment unit 62 is configured to deploy, in at least one character selection area of the graphical user interface, at least one character container object that includes at least one window position.
- The detecting unit 63 is configured to detect a field-of-view acquisition gesture on at least one character operation object in the character container object.
- The operation executing unit 64 is configured to: obtain, when the detecting unit 63 detects the field-of-view acquisition gesture on at least one character operation object in the character container object, the field-of-view image captured by the virtual lens associated with the at least one character operation object; continuously record changes of the state attributes of the user character object in the graphical user interface, generate state attribute information of the user character object, and synchronously update the state attribute information to the server through the communication unit 65; and obtain, from the server through the communication unit 65, state attribute information of the at least one character object associated with the at least one character operation object.
- For the functions of the rendering processing unit 61, the deployment unit 62, the detecting unit 63, and the operation executing unit 64, reference may be made to the description of Embodiment 5, and details are not described herein again.
- In this embodiment, the operation executing unit 64 continuously records the changes of the state attributes of the user character object in the graphical user interface; that is, in the process of information interaction between the user character object and other character objects, the terminal records the changes of the state attributes of the user character object in real time, thereby obtaining the state attribute information of the user character object. The state attribute information includes, but is not limited to, a blood volume value, a hit point value, or skill attribute information of the user character object. The operation executing unit 64 synchronizes the obtained state attribute information of the user character object to the server through the communication unit 65 in real time.
- Correspondingly, the terminal corresponding to a second character object also synchronizes the state attribute information of the second character object to the server in real time.
- Further, the operation executing unit 64 obtains, from the server through the communication unit 65, the state attribute information of the at least one second character object synchronized by the other terminals; that is, it obtains the state attribute information of the at least one character object associated with the at least one character operation object in the character container object of the graphical user interface. This can be understood as: the operation executing unit 64 obtains the state attribute information of the second character objects belonging to the same group as the user character object, and the state attribute information of each second character object is rendered in the window position corresponding to the associated character operation object according to the first preset display manner. Referring to FIG. 6, the state attribute information is a blood volume value; an outer ring region of a character operation object in the character container object is used as a blood bar display region, and the proportion of the blood bar filled in the display region characterizes the current blood volume value of the corresponding second character object. Of course, the manner in which the state attribute information is rendered in the window position of the character operation object associated with the second character object in this embodiment of the present invention is not limited to that shown in FIG. 6.
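- Rendering the blood bar as a filled fraction of the avatar's outer ring reduces to computing a ratio; a hypothetical sketch:

```python
# Hypothetical blood-bar rendering: the filled proportion of the avatar's
# outer ring reflects the teammate's current blood volume value.

def ring_fill_fraction(current_hp, max_hp):
    if max_hp <= 0:
        return 0.0
    # Clamp so the ring never over- or under-fills.
    return max(0.0, min(1.0, current_hp / max_hp))

print(ring_fill_fraction(750, 1000))   # three quarters of the ring filled
print(ring_fill_fraction(1200, 1000))  # clamped to a full ring
```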
- In an implementation, the operation executing unit 64 is further configured to continuously record changes of the skill attributes of the user character object in the graphical user interface, determine that a skill attribute of the user character object reaches a preset condition, generate skill attribute information of the user character object, and synchronously update it to the server through the communication unit 65. The communication unit 65 obtains, from the server, skill attribute information of the at least one character object associated with the at least one character operation object. The rendering processing unit 61 is further configured to render the skill attribute information obtained by the operation executing unit 64 in at least one window position corresponding to the associated character operation object in a second preset display manner.
- In this embodiment, the operation executing unit 64 continuously records the changes of the skill attributes of the user character object in the graphical user interface; that is, in the process of information interaction between the user character object and other character objects, the operation executing unit 64 records the changes of the skill attributes of the user character object in real time. Since a skill object needs a period of time to recover after the user character object releases it, the skill object can be released again only after that time period. In this embodiment, when the operation executing unit 64, while recording the changes of the skill attributes in real time, determines that at least one skill object can be released, it determines that the skill attribute of the user character object reaches the preset condition and generates the skill attribute information of the user character object, the skill attribute information indicating that the user character object can release at least one skill object.
- The operation executing unit 64 synchronizes the obtained skill attribute information of the user character object to the server through the communication unit 65 in real time.
- Correspondingly, the terminal corresponding to a second character object also synchronizes the skill attribute information of the second character object to the server in real time.
- Further, the operation executing unit 64 obtains, from the server through the communication unit 65, the skill attribute information of the at least one second character object synchronized by the other terminals; that is, it obtains the skill attribute information of the at least one character object associated with the at least one character operation object in the character container object of the graphical user interface. This can be understood as: the operation executing unit 64 obtains the skill attribute information of the second character objects belonging to the same group as the user character object, and the skill attribute information of each second character object is rendered in the window position corresponding to the associated character operation object in the second preset display manner; the skill attribute information displayed on a character operation object indicates that the corresponding second character object is currently capable of releasing at least one skill object. Referring to FIG. 8, the skill attribute information is represented by a circular identifier in the upper right corner of the character operation object in the character selection area 802: when the character operation object displays the circular identifier, the second character object associated with the character operation object is currently capable of releasing at least one skill object; when the character operation object does not display the circular identifier, the second character object associated with the character operation object is currently unable to release any skill object. Of course, the manner in which the skill attribute information is rendered in the window position of the character operation object associated with the second character object in this embodiment of the present invention is not limited to that shown in FIG. 8.
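- The circular skill-ready identifier amounts to a cooldown check; a hypothetical sketch:

```python
# Hypothetical skill-ready indicator: a teammate's avatar shows the
# circular identifier once at least one skill has finished recovering.

def show_skill_ready(skill_ready_times, now):
    # True if at least one skill object can currently be released.
    return any(ready <= now for ready in skill_ready_times)

cooldowns = [12.0, 30.0, 45.0]   # times at which each skill recovers
print(show_skill_ready(cooldowns, now=10.0))  # nothing ready: no identifier
print(show_skill_ready(cooldowns, now=15.0))  # first skill recovered
```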
- The functions of the processing units in the terminal of this embodiment of the present invention can be understood by referring to the related description of the foregoing information processing method; the processing units in the information processing terminal according to the embodiments of the present invention can be implemented by analog circuits that perform the functions described in the embodiments of the present invention, or by running, on an intelligent terminal, software that performs the functions described in the embodiments of the present invention.
- In practical applications, the rendering processing unit 61, the deployment unit 62, the detecting unit 63, and the operation executing unit 64 in the terminal may be implemented by a central processing unit (CPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA) in the terminal.
- The embodiment of the present invention further provides a terminal. The terminal may be an electronic device such as a PC, or a portable electronic device such as a tablet computer, a laptop computer, or a smart phone; the game system is executed on the terminal by installing a software application (such as a game application). The terminal includes at least a memory for storing data and a processor for data processing. The processor for data processing may, when performing processing, be implemented by a microprocessor, a CPU, a DSP, or an FPGA; the memory includes operation instructions, which may be computer-executable code, through which the steps of the information processing method of the embodiment of the present invention are implemented.
- The terminal includes a processor 71 and a display 72. The processor 71 is configured to execute a software application and perform rendering on the display 72 to obtain a graphical user interface; the graphical user interface and the software application are implemented on a game system.
- The processor 71 is configured to render at least one virtual resource object on the graphical user interface, the at least one character container object deployed in at least one character selection area of the graphical user interface including at least one window position; and, when a field-of-view acquisition gesture on at least one character operation object in the character container object is detected, to render, on the graphical user interface, the field-of-view image captured by the virtual lens associated with the at least one character operation object.
- The processor 71 is configured to: when the view acquisition gesture on the at least one character operation object in the character container object is detected, generate and send a first instruction, the first instruction being used to invoke the virtual lens associated with the at least one character operation object and to control the virtual lens to capture a view image; and, during detection of the view acquisition gesture, obtain the view image captured by the virtual lens.
- The processor 71 is further configured to: when the view acquisition gesture terminates, generate a second instruction, and terminate, based on the second instruction, the invocation of the virtual lens associated with the at least one character operation object.
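The first-instruction/second-instruction lifecycle described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the names `VirtualLens`, `GestureController`, and the string-valued view image are all assumptions made for the sketch.

```python
class VirtualLens:
    """Hypothetical camera bound to a character object; captures its field of view."""

    def __init__(self, character_id):
        self.character_id = character_id
        self.active = False

    def start_capture(self):
        self.active = True

    def capture_view_image(self):
        # Placeholder: a real engine would render the scene from the
        # associated character's viewpoint here.
        return f"view-of-{self.character_id}" if self.active else None

    def stop_capture(self):
        self.active = False


class GestureController:
    def __init__(self):
        self.lenses = {}  # character operation object id -> VirtualLens

    def on_view_gesture_start(self, character_id):
        # "First instruction": invoke the virtual lens associated with the
        # character operation object and control it to capture a view image.
        lens = self.lenses.setdefault(character_id, VirtualLens(character_id))
        lens.start_capture()
        return lens.capture_view_image()

    def on_view_gesture_end(self, character_id):
        # "Second instruction": terminate the invocation of the virtual lens.
        lens = self.lenses.get(character_id)
        if lens:
            lens.stop_capture()


controller = GestureController()
frame = controller.on_view_gesture_start("teammate-2")  # gesture detected
controller.on_view_gesture_end("teammate-2")            # gesture terminated
```

While the gesture is held, the view image would be re-captured each frame and rendered on the graphical user interface; the sketch returns a single capture for brevity.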
- The terminal further includes a communication device 74.
- The processor 71 is further configured to continuously record changes of a state attribute of the user character object in the graphical user interface, generate state attribute information of the user character object, and synchronously update the state attribute information to the server through the communication device 74.
- The processor 71 is further configured to obtain, from the server through the communication device 74, state attribute information of at least one character object associated with the at least one character operation object, and to render the state attribute information in a first preset display manner in at least one window slot corresponding to the associated character operation object.
- The processor 71 is further configured to continuously record changes of a skill attribute of the user character object in the graphical user interface, and, when determining that the skill attribute of the user character object meets a preset condition, generate skill attribute information of the user character object and synchronously update the skill attribute information to the server through the communication device 74.
- The processor 71 is further configured to obtain, from the server through the communication device 74, skill attribute information of at least one character object associated with the at least one character operation object, and to render the skill attribute information in a second preset display manner in at least one window slot corresponding to the associated character operation object.
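The attribute-synchronization and window-slot rendering flow described in the preceding paragraphs can be sketched as below. The server stub, the attribute values, and the two display formats (a health readout for the first preset display manner, a skill-ready readout for the second) are illustrative assumptions, not the patent's actual protocol.

```python
class ServerStub:
    """Hypothetical stand-in for the server that receives synchronized attributes."""

    def __init__(self):
        self.attributes = {}  # character_id -> {"state": ..., "skill": ...}

    def update(self, character_id, kind, info):
        # Receives synchronously updated state/skill attribute information.
        self.attributes.setdefault(character_id, {})[kind] = info

    def fetch(self, character_id, kind):
        # Returns the attribute information another terminal would obtain.
        return self.attributes.get(character_id, {}).get(kind)


def render_window_slot(character_id, server, kind):
    """Render attribute info in the window slot of the associated operation object.

    kind == "state" uses the first preset display manner, kind == "skill" the
    second; both concrete formats here are assumptions for illustration.
    """
    info = server.fetch(character_id, kind)
    if kind == "state":
        return f"[{character_id}] HP {info}%"       # first preset display manner
    return f"[{character_id}] skill ready: {info}"  # second preset display manner


server = ServerStub()
server.update("teammate-2", "state", 80)          # continuously recorded state change
server.update("teammate-2", "skill", "ultimate")  # skill attribute met preset condition
```

A teammate's terminal would then call `render_window_slot` for each occupied slot to refresh the on-screen attribute display.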
- The terminal in this embodiment includes: a processor 71, a display 72, a memory 73, an input device 76, a bus 75, and a communication device 74. The processor 71, the memory 73, the input device 76, the display 72, and the communication device 74 are all connected via the bus 75, which is used to transfer data among the processor 71, the memory 73, the display 72, and the communication device 74.
- The input device 76 is mainly configured to obtain input operations of a user, and may differ depending on the terminal. When the terminal is a PC, the input device 76 may be a mouse or a keyboard; when the terminal is a portable device such as a smartphone or a tablet computer, the input device 76 may be a touchscreen.
- The memory 73 includes a computer storage medium, the computer storage medium stores computer-executable instructions, and the computer-executable instructions are used to perform the information processing method according to the embodiments of the present invention.
- The disclosed apparatus and method may be implemented in other manners. The device embodiments described above are merely illustrative. For example, the division into units is only a division by logical function; in actual implementation there may be other manners of division, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
- The couplings, direct couplings, or communication connections between the components shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
- The units described above as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, each unit may separately serve as one unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
- The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the foregoing method embodiments. The foregoing storage medium includes any medium that can store program code, such as a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
- Alternatively, when the integrated unit of the present invention is implemented in the form of a software function module and sold or used as a standalone product, it may be stored in a computer-readable storage medium.
- Based on such an understanding, the technical solutions of the embodiments of the present invention may essentially be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present invention.
- The foregoing storage medium includes any medium that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk, or an optical disc.
- With the technical solutions of the embodiments of the present invention, the character operation objects associated with the second character objects that belong to the same group as the user character object are rendered in the corresponding window slots of the character container object deployed in the character selection area of the graphical user interface, so that the user can quickly obtain the view image of a corresponding second character object through a view acquisition gesture on its character operation object, thereby greatly improving the user's operation experience in the interaction process.
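The character container with per-teammate window slots summarized above can be sketched as a small data model. The class names, the `op-` identifier scheme, and the group labels are hypothetical; the point is only that slots are created for same-group (second) character objects.

```python
from dataclasses import dataclass, field


@dataclass
class WindowSlot:
    operation_object_id: str   # character operation object rendered in this slot
    associated_character: str  # second character object the slot maps to


@dataclass
class CharacterContainer:
    """Hypothetical character container object in the character selection area."""

    slots: list = field(default_factory=list)

    def deploy_same_group(self, user_group, all_characters):
        # Only characters in the same group as the user character object
        # receive a window slot with an associated operation object.
        for char_id, group in all_characters.items():
            if group == user_group:
                self.slots.append(
                    WindowSlot(operation_object_id=f"op-{char_id}",
                               associated_character=char_id))
        return self


container = CharacterContainer().deploy_same_group(
    "blue", {"ally-1": "blue", "enemy-1": "red", "ally-2": "blue"})
```

A view acquisition gesture on a slot's operation object would then look up `associated_character` to select which virtual lens to invoke.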
Claims (22)
- An information processing method, wherein a software application is executed on a processor of a terminal and rendering is performed on a display of the terminal to obtain a graphical user interface, the processor, the graphical user interface, and the software application being implemented on a game system, the method comprising: rendering at least one virtual resource object on the graphical user interface; deploying, in at least one character selection area of the graphical user interface, at least one character container object comprising at least one window slot; and, when a view acquisition gesture on at least one character operation object in the character container object is detected, rendering on the graphical user interface a view image captured by a virtual lens associated with the at least one character operation object.
- The method according to claim 1, wherein, when the view acquisition gesture on the at least one character operation object in the character container object is detected, the method further comprises: generating and sending a first instruction, the first instruction being used to invoke the virtual lens associated with the at least one character operation object and to control the virtual lens to capture a view image; and obtaining, during detection of the view acquisition gesture, the view image captured by the virtual lens.
- The method according to claim 2, wherein, when the view acquisition gesture terminates, the method further comprises: generating a second instruction, and terminating, based on the second instruction, the invocation of the virtual lens associated with the at least one character operation object.
- The method according to claim 1, further comprising: continuously recording changes of a state attribute of a user character object in the graphical user interface, generating state attribute information of the user character object, and synchronously updating the state attribute information to a server.
- The method according to claim 1, further comprising: continuously recording changes of a skill attribute of a user character object in the graphical user interface, and, when determining that the skill attribute of the user character object meets a preset condition, generating skill attribute information of the user character object and synchronously updating the skill attribute information to a server.
- The method according to claim 4, further comprising: obtaining, from the server, state attribute information of at least one character object associated with the at least one character operation object, and rendering the state attribute information in a first preset display manner in at least one window slot corresponding to the associated character operation object.
- The method according to claim 5, further comprising: obtaining, from the server, skill attribute information of at least one character object associated with the at least one character operation object, and rendering the skill attribute information in a second preset display manner in at least one window slot corresponding to the associated character operation object.
- A terminal, comprising: a rendering processing unit, a deploying unit, a detecting unit, and an operation executing unit; wherein the rendering processing unit is configured to execute a software application and perform rendering to obtain a graphical user interface, to render at least one virtual resource object on the graphical user interface, and further to render on the graphical user interface the view image, obtained by the operation executing unit, that is captured by the virtual lens associated with at least one character operation object; the deploying unit is configured to deploy, in at least one character selection area of the graphical user interface, at least one character container object comprising at least one window slot; the detecting unit is configured to detect a view acquisition gesture on at least one character operation object in the character container object; and the operation executing unit is configured to, when the detecting unit detects the view acquisition gesture on the at least one character operation object in the character container object, obtain the view image captured by the virtual lens associated with the at least one character operation object.
- The terminal according to claim 8, wherein the operation executing unit is configured to, when the detecting unit detects the view acquisition gesture on the at least one character operation object in the character container object, generate and send a first instruction, the first instruction being used to invoke the virtual lens associated with the at least one character operation object and to control the virtual lens to capture a view image; and to obtain, during the detecting unit's detection of the view acquisition gesture, the view image captured by the virtual lens.
- The terminal according to claim 9, wherein the operation executing unit is further configured to, when the detecting unit detects that the view acquisition gesture terminates, generate a second instruction, and to terminate, based on the second instruction, the invocation of the virtual lens associated with the at least one character operation object.
- The terminal according to claim 8, wherein the terminal further comprises a communication unit; and the operation executing unit is further configured to continuously record changes of a state attribute of a user character object in the graphical user interface, generate state attribute information of the user character object, and synchronously update the state attribute information to a server through the communication unit.
- The terminal according to claim 8, wherein the terminal further comprises a communication unit; and the operation executing unit is further configured to continuously record changes of a skill attribute of a user character object in the graphical user interface, and, when determining that the skill attribute of the user character object meets a preset condition, generate skill attribute information of the user character object and synchronously update the skill attribute information to a server through the communication unit.
- The terminal according to claim 11, wherein the operation executing unit is further configured to obtain, from the server through the communication unit, state attribute information of at least one character object associated with the at least one character operation object; and correspondingly, the rendering processing unit is further configured to render the state attribute information obtained by the operation executing unit in a first preset display manner in at least one window slot corresponding to the associated character operation object.
- The terminal according to claim 12, wherein the operation executing unit is further configured to obtain, from the server through the communication unit, skill attribute information of at least one character object associated with the at least one character operation object; and correspondingly, the rendering processing unit is further configured to render the skill attribute information obtained by the operation executing unit in a second preset display manner in at least one window slot corresponding to the associated character operation object.
- A terminal, comprising a processor and a display; wherein the processor is configured to execute a software application and perform rendering on the display to obtain a graphical user interface, the processor, the graphical user interface, and the software application being implemented on a game system; and the processor is configured to render at least one virtual resource object on the graphical user interface, at least one character container object deployed in at least one character selection area of the graphical user interface comprising at least one window slot, and, when a view acquisition gesture on at least one character operation object in the character container object is detected, to render on the graphical user interface a view image captured by a virtual lens associated with the at least one character operation object.
- The terminal according to claim 15, wherein the processor is configured to, when the view acquisition gesture on the at least one character operation object in the character container object is detected, generate and send a first instruction, the first instruction being used to invoke the virtual lens associated with the at least one character operation object and to control the virtual lens to capture a view image; and to obtain, during detection of the view acquisition gesture, the view image captured by the virtual lens.
- The terminal according to claim 16, wherein the processor is further configured to, when the view acquisition gesture terminates, generate a second instruction, and to terminate, based on the second instruction, the invocation of the virtual lens associated with the at least one character operation object.
- The terminal according to claim 15, wherein the terminal further comprises a communication device; and the processor is further configured to continuously record changes of a state attribute of a user character object in the graphical user interface, generate state attribute information of the user character object, and synchronously update the state attribute information to a server through the communication device.
- The terminal according to claim 15, wherein the terminal further comprises a communication device; and the processor is further configured to continuously record changes of a skill attribute of a user character object in the graphical user interface, and, when determining that the skill attribute of the user character object meets a preset condition, generate skill attribute information of the user character object and synchronously update the skill attribute information to a server through the communication device.
- The terminal according to claim 18, wherein the processor is further configured to obtain, from the server through the communication device, state attribute information of at least one character object associated with the at least one character operation object, and to render the state attribute information in a first preset display manner in at least one window slot corresponding to the associated character operation object.
- The terminal according to claim 19, wherein the processor is further configured to obtain, from the server through the communication device, skill attribute information of at least one character object associated with the at least one character operation object, and to render the skill attribute information in a second preset display manner in at least one window slot corresponding to the associated character operation object.
- A computer storage medium storing computer-executable instructions, the computer-executable instructions being used to perform the information processing method according to any one of claims 1 to 7.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020177035385A KR20180005689A (ko) | 2015-09-29 | 2016-05-04 | 정보 처리 방법, 단말기 및 컴퓨터 저장 매체 |
JP2017564016A JP6830447B2 (ja) | 2015-09-29 | 2016-05-04 | 情報処理方法、端末、およびコンピュータ記憶媒体 |
CA2985867A CA2985867C (en) | 2015-09-29 | 2016-05-04 | Information processing method, terminal, and computer storage medium |
MYPI2017704330A MY195861A (en) | 2015-09-29 | 2016-05-04 | Information Processing Method, Electronic Device, and Computer Storage Medium |
EP16850077.5A EP3285156B1 (en) | 2015-09-29 | 2016-05-04 | Information processing method and terminal, and computer storage medium |
US15/725,146 US10661171B2 (en) | 2015-09-29 | 2017-10-04 | Information processing method, terminal, and computer storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510633319.2 | 2015-09-29 | ||
CN201510633319.2A CN105159687B (zh) | 2015-09-29 | 2015-09-29 | 一种信息处理方法、终端及计算机存储介质 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/725,146 Continuation-In-Part US10661171B2 (en) | 2015-09-29 | 2017-10-04 | Information processing method, terminal, and computer storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017054452A1 true WO2017054452A1 (zh) | 2017-04-06 |
Family
ID=54800554
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/081051 WO2017054452A1 (zh) | 2015-09-29 | 2016-05-04 | 一种信息处理方法、终端及计算机存储介质 |
Country Status (8)
Country | Link |
---|---|
US (1) | US10661171B2 (zh) |
EP (1) | EP3285156B1 (zh) |
JP (1) | JP6830447B2 (zh) |
KR (1) | KR20180005689A (zh) |
CN (1) | CN105159687B (zh) |
CA (1) | CA2985867C (zh) |
MY (1) | MY195861A (zh) |
WO (1) | WO2017054452A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11811681B1 (en) | 2022-07-12 | 2023-11-07 | T-Mobile Usa, Inc. | Generating and deploying software architectures using telecommunication resources |
Families Citing this family (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105159687B (zh) | 2015-09-29 | 2018-04-17 | 腾讯科技(深圳)有限公司 | 一种信息处理方法、终端及计算机存储介质 |
CN105653187A (zh) * | 2015-12-24 | 2016-06-08 | 杭州勺子网络科技有限公司 | 一种用于多点触摸终端的触摸控制方法 |
CN106445588B (zh) * | 2016-09-08 | 2018-05-04 | 腾讯科技(深圳)有限公司 | 属性信息的更新方法及装置 |
CN106453638B (zh) * | 2016-11-24 | 2018-07-06 | 腾讯科技(深圳)有限公司 | 一种应用业务内信息交互方法及*** |
CN106774907B (zh) * | 2016-12-22 | 2019-02-05 | 腾讯科技(深圳)有限公司 | 一种在虚拟场景中调整虚拟对象可视区域的方法及移动终端 |
WO2018103634A1 (zh) | 2016-12-06 | 2018-06-14 | 腾讯科技(深圳)有限公司 | 一种数据处理的方法及移动终端 |
CN106598438A (zh) * | 2016-12-22 | 2017-04-26 | 腾讯科技(深圳)有限公司 | 一种基于移动终端的场景切换方法及移动终端 |
CN107174824B (zh) * | 2017-05-23 | 2021-01-15 | 网易(杭州)网络有限公司 | 特效信息处理方法、装置、电子设备及存储介质 |
CN107617213B (zh) * | 2017-07-27 | 2019-02-19 | 网易(杭州)网络有限公司 | 信息处理方法及装置、存储介质、电子设备 |
CN107704165B (zh) * | 2017-08-18 | 2020-05-15 | 网易(杭州)网络有限公司 | 虚拟镜头的控制方法及装置、存储介质、电子设备 |
CN108434742B (zh) | 2018-02-02 | 2019-04-30 | 网易(杭州)网络有限公司 | 游戏场景中虚拟资源的处理方法和装置 |
US11093103B2 (en) * | 2018-04-09 | 2021-08-17 | Spatial Systems Inc. | Augmented reality computing environments-collaborative workspaces |
CN108804013B (zh) * | 2018-06-15 | 2021-01-15 | 网易(杭州)网络有限公司 | 信息提示的方法、装置、电子设备及存储介质 |
CN108920124B (zh) * | 2018-07-25 | 2020-11-03 | 腾讯科技(深圳)有限公司 | 一种信息显示方法、装置及存储介质 |
CN109331468A (zh) * | 2018-09-26 | 2019-02-15 | 网易(杭州)网络有限公司 | 游戏视角的显示方法、显示装置和显示终端 |
JP7094216B2 (ja) * | 2018-12-28 | 2022-07-01 | グリー株式会社 | 配信ユーザの動きに基づいて生成されるキャラクタオブジェクトのアニメーションを含む動画をライブ配信する動画配信システム、動画配信方法及び動画配信プログラム |
CN112035714B (zh) * | 2019-06-03 | 2024-06-14 | 鲨鱼快游网络技术(北京)有限公司 | 一种基于角色陪伴的人机对话方法 |
JP7202981B2 (ja) * | 2019-06-28 | 2023-01-12 | グリー株式会社 | 動画配信システム、プログラム、及び情報処理方法 |
CN111124226B (zh) * | 2019-12-17 | 2021-07-30 | 网易(杭州)网络有限公司 | 游戏画面的显示控制方法、装置、电子设备及存储介质 |
CN113326212B (zh) * | 2020-02-28 | 2023-11-03 | 加特兰微电子科技(上海)有限公司 | 数据处理方法、装置及相关设备 |
JP7051941B6 (ja) | 2020-06-30 | 2022-05-06 | グリー株式会社 | 端末装置の制御プログラム、端末装置の制御方法、端末装置、サーバ装置の制御方法、一又は複数のプロセッサにより実行される方法、及び配信システム |
CN111773703B (zh) * | 2020-07-31 | 2023-10-20 | 网易(杭州)网络有限公司 | 游戏对象显示方法、装置、存储介质与终端设备 |
JP7018617B1 (ja) | 2020-12-11 | 2022-02-14 | 正啓 榊原 | プレイ記録動画作成システム |
CN113144604B (zh) * | 2021-02-08 | 2024-05-10 | 网易(杭州)网络有限公司 | 游戏角色的信息处理方法、装置、设备及存储介质 |
CN113318444B (zh) * | 2021-06-08 | 2023-01-10 | 天津亚克互动科技有限公司 | 角色的渲染方法和装置、电子设备和存储介质 |
CN113633963B (zh) * | 2021-07-15 | 2024-06-11 | 网易(杭州)网络有限公司 | 游戏控制的方法、装置、终端和存储介质 |
CN113633989A (zh) * | 2021-08-13 | 2021-11-12 | 网易(杭州)网络有限公司 | 游戏对象的显示控制方法、装置和电子设备 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1743043A (zh) * | 2005-06-19 | 2006-03-08 | 珠海市西山居软件有限公司 | 一种网络游戏***及其实现方法 |
CN102356373A (zh) * | 2009-03-20 | 2012-02-15 | 微软公司 | 虚拟对象操纵 |
CN102414641A (zh) * | 2009-05-01 | 2012-04-11 | 微软公司 | 改变显示环境内的视图视角 |
CN103365596A (zh) * | 2013-07-01 | 2013-10-23 | 天脉聚源(北京)传媒科技有限公司 | 一种控制虚拟世界的方法及装置 |
US20140152758A1 (en) * | 2012-04-09 | 2014-06-05 | Xiaofeng Tong | Communication using interactive avatars |
CN105159687A (zh) * | 2015-09-29 | 2015-12-16 | 腾讯科技(深圳)有限公司 | 一种信息处理方法、终端及计算机存储介质 |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000070549A (ja) * | 1998-09-01 | 2000-03-07 | Konami Co Ltd | ゲームシステム、画像の保存方法及びゲームプログラムが記録された記録媒体 |
JP3734815B2 (ja) * | 2003-12-10 | 2006-01-11 | 任天堂株式会社 | 携帯ゲーム装置及びゲームプログラム |
US8316408B2 (en) * | 2006-11-22 | 2012-11-20 | Verizon Patent And Licensing Inc. | Audio processing for media content access systems and methods |
JP5425407B2 (ja) | 2008-03-31 | 2014-02-26 | 株式会社バンダイナムコゲームス | プログラム及びゲーム装置 |
US9737796B2 (en) * | 2009-07-08 | 2017-08-22 | Steelseries Aps | Apparatus and method for managing operations of accessories in multi-dimensions |
JP4977248B2 (ja) * | 2010-12-10 | 2012-07-18 | 株式会社コナミデジタルエンタテインメント | ゲーム装置及びゲーム制御プログラム |
CN103107982B (zh) * | 2011-11-15 | 2016-04-20 | 北京神州泰岳软件股份有限公司 | 一种互联网络中群组成员互动的方法和*** |
US8535163B2 (en) * | 2012-01-10 | 2013-09-17 | Zynga Inc. | Low-friction synchronous interaction in multiplayer online game |
US8954890B2 (en) * | 2012-04-12 | 2015-02-10 | Supercell Oy | System, method and graphical user interface for controlling a game |
US8814674B2 (en) * | 2012-05-24 | 2014-08-26 | Supercell Oy | Graphical user interface for a gaming system |
US9403090B2 (en) * | 2012-04-26 | 2016-08-02 | Riot Games, Inc. | Video game system with spectator mode hud |
JP2017104145A (ja) * | 2014-03-07 | 2017-06-15 | 株式会社ソニー・インタラクティブエンタテインメント | ゲームシステム、表示制御方法、表示制御プログラム及び記録媒体 |
JP6025806B2 (ja) * | 2014-11-17 | 2016-11-16 | 株式会社ソニー・インタラクティブエンタテインメント | 装置、表示制御方法、プログラム及び情報記憶媒体 |
US20180288354A1 (en) * | 2017-03-31 | 2018-10-04 | Intel Corporation | Augmented and virtual reality picture-in-picture |
- 2015-09-29 CN CN201510633319.2A patent/CN105159687B/zh active Active
- 2016-05-04 JP JP2017564016A patent/JP6830447B2/ja active Active
- 2016-05-04 MY MYPI2017704330A patent/MY195861A/en unknown
- 2016-05-04 CA CA2985867A patent/CA2985867C/en active Active
- 2016-05-04 EP EP16850077.5A patent/EP3285156B1/en active Active
- 2016-05-04 WO PCT/CN2016/081051 patent/WO2017054452A1/zh unknown
- 2016-05-04 KR KR1020177035385A patent/KR20180005689A/ko not_active IP Right Cessation
- 2017-10-04 US US15/725,146 patent/US10661171B2/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1743043A (zh) * | 2005-06-19 | 2006-03-08 | 珠海市西山居软件有限公司 | 一种网络游戏***及其实现方法 |
CN102356373A (zh) * | 2009-03-20 | 2012-02-15 | 微软公司 | 虚拟对象操纵 |
CN102414641A (zh) * | 2009-05-01 | 2012-04-11 | 微软公司 | 改变显示环境内的视图视角 |
US20140152758A1 (en) * | 2012-04-09 | 2014-06-05 | Xiaofeng Tong | Communication using interactive avatars |
CN103365596A (zh) * | 2013-07-01 | 2013-10-23 | 天脉聚源(北京)传媒科技有限公司 | 一种控制虚拟世界的方法及装置 |
CN105159687A (zh) * | 2015-09-29 | 2015-12-16 | 腾讯科技(深圳)有限公司 | 一种信息处理方法、终端及计算机存储介质 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11811681B1 (en) | 2022-07-12 | 2023-11-07 | T-Mobile Usa, Inc. | Generating and deploying software architectures using telecommunication resources |
Also Published As
Publication number | Publication date |
---|---|
CA2985867C (en) | 2021-09-28 |
US10661171B2 (en) | 2020-05-26 |
EP3285156A1 (en) | 2018-02-21 |
CN105159687B (zh) | 2018-04-17 |
JP6830447B2 (ja) | 2021-02-17 |
JP2018525050A (ja) | 2018-09-06 |
US20180028916A1 (en) | 2018-02-01 |
MY195861A (en) | 2023-02-24 |
CA2985867A1 (en) | 2017-04-06 |
EP3285156A4 (en) | 2018-12-12 |
KR20180005689A (ko) | 2018-01-16 |
EP3285156B1 (en) | 2021-02-17 |
CN105159687A (zh) | 2015-12-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017054452A1 (zh) | 一种信息处理方法、终端及计算机存储介质 | |
WO2017054450A1 (zh) | 一种信息处理方法、终端和计算机存储介质 | |
WO2017054453A1 (zh) | 一种信息处理方法、终端及计算机存储介质 | |
US11003261B2 (en) | Information processing method, terminal, and computer storage medium | |
WO2017054464A1 (zh) | 一种信息处理方法、终端及计算机存储介质 | |
JP2020039880A (ja) | 情報処理方法、端末及びコンピュータ記憶媒体 | |
WO2017054466A1 (zh) | 一种信息处理方法、终端及计算机存储介质 | |
JP2018517533A (ja) | 情報処理方法、端末、及びコンピュータ記憶媒体 | |
WO2018090909A1 (zh) | 一种基于移动终端的对象扫描方法及移动终端 | |
WO2019104533A1 (zh) | 一种视频播放方法及装置 | |
CN113082712B (zh) | 虚拟角色的控制方法、装置、计算机设备和存储介质 | |
WO2019165580A1 (zh) | 移动设备控制方法以及控制终端 | |
CN116159308A (zh) | 游戏交互方法、装置、计算机设备和计算机可读存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16850077 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 2985867 Country of ref document: CA |
ENP | Entry into the national phase |
Ref document number: 20177035385 Country of ref document: KR Kind code of ref document: A |
ENP | Entry into the national phase |
Ref document number: 2017564016 Country of ref document: JP Kind code of ref document: A |
NENP | Non-entry into the national phase |
Ref country code: DE |