CN115400427A - Information processing method and device in game, electronic equipment and storage medium - Google Patents

Information processing method and device in game, electronic equipment and storage medium

Info

Publication number
CN115400427A
CN115400427A CN202211032640.1A
Authority
CN
China
Prior art keywords
virtual object
expression
target expression
game
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211032640.1A
Other languages
Chinese (zh)
Inventor
林�智
刘勇成
胡志鹏
袁思思
程龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202211032640.1A priority Critical patent/CN115400427A/en
Publication of CN115400427A publication Critical patent/CN115400427A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/822 Strategy games; Role-playing games
    • A63F13/85 Providing additional services to players
    • A63F13/87 Communicating with other players during game play, e.g. by e-mail or chat
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 Features of games characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308 Details of the user interface
    • A63F2300/80 Features of games specially adapted for executing a specific type of game
    • A63F2300/807 Role playing or strategy games

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides an information processing method and device in a game, an electronic device, and a storage medium. A first graphical user interface is provided through a terminal device; when a scene parameter corresponding to a first virtual object meets a trigger condition, a second virtual object is determined according to the type of a first target expression, the first target expression is sent to the second virtual object, and information corresponding to the first target expression is presented through a second graphical user interface corresponding to the second virtual object. With this method and device, the convenience of using expressions in a game is improved and the interactivity of expressions is enhanced, thereby increasing the usage rate of expressions in the game.

Description

Information processing method and device in game, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of game technologies, and in particular, to an information processing method and apparatus in a game, an electronic device, and a storage medium.
Background
In some games (such as battle-royale games, MOBA games, and the like), an expression wheel is usually preset. A player must configure which expressions the wheel contains before starting a match; during the match, to use an expression, the player must first call up the expression wheel, then select a suitable expression from it, after which the expression pops up near the player character. In this mode, the operation steps for using an expression are cumbersome and time-consuming, and the player character is vulnerable to attack while the expression is being used, which lowers players' willingness to use expressions and results in a low usage rate of expressions in the game.
Disclosure of Invention
The invention aims to provide an information processing method and apparatus in a game, an electronic device, and a storage medium, so as to improve the convenience of using expressions in a game, enhance the interactivity of expressions, and increase the usage rate of expressions in the game.
In a first aspect, an embodiment of the present disclosure provides an information processing method in a game, in which a first graphical user interface is provided through a terminal device. The method comprises the following steps: in response to a control operation for a first virtual object, controlling the first virtual object to execute an action corresponding to the control operation; in response to a scene parameter corresponding to the first virtual object meeting a trigger condition, determining a second virtual object according to the type of a first target expression, where the type of the first target expression comprises a first type shared with all players and a second type shared within the same team, the second virtual object is another virtual object in the same game as the first virtual object, and the first target expression is an expression corresponding to the scene parameter; and sending the first target expression to the second virtual object, so that information corresponding to the first target expression is presented through a second graphical user interface corresponding to the second virtual object.
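The first-aspect flow can be sketched in code. The following Python sketch is illustrative only: all class names, field names, and the rule format are assumptions for exposition, not the patent's implementation.

```python
from dataclasses import dataclass
from enum import Enum


class ExpressionType(Enum):
    ALL_PLAYERS = 1   # first type: shared with every player in the match
    SAME_TEAM = 2     # second type: shared only within the sender's team


@dataclass
class Expression:
    name: str
    kind: ExpressionType


@dataclass
class VirtualObject:
    object_id: int
    team_id: int


def determine_recipients(sender, all_objects, expression):
    """Determine the second virtual objects that receive the first target expression."""
    others = [o for o in all_objects if o.object_id != sender.object_id]
    if expression.kind is ExpressionType.SAME_TEAM:
        return [o for o in others if o.team_id == sender.team_id]
    return others  # ALL_PLAYERS: every other virtual object in the same match


def auto_send_expression(sender, all_objects, scene_params, trigger_rules):
    """Check each trigger condition against the scene parameters; on the first
    match, return the corresponding first target expression and its recipients."""
    for condition, expression in trigger_rules:
        if condition(scene_params):
            return expression, determine_recipients(sender, all_objects, expression)
    return None, []
```

Under these assumptions, a team-shared expression triggered by an "under attack" scene parameter would be delivered only to teammates, while an all-shared expression would go to every other object in the match.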
In a second aspect, an embodiment of the present disclosure further provides an information processing method in a game, in which a second graphical user interface is provided through a terminal device; the second graphical user interface comprises at least a portion of a game scene and a second virtual object. The method comprises: in response to a control operation for the second virtual object, controlling the second virtual object to execute an action corresponding to the control operation; receiving a first target expression sent by a first virtual object, where the first target expression is sent by the terminal device corresponding to the first virtual object when a scene parameter corresponding to the first virtual object meets a trigger condition, and matches that scene parameter; determining a display mode of information corresponding to the first target expression according to the game scene, the display mode comprising display content and a display position; and displaying the information corresponding to the first target expression in the second graphical user interface according to the display mode.
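The receiving side's "display mode determined by the game scene" step could look like the following minimal Python sketch. The particular scene keys and display-mode values are hypothetical examples, not specified by the patent.

```python
def choose_display_mode(game_scene):
    """Pick display content and display position for a received expression,
    based on the receiving player's current game scene (a dict of flags)."""
    if game_scene.get("in_combat"):
        # During combat, keep the notification compact and out of the way.
        return {"content": "icon_only", "position": "screen_edge"}
    if game_scene.get("sender_on_screen"):
        # Sender visible: anchor the full expression next to the sender's character.
        return {"content": "icon_with_text", "position": "above_sender"}
    # Sender off-screen: show the expression with a direction hint in the chat area.
    return {"content": "icon_with_text_and_direction", "position": "chat_area"}
```

The point of the sketch is that both the display content and the display position vary with the receiver's scene, so the same expression can be shown differently to different recipients.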
In a third aspect, an embodiment of the present disclosure further provides an information processing apparatus in a game, in which a first graphical user interface is provided through a terminal device. The apparatus comprises: a first control module, configured to control a first virtual object to execute an action corresponding to a control operation in response to the control operation for the first virtual object; a first determining module, configured to determine a second virtual object according to the type of a first target expression in response to a scene parameter corresponding to the first virtual object meeting a trigger condition, where the type of the first target expression comprises a first type shared with all players and a second type shared within the same team, the second virtual object is another virtual object in the same game as the first virtual object, and the first target expression is an expression corresponding to the scene parameter; and a first expression sending module, configured to send the first target expression to the second virtual object, so that information corresponding to the first target expression is presented through a second graphical user interface corresponding to the second virtual object.
In a fourth aspect, an embodiment of the present disclosure further provides an information processing apparatus in a game, in which a second graphical user interface is provided through a terminal device; the second graphical user interface comprises at least a portion of a game scene and a second virtual object. The apparatus comprises: a second control module, configured to control the second virtual object to execute an action corresponding to a control operation in response to the control operation for the second virtual object; a first expression receiving module, configured to receive a first target expression sent by a first virtual object, where the first target expression is sent by the terminal device corresponding to the first virtual object when a scene parameter corresponding to the first virtual object meets a trigger condition, and matches that scene parameter; a second determining module, configured to determine a display mode of the information corresponding to the first target expression according to the game scene, the display mode comprising display content and a display position; and an information display module, configured to display the information corresponding to the first target expression in the second graphical user interface according to the display mode.
In a fifth aspect, an embodiment of the present disclosure further provides an electronic device, comprising a processor and a memory, where the memory stores computer-executable instructions executable by the processor, and the processor executes the computer-executable instructions to implement the information processing method in the game described above.
In a sixth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium storing computer-executable instructions which, when called and executed by a processor, cause the processor to implement the information processing method in the game described above.
The embodiments of the present disclosure provide an information processing method and apparatus in a game, an electronic device, and a storage medium. A first graphical user interface is provided through a terminal device; when the scene parameter corresponding to a first virtual object meets a trigger condition, a second virtual object is determined according to the type of a first target expression, the first target expression is sent to the second virtual object, and information corresponding to the first target expression is presented through a second graphical user interface corresponding to the second virtual object. With this technology, when the scene parameter corresponding to the first virtual object meets the trigger condition, the first target expression matched with the scene parameter is automatically sent to the second virtual object, improving the convenience of sending expressions in the game: an expression matched with the scene parameter can be sent to other virtual objects without any additional operation by the player during the game, and the information of the expression is presented in the graphical user interfaces corresponding to those virtual objects. This enhances the interaction efficiency and effect of expressions in the game and increases their usage rate.
Drawings
To illustrate the embodiments of the present disclosure or the technical solutions in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present disclosure; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic illustration of an implementation environment in an embodiment of the disclosure;
FIG. 2 is a flow chart illustrating an information processing method in a game according to an embodiment of the disclosure;
FIG. 3 is a schematic diagram of a graphical user interface in an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of another graphical user interface in an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of another graphical user interface in an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of another graphical user interface in an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of another graphical user interface in an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of an expression configuration sub-interface in an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of another expression configuration sub-interface in an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of another emoticon configuration sub-interface in an embodiment of the disclosure;
FIG. 11 is a schematic diagram of another graphical user interface in an embodiment of the present disclosure;
FIG. 12 is a flow chart illustrating an information processing method in another game according to an embodiment of the present disclosure;
FIG. 13 is a schematic illustration of another graphical user interface in an embodiment of the present disclosure;
FIG. 14 is a schematic illustration of another graphical user interface in an embodiment of the present disclosure;
FIG. 15 is a schematic illustration of another graphical user interface in an embodiment of the present disclosure;
FIG. 16 is a schematic illustration of another graphical user interface in an embodiment of the present disclosure;
FIG. 17 is a schematic illustration of another graphical user interface in an embodiment of the present disclosure;
FIG. 18 is a schematic illustration of another graphical user interface in an embodiment of the present disclosure;
FIG. 19 is a schematic illustration of another graphical user interface in an embodiment of the present disclosure;
FIG. 20 is a schematic illustration of another graphical user interface in an embodiment of the present disclosure;
FIG. 21 is a schematic diagram of an information processing apparatus in a game according to an embodiment of the present disclosure;
FIG. 22 is a schematic diagram of an information processing apparatus in another game according to an embodiment of the present disclosure;
fig. 23 is a schematic structural diagram of an electronic device in an embodiment of the disclosure.
Detailed Description
The technical solutions of the present disclosure will be described below clearly and completely with reference to embodiments, and it should be apparent that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
First, the terms used in the present disclosure are introduced:
(1) Virtual scene (Game scene)
A virtual scene is a scene that an application program displays (or provides) when running on a terminal or a server. Optionally, the virtual scene is a simulated environment of the real world, a semi-simulated and semi-fictional virtual environment, or a purely fictional virtual environment. The virtual scene is any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene, and the virtual environment can be sky, land, sea, and the like, where the land comprises environmental elements such as deserts and cities. For example, in a sandbox-type 3D shooting game, the virtual scene is a 3D game world in which a player controls a virtual object to fight; an exemplary virtual scene may include at least one element selected from a group consisting of a mountain, flat ground, a river, a lake, an ocean, a desert, sky, a plant, a building, and a vehicle. For a 2D or 2.5D card game, the virtual scene is a scene for displaying a released card or a virtual object corresponding to the released card; exemplary virtual scenes may include an arena, a battlefield, or other "field" elements capable of displaying the card battle state. For a 2D or 2.5D multiplayer online tactical sports game, the virtual scene is a 2D or 2.5D terrain scene in which virtual objects fight; exemplary virtual scenes may include canyon-style mountains, lanes, and rivers, or elements such as classrooms, tables and chairs, and a podium.
(2) Game interface
The game interface is an interface corresponding to an application program provided or displayed through a graphical user interface, and comprises a UI and a game picture with which a player interacts. In alternative embodiments, the UI may include game controls (e.g., skill controls, movement controls, function controls), indicators (e.g., direction indicators, character indicators), information presentation areas (e.g., number of clicks, game play time), or game setting controls (e.g., system settings, store, coins). In an optional embodiment, the game picture is the display picture corresponding to the virtual scene displayed by the terminal device, and may include virtual objects that execute game logic in the virtual scene, such as player characters, NPC characters, and AI characters.
(3) Virtual object
A virtual object refers to a dynamic object that can be controlled in a virtual scene. Optionally, the dynamic object may be a virtual character, a virtual animal, an animation character, or the like. The virtual object is a character controlled by a player through an input device, an Artificial Intelligence (AI) configured through training for virtual-environment matches, or a Non-Player Character (NPC) set in a virtual scene match. Optionally, the virtual object is a virtual character competing in the virtual scene. Optionally, the number of virtual objects in a virtual scene match is preset, or is dynamically determined according to the number of clients participating in the match, which is not limited in the embodiments of the present disclosure. In one possible implementation, the user can control the virtual object to move in the virtual scene, e.g., to run, jump, or crawl, and can also control the virtual object to fight other virtual objects using the skills, virtual props, and the like provided by the application.
(4) Player character
A player character refers to a virtual object that a player can manipulate to move in a game environment; in some electronic games it may also be referred to as a god character or a hero character. The player character may take the form of a virtual character, a virtual animal, an animated character, a virtual vehicle, or the like.
For ease of understanding, a game scene applicable to this embodiment is first described. A plurality of virtual objects participate in a game, and each user corresponds to one virtual object. These virtual objects can be divided into two types: a first virtual object (namely, the player character) controlled by the player, and second virtual objects not controlled by that player (namely, the other game characters in the same game as the first virtual object). An expression wheel is preset in the game, and the player configures which expressions the wheel contains before starting a match. During the match, if the player needs to use an expression, the expression wheel can be called up by pressing a designated keyboard key, a suitable expression in the wheel is then clicked with the left mouse button, and after use the expression pops up near the player character.
To address the problem that using expressions involves cumbersome and time-consuming operations, the embodiments of the present disclosure provide an information processing method and apparatus in a game, an electronic device, and a storage medium, so as to improve the convenience of using expressions in a game, enhance the interactivity of expressions, and increase the usage rate of expressions in the game. The technology can be applied to the game scenes described above or to other game-play scenes.
In an information processing method in a game in one embodiment of the present disclosure, the method may be executed in a terminal device or a server. The terminal device may be a local terminal device. When the information processing method in the game runs on the server, the method can be implemented and executed based on a cloud interaction system, wherein the cloud interaction system comprises the server and the client device.
In an optional embodiment, various cloud applications may run under the cloud interaction system, for example, cloud games. Taking a cloud game as an example, a cloud game refers to a game mode based on cloud computing. In the running mode of a cloud game, the entity that runs the game program is separated from the entity that presents the game picture: the storage and execution of the information processing method in the game are completed on a cloud game server, while the client device is used for receiving and sending data and presenting the game picture. For example, the client device may be a display device with a data transmission function close to the user side, such as a mobile terminal, a television, a computer, or a palmtop computer, whereas the terminal device performing the information processing is the cloud game server. When a game is played, the player operates the client device to send an operation instruction to the cloud game server; the cloud game server runs the game according to the operation instruction, encodes and compresses data such as the game picture, and returns the data to the client device through the network; finally, the client device decodes the data and outputs the game picture.
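The cloud-game round trip described above can be sketched with two toy classes. This is an illustrative assumption of the division of labor only (the real server would render and video-compress frames); all names are hypothetical.

```python
class CloudGameServer:
    """Toy cloud server: runs the game logic and produces encoded frames."""

    def __init__(self):
        self.state = {"x": 0}

    def apply(self, operation):
        # Run the game according to the received operation instruction.
        if operation == "move_right":
            self.state["x"] += 1

    def render_encoded(self):
        # Stand-in for rendering the game picture and compressing it for the network.
        return repr(self.state).encode("utf-8")


class ThinClient:
    """Toy client: only sends operations and decodes/presents frames."""

    def __init__(self, server):
        self.server = server
        self.last_frame = None

    def send_operation(self, operation):
        self.server.apply(operation)                 # operation instruction to the cloud
        encoded = self.server.render_encoded()       # encoded picture returned over the network
        self.last_frame = encoded.decode("utf-8")    # client only decodes and presents
```

The key property is that `ThinClient` holds no game logic: all state changes happen on the server, matching the separation of the running entity and the presenting entity.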
In an alternative embodiment, the terminal device may be a local terminal device. Taking a game as an example, the local terminal device stores the game program and is used for presenting the game picture. The local terminal device interacts with the player through a graphical user interface; that is, the game program is conventionally downloaded, installed, and run on an electronic device. The local terminal device may provide the graphical user interface to the player in a variety of ways: for example, the interface may be rendered on a display screen of the terminal, or provided to the player through holographic projection. For example, the local terminal device may include a display screen for presenting a graphical user interface that includes the game picture, and a processor for running the game, generating the graphical user interface, and controlling display of the graphical user interface on the display screen.
In a possible implementation, an embodiment of the present disclosure provides an information processing method in a game, in which a graphical user interface is provided through a terminal device; the terminal device may be the aforementioned local terminal device or the client device in the aforementioned cloud interaction system. The graphical user interface includes at least a portion of a game scene, a first virtual object, and a plurality of second virtual objects, the second virtual objects being other virtual objects in the same game as the first virtual object.
An embodiment of the present disclosure provides a schematic diagram of the implementation environment shown in FIG. 1. The implementation environment may include a first terminal device, a game server, and a second terminal device. The first terminal device and the second terminal device each communicate with the server to realize data communication. In this embodiment, the first terminal device and the second terminal device each run a client for displaying the game, and the game server is the server side that executes the information processing method in the game provided by the present disclosure. Through the clients, the first terminal device and the second terminal device can each communicate with the game server.
Taking the first terminal device as an example, the first terminal device establishes communication with the game server by running the client. In an alternative embodiment, the server establishes a game play according to the game request of the client. The parameters of the game play can be determined from the parameters in the received game request; for example, they may include the number of people participating in the game play, the level of the characters participating in the game play, and the like. When the first terminal device receives the response of the server, the virtual scene corresponding to the game play is displayed through the graphical user interface of the first terminal device. In an optional implementation, the server determines a target game play for the client from a plurality of established game plays according to the game request of the client, and when the first terminal device receives the response of the server, the virtual scene corresponding to that game play is displayed through the graphical user interface of the first terminal device. The first terminal device is controlled by a first user; the virtual object displayed in the graphical user interface of the first terminal device is the player character controlled by the first user, and the first user inputs operation instructions through the graphical user interface to control the player character to execute corresponding operations in the virtual scene.
Taking the second terminal device as an example, the second terminal device establishes communication with the game server by running the client. In an alternative embodiment, the server establishes a game play according to the game request of the client. The parameters of the game play can be determined from the parameters in the received game request; for example, they may include the number of people participating in the game play, the level of the characters participating in the game play, and the like. When the second terminal device receives the response of the server, the virtual scene corresponding to the game play is displayed through the graphical user interface of the second terminal device. In an optional implementation, the server determines a target game play for the client from a plurality of established game plays according to the game request of the client, and when the second terminal device receives the response of the server, the virtual scene corresponding to that game play is displayed through the graphical user interface of the second terminal device. The second terminal device is controlled by a second user; the virtual object displayed in the graphical user interface of the second terminal device is the player character controlled by the second user, and the second user inputs operation instructions through the graphical user interface to control the player character to execute corresponding operations in the virtual scene.
The server performs data calculation according to the game data reported by the first terminal device and the second terminal device, and synchronizes the calculated game data to both devices, so that each device controls the rendering of the corresponding virtual scene and/or virtual object in its graphical user interface according to the synchronization data issued by the server.
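The report-and-synchronize cycle can be sketched as a single function. The flat per-device dictionary used here is an illustrative assumption; a real server would run game logic over the reports rather than merely merging them.

```python
def synchronize(reports):
    """Merge the game data reported by each terminal into one authoritative
    world state, then issue a copy of that state to every reporting terminal."""
    world_state = {}
    for device_id, data in reports.items():
        world_state[device_id] = data  # server-side calculation would go here
    # Each terminal receives the full synchronized state for rendering.
    return {device_id: dict(world_state) for device_id in reports}
```

After one cycle, every terminal holds the same synchronized data, so each can render the other players' virtual objects consistently.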
In the present embodiment, the virtual object controlled by the first terminal device and the virtual object controlled by the second terminal device are virtual objects in the same game play. The virtual object controlled by the first terminal device and the virtual object controlled by the second terminal device may have the same role attribute or different role attributes.
It should be noted that the current game play may include two or more virtual objects, and different virtual objects may correspond to different terminal devices; that is, in the current game play, two or more terminal devices each send and synchronize game data with the game server.
To facilitate understanding of this embodiment, the information processing method in the game disclosed in the embodiments of the present disclosure is first described in detail.
The embodiment of the disclosure provides an information processing method in a game, which is applied to a terminal device (such as a mobile phone, a computer, a Pad, and the like), and a first graphical user interface is provided through the terminal device. The terminal device may be a touch device with a touch function, or may be a non-touch device. For the touch device, game operations may be performed by touching controls on the first graphical user interface; for the non-touch device, game operations may be performed through an external device of the non-touch device, such as a mouse, a keyboard, or a gamepad. Referring to the flow chart of an information processing method in a game shown in fig. 2, the method is described with the terminal device as the execution subject, and comprises the following steps:
Step S202: in response to a control operation for the first virtual object, controlling the first virtual object to execute an action corresponding to the control operation.
The control operation may be an operation of pressing a designated key of a keyboard, an operation of touching a designated area of a graphical user interface, an operation of clicking the designated area of the graphical user interface with a mouse, and the like, and may be specifically determined by the user according to actual needs, and is not limited.
Step S204: in response to the context parameter corresponding to the first virtual object meeting a trigger condition, determining a second virtual object according to the type of a first target expression; the type of the first target expression comprises a first type shared with all members and a second type shared within the same team, and the second virtual object is another virtual object in the same game play as the first virtual object; the first target expression is an expression corresponding to the context parameter.
The context parameters may include state parameters, position parameters, and the like. The state parameter can represent the action state of the first virtual object and/or the interaction state between the first virtual object and other virtual objects in the game; the position parameter can represent the position of the first virtual object itself in the game scene, and/or the relative positional relationship between the first virtual object and other virtual objects in the game. For example, the state parameter may represent a state in which the first virtual object performs an action alone (e.g., the first virtual object stays in place, the first virtual object is moving, etc.), and the position parameter may represent a position of the first virtual object in the game scene (e.g., the first virtual object enters stealth grass, etc.); accordingly, the context parameter satisfying the trigger condition may be that the first virtual object is performing a specific action, that the first virtual object is located at a specific position in the game scene, and so on. For another example, the state parameter may represent a state of interaction between the first virtual object and another virtual object in the game (e.g., the first virtual object enters a battle, the first virtual object is attacked, etc.), and the position parameter may represent a relative position relationship between the first virtual object and another virtual object in the game (e.g., the first virtual object enters the visual field range of another virtual object, etc.); accordingly, the context parameter satisfying the trigger condition may be that the first virtual object maintains a specific interaction state with another virtual object in the game, that the first virtual object maintains a specific relative position relationship with another virtual object in the game, and so on. The context parameters can be customized according to actual needs and are not limited here.
The type of the first target expression may be represented by a type identifier (e.g., 0 or 1). For example, if the type of the first target expression is 0, it indicates that the first target expression is the first type shared with all members, and the first target expression may be sent to all online virtual objects in the game except the first virtual object. If the type of the first target expression is 1, the first target expression is the second type shared within the same team; the first target expression can be sent to the online virtual objects in the game that belong to the same team as the first virtual object, and cannot be sent to other virtual objects that do not belong to the same team as the first virtual object. Based on this, determining the second virtual object according to the type of the first target expression specifically includes: if the type of the first target expression is the first type, determining that the second virtual objects are all online virtual objects in the game except the first virtual object; and if the type of the first target expression is the second type, determining that the second virtual objects are the online virtual objects in the game that belong to the same team as the first virtual object.
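As a rough illustration, the recipient-selection rule described above can be sketched as follows. The type identifiers 0 and 1 follow the text; the data model and function names are assumptions, not part of the disclosed embodiment:

```python
# Hypothetical sketch of the recipient-selection rule: type 0 -> all online
# virtual objects except the sender; type 1 -> online teammates of the sender.
from dataclasses import dataclass

TYPE_ALL = 0    # first type: shared with all members
TYPE_TEAM = 1   # second type: shared within the same team

@dataclass
class VirtualObject:
    name: str
    team: str
    online: bool = True

def select_recipients(sender, all_objects, expression_type):
    """Return the second virtual objects that should receive the expression."""
    candidates = [o for o in all_objects if o.online and o is not sender]
    if expression_type == TYPE_ALL:
        return candidates
    if expression_type == TYPE_TEAM:
        return [o for o in candidates if o.team == sender.team]
    raise ValueError(f"unknown expression type: {expression_type}")
```

With the fig. 3 setup (A and B in one team, C in another), a type-0 expression from A would go to B and C, while a type-1 expression would go only to B.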
In order to make the first target expression correspond to the context parameter, a corresponding relationship between the first target expression and the context parameter may be established in advance. For example, the corresponding relationship between each first target expression and the context parameter is respectively established, so that when the context parameter corresponding to the first virtual object meets the trigger condition, the terminal device can determine the first target expression matched with the context parameter based on the corresponding relationship, and thereby determine the second virtual object according to the type of the first target expression.
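The pre-established correspondence between context parameters and first target expressions might be held in a simple lookup table, as in this hypothetical sketch (all condition names and expression identifiers are invented for illustration):

```python
# Illustrative mapping of trigger conditions (context parameters) to
# pre-configured first target expressions; the keys and values are invented.
TRIGGER_TABLE = {
    "entered_battle": "expr_entering_battle",
    "under_attack": "expr_under_attack",
    "entered_stealth_grass": "expr_hiding",
}

def match_expression(context_params):
    """Return the first target expression whose trigger condition is met, if any."""
    for condition, expression in TRIGGER_TABLE.items():
        if context_params.get(condition):
            return expression
    return None  # no trigger condition satisfied -> nothing to auto-send
```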
Step S206, the first target expression is sent to the second virtual object, so that information corresponding to the first target expression is notified through a second graphical user interface corresponding to the second virtual object.
The first target expression sent to the second virtual object may refer to information corresponding to the first target expression. For example, the first graphical user interface provided by the terminal device includes at least a portion of a game scene and a first virtual object (i.e., a player character); when the context parameter corresponding to the first virtual object meets the trigger condition, the terminal device sends the first target expression to the second virtual object, so as to notify the information corresponding to the first target expression through the second graphical user interface corresponding to the second virtual object. Referring to fig. 3, the graphical user interface (i.e., the first graphical user interface) provided by the terminal device includes a virtual object A (i.e., the first virtual object, i.e., the player character), a virtual object B, and a virtual object C, where the virtual object A and the virtual object B belong to a first team, the virtual object C belongs to a second team, the context parameter corresponding to the virtual object A meets the trigger condition that the virtual object A enters a battle, and the first target expression is an expression indicating that the virtual object A has entered the battle, its type being the first type shared with all members; when the virtual object A and the virtual object C fight, the terminal device determines the virtual object B and the virtual object C as second virtual objects, and sends the expression to the second virtual objects, so that the information corresponding to the expression is displayed through the second graphical user interfaces corresponding to the second virtual objects.
According to the information processing method in the game, the terminal device provides the first graphical user interface; when the context parameter corresponding to the first virtual object meets the trigger condition, the second virtual object is determined according to the type of the first target expression, the first target expression is sent to the second virtual object, and the information corresponding to the first target expression is notified through the second graphical user interface corresponding to the second virtual object. With this technique, when the context parameter corresponding to the first virtual object meets the trigger condition, the first target expression matched with the context parameter is automatically sent to the second virtual object, which improves the convenience of sending expressions in the game: without any additional operation by the player during the game, the expression matched with the context parameter can be automatically sent to other virtual objects, and the information of the expression is notified in the graphical user interfaces corresponding to the other virtual objects, thereby enhancing the interaction efficiency and role of expressions in the game and improving the utilization rate of expressions in the game.
In order to make the automatically sent target expression better match the player's intention, as a possible implementation manner, the information processing method in the game may further include the following step: in response to an expression configuration operation, configuring a first target expression matched with the context parameter for the chat box corresponding to the first virtual object. For example, the first graphical user interface provided by the terminal device includes at least a part of a game scene, a first virtual object (i.e., a player character), and a chat box corresponding to the first virtual object; when the player performs an expression configuration operation, the terminal device configures a first target expression matched with the context parameter for the chat box. Through the expression configuration operation, a player can configure first target expressions for the chat box according to the player's own preferences, and the context parameter corresponding to each first target expression can be preset in the game or set by the player according to the player's own preferences, which is not limited in the embodiment of the disclosure.
Based on this, the step of sending the first target expression to the second virtual object may be implemented as follows: sending the first target expression to the second virtual object through the chat box. Referring to fig. 3, the graphical user interface (i.e., the first graphical user interface) provided by the terminal device includes a virtual object A (i.e., the first virtual object, i.e., the player character), a virtual object B, a virtual object C, and a chat box 11, where the virtual object A and the virtual object B belong to a first team, the virtual object C belongs to a second team, the context parameter satisfies the trigger condition that the virtual object A is being attacked, the chat box 11 is configured with an expression (i.e., the first target expression) indicating that the virtual object A is being attacked, and the type of the expression is the first type shared with all members; when the virtual object A is attacked by the virtual object C, the terminal device determines the virtual object B and the virtual object C as second virtual objects, and sends the expression to the second virtual objects through the chat box 11, so as to display the information corresponding to the expression through the second graphical user interfaces corresponding to the second virtual objects.
The player can perform the expression configuration operation on the chat box in scenarios such as before or after a game play; after responding to the expression configuration operation, the terminal device configures a first target expression matched with the context parameter for the chat box. Illustratively, the operation of this step may include:
(11) Displaying the expression wheel on the first graphical user interface in response to a display operation directed to the expression wheel; the expression wheel comprises a plurality of preconfigured expressions, and the expressions in the expression wheel are usually presented in the form of expression pictures or expression controls, which are referred to as emoticons.
(12) Responding to a first dragging operation on a first target expression in the expression wheel, and moving the first target expression according to the first dragging operation; wherein the first target expression is matched with the context parameter.
(13) Responding to the first target expression moving to the chat box in the first graphical user interface, and configuring for the chat box the first target expression suitable for use in a single game play.
The display operation for the expression wheel can be an operation of pressing a designated key of a keyboard, an operation of touching a designated control of the graphical user interface, an operation of clicking a designated control of the graphical user interface with a mouse, and the like, and can be determined according to actual needs without limitation. For example, in fig. 4 and fig. 5, the first graphical user interface contains a virtual object A (i.e., the first virtual object, i.e., the player character) and a chat box 11; after the terminal device responds to the player pressing a designated keyboard key (i.e., performing the expression wheel display operation), an expression wheel 12 is displayed on the first graphical user interface, where the expression wheel 12 includes an emoticon 13, an emoticon 14, an emoticon 15, and an emoticon 16. In fig. 5, the player drags the emoticon 14 in the expression wheel with the left mouse button (i.e., the first target expression; the shadow on the emoticon 14 in fig. 5 indicates that the dragging operation is directed at the emoticon 14), and the terminal device, after responding to the dragging operation, controls the emoticon 14 to move according to the dragging direction (the position of the emoticon 14 after moving is indicated by the shaded box 14a in fig. 5), until the emoticon 14 moves to the chat box 11, at which point the emoticon 14 is configured for the chat box 11, where the emoticon 14 is suitable for use in a single game play. With this operation mode, the player can directly drag expressions in the expression wheel to the chat box, so that the chat box is configured with expressions suitable for use in a single game play, which links the chat box with the expressions while improving the convenience of the expression configuration operation.
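A minimal sketch of steps (11)-(13), assuming a simple rectangle hit-test for "the expression moves to the chat box"; all geometry, field names, and the hit-test itself are assumptions for illustration:

```python
# Hypothetical drag-to-configure flow: an emoticon dragged from the expression
# wheel is bound to the chat box when dropped inside the chat box's rectangle.
from dataclasses import dataclass, field

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def contains(self, px, py):
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

@dataclass
class ChatBox:
    rect: Rect
    expressions: list = field(default_factory=list)

def on_drag_end(chat_box, expression_id, drop_x, drop_y):
    """Bind the dragged expression to the chat box if it was dropped on it."""
    if chat_box.rect.contains(drop_x, drop_y):
        if expression_id not in chat_box.expressions:
            chat_box.expressions.append(expression_id)
        return True   # configuration succeeded -> show the success prompt
    return False      # dropped elsewhere -> no configuration
```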
In order to enhance the interactivity of the information and enable the player to intuitively know whether the expression is successfully configured, as a possible implementation manner, after step (13), the information processing method in the game may further include the following steps: displaying, in a first preset mode, first prompt information indicating that the first target expression is successfully configured in the first graphical user interface; wherein the first preset mode comprises at least one of the following modes: displaying a first set icon or first text information at a position corresponding to the chat box, and displaying the chat box in a first set color; and canceling the display of the first prompt information in response to the display duration of the first prompt information reaching a first time threshold. Through this display of the first prompt information, the player can quickly learn that the configuration is complete without performing any additional operation, which improves the information interactivity of the game.
On the basis of fig. 5, referring to the graphical user interface schematic diagram shown in fig. 6, after the terminal device binds the emoticon 14 to the chat box 11, the text "Binding successful!" can be displayed in a rectangular icon 18 on the right side of the chat box 11; when the display duration reaches 3s, the rectangular icon 18 is no longer displayed. In this way, the player can judge whether the expression configuration is successful according to the prompt information, which improves the intuitiveness of the expression configuration of the chat box.
In order to enhance the flexibility of the game and meet different use requirements of players, the function of the expression wheel is expanded so that the player can directly select an expression in the expression wheel and send it to other virtual objects. For example, in some scenarios an expression wheel is displayed in the graphical user interface, and when an expression needs to be sent to other virtual objects, the expression can be selected from the expression wheel and sent to the virtual objects of teammates or of all members. Based on this, on the basis that the first graphical user interface displays the expression wheel, as a possible implementation manner, the information processing method in the game may further include the following steps: responding to a second dragging operation on a second target expression in the expression wheel, and moving the second target expression according to the second dragging operation; and responding to the second target expression moving to the chat box, sending the second target expression through the chat box to a third virtual object corresponding to the second target expression, so as to notify the information corresponding to the second target expression through a third graphical user interface corresponding to the third virtual object. The expression wheel can be displayed on the first graphical user interface through a trigger operation, which can be realized by pressing a set key of the keyboard of the terminal device, or by touching an expression wheel display control on the first graphical user interface.
Through the mode, the expressions in the expression wheel disc can be rapidly sent to the third virtual object side, so that the player corresponding to the third virtual object can rapidly know the information corresponding to the second target expression, the timeliness of game information interaction is enhanced, the interest of a game is promoted, and the use scene of the expression wheel disc is expanded.
In order to distinguish the dragging operations for expressions in the expression wheel, the second dragging operation and the first dragging operation in the embodiment of the present disclosure are two different dragging operations. Taking a non-touch terminal as the terminal device as an example, the first dragging operation may be a dragging operation performed while pressing the left mouse button, and the second dragging operation may be a dragging operation performed while pressing the right mouse button. Alternatively, in the case that the terminal device is a non-touch terminal, the first dragging operation may be an operation corresponding to pressing the left mouse button together with the keyboard A key, and the second dragging operation may be an operation corresponding to pressing the left mouse button together with the keyboard B key. Taking a touch terminal as the terminal device as an example, the first dragging operation and the second dragging operation are operations with different dragging directions; for example, the first dragging operation may be a dragging operation in which the emoticon is dragged upwards for a distance from the expression wheel and then towards the chat box, and the second dragging operation may be a dragging operation in which the emoticon is dragged downwards for a distance from the expression wheel and then towards the chat box.
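The gesture-discrimination schemes above could be dispatched as in this hedged sketch; the terminal-type labels, parameter names, and return values are invented:

```python
# Hypothetical dispatch of the two drag gestures: mouse button on a non-touch
# terminal, initial drag direction on a touch terminal.
def classify_drag(terminal, button=None, initial_direction=None):
    """Map an input gesture to the first (configure) or second (send) drag."""
    if terminal == "non_touch":
        # left-button drag configures the chat box; right-button drag sends
        return "first_drag" if button == "left" else "second_drag"
    if terminal == "touch":
        # dragging upwards first configures; dragging downwards first sends
        return "first_drag" if initial_direction == "up" else "second_drag"
    raise ValueError(f"unknown terminal type: {terminal}")
```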
Referring to fig. 7, the first graphical user interface includes a virtual object A (i.e., the first virtual object, i.e., the player character), a virtual object B, a virtual object C, a chat box 11, and an expression wheel 12, where the expression wheel 12 includes emoticons 13, 14, 15, and 16, the virtual object A and the virtual object B belong to a first team, the virtual object C belongs to a second team, and the chat box 11 is configured in advance with the emoticons 13, 14, 15, and 16. The player drags, with the right mouse button, the emoticon 15 representing that the virtual object A requests teammate support in the expression wheel (i.e., the second target expression; the shadow on the emoticon 15 in fig. 7 indicates that the dragging operation is directed at the emoticon 15); the terminal device, after responding to the operation, controls the emoticon 15 to move according to the dragging direction (the position of the emoticon 15 after moving is indicated by the shaded box 15a in fig. 7), until the emoticon 15 moves to the chat box 11, at which point the emoticon 15 is sent through the chat box 11 to the third virtual object corresponding to the second target expression, so that the information corresponding to the emoticon 15 is displayed on the current graphical user interface (i.e., the third graphical user interface) of the player controlling the virtual object B (i.e., the third virtual object). With this operation mode, the player can directly drag an expression in the expression wheel, whether or not it has been configured in the chat box, to the chat box, so that the expression is sent to the virtual objects of other players through the chat box, which links the chat box with the expressions while improving the convenience of sending expressions.
Besides configuring expressions for the chat box by dragging expressions on the expression wheel, as a possible implementation manner, the configuration can also be completed through an expression configuration sub-interface. Based on this, the step of configuring, in response to the expression configuration operation, the first target expression matched with the context parameter for the chat box corresponding to the first virtual object may further be implemented as follows:
(31) Responding to an expression configuration operation for the first virtual object, and displaying an expression configuration sub-interface on the first graphical user interface; the expression configuration sub-interface may be a character configuration page corresponding to the virtual object, and includes the first virtual object (i.e., a model of the first virtual object) and an expression library.
(32) Responding to a third dragging operation on the first target expression in the expression library, and moving the first target expression according to the third dragging operation; wherein the first target expression is matched with the context parameter. The third dragging operation may be a dragging operation performed while pressing the left or right mouse button, or a press-and-drag operation performed by touching the first target expression.
(33) Responding to the first target expression moving to the first virtual object, and configuring for the chat box corresponding to the first virtual object a first target expression suitable for use across multiple game plays. That the first target expression moves to the first virtual object specifically means that a partial overlapping area exists between the first target expression and the first virtual object, or that the distance between the first target expression and the first virtual object is smaller than a preset distance threshold.
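The two drop tests described for step (33), partial rectangle overlap or center distance below a preset threshold, can be sketched as follows; the (x, y, w, h) rectangle representation and the default threshold are assumptions:

```python
# Hypothetical "moved to the character" test: either the expression's
# rectangle partially overlaps the character's rectangle, or the distance
# between their centers is below a preset threshold.
import math

def rects_overlap(a, b):
    """Axis-aligned overlap test for (x, y, w, h) rectangles."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def center_distance(a, b):
    """Euclidean distance between rectangle centers."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return math.hypot((ax + aw / 2) - (bx + bw / 2),
                      (ay + ah / 2) - (by + bh / 2))

def dropped_on_character(expr_rect, char_rect, threshold=40.0):
    """True if the drop should bind the expression to the chat box."""
    return (rects_overlap(expr_rect, char_rect)
            or center_distance(expr_rect, char_rect) < threshold)
```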
The expression configuration operation for the first virtual object may be an operation of pressing a designated key of a keyboard, an operation of touching a designated control of the graphical user interface, an operation of clicking a designated control of the graphical user interface with a mouse, and the like, and may be determined according to actual needs without limitation. Referring to fig. 8 and 9: in fig. 8, after responding to the player pressing a designated keyboard key (i.e., performing the expression configuration operation for the first virtual object), the terminal device displays an expression configuration sub-interface on the first graphical user interface, where the expression configuration sub-interface includes a virtual object A (i.e., the first virtual object, i.e., the player character) and an expression library 19, and the expression library 19 includes expressions 13, 14, 15, 16, and other expressions; in fig. 9, the player drags the expression 15 in the expression library 19 with a finger (i.e., the first target expression; the shadow on the expression 15 in fig. 9 indicates that the dragging operation is directed at the expression 15), and the terminal device, after responding to the operation, controls the expression 15 to move according to the dragging direction (the position of the expression 15 after moving is indicated by the shaded box 15b in fig. 9), until the expression 15 moves to the virtual object A, at which point the expression 15 is configured for the chat box. This expression binding approach is suitable for multiple game plays, i.e., the expression 15 can be used across multiple game plays.
With this operation mode, the player can directly drag expressions in the expression library to the player character on the expression configuration sub-interface, so that the chat box is configured with expressions suitable for use in multiple game plays, which links the chat box with the expressions while improving the convenience of the expression configuration operation.
The expressions in the expression library can be divided into two categories: one category comprises expressions configured with a game context, which are suitable for chat-box configuration; the other comprises expressions not configured with a game context, which are not suitable for chat-box configuration but can be used for configuration in the expression wheel and for other expression use scenarios. To distinguish the two categories, they can be divided into different expression groups in the expression library and distinguished by the names of the expression groups, for example an automatic-sending expression group and a manual-sending expression group, where the expressions in the automatic-sending expression group are those configured with a game context and the expressions in the manual-sending expression group are those not configured with a game context. Alternatively, an identifier of the category to which an expression belongs can be displayed at a designated position of the emoticon; for example, an emoticon with the letter "A" in the upper right corner corresponds to the automatic-sending expression group, and an emoticon with the letter "N" in the upper right corner corresponds to the manual-sending expression group.
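The two-group partition of the expression library might look like this in code; the "A"/"N" identifiers follow the example in the text, while everything else is an assumption:

```python
# Illustrative partition of the expression library into the automatic-sending
# and manual-sending groups using the corner-letter category identifiers.
AUTO_SEND = "A"     # configured with a game context -> chat-box eligible
MANUAL_SEND = "N"   # no game context -> wheel or other uses only

def split_library(expressions):
    """Split (expression_id, category) pairs into the two expression groups."""
    auto = [e for e, cat in expressions if cat == AUTO_SEND]
    manual = [e for e, cat in expressions if cat == MANUAL_SEND]
    return auto, manual
```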
In order to facilitate the determination of whether the binding operation is completed, as a possible implementation manner, after step (33), the information processing method in the game may further include the following steps:
(34) Displaying, in a second preset mode, second prompt information indicating that the first target expression is successfully bound in the expression configuration sub-interface; wherein the second preset mode comprises at least one of the following modes: displaying a second set icon or second text information at a position corresponding to the first virtual object, and displaying the first virtual object in a second set color.
(35) Canceling the display of the second prompt information in response to the display duration of the second prompt information reaching a second time threshold.
On the basis of the graphical user interface shown in fig. 9, referring to fig. 10, after the terminal device configures the expression 15 for the chat box, the text "Binding successful!" is displayed in a rectangular icon 20 on the upper left of the virtual object A (i.e., the first virtual object, i.e., the player character); when the display duration reaches 3s, the rectangular icon 20 is no longer displayed. Alternatively, after the terminal device configures the expression 15 for the chat box, a check-mark icon is displayed in the upper right corner of the position of the expression 15 in the expression library 19, to indicate that the chat box has been successfully configured with the expression 15. In this way, the player can judge whether the expression configuration is successful according to the second prompt information, which improves the intuitiveness of the expression configuration of the chat box.
For example, as shown in fig. 8 to 10, in fig. 8, an emoticon wheel 12 may be further displayed in the emoticon configuration sub-interface, where the emoticon wheel 12 includes an emoticon 13, an emoticon 14, and an emoticon 16 that have been successfully bound with the chat box in the emoticon library 19; in fig. 9, after configuring the emoticon 15 for the chat box by the operation of dragging the emoticon 15 from the emoticon library 19 to the virtual object a with a finger of the player, the terminal device may also add the emoticon 15 to the emoticon wheel 12, or the player may separately configure the emoticon wheel through the emoticon library, so that the player may select the emoticon wheel 12 when the player needs to send the emoticon 15 in the game. By adopting the operation mode, the intuition of the expression configuration of the chat frame is further improved; and the mode of displaying the expression wheel disc in the expression configuration sub-interface enables a player to know which selectable expressions in the expression wheel disc are and the arrangement sequence of the selectable expressions in the expression wheel disc before starting game-play, so that the time of the player learning the expression wheel disc in the game is further shortened, and the efficiency of the player using the expressions in the game is improved.
Considering that some players like to turn on the voice function to interact with other players while others prefer to interact through expression display, as a possible implementation manner, the information processing method in the game may further include the following operations: responding to a switching event of the context parameter corresponding to the first virtual object, and checking whether the voice function of the first virtual object is in the off state; if the voice function is in the off state and the state parameter and/or the position parameter contained in the switched context parameter meets the trigger condition, executing the steps of determining the second virtual object according to the type of the first target expression and sending the first target expression to the second virtual object. If the voice function is in the on state, the steps of determining the second virtual object according to the type of the first target expression and sending the first target expression to the second virtual object may be skipped. With this operation mode, the expression can serve as a notification tool when the player has not turned on the voice function, which enhances the interaction of expressions in the game, increases the player's willingness to use expressions, and improves the utilization rate of expressions in the game. In the state where the voice function is on, the information can be quickly conveyed through voice, and the process of automatically sending the expression need not be executed.
Of course, as a possible implementation manner, in the state where the voice function is turned on, the steps of determining the second virtual object according to the type of the first target expression and sending the first target expression to the second virtual object may also be executed, specifically which manner is adopted may be set according to the actual game and the preference of the player, and the embodiment of the present disclosure does not limit this.
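The voice-function gating described above, including the optional variant that sends even when voice is on, can be sketched as follows; the function and parameter names are invented:

```python
# Hypothetical gate run on a context-parameter switching event: by default the
# expression is auto-sent only when the voice function is off; the permissive
# variant in which it is sent regardless is modelled with a flag.
def should_auto_send(voice_enabled, trigger_met, send_even_with_voice=False):
    """Decide whether to run the determine-recipients-and-send steps."""
    if not trigger_met:                        # trigger condition not satisfied
        return False
    if voice_enabled and not send_even_with_voice:
        return False                           # voice conveys the information instead
    return True
```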
In addition, to further enrich the content of the expression wheel, the expression wheel may contain a plurality of expressions and remark information corresponding to each expression. Referring to fig. 4 to 10, the remark information corresponding to an expression is displayed in the central area 17 of the expression wheel 12; its content may include operation prompt information corresponding to the expression, the name of the virtual object to which the expression belongs, and other personalized information corresponding to the expression. The specific content may be customized according to actual needs and is not limited herein.
Considering that position movement and interaction between virtual objects are frequent in a battle scene, where the player's attention is usually focused on responding to the battle, and in order to prevent the presentation of expression information from interfering with the battle, as a possible implementation, the in-game information processing method may further include the following operations: in response to a switching event of the scene parameter corresponding to the first virtual object, where the game scene corresponding to the switched scene parameter is a non-battle scene, displaying the first target expression at a preset position corresponding to the first virtual object in the first graphical user interface. In this manner, the picture content of the graphical user interface is enriched and the interest is enhanced without interfering with the battle process, improving the reasonableness of expression use.
Referring to fig. 11, the graphical user interface (i.e., the first graphical user interface) provided by the terminal device includes a virtual object A (i.e., the first virtual object, a player character) and a chat frame 11, where the virtual object A and a virtual object B belong to the same team; the scene parameter corresponding to the virtual object A meets the trigger condition that the virtual object A is healing itself, the chat frame 11 is configured with an expression (i.e., the first target expression) representing that the virtual object A is healing itself, and the type of the expression is the first type shared by all members. While the virtual object A is healing itself, the terminal device displays the expression in the form of a bubble 21 at the upper left of the virtual object A.
The above non-battle scene may mean that the first virtual object itself is in a non-battle scene, or that the team in which the first virtual object is located is in a non-battle scene, which is not limited in the embodiments of the present disclosure.
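A minimal sketch of this sender-side display decision follows; the parameter names and the returned tuple are illustrative assumptions, and the `judge_by_team` flag selects between the two readings of "non-battle" permitted above (judging the object itself versus judging its whole team).

```python
def sender_side_display(expression, object_in_battle, team_in_battle,
                        judge_by_team=False):
    """Decide whether to show the expression above the sender after a scene switch.

    Returns a display instruction when the relevant scene is a non-battle
    scene, or None when a battle is in progress (to avoid interference).
    """
    in_battle = team_in_battle if judge_by_team else object_in_battle
    if in_battle:
        return None  # battle in progress: do not interfere with the fight
    return ("bubble", expression, "preset_position_of_first_object")
```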
The above has described the in-game information processing method mainly from the side of the terminal device that sends the expression; the processing on the side of the other terminal device that receives the expression is described below. Referring to the flow chart of an in-game information processing method shown in fig. 12, the execution subject of the method is described by taking a terminal device as an example, and the method includes the following steps:
Step S1202, in response to a control operation for the second virtual object, controlling the second virtual object to execute an action corresponding to the control operation.
Step S1204, receiving a first target expression sent by a first virtual object; the first target expression is sent by the terminal device corresponding to the first virtual object when the scene parameter corresponding to the first virtual object meets a trigger condition, and the first target expression is matched with the scene parameter.
The control operation and the scene parameter are similar to those described above and are not repeated here.
Step S1206, determining a display mode of the information corresponding to the first target expression according to the game scene (i.e., the game scene corresponding to the second virtual object); the display mode includes display content and a display position.
The display content specifically refers to information corresponding to the first target expression, where the information may be the first target expression itself, text information corresponding to the first target expression, or picture information corresponding to the first target expression, and the like, and may be specifically determined according to a game scene corresponding to the second virtual object. The display position specifically refers to a display position of information corresponding to the first target expression in the second graphical user interface, and the specific display position may be different according to different display contents.
Step S1208, displaying the information corresponding to the first target expression in the second graphical user interface according to the display mode.
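The receiving-side flow of steps S1204 to S1208 can be sketched as follows. The rule inside `determine_display` is only an illustrative stand-in (text near the avatar area in battle, the expression itself above the sender otherwise); all names are assumptions of this sketch.

```python
def determine_display(expression, scene):
    """Step S1206: choose display content and display position per the game scene."""
    if scene["in_battle"]:
        content = f'{expression["sender"]}: {expression["text"]}'
        return content, "avatar_area_and_chat_frame"
    return expression["name"], "above_sender_model"

def on_expression_received(expression, scene, rendered):
    """Steps S1204 and S1208: receive the expression, then display it."""
    content, position = determine_display(expression, scene)  # S1206
    rendered.append((content, position))                      # S1208
```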
Referring to fig. 3 and 13, fig. 3 shows the first graphical user interface corresponding to a virtual object A (i.e., the first virtual object), where the scene parameter corresponding to the virtual object A meets the trigger condition that the virtual object A enters a battle; the first target expression in fig. 3 is an expression representing that the virtual object A enters a battle, and the type of the expression is the first type shared by all members. In fig. 13, the graphical user interface (i.e., the second graphical user interface) provided by the terminal device includes a virtual object B (i.e., the second virtual object), the virtual object A, and a virtual object C. When the virtual object A fights the virtual object C, the terminal device receives the first target expression sent by the virtual object A to the virtual object B, determines the display content of the information corresponding to the expression as a bubble 22 containing the expression and/or a floating text 23 reading "XXX is in battle" ("XXX" being the player ID corresponding to the virtual object A), determines the display position of the bubble 22 as the upper left of the virtual object A, and determines the display position of the floating text 23 as the lower left of the second graphical user interface; the terminal device then displays the bubble 22 at the upper left of the virtual object A while displaying the floating text 23 at the lower left of the second graphical user interface.
According to the information processing method in the game, provided by the embodiment of the disclosure, a second graphical user interface comprising at least a part of game scene and a second virtual object is provided through the terminal equipment; when the scene parameters corresponding to the first virtual object meet the trigger conditions, a first target expression which is sent by the first virtual object to the second virtual object and is matched with the scene parameters is received; and determining a display mode of the information corresponding to the first target expression according to the game scene, and displaying the information corresponding to the first target expression in the second graphical user interface according to the display mode. By adopting the technology, when the scene parameter corresponding to the first virtual object meets the trigger condition, the expression matched with the scene parameter is automatically sent to the second virtual object, and after the second virtual object side receives the expression, the information corresponding to the expression can be reasonably displayed by determining the display mode, so that the rationality of displaying the information corresponding to the expression is further improved on the basis of improving the convenience of expression interaction in the game, the interaction of the expression in the game is enhanced, and the utilization rate of the expression in the game is improved.
As a possible implementation, step S1206 may include the following operations: checking whether the game scene belongs to a battle scene, checking the distance between the first virtual object and the second virtual object, and determining the display mode of the information corresponding to the first target expression according to the check results. The display modes may include a first display mode and a second display mode. The first display mode may include displaying the information corresponding to the first target expression at a position corresponding to the avatar of the first virtual object, and/or displaying the information corresponding to the first target expression in the chat frame corresponding to the second virtual object; the second display mode may include displaying the information corresponding to the first target expression at a position corresponding to the first virtual object.
Based on the first display mode and the second display mode, step S1208 may include the following operations: (1) if the determined display mode is the first display mode, displaying the first target expression at the position corresponding to the avatar of the first virtual object in the second graphical user interface, and/or displaying text information corresponding to the first target expression in the chat frame corresponding to the second virtual object; (2) if the determined display mode is the second display mode, displaying the first target expression at the position corresponding to the first virtual object in the second graphical user interface.
The position corresponding to the avatar of the first virtual object and the position corresponding to the first virtual object are two different positions: the former refers to a position where the avatar list or avatar area of the members of the same team is displayed in a centralized manner, while the latter refers to the position where the model of the first virtual object is located, which is content within the game scene.
By determining the display mode of the expression information and displaying it accordingly, when the second virtual object receives an expression sent by the first virtual object, the information corresponding to the expression can be flexibly displayed at the corresponding position in the second graphical user interface according to whether the game scene belongs to a battle scene and the distance between the two virtual objects, and the player can intuitively obtain the information to be conveyed by the expression through the interface, improving the flexibility of information interaction during expression use and the reasonableness of expression information display.
For example, determining the display mode of the information corresponding to the first target expression according to the check results may include the following operation modes:
Operation mode 1: if the game scene is a battle scene, determining that the display mode of the information corresponding to the first target expression is the first display mode.
Referring to fig. 14, the graphical user interface (i.e., the second graphical user interface) provided by the terminal device includes a virtual object B (i.e., the second virtual object), a virtual object A (i.e., the first virtual object), a virtual object C, an avatar B0 of the virtual object B, an avatar A0 of the virtual object A, and a chat box 25, where the virtual object A and the virtual object B belong to a first team and the virtual object C belongs to a second team; the scene parameter corresponding to the virtual object A satisfies the trigger condition that the virtual object A is being attacked, the chat box of the virtual object A is configured with an expression (i.e., the first target expression) representing that the virtual object A is being attacked, and the expression is of the second type shared by the same team. Referring to fig. 15, after receiving the first target expression sent by the virtual object A to the virtual object B, the terminal device checks that the virtual object A is being attacked by the virtual object C (i.e., the game scene belongs to a battle scene), and then determines that the display mode of the information corresponding to the first target expression is to display a square icon 24 containing the first target expression to the right of the avatar A0 of the virtual object A and to display the text "XXX is being attacked" ("XXX" being the player ID corresponding to the virtual object A) as a message in the chat box 25.
By adopting operation mode 1, when the second virtual object receives an expression sent by the first virtual object during a battle, the display of the expression information is prevented from blocking the battle scene, so that the information conveyed by the expression can be obtained directly near the avatar of the first virtual object and/or in the chat frame of the second virtual object, making expression interaction in a game battle scene more reasonable.
Operation mode 2: if the game scene is a non-battle scene and the game scene does not contain the first virtual object, or the distance between the first virtual object and the second virtual object is greater than a set distance threshold, determining that the display mode of the information corresponding to the first target expression is the first display mode.
Referring to fig. 16, the graphical user interface (i.e., the second graphical user interface) provided by the terminal device includes a virtual object B (i.e., the second virtual object), a stealth grass 26, an avatar B0 of the virtual object B, an avatar A0 of a virtual object A (i.e., the first virtual object), and a chat frame 27, where the virtual object A and the virtual object B belong to the same team; the scene parameter corresponding to the virtual object A meets the trigger condition that the virtual object A is outside the visual field of the virtual object B and has entered the stealth grass, the first target expression is an expression representing that the virtual object A has entered the stealth grass, and the type of the expression is the second type shared by the same team. Therefore, after the virtual object A enters the stealth grass, the stealth-grass expression is automatically sent to the virtual object B.
Referring to fig. 17, after receiving the first target expression sent by the virtual object A to the virtual object B, the terminal device checks that the virtual object A is outside the visual field of the virtual object B and has entered the stealth grass 26 (i.e., the game scene does not belong to a battle scene and does not contain the first virtual object), and then determines that the display mode of the information corresponding to the first target expression is to display a square icon 27 containing a stealth-grass pattern to the right of the avatar A0 of the virtual object A and/or to display the text "XXX entered the stealth grass" ("XXX" being the player ID corresponding to the virtual object A) as a message in the chat frame 27.
In fig. 18, after receiving the first target expression sent by the virtual object A to the virtual object B, the terminal device checks that the virtual object A has entered the stealth grass 26 and that the distance between the virtual object A and the virtual object B is greater than the set distance threshold (i.e., the game scene does not belong to a battle scene and the distance between the first virtual object and the second virtual object is greater than the set distance threshold), and then determines that the display mode of the information corresponding to the first target expression is the same as that shown in fig. 17.
By adopting operation mode 2, in a non-battle scene in which the game scene does not contain the virtual object that sent the expression, or in which that virtual object is too far from the virtual object receiving the expression, the information conveyed by the expression can still be obtained intuitively near the avatar of the sending virtual object and/or in the chat frame of the receiving virtual object, improving the intuitiveness of expression information interaction in non-battle scenes.
Operation mode 3: if the game scene is a non-battle scene, the game scene contains the first virtual object, and the distance between the first virtual object and the second virtual object is less than or equal to the distance threshold, determining that the display mode of the information corresponding to the first target expression is the second display mode.
Referring to fig. 19, the graphical user interface (i.e., the second graphical user interface) provided by the terminal device includes a virtual object B (i.e., the second virtual object) and a virtual object A (i.e., the first virtual object), where the virtual object A and the virtual object B belong to the same team; the scene parameter corresponding to the virtual object A meets the trigger condition that the virtual object A is replacing equipment, the first target expression is an expression representing that the virtual object A is replacing equipment, and the type of the expression is the second type shared by the same team. After receiving the first target expression sent by the virtual object A to the virtual object B, the terminal device may check that the virtual object A is replacing equipment (i.e., the game scene does not belong to a battle scene), and then determine that the display mode of the information corresponding to the first target expression is to display a bubble 28 containing the expression at the upper left of the virtual object A.
By adopting operation mode 3, when the second virtual object receiving the expression is in a non-battle scene and the game scene contains the first virtual object that sent the expression, the information conveyed by the expression can be displayed visually near the first virtual object, improving the intuitiveness of expression information interaction in non-battle scenes.
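The three operation modes above reduce to a small decision function. The sketch below assumes its inputs (battle flag, sender presence, distance) have already been checked, and the distance threshold value is illustrative, since the disclosure does not fix one.

```python
DISTANCE_THRESHOLD = 30.0  # illustrative; the disclosure sets no concrete value

def choose_display_mode(in_battle, sender_in_scene, distance):
    """Map the three operation modes to the first/second display mode.

    Mode 1: battle scene                        -> first display mode.
    Mode 2: non-battle, sender absent or far    -> first display mode.
    Mode 3: non-battle, sender present and near -> second display mode.
    """
    if in_battle:
        return "first"
    if not sender_in_scene or distance > DISTANCE_THRESHOLD:
        return "first"
    return "second"
```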
As a possible implementation, the step of checking whether the game scene belongs to a battle scene may include at least one of the following operations: checking, in the game scene, whether an enemy virtual object of the second virtual object is within attack range and whether an attack mark is set on the enemy virtual object; checking whether the attribute values of the second virtual object and the enemy virtual object have changed; and checking whether the weapons of the second virtual object and the enemy virtual object have collided.
The attribute value may be a life value, an armor value, or another attribute value. In the initial state, when the virtual object has not been attacked, its attribute value is at the maximum; when the virtual object is attacked, its attribute value may decrease.
Continuing with the previous example, in fig. 15, after receiving the first target expression sent by the virtual object A (i.e., the first virtual object) to the virtual object B (i.e., the second virtual object), the terminal device checks that the life value of the virtual object B has not changed, that the life value of the virtual object C (i.e., the enemy virtual object) is changing, and that the weapons of the virtual object B and the virtual object C have not collided; the terminal device may then determine that the virtual object B and the virtual object C are not in battle (i.e., the game scene does not belong to a battle scene).
By adopting this manner of checking whether the game scene belongs to a battle scene, the battle state of a virtual object in the game can be determined quickly and accurately, so that the information corresponding to the expression is displayed in an appropriate display mode, improving the efficiency and reasonableness of expression information display.
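A sketch combining the three listed signals follows. Note one assumed interpretation: in the fig. 15 example the enemy's life value changes because a third object (the virtual object A) is attacking it, yet the virtual object B is judged not in battle, so this sketch counts only attribute changes attributable to the object being checked; the disclosure leaves the exact rule open, and all parameter names are illustrative.

```python
def is_battle_scene(own_attr_changed, enemy_attr_changed_by_self,
                    enemy_marked_in_range, weapons_collided):
    """Heuristic battle-state check: any one signal marks the scene as battle.

    Signals, per the disclosure: an enemy within attack range carrying an
    attack mark; an attribute (e.g. life) change involving this object; or
    a weapon collision between this object and the enemy.
    """
    return (enemy_marked_in_range
            or own_attr_changed
            or enemy_attr_changed_by_self
            or weapons_collided)
```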
Considering that an overly long display duration may cause an expression to occupy the game interface for too long, as a possible implementation, the in-game information processing method may further include the following step: controlling the display duration of the information corresponding to the first target expression according to the expression type of the first target expression, where the expression types include a bubble expression and an action expression.
Illustratively, if the expression type of the first target expression is a bubble expression, the terminal device cancels the display of the information corresponding to the first target expression in response to the display duration of that information reaching a preset time threshold; if the expression type of the first target expression is an action expression, the terminal device cancels the display of the information corresponding to the first target expression in response to the action of the first target expression finishing playing.
For example, in fig. 19, the expression type of the first target expression is a bubble expression; after receiving the first target expression sent by the virtual object A (i.e., the first virtual object) to the virtual object B (i.e., the second virtual object), the terminal device controls the display duration of the information corresponding to the first target expression to be 3 s, that is, the bubble 28 at the upper left of the virtual object A disappears after the display duration reaches 3 s. For another example, if the expression type of the first target expression is an action expression and the action of the first target expression is that the first virtual object falls down and then cries, the terminal device displays the information corresponding to the first target expression in the second graphical user interface after receiving the first target expression sent by the first virtual object to the second virtual object, and the information disappears after the action of the first virtual object falling down and then crying is completed.
By adopting this manner of controlling the display duration of expression information, the flexibility of expression information display is further improved, and the interest of expression use is also increased.
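The two lifetime rules can be sketched as a single lookup; the 3-second default mirrors the example above, and the function and parameter names are assumptions of this sketch.

```python
BUBBLE_SECONDS = 3.0  # threshold used in the fig. 19 example; illustrative default

def display_lifetime(expression_type, action_length=None):
    """Return how long the expression information stays on screen, in seconds.

    Bubble expressions expire after a fixed time threshold; action
    expressions stay visible until their animation finishes playing.
    """
    if expression_type == "bubble":
        return BUBBLE_SECONDS
    if expression_type == "action":
        if action_length is None:
            raise ValueError("action expressions need an animation length")
        return action_length
    raise ValueError(f"unknown expression type: {expression_type!r}")
```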
As a possible implementation, for an expression sent by directly operating the expression wheel, on the receiving side the in-game information processing method may further include the following steps: receiving a second target expression sent by a third virtual object, where the second target expression is selected from the expression wheel corresponding to the third virtual object and sent through the chat frame corresponding to the third virtual object; and displaying the second target expression at the position corresponding to the third virtual object in the second graphical user interface.
For example, as shown in fig. 7 and 20, after receiving the expression 15 (i.e., the second target expression) that the virtual object A (i.e., the third virtual object) selected from the expression wheel 12 shown in fig. 7 and sent through the chat box 11 shown in fig. 7, the terminal device displays a bubble 29 containing the expression 15 at the upper left of the virtual object A on the graphical user interface (i.e., the second graphical user interface) shown in fig. 20. In this manner, after a virtual object receives an expression selected from the expression wheel by another virtual object and sent through the chat frame, the expression can be intuitively displayed near that virtual object on the current game interface, improving the convenience of expression interaction in the game.
Based on the information processing method in the game, an embodiment of the present disclosure further provides an information processing apparatus in a game, and as shown in fig. 21, the apparatus may include the following modules:
a first control module 2102, configured to, in response to a control operation for the first virtual object, control the first virtual object to perform an action corresponding to the control operation.
A first determining module 2104, configured to, in response to the scene parameter corresponding to the first virtual object meeting a trigger condition, determine a second virtual object according to the type of the first target expression; the types of the first target expression include a first type shared by all members and a second type shared by the same team, the second virtual object is another virtual object in the same game as the first virtual object, and the first target expression is an expression corresponding to the scene parameter.
A first expression sending module 2106, configured to send the first target expression to the second virtual object, so as to notify, through a second graphical user interface corresponding to the second virtual object, information corresponding to the first target expression.
The in-game information processing apparatus provided by the embodiments of the present disclosure provides the first graphical user interface through the terminal device; when the scene parameter corresponding to the first virtual object meets the trigger condition, the apparatus determines the second virtual object according to the type of the first target expression and sends the first target expression to the second virtual object, so as to notify the information corresponding to the first target expression through the second graphical user interface corresponding to the second virtual object. With this technique, when the scene parameter corresponding to the first virtual object meets the trigger condition, the first target expression matched with the scene parameter is automatically sent to the second virtual object, which improves the convenience of sending expressions in the game: the expression matched with the scene parameter can be sent automatically to other virtual objects without any additional operation by the player during the game, and its information is notified in the graphical user interfaces corresponding to those virtual objects, enhancing the interactivity and role of expressions in the game and improving the utilization rate of expressions in the game.
Referring to fig. 21, the apparatus may further include:
A first expression configuration module 2108, configured to configure, in response to an expression configuration operation, a first target expression matching the scene parameter for the chat frame corresponding to the first virtual object.
The first expression sending module 2106 may be further configured to send the first target expression to the second virtual object through the chat frame.
The first expression configuration module 2108 may be further configured to: display the expression wheel on the first graphical user interface in response to a display operation for the expression wheel, the expression wheel including a plurality of preconfigured expressions; in response to a first dragging operation for a first target expression in the expression wheel, move the first target expression according to the first dragging operation, where the first target expression is matched with the scene parameter; and in response to the first target expression moving to a chat box in the first graphical user interface, configure, for the chat box, the first target expression suitable for use in the game.
Referring to fig. 21, the apparatus may further include:
the first prompt module 2110 is configured to display first prompt information that the first target expression configuration is successful in the first graphical user interface according to a first preset mode; wherein the first preset mode comprises at least one of the following modes: displaying a first set icon or first text information at a position corresponding to the chat frame, and displaying the chat frame in a first set color; and canceling the display of the first prompt message in response to the display duration of the first prompt message reaching a first time threshold.
A second expression sending module 2112, configured to respond to a second dragging operation for a second target expression in the expression wheel, and move the second target expression according to the second dragging operation; and responding to the second target expression moving to the chat frame, sending the second target expression to a third virtual object corresponding to the second target expression through the chat frame, and informing information corresponding to the second target expression through a third graphical user interface corresponding to the third virtual object.
The first expression configuration module 2108 may be further configured to: display an expression configuration sub-interface on the first graphical user interface in response to an expression configuration operation for the first virtual object, the expression configuration sub-interface including the first virtual object and an expression library; in response to a third dragging operation for a first target expression in the expression library, move the first target expression according to the third dragging operation, where the first target expression is matched with the scene parameter; and in response to the first target expression moving to the first virtual object, configure, for the chat box corresponding to the first virtual object, the first target expression suitable for use in a plurality of games.
Referring to fig. 21, the apparatus may further include:
a second prompt module 2114, configured to display, in the expression configuration sub-interface, second prompt information that the first target expression is successfully bound according to a second preset manner; wherein the second preset mode comprises at least one of the following modes: displaying and setting a second icon or second text information at a position corresponding to the first virtual object, and displaying the first virtual object in a second set color; and canceling the display of the second prompt message in response to the display duration of the second prompt message reaching a second time threshold.
A voice function checking module 2116, configured to respond to a switching event of a context parameter corresponding to the first virtual object, and check whether a voice function of the first virtual object is in an off state; and if the voice function is in a closed state and the state parameters and/or the position parameters contained in the switched scene parameters meet the trigger condition, executing the steps of determining a second virtual object according to the type of the first target expression and sending the first target expression to the second virtual object.
The first expression display module 2118 is configured to respond to a switching event of a scenario parameter corresponding to the first virtual object, where a game scene corresponding to the scenario parameter after the switching is a non-battle scene, and display the first target expression at a preset position in the first graphical user interface corresponding to the first virtual object.
Based on the information processing method in the game, an embodiment of the present disclosure further provides another information processing apparatus in a game, and as shown in fig. 22, the apparatus may include the following modules:
a second control module 2202, configured to control, in response to a control operation for the second virtual object, the second virtual object to execute an action corresponding to the control operation.
A first expression receiving module 2204, configured to receive a first target expression sent by the first virtual object; the first target expression is sent by the terminal device corresponding to the first virtual object when the scene parameters corresponding to the first virtual object meet a trigger condition, and the first target expression matches the scene parameters.
A second determining module 2206, configured to determine, according to the game scene, a display mode of the information corresponding to the first target expression; the display mode includes display content and a display position.
An information display module 2208, configured to display, in the second graphical user interface, the information corresponding to the first target expression according to the determined display mode.
The in-game information processing apparatus provided by this embodiment of the present disclosure provides, through the terminal device, a second graphical user interface including at least part of a game scene and a second virtual object. When the scene parameters corresponding to the first virtual object meet the trigger condition, a first target expression matching those scene parameters and sent by the first virtual object to the second virtual object is received; a display mode for the information corresponding to the first target expression is then determined according to the game scene, and that information is displayed in the second graphical user interface accordingly. With this technique, an expression matching the scene parameters is sent to the second virtual object automatically when the trigger condition is met, and after the expression is received on the second virtual object side, the corresponding information can be displayed reasonably by determining the display mode. This further improves the rationality of displaying the expression information, on top of making in-game expression interaction more convenient, thereby strengthening expression interaction in the game and raising the utilization rate of in-game expressions.
The second determining module 2206 may be further configured to: check whether the game scene belongs to a battle scene, check the distance between the first virtual object and the second virtual object, and determine the display mode of the information corresponding to the first target expression according to the check results. The display modes include a first display mode and a second display mode; the first display mode includes displaying the information corresponding to the first target expression at a position corresponding to the avatar of the first virtual object, and/or displaying that information in the chat box corresponding to the second virtual object; the second display mode includes displaying the information corresponding to the first target expression at a position corresponding to the first virtual object.
The second determining module 2206 may be further configured to: if the game scene is a battle scene, determine that the display mode of the information corresponding to the first target expression is the first display mode; if the game scene is a non-battle scene and the first virtual object is not contained in the game scene, or the distance between the first virtual object and the second virtual object is greater than a set distance threshold, determine that the display mode is the first display mode; and if the game scene is a non-battle scene, the game scene contains the first virtual object, and the distance between the first virtual object and the second virtual object is less than or equal to the distance threshold, determine that the display mode is the second display mode.
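The three branches above reduce to a short decision function. This is a minimal sketch under stated assumptions: the mode names and the default distance threshold are illustrative, not values given in this disclosure.

```python
# Illustrative decision function for the display-mode rules above.

FIRST_MODE = "first"    # show at the sender's avatar and/or the receiver's chat box
SECOND_MODE = "second"  # show at the sender's in-scene position

def choose_display_mode(is_battle, sender_in_scene, distance, threshold=10.0):
    """Pick the display mode for a received first target expression."""
    if is_battle:
        return FIRST_MODE
    if not sender_in_scene or distance > threshold:
        return FIRST_MODE
    return SECOND_MODE  # non-battle, sender visible and within the threshold
```

The design choice is intuitive: in battle, or when the sender is off-screen or far away, the expression is anchored to UI elements (avatar, chat box) rather than to a world position the receiver may not be able to see.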
The information display module 2208 may be further configured to: if the determined display mode is the first display mode, display the first target expression at the position corresponding to the avatar of the first virtual object in the second graphical user interface, and/or display the text information corresponding to the first target expression in the chat box corresponding to the second virtual object; and if the determined display mode is the second display mode, display the first target expression at a position corresponding to the first virtual object in the second graphical user interface.
The second determining module 2206 may be further configured to: check whether, in the game scene, an enemy virtual object of the second virtual object is within attack range, and whether an attack mark is set on the enemy virtual object.
The second determining module 2206 may be further configured to: check whether the attribute values of the second virtual object and the enemy virtual object have changed.
The second determining module 2206 may be further configured to: check whether the weapon of the second virtual object collides with the enemy virtual object.
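The three battle-scene checks can be combined into one predicate. The sketch below is hypothetical throughout: the dict-based enemy model, the use of HP as the tracked attribute, and the per-frame snapshot of previous values are assumptions chosen to make the checks concrete.

```python
# Hedged sketch of the battle-scene detection; any one check suffices.

def in_battle_scene(self_hp, prev_self_hp, enemies, prev_enemy_hp, attack_range):
    # check 1: an enemy is within attack range, or carries an attack mark
    for e in enemies:
        if e["distance"] <= attack_range or e.get("attack_mark"):
            return True
    # check 2: an attribute value (here HP) of either side changed
    if self_hp != prev_self_hp:
        return True
    if any(e["hp"] != prev_enemy_hp[e["id"]] for e in enemies):
        return True
    # check 3: the second object's weapon collided with an enemy
    return any(e.get("weapon_collided") for e in enemies)
```

Since the checks are alternatives ("at least one of"), an implementation might evaluate only the cheapest one first, as the short-circuiting order above does.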
Referring to fig. 22, the apparatus may further include:
a display duration control module 2210, configured to control a display duration of information corresponding to the first target expression according to an expression type of the first target expression, where the expression type includes a bubble expression and an action expression.
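Duration control keyed on the expression type might look like the following. The concrete durations are illustrative assumptions; the disclosure only specifies that bubble and action expressions are controlled separately.

```python
# Sketch of type-based display-duration control; durations are hypothetical.

DISPLAY_DURATIONS = {"bubble": 3.0, "action": 5.0}  # seconds (assumed values)

def display_duration(expression_type):
    """Return how long the expression's information stays on screen,
    falling back to the bubble duration for unknown types."""
    return DISPLAY_DURATIONS.get(expression_type, DISPLAY_DURATIONS["bubble"])
```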
A second expression receiving module 2212, configured to receive a second target expression sent by the third virtual object; the second target expression is selected from the expression wheel corresponding to the third virtual object and is sent through the chat box corresponding to the third virtual object.
A second expression display module 2214, configured to display the second target expression at a position corresponding to the third virtual object in the second graphical user interface.
The implementation principle and technical effects of the in-game information processing apparatus provided by this embodiment of the present disclosure are the same as those of the foregoing method embodiments. For the sake of brevity, where the apparatus embodiment is silent on a point, reference may be made to the corresponding content of the foregoing embodiments of the in-game information processing method.
An embodiment of the present disclosure further provides an electronic device. As shown in fig. 23, which is a schematic structural diagram of the electronic device, the electronic device includes a processor 231 and a memory 230; the memory 230 stores computer-executable instructions executable by the processor 231, and the processor 231 executes the computer-executable instructions to implement the in-game information processing method described above.
In the embodiment shown in fig. 23, the electronic device further comprises a bus 232 and a communication interface 233, wherein the processor 231, the communication interface 233 and the memory 230 are connected by the bus 232.
The memory 230 may include a high-speed random access memory (RAM) and may also include a non-volatile memory, such as at least one disk memory. The communication connection between the network element of this system and at least one other network element is implemented through at least one communication interface 233 (wired or wireless), and may use the Internet, a wide area network, a local area network, a metropolitan area network, or the like. The bus 232 may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like, and may be divided into an address bus, a data bus, and a control bus. For ease of illustration, only one double-headed arrow is shown in fig. 23, but this does not indicate that there is only one bus or one type of bus.
The processor 231 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method may be performed by integrated logic circuits in hardware or by instructions in the form of software in the processor 231. The processor 231 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), or the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or any conventional processor. The steps of the in-game information processing method disclosed by the embodiments of the present disclosure may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or a register. The storage medium is located in the memory, and the processor 231 reads the information in the memory and, in combination with its hardware, completes the steps of the in-game information processing method of the foregoing embodiments.
The embodiment of the present disclosure further provides a computer-readable storage medium, where the computer-readable storage medium stores computer-executable instructions, and when the computer-executable instructions are called and executed by a processor, the computer-executable instructions cause the processor to implement the information processing method in the game, and specific implementation may refer to the foregoing method embodiment, and is not described herein again.
The computer program product of the in-game information processing method and apparatus and the electronic device provided by the embodiments of the present disclosure includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the methods described in the foregoing method embodiments. For specific implementation, reference may be made to the method embodiments, which are not repeated here.
Unless specifically stated otherwise, the components and steps set forth in these embodiments, and their relative arrangement, numerical expressions, and values, do not limit the scope of the present disclosure.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a non-transitory computer-readable storage medium. Based on such understanding, the technical solutions of the present disclosure, in essence, or the part contributing to the prior art, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In the description of the present disclosure, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", and the like indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience and simplicity of description; they do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation, and thus should not be construed as limiting the present disclosure. Furthermore, the terms "first", "second", and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above embodiments are merely specific embodiments of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art may, within the technical scope of the present disclosure, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of their technical features; such modifications, changes, and substitutions do not depart from the spirit and scope of the embodiments disclosed herein and shall all fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (20)

1. An information processing method in a game is characterized in that a first graphical user interface is provided through a terminal device; the method comprises the following steps:
responding to a control operation aiming at a first virtual object, and controlling the first virtual object to execute an action corresponding to the control operation;
in response to the scene parameters corresponding to the first virtual object meeting a trigger condition, determining a second virtual object according to the type of a first target expression; wherein the type of the first target expression comprises a first type shared with all members and a second type shared with the same team, the second virtual object is another virtual object in the same game as the first virtual object, and the first target expression is an expression corresponding to the scene parameters;
and sending the first target expression to the second virtual object so as to inform information corresponding to the first target expression through a second graphical user interface corresponding to the second virtual object.
2. The method of claim 1, further comprising: in response to an expression configuration operation, configuring a first target expression matching the scene parameters for a chat box corresponding to the first virtual object;
wherein sending the first target expression to the second virtual object comprises: sending the first target expression to the second virtual object through the chat box.
3. The method of claim 2, wherein, in response to an expression configuration operation, configuring a first target expression matching the scene parameters for the chat box corresponding to the first virtual object comprises:
displaying an expression wheel on the first graphical user interface in response to a display operation directed to the expression wheel; wherein the expression wheel comprises a plurality of preconfigured expressions;
in response to a first dragging operation on a first target expression in the expression wheel, moving the first target expression according to the first dragging operation; wherein the first target expression matches the scene parameters;
and in response to the first target expression being moved to a chat box in the first graphical user interface, configuring the first target expression, suitable for use in the game, for the chat box.
4. The method of claim 3, wherein after the step of configuring the first target expression suitable for use in the game for the chat box, the method further comprises:
displaying, in the first graphical user interface and in a first preset manner, first prompt information indicating that the first target expression has been successfully configured; wherein the first preset manner comprises at least one of the following: displaying a first set icon or first text information at a position corresponding to the chat box, and displaying the chat box in a first set color;
and canceling the display of the first prompt information in response to the display duration of the first prompt information reaching a first time threshold.
5. The method of claim 3, further comprising:
in response to a second dragging operation on a second target expression in the expression wheel, moving the second target expression according to the second dragging operation;
and in response to the second target expression being moved to the chat box, sending the second target expression through the chat box to a third virtual object corresponding to the second target expression, so as to notify information corresponding to the second target expression through a third graphical user interface corresponding to the third virtual object.
6. The method of claim 2, wherein, in response to an expression configuration operation, configuring a first target expression matching the scene parameters for the chat box corresponding to the first virtual object comprises:
in response to an expression configuration operation directed to the first virtual object, displaying an expression configuration sub-interface on the first graphical user interface; wherein the expression configuration sub-interface comprises the first virtual object and an expression library;
in response to a third dragging operation on a first target expression in the expression library, moving the first target expression according to the third dragging operation; wherein the first target expression matches the scene parameters;
and in response to the first target expression being moved onto the first virtual object, configuring the first target expression, suitable for use in a plurality of games, for the chat box corresponding to the first virtual object.
7. The method of claim 6, wherein after the step of configuring the first target expression suitable for use in a plurality of games for the chat box, the method further comprises:
displaying, in the expression configuration sub-interface and in a second preset manner, second prompt information indicating that the first target expression has been successfully bound; wherein the second preset manner comprises at least one of the following: displaying a second set icon or second text information at a position corresponding to the first virtual object, and displaying the first virtual object in a second set color;
and canceling the display of the second prompt information in response to the display duration of the second prompt information reaching a second time threshold.
8. The method of claim 1, further comprising:
in response to a switching event of the scene parameters corresponding to the first virtual object, checking whether the voice function of the first virtual object is in an off state;
and if the voice function is in the off state and the state parameters and/or position parameters contained in the switched scene parameters meet the trigger condition, executing the steps of determining a second virtual object according to the type of the first target expression and sending the first target expression to the second virtual object.
9. The method of claim 1, further comprising:
in response to a switching event of the scene parameters corresponding to the first virtual object, where the game scene corresponding to the switched scene parameters is a non-battle scene, displaying the first target expression at a preset position corresponding to the first virtual object in the first graphical user interface.
10. An information processing method in a game is characterized in that a second graphical user interface is provided through a terminal device; the second graphical user interface comprises at least part of a game scene and a second virtual object, and the method comprises:
responding to a control operation aiming at the second virtual object, and controlling the second virtual object to execute an action corresponding to the control operation;
receiving a first target expression sent by a first virtual object; wherein the first target expression is sent by the terminal device corresponding to the first virtual object when the scene parameters corresponding to the first virtual object meet a trigger condition, and the first target expression matches the scene parameters;
determining a display mode of information corresponding to the first target expression according to the game scene; the display mode comprises display content and a display position;
and displaying the information corresponding to the first target expression in the second graphical user interface according to the display mode.
11. The method of claim 10, wherein determining a display mode of the information corresponding to the first target expression according to the game scene comprises:
checking whether the game scene belongs to a battle scene, checking the distance between the first virtual object and the second virtual object, and determining the display mode of the information corresponding to the first target expression according to the check results; wherein the display modes comprise a first display mode and a second display mode, the first display mode comprises: displaying the information corresponding to the first target expression at a position corresponding to the avatar of the first virtual object, and/or displaying the information corresponding to the first target expression in the chat box corresponding to the second virtual object; and the second display mode comprises displaying the information corresponding to the first target expression at a position corresponding to the first virtual object.
12. The method of claim 11, wherein determining a display mode of the information corresponding to the first target expression according to the check result comprises:
if the game scene is a battle scene, determining that the display mode of the information corresponding to the first target expression is a first display mode;
if the game scene is a non-battle scene and the first virtual object is not included in the game scene, or the distance between the first virtual object and the second virtual object is greater than a set distance threshold, determining that the display mode of the information corresponding to the first target expression is the first display mode;
and if the game scene is a non-battle scene, the game scene includes the first virtual object, and the distance between the first virtual object and the second virtual object is less than or equal to the distance threshold, determining that the display mode of the information corresponding to the first target expression is the second display mode.
13. The method of claim 11, wherein displaying the information corresponding to the first target expression in the second graphical user interface according to the display mode comprises:
if the determined display mode is the first display mode, displaying the first target expression at the position corresponding to the avatar of the first virtual object in the second graphical user interface, and/or displaying the text information corresponding to the first target expression in the chat box corresponding to the second virtual object;
and if the determined display mode is the second display mode, displaying a first target expression at a position corresponding to the first virtual object in the second graphical user interface.
14. The method of claim 11, wherein checking whether the game scene belongs to a battle scene comprises at least one of:
checking whether, in the game scene, an enemy virtual object of the second virtual object is within attack range, and whether an attack mark is set on the enemy virtual object;
checking whether the attribute values of the second virtual object and the enemy virtual object have changed;
checking whether the weapon of the second virtual object collides with the enemy virtual object.
15. The method of claim 10, further comprising:
and controlling the display duration of the information corresponding to the first target expression according to the expression type of the first target expression, wherein the expression type comprises a bubble expression and an action expression.
16. The method of claim 10, further comprising:
receiving a second target expression sent by a third virtual object; wherein the second target expression is selected from the expression wheel corresponding to the third virtual object and is sent through the chat box corresponding to the third virtual object;
and displaying the second target expression at a position corresponding to the third virtual object on the second graphical user interface.
17. An in-game information processing apparatus, characterized in that a first graphical user interface is provided through a terminal device; the apparatus comprises:
the first control module is used for responding to a control operation aiming at a first virtual object and controlling the first virtual object to execute an action corresponding to the control operation;
the first determining module is used for determining a second virtual object according to the type of a first target expression in response to the scene parameters corresponding to the first virtual object meeting a trigger condition; wherein the type of the first target expression comprises a first type shared with all members and a second type shared with the same team, the second virtual object is another virtual object in the same game as the first virtual object, and the first target expression is an expression corresponding to the scene parameters;
and the first expression sending module is used for sending the first target expression to the second virtual object so as to inform information corresponding to the first target expression through a second graphical user interface corresponding to the second virtual object.
18. An in-game information processing apparatus, characterized in that a second graphical user interface is provided through a terminal device; the second graphical user interface comprises at least part of a game scene and a second virtual object, and the apparatus comprises:
the second control module is used for responding to the control operation aiming at the second virtual object and controlling the second virtual object to execute the action corresponding to the control operation;
the first expression receiving module is used for receiving a first target expression sent by the first virtual object; wherein the first target expression is sent by the terminal device corresponding to the first virtual object when the scene parameters corresponding to the first virtual object meet a trigger condition, and the first target expression matches the scene parameters;
the second determining module is used for determining a display mode of the information corresponding to the first target expression according to the game scene; the display mode comprises display content and a display position;
and the information display module is used for displaying the information corresponding to the first target expression in the second graphical user interface according to the display mode.
19. An electronic device comprising a processor and a memory, the memory storing computer-executable instructions executable by the processor, the processor executing the computer-executable instructions to implement the method of any one of claims 1 to 16.
20. A computer-readable storage medium having stored thereon computer-executable instructions that, when invoked and executed by a processor, cause the processor to implement the method of any of claims 1 to 16.
CN202211032640.1A 2022-08-26 2022-08-26 Information processing method and device in game, electronic equipment and storage medium Pending CN115400427A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211032640.1A CN115400427A (en) 2022-08-26 2022-08-26 Information processing method and device in game, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211032640.1A CN115400427A (en) 2022-08-26 2022-08-26 Information processing method and device in game, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115400427A true CN115400427A (en) 2022-11-29

Family

ID=84160632

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211032640.1A Pending CN115400427A (en) 2022-08-26 2022-08-26 Information processing method and device in game, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115400427A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination