CN111701240A - Virtual article prompting method and device, storage medium and electronic device - Google Patents


Info

Publication number
CN111701240A
Authority
CN
China
Prior art keywords
virtual
information
neural network
network model
situation information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010574847.6A
Other languages
Chinese (zh)
Other versions
CN111701240B (en)
Inventor
蔡康
张哲�
胡瑞清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202010574847.6A
Publication of CN111701240A
Application granted
Publication of CN111701240B
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/67 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor, adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/533 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game, for prompting the player, e.g. by displaying a game menu

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method and a device for prompting a virtual article, a storage medium and an electronic device. The method comprises the following steps: acquiring first situation information of a virtual game character, wherein the first situation information is situation information of a current round; determining a neural network model corresponding to the virtual game character, wherein the neural network model is trained by using multiple groups of sample data, and each group of sample data comprises: second situation information, virtual article information, and a correspondence between the second situation information and the virtual article information, wherein the second situation information is situation information of a historical round, and the virtual article information is a historical virtual article used by the virtual game character in the historical round; analyzing the first situation information based on the neural network model to obtain a target virtual article; and prompting the target virtual article to the virtual game character. The method and the device achieve the effect of improving the efficiency of prompting virtual articles.

Description

Virtual article prompting method and device, storage medium and electronic device
Technical Field
The invention relates to the field of computers, in particular to a method and a device for prompting a virtual article, a storage medium and an electronic device.
Background
Currently, the selection of virtual articles has a great influence on the outcome of competitive game matches. When prompting a virtual article to a virtual game character, it is common to tally the virtual articles corresponding to the virtual game character's level and prompt the tallied virtual articles to the virtual game character.
However, as time goes on, virtual articles determined in this way cannot adapt to changes in the game situation, and reasonable virtual articles cannot be prompted to the virtual game character, which constitutes a technical problem of low efficiency in prompting virtual articles.
Aiming at the technical problem of low efficiency in prompting virtual articles in the related art, no effective solution has been proposed at present.
Disclosure of Invention
The invention mainly aims to provide a method and a device for prompting a virtual article, a storage medium, and an electronic device, so as to at least solve the technical problem of low efficiency in prompting virtual articles.
In order to achieve the above object, according to one aspect of the present invention, a method for prompting a virtual article is provided. The method may comprise the following steps: acquiring first situation information of a virtual game character, wherein the first situation information is situation information of a current round; determining a neural network model corresponding to the virtual game character, wherein the neural network model is trained by using multiple groups of sample data, and each group of sample data comprises: second situation information, virtual article information, and a correspondence between the second situation information and the virtual article information, wherein the second situation information is situation information of a historical round, and the virtual article information is a historical virtual article used by the virtual game character in the historical round; analyzing the first situation information based on the neural network model to obtain a target virtual article; and prompting the target virtual article to the virtual game character.
Optionally, analyzing the first situation information based on the neural network model to obtain a target virtual article includes: converting the first situation information into a first one-hot code; converting the first one-hot code based on the neural network model to obtain a plurality of probabilities, wherein each probability is used for indicating the likelihood of recommending a corresponding virtual article to the virtual game character; and determining the virtual article corresponding to the maximum probability among the plurality of probabilities as the target virtual article.
Optionally, the first situation information includes at least one of: virtual articles that the virtual game character has selected in the current round; character information of the virtual game character in the current round; and character information of an associated virtual game character of the virtual game character in the current round.
Optionally, before determining the neural network model corresponding to the virtual game character, the method further comprises: acquiring a video of a historical round; and sampling the video to obtain the second situation information and the virtual article information.
Optionally, sampling the video to obtain the second situation information and the virtual article information includes: sampling the video according to a target time interval to obtain a plurality of sampling points; determining the situation information corresponding to each sampling point as the second situation information; and determining the virtual article selected by the virtual game character after each sampling point as the virtual article information.
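The sampling step described above can be sketched in Python. This is an illustrative sketch only, not the patent's implementation: the names `ItemPurchase` and `sample_round`, and the convention that each sampling point is labelled with the first virtual article selected after it, are assumptions for demonstration.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ItemPurchase:
    time: float      # seconds into the round (assumed sorted by time)
    item_id: int

def sample_round(duration: float,
                 interval: float,
                 situation_at,            # callable: time -> situation dict
                 purchases: List[ItemPurchase]
                 ) -> List[Tuple[dict, int]]:
    """Return (second situation information, item label) pairs for one round."""
    samples = []
    t = 0.0
    while t <= duration:
        # label = first virtual article bought strictly after the sampling point
        nxt = next((p.item_id for p in purchases if p.time > t), None)
        if nxt is not None:
            samples.append((situation_at(t), nxt))
        t += interval
    return samples
```

Sampling points with no subsequent purchase are dropped here, since they carry no output label for training.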
Optionally, the second situation information includes at least one of: virtual articles that the virtual game character has selected in the historical round; character information of the virtual game character in the historical round; and character information of an associated virtual game character of the virtual game character in the historical round.
Optionally, before determining the neural network model corresponding to the virtual game character, the method further comprises: and taking the second situation information as the input of the initial neural network model, taking the virtual article information as the output of the initial neural network model, and carrying out deep learning training on the initial neural network model to obtain the neural network model.
Optionally, using the second situation information as the input of the initial neural network model includes: converting the second situation information into a second one-hot code; and determining the second one-hot code as an input feature of the initial neural network model.
Optionally, using the virtual article information as the output of the initial neural network model includes: converting the virtual article information into a third one-hot code; and determining the third one-hot code as an output label of the initial neural network model.
Optionally, performing deep learning training on the initial neural network model to obtain a neural network model, including: and carrying out gradient descent training on the parameters of the initial neural network model to obtain the neural network model.
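The gradient-descent training step can be illustrated with a minimal sketch. For brevity this uses a single softmax layer trained by batch gradient descent on one-hot features and one-hot labels; the patent's model is a deeper network, and the function names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(X, Y, lr=0.5, epochs=200, seed=0):
    """X: (n, d) one-hot situation features; Y: (n, k) one-hot item labels."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(X.shape[1], Y.shape[1]))
    for _ in range(epochs):
        P = softmax(X @ W)                 # predicted item probabilities
        W -= lr * X.T @ (P - Y) / len(X)   # cross-entropy gradient step
    return W
```

The update `X.T @ (P - Y) / n` is the gradient of the mean cross-entropy loss for a softmax layer, so each epoch performs one full-batch gradient-descent step on the model parameters.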
Optionally, acquiring the first situation information of the current round includes: acquiring the first situation information from a target server.
In order to achieve the above object, according to another aspect of the present invention, a prompting device for a virtual article is provided. The device includes: an acquisition unit, configured to acquire first situation information of a virtual game character, wherein the first situation information is situation information of a current round; a determining unit, configured to determine a neural network model corresponding to the virtual game character, wherein the neural network model is trained by using multiple groups of sample data, and each group of sample data comprises: second situation information, virtual article information, and a correspondence between the second situation information and the virtual article information, wherein the second situation information is situation information of a historical round, and the virtual article information is a historical virtual article used by the virtual game character in the historical round; an analysis unit, configured to analyze the first situation information based on the neural network model to obtain a target virtual article; and a recommending unit, configured to prompt the target virtual article to the virtual game character.
In order to achieve the above object, according to another aspect of the present invention, there is provided a storage medium. The storage medium stores a computer program, wherein when the computer program is executed by the processor, the apparatus where the storage medium is located is controlled to execute the method for prompting a virtual article according to the embodiment of the present invention.
In order to achieve the above object, according to another aspect of the present invention, an electronic device is provided. The electronic device comprises a memory and a processor, wherein the memory stores a computer program, and the processor is configured to run the computer program to execute the prompting method of the virtual article of the embodiment of the invention.
In the present invention, first situation information of a virtual game character is acquired, wherein the first situation information is situation information of a current round; a neural network model corresponding to the virtual game character is determined, wherein the neural network model is trained by using multiple groups of sample data, and each group of sample data comprises: second situation information, virtual article information, and a correspondence between the second situation information and the virtual article information, wherein the second situation information is situation information of a historical round, and the virtual article information is a historical virtual article used by the virtual game character in the historical round; the first situation information is analyzed based on the neural network model to obtain a target virtual article; and the target virtual article is prompted to the virtual game character. That is to say, the neural network model is established from the existing multiple groups of sample data of the virtual game character, and the dynamically changing first situation information can be input into the neural network model for processing, so that the target virtual article is recommended dynamically in real time. This makes the prompting of the target virtual article more reasonable, solves the technical problem of low efficiency in prompting virtual articles, and achieves the technical effect of improving that efficiency.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a block diagram of a hardware structure of a mobile terminal of a method for prompting a virtual article according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for cueing a virtual item according to an embodiment of the invention;
FIG. 3 is a schematic illustration of model training and prediction according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of one-hot encoding of existing equipment in accordance with an embodiment of the present invention;
FIG. 5 is a schematic diagram of converting attribute information into features according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of sample data composed by one-to-one correspondence of output tags to input features according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a network structure of a model according to an embodiment of the invention; and
fig. 8 is a schematic diagram of a prompting device for a virtual article according to an embodiment of the invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only some of the embodiments of the present application, rather than all of them. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, such that the embodiments of the application described herein can be implemented in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The method provided by the embodiment of the application can be executed in a mobile terminal, a computer terminal or a similar operation device. Taking the example of running on a mobile terminal, fig. 1 is a hardware structure block diagram of the mobile terminal of a virtual article prompting method according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal may include one or more (only one shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 for storing data, and optionally may also include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration, and does not limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 can be used for storing computer programs, for example, software programs and modules of application software, such as a computer program corresponding to a data processing method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, so as to implement the above-mentioned method. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
In this embodiment, a method for prompting a virtual article running on the mobile terminal is provided. Fig. 2 is a flowchart of a method for prompting a virtual article according to an embodiment of the present invention. As shown in fig. 2, the method may include the steps of:
in step S202, first character information of a virtual game character is acquired.
In the technical solution provided by step S202 of the present invention, the first situation information is situation information of a current round.
In this embodiment, the virtual game character may be a virtual object controlled in the game scene by a target object, wherein the target object may be referred to as a player. Alternatively, the virtual game character may also be a virtual object controlled in the game scene by a simulation object, wherein the simulation object is used for simulating the operation of the player in the game scene, and may be an Artificial Intelligence (AI) object, which may also be referred to as game AI.
The current round of this embodiment is the game match in which the virtual game character is currently participating. For example, for an ongoing competitive game match, the first situation information of the virtual game character belongs to the game data of the virtual game character; it is the situation information of the current round, may be used to indicate the real-time battle situation of the game, and changes dynamically as time progresses. This embodiment acquires the above first situation information.
In step S204, a neural network model corresponding to the virtual game character is determined.
In the technical solution provided in step S204 of the present invention, after the first situation information of the virtual game character is acquired, a neural network model corresponding to the virtual game character is determined, wherein the neural network model is trained by using multiple sets of sample data, and each set of sample data includes: second situation information, virtual article information, and a correspondence between the second situation information and the virtual article information, wherein the second situation information is situation information of a historical round, and the virtual article information is a historical virtual article used by the virtual game character in the historical round.
In this embodiment, the neural network model is specific to a particular virtual game character; that is, different virtual game characters may have different neural network models. The neural network model of this embodiment may also be referred to as a virtual article recommendation model and is used for obtaining, based on the input situation information, a virtual article suitable for the situation in which the virtual game character is located; in other words, the neural network model establishes a mapping relationship between input situation information and output virtual articles. Optionally, in this embodiment, a large amount of existing data of the virtual game character in historical rounds is collected in advance, that is, multiple sets of sample data are collected, and the neural network model is trained on them. Each set of sample data may include second situation information, virtual article information, and the correspondence between them in a historical round, where the second situation information is the situation information of the historical round and may be used to indicate the battle situation of the game in that round; the historical round is at least one game round in which the virtual game character appeared before the current round; and the virtual article information is a historical virtual article used by the virtual game character in the historical round, which may be the virtual article selected next after the second situation information was observed.
Optionally, in this embodiment, deep learning training is performed on an initial neural network model based on the multiple sets of sample data to obtain the model parameters of the neural network model, and a deep neural network (DNN) model is generated from these parameters. Evaluations such as cross-validation, target evaluation, and checks for over-fitting and under-fitting may then be performed on the deep neural network model, finally yielding the neural network model used to determine a virtual article suitable for the situation in which the virtual game character is located.
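The holdout-style evaluation mentioned above (checking the trained recommender for over-fitting or under-fitting) could be sketched as follows; the split ratio, the top-1 accuracy metric, and all function names are assumptions for illustration, not part of the claimed invention.

```python
import random

def holdout_split(samples, val_fraction=0.2, seed=42):
    """Shuffle (situation, item) pairs and split into train/validation sets."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    cut = int(len(samples) * (1 - val_fraction))
    return samples[:cut], samples[cut:]

def accuracy(model_predict, samples):
    """Fraction of samples where the model's top recommended item matches the label."""
    if not samples:
        return 0.0
    hits = sum(1 for situation, item in samples
               if model_predict(situation) == item)
    return hits / len(samples)
```

A large gap between training and validation accuracy would suggest over-fitting; low accuracy on both would suggest under-fitting.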
In this embodiment, the virtual article may be a virtual auxiliary object used by the virtual game character, such as equipment or props, which may include, but are not limited to, equipment for increasing attack speed, equipment for increasing attack power, and the like, and which may be obtained through purchase.
In step S206, the first situation information is analyzed based on the neural network model to obtain a target virtual article.
In the technical solution provided in step S206 of the present invention, after the neural network model corresponding to the virtual game character is determined, the first situation information is analyzed based on the neural network model to obtain the target virtual article.
In this embodiment, the first situation information may be input into the neural network model. Since the neural network model establishes a mapping relationship between input situation information and output virtual articles, processing the first situation information yields a target virtual article, namely a virtual article suitable for the virtual game character to use in the battle situation indicated by the first situation information. In this way, the fighting capacity of the virtual game character may be improved, and the game experience of the player corresponding to the virtual game character may be improved.
In step S208, the target virtual item is presented to the virtual game character.
In the technical solution provided in step S208 of the present invention, after the first situation information is analyzed based on the neural network model to obtain the target virtual article, the target virtual article is prompted to the virtual game character; that is, the target virtual article is recommended to the virtual game character, providing a reference for selecting virtual articles and thereby guiding the virtual game character to purchase virtual articles.
Optionally, this embodiment outputs information of the target virtual article, which may be presented in forms including, but not limited to, text and images, and may be output to a specific location of the graphical user interface; this embodiment may also output the information of the target virtual article in voice form, which is not limited herein.
In this embodiment, as time goes on, the first situation information is constantly changed, so that the target virtual item determined by the neural network model is also changed, that is, the embodiment achieves the purpose of dynamically prompting a reasonable target virtual item for the virtual game character in real time according to the situation of the battle situation.
In the related art, the purchases of virtual articles with a high winning rate at the level corresponding to the player character are typically counted and recommended to the virtual game character, or a suitable item build is iteratively selected according to a genetic algorithm; neither approach can dynamically prompt reasonable virtual articles to the virtual game character in real time according to the battle situation. By contrast, through the above steps S202 to S208 of the present application, first situation information of the virtual game character is acquired, wherein the first situation information is situation information of a current round; a neural network model corresponding to the virtual game character is determined, wherein the neural network model is trained by using multiple groups of sample data, and each group of sample data comprises: second situation information, virtual article information, and a correspondence between the second situation information and the virtual article information, wherein the second situation information is situation information of a historical round, and the virtual article information is a historical virtual article used by the virtual game character in the historical round; the first situation information is analyzed based on the neural network model to obtain a target virtual article; and the target virtual article is prompted to the virtual game character.
That is to say, this embodiment establishes the neural network model from the existing multiple groups of sample data of the virtual game character and can input the dynamically changing first situation information into the neural network model for processing, so as to dynamically recommend the target virtual article to the virtual game character in real time. This makes the prompting of the target virtual article more reasonable, solves the technical problem of low efficiency in prompting virtual articles, and achieves the technical effect of improving that efficiency.
The above-described method of this embodiment is further described below.
As an optional implementation manner, in step S206, analyzing the first situation information based on the neural network model to obtain the target virtual article includes: converting the first situation information into a first one-hot code; converting the first one-hot code based on the neural network model to obtain a plurality of probabilities, wherein each probability is used for indicating the likelihood of recommending a corresponding virtual article to the virtual game character; and determining the virtual article corresponding to the maximum probability among the plurality of probabilities as the target virtual article.
In this embodiment, the first situation information of the current round may include numerical information or non-numerical information. To facilitate analysis of the first situation information by the neural network model, the acquired first situation information may be uniformly converted into a first one-hot code, which may also be referred to as a one-hot feature.
In this embodiment, the neural network model may be a probabilistic model; when the probabilistic model is trained through a deep learning method, it may also be referred to as a deep probabilistic model. This embodiment inputs the first one-hot code into the neural network model and obtains a plurality of probabilities, namely one probability per candidate virtual article, where the probability of each virtual article indicates the likelihood of recommending that virtual article to the virtual game character, and the probabilities of all virtual articles sum to 1. This embodiment can determine the maximum probability among the plurality of probabilities and determine the virtual article corresponding to that maximum probability as the target virtual article to be prompted to the virtual game character.
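The prediction step described above, in which the model returns one probability per candidate virtual article (summing to 1) and the highest-probability article is prompted, can be sketched as follows. The function names and the probability-vector interface are illustrative assumptions rather than the patent's actual API.

```python
import numpy as np

def recommend(model_probs, feature, item_names):
    """Pick the item with the maximum probability.

    model_probs: callable mapping a one-hot feature to a probability
    vector over candidate items; item_names: list of item labels.
    """
    probs = np.asarray(model_probs(feature), dtype=float)
    # Per the description above, the per-item probabilities sum to 1.
    assert abs(probs.sum() - 1.0) < 1e-6, "probabilities should sum to 1"
    best = int(probs.argmax())
    return item_names[best], float(probs[best])
```

For example, a probability vector of [0.1, 0.7, 0.2] over three items would cause the second item to be prompted.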
The first one-hot encoding is further described below in conjunction with the specific forms of the first situation information.
As an optional implementation, the first situation information includes at least one of: virtual articles that the virtual game character has selected in the current round; character information of the virtual game character in the current round; and character information of an associated virtual game character of the virtual game character in the current round.
In this embodiment, the virtual items that the virtual game character has already selected have a great influence on the next target virtual item. For example, if the virtual game character has already purchased several pieces of attack-speed equipment, the benefit of purchasing yet another piece of attack-speed equipment is low, and a different type of equipment is the more appropriate choice. In addition, some virtual items may have unique effects, that is, the effects of multiple virtual items carrying a certain special effect cannot be stacked, so it is not reasonable to buy that type of virtual item repeatedly. The first situation information of this embodiment may thus include the virtual items that the virtual game character has selected in the current round, for example, in a game match that is currently being played. The virtual items selected by the virtual game character in the current round are non-numerical information, and non-numerical information is not suited to being represented by a single numerical value: single numerical values carry an ordering, while no direct ordering relationship exists between different virtual items, so a numerical representation would introduce extra interference information. The virtual items selected in the current round in this embodiment therefore cannot simply be represented by a single value and are instead encoded in one-hot form. Optionally, assuming the identifications (IDs) of the virtual items selected in the current round are [2, 4, 4, 5], the encoded feature may be (0, 1, 0, 2, 1 …, 0); assuming there are E types of selectable virtual items in total, the feature dimension of the virtual items selected by the virtual game character in the current round is E.
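The encoding just described can be sketched as follows, under the assumption (matching the example IDs above) that item IDs start at 1 and that repeated purchases of the same item increment the corresponding slot.

```python
# A sketch of the count-based one-hot encoding described above; the
# 1-based ID convention is an assumption taken from the example IDs.
def encode_selected_items(item_ids, num_item_types):
    """Convert a list of selected item IDs into a count-based one-hot vector."""
    feature = [0] * num_item_types           # dimension E = number of item types
    for item_id in item_ids:
        feature[item_id - 1] += 1            # ID k occupies slot k
    return feature

# IDs [2, 4, 4, 5] with E = 7 selectable item types:
print(encode_selected_items([2, 4, 4, 5], 7))   # [0, 1, 0, 2, 1, 0, 0]
```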
In this embodiment, the character information of the virtual game character in the current round and the character information of its associated virtual game characters in the current round also have a great influence on the next target virtual item, where an associated virtual game character may be an enemy competing with the virtual game character, or an associated Non-Player Character (NPC) in the game scene, such as a game unit not operated by a player: a soldier, a defense tower, a monster, or a special monster. The character information of the virtual game character in the current round can include attribute information, state information, camp information, and the like of the virtual game character, where the attribute information can include attack values such as the physical strength and magic strength of the virtual game object, defense values such as physical resistance and magic resistance, the type of the virtual game object, and so on. The physical strength, magic strength, physical resistance, and magic resistance in the current round are numerical information with an obvious magnitude relationship and can be expressed directly as numerical values; the type of the virtual game object, like the virtual items, is non-numerical information and needs to be expressed in one-hot form. In fact, attribute information such as physical strength, magic strength, physical resistance, and magic resistance in the current round is strongly correlated with the hero type: for example, an enemy support-class virtual game object, even with high physical strength and magic strength, is limited by its skills and cannot deal high damage, so the virtual game character does not need excessive defensive virtual items against it.
Therefore, this embodiment combines the attribute information in the current round with the hero type information and represents them in one-hot form.
For example, combining the physical strength with the camp information and hero type: assuming there are K types of selectable virtual game objects and 2 camps, the physical strength feature dimension is 2K; the other three attributes (magic strength, physical resistance, and magic resistance) are handled the same way, each with feature dimension 2K, so the attribute features of the virtual game character total 2K × 4 = 8K dimensions. Of course, some games distinguish neither physical strength from magic strength nor physical resistance from magic resistance, in which case the features need only be 4K-dimensional. In general, assuming there are L attributes in total, the overall feature dimension is L × 2K.
It should be noted that the first situation information of this embodiment including the virtual items that the virtual game character has selected in the current round, the character information of the virtual game character in the current round, and the character information of the associated virtual game characters in the current round is only a preferred implementation of the embodiment of the present invention; it does not mean that the first situation information of the present invention is limited to the above information. Any information that can be input into the neural network model to obtain a target virtual item suitable for the virtual game character falls within the method of this embodiment of the present invention and is not exhaustively illustrated here.
As an optional embodiment, before determining the neural network model corresponding to the virtual game character, the method further comprises: acquiring a video recording of a historical round; and sampling the video recording to obtain second situation information and virtual item information.
In this embodiment, before the neural network model corresponding to the virtual game character is determined, multiple sets of sample data need to be collected to train the neural network model. This embodiment can acquire video recordings of historical rounds in which the virtual game character appears, for example the recordings of N matches featuring the virtual game character, where each recording comprises state information and operation information; the video displayed through the graphical user interface is in fact the in-game rendering of this information. Optionally, this embodiment directly samples the recording to obtain the data information (the second situation information and the virtual item information). It should be noted that the video recording of this embodiment may be a set composed of the state data of each frame, rather than a literal video. This embodiment can sample the recordings, for example all N recordings, and obtain for each sampling point the second situation information and virtual item information, i.e., the information at the corresponding moment of the recording that is effective for training the neural network model.
As an optional implementation, sampling the video recording to obtain the second situation information and the virtual item information includes: sampling the recording according to a target time interval to obtain a plurality of sampling points; determining the situation information corresponding to each sampling point as the second situation information; and determining the virtual item selected by the virtual game character after each sampling point as the virtual item information.
In this embodiment, when the recording is sampled to obtain the second situation information and the virtual item information, the recording may be sampled according to a target time interval to obtain a plurality of sampling points, for example with Ts as the time interval; for each sampling point, the situation information at the corresponding moment of the recording is determined as the second situation information. This embodiment can determine the virtual item selected by the virtual game character after each sampling point, for example the virtual item acquired next after that sampling point, as the virtual item information, thereby obtaining the information that is effective for training the neural network model.
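The sampling step above can be sketched as follows. This is a minimal sketch under the assumption that a recording is available as a list of per-frame tuples; the frame structure and names are hypothetical.

```python
# A sketch of sampling a recording at a fixed interval and pairing the
# situation at each sampling point with the next item purchased after it.
def sample_recording(frames, interval):
    """Yield (situation, next_item) pairs; frames are (time, situation, item_bought)."""
    samples = []
    for t, situation, _ in frames:
        if t % interval != 0:
            continue                               # keep only sampling points
        next_items = [item for ft, _, item in frames if ft > t and item]
        if next_items:                             # drop points with no later purchase
            samples.append((situation, next_items[0]))
    return samples

frames = [(0, "s0", None), (5, "s5", "sword"), (10, "s10", None), (15, "s15", "staff")]
print(sample_recording(frames, 10))   # [('s0', 'sword'), ('s10', 'staff')]
```

Note that a sampling point whose round ends before any further purchase yields no pair, mirroring the discarded final sample described later in this section.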
The second situation information of this embodiment is described below.
As an optional implementation, the second situation information includes at least one of: virtual items that the virtual game character has selected in historical rounds; character information of the virtual game character in historical rounds; and character information, in historical rounds, of virtual game characters associated with the virtual game character.
In this embodiment, as described above, the virtual items that the virtual game character has already selected greatly affect the next virtual item to acquire, so the second situation information of this embodiment may include the virtual items that the virtual game character has selected in historical rounds. Since the character information of the virtual game character in a historical round and the character information of its associated virtual game characters in that round also have a large influence on the virtual item to be acquired next, this embodiment can acquire both as well. The character information of the virtual game character in a historical round can comprise attribute information, state information, camp information, and the like, where the attribute information can comprise attack values such as the physical strength and magic strength of the virtual game object in the historical round, defense values such as physical resistance and magic resistance in the historical round, the type of the virtual game object, and so on.
It should be noted that the second situation information of this embodiment including the virtual items that the virtual game character has selected in historical rounds, the character information of the virtual game character in historical rounds, and the character information of the associated virtual game characters in historical rounds is only a preferred implementation of the embodiment of the present invention; it does not mean that the second situation information of the present invention is limited to the above information. Any situation information that can be used for training the neural network model falls within the method of this embodiment of the present invention and is not exhaustively illustrated here.
As an optional implementation manner, before determining the neural network model corresponding to the virtual game character in step S204, the method further includes: and taking the second situation information as the input of the initial neural network model, taking the virtual article information as the output of the initial neural network model, and carrying out deep learning training on the initial neural network model to obtain the neural network model.
In this embodiment, before determining the neural network model corresponding to the virtual game character, the neural network model is trained using multiple sets of sample data. In this embodiment, the second situation information is used as an input of the initial neural network model, the virtual article information is used as an output of the initial neural network model, that is, the second situation information and the virtual article information have a mapping relationship between the input and the output, and the initial neural network model is subjected to deep learning training through the mapping relationship to obtain the neural network model, that is, the deep neural network model, wherein the initial neural network model may be an initialized deep neural network model.
It should be noted that the neural network model in this embodiment is only a preferred implementation of the embodiment of the present invention; it does not mean that the neural network model of this embodiment can only be a deep neural network model. Any model that can be used to analyze the first situation information and obtain the target virtual item to be prompted to the virtual game character falls within the scope of this embodiment and is not exhaustively illustrated here.
As an optional implementation, using the second situation information as the input of the initial neural network model includes: converting the second situation information into a second one-hot code; and determining the second one-hot code as an input feature of the initial neural network model. Using the virtual item information as the output of the initial neural network model includes: converting the virtual item information into a third one-hot code; and determining the third one-hot code as an output label of the initial neural network model.
In this embodiment, the second situation information may be converted into an input feature, i.e., a sample feature (also simply referred to as a feature), regarded as the independent variable X, and the virtual item information may be converted into an output label, i.e., a sample label (also simply referred to as a label), regarded as the dependent variable Y; different independent variables X have different influences on the dependent variable Y.
Optionally, this embodiment converts the second situation information into a second one-hot code. The virtual items selected by the virtual game character in the historical round within the second situation information are non-numerical information, which is not suited to being represented by a single numerical value: single numerical values carry an ordering, while no direct ordering relationship exists between different virtual items in the historical round, so a numerical representation would introduce extra interference information. The virtual items selected in the historical round in this embodiment therefore cannot simply be represented by a single value, and the second one-hot code is obtained by encoding them in one-hot form. Optionally, assuming the identifications (IDs) of the virtual items selected in the historical round are [2, 4, 4, 5], the second one-hot code may be (0, 1, 0, 2, 1 …, 0); assuming there are E types of selectable virtual items in total, the feature dimension of the virtual items selected in the historical round is E. Within the second situation information of this embodiment, the physical strength, magic strength, physical resistance, and magic resistance in the historical round are numerical information with an obvious magnitude relationship and can be expressed directly as numerical values, while the type of the virtual game object, like the virtual items, is non-numerical information and therefore needs to be expressed in one-hot form.
The virtual item information is converted into a third one-hot code, and the third one-hot code is determined as the output label of the initial neural network model. In this embodiment, output labels and input features correspond one to one to form sets of sample data; within the same match recording, the data at each moment a virtual item is newly acquired can correspond to one set of sample data. Each set of sample data takes the newly acquired virtual item of the next set as its output label; in particular, the last set of sample data of a match recording is discarded because it lacks a corresponding output label. This means that a match recording with Mi sets of valid sample data originally contains Mi + 1 sets of sample data in total.
In this embodiment, the virtual items already owned by the virtual game character belong to the input features. The output label is a virtual item and, similar to the input features, can also be represented in one-hot form, with dimension equal to the number of virtual item categories, for example E; unlike the input features, the final representation of the output label necessarily contains exactly one 1, with all remaining entries 0.
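The label encoding above can be sketched as follows, again assuming (as in the earlier feature example) that item IDs start at 1.

```python
# A sketch of building the output label: unlike the input feature, the
# label vector contains exactly one 1 (the next item purchased), 0 elsewhere.
def encode_label(next_item_id, num_item_types):
    """One-hot label over the E item categories, assuming IDs start at 1."""
    label = [0] * num_item_types
    label[next_item_id - 1] = 1
    return label

print(encode_label(3, 5))        # [0, 0, 1, 0, 0]
print(sum(encode_label(3, 5)))   # 1 -- exactly one hot entry
```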
Optionally, in this embodiment, the virtual items selected by the virtual game character at each sampling point, the character information of the virtual game character, and the character information of the associated virtual game characters are converted into a one-hot code as the input feature, and the virtual item that the virtual game character acquires next is converted into a one-hot code as the output label, so that each sampling point becomes one set of training sample data. Assuming the sample points of the i-th recording total Mi, the total sample size is M = M1 + M2 + … + MN, where N is the number of video recordings.
As an optional implementation, performing deep learning training on the initial neural network model to obtain the neural network model includes: performing gradient descent training on the parameters of the initial neural network model to obtain the neural network model.
In this embodiment, model tuning may be automated through an optimization algorithm. When deep learning training of the initial neural network model is performed to obtain the neural network model, the deep neural network model can be initialized, the sample features taken as input and the sample labels as output, and the model trained by Gradient Descent; when the training error falls below a certain level or the number of training steps reaches a specified count, training can be stopped, and the model obtained at that point is determined as the final neural network model of the virtual game character.
It should be noted that the above optimization algorithm of this embodiment is only a preferred implementation of the embodiment of the present invention; it does not mean that the optimization algorithm of this embodiment can only be the above gradient descent method. Any method that can achieve model tuning falls within the scope of this embodiment of the present invention and is not exhaustively illustrated here.
Optionally, in this embodiment, the neural network model may consist of fully connected layers followed by a softmax layer producing the predicted label. For example, an M-dimensional vector composed of the features, X = (x1, …, xM), serves as the input; for convenience of vector computation, one extra dimension whose value is fixed to 1 is usually appended, i.e., X = (x1, …, xM, 1). After passing through several fully connected layers, the result is fed into the final softmax layer to obtain Y = (y1, …, yE), where in this embodiment the dimension of Y may be the number E of virtual item categories. The number of fully connected layers may be set to P and the numbers of nodes per layer to (Q1, Q2, …, QP), where P and Qi can be set by the designer as the case may be.
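A forward pass through such an architecture can be sketched as follows. This is a minimal illustration of fully connected layers plus softmax; the weights are arbitrary placeholders, since the patent obtains them by gradient descent training.

```python
import math

# A minimal forward-pass sketch: fully connected layers + softmax output.
def dense(x, weights):
    """One fully connected layer; weights has one row of coefficients per node."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def softmax(z):
    m = max(z)                                   # subtract max for stability
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

x = [0.5, 1.0, 1.0]              # input feature with the extra fixed-1 dimension
hidden = dense(x, [[0.2, 0.1, 0.0], [0.0, 0.3, 0.1]])              # Q1 = 2 nodes
y = softmax(dense(hidden, [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]))   # E = 3 items
print(round(sum(y), 6))          # 1.0 -- a probability over the E items
```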
As an optional implementation, obtaining the first situation information of the current round includes: obtaining the first situation information from a target server.
In this embodiment, the first situation information may be obtained from a target server, where the target server may be a game server, that is, the situation information data of the game world in this embodiment may be obtained directly from the game server.
In this embodiment, a neural network model of the virtual game character can be established from multiple sets of existing sample data, and an optimization algorithm applied to tune it, so that in actual use a more reasonable virtual item can be prompted to the virtual game character according to the real-time first situation information of the game, providing a reference for the virtual game character; this is more flexible than traditional virtual item prompting methods. In addition, this embodiment uses a neural network model in a data-driven manner, requires little prior knowledge from the programmer, yields results closer to the statistically optimal recommendation, and produces more intelligent equipment builds than traditional prompting methods. The virtual item prompting method of this embodiment is portable and can be applied to any game containing virtual items: multiple sets of sample data of a virtual game character can be extracted from game recordings to train the neural network model; the trained model is then used in a live game, where real-time situation information is extracted and input into the model, and the target virtual item it outputs is prompted to the virtual game character. This solves the technical problem of low efficiency in prompting virtual items and achieves the technical effect of improving that efficiency.
The following describes an example of the virtual item prompting method of the embodiment of the present invention with reference to a preferred embodiment. Specifically, a virtual item is taken to be equipment, and prompting a virtual item is taken to mean recommending a virtual item.
Equipment selection has a great influence on the outcome of a competitive game match and plays a major role in improving the fighting capability of a virtual game character. The dynamic intelligent equipment recommendation technique can, in real time and according to the battle situation, provide reasonable equipment-build references for the player or game AI corresponding to a virtual game character. Specifically, it serves two functions: recommending appropriate equipment to the player corresponding to the virtual game character as a reference for selection, improving the player's game experience; and guiding the game AI to purchase equipment, so that the AI's builds are closer to a player's choices, improving the player's experience in player-versus-AI matches.
Existing approaches, based on player data, either count the purchases of props with higher win rates at the corresponding player character grades and recommend them to players, or iteratively select recommended equipment suited to an individual build via a genetic algorithm; neither recommends equipment dynamically according to the situation. In this embodiment, equipment is recommended dynamically according to the situation: the equipment already selected by the virtual game character corresponding to the player, the attributes of the virtual game characters (e.g., heroes) of both sides, NPC states, and the like. Thus, as the situation keeps changing over time, the recommended build changes correspondingly. This embodiment applies a deep learning method, establishes a neural network model from existing player data, and automatically tunes the model through an optimization algorithm, making the equipment recommendation more reasonable.
The above-described method of this embodiment is further described below.
In this embodiment, the situation information of the game scene may be directly obtained from the game server, and may include, but is not limited to, equipment that the player has selected, hero attributes of both parties, and the like; the neural network model may be, but is not limited to, employing a deep neural network; the optimization algorithm may, but is not limited to, employ a gradient descent method.
The objective of the neural network model training of this embodiment is to establish a different neural network model for each hero. Taking hero A as an example, the overall process mainly includes the following S1 to S5:
S1, collecting N game video recordings of hero A.
S2, sampling each recording at intervals of Ts; for each sampling point, the information effective for training the neural network model is obtained at the corresponding moment of the recording, which may include but is not limited to the equipment already selected by the player corresponding to hero A, the attributes and states of both sides' heroes, and the equipment hero A will acquire next.
S3, converting the equipment selected by hero A's player at each sampling point in S2, together with information such as the heroes' attributes, into a one-hot code as the sample feature; the equipment hero A's player acquires next is likewise converted into a one-hot code as the sample label, so that each sampling point becomes one training sample. Assuming the i-th recording has Mi sample points in total, the total sample size is M = M1 + M2 + … + MN.
S4, initializing the deep neural network model, taking the sample features as input and the sample labels as output, and training the model with gradient descent; when the training error falls below a certain level or the number of training steps reaches a specified count, training stops, yielding the neural network model corresponding to hero A, i.e., the equipment recommendation model.
S5, obtaining the current situation information, for example data such as the equipment already selected by hero A's player and the attributes of both sides' heroes in the ongoing match, converting it into a one-hot code, and inputting it into the trained equipment recommendation model corresponding to hero A; the equipment corresponding to the output node with the largest value is the best recommended equipment for hero A.
For ease of understanding, the following further description takes a MOBA game as an example. In combat, equipment generally needs to be purchased. Players are usually divided into two enemy camps, and the two teams compete across a game map; besides the characters selected by the two teams, the map usually contains game units not operated by players, such as soldiers, defense towers, minor monsters, and special monsters. Each player controls the selected character to kill enemy or neutral units on the map to obtain resources, aiming to destroy the enemy base and obtain final victory.
FIG. 3 is a schematic diagram of model training and prediction according to an embodiment of the present invention. As shown in FIG. 3, in the training phase, N game recordings of hero A are collected into a recording database storing the game data of the N recordings; these data are then analyzed. Each recording is sampled at intervals of Ts, and for each sampling point, the information effective for modeling is obtained at the corresponding moment of the recording, including but not limited to the equipment already selected by hero A's player, the attributes and states of both sides' heroes, and the equipment hero A will acquire next. Next comes the feature extraction stage: for each sampling point, the equipment selected by hero A's player, the heroes' attributes of both sides, and the NPC state information are converted into a one-hot code as the sample feature, and the equipment hero A's player acquires next is converted into a one-hot code as the sample label, so that each sampling point becomes one set of sample data (a training sample). The deep neural network model is initialized, the sample features are taken as input and the sample labels as output, and the model is trained with gradient descent.
Optionally, this embodiment may perform cross validation in the model training stage: the data samples for training the neural network model can be divided into K parts, with K−1 parts trained on in turn and the remaining 1 part used for validation, yielding K validation results whose average serves as the estimate of the algorithm's accuracy.
For example, with 100,000 samples and 10-fold cross validation, the samples are fully shuffled and then divided into the 1st 10,000 samples, the 2nd 10,000 samples, …, and the 10th 10,000 samples, yielding 10 sample subsets. The 1st subset is used for validation and the remaining 9 for training the neural network model, which is then validated on the 1st subset to obtain an accuracy a1; by analogy, validating on the 2nd subset while training on the remaining 9 gives accuracy a2, and so on up to accuracy a10. The cross-validation accuracy is then (a1 + a2 + … + a10)/10, and the neural network model can be further post-processed to obtain a post-processing model.
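The K-fold split described above can be sketched as follows. This is an illustrative sketch only; the shuffling seed is fixed purely for reproducibility.

```python
import random

# A sketch of the K-fold cross-validation split: shuffle, divide into K
# equal parts, and rotate which part serves as the validation set.
def k_fold_splits(samples, k):
    """Yield (train, validation) pairs, one per fold."""
    shuffled = samples[:]
    random.Random(0).shuffle(shuffled)           # fixed seed for reproducibility
    fold_size = len(shuffled) // k
    for i in range(k):
        val = shuffled[i * fold_size:(i + 1) * fold_size]
        train = shuffled[:i * fold_size] + shuffled[(i + 1) * fold_size:]
        yield train, val

folds = list(k_fold_splits(list(range(100)), 10))
print(len(folds))                           # 10 folds
print(len(folds[0][0]), len(folds[0][1]))   # 90 10
```

The per-fold accuracies would then be averaged, as in the (a1 + a2 + … + a10)/10 estimate above.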
In the prediction stage of the neural network model, real-time battle data such as the equipment already selected by hero A's player in an ongoing match and the attributes of both sides' heroes can be converted into a one-hot code and input into the trained post-processing model corresponding to hero A; the equipment corresponding to the output node with the largest value is the best recommended equipment for hero A at that moment, i.e., the prediction result. The probabilities corresponding to all equipment sum to 1, and the largest value represents the maximum probability.
The method of sample feature extraction of this embodiment is described below.
In this embodiment, the sample features may include hero A's existing equipment, the lineups of both sides, and the hero attributes of both sides; the sample label is the equipment hero A acquires next. Converting this information into sample features and sample labels is a critical step. Regarding the sample feature as the independent variable X and the sample label as the dependent variable Y, the influence of X on Y and the representation of X are described in detail below.
Hero A's existing equipment has a great influence on the next piece of equipment to acquire. For example, if hero A has already bought several pieces of attack-speed equipment, buying yet more attack-speed equipment yields low benefit, and a different type of equipment is the more suitable choice. In addition, some equipment may have a unique effect, i.e., the effects of multiple pieces of equipment carrying a certain special effect cannot be stacked, and repeatedly purchasing that type of equipment is not reasonable. Equipment is non-numerical information not suited to being represented by a single numerical value: single numerical values carry an ordering, while no direct ordering relationship exists between different pieces of equipment, so a numerical representation would introduce extra interference information. Equipment information therefore cannot simply be represented by a single value and is typically represented in one-hot form.
Fig. 4 is a schematic diagram of one-hot encoding existing equipment according to an embodiment of the present invention. As shown in fig. 4, assuming the existing equipment ids are [2, 4, 4, 5], the equipment one-hot feature is (0, 1, 0, 2, 1, …, 0); assuming there are E types of optional equipment in total, the feature dimension of hero A's existing equipment is E.
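The encoding in fig. 4 can be sketched as a count-style one-hot vector over the E equipment types; equipment ids are assumed 1-based, matching the example:

```python
def encode_equipment(equipment_ids, num_equipment_types):
    """Count-style one-hot encoding of owned equipment.

    Ids are assumed 1-based as in the patent's example:
    ids [2, 4, 4, 5] -> (0, 1, 0, 2, 1, 0, ..., 0), dimension E.
    """
    feature = [0] * num_equipment_types
    for eid in equipment_ids:
        feature[eid - 1] += 1  # repeated items accumulate a count
    return feature

assert encode_equipment([2, 4, 4, 5], 8) == [0, 1, 0, 2, 1, 0, 0, 0]
```

Note that, unlike a label vector, this feature can hold values greater than 1 when the same item is owned more than once.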
In this embodiment, the attribute information of each hero on both sides has a great influence on the equipment hero A should build next: when the enemy's attack is low and its defense is generally high, hero A and its teammates should give priority to high-attack equipment. A hero's attribute information mainly includes attack values such as physical attack and magic attack, defense values such as physical resistance and magic resistance, and the hero type. These are represented as follows: physical attack, magic attack, physical resistance and magic resistance are numerical information with an obvious ordering and can be represented directly by values; the hero type, like equipment, is non-numerical information and therefore requires a one-hot form. In fact, there is a strong correlation between attribute information such as physical attack, magic attack, physical resistance and magic resistance and the hero type. For example, for an enemy support hero, even if its physical and magic attack values are high, its skills prevent it from dealing high damage to hero A, so there is no need to build many defensive items against it. Attribute information therefore needs to be characterized in combination with hero type information.
This embodiment combines the physical attack, magic attack, physical resistance, magic resistance and lineup information as described above, characterized as shown in fig. 5. Fig. 5 is a schematic diagram of converting attribute information into features according to an embodiment of the present invention. As shown in fig. 5, in a 5V5 hero-selection scenario, the first lineup and the second lineup each consist of five heroes drawn from the full hero pool arranged in a fixed order. For the first lineup, the lineup one-hot encoding may be 01101 … 01 … 10 …, and its physical attack\magic attack or physical resistance\magic resistance may be expressed as 0ab0c … 0d … e0; for the second lineup, the lineup one-hot encoding may be 10010 … 11 … 01 …, and its physical attack\magic attack or physical resistance\magic resistance may be expressed as f00g0 … hi … 0j ….
Assuming there are K hero types to select from and 2 camps, the feature dimension of physical attack is 2K; the other three attributes (magic attack, physical resistance and magic resistance) likewise each have feature dimension 2K, so the hero attribute features total 2K × 4 = 8K dimensions. Of course, some MOBA-type games draw no distinction between physical and magic, in which case the features need only be 4K-dimensional. In general, assuming there are L attributes in total, the overall feature dimension is L × 2K.
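The per-attribute 2K-dimensional layout described above can be sketched as follows. Hero ids are assumed 0-based here, and the function name is illustrative: each hero's attribute value is placed in the slot for its (camp, hero type) pair.

```python
def encode_attribute(lineups, values, num_hero_types):
    """Build one 2K-dimensional attribute feature (K = number of hero types).

    lineups: [[hero ids of camp 0], [hero ids of camp 1]], ids 0-based.
    values:  parallel per-hero attribute values, e.g. physical attack.
    Slots for heroes not in either lineup stay 0, mirroring the 0ab0c... pattern.
    """
    feature = [0.0] * (2 * num_hero_types)
    for camp, (heroes, vals) in enumerate(zip(lineups, values)):
        for hero_id, value in zip(heroes, vals):
            feature[camp * num_hero_types + hero_id] = value
    return feature

# K = 3 hero types; camp 0 fields hero 1, camp 1 fields heroes 0 and 2
f = encode_attribute([[1], [0, 2]], [[5.0], [2.0, 7.0]], 3)
# f == [0.0, 5.0, 0.0, 2.0, 0.0, 7.0]
```

With L attributes, calling this once per attribute and concatenating the results gives the L × 2K overall dimension stated above.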
The method of output label extraction of this embodiment is described below.
In this embodiment, within the same match video, a sample is taken each time the equipment newly increases. Each sample uses the equipment added in the next sample as its label; in particular, the last sample of a match video is discarded because it lacks a corresponding label. This means that a match video yielding Mi valid samples originally contained Mi + 1 samples in total.
In this embodiment, the output labels need to be put in one-to-one correspondence with the input features to form sample data; during data processing, the sample form may be set as shown in fig. 6. Fig. 6 is a schematic diagram of sample data formed by this one-to-one correspondence according to an embodiment of the present invention. For sample data 1, the existing equipment is a, the corresponding label is equipment b from sample data 2, and the output label feature may be represented as 01000…; for sample data 2, the existing equipment is a, b, the corresponding label is equipment c from sample data 3, and the output label feature may be represented as 00100…; for sample data 3, the existing equipment is a, b, c, the corresponding label is equipment d from sample data 4, and the output label feature may be represented as 00010…; for sample data 4, the existing equipment is a, b, c, d, the corresponding label is equipment e from sample data 5, and the output label feature may be represented as 00001….
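The sample construction above can be sketched as follows, with the final state dropped because it lacks a label (the function name and letter ids are illustrative):

```python
def build_samples(purchases):
    """Turn a chronological purchase list into (owned-so-far, next-purchase) pairs.

    The feature at step t is the equipment owned so far; the label is the next
    purchase. The final state has no next purchase and is discarded, so a match
    with M + 1 purchase events yields M usable samples.
    """
    samples = []
    for i in range(len(purchases) - 1):  # last event has no label
        samples.append((purchases[: i + 1], purchases[i + 1]))
    return samples

pairs = build_samples(["a", "b", "c", "d", "e"])
# pairs[0] == (["a"], "b"); pairs[-1] == (["a", "b", "c", "d"], "e")
```

Each pair would then be encoded: the owned list with the count-style equipment encoding, the label as a one-hot vector containing a single 1.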
In the above example, the features describe the current situation and the labels serve the prediction purpose. The main prediction goal of this embodiment is to know what the next piece of equipment will be: for sample data 1, the next piece of equipment is clearly b, so b is the label of sample data 1; similarly, for sample data 2, the next piece of equipment is c, so c is the label of sample data 2, and so on.
In this embodiment, a alone cannot be used as the label of sample data 1, because the next piece of equipment is b; using a alone would indicate that the next piece of equipment is a, which does not match reality. It would be possible to use a and b together as the label of sample data 1, but the result would then need the input a removed before b could be taken as the next piece of equipment, which is cumbersome and not recommended.
In this embodiment, the equipment situation is a feature and the label is an equipment category, which can be represented by one-hot encoding with dimension equal to the number of equipment categories, i.e., E. Unlike the equipment-situation feature, the final representation of the label necessarily contains exactly one 1, with all remaining entries 0.
The following describes a design process of the neural network model of this embodiment.
This embodiment obtains the neural network model from a fully connected network followed by a softmax layer that produces the predicted label.
Fig. 7 is a schematic diagram of the network structure of a model according to an embodiment of the invention. As shown in fig. 7, a feature vector X = (x0, …, xM) is input to the input layer. For convenience of vector calculation, one extra dimension whose value is fixed to 1 is usually appended, i.e., X = (x0, …, xM, 1). The advantage is that the formula z = wa + b can be converted to z = w′a′, where a′ is a with a 1 appended at the end and w′ is w with b appended at the end; expanding w′a′ recovers the form wa + b, which simplifies calculation.
After passing through multiple fully connected layers (net0, net1, …, net9), the data is input into the final softmax layer, yielding Y = (y1, …, yE), where the dimension of Y is the number of equipment classes E in this embodiment. The fully connected layers number P in total, with (Q1, Q2, …, QP) nodes per layer, where P and the Qi can be set by the designer as appropriate.
In the fully-connected layer of this embodiment, data processing can be performed by the following formula:
z^l = w^l a^{l-1} + b^l  (1)
a^l = σ(z^l)  (2)
in this embodiment, the softmax layer may perform data processing by the following formula:
y_i = e^{z_i} / Σ_{j=1}^{M1} e^{z_j}  (3)
where a^{l-1} denotes the vector formed by the input nodes of layer l-1; z^l denotes the vector formed by the output nodes of layer l (the output of one layer is in fact the input of the next layer); w^l denotes the node weight vector of the network between layers l-1 and l, shown as dotted lines in fig. 7; b^l denotes the offset of the network function between layers l-1 and l, shown as solid lines in fig. 7; j denotes a node index within a layer; i denotes a dimension index of the softmax layer's output node vector (i.e., the probability vector); M1 denotes the total dimension of the softmax layer's output node vector, e.g., M1 in this embodiment equals the total number of equipment classes.
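Equations (1)-(3) can be sketched as a plain forward pass. A sigmoid is assumed here for the hidden activation, which the patent does not specify, and the softmax is stabilized by subtracting the maximum before exponentiation:

```python
import math

def forward(x, weights, biases):
    """Forward pass for equations (1)-(3): z^l = w^l a^{l-1} + b^l through the
    hidden layers with a sigmoid activation, then softmax on the final layer."""
    a = x
    for l, (w, b) in enumerate(zip(weights, biases)):
        z = [sum(wi * ai for wi, ai in zip(row, a)) + bi          # eq. (1)
             for row, bi in zip(w, b)]
        if l < len(weights) - 1:
            a = [1.0 / (1.0 + math.exp(-zi)) for zi in z]         # eq. (2)
        else:
            m = max(z)                                            # stability shift
            e = [math.exp(zi - m) for zi in z]
            s = sum(e)
            a = [ei / s for ei in e]                              # eq. (3)
    return a

# Tiny 2-2-2 network: the output is a probability vector over 2 "equipment" classes
y = forward([1.0, 2.0],
            [[[0.5, -0.2], [0.1, 0.3]], [[1.0, 0.0], [0.0, 1.0]]],
            [[0.0, 0.0], [0.0, 0.0]])
# sum(y) == 1.0 (up to rounding)
```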
In this embodiment, the sample features may be used as input and the sample labels as output, and the gradient descent method may be used to train the parameters of the model automatically. Gradient descent is a convex optimization method: standing on a peak and ignoring other factors, the fastest way to the bottom of the mountain is to always walk in the steepest direction. That is the core idea of gradient descent: it gradually approaches the minimum of the function by stepping each time in the direction of the current gradient (the steepest direction). Assume that in the nth iteration:
θ_n = θ_{n-1} + Δθ  (4)
the objective function can be expanded at θ_{n-1} with a first-order Taylor expansion:
L(θ_n) = L(θ_{n-1} + Δθ) ≈ L(θ_{n-1}) + L′(θ_{n-1})Δθ  (5)
to make L(θ) tend to a smaller value after each iteration, the following condition may be imposed:
Δθ = −αL′(θ_{n-1}), α > 0,  so that  L(θ_n) = L(θ_{n-1}) − αL′(θ_{n-1})² ≤ L(θ_{n-1})  (6)
thus, the iterative approach can be formulated as follows:
θ_n = θ_{n-1} − αL′(θ_{n-1})  (7)
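The iteration in equation (7) can be sketched in one dimension; the quadratic objective below is an illustrative stand-in for the network's loss, whose real training would backpropagate gradients through all layers:

```python
def gradient_descent(grad, theta0, alpha=0.1, steps=100):
    """Iterate theta_n = theta_{n-1} - alpha * L'(theta_{n-1}), equation (7)."""
    theta = theta0
    for _ in range(steps):
        theta = theta - alpha * grad(theta)  # step against the gradient
    return theta

# Minimize L(theta) = (theta - 3)^2, whose gradient is 2 * (theta - 3)
theta_min = gradient_descent(lambda t: 2 * (t - 3), theta0=0.0)
# theta_min converges toward 3
```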
the embodiment can dynamically provide the package output reference for the player or AI in real time according to situations, and is more flexible than the traditional fixed package output recommendation; the data driving mode is adopted, so that the prior knowledge of a programmer is not required to be excessive, the recommended content is more consistent with the optimal recommendation in the statistical sense theoretically, and the loading and unloading are more intelligent than the traditional recommendation; the method of the embodiment has portability, and can be applied to all games containing equipment prop purchase, and relevant data information of a player can be extracted from a game video to train a probability model; and then the trained probability model is used in a real-time game, real-time situation information in the game is extracted and input into the model, and equipment recommendation is provided for a player or an AI according to the model output, so that the technical problem of low efficiency of prompting virtual articles is solved, and the technical effect of improving the efficiency of prompting the virtual articles is achieved.
An embodiment of the present invention also provides a virtual article prompting device. It should be noted that the virtual article prompting device in this embodiment can be used to execute the virtual article prompting method shown in fig. 2 of the embodiment of the present invention.
Fig. 8 is a schematic diagram of a prompting device for a virtual article according to an embodiment of the invention. As shown in fig. 8, the presentation device 80 for virtual articles includes: an acquisition unit 81, a determination unit 82, an analysis unit 83 and a recommendation unit 84.
The acquiring unit 81 is configured to acquire first situation information of a virtual game character, where the first situation information is situation information of a current round.
A determining unit 82, configured to determine a neural network model corresponding to the virtual game character, where the neural network model is trained using multiple sets of sample data, and each set of sample data in the multiple sets includes: second situation information, virtual article information, and a correspondence between the second situation information and the virtual article information, where the second situation information is situation information of a historical round and the virtual article information is a historical virtual article used by the virtual game character in the historical round.
An analysis unit 83, configured to analyze the first situation information based on the neural network model to obtain a target virtual article.
A recommending unit 84, configured to prompt the target virtual article to the virtual game character.
The virtual article prompting device of this embodiment establishes the neural network model from the multiple groups of sample data of the virtual game character, and can input the dynamically changing first situation information into the neural network model for processing, thereby recommending the target virtual article to the virtual object dynamically and in real time. The target virtual article is thus prompted more reasonably, solving the technical problem of low efficiency in prompting virtual articles and achieving the technical effect of improving that efficiency.
Embodiments of the present invention also provide a storage medium having a computer program stored therein, where the computer program, when executed by a processor, controls an apparatus in which the storage medium is located to perform the steps in any of the above-mentioned method embodiments.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Alternatively, they may be implemented in program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device; in some cases the steps shown or described may be performed in an order different from that described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (14)

1. A method for prompting a virtual article, comprising:
acquiring first situation information of a virtual game role, wherein the first situation information is situation information of a current round;
determining a neural network model corresponding to the virtual game role, wherein the neural network model is trained by using multiple groups of sample data, and each group of sample data in the multiple groups of sample data comprises: second situation information, virtual article information and a corresponding relation between the second situation information and the virtual article information, wherein the second situation information is situation information of a historical round, and the virtual article information is a historical virtual article used by the virtual game role in the historical round;
analyzing the first situation information based on the neural network model to obtain a target virtual article;
and prompting the target virtual article for the virtual game role.
2. The method of claim 1, wherein analyzing the first situation information based on the neural network model to obtain a target virtual article comprises:
converting the first situation information into a first one-hot encoding;
converting the first one-hot encoding based on the neural network model to obtain a plurality of probabilities, wherein each probability is used for indicating the possibility of recommending a corresponding virtual article to the virtual game role;
and determining the virtual article corresponding to the maximum probability in the plurality of probabilities as the target virtual article.
3. The method of claim 1, wherein the first situation information comprises at least one of:
a virtual item that the virtual game character has selected in the current round;
character information of the virtual game character in the current round;
role information of other virtual game characters associated with the virtual game character in the current round.
4. The method of claim 1, wherein prior to determining the neural network model corresponding to the virtual game character, the method further comprises:
acquiring a video of the historical turn;
and sampling the video to obtain the second situation information and the virtual article information.
5. The method of claim 4, wherein sampling the video to obtain the second situation information and the virtual item information comprises:
sampling the video according to a target time interval to obtain a plurality of sampling points;
determining second situation information corresponding to each sampling point as the second situation information;
and determining the virtual article selected by the virtual game character after each sampling point as the virtual article information.
6. The method of claim 5, wherein the second situation information comprises at least one of:
virtual items that the virtual game character has selected in the historical rounds;
character information of the virtual game character in the historical round;
role information of other virtual game characters associated with the virtual game character in the historical round.
7. The method of claim 1, wherein prior to determining the neural network model corresponding to the virtual game character, the method further comprises:
and taking the second situation information as the input of an initial neural network model, taking the virtual article information as the output of the initial neural network model, and performing deep learning training on the initial neural network model to obtain the neural network model.
8. The method of claim 7, wherein using the second situation information as an input to an initial neural network model comprises:
converting the second situation information into a second one-hot encoding;
determining the second one-hot encoding as an input feature of the initial neural network model.
9. The method of claim 7, wherein using the virtual item information as an output of the initial neural network model comprises:
converting the virtual article information into a third one-hot encoding;
determining the third one-hot encoding as an output label of the initial neural network model.
10. The method of claim 7, wherein performing deep learning training on the initial neural network model to obtain the neural network model comprises:
and carrying out gradient descent training on the parameters of the initial neural network model to obtain the neural network model.
11. The method of any one of claims 1 to 10, wherein acquiring the first situation information of the current round comprises:
acquiring the first situation information from a target server.
12. An apparatus for presenting a virtual object, comprising:
the virtual game system comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for acquiring first situation information of a virtual game role, and the first situation information is situation information of a current round;
a determining unit, configured to determine a neural network model corresponding to the virtual game character, where the neural network model is trained using multiple sets of sample data, and each set of sample data in the multiple sets of sample data includes: second situation information, virtual article information and a corresponding relation between the second situation information and the virtual article information, wherein the second situation information is situation information of a historical round, and the virtual article information is a historical virtual article used by the virtual game role in the historical round;
the analysis unit is used for analyzing the first situation information based on the neural network model to obtain a target virtual article;
and the recommending unit is used for prompting the target virtual article to the virtual game role.
13. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, controls an apparatus in which the storage medium is located to perform the method of any of claims 1 to 11.
14. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 11.
CN202010574847.6A 2020-06-22 2020-06-22 Virtual article prompting method and device, storage medium and electronic device Active CN111701240B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010574847.6A CN111701240B (en) 2020-06-22 2020-06-22 Virtual article prompting method and device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN111701240A true CN111701240A (en) 2020-09-25
CN111701240B CN111701240B (en) 2023-04-25

Family

ID=72541629

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010574847.6A Active CN111701240B (en) 2020-06-22 2020-06-22 Virtual article prompting method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN111701240B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113546428A (en) * 2021-07-27 2021-10-26 网易(杭州)网络有限公司 Method and device for recommending props in game, electronic equipment and storage medium
CN113730910A (en) * 2021-09-02 2021-12-03 网易(杭州)网络有限公司 Method and device for processing virtual equipment in game and electronic equipment
CN115445207A (en) * 2022-09-22 2022-12-09 武汉瓯越网视有限公司 Game resource pushing method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150050999A1 (en) * 2012-03-19 2015-02-19 Neowiz Bless Studio Corporation Method for providing an online game enabling the user to change the shape of an item and a system thereof
CN108579090A (en) * 2018-04-16 2018-09-28 腾讯科技(深圳)有限公司 Article display method, apparatus in virtual scene and storage medium
CN110807150A (en) * 2019-10-14 2020-02-18 腾讯科技(深圳)有限公司 Information processing method and device, electronic equipment and computer readable storage medium
CN110841296A (en) * 2019-11-12 2020-02-28 网易(杭州)网络有限公司 Game character skill generation method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111701240B (en) 2023-04-25

Similar Documents

Publication Publication Date Title
Wu et al. Training agent for first-person shooter game with actor-critic curriculum learning
Vinyals et al. Starcraft ii: A new challenge for reinforcement learning
US20230381624A1 (en) Method and System for Interactive, Interpretable, and Improved Match and Player Performance Predictions in Team Sports
Tian et al. Elf opengo: An analysis and open reimplementation of alphazero
CN111701240A (en) Virtual article prompting method and device, storage medium and electronic device
CN109513215B (en) Object matching method, model training method and server
JP7399277B2 (en) Information processing methods, devices, computer programs and electronic devices
CN107970608A (en) The method to set up and device, storage medium, electronic device of outpost of the tax office game
CN111738294B (en) AI model training method, AI model using method, computer device, and storage medium
CN107335220B (en) Negative user identification method and device and server
KR102186949B1 (en) Method and apparatus for predicting result of game
Chen et al. Which heroes to pick? learning to draft in moba games with neural networks and tree search
CN110170171A (en) A kind of control method and device of target object
CN114392560B (en) Method, device, equipment and storage medium for processing running data of virtual scene
CN113343089A (en) User recall method, device and equipment
CN113230650B (en) Data processing method and device and computer readable storage medium
CN110941769B (en) Target account determination method and device and electronic device
KR101962269B1 (en) Apparatus and method of evaluation for game
CN115944921B (en) Game data processing method, device, equipment and medium
CN111652673A (en) Intelligent recommendation method, device, server and storage medium
CN116943220A (en) Game artificial intelligence control method, device, equipment and storage medium
CN117883788B (en) Intelligent body training method, game fight method and device and electronic equipment
KR102365620B1 (en) Story controlling apparatus and method for game using emotion expressions
CN114254260B (en) Method, device, equipment and storage medium for mining unbalanced data group in game
US20140357377A1 (en) Method and system for providing online sports game for recommending squad

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant