CN109821244B - Game data processing method, game data processing device, storage medium and electronic device


Info

Publication number
CN109821244B
CN109821244B (application CN201910054620.6A)
Authority
CN
China
Prior art keywords
target
game
result
behavior data
reward
Prior art date
Legal status
Active
Application number
CN201910054620.6A
Other languages
Chinese (zh)
Other versions
CN109821244A (en)
Inventor
陶建容
李浩
巩琳霞
冯潞潞
沈乔治
范长杰
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN201910054620.6A
Publication of CN109821244A
Application granted
Publication of CN109821244B

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a game data processing method, a game data processing device, a storage medium and an electronic device. The method comprises the following steps: predicting a first target probability that a fighting operation between a first game lineup and a second game lineup produces a first fighting result, and acquiring a basic index corresponding to the first target probability; determining a target contribution rate according to target behavior data generated when a first target object performs the fighting operation in the current game round, wherein the first target object is any object in the first game lineup, the target contribution rate is the proportion of the contribution the first target object makes to a second fighting result, and the second fighting result is the result obtained when the first game lineup and the second game lineup perform the fighting operation in the current game round; and determining a reward and punishment result for the first target object based on the basic index and the target contribution rate. The invention thereby achieves the effect of differentiated rewards and punishments for target objects in battle games.

Description

Game data processing method, game data processing device, storage medium and electronic device
Technical Field
The invention relates to the field of data processing, and in particular to a game data processing method, a game data processing device, a storage medium and an electronic device.
Background
At present, in a game scene, there is mutual competition and cooperation between objects corresponding to players, and it is necessary to evaluate the ability of the objects corresponding to the players.
When determining the capability difference between the objects corresponding to players, the determination may be performed through manual rules; for example, persons with rich game experience, such as game planners, define the relevant rules, specifying that when each game ends the winner obtains a corresponding reward and the loser receives a corresponding punishment. Thus, at the end of each game, the rewards and punishments of the different players on the winning or losing side are the same. If the two sides are matched with a great difference in strength, for example, party A is much weaker than party B, party A is unlikely to win, and deducting too many points from party A when it loses is unreasonable.
Because each player on the winning or losing side has different performance and ability, if the rewards and punishments of target objects in a battle game are not differentiated, all players on the winning side obtain the same reward and all players on the losing side receive the same punishment, which is unfair and unreasonable to players with strong ability and good performance.
For the problem in the related art that rewards and punishments of target objects in battle games are not differentiated, no effective solution has been proposed at present.
Disclosure of Invention
The invention mainly aims to provide a game data processing method, a game data processing device, a storage medium and an electronic device, so as to at least solve the technical problem that rewards and punishments of target objects in battle games are not differentiated.
In order to achieve the above object, according to one aspect of the present invention, a game data processing method is provided. The method comprises the following steps: predicting a first target probability that a fighting operation between a first game lineup and a second game lineup produces a first fighting result, and obtaining a basic index corresponding to the first target probability, wherein the basic index is used for adjusting the reward and punishment result obtained by any object in the first game lineup through the fighting operation; determining a target contribution rate according to target behavior data generated when a first target object performs the fighting operation in the current game round, wherein the first target object is any object in the first game lineup, the target contribution rate is the proportion of the contribution the first target object makes to a second fighting result, and the second fighting result is the result obtained when the first game lineup and the second game lineup perform the fighting operation in the current game round; and determining a reward and punishment result for the first target object based on the basic index and the target contribution rate.
In an optional embodiment, when the second fighting result is a winning result, the first target object is rewarded according to the reward and punishment result, and when the second fighting result is a losing result, the first target object is punished according to the reward and punishment result.
In an alternative embodiment, before predicting the first target probability that the fighting operation between the first game lineup and the second game lineup produces the first fighting result, the method further comprises: acquiring first historical behavior data generated by the first game lineup performing fighting operations in historical games, second historical behavior data generated by the second game lineup performing fighting operations in the historical games, and historical fighting results generated by the first game lineup and the second game lineup performing fighting operations in the historical games; and training a first target model with the first historical behavior data, the second historical behavior data and the historical fighting results. Predicting the first target probability that the fighting operation between the first game lineup and the second game lineup produces the first fighting result comprises: acquiring first current behavior data generated by the first game lineup performing the fighting operation in the current game round and second current behavior data generated by the second game lineup performing the fighting operation in the current game round; and inputting the first current behavior data and the second current behavior data into the trained first target model to obtain the first target probability of the first fighting result.
In an alternative embodiment, determining the target contribution rate according to the target behavior data generated when the first target object performs the fighting operation in the current game round includes: acquiring a plurality of first objects in the first game lineup and a plurality of second objects in the second game lineup, wherein the plurality of first objects include the first target object; randomly selecting a first target number of second target objects from the plurality of first objects and the plurality of second objects excluding the first target object, matching the selected second target objects as teammates of the first target object, and matching the third target objects, namely those of the plurality of first objects and the plurality of second objects other than the first target object and the second target objects, as opponents of the first target object; inputting the target behavior data of the first target object, the target behavior data of the second target objects and the target behavior data of the third target objects into the trained first target model to obtain a second target probability of the first fighting result; and determining the second target probability as the target contribution rate.
In an alternative embodiment, after randomly selecting the first target number of second target objects, the method further comprises: determining the plurality of target results of randomly selecting the first target number of second target objects; and acquiring the second target probability under each target result, obtaining a second target number of second target probabilities, wherein the second target number is the number of the plurality of target results. Determining the second target probability as the target contribution rate comprises: averaging the second target number of second target probabilities to obtain the target contribution rate.
In an optional embodiment, before determining the target contribution rate according to the target behavior data generated when the first target object performs the fighting operation in the current game round, the method further includes: acquiring interactive behavior data generated when the first game lineup performs the fighting operation, wherein the target behavior data include the interactive behavior data; and establishing a second target model based on the interactive behavior data, wherein a node in the second target model indicates an object in the first game lineup, and a path in the second target model indicates that interactive behavior occurs between the objects corresponding to the nodes on the path. Determining the target contribution rate according to the target behavior data generated when the first target object performs the fighting operation in the current game round comprises: in the second target model, acquiring a third target number of first target paths and a fourth target number of second target paths, wherein a first target path passes through both a first node corresponding to the first target object and a second node corresponding to an object in the first game lineup other than the first target object, and a second target path passes through the second node; and determining the ratio of the third target number to the fourth target number as the target contribution rate.
In an optional embodiment, before determining the target contribution rate according to the target behavior data generated when the first target object performs the fighting operation in the current game round, the method further includes: acquiring historical behavior data generated by the first game lineup and the second game lineup performing fighting operations in historical games; clustering the historical behavior data to obtain multiple types of sub-historical behavior data; obtaining the historical contribution rate corresponding to each type of sub-historical behavior data, wherein the historical contribution rate is the proportion of the contribution that type of sub-historical behavior data makes to the second fighting result; and training a third target model with the multiple types and the historical contribution rates. Determining the target contribution rate according to the target behavior data generated when the first target object performs the fighting operation in the current game round includes: determining the target type of the target behavior data; and inputting the target type into the trained third target model to obtain the target contribution rate.
In an alternative embodiment, determining the target contribution rate according to the target behavior data generated when the first target object performs the fighting operation in the current game round includes: acquiring the contribution rates of a plurality of objects in the first game lineup, wherein the contribution rate of each object is the proportion of the contribution that object makes to the second fighting result; and normalizing the target contribution rate by the contribution rates of the plurality of objects to obtain the processed target contribution rate.
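As a minimal sketch of this normalization step (not part of the patent text), assuming the contribution rates are simply rescaled to sum to 1 across the lineup; the function name and the example values are illustrative:

```python
def normalize_contributions(rates):
    """Rescale per-object contribution rates so they sum to 1.

    `rates` maps each object in the lineup to its raw contribution
    rate. The patent does not fix the exact normalization, so a
    simple sum-to-one scheme is assumed here.
    """
    total = sum(rates.values())
    return {obj: r / total for obj, r in rates.items()}

# e.g. raw win-rate-based contributions of a three-player lineup
raw = {"player_1": 0.62, "player_2": 0.55, "player_3": 0.48}
print(normalize_contributions(raw))  # values now sum to 1.0
```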
In an optional embodiment, after determining the reward and punishment result of the first target object based on the basic index and the target contribution rate, the method further comprises: when the target behavior data meet a target condition, correcting the reward and punishment result with the correction value corresponding to the target condition.
In an alternative embodiment, before correcting the reward and punishment result with the correction value corresponding to the target condition, the method further comprises at least one of: determining that the target behavior data meet the target condition when the target behavior data enable the first target object to obtain the second fighting result consecutively for at least a target number of times; and determining that the target behavior data meet the target condition when the target behavior data indicate that a target behavior occurs in the current game round.
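A hedged sketch of this correction step; the streak threshold and the correction values below are illustrative assumptions, since the patent only states that each target condition has a corresponding correction value:

```python
def apply_correction(score, win_streak, target_behavior_occurred,
                     streak_threshold=3, streak_bonus=5, behavior_bonus=2):
    """Adjust a reward and punishment score when target conditions hold.

    streak_threshold, streak_bonus and behavior_bonus are assumed
    values for illustration only.
    """
    if win_streak >= streak_threshold:    # consecutive-wins condition
        score += streak_bonus
    if target_behavior_occurred:          # target behavior in this game round
        score += behavior_bonus
    return score

print(apply_correction(score=10, win_streak=4, target_behavior_occurred=True))  # 17
```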
In an optional embodiment, after determining the reward and punishment result of the first target object based on the basic index and the target contribution rate, the method further comprises: when the current game round is in the grading stage, determining the product of the reward and punishment result and a target coefficient as the target reward and punishment result of the first target object, wherein the target coefficient increases as the number of wins or the number of losses of the first target object increases.
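A small sketch of the grading-stage scaling described above, assuming a target coefficient that grows linearly with the win (or loss) count; the growth rule and step size are assumptions for illustration:

```python
def grading_stage_score(base_result, wins_or_losses, step=0.1):
    """Product of the reward and punishment result and a target
    coefficient that increases with the number of wins or losses."""
    target_coefficient = 1.0 + step * wins_or_losses
    return base_result * target_coefficient

print(grading_stage_score(base_result=20, wins_or_losses=5))  # 30.0
```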
In an alternative embodiment, obtaining the basic index corresponding to the first target probability includes: when the second fighting result is a winning result, acquiring a first basic index corresponding to the first target probability, wherein the larger the first target probability is, the smaller the first basic index is; and when the second fighting result is a losing result, acquiring a second basic index corresponding to the first target probability, wherein the larger the first target probability is, the larger the second basic index is.
In an alternative embodiment, obtaining the basic index corresponding to the first target probability includes: obtaining a basic score corresponding to the first target probability, wherein the basic score is used for adjusting the reward and punishment score obtained by any object in the first game lineup through the fighting operation. Determining the reward and punishment result of the first target object based on the basic index and the target contribution rate comprises: determining a reward and punishment score for the first target object based on the basic score and the target contribution rate, wherein the reward and punishment score is added to the original score of the first target object when the second fighting result is a winning result, and is subtracted from the original score of the first target object when the second fighting result is a losing result.
In order to achieve the above object, according to another aspect of the present invention, a game data processing apparatus is also provided. The apparatus includes: a processing unit, configured to predict a first target probability that a fighting operation between a first game lineup and a second game lineup produces a first fighting result, and to obtain a basic index corresponding to the first target probability, wherein the basic index is used for adjusting the reward and punishment result obtained by any object in the first game lineup through the fighting operation; a first determining unit, configured to determine a target contribution rate according to target behavior data generated when a first target object performs the fighting operation in the current game round, wherein the first target object is any object in the first game lineup, the target contribution rate is the proportion of the contribution the first target object makes to a second fighting result, and the second fighting result is the result obtained when the first game lineup and the second game lineup perform the fighting operation in the current game round; and a second determining unit, configured to determine a reward and punishment result for the first target object based on the basic index and the target contribution rate, reward the first target object according to the reward and punishment result when the second fighting result is a winning result, and punish the first target object according to the reward and punishment result when the second fighting result is a losing result.
In order to achieve the above object, according to another aspect of the present invention, there is also provided a storage medium. The storage medium has stored therein a computer program, wherein the computer program is arranged to execute the game data processing method of the embodiment of the present invention when executed.
In order to achieve the above object, according to another aspect of the present invention, there is also provided an electronic device. The electronic device comprises a memory and a processor, and is characterized in that the memory stores a computer program, and the processor is configured to run the computer program to execute the game data processing method of the embodiment of the invention.
According to the invention, a first target probability that a fighting operation between a first game lineup and a second game lineup produces a first fighting result is predicted, and a basic index corresponding to the first target probability is obtained, wherein the basic index is used for adjusting the reward and punishment result obtained by any object in the first game lineup through the fighting operation; a target contribution rate is determined according to target behavior data generated when a first target object performs the fighting operation in the current game round, wherein the first target object is any object in the first game lineup, the target contribution rate is the proportion of the contribution the first target object makes to a second fighting result, and the second fighting result is the result obtained when the first game lineup and the second game lineup perform the fighting operation in the current game round; and a reward and punishment result for the first target object is determined based on the basic index and the target contribution rate, the first target object being rewarded according to the reward and punishment result when the second fighting result is a winning result and punished according to it when the second fighting result is a losing result. By determining the target probability of the fighting result and its corresponding basic index, determining the contribution rate of the target object in the current game round, and then deriving the reward and punishment result from the basic index and the contribution rate, differentiated rewards and punishments are applied to the target object, which solves the technical problem that rewards and punishments of target objects in battle games are not differentiated and achieves the technical effect of differentiating them.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a mobile terminal of a game data processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of game data processing according to an embodiment of the present invention;
FIG. 3 is a flow chart of a method for scoring player ability based on multi-modal performance according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a win-rate prediction model according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a graph-based scoring model according to an embodiment of the present invention; and
fig. 6 is a schematic diagram of a game data processing apparatus according to an embodiment of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the accompanying drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The method provided by the embodiment of the application can be executed in a mobile terminal, a computer terminal or a similar operation device. Taking the example of being operated on a mobile terminal, fig. 1 is a hardware configuration block diagram of a mobile terminal of a game data processing method according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal 10 may include one or more (only one shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 for storing data, and in alternative embodiments, may also include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration, and does not limit the structure of the mobile terminal. For example, the mobile terminal 10 may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 can be used for storing computer programs, for example, software programs and modules of application software, such as a computer program corresponding to a game data processing method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, so as to implement the above-mentioned method. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some instances, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
In the present embodiment, a game data processing method operating in the mobile terminal is provided, and fig. 2 is a flowchart of a game data processing method according to an embodiment of the present invention. As shown in fig. 2, the process includes the following steps:
Step S202, predicting a first target probability that a fighting operation between a first game lineup and a second game lineup produces a first fighting result, and acquiring a basic index corresponding to the first target probability.
In the technical solution provided in step S202 of the present invention, a first target probability that a fighting operation between the first game lineup and the second game lineup produces a first fighting result is predicted, and a basic index corresponding to the first target probability is obtained, where the basic index is used to adjust the reward and punishment result obtained by any object in the first game lineup through the fighting operation.
In this embodiment, the target game scene may be a battle-type game scene, for example, a massively multiplayer online role-playing game (MMORPG) scene, a sports game (SPG) scene, a multiplayer online battle arena (MOBA) game scene, and the like.
In the target game scene, the plurality of objects in the first game lineup cooperate with one another, the plurality of objects in the second game lineup cooperate with one another, and the objects in the first game lineup compete with the objects in the second game lineup. Due to differences in game players' abilities, such as game operations and awareness, the objects in the first game lineup and in the second game lineup perform differently in the target game scene, and their ability differences can be identified from their performance in the corresponding play modes or matches.
This embodiment predicts the first target probability that the fighting operation between the first game lineup and the second game lineup produces the first fighting result; the first fighting result is the predicted win-or-lose result of the fight, and the first target probability can be used to estimate the strength gap between the first game lineup and the second game lineup. When the first fighting result is a winning result, the first target probability is a win rate, that is, it represents the likelihood that the first game lineup defeats the second game lineup; when the first fighting result is a losing result, the first target probability is a loss rate, that is, it represents the likelihood that the first game lineup loses to the second game lineup.
After the first target probability of the first fighting result is obtained by predicting the fighting operation between the first game lineup and the second game lineup, the basic index corresponding to the first target probability is obtained. The basic index is used for adjusting the reward and punishment result obtained by any object in the first game lineup through the fighting operation, and may be the basic score added or subtracted for winning or losing the match between the first game lineup and the second game lineup, namely the plus-minus basic score base_score. The basic index can be determined from the first target probability and a reward and punishment coefficient, where the reward and punishment coefficient is used for adjusting the size of the basic index and can be preset according to experience. When the basic index is the plus-minus basic score, the reward and punishment coefficient can be the plus-minus coefficient.
For example, the first game lineup is party A in the target game scene, the second game lineup is party B, the first fighting result is a winning result, and the first target probability is R (a value between 0 and 1), that is, the probability that party A defeats party B is R. The plus-minus basic score of party A, base_scoreA, can be calculated by the following formula:

base_scoreA = M * (1 - R) when party A defeats party B, and base_scoreA = M * R when party A loses to party B,

wherein M is the plus-minus coefficient.

Thus, when party A defeats party B, the plus-minus basic score base_scoreA is M(1-R), and when party A loses to party B, the plus-minus basic score base_scoreA is M·R. If R > 0.5, party A is the stronger side in this match and is expected to win, so its score increment is correspondingly reduced, and the larger R is, the smaller the increment; if party A loses, its score is reduced by more. If R < 0.5, party A is the weaker side and is expected to lose, so its score deduction is correspondingly reduced, and the smaller R is, the smaller the deduction; if party A wins, its score is increased by more.
In an alternative embodiment, the plus-minus basic score of party B is calculated in the same way as that of party A, and is not described here again.
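For illustration, the plus-minus basic score formula above can be written as a short Python sketch; the function name and the example values of M and R are illustrative and not part of the patent:

```python
def base_score(win_probability, m, won):
    """Plus-minus basic score per the formula above.

    win_probability: predicted probability R (between 0 and 1) that
                     the lineup wins the match.
    m:               plus-minus coefficient M, preset by experience.
    won:             whether the lineup actually won the match.
    """
    if won:
        # an expected winner (large R) gains fewer points
        return m * (1.0 - win_probability)
    # an expected winner that loses is deducted more points
    return m * win_probability

# A favourite with R = 0.8 and M = 30:
print(base_score(0.8, m=30, won=True))   # 6.0  (small gain)
print(base_score(0.8, m=30, won=False))  # 24.0 (large deduction)
```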
Step S204, determining a target contribution rate according to the target behavior data generated when the first target object performs the fighting operation in the current game round.
In the technical solution provided by step S204 of the present invention, after the basic index corresponding to the first target probability is obtained, a target contribution rate is determined according to target behavior data generated when the first target object performs the fighting operation in the current game round, where the first target object is any object in the first game lineup, the target contribution rate is the proportion of the contribution the first target object makes to a second fighting result, and the second fighting result is the result obtained when the first game lineup and the second game lineup perform the fighting operation in the current game round.
In this embodiment, the first game lineup includes a plurality of objects, and the first target object is any one of the plurality of objects, namely an object whose fighting capability in the target game scene is to be evaluated. The first target object may behave differently in the target game scene due to differences in its game operations, the awareness of the game player corresponding to it, and the like. Target behavior data generated when the first target object performs the fighting operation in the current game round are obtained, where the target behavior data are statistical data of the current game round and can be used to indicate the behavioral performance of the first target object in the current game round, for example, data related to assists, rebounds, steals, blocks and the like.
In this embodiment, the second fighting result is the result obtained when the first game lineup and the second game lineup perform the fighting operation in the current game round, and may be a winning result or a losing result. Since different objects in the target game scene have different fighting capabilities, their contributions to the second fighting result also differ. If the rewards and punishments of the plurality of objects in the first game lineup are not differentiated and all objects are treated equally, it is unfair to the game players corresponding to objects with strong ability and good performance. Therefore, this embodiment determines the target contribution rate, that is, the performance contribution rate or contribution degree, according to the target behavior data generated when the first target object performs the fighting operation in the current game round, where the target contribution rate is the proportion of the contribution the first target object makes to the second fighting result. For example, when the second fighting result is a winning result, the stronger the capability of the first target object, the higher the proportion of its contribution to the second fighting result; the weaker its capability, the lower that proportion.
In an alternative embodiment, the embodiment inputs the target behavior data into the target model to obtain the target contribution rate. The target model is used for determining the proportion of the contribution of the object to the competition result, and can be obtained by training historical behavior data and historical competition results.
The embodiment can determine the target contribution rate according to the target behavior data generated when the object corresponding to each player performs the fighting operation in the current game, thereby distinguishing the contribution rate of the object corresponding to each player in the current game.
Step S206, determining a reward and punishment result of the first target object based on the basic index and the target contribution rate.
In the technical solution provided in step S206 of the present invention, a reward and punishment result of the first target object is determined based on the basic index and the target contribution rate.
In this embodiment, since the objects corresponding to different players perform differently in the current game round, the rewards given to the objects on the winning side and the punishments given to the objects on the losing side should not all be the same; for example, the bonus points of the objects on the winning side and the deducted points of the objects on the losing side should not be identical. After the basic index corresponding to the first target probability and the target contribution rate are acquired, the reward and punishment result of the first target object is determined based on the basic index and the target contribution rate. The reward and punishment result is used for rewarding or punishing the first target object and may be, without limitation, points, health, gold coins, levels, holy water, weapons, and the like.
As an optional implementation, when the second fighting result is a winning result, the first target object is rewarded according to the reward and punishment result, and when the second fighting result is a losing result, the first target object is punished according to the reward and punishment result.
When the second fighting result is a winning result, the first target object is rewarded according to the reward and punishment result: when the target contribution rate of the first target object to the winning result is large, that is, the first target object performed better than the other objects in the current game round, it can be given a larger reward, and when its target contribution rate is small, that is, it performed worse than the other objects, it can be given a smaller reward. When the second fighting result is a losing result, when the target contribution rate of the first target object to the losing result is high, that is, the first target object performed worse than the other objects in the current game round, it can be given a heavier punishment, and when its target contribution rate to the losing result is low, that is, it performed well relative to the other objects, it can be given a lighter punishment.
The reward and punishment results of the other target objects in the first game lineup and the second game lineup can also be determined by the method for determining the reward and punishment result of the first target object, which is not described here again.
In the stable ranking stage, this embodiment gives reward and punishment results matched to the different abilities of the objects corresponding to the players, based on their different performances in the current game round, so that the ability rating of each player's object stabilizes more quickly at the level segment matching the player's actual ability, which makes matches fairer and more intense and improves the players' gaming experience.
This scheme determines the post-match reward and punishment results of the players according to the strength gap between the first game lineup and the second game lineup in the target game scene and the performance and ability of the object corresponding to each player in the current game round, so that the objects corresponding to different players obtain reasonable reward and punishment results after each match ends. Players of different abilities can thus be differentiated more quickly, each player's rating stabilizes faster at the level segment matching the player's own ability, the competition in every game is more intense, the players' sense of participation is stronger, and the gaming experience is better.
Through the above steps S202 to S206, a first target probability that a fighting operation between the first game lineup and the second game lineup produces a first fighting result is predicted, and a basic index corresponding to the first target probability is obtained, where the basic index is used to adjust the reward and punishment result obtained by any object in the first game lineup through the fighting operation; a target contribution rate is determined according to target behavior data generated when a first target object performs the fighting operation in the current game round, where the first target object is any object in the first game lineup, the target contribution rate is the proportion of the contribution the first target object makes to a second fighting result, and the second fighting result is the result obtained when the first game lineup and the second game lineup perform the fighting operation in the current game round; and a reward and punishment result for the first target object is determined based on the basic index and the target contribution rate, the first target object being rewarded according to the reward and punishment result when the second fighting result is a winning result and punished according to it when the second fighting result is a losing result. By determining the target probability of the fighting result, the basic index of that probability, and the contribution rate of the target object in the current game round, and then deriving the reward and punishment result from the basic index and the contribution rate, differentiated rewards and punishments are applied to the target object, avoiding unreasonable and unfair rewards and punishments, which solves the technical problem that rewards and punishments of target objects in battle games are not differentiated and achieves the technical effect of differentiating them.
As an alternative embodiment, before step S202 of predicting the first target probability that the fighting operation between the first game lineup and the second game lineup produces the first fighting result, the method further includes: acquiring first historical behavior data generated by the first game lineup performing fighting operations in historical games, second historical behavior data generated by the second game lineup performing fighting operations in the historical games, and historical fighting results generated by the first game lineup and the second game lineup performing fighting operations in the historical games; and training a first target model with the first historical behavior data, the second historical behavior data and the historical fighting results. Step S202 of predicting the first target probability that the fighting operation between the first game lineup and the second game lineup produces the first fighting result includes: acquiring first current behavior data generated by the first game lineup performing the fighting operation in the current game round and second current behavior data generated by the second game lineup performing the fighting operation in the current game round; and inputting the first current behavior data and the second current behavior data into the trained first target model to obtain the first target probability of the first fighting result.
In this embodiment, the probability of the first fighting result can be predicted by a trained model; it can be a win rate or a loss rate, and the loss rate can be obtained as (1 - win rate). Before predicting the first target probability that the fighting operation between the first game lineup and the second game lineup produces the first fighting result, first historical behavior data generated by the first game lineup performing fighting operations in historical games, second historical behavior data generated by the second game lineup performing fighting operations in the historical games, and the historical fighting results generated by the first game lineup and the second game lineup performing fighting operations in the historical games are acquired. The historical behavior data may be long-term and short-term behavior profile data indicating the long-term and short-term in-game performance of the object corresponding to a player. For example, in a basketball game, the behavior profile data include statistical indicators such as the average points, average rebounds, average assists and average steals of the object corresponding to the player over a number of basketball matches, where the period covered by the long-term profile data may be one month, that is, the statistical indicators over all of the player's matches within one month, and the period covered by the short-term behavior profile data may be three days, that is, the statistical indicators over the player's matches within three days.
After the first historical behavior data, the second historical behavior data and the historical fighting results are obtained, the first historical behavior data and the second historical behavior data are used as the input of the first target model and the historical fighting results as its output, and the first target model is trained; for example, when the first game lineup defeats the second game lineup, the output result is 1, and otherwise the output result is 0. The first target model may be a deep neural network model.
For example, the first target model of this embodiment is a win-rate prediction model and the battle game is a 3V3 basketball game, so there are six objects in total: the object corresponding to the current player whose performance score is to be calculated, namely the first target object, two teammates of the first target object, and three opponents of the first target object. The feature vector of each object may be composed of the long-term and short-term behavior profile data described above, one vector per object. The vectors of the two teammate objects are added and averaged to obtain a vector a, the vectors of the three opponent objects are added and averaged to obtain a vector b, and the vector of the first target object is then concatenated with the vectors a and b to obtain a vector c. For example, if the original vector of the first target object is 20-dimensional, then the vectors a and b are also 20-dimensional, and the vector c is 3 × 20 = 60-dimensional. The 60-dimensional vector is input into a deep neural network model whose output is the win-or-lose result of the first target object in the match: when the first target object wins the match, the output the model fits is 1, and otherwise 0. Training in this way yields the win-rate prediction model.
After the first target model is trained with the first historical behavior data, the second historical behavior data and the historical fighting results, when predicting the first target probability that the fighting operation between the first game lineup and the second game lineup produces the first fighting result, first current behavior data generated by the first game lineup performing the fighting operation in the current game round and second current behavior data generated by the second game lineup performing the fighting operation in the current game round can be acquired, where the first current behavior data may be the performance data of the first target object and its teammates in the current match and the second current behavior data may be the performance data of the first target object's opponents in the current match. The first current behavior data and the second current behavior data are used as the input of the first target model, and prediction through the first target model yields a value between 0 and 1, which is the first target probability of the first fighting result. In an alternative embodiment, when the first target model is the win-rate prediction model and the first fighting result is a winning result, the first target probability is the win rate of the first game lineup against the second game lineup.
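The feature construction and model shape for the 3V3 example can be sketched as follows; the patent specifies only a deep neural network over the 60-dimensional concatenated vector, so the PyTorch usage, layer sizes and random profile vectors here are illustrative assumptions:

```python
import numpy as np
import torch
import torch.nn as nn

def build_features(self_vec, teammate_vecs, opponent_vecs):
    """Concatenate [self, mean(teammates), mean(opponents)].

    With 20-dimensional per-object profile vectors, as in the 3V3
    example, the result is a 60-dimensional model input.
    """
    a = np.mean(teammate_vecs, axis=0)  # teammate average, vector a
    b = np.mean(opponent_vecs, axis=0)  # opponent average, vector b
    return np.concatenate([self_vec, a, b])

# A minimal feed-forward win-rate model; layer sizes are assumed.
model = nn.Sequential(
    nn.Linear(60, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 1), nn.Sigmoid(),  # output in [0, 1]: win probability
)

x = build_features(np.random.rand(20),
                   [np.random.rand(20) for _ in range(2)],  # two teammates
                   [np.random.rand(20) for _ in range(3)])  # three opponents
with torch.no_grad():
    win_rate = model(torch.tensor(x, dtype=torch.float32)).item()
print(win_rate)
```

During training, the output would be fitted to 1 when the first target object wins the match and 0 otherwise, for example with a binary cross-entropy loss.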
As an alternative implementation, step S204 of determining the target contribution rate according to the target behavior data generated when the first target object performs the fighting operation in the current game round includes: acquiring a plurality of first objects in the first game lineup and a plurality of second objects in the second game lineup, wherein the plurality of first objects include the first target object; randomly selecting a first target number of second target objects from the plurality of first objects and the plurality of second objects excluding the first target object, matching the selected second target objects as teammates of the first target object, and matching the third target objects, namely those of the plurality of first objects and the plurality of second objects other than the first target object and the second target objects, as opponents of the first target object; inputting the target behavior data of the first target object, the target behavior data of the second target objects and the target behavior data of the third target objects into the trained first target model to obtain a second target probability of the first fighting result; and determining the second target probability as the target contribution rate.
In this implementation, the behavior data of the object corresponding to each player in the current game round can be obtained from that object's behavioral performance in the current game round; the target behavior data are likewise statistical data and statistical indicators, such as points, assists and rebounds. The win rate of a lineup containing the object corresponding to a good player is generally higher, so when determining the target contribution rate of the first target object, this embodiment can repeatedly calculate the win rate of the first target object under different hypothetical team matchings.
A plurality of first objects in the first game lineup and a plurality of second objects in the second game lineup are acquired, where the plurality of first objects include the first target object; the teammates and opponents of the first target object in the current game round have been determined, and the target behavior data and match result of each object are also determined.
When calculating the target contribution rate of the first target object, a first target number of second target objects are randomly selected from the plurality of first objects and the plurality of second objects excluding the first target object to form teammates with it, and the third target objects, namely those of the plurality of first objects and the plurality of second objects other than the first target object and the second target objects, serve as its opponents; in an alternative embodiment, the first target number plus 1 equals the number of third target objects. After the plurality of first objects in the first game lineup and the plurality of second objects in the second game lineup are obtained, the target behavior data of the first target object, of the second target objects and of the third target objects are input into the trained first target model, yielding the second target probability of the first fighting result under this randomly determined matching; the second target probability is thus a probability determined by randomly selecting the first target number of second target objects to form teammates with the first target object. The second target probability can be determined as the target contribution rate, which achieves the goal of determining the target contribution rate according to the target behavior data generated when the first target object performs the fighting operation in the current game round. The reward and punishment result of the first target object is then determined based on the basic index and the target contribution rate: when the second fighting result is a winning result the first target object is rewarded according to the reward and punishment result, and when the second fighting result is a losing result it is punished according to the reward and punishment result, thereby achieving the effect of differentiating the rewards and punishments of target objects in battle games.
As an optional implementation, after randomly selecting the first target number of second target objects, the method further includes: determining the plurality of target results of randomly selecting the first target number of second target objects; and acquiring the second target probability under each target result, obtaining a second target number of second target probabilities, where the second target number is the number of the plurality of target results. Determining the second target probability as the target contribution rate comprises: averaging the second target number of second target probabilities to obtain the target contribution rate.
In this embodiment, randomly selecting the first target number of second target objects to form teammates with the first target object yields C(X, Y) = X! / (Y! (X - Y)!) possible target results, where X represents the number of the plurality of first objects and the plurality of second objects other than the first target object, and Y represents the first target number; in an alternative embodiment, 2(Y + 1) = X + 1.

Each target result expresses one way of matching teammates with the first target object. Under each target result, the target behavior data of the first target object, of the second target objects and of the third target objects are input into the trained first target model to obtain the second target probability of the first fighting result, giving one second target probability per target result, that is, C(X, Y) second target probabilities in total. Therefore, when determining the second target probability as the target contribution rate, the second target number of second target probabilities are averaged, and the resulting average probability is determined as the target contribution rate.

For example, in the current game round, the objects corresponding to players No. 1 to No. 6 participate in the match; the teammates and opponents of the object corresponding to player No. 1 in the current game round are determined, and the statistical data and match result of each player are also determined. When calculating the target contribution rate of the object corresponding to player No. 1, two objects are randomly selected from the remaining five objects to form teammates with it, the remaining three objects serve as opponents, and the statistical data are input into the win-rate prediction model to obtain the win rate under this hypothetical matching. Since there are C(5, 2) = 10 such selections in total, 10 win rates can be calculated for player No. 1, and the average of these 10 win rates is taken as the target contribution rate of the object corresponding to player No. 1. The target contribution rates of the objects corresponding to the other players can be obtained in the same way.
Through the above method, the target contribution rates of the objects other than the first target object in the first game lineup and the second game lineup can also be obtained, so that the contribution degree of each object in the current game round can be differentiated.
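A compact sketch of this averaging procedure; `predict_win_rate` stands in for the trained win-rate prediction model, and its interface is an assumption:

```python
from itertools import combinations
from statistics import mean

def contribution_rate(target, others, predict_win_rate, team_size=3):
    """Average the predicted win rate of `target` over every way of
    drawing (team_size - 1) teammates from the remaining objects.

    predict_win_rate(target, teammates, opponents) is assumed to
    return the model's predicted win rate for that matching.
    """
    rates = []
    for teammates in combinations(others, team_size - 1):
        opponents = [o for o in others if o not in teammates]
        rates.append(predict_win_rate(target, list(teammates), opponents))
    return mean(rates)  # e.g. the mean of C(5, 2) = 10 win rates in 3V3
```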
As an alternative implementation, before step S204 of determining the target contribution rate according to the target behavior data generated when the first target object performs the fighting operation in the current game round, the method further includes: acquiring interactive behavior data generated when the first game lineup performs the fighting operation, where the target behavior data include the interactive behavior data; and establishing a second target model based on the interactive behavior data, where a node in the second target model indicates an object in the first game lineup and a path in the second target model indicates that interactive behavior occurs between the objects corresponding to the nodes on the path. Step S204 of determining the target contribution rate according to the target behavior data generated when the first target object performs the fighting operation in the current game round includes: in the second target model, acquiring a third target number of first target paths and a fourth target number of second target paths, where a first target path passes through both a first node corresponding to the first target object and a second node corresponding to an object in the first game lineup other than the first target object, and a second target path passes through the second node; and determining the ratio of the third target number to the fourth target number as the target contribution rate.
In this embodiment, the target game scene is a battle game scene, which may involve the behavior interaction of a team, and the target contribution rate of the first target object in the current game may be determined using a second target model, which may be an interaction graph model, to express the interaction behavior between the objects.
Before the target contribution rate is determined according to the target behavior data generated when the first target object performs the fighting operation in the current game round, interactive behavior data generated when the first game formation performs the fighting operation are acquired; the interactive behavior data may be pass-behavior data. The second target model is established based on the interactive behavior data: the objects in the first game formation serve as nodes in the second target model, and when an interactive behavior occurs between two objects, the nodes corresponding to the two objects are connected by a line to form a path. The path is directional, and its direction expresses the direction of the behavior interaction, for example, the direction in which the ball is passed. The second target model of the game round can be established in this way. After the second target model is built, the importance of each node in the model can be calculated.
When the target contribution rate is determined according to the target behavior data generated when the first target object performs the fighting operation in the current game round, a third target number of first target paths and a fourth target number of second target paths are acquired in the second target model. Each first target path passes through the first node corresponding to the first target object and the second nodes corresponding to the objects in the first game formation other than the first target object. For example, the node of the first target object in the second target model is denoted by the C point, and the nodes of the other objects in the first game formation are denoted by the PF and PG points: the first target paths are the paths between the PF and PG points that pass through the C point, and the third target number, denoted num(d*), is the number of such paths; the second target paths are all the paths between the PF and PG points, and the fourth target number, denoted num(d), is their total number. If most of the paths between the PF and PG points pass through C, that is, if most of the ball flow between PF and PG passes through C, the first target object (the center) corresponding to C is determined to be important.
After the third target number of first target paths and the fourth target number of second target paths are obtained, the ratio of the third target number to the fourth target number is determined as the target contribution rate; for example, r_C = num(d*)/num(d) is determined as the target contribution rate of the first target object.
The embodiment can also obtain the target contribution rate of other objects in the first game play by the method, thereby distinguishing the contribution degree of each object in the current game play.
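The path-ratio computation might be sketched as follows (hypothetical names; a hand-rolled enumeration of simple paths over the pass graph rather than any particular library's centrality measure):

```python
def simple_paths(graph, src, dst, path=None):
    """Enumerate all simple (cycle-free) directed paths from src to dst."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in graph.get(src, []):
        if nxt not in path:
            yield from simple_paths(graph, nxt, dst, path)

def flow_contribution(graph, node, src, dst):
    """r = num(d*) / num(d): the share of src->dst paths passing through
    `node` as an intermediate step."""
    all_paths = list(simple_paths(graph, src, dst))
    through = [p for p in all_paths if node in p[1:-1]]
    return len(through) / len(all_paths) if all_paths else 0.0

# Directed pass graph for one formation: PF, PG and the center C.
passes = {"PF": ["C", "PG"], "C": ["PG", "PF"], "PG": ["C"]}
print(flow_contribution(passes, "C", "PF", "PG"))  # 0.5 for this toy graph
```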
As an alternative implementation manner, before determining the target contribution rate according to the target behavior data generated when the first target object performs the fighting operation in the current game round in step S204, the method further includes: acquiring historical behavior data generated by the fighting operations of the first game formation and the second game formation in historical game rounds; clustering the historical behavior data to obtain multiple types of sub-historical behavior data; obtaining the historical contribution rate corresponding to each type of sub-historical behavior data, wherein the historical contribution rate is the proportion of the contribution made by that type of sub-historical behavior data to the second fight result; and training a third target model through the multiple types and the historical contribution rates. In step S204, determining the target contribution rate according to the target behavior data generated when the first target object performs the fighting operation in the current game round includes: determining a target type of the target behavior data; and inputting the target type into the trained third target model to obtain the target contribution rate.
In this embodiment, different professions exist in the target game scene, and the evaluation criteria of the different professions differ; for example, in a basketball game scene, the center is mainly responsible for grabbing rebounds and does not score particularly much, while the shooting guard is mainly responsible for three-point shooting and scores a lot. Therefore, when the reward and punishment result is determined, it should be determined according to the standards of the different professions, so that the reward and punishment result is fair.
In order to make the reward and punishment results more consistent with people's subjective evaluation, experienced players or planners can be asked to score the performance of the objects corresponding to the players in a match, which amounts to obtaining expert knowledge; a supervised model is then used to learn the expert knowledge, and finally the supervised model can be used for scoring. However, hundreds of thousands or even millions of matches are usually played in the game, so the objects corresponding to similar players are grouped into one class by clustering, and the class as a whole is then scored. Because the behaviors of the objects corresponding to players in the same class tend to be similar, the scores they obtain are also similar; this greatly reduces the labeling workload of the planners and makes the labels more accurate.
In this embodiment, historical behavior data generated by the fighting operations of the first game formation and the second game formation in historical game rounds are acquired; the historical behavior data are also the historical competition statistics. The historical behavior data are then clustered to obtain multiple types of sub-historical behavior data. Professions can first be distinguished in all the historical behavior data, for example dividing it among the five professions C, SG, SF, PG and PF; within the historical behavior data of each profession, a Kmeans clustering method can then be used to cluster the data into multiple types of sub-historical behavior data, for example 20 types per profession. In the clustering result, each profession is divided into various types; for example, within the C profession there are scoring-type C, defense-type C and rebounding-type C, so that objects corresponding to players with similar behaviors are grouped together. At the same time, C is divided into 20 different levels, that is, objects of different types differ in performance and ability and should obtain different rewards and punishments. The embodiment may use the center point of each class (the average of all data in that class) to represent the class.
After the historical behavior data are clustered to obtain the multiple types of sub-historical behavior data, the historical contribution rate corresponding to each type is obtained; the historical contribution rate is the proportion of the contribution made by that type of sub-historical behavior data to the second fight result. The center point data of the 20 types of each profession can be given to the planners, who subjectively rank the 20 classes within each profession and label them 1-20. After the labeling is completed, the label of each class is obtained and the historical contribution rate of the class is determined. The third target model is then trained with the multiple types and the historical contribution rates; the third target model can be a neural network model, trained by using the neural network to fit the labels to the historical contribution rates. In an alternative embodiment, different professions correspond to different third target models.
When the target contribution rate of the first target object is determined according to the target behavior data generated when the first target object performs the fighting operation in the current game round, the target type of the target behavior data of the first target object may be determined first. For example, after a game round ends, when the target contribution rate is calculated, the third target model corresponding to the profession of the object corresponding to the player may be selected according to that profession; the target type of the target behavior data, i.e. the match statistics, is then input into the trained third target model of that profession, which calculates the target contribution rate of the first target object.
Through the method, the target contribution rates of other objects except the first target object in the first game formation and the second game formation can be obtained, so that the contribution degree of each object in the current game play can be distinguished.
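As a rough sketch of this per-profession pipeline, under the assumption that scikit-learn's KMeans and a small MLP stand in for whatever clustering and neural network implementation the embodiment actually uses (the data shapes and labels are random stand-ins for the planners' expert knowledge):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((5000, 12))  # historical per-match stats for one profession

# 1. Cluster one profession's historical behavior data into 20 classes.
kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X)

# 2. Planners rank the 20 class centers and label them 1..20; here the
#    labels are random stand-ins for that expert knowledge.
labels = rng.permutation(np.arange(1, 21)).astype(float)

# 3. Fit a small neural network mapping class centers to label scores,
#    one model per profession (the "third target model").
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                     random_state=0).fit(kmeans.cluster_centers_, labels)

# 4. At scoring time: assign a player's match stats to a class, then score.
match_stats = rng.random((1, 12))
cls = kmeans.predict(match_stats)
print(model.predict(kmeans.cluster_centers_[cls]))  # contribution-style score
```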
In this embodiment, the target contribution rate of the first target object may also be determined through a scoring model based on the NBA efficiency index formula, that is, through an existing ability index: the contribution rate of the object corresponding to the player can be calculated directly from the player's match statistics. The scoring model of the NBA efficiency index formula may be as follows:

contribution rate = [(points + assists + total rebounds + steals + blocks) - (field goal attempts - field goals made) - (free throw attempts - free throws made)] / number of games played.
In this embodiment, the target contribution rate of the first target object may also be determined by a scoring model based on the Gmsc efficiency value. The scoring model using the Gmsc efficiency value may be as follows:

contribution rate = points + 0.4 × field goals made - 0.7 × field goal attempts - 0.4 × (free throw attempts - free throws made) + 0.7 × offensive rebounds + 0.3 × defensive rebounds + steals + 0.7 × assists + 0.7 × blocks - 0.4 × fouls - turnovers.
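Written out as plain functions, the two formula-based models might look like the following; the field names are hypothetical English translations of the stat lines above:

```python
def nba_efficiency_contribution(s):
    """NBA efficiency-index style contribution rate, per the formula above."""
    positives = (s["points"] + s["assists"] + s["rebounds"]
                 + s["steals"] + s["blocks"])
    missed_fg = s["fg_attempts"] - s["fg_made"]
    missed_ft = s["ft_attempts"] - s["ft_made"]
    return (positives - missed_fg - missed_ft) / s["games_played"]

def gmsc_contribution(s):
    """Gmsc (game score) style contribution rate, per the formula above."""
    return (s["points"] + 0.4 * s["fg_made"] - 0.7 * s["fg_attempts"]
            - 0.4 * (s["ft_attempts"] - s["ft_made"])
            + 0.7 * s["off_rebounds"] + 0.3 * s["def_rebounds"]
            + s["steals"] + 0.7 * s["assists"] + 0.7 * s["blocks"]
            - 0.4 * s["fouls"] - s["turnovers"])

stats = {"points": 18, "assists": 4, "rebounds": 7, "steals": 2, "blocks": 1,
         "fg_attempts": 15, "fg_made": 7, "ft_attempts": 4, "ft_made": 3,
         "games_played": 1, "off_rebounds": 3, "def_rebounds": 4,
         "fouls": 2, "turnovers": 3}
print(nba_efficiency_contribution(stats), gmsc_contribution(stats))
```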
As an alternative implementation manner, in step S204, determining the target contribution rate according to the target behavior data generated when the first target object performs the fighting operation in the current game round includes: acquiring the contribution rates of a plurality of objects in the first game formation, wherein the contribution rate of each object is the proportion of the contribution made by that object to the second fight result; and normalizing the target contribution rate through the contribution rates of the plurality of objects to obtain the processed target contribution rate.
In this embodiment, in order to ensure that the sum of the contribution rates of the objects corresponding to all players in the first game formation and the second game formation is 1, the contribution rates of those objects need to be normalized: the contribution rates of the plurality of objects in the first game formation are acquired, and the quotient of the target contribution rate and the sum of the contribution rates of the plurality of objects is determined as the normalized target contribution rate.
In an alternative embodiment, the contribution rates are normalized by the formula

d_i = r_i / Σ_j r_j,

where r_i indicates the calculated contribution rate of the object corresponding to player i, d_i is the normalized contribution rate corresponding to player i, and Σ_j r_j is the sum of the contribution rates of all the objects corresponding to the players in the formation of the object corresponding to player i, thereby ensuring that Σ_i d_i = 1. Each player can then be differentially rewarded or punished on the basis of the basic index through the different contribution rates of the objects corresponding to the players:

S_i = base_ScoreA × d_i, if the object corresponding to the ith player belongs to formation A;
S_i = base_ScoreB × d_i, if the object corresponding to the ith player belongs to formation B,

wherein S_i represents the reward and punishment result of the ith player, base_ScoreA represents the basic index of formation A, and base_ScoreB represents the basic index of formation B: the basic index of the formation to which the object corresponding to the ith player belongs is multiplied by that object's contribution rate d_i to obtain the reward and punishment result S_i of the ith player.
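A small sketch of the normalization and the differentiated reward/punishment step (hypothetical names; base_score would come from the win rate model described earlier):

```python
def normalize(rates):
    """d_i = r_i / sum_j r_j over one formation, so the d_i sum to 1."""
    total = sum(rates.values())
    return {pid: r / total for pid, r in rates.items()}

def reward_punishment(raw_rates, base_score):
    """S_i = base_score * d_i for every object in the formation."""
    return {pid: base_score * d for pid, d in normalize(raw_rates).items()}

# Formation A's base score applies to every member of formation A.
raw = {"p1": 0.62, "p2": 0.48, "p3": 0.55}  # model-estimated contribution rates
print(reward_punishment(raw, base_score=30.0))
```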
According to this embodiment, after the game match ends, the contribution rate of the object corresponding to each player in the current game round is obtained through the trained models according to the behavior of the objects corresponding to the different players, and each object is then differentially rewarded or punished on the basis of the basic index according to its performance contribution rate: well-performing objects can be rewarded more or punished less, and poorly-performing objects rewarded less or punished more. In this way, the objects corresponding to different players obtain reasonable reward and punishment results after the current game round ends, the abilities of the objects corresponding to the players can quickly stabilize in the level segments matching their abilities, the competition of every game is more intense, the participation of the players is stronger, and the game experience is better.
As an optional implementation manner, after determining the reward and punishment result of the first target object based on the base index and the target contribution rate in step S206, the method further includes: and under the condition that the target behavior data accords with the target condition, correcting the reward and punishment result through the correction value corresponding to the target condition.
In this embodiment, after the reward and punishment result of the first target object is determined based on the basic index and the target contribution rate, the reward and punishment result may be further corrected. Whether the target behavior data meets the target condition can be judged; the target condition is a condition for determining the correction of the reward and punishment result, and may be a consecutive-win or consecutive-loss condition, or a preset evaluation rule for the Most Valuable Player (MVP) reward. When the target behavior data meets the target condition, the reward and punishment result is corrected through the correction value corresponding to the target condition.
As an alternative embodiment, before the reward and punishment result is corrected through the correction value corresponding to the target behavior data, the method further includes at least one of: determining that the target behavior data meets the target condition when the target behavior data allows the first target object to obtain the second fight result consecutively for a number of times greater than or equal to a target number; and determining that the target behavior data meets the target condition when the target behavior data indicates that a target behavior occurs in the current game round.
In this embodiment, when the target behavior data allows the first target object to obtain the second fight result consecutively for a number of times greater than or equal to the target number, the target behavior data is determined to meet the target condition. In an optional embodiment, the target number is 3: when the number of consecutive wins of the first target object is greater than or equal to 3, the target behavior data is determined to meet the target condition, and the reward and punishment result is corrected at the end of the match through the correction value corresponding to the target condition. The corrected reward and punishment result is S_i + (N-2) × m, where S_i is the reward and punishment result calculated by the preceding training model, (N-2) × m represents the correction value corresponding to the target condition, N represents the number of consecutive wins of the user, and m represents the basic coefficient of the consecutive-win correction, which can be user-defined.
In an optional embodiment, when the number of consecutive losses of the first target object is greater than or equal to 3, the target behavior data is determined to meet the target condition, and the reward and punishment result is corrected at the end of the match through the correction value corresponding to the target condition. The corrected reward and punishment result is S_i + (M-2) × h, where S_i is the reward and punishment result calculated by the training model, (M-2) × h represents the correction value corresponding to the target condition, M represents the number of consecutive losses of the user, and h represents the basic coefficient of the consecutive-loss correction, which can be user-defined.
The embodiment also determines that the target behavior data meets the target condition when the target behavior data indicates that the target behavior occurs in the current game round. For example, MVP evaluation rules are set during the game, including capturing the match-deciding ball in the basketball game, key balls (key score, key rebound, key steal, key block, key interference, key assist, etc.) and consecutive balls (consecutive scores, consecutive rebounds, consecutive steals, consecutive blocks, consecutive interferences, consecutive assists, etc.), and the reward and punishment result is fine-tuned according to these target behaviors. When the target behavior data indicates that the target behavior occurs in the current game round, the reward and punishment result is corrected through the correction value corresponding to the target condition to S_i + mvp_score, where S_i is the reward and punishment result calculated by the training model and mvp_score is a constant representing the correction value corresponding to the target condition. Its value may be given in advance by the planners according to the importance of the MVP for the game; for example, if a player who obtains the MVP is given an additional 10 bonus points, then mvp_score = 10.
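The three corrections can be sketched as follows; this is a non-authoritative reading of the rules above, and all coefficient values, the negative sign chosen for h, and the names are illustrative:

```python
def corrected_score(s_i, win_streak=0, loss_streak=0, got_mvp=False,
                    m=2.0, h=-2.0, mvp_score=10.0):
    """Apply the streak and MVP corrections to a model-computed score S_i.

    Assumes: S_i + (N-2)*m on a win streak N >= 3, S_i + (M-2)*h on a
    loss streak M >= 3 (h taken negative here so longer loss streaks cost
    more), and S_i + mvp_score when the MVP behavior occurred.
    """
    if win_streak >= 3:
        s_i += (win_streak - 2) * m
    if loss_streak >= 3:
        s_i += (loss_streak - 2) * h
    if got_mvp:
        s_i += mvp_score
    return s_i

print(corrected_score(25.0, win_streak=4, got_mvp=True))  # 25 + 2*2 + 10 = 39
```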
In this embodiment, the object corresponding to each player obtains five reward and punishment results through the trained first target model, the second target model, the third target model, the NBA efficiency index formula scoring model and the Gmsc efficiency value scoring model; the five reward and punishment results can each be corrected through the consecutive-win, consecutive-loss and MVP rules, so that the average of the five corrected reward and punishment results can be determined as the final reward and punishment result of the object.
It should be noted that the reward and punishment result finally used in this embodiment is an average value of the five reward and punishment results, and the average value is considered to be more suitable for practical situations. In practical use, any number of the reward and punishment results can be selected according to the effect to calculate a final average value as the output reward and punishment result, as long as the final output reward and punishment result is more consistent with the real level of the object corresponding to the player, which is not specifically limited here.
As an optional implementation manner, after determining the reward and punishment result of the first target object based on the basic index and the target contribution rate in step S206, the method further includes: when the current game round is in the placement stage, determining the product of the reward and punishment result and a target coefficient as the target reward and punishment result of the first target object, wherein the target coefficient increases as the number of wins or the number of losses of the first target object increases.
In this embodiment, after a novice has created an account, a placement stage may be entered. The placement stage roughly distinguishes the objects corresponding to players of different abilities through ten matches, so that the ability of the object corresponding to each player can stabilize as soon as possible at a level suited to that player's ability; this distinguishes the abilities of the corresponding objects and prevents one-sided matches from spoiling the novice's game experience.
In the placement stage, the ability of the player can be correspondingly rewarded and punished according to rules set in advance. When the current game round is in the placement stage, the product of the reward and punishment result and the target coefficient is determined as the target reward and punishment result of the first target object, and the target coefficient increases as the number of wins or the number of losses of the first target object increases.
In an alternative embodiment, when the reward and punishment of the nth game is calculated (n <= 10), if the nth game is lost, the number of lost games up to the nth game is counted as m (0 < m <= 10), and the coefficient P corresponding to m is multiplied on the basis of the reward and punishment result to obtain the final reward and punishment result. If the nth game is won, the number of won games up to the nth game is counted as m' (0 < m' <= 10), and the coefficient P corresponding to m' is multiplied on the basis of the reward and punishment result to obtain the final reward and punishment result.
For example, the default points of every player are the same, e.g. 3000 points, that is, the initial ability score of every player is the same. In the first ten placement matches of a novice, when the reward and punishment is calculated, the result is multiplied by a target coefficient P, which can be obtained from the coefficient table corresponding to the win/loss counts in Table 1. For example, suppose the object corresponding to a player plays its third game and loses, and the calculated reward and punishment is -25 points; up to and including the third game, two games have been lost, so from Table 1 the target coefficient P is 2.4, and the reward and punishment of the object corresponding to the player in this game is -25 × 2.4 = -60. In an alternative embodiment, if the object corresponding to the player plays its fifth game and wins, and the calculated reward and punishment is 30 points, and up to the fifth game three games have been won, then from Table 1 the target coefficient P is 2.8, so the reward and punishment in this game is 30 × 2.8 = 84. The reward and punishment of the other games is calculated similarly.
TABLE 1 Coefficients corresponding to the cumulative number of wins/losses

Wins/losses up to the nth game | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
Coefficient P | 2 | 2.4 | 2.8 | 3.2 | 3.6 | 4 | 4.4 | 4.8 | 5.2 | 5.6
With an initial score of 3000, after the ten placement games the scores of high-end players are, for example, 3800-4000, the scores of low-end players are 2600-2800, and middle players are distributed between 2800 and 3800. Because the scores differ, players with extremely large ability differences will not be matched together; thus, after the placement games, the ability scores of the objects corresponding to the players are distinguished to a certain extent, and each player enters the ability segment corresponding to his ability.
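A minimal sketch of the placement-stage multiplier, with Table 1 hard-coded and all names hypothetical:

```python
# Coefficient P indexed by the cumulative number of wins (if this game was
# won) or losses (if it was lost) up to and including the nth placement game.
P = {k: 2.0 + 0.4 * (k - 1) for k in range(1, 11)}  # 2, 2.4, ..., 5.6

def placement_score(raw_score, won, wins_so_far, losses_so_far):
    """Multiply the raw reward/punishment by the Table 1 coefficient."""
    streak = wins_so_far if won else losses_so_far
    return raw_score * P[streak]

print(round(placement_score(-25.0, won=False, wins_so_far=1, losses_so_far=2), 1))  # -60.0
print(round(placement_score(30.0, won=True, wins_so_far=3, losses_so_far=2), 1))    # 84.0
```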
As an alternative implementation, in step S202, obtaining the base index corresponding to the first target probability includes: under the condition that the second fight result is a victory result, acquiring a first basic index corresponding to the first target probability, wherein the larger the first target probability is, the smaller the first basic index is; and under the condition that the second fight result is a failure result, acquiring a second basic index corresponding to the first target probability, wherein the larger the first target probability is, the larger the second basic index is.
The basic index of this embodiment is related to the second fight result and can be calculated from the first target probability. When the first target probability is greater than 0.5, the first game formation is the stronger side in the current game round: if the second fight result is a win, the win is expected, so the first basic index is reduced accordingly, and the larger the first target probability, the smaller the first basic index; if the second fight result is a loss, the penalty is increased. When the first target probability is less than or equal to 0.5, the first game formation is the weaker side: if the second fight result is a loss, the loss is expected, so the first game formation is not punished too much, and the smaller the first target probability, the smaller the penalty; if the first game formation wins, more reward can be given, and the second basic index can be larger. In this way, the difference in actual strength between the two sides is taken into account when the basic index is determined.
For example, the first game formation is party A in the target game scene, the second game formation is party B, the first fight result is a win for party A, and the first target probability is R (a value between 0 and 1), that is, the probability that party A defeats party B is R. The basic index of party A is the basic plus-minus score base_scoreA, which can be calculated by the following formula:

base_scoreA = M × (1 - R), if party A defeats party B;
base_scoreA = M × R, if party A loses to party B,

where M is a plus-minus base coefficient.
When the second fight result is that party A defeats party B, the basic plus-minus score base_scoreA is M × (1 - R); when the second fight result is that party A loses to party B, base_scoreA is M × R. If R > 0.5, party A is the stronger side in this match and is expected to win, so the score gain is reduced accordingly: the larger R is, the smaller the gain, and if party A loses, the score reduction is larger. If R < 0.5, party A is the weaker side and is expected to lose, so the score reduction is reduced accordingly: the smaller R is, the smaller the reduction, and if party A wins, the score gain is larger.
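Expressed as code under the same notation (the value of M here is illustrative):

```python
def base_score_a(R, a_won, M=50.0):
    """base_scoreA = M*(1-R) if A defeats B, else M*R; M is the
    plus-minus base coefficient."""
    return M * (1.0 - R) if a_won else M * R

# A strong favorite (R = 0.8) gains little for winning, risks much by losing.
print(round(base_score_a(0.8, a_won=True), 2))   # 10.0
print(round(base_score_a(0.8, a_won=False), 2))  # 40.0
```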
As an alternative implementation, in step S202, obtaining the basic index corresponding to the first target probability includes: obtaining a basic score corresponding to the first target probability, wherein the basic score is used for adjusting the reward and punishment score obtained by any object in the first game formation through the fighting operation. In step S206, determining the reward and punishment result of the first target object based on the basic index and the target contribution rate includes: determining a reward and punishment score of the first target object based on the basic score and the target contribution rate, wherein the reward and punishment score is added to the original score of the first target object when the second fight result is a win, and subtracted from the original score of the first target object when the second fight result is a loss.
In this embodiment, the basic index may be a specific numerical value, such as a basic score base_score, and the reward and punishment result may be a reward and punishment score S. The reward and punishment score obtained by any object in the first game formation through the fighting operation is adjusted through the basic score, and the reward and punishment score of the first target object can be obtained from the basic score and the target contribution rate. The original score of the first target object is the score accumulated by the first target object in historical game rounds; the reward and punishment score is added to the original score when the second fight result is a win, and subtracted from it when the second fight result is a loss.
In the related art, the ability differences of players are determined through manual rules, and the rewards and punishments of the target objects in a battle game are not differentiated. This embodiment, by contrast, gives rewards and punishments that match each player's performance or ability in a game round according to the players' different performances, distinguishing the contribution rate of each player in the match, so that objects corresponding to players of different abilities can be distinguished more quickly, the competition is more intense and the player experience is better. Compared with manual rules, the method can distinguish the objects corresponding to different players and give reward and punishment results that match player performance, so that the ability scores of the objects corresponding to the players can quickly stabilize in the segments matching their ability levels. Compared with an ELO rating model, the method can train a deep learning model on the players' historical behavior data, predict the win probabilities of the two sides of a match more accurately, and determine in a differentiated way the contribution rates of different players to the final match result, which is more reasonable and fair, making the competition of every game more intense, the participation of players stronger and the game experience better.
The technical solution of the present invention is described below with reference to a preferred embodiment; specifically, the basic index is a basic score, and the reward and punishment result is a reward and punishment score.
In sports competition games, multiplayer online competitive games and massively multiplayer online role-playing games, there is competition and cooperation among players. Because players differ in game operation ability, awareness and the like, the performances of the objects corresponding to the players differ, and the ability score is intended to identify the ability differences of the objects corresponding to the players through the performances of the objects corresponding to different players in the corresponding play mode or in a game round.
According to this embodiment, a win rate prediction model can be established from a large number of historical matches in the game and the players' historical long- and short-term behavior portrait data; the win rate prediction for the two matched sides can be obtained through this model, giving an assessment of the strength difference between the two sides. The embodiment can determine the base scores added to and subtracted from the players of the two sides according to the predicted win rate. After the base score is determined, several behavior scoring models can be constructed from the statistics and behavior sequences describing the performance of the objects corresponding to the players in the game. The behavior scoring models give, according to the performances of the objects corresponding to different players, a contribution rate d_i that matches the player's performance or ability, where d_i represents the contribution rate of the object corresponding to the ith player in the match and is also that player's share of the plus-minus score. Once the plus-minus share and the base score are obtained, the reward and punishment score of the player in the current match can be calculated.
After the scores are adjusted through the behavior scoring models, they are further adjusted according to consecutive wins, consecutive losses, the MVP rule and the like to obtain the final plus-minus score of each player, and the ability score of each player is then adjusted.
With this method, the embodiment solves the problem that game ability scores are not differentiated among the winning/losing players, which makes those scores unreasonable; it gives scores that match the players' performances or abilities according to their different performances in a game round, thereby differentiating the contribution degree of the object corresponding to each player in the match, so that players of different abilities are quickly distinguished, the competition is more intense and the player experience is better.
FIG. 3 is a flow diagram of a method for scoring a player's ability based on multi-modal performance according to an embodiment of the present invention. As shown in FIG. 3, the method includes a placement stage and a steady-increase stage. The placement stage comprises creating an account and the new player's first ten placement matches. The steady-increase stage comprises training a win rate estimation model based on the long- and short-term behavior portraits of the objects corresponding to the players, estimating the win rates of the two sides through the model, roughly determining the strength difference of the two sides from those win rates, and establishing the basic win/loss plus-minus score of the match from that difference. After the match ends, the contribution rate of the object corresponding to each player is obtained through a performance scoring model according to the performances of the different players in the current game round, and each object is then differentially scored on the basis of the base score according to its contribution rate: well-performing objects gain more or lose less, and poorly-performing objects gain less or lose more. Meanwhile, the ability scores are corrected and fine-tuned according to the correction rules for consecutive wins and consecutive losses and the MVP rule for match-deciding key balls, finally yielding the ability reward and punishment score of the object corresponding to each player.
The new-player placement stage is described below.
After a new player finishes creating an account, the next ten matches first enter the placement stage. In the placement stage, the objects corresponding to players of different abilities are roughly distinguished through these ten matches, so that each player's ability score can stabilize as soon as possible into a score segment suited to that player's ability; this distinguishes the ability scores of the objects corresponding to the players and prevents one-sided matches from spoiling the new player's game experience.
In the placement stage, the players' ability scores are correspondingly increased or decreased according to rules set in advance, and the default score of every player can be set to be the same, for example 3000 points, that is, the initial ability score of every player is 3000.
The plus-minus of the placement matches superposes a coefficient P on each originally calculated reward and punishment score; the coefficient increases as the cumulative number of wins/losses increases, and the specific coefficients corresponding to the win and loss counts are shown in Table 1.
For example, when the nth match is calculated (n <= 10), if the nth match is lost, the number of lost matches up to the nth is counted as m (0 < m <= 10), and the coefficient P corresponding to m is multiplied onto the final reward and punishment score to obtain the final score reduction. If the nth match is won, the final score gain is obtained in the same way.
For another example, in the first ten placement matches, when the reward and punishment score is calculated, it is multiplied by the coefficient P. Suppose the object corresponding to a player loses its third game and the calculated reward and punishment is -25 points; two games have been lost up to the third game, so from Table 1 the coefficient P is 2.4, and the reward and punishment of the object corresponding to the player in this game is -25 × 2.4 = -60. If the object corresponding to the player wins its fifth game and the calculated reward and punishment is 30 points, and three games have been won up to the fifth game, then from Table 1 the coefficient P is 2.8, so the reward and punishment in this game is 30 × 2.8 = 84. The other cases are calculated similarly and are not illustrated one by one here.
After the ten placement games, with an initial score of 3000, the scores of high-end players are, for example, 3800-4000, the scores of low-end players are 2600-2800, and the scores of middle players are distributed between 2800 and 3800.
The steady-increase stage of this embodiment is described below.
After the new-player placement matches, the ability scores of the objects corresponding to the players are distinguished to a certain extent, and the score of the object corresponding to each player enters the corresponding ability segment. In the steady-increase stage, the contribution rate of the object corresponding to a player is determined according to the performance and the result of each game match of that object, so that the players' ability scores conform more closely to their real ability levels.
The basic score obtained by the win ratio prediction model of this embodiment is described below.
In this embodiment, the win probability R of the two sides of the game match is predicted by the win rate prediction model, where R represents the probability that party A defeats party B. After the game match ends, a real match result is obtained, so the basic scores of the two sides can be derived from the predicted win probability and the actual match result. For example, the basic score of party A is calculated as follows:

base_scoreA = M × (1 - R), if party A defeats party B;
base_scoreA = M × R, if party A loses to party B.

The basic score of party B can be calculated in the same way, where M is a base score coefficient given in advance by the system and used to adjust the magnitude of the base score. If R > 0.5, party A is the stronger side in this match and is expected to win, so the score gain is reduced accordingly: the larger R is, the smaller the gain, and the larger the reduction if party A loses. If R < 0.5, party A is the weaker side and is expected to lose, so the score reduction is smaller: the smaller R is, the smaller the reduction, and the larger the gain if party A wins.
Because different players perform differently in the current game match, the points added to or subtracted from the winners and losers should not all be the same; that is, within one game round, well-performing players should gain more or lose less, and poorly-performing players should gain less or lose more.
This embodiment relates to a player performance model, which includes three different models and two different metrics.
The win ratio prediction model is described below.
This embodiment trains a win rate prediction model from historical match results and the players' historical match performance data.
Fig. 4 is a schematic diagram of a win rate prediction model according to an embodiment of the present invention. As shown in Fig. 4, taking 3V3 basketball as an example, there are six objects: the object corresponding to the current player whose contribution rate is to be calculated, the objects corresponding to the player's two teammates, and the objects corresponding to the player's three opponents. Their feature vectors are all composed of historical long- and short-term portrait data, so each object corresponds to one vector. The vectors of the two teammate objects are summed and averaged to obtain vector a, and the vectors of the three opponent objects are summed and averaged to obtain vector b; the vector of the first target object is then concatenated with vectors a and b to obtain vector c. For example, if the original vector of the first target object is 20-dimensional, then vectors a and b are also 20-dimensional, and vector c is 3 × 20 = 60-dimensional. The 60-dimensional vector is input into a deep neural network model whose output is the win/loss result of the first target object in the game match: when the first target object wins, the model fits an output of 1, and otherwise 0. The win rate prediction model can then be obtained by training.
In training, the historical portrait data of the object itself, its teammates and its opponents are used as input and the match result as output: if party A defeats party B, the output is 1, otherwise 0.
In the prediction stage, the portrait data of the object itself, its teammates and its opponents are used as the input of the trained win rate prediction model, which outputs a value between 0 and 1 representing the probability that party A defeats party B.
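A sketch of the feature construction described above, using numpy only; the deep neural network itself is stubbed, and the 20-dimensional portrait length is just the example figure used above:

```python
import numpy as np

def build_feature(me, teammates, opponents):
    """Concatenate [own vector | mean of teammates | mean of opponents]:
    with 20-dim portraits this yields the 3 x 20 = 60-dim input vector c."""
    a = np.mean(teammates, axis=0)     # teammate average, vector a
    b = np.mean(opponents, axis=0)     # opponent average, vector b
    return np.concatenate([me, a, b])  # vector c

rng = np.random.default_rng(0)
me = rng.random(20)
mates = rng.random((2, 20))            # two teammates in 3V3
foes = rng.random((3, 20))             # three opponents

c = build_feature(me, mates, foes)
print(c.shape)                         # (60,)

# Stub for the trained DNN: any map from the 60-dim vector to [0, 1].
predict_win_rate = lambda x: 1.0 / (1.0 + np.exp(-x.mean()))
print(predict_win_rate(c))
```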
According to this embodiment, the statistics of the object corresponding to each player in the match, such as points, assists and rebounds, can be obtained from the behavior of that object in the current game round. Since the side of a well-performing player generally has a high win rate, when the contribution rate of each player is determined, the win rates of the object corresponding to the player under different hypothetical matchings are calculated repeatedly.
For example, suppose objects corresponding to six players numbered 1-6 participate in the match; the teammates and opponents of the object corresponding to player No. 1 are determined, as are the statistics and the match result of each player. When the contribution rate of the object corresponding to player No. 1 is calculated, two objects can be randomly selected from the remaining five to form teammates with the object corresponding to player No. 1, with the remaining three objects as its opponents, and their statistics are input into the win rate prediction model to obtain the win rate under each hypothetical matching. Since there are C(5,2) = 10 ways to select the teammates, 10 win rates can be calculated for the object corresponding to player No. 1. The average of these 10 win rates is used as the contribution rate of player No. 1, and the performance scores of the objects corresponding to the other players can be obtained in the same way.
The graph-based scoring model is described below.
In this embodiment, a graph model is considered the most appropriate tool for the interaction problem in team play. A graph model can be established from the action sequences of the objects corresponding to the players in the match to express the interaction (mainly pass behavior) between those objects. In an alternative embodiment, the three objects on one side of the match are taken as nodes in the graph model; when a pass occurs between the objects corresponding to two players, an edge can be connected between them. The edge is directional, its direction representing the direction in which the ball is passed, and the player interaction graph model of the match can be constructed in this way. After the player interaction graph model is built, the importance of the three nodes in the graph model can be calculated according to a flow centrality index.
Fig. 5 is a schematic diagram of a graph-based scoring model according to an embodiment of the present invention. As shown in Fig. 5, the contribution rate of the center player, for example, is r_C = num(d*)/num(d), where num(d*) indicates the number of paths between the PF and PG points that pass through the C point, and num(d) indicates the total number of paths between the PF and PG points. From this index, if most of the paths between PF and PG pass through C, that is, if most of the ball flow between PF and PG passes through C, the object corresponding to C (the center) is proved to be important. The contribution rates of the other players in the team can be obtained in the same way.
The clustering label-based scoring model of this embodiment is described below.
Different professions exist in the game, and the evaluation criteria of the different professions differ: for example, the center is mainly responsible for grabbing rebounds and does not score particularly much, while the shooting guard is mainly responsible for three-point shooting and scores a lot. Therefore, the scoring should be carried out according to the standards of the different professions, so that the scoring is fairer.
In order to make the scoring more consistent with subjective evaluation, this embodiment may ask experienced players or planners to score the performance of players in a match, which amounts to obtaining expert knowledge; a supervised model then learns the expert knowledge and is finally used for scoring. However, hundreds of thousands or even millions of matches are played in the game, so the objects corresponding to similar players are grouped into one class by clustering and the class is scored as a whole. Because the behaviors of the objects corresponding to players of the same class tend to be similar, the scores obtained are also similar; this greatly reduces the labeling workload of the planners and makes the labels more accurate.
This embodiment distinguishes professions in all the historical match data, obtaining historical match statistics of a large number of players in the five professions C, SG, SF, PG and PF. Then, within each profession, the historical match statistics are clustered using the Kmeans clustering method, with each profession clustered into 20 classes.
In the clustering result, each profession is divided into various types; for example, within the C profession there are scoring-type C, defense-type C and rebounding-type C, so that the objects corresponding to players with similar behaviors are found. At the same time, C is divided into 20 different grades, that is, the performances and abilities of the different grades differ and should obtain different scores.
This embodiment uses the center point of each class (the average of all data in the class) to represent the class. The center point data of the 20 classes of each profession are then given to the planners, who subjectively rank the 20 classes within each profession and label them 1-20. After the labeling is completed, the label of each class is obtained, and a neural network can then be used to fit the label scores, training one neural network scoring model for each profession.
After a game round ends, when the performance is calculated, the corresponding neural network scoring model is first selected according to the profession used by the player; the player's match statistics are then used as the input of that model to obtain the player's performance contribution rate, and the contribution rates of the objects corresponding to the other players in the match can be obtained in the same way.
The scoring model based on the NBA efficiency index formula is described below.
This embodiment can also calculate the contribution rate of a player directly from the player's match statistics using an existing ability index, namely the NBA efficiency index formula; the specific calculation formula is as follows:
contribution rate = [(points + assists + total rebounds + steals + blocks) - (field goal attempts - field goals made) - (free throw attempts - free throws made)] / number of games played.
A scoring model based on the efficiency values of Gmsc is described below.
The specific calculation formula of the Gmsc efficiency value used in this embodiment is as follows:
contribution rate = points + 0.4 × field goals made - 0.7 × field goal attempts - 0.4 × (free throw attempts - free throws made) + 0.7 × offensive rebounds + 0.3 × defensive rebounds + steals + 0.7 × assists + 0.7 × blocks - 0.4 × fouls - turnovers.
The contribution rates obtained by the five scoring models are not yet the final contribution rates of the objects. To ensure that the sum of the performance contribution rates of the objects corresponding to all the players on the winning/losing side is 1, the players' performance scores need to be normalized, with the calculation formula

d_i = r_i / Σ_j r_j,

where r_i indicates the calculated contribution rate of the object corresponding to player i, d_i is the normalized contribution rate corresponding to player i, and Σ_j r_j is the sum of the contribution rates of all the objects corresponding to the players in the formation of the object corresponding to player i, thereby ensuring that Σ_i d_i = 1. The object corresponding to each player can then be differentially scored on the basis of the basic score through the players' different contribution rates:

S_i = base_ScoreA × d_i, if the object corresponding to the ith player belongs to formation A;
S_i = base_ScoreB × d_i, if the object corresponding to the ith player belongs to formation B,

wherein S_i represents the reward and punishment result of the ith player, base_ScoreA represents the basic score of formation A, and base_ScoreB represents the basic score of formation B: the basic score of the formation to which the object corresponding to the ith player belongs is multiplied by that object's contribution rate d_i to obtain the reward and punishment result S_i of the ith player.
The rules of this embodiment for correcting the score under consecutive-win and consecutive-loss conditions are as follows.
When the number of consecutive wins of the object corresponding to a player is greater than or equal to 3, the reward and punishment score is corrected accordingly at the end of the match. The corrected reward and punishment score is S_i + (N-2) × m, where S_i is the reward and punishment score calculated by the preceding models, N represents the number of consecutive wins of the user, and m represents the consecutive-win correction base score, which can be user-defined.
When the number of consecutive losses of the object corresponding to a player is greater than or equal to 3, the reward and punishment score is corrected accordingly at the end of the match. The corrected reward and punishment score is S_i + (M-2) × h, where S_i is the reward and punishment score calculated by the preceding models, M represents the number of consecutive losses of the user, and h represents the consecutive-loss correction base score, which can be user-defined.
The fine tuning of the score by the MVP rule of this embodiment is described below.
This embodiment sets MVP evaluation rules during the game, including capturing the match-deciding ball in the basketball game, key balls (key score, key rebound, key steal, key block, key interference, key assist, etc.) and consecutive balls (consecutive scores, consecutive rebounds, consecutive steals, consecutive blocks, consecutive interferences, consecutive assists, etc.), and fine-tunes the player's ability score according to these MVP highlight-moment behaviors. When a player obtains the MVP, the player's reward and punishment is corrected to S_i + mvp_score, where S_i is the reward and punishment result calculated by the training models and mvp_score is a constant used to correct the bonus points earned by the MVP player, which can be user-defined.
Through the above steps, the object corresponding to each player obtains five reward and punishment scores through the trained win rate prediction model, the graph-based scoring model, the clustering-label-based scoring model, the NBA efficiency index formula scoring model and the Gmsc efficiency value scoring model; the five reward and punishment scores can each be corrected through the consecutive-win, consecutive-loss and MVP rules, and the average of the five corrected reward and punishment scores can be determined as the final reward and punishment score of the object. The calculated reward and punishment score is subtracted if the team of the object corresponding to the player loses, and added if it wins.
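The final aggregation step might look like this sketch (names hypothetical; the five inputs stand for the per-model scores after the streak and MVP corrections, and the sign handling is one reading of the rule above):

```python
from statistics import mean

def final_score(corrected_scores, team_won):
    """Average the five corrected per-model scores; the result is added
    to the player's ability score on a win and subtracted on a loss."""
    s = mean(corrected_scores)
    return s if team_won else -s

print(final_score([28.0, 31.5, 25.0, 30.0, 27.5], team_won=True))  # 28.4
```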
This embodiment proposes a game ability scoring solution based on the players' multi-modal performance. The scheme first estimates the win rates of the two sides by training a deep neural network model on the players' long- and short-term behavior portraits; the strength difference of the two sides can be predicted from those win rates, and the basic win/loss score of the match is then established from that difference. After the match ends, the contribution rate of each player to the match result is obtained through the performance scoring models according to the performances of the objects corresponding to the different players in the current game round, and each object is then differentially scored on the basis of the base score according to its contribution rate: well-performing objects gain more or lose less, and poorly-performing objects gain less or lose more. The embodiment also corrects and fine-tunes the reward and punishment score according to the correction rules for consecutive wins and consecutive losses and the MVP rule for match-deciding key balls, finally obtaining a reward and punishment score that matches the player's performance.
This embodiment scores the object corresponding to each player after the match according to the strength gap between the two sides and the performance of each player's object in the current game, so that the objects corresponding to different players obtain reasonable reward and punishment scores after the match ends. The ability score of a player's object can thus stabilize more quickly into the score segment matching the player's ability level, making each match more competitive, player participation stronger, and the gaming experience better.
Compared with manual rules, the method can distinguish between different players and give score additions and subtractions that match each player's performance, so that players' ability scores stabilize more quickly into the segment matching their ability. Compared with an ELO model, the method can predict the win rates of the two sides more accurately through the deep learning model and the long-term and short-term player portraits, and can also treat different players differently by giving different performance scores, which is more reasonable and fair.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from that presented herein.
The embodiment of the invention also provides a game data processing device. It should be noted that the game data processing apparatus of this embodiment may be used to execute the game data processing method of the embodiment of the present invention.
Fig. 6 is a schematic diagram of a game data processing apparatus according to an embodiment of the present invention. As shown in fig. 6, the game data processing apparatus 600 includes: a processing unit 10, a first determination unit 20 and a second determination unit 30.
The processing unit 10 is configured to predict a first target probability that a fight operation performed by the first game formation and the second game formation yields a first fight result, and to obtain a basic index corresponding to the first target probability, where the basic index is used to adjust a reward and punishment result obtained by any object in the first game formation performing the fight operation.
The first determination unit 20 is configured to determine a target contribution rate according to target behavior data generated when a first target object performs the fight operation in the current game, where the first target object is any object in the first game formation, the target contribution rate is the proportion of the contribution of the first target object to a second fight result, and the second fight result is the result obtained when the first game formation and the second game formation perform the fight operation in the current game.
A second determining unit 30, configured to determine a reward and punishment result of the first target object based on the basic index and the target contribution rate.
In an optional embodiment, when the second fight result is a winning result, the first target object is rewarded according to the reward and punishment result, and when the second fight result is a failure result, the first target object is punished according to the reward and punishment result.
In an optional embodiment, the apparatus further comprises: a first acquisition unit configured to acquire, before predicting the first target probability that the fight operation performed by the first game formation and the second game formation yields the first fight result, first historical behavior data generated by the fight operation performed by the first game formation in historical games, second historical behavior data generated by the fight operation performed by the second game formation in historical games, and historical fight results generated by the fight operations performed by the first game formation and the second game formation in historical games; and a first training unit configured to train the first target model with the first historical behavior data, the second historical behavior data, and the historical fight results. The processing unit 10 includes: a first obtaining module configured to obtain first current behavior data generated by the fight operation of the first game formation in the current game and second current behavior data generated by the fight operation of the second game formation in the current game; and a first input module configured to input the first current behavior data and the second current behavior data into the trained first target model to obtain the first target probability of the first fight result.
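For illustration only, a hedged sketch of such a first target model as a small feed-forward network in PyTorch; the feature dimension, layer sizes, and architecture are assumptions, since the embodiment does not fix them:

```python
import torch
import torch.nn as nn

class WinRateModel(nn.Module):
    """Predicts the probability that the first formation wins."""
    def __init__(self, feature_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feature_dim, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, formation1_feats, formation2_feats):
        # Concatenate the behavior-portrait features of both formations.
        x = torch.cat([formation1_feats, formation2_feats], dim=-1)
        return self.net(x).squeeze(-1)  # first target probability
```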
In an alternative embodiment, the first determination unit 20 includes: the second acquisition module is used for acquiring a plurality of first objects in the first game formation and a plurality of second objects in the second game formation, wherein the plurality of first objects comprise first target objects; a selection module, configured to randomly select a first target number of second target objects from a plurality of first objects and a plurality of second objects except for the first target object, match the first target number of second target objects as teammates of the first target object, and match a third target object except for the first target object and the second target object among the plurality of first objects and the plurality of second objects as an opponent of the first target object; the second input module is used for inputting the target behavior data of the first target object, the target behavior data of the second target object and the target behavior data of the third target object into the trained first target model to obtain a second target probability of the first fight result; and the first determining module is used for determining the second target probability as the target contribution rate.
In an alternative embodiment, the first determining unit 20 further includes: a second determining module configured to determine, after the first target number of second target objects are randomly selected, a plurality of target results of randomly selecting the first target number of second target objects; and a third obtaining module configured to obtain the second target probability under each target result, obtaining second target probabilities of a second target quantity, where the second target quantity is the number of the plurality of target results. The first determining module includes: a determining submodule configured to average the second target probabilities of the second target quantity to obtain the target contribution rate.
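One way to realize the random-teammate estimate described in the two embodiments above, as an illustrative sketch; model_predict and all other names are assumptions:

```python
import random

def shuffled_contribution(model_predict, target, first_objs, second_objs,
                          team_size, n_draws=100):
    """Estimate a player's contribution by re-drawing teammates at random.

    model_predict(team_a, team_b) -> probability that team_a wins.
    n_draws (the second target quantity) is an assumed placeholder.
    """
    pool = [o for o in first_objs + second_objs if o is not target]
    probs = []
    for _ in range(n_draws):
        teammates = random.sample(pool, team_size - 1)   # first target number
        opponents = [o for o in pool if o not in teammates]
        probs.append(model_predict([target] + teammates, opponents))
    return sum(probs) / len(probs)  # averaged second target probability
```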
In an optional embodiment, the apparatus further comprises: the second acquisition unit is used for acquiring interactive behavior data generated when the fighting operation is carried out in the first game formation before determining the target contribution rate according to target behavior data generated when the first target object carries out the fighting operation in the current game, wherein the target behavior data comprises the interactive behavior data; the establishing unit is used for establishing a second target model based on the interactive behavior data, wherein nodes in the second target model are used for indicating objects in the first game formation, and paths in the second target model are used for indicating that interactive behaviors occur between the objects corresponding to the nodes on the paths; the first determination unit 20 includes: a fourth obtaining module, configured to obtain, in the second target model, a third target number of the first target path and a fourth target number of the second target path, where the first target path passes through a first node corresponding to the first target object and a second node corresponding to an object in the first game formation except the first target object, and the second target path passes through the second node; and the third determining module is used for determining the ratio of the third target quantity to the fourth target quantity as the target contribution rate.
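For illustration only, one plausible reading of this graph-based contribution model using networkx; the edge-counting interpretation of the first and second target paths is an assumption:

```python
import networkx as nx

def graph_contribution(interactions, target, formation):
    """One plausible reading of the graph-based contribution model.

    interactions: iterable of (obj_a, obj_b) interaction events among
    objects of the first game formation. The target's contribution is
    taken as the number of interaction paths touching the target's node
    divided by the number of paths touching its teammates' nodes.
    """
    g = nx.MultiGraph()
    g.add_nodes_from(formation)
    g.add_edges_from(interactions)
    first_paths = sum(1 for a, b in g.edges() if target in (a, b))
    second_paths = sum(1 for a, b in g.edges()
                       if a != target or b != target)
    return first_paths / max(second_paths, 1)
```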
In an optional embodiment, the apparatus further comprises: a third acquisition unit configured to acquire historical behavior data generated by the battle operations performed in the historical game by the first game battle and the second game battle before determining a target contribution rate according to target behavior data generated by the battle operations performed by the first target object in the current one-game; the clustering unit is used for clustering the historical behavior data to obtain multiple types of sub-historical behavior data; the fourth acquisition unit is used for acquiring historical contribution rates corresponding to the multiple types of sub-historical behavior data, wherein the historical contribution rates are the proportion of contribution of each type of sub-historical behavior data to the second engagement result; the second training unit is used for training the third target model through a plurality of types and historical contribution rates; the first determination unit 20 includes: the fourth determining module is used for determining the target type of the target behavior data; and the third input module is used for inputting the target type into the trained third target model to obtain the target contribution rate.
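A sketch of the clustering-based third target model under stated assumptions (KMeans as the clustering method, k = 8 clusters, NumPy arrays as inputs); the embodiment does not fix these choices:

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_cluster_contribution(history_feats, history_contribs, k=8):
    """Cluster historical behavior data into types and learn a
    historical contribution rate per type."""
    feats = np.asarray(history_feats)
    contribs = np.asarray(history_contribs)
    km = KMeans(n_clusters=k, n_init=10).fit(feats)
    rates = {c: float(contribs[km.labels_ == c].mean()) for c in range(k)}
    return km, rates

def predict_contribution(km, rates, target_feats):
    # Determine the target type (cluster) of the new behavior data,
    # then look up the learned contribution rate for that type.
    cluster = int(km.predict(np.asarray(target_feats).reshape(1, -1))[0])
    return rates[cluster]
```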
In an alternative embodiment, the first determination unit 20 includes: a fifth obtaining module, configured to obtain contribution rates of a plurality of objects in the first game lineup, where the contribution rate of each object is a proportion of contribution of each object to the second fighting result; and the processing module is used for carrying out normalization processing on the target contribution rate through the contribution rates of the plurality of objects to obtain the processed target contribution rate.
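The embodiment does not fix the normalization; a simple sum-to-one normalization over the formation's contribution rates, as a sketch:

```python
def normalize_contribution(target_rate, all_rates):
    # Normalize the target's contribution rate over all objects in the
    # first game formation so that the formation's rates sum to 1.
    total = sum(all_rates)
    return target_rate / total if total else 0.0
```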
In an optional embodiment, the apparatus further comprises: a correction unit configured to correct, after the reward and punishment result of the first target object is determined based on the basic index and the target contribution rate, the reward and punishment result through a correction value corresponding to a target condition when the target behavior data meets the target condition.
In an alternative embodiment, the apparatus further comprises at least one of: a third determining unit configured to determine, before the reward and punishment result is corrected through the correction value corresponding to the target behavior data, that the target behavior data meets the target condition when the target behavior data enables the first target object to continuously obtain the second fight result a number of times greater than or equal to a target number of times; and a fourth determining unit configured to determine that the target behavior data meets the target condition when the target behavior data indicates that a target behavior occurs in the current game.
In an optional embodiment, the apparatus further comprises: a fourth determination unit configured to determine, after the reward and punishment result of the first target object is determined based on the basic index and the target contribution rate, a product of the reward and punishment result and a target coefficient as a target reward and punishment result of the first target object when the current game is in the level-fixing stage, where the target coefficient increases with an increase in the number of winning fields or the number of losing fields of the first target object.
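A minimal sketch of this level-fixing (placement) adjustment; the linear growth of the coefficient and the values of base_coeff and step are assumptions, since the embodiment only states that the coefficient increases with the win or loss count:

```python
def placement_adjusted(score, games_decided, base_coeff=2.0, step=0.1):
    """During the level-fixing stage, scale the reward or punishment by
    a coefficient that grows with the number of wins or losses."""
    coeff = base_coeff + step * games_decided
    return score * coeff
```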
In an alternative embodiment, the processing unit 10 comprises: a sixth obtaining module, configured to obtain a first basic indicator corresponding to the first target probability when the second engagement result is the winning result, where the larger the first target probability is, the smaller the first basic indicator is; and the seventh obtaining module is used for obtaining a second basic index corresponding to the first target probability under the condition that the second fight result is a failure result, wherein the larger the first target probability is, the larger the second basic index is.
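For illustration, a sketch mapping the predicted win probability to the base indicator; the linear form is an assumption, as the embodiment only fixes the monotonicity described above:

```python
def base_indicator(p_win_first_formation, won):
    """On a win, a higher predicted win probability (an expected win)
    yields a smaller base reward; on a loss, a higher predicted win
    probability (an upset loss) yields a larger base punishment."""
    if won:
        return 1.0 - p_win_first_formation  # larger p -> smaller first basic index
    return p_win_first_formation            # larger p -> larger second basic index
```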
In an alternative embodiment, the processing unit 10 comprises: an eighth obtaining module configured to obtain a basic score corresponding to the first target probability, where the basic score is used to adjust a reward and punishment score obtained by any object in the first game formation performing the fight operation. The second determination unit 30 is configured to determine a reward and punishment score of the first target object based on the basic score and the target contribution rate, where the reward and punishment score is added to the original score of the first target object when the second fight result is a winning result, and the reward and punishment score is subtracted from the original score of the first target object when the second fight result is a failure result.
Embodiments of the present invention also provide a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
In an alternative embodiment, the storage medium may include, but is not limited to: various media capable of storing a computer program, such as a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
In an optional embodiment, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device, and they may be centralized on a single computing device or distributed across a network of multiple computing devices. In alternative embodiments, they may be implemented in program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device, and in some cases the steps shown or described may be performed in an order different from that described herein; alternatively, they may be fabricated separately as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (15)

1. A game data processing method, comprising:
predicting a first target probability that a fight operation performed by a first game lineup and a second game lineup yields a first fight result, and obtaining a basic index corresponding to the first target probability, wherein the basic index is used for adjusting a reward and punishment result obtained by any object in the first game lineup performing the fight operation;
determining a target contribution rate according to target behavior data generated when a first target object performs the fight operation in the current game, wherein the first target object is any object in the first game lineup, the target contribution rate is the proportion of the contribution of the first target object to a second fight result, and the second fight result is the result obtained when the first game lineup and the second game lineup perform the fight operation in the current game; and
determining a reward and punishment result of the first target object based on the basic index and the target contribution rate;
wherein predicting the first target probability that the fight operation performed by the first game lineup and the second game lineup yields the first fight result comprises: acquiring first current behavior data generated by the fight operation of the first game lineup in the current game and second current behavior data generated by the fight operation of the second game lineup in the current game; and inputting the first current behavior data and the second current behavior data into a trained first target model to obtain the first target probability, wherein the first target model is trained with first historical behavior data generated by the fight operation performed by the first game lineup in historical games, second historical behavior data generated by the fight operation performed by the second game lineup in historical games, and a historical fight result generated by the fight operations performed by the first game lineup and the second game lineup in historical games.
2. The method of claim 1, wherein the first target object is rewarded according to the reward and punishment result if the second fight result is a winning result, and the first target object is punished according to the reward and punishment result if the second fight result is a failure result.
3. The method of claim 1, wherein determining the target contribution rate according to the target behavior data generated when the first target object performs the fight operation in the current game comprises:
obtaining a plurality of first objects in the first game lineup and a plurality of second objects in the second game lineup, wherein the plurality of first objects includes the first target object;
randomly selecting a first target number of second target objects from the plurality of first objects and the plurality of second objects except the first target object, matching the first target number of second target objects as teammates of the first target object, and matching a third target object except the first target object and the second target object from the plurality of first objects and the plurality of second objects as opponents of the first target object;
inputting the target behavior data of the first target object, the target behavior data of the second target object, and the target behavior data of the third target object into the trained first target model to obtain a second target probability of the first fight result; and
determining the second target probability as the target contribution rate.
4. The method of claim 3,
after randomly selecting the first target number of the second target objects, the method further comprises: determining a plurality of target results of randomly selecting the first target number of second target objects; and acquiring the second target probability under each target result to obtain second target probabilities of a second target quantity, wherein the second target quantity is the number of the plurality of target results;
determining the second target probability as the target contribution rate comprises: and averaging the second target probabilities of the second target quantity to obtain the target contribution rate.
5. The method of claim 1,
before determining the target contribution rate according to the target behavior data generated when the first target object performs the fight operation in the current game, the method further comprises: acquiring interactive behavior data generated when the first game lineup performs the fight operation, wherein the target behavior data comprises the interactive behavior data; and establishing a second target model based on the interactive behavior data, wherein nodes in the second target model are used for indicating objects in the first game lineup, and paths in the second target model are used for indicating that interactive behaviors occur between the objects corresponding to the nodes on the paths;
determining the target contribution rate according to the target behavior data generated when the first target object performs the fight operation in the current game comprises: in the second target model, acquiring a third target number of first target paths and a fourth target number of second target paths, wherein each first target path passes through a first node corresponding to the first target object and a second node corresponding to an object in the first game lineup other than the first target object, and each second target path passes through the second node; and determining the ratio of the third target number to the fourth target number as the target contribution rate.
6. The method of claim 1,
before determining the target contribution rate according to the target behavior data generated when the first target object performs the fight operation in the current game, the method further comprises: acquiring historical behavior data generated by the fight operations performed by the first game lineup and the second game lineup in historical games; clustering the historical behavior data to obtain a plurality of types of sub-historical behavior data; obtaining historical contribution rates corresponding to the plurality of types of sub-historical behavior data, wherein each historical contribution rate is the proportion of the contribution made by each type of sub-historical behavior data to the second fight result; and training a third target model with the plurality of types and the historical contribution rates;
determining the target contribution rate according to the target behavior data generated when the first target object performs the fight operation in the current game comprises: determining a target type of the target behavior data; and inputting the target type into the trained third target model to obtain the target contribution rate.
7. The method according to any one of claims 1 to 6, wherein determining the target contribution rate according to the target behavior data generated when the first target object performs the fight operation in the current game comprises:
acquiring the contribution rate of a plurality of objects in the first game lineup, wherein the contribution rate of each object is the proportion of the contribution of each object to the second fight result;
and carrying out normalization processing on the target contribution rate through the contribution rates of the plurality of objects to obtain the processed target contribution rate.
8. The method according to any one of claims 1 to 6, wherein after determining the reward and punishment result for the first target object based on the basic index and the target contribution rate, the method further comprises:
when the target behavior data meets a target condition, correcting the reward and punishment result through a correction value corresponding to the target condition.
9. The method of claim 8, wherein prior to modifying the reward penalty result by a correction value corresponding to the target behavior data, the method further comprises at least one of:
determining that the target behavior data meets the target condition under the condition that the target behavior data enables the first target object to continuously obtain the second fight result for a number of times which is greater than or equal to a target number of times;
and determining that the target behavior data meets the target condition when the target behavior data indicates that a target behavior occurs in the current game.
10. The method according to any one of claims 1 to 6, wherein after determining the reward and punishment result for the first target object based on the basic index and the target contribution rate, the method further comprises:
determining a product of the reward and punishment result and a target coefficient as a target reward and punishment result of the first target object when the current game is in the level-fixing stage, wherein the target coefficient increases with an increase in the number of winning fields or the number of losing fields of the first target object.
11. The method according to any one of claims 1 to 6, wherein obtaining a basic index corresponding to the first target probability comprises:
under the condition that the second fight result is a victory result, acquiring a first basic index corresponding to the first target probability, wherein the larger the first target probability is, the smaller the first basic index is;
and acquiring a second basic index corresponding to the first target probability under the condition that the second fight result is a failure result, wherein the second basic index is larger when the first target probability is larger.
12. The method according to any one of claims 1 to 6,
obtaining the basic index corresponding to the first target probability comprises: obtaining a basic score corresponding to the first target probability, wherein the basic score is used for adjusting a reward and punishment score obtained by any object in the first game lineup performing the fight operation; and
determining a reward and punishment result for the first target object based on the basic index and the target contribution rate comprises: determining a reward and punishment score of the first target object based on the basic score and the target contribution rate, wherein the reward and punishment score is added to the original score of the first target object if the second fight result is a winning result, and the reward and punishment score is subtracted from the original score of the first target object if the second fight result is a failure result.
13. A game data processing apparatus, comprising:
a processing unit, configured to predict a first target probability that a fight operation performed by a first game lineup and a second game lineup yields a first fight result, and to obtain a basic index corresponding to the first target probability, wherein the basic index is used for adjusting a reward and punishment result obtained by any object in the first game lineup performing the fight operation;
a first determination unit, configured to determine a target contribution rate according to target behavior data generated when a first target object performs the fight operation in the current game, wherein the first target object is any object in the first game lineup, the target contribution rate is the proportion of the contribution of the first target object to a second fight result, and the second fight result is the result of the fight operation performed by the first game lineup and the second game lineup in the current game; and
a second determining unit, configured to determine a reward and punishment result of the first target object based on the basic index and the target contribution rate, wherein when the second fight result is a winning result, the first target object is rewarded according to the reward and punishment result, and when the second fight result is a failure result, the first target object is punished according to the reward and punishment result;
wherein the apparatus is further configured to predict the first target probability that the fight operation performed by the first game lineup and the second game lineup yields the first fight result by: acquiring first current behavior data generated by the fight operation of the first game lineup in the current game and second current behavior data generated by the fight operation of the second game lineup in the current game; and inputting the first current behavior data and the second current behavior data into a trained first target model to obtain the first target probability, wherein the first target model is trained with first historical behavior data generated by the fight operation performed by the first game lineup in historical games, second historical behavior data generated by the fight operation performed by the second game lineup in historical games, and a historical fight result generated by the fight operations performed by the first game lineup and the second game lineup in historical games.
14. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is arranged to, when executed by a processor, perform the method of any of claims 1 to 12.
15. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 12.
CN201910054620.6A 2019-01-21 2019-01-21 Game data processing method, game data processing device, storage medium and electronic device Active CN109821244B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910054620.6A CN109821244B (en) 2019-01-21 2019-01-21 Game data processing method, game data processing device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN109821244A CN109821244A (en) 2019-05-31
CN109821244B true CN109821244B (en) 2022-07-29

Family

ID=66860417

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910054620.6A Active CN109821244B (en) 2019-01-21 2019-01-21 Game data processing method, game data processing device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN109821244B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111035936B (en) * 2019-12-17 2021-03-26 腾讯科技(深圳)有限公司 Account set adjusting method and device, storage medium and electronic device
CN111760291B (en) * 2020-07-06 2022-03-08 腾讯科技(深圳)有限公司 Game interaction behavior model generation method and device, server and storage medium
CN111888761A (en) * 2020-08-12 2020-11-06 腾讯科技(深圳)有限公司 Control method and device of virtual role, storage medium and electronic device
CN111905377B (en) * 2020-08-20 2021-12-10 腾讯科技(深圳)有限公司 Data processing method, device, equipment and storage medium
CN112138407B (en) * 2020-08-31 2024-07-12 杭州威佩网络科技有限公司 Information display method and device
CN113713379B (en) * 2021-09-07 2023-07-14 腾讯科技(深圳)有限公司 Object matching method and device, storage medium and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101385902A (en) * 2007-09-11 2009-03-18 世嘉股份有限公司 Network game system, method for processing the same, network game processing program product, and storage medium for storing program product
WO2013100364A1 (en) * 2011-12-28 2013-07-04 (주)네오위즈게임즈 Method and server for displaying prediction result information in online game
CN107998661A (en) * 2017-12-26 2018-05-08 苏州大学 A kind of aid decision-making method, device and the storage medium of online battle game
CN109107159A (en) * 2018-08-13 2019-01-01 深圳市腾讯网络信息技术有限公司 A kind of configuration method of application object attributes, device, equipment and medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10617961B2 (en) * 2017-05-07 2020-04-14 Interlake Research, Llc Online learning simulator using machine learning


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant