CN116966557A - Game video stream sharing method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN116966557A
CN116966557A (application CN202210411602.0A)
Authority
CN
China
Prior art keywords
video stream
game
parameters
target
machine learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210411602.0A
Other languages
Chinese (zh)
Inventor
周逸恒
刘勇成
胡志鹏
袁思思
程龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202210411602.0A
Publication of CN116966557A

Classifications

    • A63F13/525: Changing parameters of virtual cameras
    • A63F13/79: Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
    • A63F13/86: Watching games played by other players
    • G06N20/00: Machine learning
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Abstract

The disclosure provides a game video stream sharing method and device, a storage medium and an electronic device, relating to the technical field of games. The game video stream sharing method comprises the following steps: identifying a target highlight video stream in a target game video stream by using the highlight video stream parameters of a parameter-optimized machine learning model, where the target highlight video stream is a game video stream obtained by shooting a virtual game scene with a virtual camera configured with the shooting parameters of the parameter-optimized machine learning model, and the model parameters of the machine learning model comprise at least the highlight video stream parameters and the shooting parameters of the virtual camera; and sharing the target highlight video stream in response to a sharing operation on the target highlight video stream. The method and device realize automatic identification and shooting of highlight video, can acquire personalized highlight game scene pictures with high scene adaptability, do not require the player to record manually from a spectator view angle, and can simplify the player's operations to a certain extent.

Description

Game video stream sharing method and device, storage medium and electronic equipment
Technical Field
The disclosure relates to the technical field of games, in particular to a game video stream sharing method, a game video stream sharing device, a computer readable storage medium and electronic equipment.
Background
With the continuous development of game technology, game play is increasingly favored by game players. During play, particularly exciting game scene content may be generated, and sharing such content has become one of the main functions in current games, usually implemented through a built-in sharing function.
In the related art, if a game player wants to share exciting game scene content generated during play, a highlight picture is usually captured from the player-controlled view and then shared. However, for some kinds of games, such as melee combat games, character positions may interweave and change in complex ways during combat, so the player-controlled view may not be the optimal viewing angle, and the capture-and-share approach may produce pictures that do not meet the player's viewing needs well, giving poor adaptability across game scenes. In addition, if the player wants to generate a game highlight from a non-player (spectator) view angle, the player must record manually from that view, which is overly complicated, places high demands on manual operation, and is therefore difficult to implement.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The disclosure provides a game video stream sharing method, a game video stream sharing device, a computer readable storage medium and electronic equipment, so as to solve the problem that in the related art, the game scene adaptability is poor and the watching requirement of a player cannot be well met to at least a certain extent.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a game video stream sharing method, the method comprising: identifying a target highlight video stream in the target game video stream by using highlight video stream parameters of the machine learning model after parameter optimization; the target highlight video stream is a game video stream obtained by shooting a virtual game scene by a virtual camera configured with shooting parameters of a machine learning model after parameter optimization; model parameters of the machine learning model at least comprise highlight video stream parameters and shooting parameters of the virtual camera; and responding to the sharing operation of the target highlight video stream, and sharing the target highlight video stream.
In an exemplary embodiment of the present disclosure, the method further comprises: and training and optimizing model parameters of the machine learning model by taking the historical highlight video as training data and the interactive feedback data of the historical highlight video as an objective function to obtain the machine learning model with optimized parameters.
In an exemplary embodiment of the present disclosure, the method further comprises: dividing the historical highlight video into a training set, a verification set and a test set; iteratively optimizing model parameters of the machine learning model based on the training set, the verification set and the test set; the training set is used for training the machine learning model, the verification set is used for evaluating the machine learning model, and the test set is used for testing the machine learning model; and when the model parameters of the machine learning model meet the termination conditions of iterative optimization, obtaining the machine learning model after parameter optimization.
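The split-and-iterate procedure above can be sketched as follows; the split ratios and function names are illustrative assumptions, since the disclosure does not fix them:

```python
import random

def split_dataset(videos, seed=42, ratios=(0.7, 0.15, 0.15)):
    """Divide historical highlight videos into training, validation and
    test sets, as described above. The 70/15/15 split is an assumption."""
    rng = random.Random(seed)
    shuffled = videos[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (shuffled[:n_train],                    # used to train the model
            shuffled[n_train:n_train + n_val],     # used to evaluate the model
            shuffled[n_train + n_val:])            # used to test the model

train, val, test = split_dataset(list(range(100)))
```
Iterative optimization would then loop over training on `train`, evaluating on `val`, and testing on `test` until the termination condition on the model parameters is met.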
In an exemplary embodiment of the present disclosure, the method further comprises: updating the training data by taking the target highlight video stream as a history highlight video; and carrying out a new round of iterative optimization on the model parameters of the machine learning model after parameter optimization based on the updated training data.
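A minimal sketch of the update-and-reoptimize round described above, with the training procedure injected as a placeholder callable (the function name and shape are assumptions):

```python
def next_optimization_round(history, new_highlights, optimize):
    """Add newly generated target highlight streams to the historical pool,
    then run another round of iterative optimization. `optimize` stands in
    for the training procedure on the updated data."""
    updated = list(history) + list(new_highlights)  # updated training data
    model = optimize(updated)                       # new optimization round
    return updated, model
```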
In one exemplary embodiment of the present disclosure, model parameters of the machine learning model are optimized over a range of model parameter intervals.
In one exemplary embodiment of the present disclosure, the model parameters of the machine learning model are discrete model parameters that satisfy a normal distribution.
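One plausible reading of "discrete model parameters satisfying a normal distribution" constrained to a parameter interval can be sketched as follows; the grid step and clamping behavior are assumptions:

```python
import random

def sample_discrete_param(mean, std, lo, hi, step):
    """Draw a value from a normal distribution, snap it to a discrete grid
    of size `step`, and clamp it to the allowed parameter interval [lo, hi]."""
    x = random.gauss(mean, std)
    snapped = round(x / step) * step
    return min(hi, max(lo, snapped))
```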
In an exemplary embodiment of the present disclosure, the highlight video stream parameters include a specified game event and an event weight corresponding to the specified game event, and identifying a target highlight video stream in the target game video stream using the highlight video stream parameters of the parameter-optimized machine learning model includes: determining a highlight score value of a candidate game video stream in the target game video stream according to the specified game event in the target game video stream and the event weight corresponding to the specified game event, where the highlight score value characterizes the highlight level of the candidate game video stream; and taking candidate game video streams whose highlight score value exceeds a highlight score threshold as target highlight video streams.
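The scoring-and-thresholding step above can be sketched as follows; the event names, weights, and threshold are illustrative assumptions, not values from the disclosure:

```python
# Hypothetical event weights; in the disclosure these are learned
# "highlight video stream parameters" of the machine learning model.
EVENT_WEIGHTS = {
    "max_damage_per_unit_time": 0.9,
    "multi_kill": 1.0,
    "survive_sustained_attack": 0.6,
    "skill_combo": 0.7,
}

def highlight_score(events):
    """Highlight score of a candidate stream: sum of the weights of the
    specified game events detected in it."""
    return sum(EVENT_WEIGHTS.get(e, 0.0) for e in events)

def select_highlights(candidates, threshold=1.2):
    """Keep candidate segments whose highlight score exceeds the threshold."""
    return [seg for seg, events in candidates
            if highlight_score(events) > threshold]

clips = [("clip_a", ["multi_kill", "skill_combo"]),   # score 1.7, kept
         ("clip_b", ["survive_sustained_attack"])]    # score 0.6, dropped
```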
In one exemplary embodiment of the present disclosure, the model parameters of the machine learning model further comprise game player attributes, the method further comprising: acquiring a target game player attribute, wherein the target game player attribute is a game player attribute corresponding to the target game video stream; determining highlight video stream parameters matched with the target game player attributes and shooting parameters of a virtual camera matched with the target game player attributes from model parameters of the machine learning model; when identifying a target highlight video stream in target game video streams, taking highlight video stream parameters matched with the target game player attributes as highlight video stream parameters adopted by the machine learning model, and taking shooting parameters of a virtual camera matched with the target game player attributes as shooting parameters adopted by the machine learning model.
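A minimal sketch of matching highlight and camera parameters to a player's attributes; the profile keys and parameter values are hypothetical, since the disclosure only requires that the parameters "match" the target game player attributes:

```python
# Hypothetical parameter table keyed by a player-attribute profile.
PARAMS_BY_PROFILE = {
    "aggressive": {"highlight_threshold": 1.0, "camera_horizontal_deg": 45.0},
    "defensive":  {"highlight_threshold": 1.5, "camera_horizontal_deg": 120.0},
}

def params_for_player(profile, default="aggressive"):
    """Look up the highlight/camera parameters matched to the target
    player's attribute profile, falling back to a default profile."""
    return PARAMS_BY_PROFILE.get(profile, PARAMS_BY_PROFILE[default])
```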
In an exemplary embodiment of the present disclosure, the shooting parameters of the virtual camera include any one or more of the following: the virtual camera comprises an aiming point of the virtual camera, a shooting distance of the virtual camera, a horizontal angle of the virtual camera, a vertical angle of the virtual camera, a surrounding direction of the virtual camera and a surrounding speed of the virtual camera.
According to a second aspect of the present disclosure, there is provided a game video stream sharing apparatus, the apparatus comprising: the video stream identification module is used for identifying a target highlight video stream in the target game video stream by using highlight video stream parameters of the machine learning model after parameter optimization; the target highlight video stream is a game video stream obtained by shooting a virtual game scene by a virtual camera configured with shooting parameters of a machine learning model after parameter optimization; model parameters of the machine learning model at least comprise highlight video stream parameters and shooting parameters of the virtual camera; and the video stream sharing module is used for responding to the sharing operation of the target highlight video stream and sharing the target highlight video stream.
According to a third aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described game video stream sharing method.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the above-described game video stream sharing method via execution of the executable instructions.
The technical scheme of the present disclosure has the following beneficial effects:
in the game video stream sharing process, a target highlight video stream in the target game video stream is identified by using the highlight video stream parameters of the parameter-optimized machine learning model; the target highlight video stream is a game video stream obtained by shooting a virtual game scene with a virtual camera configured with the shooting parameters of the parameter-optimized machine learning model; the model parameters of the machine learning model comprise at least the highlight video stream parameters and the shooting parameters of the virtual camera; and the target highlight video stream is shared in response to a sharing operation on it. On the one hand, identifying and shooting the target highlight video stream through the highlight video stream parameters in the machine learning model and the shooting parameters of the virtual camera realizes automatic identification and shooting of highlight video, breaks the traditional fixed-view shooting mode, can obtain personalized highlight game scene pictures, and has high scene adaptability. On the other hand, the player does not need to record manually from a spectator view angle, which simplifies the player's operations to a certain extent.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely some embodiments of the present disclosure and that other drawings may be derived from these drawings without undue effort.
FIG. 1 illustrates a flowchart of a game video stream sharing method in the present exemplary embodiment;
FIG. 2 shows a flowchart of obtaining the parameter-optimized machine learning model in the present exemplary embodiment;
FIG. 3 illustrates a flowchart of a method of identifying a target highlight video stream in a target game video stream in the present exemplary embodiment;
FIG. 4 shows a flowchart of determining the highlight video stream parameters and the virtual-camera shooting parameters used to identify a target highlight video stream in the present exemplary embodiment;
FIG. 5 illustrates a flowchart of a method for identifying a target highlight video stream based on target game player attributes in the present exemplary embodiment;
FIG. 6 shows a block diagram of a game video stream sharing apparatus in the present exemplary embodiment;
FIG. 7 shows an electronic device for implementing the above game video stream sharing method in the present exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. One skilled in the relevant art will recognize, however, that the aspects of the disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
In the related art, generating a highlight game scene from the player's view via a capture-and-share function is generally suitable for fixed-view game types such as tower defense, card or shooting games. However, for free-view melee (cold-weapon) combat games, the most exciting view of a battle is not the player-controlled view but a bystander view, for example one that can frame both parties to the fight; in this case the capture-and-share function cannot satisfy the player well. If the player instead records the highlight manually from a spectator view, in a manner similar to in-game video recording, the operation is overly complicated and time-consuming.
In view of one or more of the problems described above, exemplary embodiments of the present disclosure provide a game video stream sharing method. The method is suitable for both fixed-view and free-view games. In a fixed-view game, it can fuse the views of multiple players and generate a highly complete highlight video stream for players to share. In a free-view game, fights between players involve repeated position changes and crossovers, enemies may slip behind the player, and the player must frequently rotate the view to observe; the player's own view is good for controlling the game but poor as a bystander view for highlight playback. The game video stream sharing method can better solve these problems.
The game video stream sharing method in one embodiment of the present disclosure may be run on a local terminal device or a server. When the game video stream sharing method runs on a server, the game video stream sharing method can be realized and executed based on a cloud interaction system, wherein the cloud interaction system comprises the server and the client device.
In an alternative embodiment, various cloud applications, such as cloud games, may run under the cloud interaction system. Taking a cloud game as an example, a cloud game is a game mode based on cloud computing. In this mode, the body that runs the game program is separated from the body that presents the game picture: the storage and execution of the game video stream sharing method are completed on a cloud game server, while the client device receives and sends data and presents the game picture. The client device may be a user-side display device with a data transmission function, such as a mobile terminal, a television, a computer or a handheld computer, but the body that executes the game video stream sharing method is the cloud game server. When playing, the player operates the client device to send operation instructions to the cloud game server; the cloud game server runs the game according to the instructions, encodes and compresses data such as game pictures, and returns them to the client device through the network; finally the client device decodes the data and outputs the game picture.
In an alternative embodiment, taking a game as an example, the local terminal device stores a game program and is used to present a game screen. The local terminal device is used for interacting with the player through the graphical user interface, namely, conventionally downloading and installing the game program through the electronic device and running. The manner in which the local terminal device provides the graphical user interface to the player may include a variety of ways, for example, it may be rendered for display on a display screen of the terminal, or provided to the player by holographic projection. For example, the local terminal device may include a display screen for presenting a graphical user interface including game visuals, and a processor for running the game, generating the graphical user interface, and controlling the display of the graphical user interface on the display screen.
An exemplary embodiment of the present disclosure provides a game video stream sharing method, as shown in fig. 1, specifically including the following steps S110 to S120:
step S110, identifying a target highlight video stream in the target game video stream by using highlight video stream parameters of the machine learning model after parameter optimization; the target highlight video stream is a game video stream obtained by shooting a virtual game scene by a virtual camera configured with shooting parameters of the machine learning model after parameter optimization; model parameters of the machine learning model at least comprise highlight video stream parameters and shooting parameters of the virtual camera;
in step S120, the target highlight video stream is shared in response to the sharing operation of the target highlight video stream.
In the game video stream sharing process, on one hand, identifying and shooting the target highlight video stream in the target game video stream through the highlight video stream parameters in the machine learning model and the shooting parameters of the virtual camera realizes automatic identification and shooting of highlight video, breaks the traditional fixed-view shooting mode, can obtain personalized highlight game scene pictures, and has high scene adaptability. On the other hand, the player does not need to record manually from a spectator view angle, which simplifies the player's operations to a certain extent.
Each step in fig. 1 is specifically described below.
Step S110, identifying a target highlight video stream in the target game video stream by using highlight video stream parameters of the machine learning model after parameter optimization; the target highlight video stream is a game video stream obtained by shooting a virtual game scene by a virtual camera configured with shooting parameters of the machine learning model after parameter optimization; the model parameters of the machine learning model at least comprise highlight video stream parameters and shooting parameters of the virtual camera.
In step S110, the target highlight video stream in the target game video stream may be identified in real time during the game process and before the game is finished, or may be identified after the game is finished. In practical applications, the timing of identifying the target highlight video stream in the target game video stream may be set according to practical needs, and is not particularly limited herein.
The target game video stream refers to a game video stream corresponding to a game play for identifying a highlight period, and may be a game video stream corresponding to a current game play or a video stream corresponding to a history game play. The target highlight video stream refers to a game video stream corresponding to a game highlight period identified in a game pair corresponding to the target game video stream, and the target highlight video stream can be obtained by controlling the virtual camera to shoot a virtual game scene corresponding to the identified game highlight period based on shooting parameters. The virtual game scene may include game virtual objects in the game pair that are controlled by the game player.
The machine learning model refers to a machine learning model capable of continuously training model parameters such as highlight video stream parameters, shooting parameters of a virtual camera and the like, and the training of the machine learning model can adopt a series of mainstream machine learning methods such as a neural network and the like to continuously optimize the model parameters of the machine learning model. The model parameters of the machine learning model at least comprise highlight video stream parameters and shooting parameters of the virtual camera.
The highlight video stream parameters are parameters used to identify highlight game periods from the target game video stream, and may include event weight values corresponding to specified game events. A specified game event may be, for example: a game virtual object dealing the highest damage per unit time, a game virtual object achieving the highest number of kills per unit time, a game virtual object attacking continuously per unit time and ultimately surviving, a game virtual object's skills hitting continuously per unit time, and other such game event types. The game virtual object here is one located in the game play corresponding to the target game video stream; it may be the virtual object controlled by the current game player, or a previously designated virtual object controlled by any other game player in that play. The game event types listed here are merely illustrative; other notable specified game events may be configured in actual use, and the specific types of specified game events are not limited here. The event weight value corresponding to a specified game event is a parameter used to score the highlight level of that game event.
The shooting parameters of the virtual camera refer to parameters configured when the virtual camera shoots a game scene in the game scene, and may include any one or more of the following: shooting parameters such as aiming point of the virtual camera, shooting distance of the virtual camera, horizontal angle of the virtual camera, vertical angle of the virtual camera, surrounding direction and speed of the virtual camera and the like.
The aiming point of the virtual camera refers to the center point of the virtual game scene shot by the virtual camera, and a parameter from 0 to 1 can be used for representing the distance between the center point and the virtual attack object or the virtual attacked object. For example: the center point falls on the center line of the virtual attack object and the virtual attacked object and can be represented by 0.5; the center point falls on the virtual attack object, which can be represented by 0; the center point falls on the virtual attacked object and can be represented by 1. When the center point falls on the center line of the virtual attack object and the virtual attacked object, the virtual camera can focus between the virtual attack object and the virtual attacked object to shoot a virtual game scene; when the center point falls on the virtual attack object, the virtual camera can focus and shoot the virtual attack object in the virtual game scene; when the center point falls on the virtual attacked object, the virtual camera can focus on shooting the virtual attacked object located in the virtual game scene. The virtual attack object refers to a virtual object which initiates an attack in a virtual game scene, and the virtual attacked object refers to a virtual object which is attacked in the virtual game scene.
The shooting distance of the virtual camera refers to the distance between the virtual camera and the aiming point of the lens, the shooting distance when the shooting picture just can accommodate the virtual attack object and the virtual attacked object is recorded as 1, and then the shooting distance of the lens is described by adopting a parameter within a positive real number range.
The horizontal angle of the virtual camera refers to the horizontal direction angle of the virtual camera, which can be described by a parameter from 0 to 180 degrees. For example: when the horizontal angle is 0 degrees, the virtual camera looks from the virtual attack object toward the virtual attacked object; when the horizontal angle is within (0°, 180°), the virtual camera stands aside from the line connecting the virtual attack object and the virtual attacked object; when the horizontal angle is 180 degrees, the virtual camera looks from the virtual attacked object toward the virtual attack object. Across these cases the horizontal angle sweeps exactly 180 degrees, measured relative to the line connecting the virtual attack object and the virtual attacked object.
The vertical angle of the virtual camera refers to its elevation angle, which can be set within the range (-90°, 90°). Taking 0 degrees as the initial value of the vertical angle, the virtual camera shoots in the horizontal direction.
The surrounding direction and surrounding speed of the virtual camera describe the camera's rotating shot around the virtual attack object and the virtual attacked object: the sign of the parameter represents the surrounding direction of the lens, while its magnitude represents the surrounding speed; the larger the magnitude, the faster the camera circles.
It should be noted that because a game battle is a dynamically changing scene, the relative position and distance between the virtual attack object and the virtual attacked object may change. When the virtual camera shoots according to the configured shooting parameters, it therefore follows the battle situation correspondingly, making the lens picture of the highlight scene more dynamic and stereoscopic.
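As an illustrative sketch only (the patent does not prescribe any API), the shooting parameters described above can be encoded as a small data structure with the value ranges stated in the text; all names and the validation logic are assumptions for illustration:

```python
from dataclasses import dataclass


@dataclass
class CameraShotParams:
    """Shooting parameters of the virtual camera, as described above.

    Field names are illustrative; only the value ranges come from the text.
    """
    aim: float             # 0 = on the attack object, 0.5 = midpoint, 1 = on the attacked object
    distance: float        # 1.0 = frame just fits both objects; any positive real
    horizontal_deg: float  # 0..180, relative to the attacker-to-attacked connecting line
    vertical_deg: float    # elevation angle, open interval (-90, 90); 0 = horizontal
    orbit: float           # sign = surrounding direction, magnitude = surrounding speed

    def validate(self) -> None:
        # Enforce the parameter ranges given in the description
        assert 0.0 <= self.aim <= 1.0
        assert self.distance > 0.0
        assert 0.0 <= self.horizontal_deg <= 180.0
        assert -90.0 < self.vertical_deg < 90.0


# Example: a side-on medium shot focused midway between the two objects,
# slowly circling them counterclockwise
shot = CameraShotParams(aim=0.5, distance=1.2, horizontal_deg=90.0,
                        vertical_deg=15.0, orbit=0.5)
shot.validate()
```

A configuration like this could be produced by the machine learning model and handed to the camera system each time a highlight scene is shot.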
Because the model parameters of the machine learning model can be continuously optimized, the method can dynamically adapt to complex and changeable game scenes, which helps generate a target highlight video stream that better matches the sharing tendencies of game players.
In an alternative embodiment, the parameter-optimized machine learning model can be obtained by training the machine learning model with the historical highlight videos as training data and the interactive feedback data of those videos as the objective function.
A historical highlight video is a highlight video corresponding to a historical game match. Its interactive feedback data can include feedback on whether the video was shared, liked, commented on, and so on, and reflects the game player's satisfaction with the video, that is, whether it matched the player's personalized sharing requirements. For example, if a historical highlight video was shared, it matched the personalized sharing requirement of the game player; if it was not shared, it did not.
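A minimal sketch of how such interactive feedback could be turned into a training objective, assuming a simple binary label (the field names `shared`, `liked`, and `commented` are illustrative assumptions, not defined by the patent):

```python
def feedback_label(interactions: dict) -> int:
    """Convert the interactive feedback on a historical highlight video into a
    binary objective value: 1 if the video matched the player's personalized
    sharing requirement (it was shared, liked, or commented on), else 0.
    Field names are illustrative assumptions.
    """
    return int(any(interactions.get(k) for k in ("shared", "liked", "commented")))


# A shared video is a positive training example
label = feedback_label({"shared": True, "liked": False})
```

In practice the objective could also weight the different feedback types differently; the binary form above is only the simplest case consistent with the shared/not-shared example in the text.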
In the above process, the machine learning model is trained with the historical highlight videos as training data and their interactive feedback data as the objective function, so that the model parameters used to generate the target highlight video stream are continuously optimized. This breaks the traditional fixed-viewpoint shooting mode, realizes personalized identification of the target highlight video stream, and adapts well to varied scenes.
The model parameters of the machine learning model can be optimized within preset parameter intervals; that is, a value range can be determined in advance for each configurable model parameter. This eliminates meaningless regions of the value space while still leaving the configuration parameters some freedom of movement, avoiding the problem that fixed parameters can hardly cover diverse game scenes.
In addition, the model parameters of the machine learning model may also be discrete parameters sampled from a normal distribution. For example, when the preset value range of the virtual camera's shooting distance is [10, 30], the generated shooting distance may follow a normal distribution with a mean of 20 (the center of the value range) and a standard deviation of 10. In this way, the parameter values fall in the middle of their value ranges under most conditions, while exceeding the range with a small probability in a few cases; if such an out-of-range highlight video does trigger sharing feedback from a game player, that feedback can still be captured.
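The sampling scheme in the example above can be sketched as follows; the function name and the choice of standard deviation (half the range width, matching the mean 20 / standard deviation 10 example for [10, 30]) follow the text, while everything else is illustrative:

```python
import random


def sample_shot_distance(lo: float = 10.0, hi: float = 30.0, rng=random) -> float:
    """Sample a shooting distance from a normal distribution centered on the
    preset value range [lo, hi]: mean = range center, std = half the width.

    Most samples land inside the range, but a small fraction deliberately
    falls outside it, so out-of-range shots that happen to earn sharing
    feedback can still be discovered.
    """
    mean = (lo + hi) / 2.0   # 20 for [10, 30]
    std = (hi - lo) / 2.0    # 10 for [10, 30]
    return rng.gauss(mean, std)


random.seed(0)
samples = [sample_shot_distance() for _ in range(10_000)]
# Roughly 68% of samples fall within one standard deviation of the mean,
# i.e. inside the preset range [10, 30]
inside = sum(1 for s in samples if 10.0 <= s <= 30.0) / len(samples)
```

A tighter standard deviation would keep more samples inside the range at the cost of less exploration outside it; that trade-off is a design choice the patent leaves open.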
In an alternative embodiment, the steps shown in fig. 2 may be further performed to obtain a machine learning model after parameter optimization, which may specifically include the following steps S210 to S230:
step S210, dividing the history highlight video into a training set, a verification set and a test set;
step S220, performing iterative optimization on model parameters of the machine learning model based on the training set, the verification set and the test set; the training set is used for training the machine learning model, the verification set is used for evaluating the machine learning model, and the test set is used for testing the machine learning model;
Step S230, when the model parameters of the machine learning model meet the termination condition of iterative optimization, obtaining the machine learning model after parameter optimization.
The training set is a set of historical highlight videos used to train the model parameters of the machine learning model; the verification set is a set of historical highlight videos used to verify the performance of the machine learning model; the test set is a set of historical highlight videos used to evaluate the generalization of the trained machine learning model.
The termination condition of the iterative optimization is a preset condition for stopping the iterative optimization of the machine learning model. It may be, for example: the number of optimization iterations reaching a preset number, the time spent on iterative optimization reaching a preset duration, or the error of the machine learning model falling below a specific threshold. In practice, the termination condition can be set as needed and is not specifically limited herein.
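Steps S210 through S230 can be sketched as below; the split ratios, function names, and the shape of the optimization loop are illustrative assumptions, while the three termination conditions mirror the ones listed in the text:

```python
import random
import time


def split_highlights(videos, train: float = 0.7, val: float = 0.15, seed: int = 42):
    """Step S210: shuffle the historical highlight videos and split them into
    training, verification, and test sets. Ratios are illustrative."""
    vids = list(videos)
    random.Random(seed).shuffle(vids)
    n_train = int(len(vids) * train)
    n_val = int(len(vids) * val)
    return vids[:n_train], vids[n_train:n_train + n_val], vids[n_train + n_val:]


def optimize(model_step, max_iters: int = 100, min_error: float = 1e-3,
             max_seconds: float = 3600.0):
    """Steps S220-S230: iterate until any termination condition holds, i.e.
    the iteration budget, the error threshold, or the wall-clock limit.
    `model_step` performs one optimization round and returns the current error."""
    start = time.monotonic()
    for i in range(max_iters):
        err = model_step()
        if err < min_error or time.monotonic() - start > max_seconds:
            break
    return i + 1, err
```

The verification set would be consulted inside `model_step` to compute `err`; that detail is omitted here since the patent does not fix a concrete evaluation metric.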
In the steps shown in fig. 2, splitting the historical highlight data into sample sets during iterative optimization reduces the model's generalization error, so that when the trained machine learning model is deployed in a real environment, it provides model parameters with stronger suitability and can identify a more exciting highlight video stream from the target game video. In addition, based on the optimized model parameters, the shots of game highlight scenes can be dynamically optimized, continuously improving the camera-movement effect of the virtual camera; this can reduce the workload of developers to a certain extent and avoid the defects of hand-set rules.
Furthermore, the machine learning model referred to above may include, but is not limited to, the following components: a data storage and data calling component, a data preprocessing component, a model training component, and a training result summarizing and visualizing component. The data storage and data calling component stores and retrieves the various data generated while the machine learning model runs, as well as the sample data used for iterative optimization (such as historical highlight videos), and thereby supports the operation of the machine learning model. The data preprocessing component cleans the data to be stored in the data storage and data calling component and removes abnormal data, for example data corrupted because the sharing operation on a target highlight video stream was not returned due to network problems. In addition, the data preprocessing component can normalize and standardize each feature dimension of the data to be stored. The model training component can learn with a neural network, taking the interactive feedback data of the historical highlight videos as the objective function and optimizing model parameters such as the highlight video stream parameters and the shooting parameters of the virtual camera. The training result summarizing and visualizing component visualizes the training results obtained by the model training component (such as the optimized model parameters, the prediction accuracy of the machine learning model, and other relevant training data), so that a game designer can judge the training status of the model from these results.
Because of the complexity and variability of game play, a single match may contain multiple game moments of relatively high interest. Therefore, when identifying a target highlight video stream in the target game video stream using the highlight video stream parameters of the parameter-optimized machine learning model, candidate game video streams satisfying one or more specified game events can first be screened out of the target game video stream, and the target highlight video stream can then be determined from the candidates according to the event weights of the specified game events they satisfy. It should be noted that the game duration corresponding to a candidate game video stream may be a preset fixed duration, or may be determined by the start time and end time of the specified game events related to that candidate; this is not specifically limited herein.
Because a single candidate video stream can satisfy several specified game events at once, event weights can be set for the specified game events to quantify the level of highlight of each candidate, which helps screen the truly exciting game scene content more accurately from among multiple candidate video streams. It should be noted that the event weight of a specified game event is related to how exciting that event is, so different game events may carry different event weights, and these weights can be continuously updated as the model parameters of the machine learning model are iteratively optimized.
In an alternative embodiment, the highlight video stream parameters may include a specified game event and an event weight corresponding to the specified game event, and the identifying the target highlight video stream in the target game video stream using the highlight video stream parameters of the machine learning model after the parameter optimization in step S110 may be implemented by the steps shown in fig. 3:
step S310, determining the highlight score value of the candidate game video stream in the target game video stream according to the designated game event in the target game video stream and the event weight corresponding to the designated game event; the highlight score value characterizes the level of highlighting of the candidate game video stream;
step S320, the candidate game video stream with the highlight score exceeding the highlight score threshold is taken as the target highlight video stream.
It should be noted that, to facilitate quantization and comparison of the level of highlights between candidate game video streams, the event weights of the game events may be within a specific range, such as a [0,1] range.
When determining the highlight score of a candidate game video stream according to the specified game events in the target game video stream and their event weights, candidate game video streams satisfying one or more specified game events can first be screened out of the target game video stream, and the event weights of the specified game events satisfied by each candidate can then be summed to obtain that candidate's highlight score.
In the steps shown in fig. 3, the level of highlight of each candidate game video stream is quantified by its highlight score, so as to obtain a target highlight video stream that better meets the game player's sharing needs.
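Steps S310 and S320 reduce to a weighted sum and a threshold test, as in the following sketch; the event names, weights, and threshold are invented for illustration, with weights kept in the [0, 1] range mentioned in the text:

```python
def highlight_score(candidate_events, event_weights):
    """Step S310: sum the event weights of the specified game events that a
    candidate video stream satisfies. Unknown events contribute nothing."""
    return sum(event_weights.get(e, 0.0) for e in candidate_events)


def select_highlights(candidates, event_weights, threshold):
    """Step S320: keep the candidates whose highlight score exceeds the
    highlight score threshold."""
    return [clip for clip, events in candidates.items()
            if highlight_score(events, event_weights) > threshold]


# Illustrative event weights (within [0, 1]) and candidate clips
weights = {"triple_kill": 0.9, "first_blood": 0.6, "tower_destroyed": 0.4}
candidates = {
    "clip_a": ["triple_kill", "first_blood"],   # score around 1.5
    "clip_b": ["tower_destroyed"],              # score 0.4
}
selected = select_highlights(candidates, weights, threshold=1.0)
```

Here `clip_a` exceeds the threshold and becomes a target highlight video stream, while `clip_b` is discarded; in the patent's scheme the weights themselves would keep shifting as the model parameters are iteratively optimized.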
In addition, the highlight video stream parameters and the shooting parameters of the virtual camera among the model parameters can be bound to game player attributes, so that different model parameters are matched to different game players, thereby meeting each game player's personalized requirements for the target highlight video stream.
In an alternative embodiment, the model parameters of the machine learning model may further include game player attributes, and the determining the highlight video stream parameters adopted for identifying the target highlight video stream and the shooting parameters of the virtual camera through the steps shown in fig. 4 may specifically include the following steps S410 to S430:
step S410, obtaining a target game player attribute, wherein the target game player attribute is a game player attribute corresponding to a target game video stream;
step S420, determining highlight video stream parameters matched with the attributes of the target game player and shooting parameters of the virtual camera matched with the attributes of the target game player from model parameters of the machine learning model;
In step S430, when identifying the target highlight video stream in the target game video stream, the highlight video stream parameter matching the target game player attribute is used as the highlight video stream parameter adopted by the machine learning model, and the shooting parameter of the virtual camera matching the target game player attribute is used as the shooting parameter adopted by the machine learning model.
The target virtual object may be a preset virtual object controlled by the current game player, or a virtual object controlled by any other game player in the game match corresponding to the target game video stream. Specifically, the target virtual object may be determined from the current match in response to a selection operation by the current game player.
Wherein the target game player attributes may include: attributes such as player segment, player gender, virtual objects (e.g., avatars or virtual characters, etc.) controlled by the player, weapons or props assembled by the player, and ornamental skins reflect various game features or properties of the game player in the game.
Since different game players may prefer different game highlight video content and different shooting perspectives, the content and perspectives suited to different controlled target virtual objects may also differ. Illustratively, a game player who favors close combat with cold weapons may prefer a lens perspective that reveals combat details at close range, whereas a player who favors ranged attacks and ranged skills may prefer long-distance global shots. For example, when the target virtual object is a fighter, a close-range perspective showing combat detail is more suitable; when the target virtual object is a shooter, a long-distance global lens perspective is more suitable.
In the steps shown in fig. 4, a mapping relationship is established between the game player attribute and the capturing parameters of the highlight video stream parameter and the virtual camera, so as to adopt the highlight video stream parameter and the capturing parameters of the virtual camera, which are in accordance with the characteristics of the game player, thereby generating a personalized target highlight video stream. The target game player attribute herein refers to a game player attribute corresponding to the target virtual object.
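The attribute-to-parameter mapping of steps S410 and S420 can be sketched as a table lookup with a fallback; the attribute keys (`role`, `segment`), the table contents, and the fallback values are illustrative assumptions, not prescribed by the patent:

```python
def params_for_player(player_attrs: dict, param_table: dict, default: dict) -> dict:
    """Steps S410-S420: look up the highlight video stream parameters and
    camera shooting parameters matched to the player's attributes, falling
    back to defaults when no match exists. Keys are illustrative."""
    key = (player_attrs.get("role"), player_attrs.get("segment"))
    return param_table.get(key, default)


table = {
    # Close-combat fighters: short shooting distance to show combat detail
    ("fighter", "gold"): {"distance": 0.8, "horizontal_deg": 60.0},
    # Ranged shooters: long-distance global shot
    ("shooter", "gold"): {"distance": 2.5, "horizontal_deg": 90.0},
}
fallback = {"distance": 1.0, "horizontal_deg": 90.0}

cfg = params_for_player({"role": "shooter", "segment": "gold"}, table, fallback)
```

In the patent's full scheme this mapping would not be a hand-written table but part of the machine learning model's parameters, updated as player feedback accumulates; the lookup shape stays the same.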
It should be noted that when the model parameters of the machine learning model include game player attributes, the game player attributes corresponding to the historical highlight videos, the highlight video stream parameters, and the shooting parameters of the virtual camera can be used as training data, with the interactive feedback data of the historical highlight videos (for example, the sharing results of target highlight video streams) as the objective function, to train and optimize the model parameters of the machine learning model. The resulting parameter-optimized machine learning model dynamically improves the match between the identified target highlight video stream and the game player's sharing requirements.
As shown in fig. 5, a flow chart for identifying a target highlight video stream based on target game player attributes is provided, which meets the game player's personalized needs for the target highlight video stream and may include the following steps.
Step S510, obtaining a target game player attribute, wherein the target game player attribute is a game player attribute corresponding to a target game video stream;
step S520, determining highlight video stream parameters matched with the attributes of the target game player and shooting parameters of the virtual camera matched with the attributes of the target game player from model parameters of the machine learning model after parameter optimization;
step S530, identifying a target highlight video stream in the target game video stream by using highlight video stream parameters matched with the target game player attributes; the target highlight video stream is a game video stream obtained by shooting a virtual game scene by a virtual camera configured with shooting parameters matched with the attributes of the target game player.
After identifying the target highlight video stream, step S120 in fig. 1 may be continued.
In step S120, the target highlight video stream is shared in response to the sharing operation of the target highlight video stream.
The sharing operation on the target highlight video stream refers to a player operation that shares the target highlight video stream, for example, a trigger operation on a quick-share button on the game settlement interface. The sharing operation can share the target highlight video stream to a social platform in the form of a GIF animation, making the shared content more stereoscopic and vivid. GIF is a common animated image format in which multiple image frames are combined into an animation.
In an alternative embodiment, the target highlight video stream may also be used as a historical highlight video to update the training data; a new round of iterative optimization is then performed on the model parameters of the parameter-optimized machine learning model based on the updated training data, so as to continuously improve the match between subsequently identified highlight game video streams and the game player's personalized video sharing requirements.
The exemplary embodiment of the present disclosure further provides a game video stream sharing device, as shown in fig. 6, the game video stream sharing device 600 may include:
a video stream identification module 610 for identifying a target highlight video stream in the target game video stream using highlight video stream parameters of the parameter-optimized machine learning model; the target highlight video stream is a game video stream obtained by shooting a virtual game scene by a virtual camera configured with shooting parameters of the machine learning model after parameter optimization; model parameters of the machine learning model at least comprise highlight video stream parameters and shooting parameters of the virtual camera;
the video stream sharing module 620 is configured to share the target highlight video stream in response to a sharing operation of the target highlight video stream.
In an alternative embodiment, the game video stream sharing device 600 may include: the first model training module is used for training and optimizing model parameters of the machine learning model by taking the historical highlight video as training data and the interactive feedback data of the historical highlight video as an objective function, so as to obtain the machine learning model with optimized parameters.
In an alternative embodiment, the first model training module may be configured to: dividing the historical highlight video into a training set, a verification set and a test set; performing iterative optimization on model parameters of the machine learning model based on the training set, the verification set and the test set; the training set is used for training the machine learning model, the verification set is used for evaluating the machine learning model, and the test set is used for testing the machine learning model; and when the model parameters of the machine learning model meet the termination conditions of iterative optimization, obtaining the machine learning model after parameter optimization.
In an alternative embodiment, the game video stream sharing device 600 may further include a second model training module, which may be configured to: use the target highlight video stream as a historical highlight video to update the training data; and perform a new round of iterative optimization on the model parameters of the parameter-optimized machine learning model based on the updated training data.
In an alternative embodiment, model parameters of the machine learning model in the game video stream sharing apparatus 600 may be optimized within the range of model parameter intervals.
In an alternative embodiment, the model parameters of the machine learning model in the game video stream sharing device 600 may be discrete model parameters that satisfy a normal distribution.
In an alternative embodiment, the highlight video stream parameter includes a specified game event and an event weight corresponding to the specified game event, and the video stream identification module 610 may be configured to: determining the highlight score value of the candidate game video stream in the target game video stream according to the designated game event in the target game video stream and the event weight corresponding to the designated game event; the highlight score value characterizes the level of highlighting of the candidate game video stream; and taking the candidate game video streams with the highlight score value exceeding the highlight score threshold value as target highlight video streams.
In an alternative embodiment, the model parameters of the machine learning model further include game player attributes, and the game video stream sharing device 600 further includes: an attribute acquisition module, configured to acquire a target game player attribute, where the target game player attribute is the game player attribute corresponding to the target game video stream; a parameter determination module, configured to determine, from the model parameters of the machine learning model, highlight video stream parameters matching the target game player attribute and shooting parameters of the virtual camera matching the target game player attribute; and a parameter use module, configured to, when identifying the target highlight video stream in the target game video stream, use the highlight video stream parameters matching the target game player attribute as the highlight video stream parameters adopted by the machine learning model, and use the shooting parameters of the virtual camera matching the target game player attribute as the shooting parameters adopted by the machine learning model.
In an alternative embodiment, the shooting parameters of the virtual camera in the game video stream sharing device 600 may include any one or more of the following: the virtual camera comprises an aiming point of the virtual camera, a shooting distance of the virtual camera, a horizontal angle of the virtual camera, a vertical angle of the virtual camera, a surrounding direction of the virtual camera and a surrounding speed of the virtual camera.
The specific details of each part of the game video stream sharing device 600 are described in detail in the method part embodiments, and the details not disclosed can be referred to the embodiment content of the method part, so that the details are not described again.
Exemplary embodiments of the present disclosure also provide a computer readable storage medium having stored thereon a program product capable of implementing the game video stream sharing method described above in the present specification. In some possible implementations, aspects of the present disclosure may also be implemented in the form of a program product comprising program code for causing an electronic device to carry out the steps according to the various exemplary embodiments of the disclosure as described in the "exemplary methods" section of this specification, when the program product is run on an electronic device. The program product may employ a portable compact disc read-only memory (CD-ROM) and comprise program code and may be run on an electronic device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
The exemplary embodiment of the disclosure also provides an electronic device capable of implementing the game video stream sharing method. An electronic device 700 according to such an exemplary embodiment of the present disclosure is described below with reference to fig. 7. The electronic device 700 shown in fig. 7 is merely an example and should not be construed as limiting the functionality and scope of use of the disclosed embodiments.
As shown in fig. 7, the electronic device 700 may be embodied in the form of a general purpose computing device. Components of electronic device 700 may include, but are not limited to: at least one processing unit 710, at least one memory unit 720, a bus 730 connecting the different system components (including the memory unit 720 and the processing unit 710), and a display unit 740.
The storage unit 720 stores program code that can be executed by the processing unit 710, so that the processing unit 710 performs the steps according to various exemplary embodiments of the present disclosure described in the above-described "exemplary method" section of the present specification. For example, the processing unit 710 may perform any one or more of the method steps of fig. 1-5.
The memory unit 720 may include readable media in the form of volatile memory units, such as Random Access Memory (RAM) 721 and/or cache memory 722, and may further include Read Only Memory (ROM) 723.
The storage unit 720 may also include a program/utility 724 having a set (at least one) of program modules 725, such program modules 725 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 730 may be a bus representing one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 700 may also communicate with one or more external devices 800 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 700, and/or any device (e.g., router, modem, etc.) that enables the electronic device 700 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 750. Also, electronic device 700 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, through network adapter 760. As shown, network adapter 760 communicates with other modules of electronic device 700 over bus 730. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 700, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the exemplary embodiments of the present disclosure.
Furthermore, the above-described figures are only schematic illustrations of the processes included in the method according to the exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes shown in these figures do not indicate or limit their temporal order. It is also readily understood that these processes may be performed synchronously or asynchronously, for example, in a plurality of modules.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with exemplary embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, various aspects of the disclosure may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein generally as a "circuit," "module," or "system." Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following its general principles and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A method for sharing a game video stream, the method comprising:
identifying a target highlight video stream in a target game video stream by using highlight video stream parameters of a parameter-optimized machine learning model, wherein the target highlight video stream is a game video stream obtained by shooting a virtual game scene with a virtual camera configured with shooting parameters of the parameter-optimized machine learning model, and model parameters of the machine learning model comprise at least the highlight video stream parameters and the shooting parameters of the virtual camera; and
in response to a sharing operation on the target highlight video stream, sharing the target highlight video stream.
2. The method according to claim 1, wherein the method further comprises:
training and optimizing model parameters of the machine learning model by taking historical highlight videos as training data and interactive feedback data of the historical highlight videos as an objective function, to obtain the parameter-optimized machine learning model.
3. The method according to claim 2, wherein the method further comprises:
dividing the historical highlight videos into a training set, a validation set, and a test set;
iteratively optimizing the model parameters of the machine learning model based on the training set, the validation set, and the test set, wherein the training set is used for training the machine learning model, the validation set is used for evaluating the machine learning model, and the test set is used for testing the machine learning model; and
obtaining the parameter-optimized machine learning model when the model parameters of the machine learning model satisfy a termination condition of the iterative optimization.
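The split described in claim 3 can be sketched in a few lines. This is a minimal illustration only: the patent does not specify split ratios or a shuffling scheme, so the 70/15/15 proportions and the seeded shuffle below are assumptions.

```python
import random

def split_dataset(videos, train_frac=0.7, val_frac=0.15, seed=0):
    """Split historical highlight videos into training, validation and test sets."""
    videos = videos[:]                       # avoid mutating the caller's list
    random.Random(seed).shuffle(videos)
    n_train = int(len(videos) * train_frac)
    n_val = int(len(videos) * val_frac)
    return (videos[:n_train],                      # trains the model
            videos[n_train:n_train + n_val],       # evaluates it during optimization
            videos[n_train + n_val:])              # tests the final model

train_set, val_set, test_set = split_dataset(list(range(100)))
```

Iterative optimization would then loop over the training set, check the validation score each round, and stop once the termination condition (e.g., no further validation improvement) is met.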
4. The method according to claim 2, wherein the method further comprises:
updating the training data by taking the target highlight video stream as a historical highlight video; and
performing a new round of iterative optimization on the model parameters of the parameter-optimized machine learning model based on the updated training data.
5. The method of claim 1, wherein the model parameters of the machine learning model are optimized within corresponding model parameter intervals.
6. The method of claim 1, wherein the model parameters of the machine learning model are discrete model parameters that satisfy a normal distribution.
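Claims 5 and 6 together suggest parameters that are drawn from a normal distribution, discretized, and kept within a fixed interval. The sketch below illustrates one way to do that; the specific mean, step size, and bounds are illustrative assumptions, not values from the patent.

```python
import random

def sample_discrete_param(mean, std, step, lo, hi, rng):
    """Draw a parameter value from a normal distribution, snap it to a
    discrete grid of the given step, and clip it to [lo, hi]."""
    x = rng.gauss(mean, std)
    x = round(x / step) * step          # discretize (claim 6)
    return min(max(x, lo), hi)          # stay within the parameter interval (claim 5)

rng = random.Random(0)
weights = [sample_discrete_param(mean=5.0, std=1.0, step=0.5, lo=0.0, hi=10.0, rng=rng)
           for _ in range(5)]
```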
7. The method of claim 1, wherein the highlight video stream parameters include a specified game event and an event weight corresponding to the specified game event, and wherein identifying the target highlight video stream in the target game video stream by using the highlight video stream parameters of the parameter-optimized machine learning model comprises:
determining a highlight score value of a candidate game video stream in the target game video stream according to the specified game event in the target game video stream and the event weight corresponding to the specified game event, wherein the highlight score value characterizes a highlight level of the candidate game video stream; and
taking candidate game video streams whose highlight score values exceed a highlight score threshold as target highlight video streams.
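The scoring step in claim 7 amounts to a weighted sum over detected game events followed by a threshold test. The event names, weights, and threshold below are hypothetical placeholders; in the patented scheme they would be learned model parameters.

```python
# Hypothetical event weights standing in for learned model parameters.
EVENT_WEIGHTS = {"multi_kill": 5.0, "headshot": 3.0, "objective_capture": 2.0}
SCORE_THRESHOLD = 8.0

def highlight_score(events, weights):
    """Sum the weights of the specified game events found in a candidate stream."""
    return sum(weights.get(e, 0.0) for e in events)

def select_highlights(candidates, weights, threshold):
    """Keep candidate streams whose highlight score exceeds the threshold."""
    return [c for c in candidates
            if highlight_score(c["events"], weights) > threshold]

clips = [
    {"id": 1, "events": ["headshot", "multi_kill", "objective_capture"]},  # score 10.0
    {"id": 2, "events": ["headshot"]},                                     # score 3.0
]
targets = select_highlights(clips, EVENT_WEIGHTS, SCORE_THRESHOLD)
```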
8. The method of claim 1, wherein the model parameters of the machine learning model further comprise game player attributes, and the method further comprises:
acquiring a target game player attribute, wherein the target game player attribute is a game player attribute corresponding to the target game video stream;
determining, from the model parameters of the machine learning model, highlight video stream parameters matching the target game player attribute and shooting parameters of a virtual camera matching the target game player attribute; and
when identifying the target highlight video stream in the target game video stream, using the highlight video stream parameters matching the target game player attribute as the highlight video stream parameters adopted by the machine learning model, and using the shooting parameters of the virtual camera matching the target game player attribute as the shooting parameters adopted by the machine learning model.
9. The method of claim 1, wherein the shooting parameters of the virtual camera include any one or more of:
an aiming point of the virtual camera, a shooting distance of the virtual camera, a horizontal angle of the virtual camera, a vertical angle of the virtual camera, an orbit direction of the virtual camera, and an orbit speed of the virtual camera.
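The shooting parameters enumerated in claim 9 map naturally onto a small configuration record. The field names, units, and sample values below are illustrative assumptions; the patent only names the parameters, not their representation.

```python
from dataclasses import dataclass

@dataclass
class VirtualCameraParams:
    """Shooting parameters from claim 9; names and units are illustrative."""
    aim_point: tuple         # (x, y, z) point in the scene the camera aims at
    distance: float          # shooting distance from the aim point
    horizontal_angle: float  # degrees around the vertical axis
    vertical_angle: float    # degrees of elevation above the horizon
    orbit_direction: int     # +1 clockwise, -1 counter-clockwise
    orbit_speed: float       # degrees per second while orbiting the aim point

cam = VirtualCameraParams(aim_point=(0.0, 1.7, 0.0), distance=6.0,
                          horizontal_angle=45.0, vertical_angle=15.0,
                          orbit_direction=1, orbit_speed=10.0)
```

In the patented scheme these values would be configured on the virtual camera by the parameter-optimized machine learning model before the virtual game scene is shot.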
10. A game video stream sharing apparatus, the apparatus comprising:
a video stream identification module, configured to identify a target highlight video stream in a target game video stream by using highlight video stream parameters of a parameter-optimized machine learning model, wherein the target highlight video stream is a game video stream obtained by shooting a virtual game scene with a virtual camera configured with shooting parameters of the parameter-optimized machine learning model, and model parameters of the machine learning model comprise at least the highlight video stream parameters and the shooting parameters of the virtual camera; and
a video stream sharing module, configured to share the target highlight video stream in response to a sharing operation on the target highlight video stream.
11. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method of any one of claims 1 to 9.
12. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any one of claims 1 to 9 via execution of the executable instructions.
CN202210411602.0A 2022-04-19 2022-04-19 Game video stream sharing method and device, storage medium and electronic equipment Pending CN116966557A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210411602.0A CN116966557A (en) 2022-04-19 2022-04-19 Game video stream sharing method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210411602.0A CN116966557A (en) 2022-04-19 2022-04-19 Game video stream sharing method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN116966557A true CN116966557A (en) 2023-10-31

Family

ID=88480086

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210411602.0A Pending CN116966557A (en) 2022-04-19 2022-04-19 Game video stream sharing method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN116966557A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117425076A (en) * 2023-12-18 2024-01-19 湖南快乐阳光互动娱乐传媒有限公司 Shooting method and system for virtual camera
CN117425076B (en) * 2023-12-18 2024-02-20 湖南快乐阳光互动娱乐传媒有限公司 Shooting method and system for virtual camera

Similar Documents

Publication Publication Date Title
CN109091861B (en) Interactive control method in game, electronic device and storage medium
CN109445662B (en) Operation control method and device for virtual object, electronic equipment and storage medium
WO2019242222A1 (en) Method and device for use in generating information
CN112827172B (en) Shooting method, shooting device, electronic equipment and storage medium
US10617945B1 (en) Game video analysis and information system
WO2022095516A1 (en) Livestreaming interaction method and apparatus
CN109743584B (en) Panoramic video synthesis method, server, terminal device and storage medium
CN111177167B (en) Augmented reality map updating method, device, system, storage and equipment
CN116966557A (en) Game video stream sharing method and device, storage medium and electronic equipment
CN113497946A (en) Video processing method and device, electronic equipment and storage medium
CN111277904B (en) Video playing control method and device and computing equipment
CN112843733A (en) Method and device for shooting image, electronic equipment and storage medium
CN112843695B (en) Method and device for shooting image, electronic equipment and storage medium
CN114615556B (en) Virtual live broadcast enhanced interaction method and device, electronic equipment and storage medium
CN113852839B (en) Virtual resource allocation method and device and electronic equipment
CN113327309B (en) Video playing method and device
CN112843691B (en) Method and device for shooting image, electronic equipment and storage medium
CN112822555A (en) Shooting method, shooting device, electronic equipment and storage medium
CN114691068A (en) Information display method and device based on screen projection technology
CN113975802A (en) Game control method, device, storage medium and electronic equipment
CN112843736A (en) Method and device for shooting image, electronic equipment and storage medium
US20210154570A1 (en) Enhanced Split-Screen Display via Augmented Reality
CN112861612A (en) Method and device for shooting image, electronic equipment and storage medium
CN112860360A (en) Picture shooting method and device, storage medium and electronic equipment
CN112843696A (en) Shooting method, shooting device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination