CN113596558A - Interaction method, device, processor and storage medium in game live broadcast - Google Patents


Info

Publication number
CN113596558A
Authority
CN
China
Prior art keywords
virtual model
anchor
virtual
terminal
anchor terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110797572.7A
Other languages
Chinese (zh)
Inventor
温建芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202110797572.7A priority Critical patent/CN113596558A/en
Publication of CN113596558A publication Critical patent/CN113596558A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H04N 21/218 Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/235 Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4781 Games
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an interaction method, an interaction apparatus, a processor and a storage medium for game live broadcast. The method comprises the following steps: identifying an action instruction sent by a first anchor terminal and determining an initial movement direction of a virtual model, wherein the action instruction is an instruction for controlling the virtual model through a first control medium, and the virtual model is an interactive medium through which a plurality of anchor terminals interact in a game within a shared user interface; determining an adjustment direction corresponding to the virtual model according to a virtual gift sent by at least one audience terminal; adjusting the initial movement direction according to the adjustment direction to obtain a target movement direction; and controlling the virtual model to move, according to the target movement direction, from the display area corresponding to the first anchor terminal to the display area corresponding to a second anchor terminal in the shared user interface, so that the second anchor terminal at least controls the subsequent movement direction of the virtual model. The invention solves the technical problem of poor interactivity between anchors, and between audiences and anchors, in existing game live broadcasts.

Description

Interaction method, device, processor and storage medium in game live broadcast
Technical Field
The invention relates to the field of computers, and in particular to an interaction method, an interaction apparatus, a processor and a storage medium in game live broadcast.
Background
With the rapid development of computer technology, communication over the Internet has become increasingly convenient, and live broadcast is now widely used for shopping, entertainment, learning and other purposes. For example, on a game live broadcast platform, users can interact with an anchor by sending text or voice messages and by gifting virtual gifts. It is the interaction between audiences and anchors, and between anchors themselves, that makes webcasting popular with users, and gifting virtual gifts is an important way to increase the interaction between an audience and an anchor. However, in existing live broadcast interaction modes, viewers mostly just gift and comment, and the anchor gives some feedback after seeing this, for example expressing thanks or performing a talent; no further interaction is generated, and no emotional connection between anchor and audience is fostered. Moreover, in the prior art, when anchors compete against each other, the only effect of a viewer gifting a virtual gift is that a certain number of votes is added to the side of the anchor the viewer supports; the audience cannot truly participate in the competition between the anchors, so the interactivity between audience and anchor is poor and viewers are easily lost.
In addition, the co-streaming (mic-linking) technology in the prior art allows an anchor to link with other anchors for competitive games, in which viewers gift the anchor to increase the anchor's vote count so that the anchor can win the match. However, when anchors play such competitive games, the interaction between them is poor, and the tense and exciting elements of a game atmosphere are lacking.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide an interaction method, an interaction apparatus, a processor and a storage medium in game live broadcast, so as to at least solve the technical problem of poor interactivity between anchors, and between audiences and anchors, in existing game live broadcasts.
According to an aspect of an embodiment of the present invention, there is provided an interaction method in game live broadcast, including: identifying an action instruction sent by a first anchor terminal, and determining an initial movement direction of a virtual model, wherein the action instruction is an instruction for controlling the virtual model through a first control medium, and the virtual model is an interactive medium through which a plurality of anchor terminals interact in a game within a shared user interface; determining an adjustment direction corresponding to the virtual model according to a virtual gift sent by at least one audience terminal; adjusting the initial movement direction according to the adjustment direction to obtain a target movement direction; and controlling the virtual model to move, according to the target movement direction, from the display area corresponding to the first anchor terminal to the display area corresponding to a second anchor terminal in the shared user interface, so that the second anchor terminal at least controls the subsequent movement direction of the virtual model.
Further, the interaction method in the game live broadcast further comprises the following steps: acquiring image information sent by a first anchor terminal; identifying the image information to obtain an action instruction; determining the moving direction of the first control medium according to the action instruction; and determining the initial motion direction of the virtual model according to the angle difference between the moving direction and a plurality of preset directions.
Further, the interaction method in the game live broadcast further comprises the following steps: the method comprises the steps of obtaining somatosensory data sent by a first anchor terminal, wherein the somatosensory data represent motion information of limbs of a target object controlling the first anchor terminal; identifying the somatosensory data to obtain an action instruction; determining the moving direction of the first control medium according to the action instruction; and determining the initial motion direction of the virtual model according to the angle difference between the moving direction and a plurality of preset directions.
Further, the interaction method in the game live broadcast further comprises the following steps: acquiring an angle difference value between the moving direction and each preset direction; and determining the preset direction with the minimum angle difference value as the initial motion direction of the virtual model.
Further, the interaction method in the game live broadcast further comprises the following steps: detecting whether at least one audience terminal determines an initial adjustment direction of the virtual model and sends a virtual gift within a first preset time length; and when it is detected that the at least one audience terminal determines the initial adjustment direction and sends the virtual gift within the first preset time length, determining the virtual gift as a valid gift and determining the initial adjustment direction as the adjustment direction.
Further, the interaction method in the game live broadcast further comprises the following steps: determining the virtual gift as an invalid gift when it is detected that the at least one audience terminal does not determine the initial adjustment direction and/or does not send the virtual gift within the first preset time length.
Further, the interaction method in the game live broadcast further comprises the following steps: acquiring a virtual gift sent by at least one audience terminal within a first preset time length; and determining an initial adjustment direction corresponding to the virtual gift according to the gift type of the virtual gift, wherein different gift types correspond to different adjustment directions.
Further, the interaction method in the game live broadcast further comprises the following steps: acquiring a virtual gift sent by at least one audience terminal within a first preset time length; determining a transmission order in which the at least one viewer terminal transmits the virtual gifts; and adjusting the initial movement direction based on the adjustment direction according to the sending sequence to obtain the target movement direction.
Further, the interaction method in the game live broadcast further comprises the following steps: acquiring a virtual gift sent by at least one audience terminal within a first preset time length; determining the gift type corresponding to the virtual gift and the gift quantity corresponding to the gift type; acquiring an adjustment direction corresponding to a first virtual gift, wherein the first virtual gift is the virtual gift with the largest gift quantity; and adjusting the initial movement direction based on the adjustment direction to obtain the target movement direction.
Further, the interaction method in the game live broadcast further comprises the following steps: after the initial movement direction is adjusted based on the adjustment direction to obtain the target movement direction, acquiring a first quantity corresponding to the first virtual gift and a second quantity corresponding to a second virtual gift, wherein the adjustment direction corresponding to the second virtual gift is opposite to the adjustment direction corresponding to the first virtual gift; calculating the difference between the first quantity and the second quantity to obtain a target quantity; and adjusting the movement speed of the virtual model according to the target quantity.
Further, the interaction method in the game live broadcast further comprises the following steps: before the virtual model is controlled to move from a display area corresponding to a first anchor terminal to a display area corresponding to a second anchor terminal in a shared user interface according to the target motion direction, acquiring the display area corresponding to each anchor terminal and the position information of the display area corresponding to each anchor terminal; controlling the virtual model to move according to the target movement direction to obtain a movement track; and determining a second anchor terminal from the plurality of anchor terminals according to the position information and the motion trail.
Further, the interaction method in the game live broadcast further comprises the following steps: determining, according to the position information and the motion track, at least one target anchor terminal whose display area the virtual model passes through during its motion; and determining the second anchor terminal from the at least one target anchor terminal according to the dwell time of the virtual model in the display area of the at least one target anchor terminal.
Further, the interaction method in the game live broadcast further comprises the following steps: after the virtual model is controlled to move from the display area corresponding to the first anchor terminal to the display area corresponding to the second anchor terminal in the shared user interface according to the target movement direction, detecting whether the second anchor terminal sends a target image within a second preset time length, wherein the target image at least comprises the virtual model and a second control medium, and the distance between the second control medium and the virtual model is smaller than a preset distance; determining that the second anchor terminal does not receive the virtual model when the target image is not detected within the second preset time length; and determining that the second anchor terminal successfully receives the virtual model when the target image is detected within the second preset time length and the posture corresponding to the second control medium is a preset posture.
Further, the interaction method in the game live broadcast further comprises the following steps: after the virtual model is controlled to move from a display area corresponding to a first anchor terminal to a display area corresponding to a second anchor terminal in a shared user interface according to the target motion direction, a target image sent by the second anchor terminal is obtained, wherein the target image at least comprises the virtual model and the area position where the virtual model is located; when the area position is detected to be located in a first area corresponding to a second anchor terminal, determining that the second anchor terminal does not receive the virtual model; and when the area position is detected to be located in a second area corresponding to the second anchor terminal, determining that the second anchor terminal successfully receives the virtual model.
According to another aspect of the embodiments of the present invention, there is also provided an interaction apparatus in a live game, including: the identification module is used for identifying an action instruction sent by the first anchor terminal and determining the initial motion direction of the virtual model, wherein the action instruction is an instruction for controlling the virtual model through a first control medium, and the virtual model is an interactive medium for game interaction of a plurality of anchor terminals in a shared user interface; the determining module is used for determining the adjusting direction corresponding to the virtual model according to the virtual gift sent by at least one audience terminal; the adjusting module is used for adjusting the initial movement direction according to the adjusting direction to obtain a target movement direction; and the processing module is used for controlling the virtual model to move from the display area corresponding to the first anchor terminal to the display area corresponding to the second anchor terminal in the shared user interface according to the target motion direction so that the second anchor terminal at least controls the subsequent motion direction of the virtual model.
According to another aspect of the embodiments of the present invention, there is also provided a storage medium including a stored program, wherein the program executes the interaction method in the live game described above.
According to another aspect of the embodiment of the present invention, a processor is further provided, where the processor is configured to run a program, where the program executes the above interaction method in live game broadcast when running.
In the embodiment of the invention, a manner of controlling a virtual model with virtual gifts is adopted. The initial movement direction of the virtual model is determined by identifying an action instruction sent by a first anchor terminal; an adjustment direction corresponding to the virtual model is determined according to the virtual gift sent by at least one audience terminal; the initial movement direction is then adjusted according to the adjustment direction to obtain the target movement direction of the virtual model; and finally the virtual model is controlled to move, according to the target movement direction, from the display area corresponding to the first anchor terminal to the display area corresponding to a second anchor terminal in the shared user interface, so that the second anchor terminal at least controls the subsequent movement direction of the virtual model. The action instruction is an instruction for controlling the virtual model through a first control medium, and the virtual model is an interactive medium through which a plurality of anchor terminals interact in a game within the shared user interface.
In this process, a viewer can gift a virtual gift to the anchor through the viewer terminal in order to control the movement direction of the virtual model, which improves the interactivity between the anchor and the audience. In addition, in the present application, the anchor's body movements are identified by recognizing the action instruction, and the movement direction of the virtual model is then determined, so that the virtual model can also be controlled by other anchors, which improves the interactivity between anchors.
Therefore, the scheme provided by the present application improves the interactivity between anchors, and between anchors and audiences, in game live broadcast, thereby achieving the technical effect of improving the interactivity and interest of the live game, and solving the technical problem of poor interactivity between anchors, and between audiences and anchors, in existing game live broadcasts.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of an interaction method in a live game according to an embodiment of the invention;
fig. 2 is a schematic illustration of an alternative shared user interface of a first anchor terminal according to an embodiment of the present invention;
FIG. 3 is a schematic illustration of an alternative game play according to an embodiment of the present invention;
FIG. 4 is a schematic illustration of an alternative game play according to an embodiment of the present invention;
FIG. 5 is a schematic illustration of an alternative game play according to an embodiment of the present invention;
FIG. 6 is a schematic illustration of an alternative game play according to an embodiment of the present invention;
FIG. 7 is a schematic illustration of an alternative game play according to an embodiment of the present invention;
fig. 8 is a schematic diagram of an interaction device in a live game according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
In accordance with an embodiment of the present invention, there is provided an embodiment of an interactive method in live gaming, it being noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than here.
In addition, it should be noted that a live broadcast server in the game live broadcast can be used as the execution subject of this embodiment, where the live broadcast server can acquire information sent by the anchor terminal and/or the audience terminal and can also send information to the anchor terminal and/or the audience terminal. In addition, the anchor terminal and the audience terminal may be the same type of terminal (e.g., both the anchor terminal and the audience terminal are smart phones) or different types of terminals (e.g., the anchor terminal is a desktop computer and the audience terminal is a smart phone).
Fig. 1 is a flowchart of an interaction method in a live game according to an embodiment of the present invention, and as shown in fig. 1, the method includes the following steps:
step S102, identifying an action instruction sent by a first anchor terminal, and determining an initial movement direction of a virtual model, wherein the action instruction is an instruction for controlling the virtual model through a first control medium, and the virtual model is an interactive medium for game interaction of a plurality of anchor terminals in a shared user interface.
In step S102, the action instruction may be an instruction obtained by image recognition or by somatosensory (motion-sensing) recognition. For example, the live broadcast server determines the body action of the anchor controlling the first anchor terminal by identifying the image information sent by the first anchor terminal; as another example, the live broadcast server identifies the body motion of the anchor controlling the first anchor terminal by recognizing the somatosensory data transmitted by the first anchor terminal.
The first control medium may be, but is not limited to, the palm, sole, arm, head, or the like of the anchor controlling the first anchor terminal, and the virtual model may be, but is not limited to, a sphere, an airplane, a vehicle, or the like. In this embodiment, the control medium is taken as a palm and the virtual model as a sphere for illustration.
The shared user interface may be, but is not limited to, a display interface such as a co-streaming (mic-linked) view or a shared screen. The display position of each anchor's area on the shared user interface of every anchor terminal (including its own) is the same; for example, the display area corresponding to anchor A is located in the upper left corner of the shared user interface of anchor A's terminal, and it is also located in the upper left corner of the shared user interface of anchor B's terminal.
In addition, it should be further noted that the interaction method in game live broadcast provided by this embodiment can be applied to multi-anchor game live broadcasts, where each anchor controls one anchor terminal and the number of anchor terminals is even, for example 4, 6, and so on. Optionally, fig. 3 shows a shared user interface when six anchors compete while co-streaming, where A, B, C, D, E and F respectively represent the display areas corresponding to the six anchors, and the different colors in fig. 3 represent different camps. In this embodiment, four anchor terminals playing a live game are taken as an example for description, that is, the number of anchor terminals is four.
Step S104, determining the adjustment direction corresponding to the virtual model according to the virtual gift sent by at least one audience terminal.
In step S104, the at least one viewer terminal has a gift control on the shared user interface, and a viewer can gift a virtual gift to the anchor by clicking the gift control. Optionally, the viewer gifts the virtual gift to the anchor displayed on the shared user interface of the viewer terminal. For example, if viewer A1 is watching the live video of anchor B1, that is, the viewer terminal of A1 is playing the live video of anchor B1, then when viewer A1 gifts a virtual gift through the viewer terminal, the gift goes to anchor B1.
The adjustment direction is used to adjust the initial movement direction of the virtual model. Optionally, different virtual gifts correspond to different adjustment directions, so that the spectator can adjust the movement direction of the virtual model by presenting the virtual gifts corresponding to the different adjustment directions to the anchor, for example, the adjustment direction corresponding to the flower gift is rightward, and the adjustment direction corresponding to the star gift is leftward. Optionally, the viewer may give a virtual gift after selecting the adjustment direction, and in this scenario, there is no correlation between the virtual gift and the adjustment direction, for example, viewer a1 selects the adjustment direction to be right and gives a star gift to anchor B1, indicating that viewer a1 wants to adjust the movement direction of the virtual model to be right; viewer a2 has chosen to adjust the direction to the right and presented a flower gift to anchor B1 indicating that viewer a2 also wants to adjust the direction of movement of the virtual model to the right.
Step S106, adjusting the initial movement direction according to the adjustment direction to obtain the target movement direction.
In step S106, the adjustment direction is the direction in which a viewer wants the anchor to apply force to the virtual model. It should be noted that multiple viewers may be watching the same anchor's live broadcast, and multiple viewers may gift virtual gifts to control the movement direction of the virtual model. The live broadcast server therefore obtains a plurality of adjustment directions from the received virtual gifts sent by the plurality of viewer terminals, composes the corresponding forces based on these adjustment directions, and takes the direction of the resultant force as the target movement direction.
In addition, it should be noted that gifting a virtual gift to the anchor not only increases the anchor's vote count but also adjusts the movement direction of the virtual model, so that the audience can truly participate in the competition between anchors, which improves the interactivity between audience and anchor.
Step S108, the virtual model is controlled to move from the display area corresponding to the first anchor terminal to the display area corresponding to the second anchor terminal in the shared user interface according to the target motion direction, so that the second anchor terminal at least controls the subsequent motion direction of the virtual model.
In step S108, after the target movement direction of the virtual model is determined, the virtual model moves in the target movement direction. For example, in the schematic diagram of the game competition shown in fig. 4, the four boxes A, B, D and E respectively indicate the display areas corresponding to four anchors, and the arrow indicates the target movement direction of the virtual model.
Optionally, after the target moving direction is determined, the live broadcast server determines, according to the target moving direction of the virtual model, a target video stream corresponding to the first anchor terminal, where the target video stream at least includes a transmission direction of video data. For example, if the target motion direction of the virtual model is from the display area corresponding to the first anchor terminal to the display area corresponding to the second anchor terminal, the live broadcast server determines that the target video stream flows from the first anchor terminal to the second anchor terminal, and at this time, the live broadcast server merges the dynamic effect image corresponding to the virtual model with the target video stream, so that the virtual model is displayed in the shared user interface and moved from the display area corresponding to the first anchor terminal to the display area corresponding to the second anchor terminal.
It should be noted that after the virtual model moves from the display area corresponding to the first anchor terminal to the display area corresponding to the second anchor terminal along the target movement direction, the second anchor terminal may control the subsequent movement direction of the virtual model. For example, in fig. 4, the sphere moves from the display area corresponding to anchor terminal A (i.e., the first anchor terminal) to the display area corresponding to anchor terminal B (i.e., the second anchor terminal); the anchor controlling anchor terminal B may then control the movement direction of the sphere, for example by hitting the ball. Therefore, the scheme provided by this embodiment enables interaction between anchors in a live game and increases the interest of the game.
Based on the solutions defined in steps S102 to S108, it can be seen that in the embodiment of the present invention a manner of controlling the virtual model with virtual gifts is adopted: the initial movement direction of the virtual model is determined by identifying the action instruction sent by the first anchor terminal; the adjustment direction corresponding to the virtual model is determined according to the virtual gift sent by at least one viewer terminal; the initial movement direction is then adjusted according to the adjustment direction to obtain the target movement direction of the virtual model; and finally the virtual model is controlled to move, according to the target movement direction, from the display area corresponding to the first anchor terminal to the display area corresponding to the second anchor terminal in the shared user interface, so that the second anchor terminal at least controls the subsequent movement direction of the virtual model. The action instruction is an instruction for controlling the virtual model through the first control medium, and the virtual model is an interactive medium through which the plurality of anchor terminals interact in a game within the shared user interface.
It is easy to see that in the above process a spectator can gift a virtual gift to the anchor through the spectator terminal in order to control the movement direction of the virtual model, thereby improving the interactivity between the anchor and the spectator. In addition, in the present application, the anchor's body movements are identified by recognizing the action instruction, and the movement direction of the virtual model is then determined, so that the virtual model can also be controlled by other anchors, which improves the interactivity between anchors.
Therefore, the scheme provided by the present application improves the interactivity between anchors, and between anchors and audiences, in game live broadcast, thereby achieving the technical effect of improving the interactivity and interest of the live game, and solving the technical problem of poor interactivity between anchors, and between audiences and anchors, in existing game live broadcasts.
In an alternative embodiment, the live broadcast server may identify the action command sent by the first anchor terminal by means of image recognition, and determine the initial movement direction of the virtual model. Specifically, the live broadcast server firstly acquires image information sent by a first anchor terminal, then identifies the image information to obtain an action instruction, determines the moving direction of a first control medium according to the action instruction, and finally determines the initial moving direction of the virtual model according to the angle difference between the moving direction and a plurality of preset directions. The live broadcast server can obtain an angle difference value between the moving direction and each preset direction, and determines the preset direction with the minimum angle difference value as the initial moving direction of the virtual model.
Optionally, the first anchor terminal has at least a display screen and an image collector. The display screen can display the virtual model, and the image collector can collect an anchor image of the first anchor controlling the first anchor terminal and combine the collected anchor image with the virtual model to obtain the image information. Optionally, the anchor image at least includes the first control medium; for example, the image information is displayed in the display interface of the first anchor terminal shown in fig. 2.
In a live broadcast game, the virtual model is initially displayed at random in the display area corresponding to one anchor, for example in the display area of anchor terminal A. The anchor corresponding to anchor terminal A then places the first control medium beside the virtual model, for example placing a palm beside the sphere as in fig. 2, and moves the palm back and forth. The live broadcast server obtains the image information in real time while anchor terminal A is broadcasting and determines the moving direction of the anchor's palm by image recognition, thereby obtaining the moving direction of the first control medium. The server then snaps the palm's moving direction to a preset direction, that is, the preset direction with the smallest angle to the palm's moving direction is determined as the initial movement direction of the sphere. For example, if the angle difference between the palm's moving direction and the downward direction is the smallest, the moving direction is snapped to downward, and the initial movement direction of the sphere is determined to be downward.
For example, in the schematic diagram of the game competition shown in fig. 5, the plurality of preset directions are eight directions, namely left, right, up, down, upper left, upper right, lower left and lower right (as shown by the arrows in fig. 5).
In addition, it should be noted that, in practical applications, the number and the directions of the preset directions can be set according to actual requirements, and are not limited to the eight preset directions.
In another optional embodiment, the live broadcast server may identify the action instruction sent by the first anchor terminal by means of somatosensory (motion-sensing) recognition and determine the initial movement direction of the virtual model. Specifically, the live broadcast server first obtains the somatosensory data sent by the first anchor terminal and identifies the somatosensory data to obtain an action instruction; it then determines the moving direction of the first control medium according to the action instruction, and determines the initial movement direction of the virtual model according to the angle differences between the moving direction and a plurality of preset directions. The somatosensory data represents motion information of the limbs of the target object controlling the first anchor terminal.
The somatosensory data may be data obtained by inertial sensing, optical sensing, and combined inertial and optical sensing. The live broadcast server can identify and obtain the body action of the anchor corresponding to the first anchor terminal by identifying the somatosensory data, and further generates an action instruction according to the body action of the anchor.
In addition, it should be noted that after the action instruction is obtained through somatosensory recognition, the subsequent steps of determining the moving direction of the first control medium according to the action instruction and determining the initial movement direction of the virtual model according to the angle differences between the moving direction and the plurality of preset directions are the same as in the image recognition case, and are therefore not repeated here.
In an alternative embodiment, the spectators watching the game competition of the multiple anchor casts can control the virtual model by giving a virtual gift to the anchor casts, so that the spectators can participate in the live game. The live broadcast server can determine the adjustment direction corresponding to the virtual model through any one of the following two ways:
the first method is as follows: and the live broadcast server determines the adjustment direction corresponding to the virtual model according to the initial adjustment direction selected by the audience and the given virtual gift.
Specifically, the live broadcast server first detects whether at least one audience terminal determines an initial adjustment direction of the virtual model and sends a virtual gift within a first preset time length. When it is detected that the at least one audience terminal determines the initial adjustment direction and sends the virtual gift within the first preset time length, the virtual gift is determined to be a valid gift and the initial adjustment direction is determined to be the adjustment direction; when it is detected that the at least one audience terminal does not determine the initial adjustment direction and/or does not send the virtual gift within the first preset time length, the virtual gift is determined to be an invalid gift.
Optionally, taking the virtual model as a sphere as an example: after the initial movement direction of the sphere is determined, that is, after the anchor corresponding to the first anchor terminal has hit the ball, the live broadcast server controls the sphere to move at a constant speed in the initial movement direction for a certain time, for example for a first preset time length. Within this first preset time length, the live broadcast server pushes a pop-up notice to the audience terminals prompting viewers to gift a virtual gift within this time and participate in the game competition. During the first preset time length, a viewer selects the hit direction through the viewer terminal (for example, from the four directions up, down, left and right), selects a ball-hitting gift in the gift control displayed by the viewer terminal, and after selecting the gift and clicking the gifting button, completes both the gifting of the virtual gift and the control of the movement direction of the virtual model.
It should be noted that if the viewer does not select a hit direction within the first preset time length and/or does not complete the gifting of the virtual gift, the virtual gift is determined to be an invalid gift; that is, the viewer only gifts the anchor and does not gain control over the virtual model.
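As a minimal sketch of the validity check in this first way (the data structure and field names are assumptions used only for illustration, not part of the disclosure), a gift could be classified as follows:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GiftEvent:
    sent_at: float                    # time at which the viewer sent the gift
    direction: Optional[str] = None   # hit direction the viewer picked, if any

def classify_gift(gift: GiftEvent, window_start: float,
                  first_preset_duration: float) -> str:
    """A gift counts as valid only when the viewer both chose a hit direction
    and sent the gift inside the first preset time length; otherwise it is
    still credited to the anchor but does not steer the virtual model."""
    in_window = window_start <= gift.sent_at <= window_start + first_preset_duration
    return "valid" if in_window and gift.direction is not None else "invalid"
```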
The second method is as follows: the live broadcast server determines the adjustment direction according to the type of the virtual gift.
Specifically, the live broadcast server obtains a virtual gift sent by at least one audience terminal within a first preset time length, and determines an initial adjustment direction corresponding to the virtual gift according to the gift type of the virtual gift, wherein different gift types correspond to different adjustment directions.
Optionally, taking the virtual model as a sphere as an example: after the initial movement direction of the sphere is determined, that is, after the anchor corresponding to the first anchor terminal has hit the ball, the live broadcast server controls the sphere to move at a constant speed in the initial movement direction for a certain time, for example for a first preset time length (e.g., 5 s). Within this first preset time length, the live broadcast server pushes a pop-up notice to the audience terminals prompting viewers to gift a virtual gift within this time and participate in the game competition. During the first preset time length, a viewer selects a ball-hitting gift through the gift control displayed by the viewer terminal; after the viewer selects the gift and clicks the gifting button, both the gifting of the virtual gift and the control of the movement direction of the virtual model are completed. For example, viewer A1 gifts a flower gift to the anchor, and the live broadcast server determines that the initial adjustment direction corresponding to viewer A1 is rightward; viewer A2 gifts a star gift to the anchor, and the live broadcast server determines that the initial adjustment direction corresponding to viewer A2 is leftward.
It should be noted that, the control of the virtual model by the audience can be realized through any one of the above two manners, in addition, the virtual gift given by the audience within the first preset time period can adjust the moving direction of the virtual model, and the virtual gift given outside the first preset time period cannot adjust the moving direction of the virtual model. In addition, within the first preset duration, the virtual model moves from the preset position (e.g., middle position) of the display area corresponding to the first anchor terminal to the preset position (e.g., middle position) of the display area corresponding to the second anchor terminal according to the initial movement direction, for example, in fig. 4, the virtual model moves from the middle position of the display area corresponding to the anchor terminal a to the middle position of the display area corresponding to the anchor terminal B, or moves to the middle position of the display area corresponding to the anchor terminal D, or moves to the middle position of the display area corresponding to the anchor terminal E.
Furthermore, it should be noted that after acquiring the virtual gift sent by at least one viewer terminal, the live broadcast server displays a preset virtual model (e.g., a virtual racket) at the position of the virtual model and pushes preset text information to the anchor terminals (including the first anchor terminal and the second anchor terminal) and the at least one viewer terminal, in the order in which the at least one viewer terminal sent the virtual gifts. For example, after viewer A1 gifts a rightward ball-hitting gift, a virtual racket moving back and forth in the rightward direction is displayed at the position of the ball in the display areas of the anchor terminals and the at least one viewer terminal, and the text "Viewer A1 hits the ball to the right" is displayed beside the virtual racket. If multiple viewers gift ball-hitting gifts, the live broadcast server plays the special effects and captions in sequence according to the order in which the viewers sent the gifts.
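A possible sketch of replaying these effects in sending order is shown below; the push_to_terminals callback and the event fields are assumptions used only for illustration:

```python
def replay_gift_effects(gift_events, push_to_terminals):
    """Replay the hit animation and caption for each gift in the order the
    gifts were sent; push_to_terminals is an assumed callback that broadcasts
    a message to the anchor terminals and the at least one viewer terminal."""
    for event in sorted(gift_events, key=lambda e: e["sent_at"]):
        push_to_terminals({
            "effect": "virtual_racket",        # racket animation shown at the ball
            "direction": event["direction"],   # e.g. "right"
            "caption": f"Viewer {event['viewer']} hits the ball to the {event['direction']}",
        })
```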
In an optional embodiment, after the adjustment direction corresponding to the virtual model is determined, the live broadcast server adjusts the initial movement direction according to the adjustment direction to obtain the target movement direction. Specifically, the live broadcast server obtains a virtual gift sent by at least one audience terminal within a first preset time length, determines a sending sequence of the virtual gift sent by the at least one audience terminal, and then adjusts the initial moving direction based on the adjusting direction according to the sending sequence to obtain a target moving direction.
It should be noted that, in this embodiment, the target movement direction can be calculated from the adjustment directions determined by the virtual gifts sent by the multiple viewer terminals according to the principles of mechanics. For example, if viewer A1 hits the ball to the left and viewer A2 hits it to the right, the two forces cancel each other out; and in the game interface shown in fig. 6, if viewer A1 hits the ball downward at time a and viewer A2 hits it to the right at time b, the resultant is a hit toward the lower right according to the principles of mechanics, that is, the target movement direction of the ball is toward the lower right.
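For illustration, composing the hit directions into the target movement direction can be sketched as a sum of unit vectors, assuming directions are represented as angles in degrees (the function name and encoding are illustrative):

```python
import math

def combine_shots(shot_angles_deg):
    """Combine the adjustment directions of all valid gifts into one resultant
    (target) direction by summing unit vectors, analogous to composing forces;
    an empty list means there is no adjustment."""
    if not shot_angles_deg:
        return None
    x = sum(math.cos(math.radians(a)) for a in shot_angles_deg)
    y = sum(math.sin(math.radians(a)) for a in shot_angles_deg)
    return math.degrees(math.atan2(y, x)) % 360

# Example matching fig. 6: a downward hit (270) and a rightward hit (0)
# combine into a hit toward the lower right (315).
target_direction = combine_shots([270, 0])  # -> 315.0
```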
In addition, it should be noted that after the second anchor terminal receives the virtual model and manipulates it, no further mechanical composition is performed. Before the second anchor terminal successfully manipulates the virtual model, the virtual model moves along the target movement direction; once the second anchor terminal receives the virtual model, its movement direction changes immediately, and the new movement direction is the direction in which the anchor controlling the second anchor terminal manipulates the virtual model, for example the direction in which that anchor hits the ball.
In another optional embodiment, after the adjustment direction corresponding to the virtual model is determined, the live broadcast server obtains the virtual gifts sent by at least one viewer terminal within the first preset time length, determines the gift type corresponding to each virtual gift and the gift quantity corresponding to each gift type, then obtains the adjustment direction corresponding to the first virtual gift, and adjusts the initial movement direction based on the adjustment direction to obtain the target movement direction. The first virtual gift is the virtual gift type with the largest quantity.
Optionally, different gift types correspond to different adjustment directions; for example, the adjustment direction corresponding to the flower gift is rightward, and the adjustment direction corresponding to the star gift is leftward. The live broadcast server determines the adjustment direction according to the number of virtual gifts of each type, taking the adjustment direction of the gift type with the largest count; for example, if flower gifts are the most numerous, the live broadcast server determines the adjustment direction to be rightward.
In an optional embodiment, after the initial movement direction is adjusted based on the adjustment direction to obtain the target movement direction, the live broadcast server obtains a first quantity corresponding to the first virtual gift and a second quantity corresponding to a second virtual gift, calculates the difference between the first quantity and the second quantity to obtain a target quantity, and then adjusts the movement speed of the virtual model according to the target quantity. The adjustment direction corresponding to the second virtual gift is opposite to the adjustment direction corresponding to the first virtual gift.
Optionally, after determining the adjustment direction according to the number and type of the virtual gifts, the live broadcast server determines the movement speed of the virtual model according to the number of virtual gifts. For example, the live broadcast server detects 3 first gifts hitting downward, 1 second gift hitting to the left, 2 third gifts hitting to the right, and 1 fourth gift hitting upward. Since the number of first gifts is the largest, the live broadcast server determines the adjustment direction to be downward. Because the adjustment direction corresponding to the first gift and the adjustment direction corresponding to the fourth gift are opposite, and the difference between their numbers is 2, the live broadcast server adjusts the movement speed of the virtual model to twice the original speed. In other words, the live broadcast server determines the multiple by which the movement speed of the virtual model is adjusted according to the target quantity, and controls the virtual model to move to the display area corresponding to the second anchor terminal at the adjusted speed.
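A minimal sketch of this counting rule is given below, assuming an illustrative mapping from gift types to adjustment directions (the gift names, the direction encoding and the function name are assumptions for illustration):

```python
from collections import Counter

# Illustrative mapping from gift type to adjustment direction in degrees.
GIFT_DIRECTION = {"first": 270, "second": 180, "third": 0, "fourth": 90}
OPPOSITE = {0: 180, 180: 0, 90: 270, 270: 90}

def adjust_from_gifts(gift_types):
    """Pick the adjustment direction of the most-gifted type and scale the
    movement speed by the count difference against the opposing gift type."""
    counts = Counter(gift_types)
    top_type, top_count = counts.most_common(1)[0]
    direction = GIFT_DIRECTION[top_type]
    opposite_count = sum(n for t, n in counts.items()
                         if GIFT_DIRECTION[t] == OPPOSITE[direction])
    speed_multiplier = max(1, top_count - opposite_count)
    return direction, speed_multiplier

# Example from the description: 3 downward gifts, 1 leftward, 2 rightward and
# 1 upward -> the adjustment direction is down and the speed is doubled (3 - 1 = 2).
gifts = ["first"] * 3 + ["second"] + ["third"] * 2 + ["fourth"]
print(adjust_from_gifts(gifts))  # (270, 2)
```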
In an alternative embodiment, after determining the target movement direction, the live server may control the virtual model to move according to the target movement direction. Since there may be a plurality of anchor terminals participating in the game competition, before the virtual model is controlled to move from the display area corresponding to the first anchor terminal to the display area corresponding to the second anchor terminal in the shared user interface according to the target motion direction, the second anchor terminal needs to be determined from the plurality of anchor terminals. Specifically, the live broadcast server obtains a display area corresponding to each anchor terminal and position information of the display area corresponding to each anchor terminal, controls the virtual model to move according to the target movement direction to obtain a movement track, and then determines a second anchor terminal from the plurality of anchor terminals according to the position information and the movement track.
Optionally, the live broadcast server determines, according to the position information and the motion trajectory, a display area corresponding to at least one target anchor terminal through which the virtual model passes in a motion process of the motion trajectory, and determines, according to a dwell time of the virtual model in the display area corresponding to the at least one target anchor terminal, a second anchor terminal from the display area corresponding to the at least one target anchor terminal. For example, in the process that the virtual model moves according to the motion trajectory, the virtual model passes through the display areas corresponding to the anchor terminal a, the anchor terminal B and the anchor terminal C, the live broadcast server determines the second anchor terminal according to the stay time of the virtual model in the display areas corresponding to the three anchor terminals, and for example, the live broadcast server takes the anchor terminal corresponding to the display area where the stay time of the virtual model is the longest as the second anchor terminal.
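The dwell-time rule can be sketched as follows, assuming the motion track is sampled at a fixed time interval and each display area is an axis-aligned rectangle (all names are illustrative):

```python
def pick_second_anchor(trajectory, display_areas):
    """Choose the receiving anchor terminal as the one whose display area the
    virtual model stays in longest while moving along its motion track.

    trajectory    -- list of (x, y) positions sampled at a fixed time interval
    display_areas -- dict mapping anchor id to (x_min, y_min, x_max, y_max)

    Assumes the track passes through at least one display area."""
    dwell = {anchor_id: 0 for anchor_id in display_areas}
    for x, y in trajectory:
        for anchor_id, (x0, y0, x1, y1) in display_areas.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                dwell[anchor_id] += 1  # one sample counts as one time step
    return max(dwell, key=dwell.get)
```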
In an optional embodiment, after determining a second anchor terminal from a plurality of anchor terminals according to a target movement direction, the live broadcast server further detects whether the second anchor terminal sends a target image within a second preset time period, wherein the second anchor terminal is determined not to receive the virtual model under the condition that the target image is not detected within the second preset time period; and under the condition that the target image is detected within a second preset time length and the posture corresponding to the second control medium is the preset posture, the second anchor terminal is determined to successfully receive the virtual model.
It should be noted that the target image at least includes the virtual model and a second control medium, and the distance between the second control medium and the virtual model is smaller than a preset distance, where the second control medium may be, but is not limited to, the palm, sole, arm, head, or the like of the anchor controlling the second anchor terminal.
Optionally, if the live broadcast server does not receive the target image sent by the second anchor terminal within the second preset duration (e.g., 5 s), it can be determined that the anchor controlling the second anchor terminal did not make a hitting motion within the specified time (i.e., the second preset duration), that is, the anchor failed to hit the ball, and the live broadcast server outputs prompt information indicating that the anchor corresponding to the second anchor terminal has failed in the game competition. If the live broadcast server receives the target image sent by the second anchor terminal within the second preset duration (e.g., 5 s) but the posture corresponding to the second control medium is not the preset posture, it is determined that the hitting motion of the anchor controlling the second anchor terminal is not standard, and the live broadcast server likewise outputs prompt information indicating that the anchor corresponding to the second anchor terminal has failed in the game competition. If the live broadcast server receives the target image within the second preset duration and the posture corresponding to the second control medium is the preset posture, it indicates that the anchor controlling the second anchor terminal made a standard hitting motion within the specified time.
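The three outcomes described above can be summarized in the short decision sketch below; the image fields, the pose comparison and the distance check are hypothetical placeholders rather than an interface defined by this embodiment.

# Illustrative sketch of the three outcomes; the dictionary fields are placeholders.
def check_receive(target_image, preset_pose, preset_distance):
    """target_image is None if nothing arrived within the second preset duration;
    otherwise it carries the recognized pose and the medium-to-model distance."""
    if target_image is None:
        return "fail: no hitting motion within the specified time"
    if target_image["distance"] > preset_distance:
        return "fail: second control medium too far from the virtual model"
    if target_image["pose"] != preset_pose:
        return "fail: hitting motion is not standard"
    return "success: the second anchor terminal received the virtual model"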
In an optional embodiment, after determining the second anchor terminal from the plurality of anchor terminals according to the target movement direction, the live broadcast server further obtains a target image sent by the second anchor terminal, where the target image includes at least the virtual model and the area position where the virtual model is located. When the area position is detected to be located in a first area corresponding to the second anchor terminal, it is determined that the second anchor terminal has not received the virtual model; when the area position is detected to be located in a second area corresponding to the second anchor terminal, it is determined that the second anchor terminal has successfully received the virtual model.
Note that the first area is an out-of-bounds area. For example, in the schematic diagram of the game competition shown in fig. 7, the display area corresponding to each anchor terminal has a corresponding out-of-bounds area (i.e., the first area): the out-of-bounds area of anchor terminal A is the black area in fig. 7, the out-of-bounds area of anchor terminal B is the vertical-stripe area, the out-of-bounds area of anchor terminal D is the horizontal-stripe area, and the out-of-bounds area of anchor terminal E is the diagonal-stripe area. If the virtual model is located in an out-of-bounds area in fig. 7, it indicates that the anchor controlling the second anchor terminal has failed in the game competition.
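A sketch of this region test, under the same rectangle assumption used above (the area parameters are illustrative and not taken from fig. 7):

# Illustrative sketch: decide success or failure from the region the model lands in.
def check_region(model_position, out_of_bounds_area, valid_area):
    """Each area is an axis-aligned rectangle (x0, y0, x1, y1)."""
    def inside(pos, rect):
        x, y = pos
        x0, y0, x1, y1 = rect
        return x0 <= x <= x1 and y0 <= y <= y1

    if inside(model_position, out_of_bounds_area):
        return False  # first area: the second anchor terminal did not receive the model
    if inside(model_position, valid_area):
        return True   # second area: the virtual model was received successfully
    return None       # outside both areas: keep tracking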
In order to make the scheme of this embodiment clearer, the following description takes a sphere as the virtual model and four anchor terminals playing a ball-hitting game as an example.
Specifically, four anchors are connected by co-streaming and play a ball-hitting match. The live broadcast server controls the sphere to appear randomly in a display area corresponding to one of the anchor terminals, for example, the sphere is displayed in the display area corresponding to anchor terminal A. The live broadcast server then acquires the image sent by anchor terminal A and recognizes it to obtain the distance between the palm and the sphere. The anchor corresponding to anchor terminal A makes a hitting motion, and after the live broadcast server recognizes the hitting motion from the image, it sends the sphere off in the direction in which the palm moved, for example, toward the anchor corresponding to anchor terminal B on the right side.
When the anchor of anchor terminal A hits the ball and the ball leaves the palm, the viewer terminal receives a first message pushed by the live broadcast server, for example: "You have 5 s to send a ball-hitting gift to change the trajectory of the ball!". If a viewer presents a gift after the 5 s have elapsed, the live broadcast server pushes a second message to the viewer terminal, for example: "The gift was not sent within the specified hitting time", and in this case the gift presented by the viewer cannot adjust the motion trajectory of the ball.
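The 5 s window can be expressed as a small validity check; the timestamps and message strings below are illustrative assumptions, not the exact messages pushed by the live broadcast server.

import time

# Illustrative sketch: a gift only counts if it arrives inside the hitting window.
def handle_gift(gift_time, window_start, window_seconds=5):
    if gift_time - window_start <= window_seconds:
        return "valid gift: its direction is merged into the ball's trajectory"
    return "invalid gift: not sent within the specified hitting time"

print(handle_gift(gift_time=time.time(), window_start=time.time() - 6))  # arrives too late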
In addition, during the above process the sphere moves at a uniform speed in the direction in which the anchor of anchor terminal A hit it; in this embodiment, the sphere moves from the middle position of the display area corresponding to anchor terminal A to the middle position of the display area corresponding to anchor terminal B within a preset duration (for example, 5 s).
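As a sketch, this uniform movement from the centre of one display area to the centre of the next over a preset duration is a simple linear interpolation; the coordinates and the 5 s default below are illustrative.

# Illustrative sketch: linear interpolation of the ball between two area centres.
def ball_position(start, end, elapsed, duration=5.0):
    t = min(max(elapsed / duration, 0.0), 1.0)  # clamp progress to [0, 1]
    return (start[0] + (end[0] - start[0]) * t,
            start[1] + (end[1] - start[1]) * t)

# Halfway through a 5 s flight from area A's centre to area B's centre.
print(ball_position(start=(0.0, 0.0), end=(10.0, 0.0), elapsed=2.5))  # (5.0, 0.0)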
After a viewer presents a virtual gift, the live broadcast server pushes a message to the viewer terminals and the anchor terminals according to the gift presented. For example, viewer A presents a ball-hitting gift and selects the downward direction, and viewer B presents a ball-hitting gift and selects the rightward direction; the live broadcast server then displays the corresponding hitting gesture on the motion trajectory of the ball according to the time at which viewer A presented the gift, together with a caption such as: "Viewer A hits the ball downward!".
It should be noted that, within the preset duration (for example, 5 s) in which viewers may present virtual gifts, the motion trajectory of the sphere does not change; after the hitting gestures and captions of the viewers within the preset duration have been played, the final motion trajectory of the sphere is obtained by combining the directions according to mechanics, for example, in fig. 6 the target movement direction of the sphere is toward the lower right.
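The mechanical composition mentioned here can be read as a vector sum of the anchor's hitting direction and the gift directions collected in the window; the unit-vector mapping below is an assumption made for illustration, not a formula given in the disclosure.

import math

# Illustrative sketch: compose the hit direction with gift directions as unit vectors.
UNIT = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def target_direction(initial_direction, gift_directions):
    vx, vy = UNIT[initial_direction]
    for d in gift_directions:
        gx, gy = UNIT[d]
        vx, vy = vx + gx, vy + gy
    norm = math.hypot(vx, vy) or 1.0
    return (vx / norm, vy / norm)

# Example from the text: the ball is hit to the right and viewer A's gift pulls it
# downward, so the target movement direction points toward the lower right.
print(target_direction("right", ["down"]))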
Further, after the sphere enters the display area corresponding to the anchor of anchor terminal B, that anchor can hit the sphere. After the anchor of anchor terminal B hits the ball, the subsequent steps are the same as the process described above, except that the hitting direction of the anchor of anchor terminal B is not mechanically combined with the original movement direction of the ball: the ball simply moves in whichever direction the anchor of anchor terminal B hits it. While the anchor of anchor terminal B is preparing to hit, the ball keeps moving slowly along the target movement direction; if, within the preset duration, the anchor of anchor terminal B does not hit the ball or the live broadcast server does not recognize the hitting motion, the ball is determined to be out of bounds and the anchor of anchor terminal B fails in the game competition.
As can be seen from the above, the scheme provided by this embodiment determines the movement direction of the ball by recognizing anchor gestures in an anchor co-streaming game, thereby realizing interaction between anchors. In addition, because viewers can control the movement direction of the ball by presenting virtual gifts, viewers can participate directly in the interactive game, which increases the interactivity of viewers in the live broadcast, enlivens the interactive atmosphere, and improves viewer participation.
Example 2
According to an embodiment of the present invention, an embodiment of an interaction apparatus in a live game is further provided, where fig. 8 is a schematic diagram of an interaction apparatus in a live game according to an embodiment of the present invention, and as shown in fig. 8, the apparatus includes: an identification module 801, a determination module 803, an adjustment module 805, and a processing module 807.
The identification module 801 is configured to identify an action instruction sent by a first anchor terminal, and determine an initial movement direction of a virtual model, where the action instruction is an instruction for controlling the virtual model through a first control medium, and the virtual model is an interactive medium for game interaction of multiple anchor terminals in a shared user interface; a determining module 803, configured to determine, according to a virtual gift sent by at least one viewer terminal, an adjustment direction corresponding to the virtual model; an adjusting module 805, configured to adjust the initial moving direction according to the adjustment direction to obtain a target moving direction; the processing module 807 is configured to control the virtual model to move from the display area corresponding to the first anchor terminal to the display area corresponding to the second anchor terminal in the shared user interface according to the target motion direction, so that the second anchor terminal at least controls a subsequent motion direction of the virtual model.
It should be noted that the identification module 801, the determining module 803, the adjusting module 805 and the processing module 807 correspond to steps S102 to S108 in embodiment 1 above; the examples and application scenarios implemented by these four modules are the same as those of the corresponding steps, but they are not limited to the disclosure of embodiment 1 above.
Optionally, the identification module includes: a first acquisition module, a first identification module, a first determination module and a second determination module. The first acquisition module is used for acquiring image information sent by the first anchor terminal; the first identification module is used for identifying the image information to obtain the action instruction; the first determination module is used for determining the moving direction of the first control medium according to the action instruction; and the second determination module is used for determining the initial motion direction of the virtual model according to the angle difference between the moving direction and a plurality of preset directions.
Optionally, the identification module includes: a second acquisition module, a second identification module, a third determination module and a fourth determination module. The second acquisition module is used for acquiring somatosensory data sent by the first anchor terminal, wherein the somatosensory data represents motion information of the limbs of the target object controlling the first anchor terminal; the second identification module is used for identifying the somatosensory data to obtain the action instruction; the third determination module is used for determining the moving direction of the first control medium according to the action instruction; and the fourth determination module is used for determining the initial motion direction of the virtual model according to the angle difference between the moving direction and a plurality of preset directions.
Optionally, the second determining module or the fourth determining module includes: a third obtaining module and a fifth determining module. The third obtaining module is used for obtaining an angle difference value between the moving direction and each preset direction; and the fifth determining module is used for determining the preset direction with the minimum angle difference value as the initial motion direction of the virtual model.
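A minimal sketch of this angle-matching rule follows; the preset directions, their angles and the wrap-around handling are assumptions made for illustration.

# Illustrative sketch: snap the recognized moving direction to the nearest preset direction.
PRESET_DIRECTIONS = {"right": 0, "up": 90, "left": 180, "down": 270}  # assumed layout

def initial_motion_direction(moving_angle):
    def angle_diff(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)  # wrap-around angle difference
    return min(PRESET_DIRECTIONS, key=lambda name: angle_diff(moving_angle, PRESET_DIRECTIONS[name]))

print(initial_motion_direction(80))  # the closest preset direction is 'up'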
Optionally, the determining module includes: a first detection module and a sixth determining module. The first detection module is used for detecting whether the at least one audience terminal determines an initial adjustment direction of the virtual model and sends the virtual gift within a first preset time length; and the sixth determining module is used for determining the virtual gift as a valid gift and determining the initial adjustment direction as the adjustment direction when detecting that the at least one audience terminal determines the initial adjustment direction and sends the virtual gift within the first preset time length.
Optionally, the interaction device in the live game further includes: a seventh determining module, used for determining the virtual gift as an invalid gift when detecting that the at least one audience terminal does not determine the initial adjustment direction and/or does not send the virtual gift within the first preset time length.
Optionally, the interaction device in the live game further includes: a fourth obtaining module and an eighth determining module. The fourth obtaining module is configured to obtain a virtual gift sent by at least one viewer terminal within a first preset duration; and the eighth determining module is used for determining the initial adjustment direction corresponding to the virtual gift according to the gift type of the virtual gift, wherein different gift types correspond to different adjustment directions.
Optionally, the adjusting module includes: a fifth obtaining module, a ninth determining module and a first adjusting module. The fifth obtaining module is configured to obtain the virtual gift sent by the at least one audience terminal within a first preset time length; the ninth determining module is configured to determine the sending order in which the at least one audience terminal sends the virtual gifts; and the first adjusting module is configured to adjust the initial movement direction based on the adjustment direction according to the sending order to obtain the target movement direction.
Optionally, the adjusting module includes: a sixth obtaining module, a tenth determining module, a seventh obtaining module and a second adjusting module. The sixth obtaining module is configured to obtain the virtual gift sent by the at least one audience terminal within a first preset time length; the tenth determining module is configured to determine the gift type corresponding to the virtual gift and the gift quantity corresponding to each gift type; the seventh obtaining module is configured to obtain the adjustment direction corresponding to a first virtual gift, where the first virtual gift is the virtual gift with the largest gift quantity; and the second adjusting module is configured to adjust the initial movement direction based on the adjustment direction to obtain the target movement direction.
Optionally, the interaction device in the live game further includes: an eighth obtaining module, a calculating module and a third adjusting module. The eighth obtaining module is configured to, after the target movement direction is obtained by adjusting the initial movement direction based on the adjustment direction, obtain a first quantity corresponding to the first virtual gift and a second quantity corresponding to a second virtual gift, where the adjustment direction corresponding to the second virtual gift is opposite to the adjustment direction corresponding to the first virtual gift; the calculating module is configured to calculate the difference between the first quantity and the second quantity to obtain a target quantity; and the third adjusting module is configured to adjust the movement speed of the virtual model according to the target quantity.
Optionally, the interaction device in the live game further includes: a ninth obtaining module, a first control module and an eleventh determining module. The ninth obtaining module is configured to obtain the display area corresponding to each anchor terminal and the position information of the display area corresponding to each anchor terminal before the virtual model is controlled to move from the display area corresponding to the first anchor terminal to the display area corresponding to the second anchor terminal in the shared user interface according to the target movement direction; the first control module is configured to control the virtual model to move according to the target movement direction to obtain a movement track; and the eleventh determining module is configured to determine the second anchor terminal from the plurality of anchor terminals according to the position information and the movement track.
Optionally, the eleventh determining module includes: a twelfth determining module and a thirteenth determining module. The twelfth determining module is configured to determine, according to the position information and the motion trajectory, at least one target anchor terminal whose display area the virtual model passes through while moving along the motion trajectory; and the thirteenth determining module is configured to determine the second anchor terminal from the at least one target anchor terminal according to the stay time of the virtual model in the display area corresponding to the at least one target anchor terminal.
Optionally, the interaction device in the live game further includes: a second detection module, a fourteenth determination module, and a fifteenth determination module. The second detection module is used for detecting whether the second anchor terminal sends a target image within a second preset time length after the virtual model is controlled to move from the display area corresponding to the first anchor terminal to the display area corresponding to the second anchor terminal in the shared user interface according to the target motion direction, wherein the target image at least comprises the virtual model and a second control medium, and the distance between the second control medium and the virtual model is smaller than the preset distance; a fourteenth determining module, configured to determine that the second anchor terminal does not receive the virtual model when the target image is not detected within a second preset time period; and the fifteenth determining module is configured to determine that the second anchor terminal successfully receives the virtual model when the target image is detected within the second preset duration and the posture corresponding to the second control medium is the preset posture.
Optionally, the interaction device in the live game further includes: a tenth obtaining module, a sixteenth determining module, and a seventeenth determining module. The tenth acquiring module is configured to acquire a target image sent by the second anchor terminal after the virtual model is controlled to move from the display area corresponding to the first anchor terminal to the display area corresponding to the second anchor terminal in the shared user interface according to the target motion direction, where the target image at least includes the virtual model and an area position where the virtual model is located; a sixteenth determining module, configured to determine that the second anchor terminal does not receive the virtual model when it is detected that the area location is located in the first area corresponding to the second anchor terminal; and the seventeenth determining module is configured to determine that the second anchor terminal successfully receives the virtual model when the area position is detected to be located in a second area corresponding to the second anchor terminal.
Example 3
According to another aspect of the embodiments of the present invention, there is also provided a storage medium including a stored program, wherein the program performs the interaction method in the live game in embodiment 1 described above.
Example 4
According to another aspect of the embodiment of the present invention, there is further provided a processor, configured to run a program, where the program executes the interaction method in the live game in embodiment 1.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (17)

1. An interaction method in game live broadcast is characterized by comprising the following steps:
identifying an action instruction sent by a first anchor terminal, and determining an initial movement direction of a virtual model, wherein the action instruction is an instruction for controlling the virtual model through a first control medium, and the virtual model is an interactive medium for game interaction of a plurality of anchor terminals in a shared user interface;
determining an adjustment direction corresponding to the virtual model according to a virtual gift sent by at least one audience terminal;
adjusting the initial movement direction according to the adjustment direction to obtain a target movement direction;
and controlling the virtual model to move from a display area corresponding to the first anchor terminal to a display area corresponding to a second anchor terminal in the shared user interface according to the target motion direction, so that the second anchor terminal at least controls the subsequent motion direction of the virtual model.
2. The method of claim 1, wherein identifying the action command sent by the first anchor terminal and determining the initial direction of motion of the virtual model comprises:
acquiring image information sent by the first anchor terminal;
identifying the image information to obtain the action instruction;
determining the moving direction of the first control medium according to the action instruction;
and determining the initial motion direction of the virtual model according to the angle difference between the moving direction and a plurality of preset directions.
3. The method of claim 1, wherein identifying the action command sent by the first anchor terminal and determining the initial direction of motion of the virtual model comprises:
acquiring somatosensory data sent by the first anchor terminal, wherein the somatosensory data represents motion information of the limbs of a target object controlling the first anchor terminal;
identifying the somatosensory data to obtain the action instruction;
determining the moving direction of the first control medium according to the action instruction;
and determining the initial motion direction of the virtual model according to the angle difference between the moving direction and a plurality of preset directions.
4. The method according to claim 2 or 3, wherein determining an initial movement direction of the virtual model based on an angular difference between the movement direction and a plurality of preset directions comprises:
obtaining an angle difference value between the moving direction and each preset direction;
and determining the preset direction with the minimum angle difference value as the initial motion direction of the virtual model.
5. The method of claim 1, wherein determining the adjustment direction corresponding to the virtual model according to the virtual gift transmitted by the at least one viewer terminal comprises:
detecting whether the at least one audience terminal determines an initial adjustment direction of the virtual model and sends the virtual gift within a first preset time length;
and when the at least one audience terminal is detected to determine the initial adjustment direction within the first preset time length and send the virtual gift, determining the virtual gift as a valid gift, and determining the initial adjustment direction as the adjustment direction.
6. The method of claim 5, further comprising:
and when detecting that the at least one audience terminal does not determine the initial adjustment direction and/or does not send the virtual gift within the first preset time length, determining the virtual gift as an invalid gift.
7. The method of claim 5, further comprising:
acquiring a virtual gift sent by the at least one audience terminal within the first preset time length;
and determining an initial adjustment direction corresponding to the virtual gift according to the gift type of the virtual gift, wherein different gift types correspond to different adjustment directions.
8. The method of claim 1, wherein adjusting the initial moving direction according to the adjustment direction to obtain a target moving direction comprises:
acquiring a virtual gift sent by the at least one audience terminal within a first preset time length;
determining a transmission order in which the at least one viewer terminal transmits the virtual gifts;
and adjusting the initial movement direction based on the adjustment direction according to the sending sequence to obtain the target movement direction.
9. The method of claim 1, wherein adjusting the initial moving direction according to the adjustment direction to obtain a target moving direction comprises:
acquiring a virtual gift sent by the at least one audience terminal within a first preset time length;
determining a gift type corresponding to the virtual gift and a gift quantity corresponding to the gift type;
acquiring an adjustment direction corresponding to a first virtual gift, wherein the first virtual gift is the virtual gift with the largest gift number;
and adjusting the initial movement direction based on the adjustment direction to obtain the target movement direction.
10. The method of claim 9, wherein after adjusting the initial movement direction based on the adjustment direction to obtain the target movement direction, the method further comprises:
acquiring a first quantity corresponding to the first virtual gift and a second quantity corresponding to a second virtual gift, wherein the adjustment direction corresponding to the second virtual gift is opposite to the adjustment direction corresponding to the first virtual gift;
calculating the difference between the first quantity and the second quantity to obtain a target quantity;
and adjusting the movement speed of the virtual model according to the target number.
11. The method of claim 1, wherein before controlling the virtual model to move from the display area corresponding to the first anchor terminal to the display area corresponding to the second anchor terminal in the shared user interface according to the target motion direction, the method further comprises:
acquiring a display area corresponding to each anchor terminal and position information of the display area corresponding to each anchor terminal;
controlling the virtual model to move according to the target movement direction to obtain a movement track;
and determining the second anchor terminal from the plurality of anchor terminals according to the position information and the motion trail.
12. The method of claim 11, wherein determining the second anchor terminal from the plurality of anchor terminals based on the location information and the motion profile comprises:
determining, according to the position information and the motion trail, at least one target anchor terminal whose display area the virtual model passes through in the motion process;
and determining the second anchor terminal from the at least one target anchor terminal according to the stay time of the virtual model in the display area corresponding to the at least one target anchor terminal.
13. The method of claim 1, wherein after controlling the virtual model to move from the display area corresponding to the first anchor terminal to the display area corresponding to the second anchor terminal in the shared user interface according to the target motion direction, the method further comprises:
detecting whether the second anchor terminal sends a target image within a second preset time length or not, wherein the target image at least comprises the virtual model and a second control medium, and the distance between the second control medium and the virtual model is smaller than a preset distance;
determining that the second anchor terminal does not receive the virtual model under the condition that the target image is not detected within the second preset time length;
and when the target image is detected within the second preset time and the posture corresponding to the second control medium is a preset posture, determining that the second anchor terminal successfully receives the virtual model.
14. The method of claim 1, wherein after controlling the virtual model to move from the display area corresponding to the first anchor terminal to the display area corresponding to the second anchor terminal in the shared user interface according to the target motion direction, the method further comprises:
acquiring a target image sent by the second anchor terminal, wherein the target image at least comprises the virtual model and the area position of the virtual model;
when the area position is detected to be located in a first area corresponding to the second anchor terminal, determining that the second anchor terminal does not receive the virtual model;
and when the area position is detected to be located in a second area corresponding to the second anchor terminal, determining that the second anchor terminal successfully receives the virtual model.
15. An interaction device in game live broadcast, comprising:
the identification module is used for identifying an action instruction sent by a first anchor terminal and determining the initial motion direction of a virtual model, wherein the action instruction is an instruction for controlling the virtual model through a first control medium, and the virtual model is an interactive medium for game interaction of a plurality of anchor terminals in a shared user interface;
the determining module is used for determining the adjustment direction corresponding to the virtual model according to the virtual gift sent by at least one audience terminal;
the adjusting module is used for adjusting the initial movement direction according to the adjusting direction to obtain a target movement direction;
and the processing module is used for controlling the virtual model to move from the display area corresponding to the first anchor terminal to the display area corresponding to the second anchor terminal in the shared user interface according to the target motion direction, so that the second anchor terminal at least controls the subsequent motion direction of the virtual model.
16. A storage medium comprising a stored program, wherein the program performs the method of interacting in a live game of any one of claims 1 to 14.
17. A processor, characterized in that the processor is configured to run a program, wherein the program is run to execute the interaction method in the live game of any one of claims 1 to 14.
CN202110797572.7A 2021-07-14 2021-07-14 Interaction method, device, processor and storage medium in game live broadcast Pending CN113596558A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110797572.7A CN113596558A (en) 2021-07-14 2021-07-14 Interaction method, device, processor and storage medium in game live broadcast

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110797572.7A CN113596558A (en) 2021-07-14 2021-07-14 Interaction method, device, processor and storage medium in game live broadcast

Publications (1)

Publication Number Publication Date
CN113596558A true CN113596558A (en) 2021-11-02

Family

ID=78247425

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110797572.7A Pending CN113596558A (en) 2021-07-14 2021-07-14 Interaction method, device, processor and storage medium in game live broadcast

Country Status (1)

Country Link
CN (1) CN113596558A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680157A (en) * 2017-09-08 2018-02-09 广州华多网络科技有限公司 It is a kind of based on live interactive approach and live broadcast system, electronic equipment
CN109275040A (en) * 2018-11-06 2019-01-25 网易(杭州)网络有限公司 Exchange method, device and system based on game live streaming
JP6790203B1 (en) * 2019-09-13 2020-11-25 グリー株式会社 Computer programs, server devices, terminal devices and methods
CN111918090A (en) * 2020-08-10 2020-11-10 广州繁星互娱信息科技有限公司 Live broadcast picture display method and device, terminal and storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114071177A (en) * 2021-11-16 2022-02-18 网易(杭州)网络有限公司 Virtual gift sending method and device and terminal equipment
CN114071177B (en) * 2021-11-16 2023-09-26 网易(杭州)网络有限公司 Virtual gift sending method and device and terminal equipment
CN114786024A (en) * 2022-04-01 2022-07-22 广州方硅信息技术有限公司 Live broadcast room game synchronization method, system, device, equipment and storage medium
CN114786024B (en) * 2022-04-01 2024-05-28 广州方硅信息技术有限公司 Live room game synchronization method, system, device, equipment and storage medium
CN115379262A (en) * 2022-07-12 2022-11-22 网易(杭州)网络有限公司 Game operation control method, device and system and electronic equipment
CN115379262B (en) * 2022-07-12 2024-05-31 网易(杭州)网络有限公司 Game operation control method, device and system and electronic equipment

Similar Documents

Publication Publication Date Title
US11660531B2 (en) Scaled VR engagement and views in an e-sports event
US8177611B2 (en) Scheme for inserting a mimicked performance into a scene and providing an evaluation of same
CN107566911B (en) Live broadcast method, device and system and electronic equipment
US11724177B2 (en) Controller having lights disposed along a loop of the controller
JP6976424B2 (en) Audience view of the interactive game world shown at live events held at real-world venues
CN107680157B (en) Live broadcast-based interaction method, live broadcast system and electronic equipment
KR101686576B1 (en) Virtual reality system and audition game system using the same
CN107592575B (en) Live broadcast method, device and system and electronic equipment
JP7184913B2 (en) Creating Winner Tournaments with Fandom Influence
US10380798B2 (en) Projectile object rendering for a virtual reality spectator
EP2281245B1 (en) Method and apparatus for real-time viewer interaction with a media presentation
JP4890552B2 (en) Interactivity via mobile image recognition
US20190073830A1 (en) Program for providing virtual space by head mount display, method and information processing apparatus for executing the program
CN113596558A (en) Interaction method, device, processor and storage medium in game live broadcast
JP2004512865A (en) Interactive games through set-top boxes
CN111359200A (en) Augmented reality-based game interaction method and device
CN113453034A (en) Data display method and device, electronic equipment and computer readable storage medium
US11865446B2 (en) Interactive what-if game replay methods and systems
KR20220152428A (en) Terminal device, virtual sports device, virtual sports system and method for operating virtual sports system
CN114425162A (en) Video processing method and related device
CN114374856B (en) Interaction method and device based on live broadcast
JP7403581B2 (en) systems and devices
US20230381674A1 (en) Triggering virtual help or hindrance based on audience participation tiers
CN117138356A (en) Electronic contest audience entry
KR20240068451A (en) Metaverse movement lesson system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination