CN114760498A - Method, system, medium, and device for composite action interaction in connected-mic live broadcast

Method, system, medium, and device for composite action interaction in connected-mic live broadcast

Info

Publication number
CN114760498A
Authority
CN
China
Prior art keywords
live broadcast
image
broadcast room
imitation
anchor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210339603.9A
Other languages
Chinese (zh)
Other versions
CN114760498B (en)
Inventor
许英俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Cubesili Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Cubesili Information Technology Co Ltd filed Critical Guangzhou Cubesili Information Technology Co Ltd
Priority to CN202210339603.9A
Priority claimed from CN202210339603.9A
Publication of CN114760498A
Application granted
Publication of CN114760498B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234345 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements the reformatting operation being performed only on part of the stream, e.g. a region of the image or a time segment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/451 Execution arrangements for user interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/239 Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
    • H04N21/2393 Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests involving handling client requests
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258 Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866 Management of end-user data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/437 Interfacing the upstream path of the transmission network, e.g. for transmitting client requests to a VOD server
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Marketing (AREA)
  • Business, Economics & Management (AREA)
  • Computer Graphics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The embodiments of this application relate to the field of webcast live streaming and provide a method, system, apparatus, medium, and computer device for composite action interaction in connected-mic live broadcast. The method comprises the following steps: the server, in response to a composite-action interaction start instruction, obtains a plurality of anchor identifiers and establishes a connected-mic session between the anchor clients corresponding to the anchor identifiers; the clients in the live broadcast room output audio/video stream data in the live broadcast room; the clients in the live broadcast room, in response to an imitation-image display instruction, obtain imitation image data and output the imitation image in the live broadcast room, wherein the imitation image contains at least two objects to be imitated, the actions of the objects to be imitated complement one another and point to a preset shape, and the imitation image is used to instruct the anchors corresponding to the anchor identifiers to cooperatively imitate the actions of their respective objects to be imitated; and the server, in response to an image-imitation completion instruction, outputs the image-imitation interaction result in the live broadcast room. The method and device can improve the live viewing rate and the audience retention rate.

Description

Method, system, medium, and device for composite action interaction in connected-mic live broadcast
Technical Field
The embodiments of this application relate to the technical field of webcast live streaming, and in particular to a method, system, medium, and device for composite action interaction in connected-mic live broadcast.
Background
With the progress of network communication technology, users can participate in ever more online activities; among them, webcast live streaming, with its strong real-time performance and interactivity, is welcomed by more and more users.
During webcast live streaming, anchors can carry out real-time audio/video interaction with one another by establishing a connected-mic session, so that viewers who join their respective live broadcast rooms can watch the audio/video interaction of several anchors within one live broadcast room.
However, in connected-mic live broadcast, plain audio/video interaction content is monotonous and cannot raise the activity of viewers in the live broadcast room, so the viewers' interactive experience is poor, which lowers the live viewing rate and the audience retention rate.
Disclosure of Invention
To overcome the problems in the related art, the present application provides a method, system, apparatus, medium, and computer device for composite action interaction in connected-mic live broadcast, which can enrich connected-mic interaction and improve the live viewing rate and the audience retention rate.
According to a first aspect of the embodiments of the present application, a method for composite action interaction in connected-mic live broadcast includes the following steps:
the server, in response to a composite-action interaction start instruction, obtains a plurality of anchor identifiers and establishes a connected-mic session between the anchor clients corresponding to the anchor identifiers;
a client in the live broadcast room obtains audio/video stream data and outputs the audio/video stream data in the live broadcast room; the live broadcast rooms include the live broadcast rooms created by the anchors corresponding to the anchor identifiers, and the audio/video stream data includes the audio/video stream data corresponding to each anchor identifier;
a client in the live broadcast room, in response to an imitation-image display instruction, obtains imitation image data and outputs the imitation image in the live broadcast room according to the imitation image data; wherein the imitation image includes at least two objects to be imitated; the actions of the objects to be imitated complement one another and point to a preset shape; and the imitation image is used to instruct the anchors corresponding to the anchor identifiers to cooperatively imitate the actions of their respective objects to be imitated;
and the server, in response to an image-imitation completion instruction, obtains the image-imitation interaction result and outputs it in the live broadcast room.
According to a second aspect of the embodiments of the present application, there is provided a system for composite action interaction in connected-mic live broadcast, including a server and clients, wherein:
the server, in response to a composite-action interaction start instruction, obtains a plurality of anchor identifiers and establishes a connected-mic session between the anchor clients corresponding to the anchor identifiers;
a client in the live broadcast room obtains audio/video stream data and outputs the audio/video stream data in the live broadcast room; the live broadcast rooms include the live broadcast rooms created by the anchors corresponding to the anchor identifiers, and the audio/video stream data includes the audio/video stream data corresponding to each anchor identifier;
a client in the live broadcast room, in response to an imitation-image display instruction, obtains imitation image data and outputs the imitation image in the live broadcast room according to the imitation image data; wherein the imitation image includes at least two objects to be imitated; the actions of the objects to be imitated complement one another and point to a preset shape; and the imitation image is used to instruct the anchors corresponding to the anchor identifiers to cooperatively imitate the actions of their respective objects to be imitated;
and the server, in response to an image-imitation completion instruction, obtains the image-imitation interaction result and outputs it in the live broadcast room.
According to a third aspect of the embodiments of the present application, there is provided an apparatus for composite action interaction in connected-mic live broadcast, including:
a mic-connection module, used by the server to respond to the composite-action interaction start instruction, obtain a plurality of anchor identifiers, and establish a connected-mic session between the anchor clients corresponding to the anchor identifiers;
an audio/video data output module, used by a client in the live broadcast room to obtain audio/video stream data and output it in the live broadcast room; the live broadcast rooms include the live broadcast rooms created by the anchors corresponding to the anchor identifiers, and the audio/video stream data includes the audio/video stream data corresponding to each anchor identifier;
an imitation-image display module, used by a client in the live broadcast room to respond to the imitation-image display instruction, obtain imitation image data, and output the imitation image in the live broadcast room according to the imitation image data; wherein the imitation image includes at least two objects to be imitated; the actions of the objects to be imitated complement one another and point to a preset shape; and the imitation image is used to instruct the anchors corresponding to the anchor identifiers to cooperatively imitate the actions of their respective objects to be imitated;
and an imitation interaction result output module, used by the server to respond to the image-imitation completion instruction, obtain the image-imitation interaction result, and output it in the live broadcast room.
According to a fourth aspect of the embodiments of the present application, there is provided a computer device comprising a processor and a memory; the memory stores a computer program adapted to be loaded by the processor to perform the method for composite action interaction in connected-mic live broadcast described above.
According to a fifth aspect of the embodiments of the present application, there is provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method for composite action interaction in connected-mic live broadcast described above.
In the embodiments of the present application, the server, in response to the composite-action interaction start instruction, obtains a plurality of anchor identifiers and establishes a connected-mic session between the anchor clients corresponding to the anchor identifiers; a client in the live broadcast room obtains audio/video stream data and outputs it in the live broadcast room, where the live broadcast rooms include the live broadcast rooms created by the anchors corresponding to the anchor identifiers and the audio/video stream data includes the audio/video stream data corresponding to each anchor identifier; a client in the live broadcast room, in response to the imitation-image display instruction, obtains imitation image data and outputs the imitation image in the live broadcast room according to that data, the imitation image including at least two objects to be imitated whose actions complement one another and point to a preset shape, the imitation image being used to instruct the anchors corresponding to the anchor identifiers to cooperatively imitate the actions of their respective objects to be imitated; and the server, in response to the image-imitation completion instruction, obtains the image-imitation interaction result and outputs it in the live broadcast room. In a connected-mic live interaction scenario, the embodiments of the present application display a composite-action image containing at least two objects to be imitated in the live broadcast room, so that the anchors corresponding to the connected-mic anchor identifiers can cooperatively imitate the composite-action image, which makes connected-mic interaction more interesting, brings traffic to the anchors, and improves the live viewing rate and the audience retention rate.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
For a better understanding and practice, the present invention is described in detail below with reference to the accompanying drawings.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic block diagram of an application environment of the method for composite action interaction in connected-mic live broadcast according to an embodiment of the present application;
Fig. 2 is a flowchart of the method for composite action interaction in connected-mic live broadcast according to a first embodiment of the present application;
Fig. 3 is a schematic view of a live broadcast room interface during composite action interaction in connected-mic live broadcast according to the first embodiment of the present application;
Fig. 4 is a schematic view of a live broadcast room interface displaying an imitation image according to the first embodiment of the present application;
Fig. 5 is a flowchart of a method for establishing a connected-mic session according to the first embodiment of the present application;
Fig. 6 is a flowchart of a method for displaying an imitation image according to the first embodiment of the present application;
Fig. 7 is a flowchart of a method for obtaining voting scores according to the first embodiment of the present application;
Fig. 8 is a schematic structural diagram of the system for composite action interaction in connected-mic live broadcast according to a second embodiment of the present application;
Fig. 9 is a schematic block diagram of the apparatus for composite action interaction in connected-mic live broadcast according to a third embodiment of the present application;
Fig. 10 is a schematic block diagram of a computer device according to a fourth embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that the embodiments described are only some embodiments of the present application, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without any creative effort belong to the protection scope of the embodiments in the present application.
When the following description refers to the accompanying drawings, the same numbers in different drawings represent the same or similar elements unless otherwise indicated. In the description of the present application, it should be understood that the terms "first", "second", "third", and the like are used solely to distinguish one element from another and do not describe a particular order or sequence, nor do they indicate or imply relative importance. The specific meaning of these terms in the present application can be understood by those of ordinary skill in the art according to context. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
In addition, in the description of the present application, "a plurality" means two or more unless otherwise specified. "and/or" describes the association relationship of the associated object, indicating that there may be three relationships, for example, a and/or B, which may indicate: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
As will be appreciated by those skilled in the art, the terms "client" and "terminal device" as used herein cover both devices that include only a wireless signal receiver without transmit capability and devices with receiving and transmitting hardware capable of two-way communication over a bidirectional communication link. Such devices may include: cellular or other communication devices with a single-line or multi-line display, or without a multi-line display; PCS (Personal Communications Service) devices, which may combine voice, data processing, facsimile, and/or data communication capabilities; PDAs (Personal Digital Assistants), which may include a radio-frequency receiver, a pager, Internet/intranet access, a web browser, a notepad, a calendar, and/or a GPS (Global Positioning System) receiver; and conventional laptop and/or palmtop computers or other devices having and/or including a radio-frequency receiver. As used herein, a "client" or "terminal device" may be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or situated and/or configured to operate locally and/or in a distributed fashion at any other location on earth and/or in space. The "client" or "terminal device" used herein may also be a communication terminal, an Internet terminal, or a music/video playing terminal, for example a PDA, an MID (Mobile Internet Device), and/or a mobile phone with a music/video playing function, or a device such as a smart TV or a set-top box.
The hardware referred to by the names "server", "client", "service node", etc. in the present application is essentially a computer device with the performance of a personal computer, and is a hardware device having necessary components disclosed by the von neumann principles of a central processing unit (including an arithmetic unit and a controller), a memory, an input device, an output device, etc., wherein a computer program is stored in the memory, and the central processing unit loads a program stored in an external memory into the internal memory to run, executes instructions in the program, and interacts with the input and output devices, thereby accomplishing specific functions.
It should be noted that the concept of "server" in the present application can be extended to server clusters. According to network deployment principles understood by those skilled in the art, the servers should be logically divided; in physical space they may be independent of each other yet callable through interfaces, or they may be integrated into one physical computer or one computer cluster. Those skilled in the art will understand these variations, which should not constrain the network deployment of the present application.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of the method for composite action interaction in connected-mic live broadcast according to an embodiment of the present disclosure. The application scenario includes an anchor client 110, a viewer client 120, and a server 130.
The anchor client 110 interacts with the viewer client 120 through the server 130. Specifically, both the anchor client 110 and the viewer client 120 may access the Internet to establish a data communication link with the server 130. The network may be a communication medium of any connection type capable of enabling communication between the anchor client 110 and the server 130 and between the viewer client 120 and the server 130, such as a wired communication link, a wireless communication link, or a fiber-optic cable, which is not limited herein.
It should be noted that the clients proposed in the embodiments of the present application include the anchor client 110 and the viewer client 120.
It is noted that there are many understandings of the concept of "client" in the prior art, for example: it may be understood as an application program installed in a computer device, or may be understood as a hardware device corresponding to a server.
In the embodiments of the present application, the term "client" refers to a hardware device corresponding to a server, and more specifically, refers to a computer device, such as: smart phones, smart interactive tablets, personal computers, and the like.
When the client is a mobile device such as a smart phone and an intelligent interactive tablet, a user can install a matched mobile application program on the client and can also access a Web application program on the client.
When the client is a non-mobile device such as a Personal Computer (PC), the user can install a matching PC application on the client, and similarly can access a Web application on the client.
The mobile application refers to an application program that can be installed in the mobile device, the PC application refers to an application program that can be installed in the non-mobile device, and the Web application refers to an application program that needs to be accessed through a browser.
Specifically, the Web application program may be divided into a mobile version and a PC version according to the difference of the client types, and the page layout modes and the available server support of the two versions may be different.
In the embodiment of the application, the types of live application programs provided to the user are divided into a mobile end live application program, a PC end live application program and a Web end live application program. The user can autonomously select a mode of participating in the live webcasting according to different types of the client adopted by the user.
The present application divides clients into the anchor client 110 and the viewer client 120 depending on the identity of the user who enters the live broadcast room through the client. It should be noted that in practical applications, the functions of the viewer client 120 and the anchor client 110 may be performed by the same client at different times. Thus, the same client acts as the viewer client 120 when watching a webcast and as the anchor client 110 when publishing live video.
The anchor client 110 is the terminal that sends the webcast video and is generally the client used by the anchor user in the webcast. The hardware behind the anchor client 110 is essentially a computer device; specifically, as shown in fig. 1, it may be a smartphone, a smart interactive tablet, a personal computer, or a similar computer device.
The viewer client 120 is the terminal that receives and displays the webcast video and is generally the client used by a viewer user watching the video in the webcast. The hardware behind the viewer client 120 is likewise a computer device; specifically, as shown in fig. 1, it may be a smartphone, a smart interactive tablet, a personal computer, or a similar computer device.
Server 130 may act as a business server and may be responsible for further connecting related audio data servers, video streaming servers, and other servers providing related support, etc. to form a logically associated server cluster for serving related end devices, such as anchor client 110 and viewer client 120 shown in fig. 1.
In the embodiment of the present application, the anchor client 110 and the viewer client 120 may join the same live broadcast room (i.e., a live broadcast channel), where the live broadcast room is a chat room implemented by means of internet technology and the server 130, and generally has an audio/video broadcast control function. The anchor user is live in the live room through anchor client 110, and the viewer user at viewer client 120 can log into server 130 to watch the live.
In the live broadcast room, interaction between the anchor user and the viewer users can be realized through well-known online interaction modes such as voice, video, and text. Generally, the anchor user performs for the viewer users in the form of an audio/video stream, while the viewer users can interact with the anchor user through text or virtual gifts, and economic transactions can also occur during the interaction. The application form of the live broadcast room is not limited to online entertainment and can be extended to other related scenarios.
Specifically, the process by which a viewer user watches a live broadcast is as follows: the viewer user clicks a live application installed on the viewer client 120 and chooses to enter any live broadcast room, which triggers the viewer client 120 to load the live-room interface for the viewer user. The live-room interface contains several interactive components, for example a video component, a virtual gift panel component, and a public-screen component. By loading these interactive components, viewer users can watch the live broadcast in the live broadcast room and take part in various online interactions, including but not limited to giving virtual gifts, participating in live activities, and chatting on the public screen.
In this embodiment, the server 130 may further establish connected-mic sessions between anchor clients 110 for connected-mic live broadcast. The server 130 establishes a connected-mic session between the anchor clients 110 corresponding to the anchor identifiers carried in connected-mic requests; after the connected-mic session is established, a client in the live broadcast room can obtain the audio/video stream data corresponding to the several anchor identifiers and output it in the live broadcast room, so that users who enter the live broadcast room (both viewers and anchors) can see the real-time live broadcast of several anchors in one live broadcast room.
However, in connected-mic live broadcast, plain audio/video interaction content is monotonous and cannot raise the activity of viewers in the live broadcast room, so the viewers' interactive experience is poor, which lowers the live viewing rate and the audience retention rate.
To address these problems, an embodiment of the present application provides a method for composite action interaction in connected-mic live broadcast. Referring to fig. 2, fig. 2 is a flowchart of the method. The method can be executed by two kinds of execution bodies, namely the clients (including viewer clients and anchor clients) and the server, and includes the following steps:
step S101: and the server responds to the synthetic action interaction starting instruction, acquires a plurality of anchor identifiers and establishes the connection of the microphone session between the anchor clients corresponding to the anchor identifiers.
Before the synthesis action is carried out, the anchor needs to start the live broadcast firstly, specifically, the anchor can click to access a live broadcast application program, enter an opening interface, trigger the anchor client to send a live broadcast opening request to the server through interaction with a live broadcast opening control in the opening interface, the server responds to the live broadcast opening request and sends the live broadcast room data to the anchor client, the anchor client loads a live broadcast room interface according to the live broadcast room data and plays audio and video streaming data collected by the anchor client in the live broadcast room, and at the moment, audiences can also enter the live broadcast room to watch the live broadcast.
A composite-action component is loaded in the live-room interface, and the anchor can start the composite-action gameplay by interacting with this component.
Because the composite action provided in the live broadcast room requires the cooperation of at least two anchors, when an anchor starts the composite-action gameplay, the server needs to establish a connected-mic session between anchors so that the composite action interaction takes place in a connected-mic live scenario.
Therefore, before describing step S101 in detail, this embodiment first explains which situations trigger the server to issue the composite-action start instruction, as follows:
in an optional embodiment, before the server executes step S101, the server responds to a composite action start request sent by the anchor client, parses the composite action start request to obtain a composite action identifier, selects and sends at least two anchor clients containing the composite action identifier start request, generates a composite action instruction according to anchor identifiers corresponding to the at least two anchor clients, and sends the composite action start instruction.
In this embodiment, the server randomly selects the anchor that starts the composite action by means of random matching, and establishes a connection session for the corresponding anchor client.
It can be understood that the anchor number required for synthesizing the action image may be set and selected by the anchor client, for example, when the anchor client triggers the synthesizing action component, a synthesizer selection control may be provided, and the anchor may select the cooperation of two or more anchors through the synthesizer selection control to complete the synthesizing action.
It will be appreciated that the number of anchor required to compose a motion image may also be set by the server by default, for example: the server defaults that the two anchor clients are needed to cooperate, and then the server randomly selects the two anchor clients which send the synthetic action opening request containing the synthetic action interaction identifier to establish the connection session connection for the two anchor clients.
In addition, the anchor can also start an interactive playing method in a friend mode, specifically, the anchor client firstly obtains an anchor identifier and a composite action identifier corresponding to the wheat-connected anchor (which is in friend relationship with the current anchor) selected by the current anchor, and generating a composite action starting request according to the anchor identification and the composite action identification, sending the composite action starting request to a server, responding to the composite action starting request by the server, acquiring the anchor identification and the composite action identification, sending a direct-broadcasting-with-TV request to a corresponding anchor client, wherein, the live microphone connecting request comprises a main microphone identifier and a composite action identifier for requesting to connect the microphone, so that the anchor receiving the microphone connecting invitation determines which anchor invites the anchor to carry out microphone connecting and which interactive playing method is carried out at present, and after the server receives the connecting-to-microphone confirmation information sent by the corresponding anchor client, sending a synthetic action starting instruction.
In another alternative embodiment, the composite action playing method may also be performed as a team, and live composite action interaction is performed in a team form, for example, when the composite action playing method is performed, multiple rounds of composite actions may be set, each round of composite action is completed by one or more team members in each group, and finally, the interaction scores of the team members are counted to obtain a composite interaction result of the corresponding group. The mode of grouping may also be a friend mode or a random mode, and the grouping implementation process is not described in detail herein.
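To make the matching flow above concrete, the following Python sketch illustrates one way a server might collect composite-action start requests and randomly match anchors whose requests carry the same composite-action identifier (the random mode described above). All names (PendingRequest, MatchServer, establish_mic_session) are hypothetical; the patent does not specify an implementation.

```python
import random
from dataclasses import dataclass, field

@dataclass
class PendingRequest:
    anchor_id: str                  # identifier of the anchor requesting the gameplay
    action_id: str                  # composite-action identifier carried in the request
    invitee_id: str | None = None   # set only in "friend mode" invitations

@dataclass
class MatchServer:
    pending: list[PendingRequest] = field(default_factory=list)
    sessions: list[tuple[str, ...]] = field(default_factory=list)

    def on_start_request(self, req: PendingRequest, party_size: int = 2) -> None:
        """Collect start requests; randomly match anchors whose requests
        carry the same composite-action identifier (random mode)."""
        self.pending.append(req)
        same_action = [r for r in self.pending if r.action_id == req.action_id]
        if len(same_action) >= party_size:
            picked = random.sample(same_action, party_size)
            for r in picked:
                self.pending.remove(r)
            self.establish_mic_session(tuple(r.anchor_id for r in picked))

    def establish_mic_session(self, anchor_ids: tuple[str, ...]) -> None:
        # Stand-in for establishing the connected-mic session and pushing the
        # composite-action start instruction to each matched anchor client.
        self.sessions.append(anchor_ids)
        print(f"connected-mic session established for {anchor_ids}")

server = MatchServer()
server.on_start_request(PendingRequest("anchorA", "heart"))
server.on_start_request(PendingRequest("anchorB", "heart"))  # triggers a match
```

Friend mode would differ only in that `invitee_id` is set and the session is established after the invited anchor's client confirms the connection.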
Step S102: a client in the live broadcast room obtains audio/video stream data and outputs it in the live broadcast room; the live broadcast rooms include the live broadcast rooms created by the anchors corresponding to the anchor identifiers, and the audio/video stream data includes the audio/video stream data corresponding to each anchor identifier.
The live broadcast rooms include the live broadcast rooms created by the anchors corresponding to the anchor identifiers.
The clients in the live broadcast room include the anchor clients and the viewer clients in the live broadcast room.
The audio/video stream data includes the audio/video stream data corresponding to each anchor identifier and may be mixed audio/video stream data or unmixed audio/video data.
It should be noted that mixed video stream data, in which the video streams corresponding to the anchor identifiers have already been spliced frame by frame, can be displayed directly in a single video window of the live-room interface, whereas unmixed audio/video data must be bound to and displayed in separate video windows.
In the embodiments of the present application, the audio/video data corresponding to the anchor identifiers are mixed to obtain the mixed audio/video stream data.
In an optional embodiment, the server performs the stream-mixing operation. Specifically, after pulling the audio/video stream data corresponding to each anchor identifier from each anchor client, the server mixes these streams to obtain the mixed audio/video stream data, then sends it to the clients in the live broadcast room, and the clients obtain the mixed stream and output it in the live broadcast room.
In another optional embodiment, the anchor client performs the stream-mixing operation. Specifically, after pulling the audio/video stream data corresponding to each anchor identifier from each anchor client, the server sends the audio/video stream data corresponding to each anchor identifier back to the anchor clients. Optionally, the server may send a given anchor client only the audio/video stream data corresponding to the other connected anchors' identifiers, reducing the amount of data transmitted. After an anchor client obtains the audio/video stream data corresponding to each anchor identifier, it performs the stream-mixing operation to obtain the mixed stream, which is finally delivered through the server to the viewer clients in the live broadcast room and output there.
In other optional embodiments, both the anchor clients and the viewer clients perform the stream-mixing operation. Specifically, after pulling the audio/video stream data corresponding to each anchor identifier from each anchor client, the server sends the per-anchor streams to the clients in the live broadcast room (both anchor clients and viewer clients); after the clients obtain the streams, they perform the stream-mixing operation and output the mixed stream in the live broadcast room.
The embodiments of the present application do not limit which execution body performs the stream-mixing operation on the audio/video streams corresponding to the anchor identifiers; it may be the server, an anchor client, or a viewer client.
In an optional embodiment, the server includes a business server and a stream server: the business server handles the business logic, while the stream server handles the related stream data and performs the stream-mixing operation.
Referring to fig. 3, fig. 3 is a schematic view of the live-room interface during composite action interaction in connected-mic live broadcast. Fig. 3 shows a video frame of the connected-mic interaction between two anchors: the video display area 41 corresponding to anchor A occupies the left side of the video window, and the video display area 42 corresponding to anchor B occupies the right side. In fig. 3, the video display area 41 and the video display area 42 split the video window equally, left and right.
It can be understood that when more anchors take part in the connected-mic composite action interaction, the layout of the video display areas within the video window changes accordingly; this is not described further here.
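As a rough illustration of the frame-by-frame splicing that yields the mixed video stream of fig. 3 (anchor A's picture on the left half of the video window, anchor B's on the right), here is a minimal NumPy sketch. It assumes equal-sized RGB frames and crude horizontal subsampling; a production mixer would scale and encode properly, and the function names are illustrative only.

```python
import numpy as np

def mix_frames(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Splice two anchors' video frames into one mixed frame: A's picture
    fills the left half of the video window, B's the right half (fig. 3)."""
    h, w, _ = frame_a.shape
    assert frame_b.shape == frame_a.shape, "sketch assumes equal frame sizes"
    mixed = np.empty_like(frame_a)
    # Halve each source frame's width, then place the halves side by side.
    mixed[:, : w // 2] = frame_a[:, ::2]   # crude 2x horizontal subsample
    mixed[:, w // 2 :] = frame_b[:, ::2]
    return mixed

# 720p RGB test frames: anchor A all red, anchor B all blue.
a = np.zeros((720, 1280, 3), np.uint8); a[..., 0] = 255
b = np.zeros((720, 1280, 3), np.uint8); b[..., 2] = 255
print(mix_frames(a, b).shape)  # (720, 1280, 3)
```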
Step S103: a client in the live broadcast room, in response to an imitation-image display instruction, obtains imitation image data and outputs the imitation image in the live broadcast room according to the imitation image data; wherein the imitation image includes at least two objects to be imitated; the actions of the objects to be imitated complement one another and point to a preset shape; the imitation image is used to instruct the anchors corresponding to the anchor identifiers to cooperatively imitate the actions of their respective objects to be imitated.
The imitation image data is the data used to present the imitation image in the live broadcast room, and the imitation image contains at least two objects to be imitated; an object to be imitated may be a real person, a cartoon character, a cartoon animal, or the like. The actions of the at least two objects to be imitated complement one another and point to a preset shape, which may be conventional or unconventional, such as a heart or a circle. For example, the imitation image may show a heart jointly formed by the hand gestures of two cartoon characters, and the anchors corresponding to the two anchor identifiers each imitate the hand gesture of one cartoon character so that together they form the heart.
Displaying the imitation image in the live broadcast room can be implemented in different ways: the imitation image may be overlaid directly on the live picture of the live broadcast room, or the imitation image data may be mixed with the video stream data before being output in the live broadcast room; either way, the imitation image is displayed in the live broadcast room.
It should be noted that this mixing operation may likewise be executed by the server, an anchor client, or a viewer client, which is not limited here. To ensure that the imitation image is displayed in the live broadcast room for a certain length of time, the imitation image data can be mixed continuously with the video frames of the video stream until the mixing has lasted for the imitation duration. The imitation duration runs from the moment the imitation image is displayed in the live broadcast room, during which the anchors imitate the objects to be imitated; it can also be understood as the image's display duration in the live broadcast room.
In an optional embodiment, the imitation image data further includes the display position of the imitation image in the live broadcast room, which may be a preset fixed position, for example centered above the anchors' picture in the live broadcast room. Referring to fig. 4, the imitation image is displayed above the live picture of the live broadcast room.
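A minimal sketch of the continuous mixing described above: the imitation image is blended into each mixed video frame at a preset display position until the imitation duration elapses. Alpha blending and all parameter values here (position, alpha, fps, durations) are assumptions for illustration, not values taken from the patent.

```python
import numpy as np

def overlay_imitation_image(frame: np.ndarray, image: np.ndarray,
                            top_left=(40, 320), alpha=0.85) -> np.ndarray:
    """Blend the imitation image onto one mixed video frame at a preset
    display position (here: above the anchors' picture)."""
    y, x = top_left
    h, w, _ = image.shape
    region = frame[y : y + h, x : x + w].astype(np.float32)
    frame[y : y + h, x : x + w] = (
        alpha * image + (1 - alpha) * region
    ).astype(np.uint8)
    return frame

def run_imitation_display(frames, image, fps=30, imitation_seconds=10):
    """Keep mixing the image into frames until the imitation duration is
    reached, then pass frames through unchanged."""
    limit = fps * imitation_seconds
    for i, frame in enumerate(frames):
        yield overlay_imitation_image(frame, image) if i < limit else frame

# Tiny demo: blend a placeholder imitation image onto one dark 720p frame.
frame = np.full((720, 1280, 3), 30, np.uint8)
heart = np.full((200, 640, 3), 200, np.uint8)
out = overlay_imitation_image(frame, heart)
```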
Step S104: the server, in response to an image-imitation completion instruction, obtains the image-imitation interaction result and outputs it in the live broadcast room.
The image-imitation completion instruction can be triggered by the server when the imitation duration ends and it is time to display the cooperative-imitation interaction result.
The cooperative-imitation interaction result may include one or more of: a notification of whether the cooperative imitation succeeded, the composite imitation score obtained for the anchors' imitation of the image, the composite image captured of the anchors' imitation, and the like.
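Purely as an illustration of what such a result message could carry, here is a hypothetical schema; the patent does not define concrete fields.

```python
from dataclasses import dataclass

@dataclass
class ImitationResult:
    """One possible shape for the image-imitation interaction result that
    the server pushes to every client in the live broadcast room."""
    success: bool                       # whether the cooperative imitation succeeded
    composite_score: float | None       # composite imitation score, if computed
    composite_frame_png: bytes | None   # snapshot of the anchors' composite pose

result = ImitationResult(success=True, composite_score=92.5, composite_frame_png=None)
print(result)
```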
To summarize: the server, in response to the composite-action interaction start instruction, obtains a plurality of anchor identifiers and establishes a connected-mic session between the anchor clients corresponding to the anchor identifiers; a client in the live broadcast room obtains audio/video stream data and outputs it in the live broadcast room, where the live broadcast rooms include the live broadcast rooms created by the anchors corresponding to the anchor identifiers and the audio/video stream data includes the audio/video stream data corresponding to each anchor identifier; a client in the live broadcast room, in response to the imitation-image display instruction, obtains imitation image data and outputs the imitation image in the live broadcast room according to that data, the imitation image including at least two objects to be imitated whose actions complement one another and point to a preset shape, the imitation image being used to instruct the anchors corresponding to the anchor identifiers to cooperatively imitate the actions of their respective objects to be imitated; and the server, in response to the image-imitation completion instruction, obtains the image-imitation interaction result and outputs it in the live broadcast room. In a connected-mic live interaction scenario, the embodiments of the present application display a composite-action image containing at least two objects to be imitated in the live broadcast room, so that the anchors corresponding to the connected-mic anchor identifiers can cooperatively imitate the composite-action image, which makes connected-mic interaction more interesting, brings traffic to the anchors, and improves the live viewing rate and the audience retention rate.
In an optional embodiment, to increase the anchors' enthusiasm for imitating the composite action and make the interaction more engaging, after the connected-mic session is established and before the imitation image is displayed, the viewers of the live broadcast rooms created by the connected anchors are surveyed about their willingness to interact. Specifically, referring to fig. 5, before the step in which a client in the live broadcast room responds to the imitation-image display instruction in step S103, the method includes steps S10311-S10313:
step S10311: a client in the live broadcast room receives the play voting control data and displays the play voting control in the live broadcast room according to the play voting control data; and the play voting control is used for acquiring whether audiences in the live broadcast room support the anchor to carry out synthetic action interaction.
Specifically, playing prompt information and a selection control for whether the anchor is supported to carry out synthetic action interaction or not are displayed on a playing voting control; the play prompt information may be, for example, whether to perform composite action prompt information with the anchor, and the selection control whether to support the anchor to perform composite action interaction may include yes and no selection controls, and if the audience clicks the yes selection control, the audience indicates to support the anchor to perform composite action interaction; if the viewer clicks the "no" selection control, it indicates that the anchor is not supported for composite action interaction. It will be appreciated that an audience user has only one operation to select one of the selection controls at a time.
Step S10312: the server obtains the voting data triggered on the gameplay voting control by the viewers at the clients in the live broadcast room within a preset time; if the number of votes supporting the composite action interaction is greater than the number of votes against it, the server sends the imitation-image display instruction to the clients in the live broadcast room.
Step S10313: if the number of votes supporting the composite action interaction is less than or equal to the number of votes against it, the server disconnects the connected-mic session, reselects several anchor clients that have sent composite-action start requests containing the composite-action identifier, generates a composite-action start instruction according to the anchor identifiers corresponding to those anchor clients, and re-establishes a connected-mic session between them.
In this embodiment, the voting data triggered on the gameplay voting control within the preset time is collected; if the supporting votes outnumber the opposing votes, the imitation-image display instruction is sent to the clients in the live broadcast room and the composite action interaction starts, while if they do not, the server disconnects the connected-mic session and re-establishes a new one. Letting the viewers of the live broadcast room decide whether the composite action interaction goes ahead ensures that enough viewers take part once the interaction starts, which increases the anchors' enthusiasm for imitating the composite action and makes the interaction more interesting.
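Steps S10312 and S10313 amount to a simple tally-and-branch. The sketch below is one illustrative reading in Python; the vote labels, FakeSession, and function names are assumptions, not the patent's implementation.

```python
from collections import Counter

def decide_after_vote(votes: list[str], mic_session) -> str:
    """Tally 'yes'/'no' votes collected within the preset window and choose
    the next step, mirroring steps S10312-S10313."""
    tally = Counter(votes)
    if tally["yes"] > tally["no"]:
        return "send_imitation_image_display_instruction"
    # Not enough support: tear down the session and re-match anchors.
    mic_session.disconnect()
    return "rematch_anchors"

class FakeSession:
    def disconnect(self) -> None:
        print("connected-mic session disconnected")

print(decide_after_vote(["yes", "yes", "no"], FakeSession()))
```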
In an alternative embodiment, the simulated image data includes the simulated image and a display position of the simulated image in the live view; referring to fig. 6, in step S103, the client in the live broadcast room responds to the mimic image display instruction to obtain mimic image data; a step of displaying the simulant image in the live view based on the simulant image data, including steps S10321 to S10322:
step S10321: and responding to the imitation image display instruction by the server, randomly acquiring imitation image data from the imitation image database, and sending the imitation image to the client in the live broadcast room.
Step S10322: a client side of the live broadcast room receives the simulated image data; the mimic image is displayed at a display position in the live broadcast room based on the mimic image data.
The display position of the imitation image in the embodiment of the application may be set by the server. Generally, the display position is chosen so that the anchors of the live broadcast room are not occluded; for example, the imitation image may be displayed above the live broadcast screen of the live broadcast room.
In an alternative embodiment, in step S103, after the step in which the client of the live broadcast room responds to the imitation image display instruction, acquires the imitation image data, and outputs the imitation image in the live broadcast room according to the imitation image data, the method further includes: the client of the live broadcast room receives action synthesis prompt data and displays action synthesis prompt information in the live broadcast room according to the action synthesis prompt data; and/or the client of the live broadcast room receives action synthesis play countdown control data and displays an action synthesis play countdown control in the live broadcast room according to that data, wherein the countdown control is used for displaying the remaining preparation time of the synthetic action interaction.
The action synthesis prompt information indicates that the challenge can be completed when the anchors cooperate to match the imitation image, and that the play starts after the countdown ends. Optionally, after the imitation image is displayed, the action synthesis prompt information and the synthesis play countdown control may be shown over the imitation image in a pop-up window to prompt the anchors to prepare for the action synthesis play.
By displaying the action synthesis prompt information and the action synthesis play countdown control in the live broadcast room, the anchors of the live broadcast room learn the requirements of the action synthesis play in advance and can prepare for it.
To help the anchors cooperate better with the imitation image, after the countdown of the action synthesis play countdown control ends, the client of the live broadcast room receives imitation duration countdown control data and displays an imitation duration countdown control in the live broadcast room according to that data; the imitation duration countdown control indicates the remaining duration for imitating the synthetic action. Its display position can be set according to actual needs; for example, as shown in fig. 4, the imitation duration countdown control is displayed below the live broadcast room.
In an optional embodiment, in step S103, after the step in which the client of the live broadcast room responds to the imitation image display instruction, acquires the imitation image data, and outputs the imitation image in the live broadcast room according to the imitation image data, the method includes the following steps: the server responds to an imitation duration ending instruction and obtains, from the mixed video stream of the live broadcast room, the video frame image at the moment the imitation duration ending instruction is received; the server then processes the video frame image according to a preset image recognition algorithm and the imitation image to obtain a synthetic imitation score corresponding to the imitation image.
Specifically, the image recognition algorithm may be preset in the imitation image data, in which case the server obtains the image recognition algorithm when it obtains the imitation image data from the imitation image database. Alternatively, the image recognition algorithm may be preset in the server: the identifier corresponding to the imitation image is bound and stored with the image recognition algorithm, and the algorithm is then retrieved from the server according to that identifier.
Processing the video frame image according to the preset image recognition algorithm and the imitation image to obtain the synthetic imitation score may proceed as follows: perform gesture recognition on the video frame image to obtain gesture coordinates, and draw a synthetic picture according to the gesture coordinates; then, according to the synthetic picture and the preset image recognition method, compute a similarity value between the synthetic picture and the preset shape corresponding to the imitation image, convert the similarity value into a score of 0 to 100, and store it as the synthetic imitation score corresponding to the imitation image.
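As a rough sketch of this scoring step, assuming OpenCV and NumPy are available, that the preset shape is stored as a contour, and that pose detection is supplied by a hypothetical detect_pose callable (the application does not name a specific pose model):

import cv2
import numpy as np

def synthetic_imitation_score(frame, preset_shape_contour, detect_pose):
    """Score 0-100 for how closely the poses in `frame` match the preset
    shape of the imitation image; `detect_pose` returns an (N, 2) array
    of gesture coordinates in pixels."""
    keypoints = detect_pose(frame)  # gesture recognition
    canvas = np.zeros(frame.shape[:2], dtype=np.uint8)
    pts = keypoints.astype(np.int32).reshape(-1, 1, 2)
    cv2.polylines(canvas, [pts], False, 255, 8)  # draw the synthetic picture
    contours, _ = cv2.findContours(canvas, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0.0
    # matchShapes returns 0 for identical shapes and grows with dissimilarity.
    d = cv2.matchShapes(contours[0], preset_shape_contour,
                        cv2.CONTOURS_MATCH_I1, 0.0)
    return float(np.clip(100.0 * (1.0 - d), 0.0, 100.0))  # map to 0-100

The linear mapping from shape distance to the 0-100 score is one illustrative choice; any monotone conversion satisfies the description above.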
To further improve the interest of the interaction, after the step in which the server responds to the imitation duration ending instruction, the method includes: the client of the live broadcast room receives an image acquisition prompt instruction and displays image acquisition prompt information in the live broadcast room according to that instruction.
Specifically, after the imitation duration ends, a clickable shooting special effect may be output in the live broadcast room according to the image acquisition prompt instruction, and after the shooting special effect is triggered, a notification of the analysis result may be displayed in the live broadcast room.
To display the synthetic imitation result in real time, after the synthetic imitation score corresponding to the imitation image is obtained, the video frame image and the synthetic imitation score corresponding to the imitation image are output in the live broadcast room.
In another optional embodiment, in step S103, after the step in which the client of the live broadcast room responds to the imitation image display instruction, acquires the imitation image data, and outputs the imitation image in the live broadcast room according to the imitation image data, the method includes the following steps: the server responds to the imitation duration ending instruction and, within a preset time period, acquires from the video stream of each anchor identifier a plurality of video frame images at a corresponding preset time interval; the server synthesizes and matches the video frame images corresponding to each anchor identifier at the corresponding preset time interval to obtain a plurality of matched images; the server processes each matched image according to the preset image recognition algorithm and the imitation image to obtain an imitation score for each matched image; and the server obtains the synthetic imitation score corresponding to the imitation image from the imitation scores of the matched images.
Specifically, the server responds to the imitation duration ending instruction and starts monitoring the video stream corresponding to each anchor identifier within the preset time period, acquiring a plurality of video frame images from each stream at the preset time interval. The video frames corresponding to the anchor identifiers are then synthesized and matched at the corresponding preset time intervals to obtain a plurality of matched images. Gesture recognition is performed on each matched image to obtain gesture coordinates, and a corresponding synthetic picture is drawn from those coordinates. According to each synthetic picture and the preset image recognition method, a similarity value against the preset shape corresponding to the imitation image is computed, converted into a score of 0 to 100, and stored, giving an imitation score for each synthetic picture. The average of these imitation scores is taken as the synthetic imitation score corresponding to the imitation image. It is to be understood that the highest value, the lowest value, a weighted average, or the like of the imitation scores may equally be used as the synthetic imitation score.
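The aggregation of the per-frame imitation scores admits the several strategies just listed; a small sketch follows, in which the linearly increasing weights of the weighted variant are an illustrative assumption:

import numpy as np

def aggregate_imitation_scores(per_frame_scores, strategy="mean"):
    """Combine per-frame imitation scores into the synthetic imitation score."""
    s = np.asarray(per_frame_scores, dtype=float)
    if strategy == "mean":
        return float(s.mean())
    if strategy == "max":
        return float(s.max())
    if strategy == "min":
        return float(s.min())
    if strategy == "weighted":
        w = np.linspace(0.5, 1.0, len(s))  # e.g. weight later frames more
        return float((s * w).sum() / w.sum())
    raise ValueError(f"unknown strategy: {strategy}")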
Similarly, to display the synthetic imitation result in real time, after the synthetic imitation score corresponding to the imitation image is obtained, the synthetic picture with the highest imitation score and the synthetic imitation score corresponding to the imitation image are output in the live broadcast room.
In another alternative embodiment, referring to fig. 7, in step S103, after the step in which the client of the live broadcast room responds to the imitation image display instruction, acquires the imitation image data, and outputs the imitation image in the live broadcast room according to the imitation image data, the method includes steps S10331 to S10332:
Step S10331: the client of the live broadcast room acquires voting control data and displays a voting control in the live broadcast room according to the voting control data; whether the voting control takes effect is determined according to the virtual resources given by the audience of the live broadcast room.
For example, an audience user obtains votes by giving gifts to the anchors: the number of gifts sent or their value may be converted into a corresponding number of votes. Once the gift is sent successfully, the voting control becomes clickable, and the audience user can then vote through it to support the anchors cooperating to imitate the image. It should be understood that, to ensure fairness, an upper limit may be placed on the number of votes per user.
The display position of the voting control can be set according to actual needs; for example, as shown in fig. 4, it is displayed below the live broadcast interface.
Step S10332: the client of the live broadcast room responds to a trigger operation on the voting control and acquires the voting score of the anchors cooperating to imitate the image corresponding to each anchor identifier.
By adding the voting control, the application improves the interest of the synthetic action; and because whether the control takes effect is determined by the virtual resources given by the audience of the live broadcast room, the interactivity between the anchors and the audience is improved, as is the enthusiasm of the anchors.
In an alternative embodiment, the image imitation interaction result includes an image imitation interaction score. The step S104 in which the server responds to the image imitation completion instruction, obtains the image imitation interaction result, and outputs it in the live broadcast room includes: the server responds to the image imitation completion instruction, obtains the image imitation interaction score according to the synthetic imitation score corresponding to the imitation image and the voting score from the clients of the live broadcast room, and outputs the image imitation interaction score in the live broadcast room.
Specifically, the sum of the synthetic imitation score corresponding to the imitation image and the voting score from the clients of the live broadcast room may be used as the image imitation interaction score. Alternatively, the synthetic imitation score and the voting score may each be given a preset weight, and the weighted sum of the two is used as the image imitation interaction score.
On the basis of this embodiment, if the image imitation interaction score is larger than a preset threshold, the server issues a synthetic action success instruction, and the client of the live broadcast room outputs anchor reward information in the live broadcast room according to that instruction; if the image imitation interaction score is smaller than or equal to the preset threshold, the server issues a synthetic action failure instruction, and the client of the live broadcast room outputs penalty information in the live broadcast room according to that instruction.
Specifically, the anchor reward information may be a reward pattern such as a smiling face or a crown added to an anchor's face in the live broadcast room, or a special winning audio-visual effect output in the live broadcast room. The penalty information may be interaction information penalizing the anchor, for example requiring the anchor to sing or dance.
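A short sketch combining the weighted scoring above with the threshold settlement; the 80/20 weights and the threshold of 60 are illustrative assumptions, since the application leaves both as presets:

def image_imitation_interaction_score(synth_score, vote_score,
                                      w_synth=0.8, w_vote=0.2):
    # Weighted sum of the synthetic imitation score and the voting score;
    # with both weights set to 1.0 this degenerates to the plain sum.
    return w_synth * synth_score + w_vote * vote_score

def settle_round(score, threshold=60.0):
    # Above the preset threshold the server issues a success instruction
    # (anchor reward); otherwise a failure instruction (penalty).
    return "SYNTHETIC_ACTION_SUCCESS" if score > threshold else "SYNTHETIC_ACTION_FAILURE"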
It can be understood that, for one round of synthetic action imitation, a plurality of imitation images can be output in the live broadcast room. Specifically, the imitation image data includes a plurality of imitation images, a display sequence of those images, and an imitation duration corresponding to each image. In step S103, the step in which the client of the live broadcast room responds to the imitation image display instruction, acquires the imitation image data, and outputs the imitation image in the live broadcast room according to the imitation image data includes: the client of the live broadcast room responds to the imitation image display instruction and sequentially outputs the imitation images in the live broadcast room according to their display sequence and the imitation duration corresponding to each image.
That is, if the imitation image data includes a plurality of imitation images, the client of the live broadcast room, after responding to the imitation image display instruction, outputs the first imitation image in the live broadcast room, displays the second imitation image once the imitation duration of the first has ended, and so on, until all the imitation images have been output.
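A minimal client-side sketch of this sequential display; the image record and the display/clear callbacks are hypothetical:

import time
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ImitationImage:
    image_id: str
    duration_s: float  # imitation duration for this image

def show_in_sequence(images: List[ImitationImage],
                     display: Callable[[str], None],
                     clear: Callable[[str], None]) -> None:
    for img in images:  # images already follow the display sequence
        display(img.image_id)
        time.sleep(img.duration_s)  # wait out this image's imitation duration
        clear(img.image_id)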
In an optional embodiment, if the imitation image data includes a plurality of imitation images, then in step S104, when the server responds to the image imitation completion instruction and outputs the image imitation interaction result in the live broadcast room, it may obtain the imitation score corresponding to each imitation image and the audience voting score corresponding to each imitation image, compute an imitation interaction score for each image from those two values, take the average of the per-image scores as the final image imitation interaction score, and output that score in the live broadcast room.
In another alternative embodiment, if the imitation image data includes a plurality of imitation images, then in step S104, when the server responds to the image imitation completion instruction and outputs the image imitation interaction result in the live broadcast room, it may obtain the average of the imitation scores corresponding to the imitation images and the average audience vote in the imitation interaction play, weight the two averages by a preset proportion to obtain the final image imitation interaction score, and output that score in the live broadcast room. For example, with three imitation images, the image imitation interaction score may be 80% x (sum of the three imitation scores / 3) + 20% x (audience vote score / audience size).
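The example works out as follows in a short sketch, with the 80/20 split taken from the example above:

def final_interaction_score(per_image_scores, vote_total, audience_size,
                            w_imitate=0.8, w_vote=0.2):
    # 80% of the mean per-image imitation score plus 20% of the
    # per-viewer average vote, following the example above.
    mean_imitation = sum(per_image_scores) / len(per_image_scores)
    mean_vote = vote_total / max(audience_size, 1)
    return w_imitate * mean_imitation + w_vote * mean_vote

# e.g. three images scored 70, 85 and 90, with 1200 votes from 400 viewers:
# 0.8 * 245/3 + 0.2 * 1200/400 = 65.33 + 0.6, approximately 65.93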
In an optional embodiment, if the imitation image data includes a plurality of imitation images, then after one imitation image is displayed, the synthetic picture obtained for that image and its image imitation interaction score may be output in the live broadcast room before the next imitation image is displayed. Alternatively, after all the imitation images have been displayed, the synthetic picture corresponding to each imitation image, the image imitation interaction score corresponding to each image, and the overall imitation interaction score may be output in the live broadcast room.
In an optional embodiment, the server monitors the imitation interaction scores of the anchors who start the synthetic action play within a preset time period and places them on the synthetic action play score board, so that audience users can view an anchor's imitation interaction score through the synthetic action play entrance; the higher the score, the higher the ranking on the board. An anchor may cooperate with different anchors, and each cooperation has its own score. When an imitation interaction score surpasses the current first place across the whole service, the server may issue a service-wide broadcast: a message that someone has taken first place in the action synthesis play is displayed in the live broadcast rooms, together with the room number of the corresponding anchor, so that audience users can jump to that live broadcast room through the room number, increasing the anchor's traffic.
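A small sketch of the score board ranking and the service-wide broadcast check; keying the scores by anchor pair is an assumed representation of the cooperations:

import heapq
from typing import Dict, List, Tuple

AnchorPair = Tuple[str, str]

def top_cooperations(scores: Dict[AnchorPair, float],
                     k: int = 10) -> List[Tuple[AnchorPair, float]]:
    # Higher imitation interaction scores rank higher on the board.
    return heapq.nlargest(k, scores.items(), key=lambda kv: kv[1])

def beats_first_place(new_score: float,
                      scores: Dict[AnchorPair, float]) -> bool:
    # True when the new score surpasses the current service-wide first
    # place, i.e. when the server would issue the service-wide broadcast.
    return not scores or new_score > max(scores.values())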
Please refer to fig. 8, which is a schematic structural diagram of a synthetic action interaction system under continuous microphone live broadcast according to a second embodiment of the present application. The synthetic action interaction system 200 under continuous microphone live broadcast includes a server 201 and a client 202; the clients 202 include an anchor client 2021 and a viewer client 2022.
The server 201 responds to the synthetic action interaction opening instruction, acquires a plurality of anchor identifiers, and establishes a microphone connecting session connection between the anchor clients 2021 corresponding to the anchor identifiers.
A client 202 of the live broadcast room acquires audio and video stream data and outputs the audio and video stream data in the live broadcast room; the live broadcast rooms include the live broadcast rooms established by the anchors corresponding to the anchor identifiers, and the audio and video stream data includes the audio and video stream data corresponding to each anchor identifier.
The client 202 of the live broadcast room responds to the imitation image display instruction, acquires imitation image data, and outputs the imitation image in the live broadcast room according to the imitation image data; the imitation image includes at least two objects to be imitated, whose actions match each other and point to a preset shape; the imitation image is used for instructing the anchors corresponding to the anchor identifiers to cooperate in imitating the actions of the corresponding objects to be imitated.
The server 201 responds to the image imitation completion instruction, obtains the image imitation interaction result, and outputs the image imitation interaction result in the live broadcast room.
The synthetic action interaction system under continuous microphone live broadcast provided in the second embodiment of the present application and the synthetic action interaction method under continuous microphone live broadcast provided in the first embodiment belong to the same concept; details of the implementation process are given in the method embodiments and are not repeated here.
Please refer to fig. 9, which is a schematic structural diagram of a synthetic action interaction device under continuous microphone live broadcast according to a third embodiment of the present application. The apparatus 300 includes:
the microphone connecting module 301, configured for the server to respond to the synthetic action interaction start instruction, acquire a plurality of anchor identifiers, and establish a microphone connecting session connection between the anchor clients corresponding to the anchor identifiers;
the audio and video data output module 302, configured for the client of the live broadcast room to acquire audio and video stream data and output the audio and video stream data in the live broadcast room; the live broadcast rooms include the live broadcast rooms established by the anchors corresponding to the anchor identifiers, and the audio and video stream data includes the audio and video stream data corresponding to each anchor identifier;
the imitation image display module 303, configured for the client of the live broadcast room to respond to the imitation image display instruction, acquire imitation image data, and output the imitation image in the live broadcast room according to the imitation image data; the imitation image includes at least two objects to be imitated, whose actions match each other and point to a preset shape; the imitation image is used for instructing the anchors corresponding to the anchor identifiers to cooperate in imitating the actions of the corresponding objects to be imitated;
and the imitation interaction result output module 304, configured for the server to respond to the image imitation completion instruction, obtain the image imitation interaction result, and output it in the live broadcast room.
It should be noted that, when the synthetic action interaction device under continuous microphone live broadcast according to the third embodiment of the present application executes the synthetic action interaction method under continuous microphone live broadcast, the division into the above functional modules is only used for illustration; in practical applications, the functions may be distributed to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the device of the third embodiment and the method of the first embodiment belong to the same concept; details of the implementation process are given in the method embodiments and are not repeated here.
The embodiment of the synthetic action interaction device under continuous microphone live broadcast can be applied to computer equipment, such as a server, and the device embodiment can be implemented by software, by hardware, or by a combination of the two. Taking the software implementation as an example, as a logical device it is formed by the processor of the equipment in which it is located reading the corresponding computer program instructions from nonvolatile memory into memory for execution. From a hardware perspective, the computer equipment may include a processor, a network interface, a memory, and a nonvolatile memory, connected to each other via a data bus or in other known manners.
Please refer to fig. 10, which is a schematic structural diagram of computer equipment according to a fourth embodiment of the present application. As shown in fig. 10, the computer device 400 may include a processor 401, a memory 402, and a computer program 403 stored in the memory 402 and executable on the processor 401, such as a synthetic action interaction program under continuous microphone live broadcast; the processor 401 implements the steps of the first embodiment when executing the computer program 403.
The processor 401 may include one or more processing cores. The processor 401 is connected to the various parts of the computer device 400 by various interfaces and lines, and executes the functions of the computer device 400 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 402 and calling the data in the memory 402. Optionally, the processor 401 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), or Programmable Logic Array (PLA). The processor 401 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like: the CPU mainly handles the operating system, user interface, application programs, and so on; the GPU renders and draws the content to be displayed on the touch display screen; and the modem handles wireless communication. It is understood that the modem may also be implemented as a separate chip rather than being integrated into the processor 401.
The memory 402 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 402 includes a non-transitory computer-readable medium. The memory 402 may be used to store instructions, programs, code sets, or instruction sets, and may include a program storage area and a data storage area: the program storage area may store instructions for implementing the operating system, instructions for at least one function (such as touch instructions), instructions for implementing the various method embodiments described above, and the like; the data storage area may store the data referred to in the above method embodiments. The memory 402 may optionally be at least one storage device located remotely from the processor 401.
The embodiment of the present application further provides a computer storage medium that may store a plurality of instructions suitable for being loaded by a processor to execute the method steps of the foregoing embodiments; for the specific execution process, refer to the descriptions of those embodiments, which are not repeated here.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the description of each embodiment has its own emphasis, and reference may be made to the related description of other embodiments for parts that are not described or recited in any embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative: the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling, direct coupling, or communication connection may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium and used by a processor to implement the steps of the above-described embodiments of the method. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc.
The present invention is not limited to the above-described embodiments; modifications and variations that do not depart from its spirit and scope are intended to fall within the scope of the claims and their technical equivalents.

Claims (17)

1. A synthetic action interaction method under live broadcast with continuous microphone is characterized by comprising the following steps:
the server responds to the synthetic action interaction starting instruction, acquires a plurality of anchor identifiers, and establishes a microphone connecting session connection between the anchor clients corresponding to the anchor identifiers;
a client in a live broadcast room acquires audio and video stream data and outputs the audio and video stream data in the live broadcast room; the live broadcast room comprises a live broadcast room established by an anchor corresponding to each anchor identifier, and the audio and video stream data comprises audio and video stream data corresponding to each anchor identifier;
the client side in the live broadcast room responds to the imitation image display instruction and obtains imitation image data; outputting the imitation image in the live broadcast room according to the imitation image data; wherein the imitation image comprises at least two objects to be imitated; the actions of the two objects to be imitated are matched with each other and point to a preset shape; and the imitation image is used for instructing the anchors corresponding to the anchor identifiers to cooperate in imitating the actions of the corresponding objects to be imitated;
and the server responds to the image imitation completion instruction, acquires an image imitation interaction result, and outputs the image imitation interaction result in the live broadcast room.
2. The method of claim 1, wherein the method comprises the following steps:
the client side in the live broadcast room responds to the imitation image display instruction and obtains imitation image data; after the step of outputting the imitation image in the live broadcast room according to the imitation image data, the method comprises the steps of:
the server responds to the imitation duration ending instruction, and obtains a frame of video frame image when responding to the imitation duration ending instruction from the mixed video stream in the live broadcast room; and processing the video frame image according to a preset image recognition algorithm and the imitation image to obtain a synthetic imitation score corresponding to the imitation image.
3. The method of claim 1, wherein the method comprises the following steps:
the client side in the live broadcast room responds to the imitation image display instruction and obtains imitation image data; after the step of outputting the imitation image in the live broadcast room according to the imitation image data, the method comprises the steps of:
the server responds to the imitation duration ending instruction and acquires a plurality of video frame images corresponding to a preset time interval in the video stream of each anchor identifier within a preset time period; synthesizing and matching the video frame images corresponding to each anchor identifier according to the corresponding preset time interval to obtain a plurality of matched images; respectively processing the matched images according to a preset image recognition algorithm and the imitation image to obtain imitation scores of the matched images; and obtaining a synthetic imitation score corresponding to the imitation image according to the imitation scores of the matched images.
4. The method of claim 2 or 3, wherein the method comprises the following steps:
the client side in the live broadcast room responds to the imitation image display instruction and obtains imitation image data; after the step of outputting the imitation image in the live broadcast room according to the imitation image data, the method comprises the steps of:
a client in the live broadcast room acquires voting control data; displaying a voting control in a live broadcast room according to the voting control data; whether the voting control takes effect is determined according to virtual resources given by audiences in a live broadcast room;
and the client side of the live broadcast room responds to the triggering operation on the voting control and acquires the voting score of the anchors cooperating to imitate the image corresponding to each anchor identifier.
5. The method of claim 4, wherein the method comprises the following steps:
the image imitation interaction result comprises an image imitation interaction score; the step in which the server responds to the image imitation completion instruction, obtains the image imitation interaction result, and outputs the image imitation interaction result in the live broadcast room comprises:
and the server responds to an image imitation completion instruction, obtains an image imitation interaction score according to the synthetic imitation score corresponding to the imitation image and the voting score of the client side of the live broadcast room, and outputs the image imitation interaction score in the live broadcast room.
6. The method of claim 5, wherein the method comprises the following steps:
if the image imitation interaction score is larger than a preset threshold value, the server issues a synthetic action success instruction; the client side of the live broadcast room outputs anchor reward information in the live broadcast room according to the synthetic action success instruction;
if the image imitation interaction score is smaller than or equal to the preset threshold value, the server issues a synthetic action failure instruction; and the client side of the live broadcast room outputs penalty information in the live broadcast room according to the synthetic action failure instruction.
7. The method of any one of claims 1 to 6, wherein the method comprises the following steps:
the imitation image data comprises a plurality of imitation images, a display sequence of the imitation images, and an imitation duration corresponding to each imitation image; the client side in the live broadcast room responds to the imitation image display instruction and obtains the imitation image data; the step of outputting the imitation image in the live broadcast room according to the imitation image data comprises the following steps:
and responding to an imitation image display instruction by a client in the live broadcast room, and sequentially outputting the corresponding imitation images in the live broadcast room according to the display sequence of the plurality of imitation images and the imitation duration corresponding to each imitation image.
8. The method of any one of claims 1 to 3, wherein the method comprises the following steps:
the step of responding to the imitation image display instruction by the client in the live broadcast room comprises the following steps:
a client in the live broadcast room receives play voting control data, and displays a play voting control in the live broadcast room according to the play voting control data; the play voting control is used for acquiring whether audiences in the live broadcast room support the anchor to perform synthetic action interaction;
The server acquires voting data triggered by the playing voting control by the audience corresponding to the client in the live broadcast room within a preset time; and if the voting number which supports the synthetic action interaction is larger than the voting number which does not support the synthetic action interaction according to the voting data, sending an imitation image display instruction to a client side of the live broadcast room.
9. The method of claim 8, wherein the method comprises the steps of:
if the number of votes supporting the synthetic action interaction is smaller than or equal to the number of votes not supporting the synthetic action interaction according to the voting data, the server disconnects the microphone connecting session, reselects a plurality of anchor clients which send synthetic action opening requests containing synthetic action identifiers, generates a synthetic action opening instruction according to the anchor identifiers corresponding to the anchor clients, and reestablishes the microphone connecting session connection between the anchor clients corresponding to the anchor identifiers.
10. The method of claim 2 or 3, wherein the method comprises the following steps:
the server comprises the following steps before the step of responding to the imitated duration ending instruction:
A client in the live broadcast room receives data of the imitated duration countdown control; displaying an imitated duration countdown control in a live broadcast room according to the imitated duration countdown control data; wherein the mimic duration countdown control is to indicate a remaining duration to mimic the synthetic action.
11. The method of claim 2 or 3, wherein the method comprises the following steps:
after the step of the server responding to the imitated end of duration instruction, the method comprises the following steps:
a client in the live broadcast room receives an image acquisition prompt instruction; and displaying image acquisition prompt information at a client side in the live broadcast room according to the image acquisition prompt instruction.
12. The method of any one of claims 1 to 3, wherein the method comprises the following steps:
the imitation image data includes the imitation image and a display position of the imitation image in the live broadcast room; the client side in the live broadcast room responds to the imitation image display instruction and obtains the imitation image data; the step of displaying the imitation image in the live broadcast room according to the imitation image data includes:
the server responds to an imitation image display instruction, randomly acquires imitation image data from an imitation image database, and sends the imitation image data to a client side in the live broadcast room;
the client side of the live broadcast room receives the imitation image data; and displays the imitation image at the display position of the live broadcast room according to the imitation image data.
13. The method of any one of claims 1 to 3, wherein the method comprises the following steps:
the client side in the live broadcast room responds to the imitation image display instruction and obtains imitation image data; after the step of outputting the imitation image in the live broadcast room according to the imitation image data, the method further includes:
the client side of the live broadcast room receives action synthesis prompt data; displaying action synthesis prompt information in a live broadcast room according to the action synthesis prompt data;
and/or the client side of the live broadcast room receives the data of the action composition play countdown control; displaying the action synthesis play countdown control in a live broadcast room according to the action synthesis play countdown control data; and the countdown control is used for displaying the remaining preparation time of the synthetic action interaction.
14. A synthetic action interaction system under live broadcast with continuous microphone is characterized by comprising: a server and a client;
the server responds to the synthetic action interaction starting instruction, acquires a plurality of anchor identifiers, and establishes a microphone connecting session connection between the anchor clients corresponding to the anchor identifiers;
a client in the live broadcast room acquires audio and video stream data and outputs the audio and video stream data in the live broadcast room; the live broadcast room comprises a live broadcast room established by an anchor corresponding to each anchor identifier, and the audio and video stream data comprises audio and video stream data corresponding to each anchor identifier;
the client side in the live broadcast room responds to the imitation image display instruction and obtains imitation image data; outputting the imitation image in the live broadcast room according to the imitation image data; wherein the imitation image comprises at least two objects to be imitated; the actions of the two objects to be imitated are matched with each other and point to a preset shape; and the imitation image is used for instructing the anchors corresponding to the anchor identifiers to cooperate in imitating the actions of the corresponding objects to be imitated;
and the server responds to the image imitation completion instruction, acquires an image imitation interaction result, and outputs the image imitation interaction result in the live broadcast room.
15. A synthetic action interaction device under live broadcast with continuous microphone is characterized by comprising:
the microphone connecting module, used for the server to respond to the synthetic action interaction starting instruction, acquire a plurality of anchor identifiers, and establish a microphone connecting session connection between the anchor clients corresponding to the anchor identifiers;
the audio and video data output module, used for a client in the live broadcast room to acquire audio and video stream data and output the audio and video stream data in the live broadcast room; the live broadcast room comprises a live broadcast room established by an anchor corresponding to each anchor identifier, and the audio and video stream data comprises audio and video stream data corresponding to each anchor identifier;
the imitation image display module, used for the client in the live broadcast room to respond to an imitation image display instruction and acquire imitation image data, and to output the imitation image in the live broadcast room according to the imitation image data; wherein the imitation image comprises at least two objects to be imitated; the actions of the two objects to be imitated are matched with each other and point to a preset shape; and the imitation image is used for instructing the anchors corresponding to the anchor identifiers to cooperate in imitating the actions of the corresponding objects to be imitated;
and the imitation interaction result output module, used for the server to respond to the image imitation completion instruction, acquire an image imitation interaction result, and output the image imitation interaction result in the live broadcast room.
16. A computer device comprising a processor and a memory; characterized in that the memory stores a computer program adapted to be loaded by the processor to execute the synthetic action interaction method under live broadcast with continuous microphone according to any one of claims 1 to 13.
17. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the synthetic action interaction method under live broadcast with continuous microphone according to any one of claims 1 to 13.
CN202210339603.9A 2022-04-01 Synthetic action interaction method, system, device, equipment and medium under live broadcast with continuous microphone Active CN114760498B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210339603.9A CN114760498B (en) 2022-04-01 Synthetic action interaction method, system, device, equipment and medium under live broadcast with continuous microphone

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210339603.9A CN114760498B (en) 2022-04-01 Synthetic action interaction method, system, device, equipment and medium under live broadcast with continuous microphone

Publications (2)

Publication Number Publication Date
CN114760498A true CN114760498A (en) 2022-07-15
CN114760498B CN114760498B (en) 2024-07-26


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112351348A (en) * 2020-11-09 2021-02-09 北京达佳互联信息技术有限公司 Live broadcast interaction method and device, electronic equipment and storage medium
WO2021032092A1 (en) * 2019-08-18 2021-02-25 聚好看科技股份有限公司 Display device
CN113676747A (en) * 2021-09-27 2021-11-19 广州方硅信息技术有限公司 Live wheat-connecting fighting interaction method, system and device and computer equipment
CN113873280A (en) * 2021-09-27 2021-12-31 广州方硅信息技术有限公司 Live wheat-connecting fighting interaction method, system and device and computer equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021032092A1 (en) * 2019-08-18 2021-02-25 聚好看科技股份有限公司 Display device
CN112351348A (en) * 2020-11-09 2021-02-09 北京达佳互联信息技术有限公司 Live broadcast interaction method and device, electronic equipment and storage medium
CN113676747A (en) * 2021-09-27 2021-11-19 广州方硅信息技术有限公司 Live wheat-connecting fighting interaction method, system and device and computer equipment
CN113873280A (en) * 2021-09-27 2021-12-31 广州方硅信息技术有限公司 Live wheat-connecting fighting interaction method, system and device and computer equipment

Similar Documents

Publication Publication Date Title
CN112714330B (en) Gift presenting method and device based on live broadcast with wheat and electronic equipment
CN104468623B (en) It is a kind of based on online live information displaying method, relevant apparatus and system
CN113453029B (en) Live broadcast interaction method, server and storage medium
CN111836066B (en) Team interaction method, device, equipment and storage medium based on live broadcast
CN113766340B (en) Dance music interaction method, system and device under live connected wheat broadcast and computer equipment
CN113676747B (en) Continuous wheat live broadcast fight interaction method, system and device and computer equipment
CN114007094B (en) Voice-to-microphone interaction method, system and medium of live broadcasting room and computer equipment
CN114501104B (en) Interaction method, device, equipment, storage medium and product based on live video
CN113032542B (en) Live broadcast data processing method, device, equipment and readable storage medium
CN114025186A (en) Virtual voice interaction method and device in live broadcast room and computer equipment
CN110366023B (en) Live broadcast interaction method, device, medium and electronic equipment
CN114666672B (en) Live fight interaction method and system initiated by audience and computer equipment
CN113824976A (en) Method and device for displaying approach show in live broadcast room and computer equipment
CN114666671B (en) Live broadcast praise interaction method, device, equipment and storage medium
CN113038228A (en) Virtual gift transmission and request method, device, equipment and medium thereof
CN113873280B (en) Continuous wheat live broadcast fight interaction method, system and device and computer equipment
CN110336957B (en) Video production method, device, medium and electronic equipment
CN114007095B (en) Voice-to-microphone interaction method, system and medium of live broadcasting room and computer equipment
CN115314729B (en) Team interaction live broadcast method and device, computer equipment and storage medium
CN115134621B (en) Live combat interaction method, system, device, equipment and medium
CN113438491B (en) Live broadcast interaction method and device, server and storage medium
CN114760520A (en) Live small and medium video shooting interaction method, device, equipment and storage medium
CN114760498A (en) Method, system, medium, and device for synthesizing action interaction under live broadcast with continuous microphone
CN115314727A (en) Live broadcast interaction method and device based on virtual object and electronic equipment
CN114760498B (en) 2024-07-26 Synthetic action interaction method, system, device, equipment and medium under live broadcast with continuous microphone

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant