CN110809172A - Interactive special effect display method and device and electronic equipment


Info

Publication number
CN110809172A
Authority
CN
China
Prior art keywords
action
video data
special effect
user terminal
display
Prior art date
2019-11-19
Legal status
Pending
Application number
CN201911134270.0A
Other languages
Chinese (zh)
Inventor
廖卓杰 (Liao Zhuojie)
赖立高 (Lai Ligao)
杨剑飞 (Yang Jianfei)
麦志英 (Mai Zhiying)
范赐丰 (Fan Cifeng)
谢孟辉 (Xie Menghui)
Current Assignee
Guangzhou Huya Technology Co Ltd
Original Assignee
Guangzhou Huya Technology Co Ltd
Priority date
2019-11-19
Filing date
2019-11-19
Publication date
2020-02-18
Application filed by Guangzhou Huya Technology Co Ltd on 2019-11-19
Priority to CN201911134270.0A
Publication of CN110809172A on 2020-02-18
Legal status: Pending

Classifications

    • H04N 21/2187: Live feed (selective content distribution; servers for the distribution of content; source of audio or video content)
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06V 40/174: Facial expression recognition
    • G06V 40/28: Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • H04N 21/42204: User interfaces specially adapted for controlling a client device through a remote control device; remote control devices therefor
    • H04N 21/478: Supplemental services, e.g. displaying phone caller identification, shopping application

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application provides an interactive special effect display method and apparatus, and an electronic device. The method includes: acquiring video data captured by a first user terminal; recognizing a character action in the video data to obtain an action text identifier corresponding to the character action; and sending the video data and the action text identifier to a second user terminal, so that the second user terminal, according to the action text identifier, invokes a display component to display an interactive special effect corresponding to the character action when playing the video data. Because the action text identifier of the character action is recognized from the video data captured by the first user terminal and is sent, together with the video data, to the second user terminal, the second user terminal can invoke a display component, according to the action text identifier, to present diversified interactive special effects.

Description

Interactive special effect display method and device and electronic equipment
Technical Field
The present application relates to the technical field of information interaction, and in particular to an interactive special effect display method and apparatus, and an electronic device.
Background
During a live broadcast, the anchor may make various gestures to interact with the audience, such as blowing a kiss, making a heart shape, or showing a palm. To improve the interactive experience, some interaction schemes recognize the anchor's gestures or body movements, superimpose corresponding interactive special effects onto the live video according to those gestures or movements, and then send the video to the client, improving the audience's viewing experience.
However, in these schemes the way the interactive special effect is superimposed is rather limited, and the viewer's terminal can only passively display the video to which the effect has already been added.
Disclosure of Invention
In order to overcome at least one of the deficiencies in the prior art, the present application aims to provide an interactive special effect display method, which comprises:
acquiring video data captured by a first user terminal;
identifying the character action in the video data to obtain an action text identifier corresponding to the character action;
and sending the video data and the action text identifier to a second user terminal, so that the second user terminal, according to the action text identifier, invokes a display component to display an interactive special effect corresponding to the character action when playing the video data.
In one possible embodiment of the present application, the step of recognizing the character action in the video data and obtaining an action text identifier corresponding to the character action includes:
recognizing the character action in the video data, and obtaining an action text identifier corresponding to the character action and a time identifier of when the character action occurs;
and the step of sending the video data and the action text identifier to a second user terminal includes:
sending the video data, the action text identifier and the time identifier to the second user terminal, so that when playing the video data, the second user terminal displays the interactive special effect corresponding to the character action at the moment corresponding to the time identifier.
In a possible embodiment of the present application, the step of sending the video data and the action text identifier to the second user terminal includes:
acquiring a target audience identifier for which the interactive special effect needs to be displayed, and sending the video data and the action text identifier to the second user terminal corresponding to the target audience identifier.
In one possible embodiment of the present application, the step of acquiring a target audience identifier for which the interactive special effect is to be displayed includes:
acquiring, from the first user terminal, a user-specified target audience identifier for which the interactive special effect is to be displayed.
In one possible embodiment of the present application, the method is applied to a server; the method further comprises the following steps:
and sending the action text identifier to the first user terminal, so that the first user terminal displays an interaction special effect corresponding to the character action according to the action text identifier.
In a possible embodiment of the present application, the step of acquiring video data captured by a first user terminal includes:
acquiring, in real time, live video data captured by the first user terminal.
Another object of the present application is to provide an interactive special effect display method, including:
receiving video data and an action text identifier, wherein the action text identifier is obtained by recognizing a character action in the video data;
and invoking, according to the action text identifier, a display component to display an interactive special effect corresponding to the character action when the video data is played.
In one possible embodiment of the present application, the step of receiving video data and an action text identifier includes:
receiving the video data, the action text identifier and a time identifier corresponding to the action text identifier;
and the step of invoking a display component to display an interactive special effect corresponding to the character action when the video data is played includes:
invoking a display component to display the interactive special effect corresponding to the character action at the moment corresponding to the time identifier when the video data is played.
In one possible embodiment of the present application, the step of receiving video data and an action text identifier includes:
receiving the video data, the action text identifier and a display permission identifier corresponding to the action text identifier;
and before the step of invoking a display component to display an interactive special effect corresponding to the character action, the method further includes:
determining, according to the display permission identifier, whether the user has permission to display the interactive special effect;
and if the user has permission to display the interactive special effect, executing the step of invoking, according to the action text identifier, a display component to display the interactive special effect corresponding to the character action when the video data is played.
In a possible embodiment of the present application, the step of invoking a display component to display an interactive special effect corresponding to the character action when the video data is played includes:
invoking a local display component to display the interactive special effect corresponding to the character action when the video data is played; or
sending a request to a component server when the video data is played, so as to invoke a display component provided by the component server to display the interactive special effect corresponding to the character action.
In one possible embodiment of the present application, the method further comprises:
receiving a component acquisition notification;
and acquiring the display component from a server according to the component acquisition notification.
Another object of the present application is to provide an interactive special effect display apparatus, including:
the video acquisition module is used for acquiring video data captured by a first user terminal;
the action recognition module is used for recognizing the character action in the video data and obtaining an action text identifier corresponding to the character action;
and the data sending module is used for sending the video data and the action text identifier to a second user terminal, so that the second user terminal, according to the action text identifier, invokes a display component to display an interactive special effect corresponding to the character action when playing the video data.
Another object of the present application is to provide an interactive special effect display apparatus, including:
the data receiving module is used for receiving video data and an action text identifier, wherein the action text identifier is obtained by recognizing a character action in the video data;
and the special effect display module is used for invoking, according to the action text identifier, a display component to display the interactive special effect corresponding to the character action when the video data is played.
Another object of the present application is to provide an electronic device, which includes a machine-readable storage medium and a processor, wherein the machine-readable storage medium stores machine-executable instructions, and the machine-executable instructions, when executed by the processor, implement the interactive special effect display method provided by the present application.
Another object of the present application is to provide a machine-readable storage medium storing machine-executable instructions, which when executed by a processor, implement the interactive special effects display method provided by the present application.
Compared with the prior art, the method has the following beneficial effects:
according to the interactive special effect display method, the interactive special effect display device and the electronic equipment, the action text identification of the character action is recognized from the video data collected by the first user terminal, and the action text identification and the video data are sent to the second user terminal, so that the second user terminal can call the display component to carry out diversified interactive special effect display according to the action text identification.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; for those skilled in the art, other related drawings can be derived from these drawings without inventive effort.
Fig. 1 is a schematic view of an application scenario of a scheme provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of an interactive special effect display method according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating interactive effects in the prior art;
FIG. 4 is a schematic diagram illustrating an interactive special effect display method according to an embodiment of the present application;
fig. 5 is a second flowchart illustrating an interactive special effect displaying method according to an embodiment of the present application;
fig. 6 is a schematic functional block diagram of an interactive special effect display apparatus according to an embodiment of the present application;
fig. 7 is a second functional block diagram of an interactive special effect display apparatus according to an embodiment of the present application;
fig. 8 is a schematic view of an electronic device according to an embodiment of the present application.
Reference numerals: 100 - electronic device; 120 - machine-readable storage medium; 130 - processor; 111 - first user terminal; 112 - second user terminal; 113 - server; 610 (710) - interactive special effect display apparatus; 611 - video acquisition module; 612 - action recognition module; 613 - data sending module; 711 - data receiving module; 712 - special effect display module.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. The components of the embodiments of the present application, as generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, as presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but merely represents selected embodiments of the application. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present application, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
Referring to fig. 1, fig. 1 is a schematic view of an exemplary application scenario of the solution provided in this embodiment. The server 113 may communicate with a plurality of user terminals which, according to their function in use, may be divided into first user terminals 111 that generally provide video data and second user terminals 112 that generally receive video data. Taking an Internet live-streaming scene as an example, the first user terminal 111 may be the anchor's terminal, used to shoot a live video and upload it to the server 113; the second user terminal 112 may be a viewer's terminal, used to obtain the live video from the server 113 for viewing.
The first user terminal 111 and the second user terminal 112 are distinguished by their function in use; the same user terminal device may serve as either the first user terminal 111 or the second user terminal 112 depending on how it is used.
In this embodiment, the user terminal may be, but is not limited to, a smartphone, a personal digital assistant, a tablet computer, a personal computer, a notebook computer, a virtual reality terminal device, an augmented reality terminal device, and the like. The first user terminal 111 typically has a video capture function or obtains video data from other devices that have one; the second user terminal 112 typically has a video display function or outputs video data to other devices that have one.
In this embodiment, the server 113 may be a single physical server or a cluster of multiple servers. The server 113 may be configured to transmit the video data obtained from the first user terminal 111 to a plurality of second user terminals 112 for display, and may also transmit other information between the user terminals.
In this embodiment, the user terminal may implement data interaction with the server 113 in different manners. In one approach, the user terminal may be installed with a live Application (APP) provided by the server 113, which typically may run independently of other applications in the user terminal.
In another mode, the user terminal may perform data interaction with the server 113 through a browser, for example, the user may input information such as an account number and a password on the browser, log in to the server 113, and thereby use a live service provided by the server 113. In yet another approach, the user terminal may be installed with a third party application and interact with the server 113 through a program running on the third party application.
Referring to fig. 2, fig. 2 is a schematic diagram of an interactive special effect display method provided in this embodiment, and the following explains the steps of the method.
Step S110, video data collected by the first user terminal 111 is obtained.
In this embodiment, the first user terminal 111 may capture video data, which typically includes images of at least part of a user's body. For example, the video data may be live video of an anchor captured in real time.
Step S120, recognizing the character action in the video data, and obtaining an action text identifier corresponding to the character action.
In this embodiment, the first user terminal 111 may capture the video data and locally perform the recognition of character actions in the video data; alternatively, the server 113 may obtain the video data captured by the first user terminal 111 and perform the recognition of character actions in it.
In this embodiment, a character action may be, but is not limited to, a body movement, a gesture, a facial expression, and the like.
Optionally, some action templates to which action text identifiers have been added may be stored in advance in the first user terminal 111 or the server 113, and the first user terminal 111 or the server 113 may match the character actions in the video data against the pre-stored action templates through image recognition, machine learning and other techniques, so as to determine the action text identifiers corresponding to the character actions in the video data.
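By way of illustration only, the following TypeScript sketch shows one way such template matching could produce an action text identifier. The template list, the feature vectors, the cosine-similarity comparison and the 0.9 threshold are all assumptions made for this example; the patent does not prescribe a concrete matching algorithm.

```typescript
// Hypothetical action template: an action text identifier plus a
// pre-extracted feature vector for the template.
interface ActionTemplate {
  textId: string;      // action text identifier, e.g. "kiss"
  features: number[];  // illustrative feature vector
}

const templates: ActionTemplate[] = [
  { textId: "kiss",  features: [0.9, 0.1, 0.0] },
  { textId: "heart", features: [0.1, 0.8, 0.3] },
];

// Cosine similarity between two feature vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Returns the action text identifier of the best-matching template,
// or null when no template is similar enough (assumed 0.9 threshold).
function identifyAction(frameFeatures: number[]): string | null {
  let best: { id: string; score: number } | null = null;
  for (const t of templates) {
    const score = cosine(frameFeatures, t.features);
    if (!best || score > best.score) best = { id: t.textId, score };
  }
  return best && best.score > 0.9 ? best.id : null;
}
```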
Step S130, sending the video data and the action text identifier to a second user terminal 112, so that the second user terminal 112, according to the action text identifier, invokes a display component to display an interactive special effect corresponding to the character action when playing the video data.
The second user terminal 112 obtains the action text identifier while receiving the video data, and then, according to the action text identifier, invokes a display component to display the corresponding interactive special effect while playing the video data.
For example, if the video data captured in step S110 includes the anchor blowing a kiss, a corresponding action text identifier (e.g., the text "kiss") may be recognized in step S120. Then, in step S130, the action text identifier is sent to the second user terminal 112, which, according to the action text identifier, may display an interactive special effect of a love-heart pattern on the playing interface while playing the video.
The above example of the action text identifier is merely illustrative of the scheme adopted in this embodiment; the action text identifier may be any identifier capable of uniquely identifying an action, such as the English name of the character action, its pinyin, a number, or a combination of letters and numbers.
In this embodiment, the second user terminal 112 may, according to the action text identifier, invoke a display component pre-configured locally on the second user terminal 112 to display the corresponding interactive special effect. The display component may be one built into the operating system of the second user terminal 112 or into an APP; it may also be a component that the second user terminal 112 downloads from the server 113 or from another component-providing server according to a notification message obtained from the first user terminal 111 or the server 113.
The second user terminal 112 may also send a request to a component server so as to invoke a display component provided by the component server to display the corresponding interactive special effect. The component server may be the server 113 shown in fig. 1, or another third-party service that provides display components.
It should be noted that, if the method shown in fig. 2 is executed by the first user terminal 111, then in step S130 the first user terminal 111 may send the action text identifier to the server 113, and the server 113 then sends it to one or more second user terminals 112. In this process, the data format of the action text identifier may change: for example, when the first user terminal 111 sends it to the server 113, the action text identifier may be plain text corresponding to the character action, whereas the server 113 may send it to the second user terminal 112 as a program execution instruction or an interface call instruction corresponding to the character action.
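As a purely hypothetical illustration of that format change, the two payloads below show the same action text identifier first as plain text (first user terminal to server) and then rewritten as an interface call instruction (server to second user terminal); all field and component names are invented for the example.

```typescript
// Uplink: first user terminal -> server. The action text identifier is
// carried as plain text corresponding to the character action.
const uplinkMessage = {
  roomId: "example-room",  // hypothetical room identifier
  actionTextId: "kiss",    // plain-text action text identifier
};

// Downlink: server -> second user terminal. The same identifier has been
// rewritten as an interface call instruction for the display component.
const downlinkMessage = {
  roomId: "example-room",
  invoke: {
    component: "InteractiveEffect", // hypothetical display component name
    method: "show",
    args: { actionTextId: "kiss" },
  },
};

console.log(uplinkMessage, downlinkMessage);
```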
Optionally, in this embodiment, the display effect of the interactive special effect may be determined by the display component configured on the second user terminal 112. For example, when the display component is invoked with the action text identifier as a parameter, two second user terminals 112 configured with different display components may show different visual effects.
In this embodiment, since the interactive special effect is realized by a display component running on the second user terminal 112, richer interaction with the user becomes possible on the second user terminal 112, such as accepting a user click, invoking other local plug-ins of the second user terminal 112, or sending information.
For example, referring to fig. 3, in the prior art, when the anchor performs a kissing action, the anchor terminal or the server obtains a video image containing the kissing action and recognizes the action in the video data. The anchor terminal or the server then superimposes an interactive special effect of a love-heart pattern onto the original video image and sends the video image data, with the effect already added, to the viewer terminal for display.
That is, the viewer terminal receives only video image data, and the viewer cannot perform any further interaction based on it.
Referring to fig. 4, in this embodiment, when the anchor performs a kissing action, the first user terminal 111 or the server 113 obtains a video image containing the kissing action, recognizes the action in the video data, and obtains the action text identifier "kiss". The first user terminal 111 or the server 113 then sends the video data to the second user terminal 112 together with the recognized action text identifier "kiss". On the basis of displaying the video image, the second user terminal 112 invokes the display component according to the action text identifier "kiss" to display an interactive special effect of a love-heart pattern.
Since the display component is invoked locally by the second user terminal 112, the love-heart pattern it displays can be clickable by the user, and richer actions can be performed after a click; for example, further display effects may be generated, or response information (e.g., a thank-you message to the anchor) may be sent to the first user terminal 111.
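A minimal browser-side sketch of such a locally rendered, clickable effect follows; the DOM rendering, the message shape and the sendToServer() helper are assumptions for the example, since the patent leaves the component's implementation open.

```typescript
// Placeholder transport: a real client would reuse its existing connection
// to the server (e.g. a WebSocket) instead of logging.
function sendToServer(msg: object): void {
  console.log("would send:", JSON.stringify(msg));
}

// Renders a clickable love-heart effect for the given action text identifier.
// Because the component runs locally on the second user terminal, the click
// can trigger richer behavior than a baked-in video overlay could.
function showHeartEffect(actionTextId: string): void {
  const heart = document.createElement("div");
  heart.textContent = "❤";
  heart.style.cssText =
    "position:fixed;top:40%;left:45%;font-size:48px;cursor:pointer;";
  heart.addEventListener("click", () => {
    // e.g. send response information back toward the first user terminal.
    sendToServer({ type: "effect-clicked", actionTextId, reply: "thanks!" });
    heart.remove();
  });
  document.body.appendChild(heart);
}

showHeartEffect("kiss");
```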
Based on the above design, compared with the prior art, the scheme provided in this embodiment can display more diversified interactive special effects on the second user terminal 112 according to the anchor's actions, improving the interactivity between the audience and the anchor. Moreover, the action text identifier lends itself to further logical processing, which makes it easy to add more functions in later expansions.
Optionally, since in this embodiment the video data and the action text identifier are two relatively independent pieces of data, the transmission of the video data may stall or be delayed.
To make the moment at which the second user terminal 112 displays the interactive special effect coincide with the moment at which the character action occurs in the video data, in step S120 of this embodiment, when the character action in the video data is recognized, both the action text identifier corresponding to the character action and a time identifier of when the character action occurs may be obtained.
Then, in step S130, the video data, the action text identifier and the time identifier are sent together to the second user terminal 112, so that when the second user terminal 112 plays the video data, it displays the interactive special effect corresponding to the character action at the moment corresponding to the time identifier.
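The sketch below illustrates this synchronization on the playback side, assuming an HTML5 video element as the playback surface: the effect is deferred until the playback position reaches the moment named by the time identifier. The invokeDisplayComponent() stub stands in for whatever display component the terminal actually uses.

```typescript
// Stub for the terminal's display component; assumed for this example.
declare function invokeDisplayComponent(actionTextId: string): void;

// Defers the interactive special effect until the video's playback position
// reaches the moment carried by the time identifier, so the effect lines up
// with the character action even if the video stream itself arrives late.
function scheduleEffect(
  video: HTMLVideoElement,
  actionTextId: string,
  timeIdSeconds: number, // time identifier: when the action occurs in the video
): void {
  const onTimeUpdate = () => {
    if (video.currentTime >= timeIdSeconds) {
      video.removeEventListener("timeupdate", onTimeUpdate);
      invokeDisplayComponent(actionTextId);
    }
  };
  video.addEventListener("timeupdate", onTimeUpdate);
}
```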
Optionally, in one example of this embodiment, before step S130 is executed, a target audience identifier for which the interactive special effect needs to be displayed may be obtained; then, in step S130, the video data and the action text identifier are sent to the second user terminal 112 corresponding to that target audience identifier.
In some scenarios, it may be arranged that the interactive special effect is shown only to a certain subset of the viewers according to their ratings, permissions, payment status and so on. The identifiers of those viewers may be obtained first, and the action text identifier is then sent only to their second user terminals 112. In this way, only the second user terminals 112 that receive the action text identifier will display the interactive special effect.
For example, if the interactive special effect display is offered only to viewers who have paid a certain fee, the identifiers of the paying viewers may be obtained as the target audience identifiers, and in step S130 the action text identifier is sent only to the second user terminals 112 of the paying viewers. The second user terminals 112 of paying viewers can then display the interactive special effect, while those of non-paying viewers cannot.
As another example, the viewers who receive the interactive special effect may be specified by the anchor. For instance, a user-specified target audience identifier for which the interactive special effect is to be displayed may be obtained from the first user terminal 111; then, in step S130, the action text identifier is sent, according to the target audience identifier, only to the second user terminals 112 of the viewers specified by the anchor.
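A server-side sketch of this filtering is given below, under the assumption that the server keeps a map from viewer identifiers to live connections; the map, the send callback and the message shape are illustrative, not the patent's design.

```typescript
// Hypothetical registry: viewer identifier -> callback that delivers a
// message to that viewer's second user terminal.
const connections = new Map<string, (msg: object) => void>();

// Sends the action text identifier only to the terminals of the target
// audience; terminals that never receive it simply play the plain video.
function sendEffectToTargets(
  actionTextId: string,
  targetViewerIds: Set<string>,
): void {
  for (const [viewerId, send] of connections) {
    if (targetViewerIds.has(viewerId)) {
      send({ actionTextId });
    }
  }
}

// Usage: only the paying viewers "alice" and "bob" would see the effect.
sendEffectToTargets("kiss", new Set(["alice", "bob"]));
```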
Alternatively, in another example, in step S130 the video data, the action text identifier and a display permission identifier corresponding to the action text identifier may be sent to the second user terminal 112.
The second user terminal 112 then compares the display permission identifier with the permissions of the user logged in on it, to determine whether that user has permission to display the interactive special effect.
If the user has permission to display the interactive special effect, the second user terminal 112, according to the action text identifier, invokes a display component to display the interactive special effect corresponding to the character action when the video data is played.
If the user does not have permission to display the interactive special effect, the second user terminal 112 may simply play the video data.
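A client-side sketch of this permission check follows. It assumes the display permission identifier takes the form of a minimum viewer level; that encoding, the field names and the stubbed display component are all assumptions made for the example.

```typescript
// Assumed message shape for this example.
interface EffectMessage {
  actionTextId: string;
  requiredLevel: number; // display permission identifier, assumed numeric
}

// Stub for the local display component.
declare function invokeDisplayComponent(actionTextId: string): void;

// Displays the effect only when the logged-in user's permissions satisfy the
// display permission identifier; otherwise the terminal just plays the video.
function onEffectMessage(msg: EffectMessage, userLevel: number): void {
  if (userLevel >= msg.requiredLevel) {
    invokeDisplayComponent(msg.actionTextId);
  }
}
```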
Optionally, in this embodiment, when the method shown in fig. 2 is applied to the first user terminal 111, the video data currently being captured may also be displayed on the first user terminal 111, so that the anchor can monitor what is currently being shot. In this case, after obtaining the action text identifier in step S120, the first user terminal 111 may further display the interactive special effect corresponding to the character action according to the action text identifier.
Optionally, when the method shown in fig. 2 is applied to the server 113, the server 113 may, after obtaining the action text identifier in step S120, also send the action text identifier to the first user terminal 111, so that the first user terminal 111 displays the interactive special effect corresponding to the character action according to the action text identifier.
Optionally, when the method shown in fig. 2 is applied to the first user terminal 111, the first user terminal may be configured with live broadcast software, a gesture recognition program and a front-end application program; the server 113 may be configured with a service background; and the second user terminal 112 may be configured with viewer-side live broadcast software.
After the anchor starts a live broadcast on the first user terminal 111, the front-end application program continuously listens for gesture recognition result messages from the live broadcast software. When the anchor makes a gesture during the live broadcast, the live broadcast software captures video frames through the first user terminal 111 and pushes them to the gesture recognition program.
The gesture recognition program may analyze the video frame content with an image algorithm, recognize the gesture content, and push the recognition result to the live broadcast software as text.
The live broadcast software may pass the gesture recognition result to a front-end application program (e.g., a JavaScript program) running inside it; the front-end application program performs some data processing and sends the result to the service background for handling.
The service background may process the business logic according to the requirements of the business scene and push related interactive service messages to the front-end application program of the first user terminal 111.
After receiving the interactive service message, the front-end application program calls, through a matching program pre-configured on the second user terminal 112, a local display component of the second user terminal 112, so that the display component API of the second user terminal 112 presents the user interaction interface of the interactive feature.
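The following compressed sketch traces one gesture through the pipeline described in the preceding paragraphs; every function name and message shape is assumed, and the recognition and network hops are reduced to stubs.

```typescript
// Stub: the gesture recognition program's image algorithm, reduced to a
// placeholder that may return an action text identifier such as "kiss".
declare function recognizeGesture(frame: ImageData): Promise<string | null>;

// Stub: front-end application program -> service background.
declare function postToServiceBackground(msg: object): Promise<void>;

// One frame's journey: the live broadcast software captures the frame, the
// gesture recognition program turns it into text, and the front-end
// application program processes and forwards the result to the service
// background, which in turn pushes interactive service messages onward.
async function onCapturedFrame(frame: ImageData): Promise<void> {
  const gestureText = await recognizeGesture(frame);
  if (gestureText === null) return;
  await postToServiceBackground({
    actionTextId: gestureText,
    roomId: "example-room", // hypothetical room identifier
  });
}
```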
Referring to fig. 5, the present embodiment further provides an interactive special effect display method applied to the second user terminal 112 shown in fig. 1, and the method may include the following steps.
Step S210, receiving video data and an action text identifier, where the action text identifier is obtained by recognizing a character action in the video data.
Step S220, according to the action text identifier, invoking a display component to display an interactive special effect corresponding to the character action when the video data is played.
For the specific interaction process between the second user terminal 112 and the first user terminal 111 or the server 113 in steps S210 and S220, please refer to steps S120 and S130 shown in fig. 2; details are not repeated here.
Referring to fig. 6, this embodiment further provides an interactive special effect display apparatus 610 applied to the first user terminal 111 or the server 113 shown in fig. 1. Divided by function, the interactive special effect display apparatus 610 may include a video obtaining module 611, an action recognition module 612 and a data sending module 613.
The video obtaining module 611 is configured to obtain video data collected by the first user terminal 111.
In this embodiment, the video obtaining module 611 may be configured to execute the step S110 shown in fig. 2, and for the detailed description of the video obtaining module 611, reference may be made to the description of the step S110.
The action recognition module 612 is configured to recognize the character action in the video data and obtain an action text identifier corresponding to the character action.
In this embodiment, the action recognition module 612 may be configured to execute the step S120 shown in fig. 2, and the detailed description about the action recognition module 612 may refer to the description about the step S120.
The data sending module 613 is configured to send the video data and the action text identifier to a second user terminal 112, so that the second user terminal 112, according to the action text identifier, invokes a display component to display an interactive special effect corresponding to the character action when playing the video data.
In this embodiment, the data sending module 613 may be configured to execute step S130 shown in fig. 2, and the detailed description about the data sending module 613 may refer to the description about the step S130.
Referring to fig. 7, this embodiment further provides an interactive special effect display apparatus 710 applied to the second user terminal 112 shown in fig. 1. Divided by function, the interactive special effect display apparatus 710 may include a data receiving module 711 and a special effect display module 712.
The data receiving module 711 is configured to receive video data and an action text identifier, where the action text identifier is obtained by identifying a character action in the video data.
In this embodiment, the data receiving module 711 may be configured to execute step S210 shown in fig. 5, and for the detailed description of the data receiving module 711, reference may be made to the description of step S210.
The special effect display module 712 is configured to invoke, according to the action text identifier, a display component to display the interactive special effect corresponding to the character action when the video data is played.
In this embodiment, the special effect displaying module 712 can be used to execute the step S220 shown in fig. 5, and the detailed description about the special effect displaying module 712 can refer to the description of the step S220.
Referring to fig. 8, fig. 8 is a schematic diagram of a hardware structure of an electronic device 100 according to the present embodiment. The electronic device 100 may be one of the first user terminal 111, the server 113, or the second user terminal 112 shown in fig. 1. The electronic device 100 may include a processor 130 and a machine-readable storage medium 120. The processor 130 and the machine-readable storage medium 120 may communicate via a system bus. Also, the machine-readable storage medium 120 stores machine-executable instructions, and the processor 130 may perform the above-described interactive special effects display method by reading and executing the machine-executable instructions corresponding to the interactive special effects display logic in the machine-readable storage medium 120.
A machine-readable storage medium as referred to herein may be any electronic, magnetic, optical or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disk or a DVD), a similar storage medium, or a combination thereof.
In summary, with the interactive special effect display method and apparatus and the electronic device provided by the application, the action text identifier of the character action is recognized from the video data captured by the first user terminal and sent, together with the video data, to the second user terminal, so that the second user terminal can invoke a local display component, according to the action text identifier, to present diversified interactive special effects.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only for various embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present application, and all such changes or substitutions are included in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. An interactive special effect display method, the method comprising:
acquiring video data captured by a first user terminal;
recognizing a character action in the video data to obtain an action text identifier corresponding to the character action;
and sending the video data and the action text identifier to a second user terminal, so that the second user terminal, according to the action text identifier, invokes a display component to display an interactive special effect corresponding to the character action when playing the video data.
2. The method of claim 1, wherein
the step of recognizing the character action in the video data and obtaining an action text identifier corresponding to the character action comprises:
recognizing the character action in the video data, and obtaining an action text identifier corresponding to the character action and a time identifier of when the character action occurs;
and the step of sending the video data and the action text identifier to a second user terminal comprises:
sending the video data, the action text identifier and the time identifier to the second user terminal, so that when playing the video data, the second user terminal displays the interactive special effect corresponding to the character action at the moment corresponding to the time identifier.
3. The method of claim 1, further comprising:
acquiring a target audience identifier for which the interactive special effect needs to be displayed;
wherein the step of sending the video data and the action text identifier to a second user terminal comprises:
sending the video data and the action text identifier to the second user terminal corresponding to the target audience identifier.
4. The method of claim 3, wherein the step of acquiring a target audience identifier for which the interactive special effect is to be displayed comprises:
acquiring, from the first user terminal, a user-specified target audience identifier for which the interactive special effect is to be displayed.
5. The method of claim 1, wherein the method is applied to a server, the method further comprising:
sending the action text identifier to the first user terminal, so that the first user terminal displays an interactive special effect corresponding to the character action according to the action text identifier.
6. The method of claim 1, wherein the step of acquiring video data captured by the first user terminal comprises:
acquiring, in real time, live video data captured by the first user terminal.
7. An interactive special effect display method, the method comprising:
receiving video data and an action text identifier, wherein the action text identifier is obtained by recognizing a character action in the video data;
and invoking, according to the action text identifier, a display component to display an interactive special effect corresponding to the character action when the video data is played.
8. The method of claim 7, wherein
the step of receiving video data and an action text identifier comprises:
receiving the video data, the action text identifier and a time identifier corresponding to the action text identifier;
and the step of invoking a display component to display an interactive special effect corresponding to the character action when the video data is played comprises:
invoking a display component to display the interactive special effect corresponding to the character action at the moment corresponding to the time identifier when the video data is played.
9. The method of claim 7, wherein
the step of receiving video data and an action text identifier comprises:
receiving the video data, the action text identifier and a display permission identifier corresponding to the action text identifier;
and before the step of invoking a display component to display an interactive special effect corresponding to the character action, the method further comprises:
determining, according to the display permission identifier, whether the user has permission to display the interactive special effect;
and if the user has permission to display the interactive special effect, executing the step of invoking, according to the action text identifier, a display component to display the interactive special effect corresponding to the character action when the video data is played.
10. The method of claim 7, wherein the step of invoking a display component to display an interactive special effect corresponding to the character action when the video data is played comprises:
invoking a local display component to display the interactive special effect corresponding to the character action when the video data is played; or
sending a request to a component server when the video data is played, so as to invoke a display component provided by the component server to display the interactive special effect corresponding to the character action.
11. The method of claim 7, further comprising:
receiving a component acquisition notification;
and acquiring the display component from a server according to the component acquisition notification.
12. An interactive special effect display apparatus, comprising:
a video acquisition module, used for acquiring video data captured by a first user terminal;
an action recognition module, used for recognizing the character action in the video data and obtaining an action text identifier corresponding to the character action;
and a data sending module, used for sending the video data and the action text identifier to a second user terminal, so that the second user terminal, according to the action text identifier, invokes a display component to display an interactive special effect corresponding to the character action when playing the video data.
13. An interactive special effect display apparatus, comprising:
a data receiving module, used for receiving video data and an action text identifier, wherein the action text identifier is obtained by recognizing a character action in the video data;
and a special effect display module, used for invoking, according to the action text identifier, a display component to display the interactive special effect corresponding to the character action when the video data is played.
14. An electronic device comprising a machine-readable storage medium and a processor, the machine-readable storage medium having stored thereon machine-executable instructions that, when executed by the processor, implement the method of any of claims 1-11.
15. A machine-readable storage medium having stored thereon machine-executable instructions which, when executed by a processor, implement the method of any one of claims 1-11.
Application CN201911134270.0A, filed 2019-11-19 (priority date 2019-11-19): Interactive special effect display method and device and electronic equipment; publication CN110809172A; status: Pending.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201911134270.0A | 2019-11-19 | 2019-11-19 | Interactive special effect display method and device and electronic equipment

Publications (1)

Publication Number | Publication Date
CN110809172A | 2020-02-18

Family

ID=69490522

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201911134270.0A (pending) | Interactive special effect display method and device and electronic equipment | 2019-11-19 | 2019-11-19

Country Status (1)

Country | Link
CN | CN110809172A

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111541951A * | 2020-05-08 | 2020-08-14 | Tencent Technology (Shenzhen) Co Ltd | Video-based interactive processing method and device, terminal and readable storage medium
WO2022022485A1 * | 2020-07-27 | 2022-02-03 | Alibaba Group Holding Ltd | Content provision method and apparatus, content display method and apparatus, and electronic device and storage medium
CN116225238A * | 2023-05-10 | 2023-06-06 | Honor Device Co Ltd | Man-machine interaction method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN104935860A * | 2014-03-18 | 2015-09-23 | Beijing Samsung Telecommunication Technology Research Co Ltd | Method and device for realizing video calling
CN107613310A * | 2017-09-08 | 2018-01-19 | Guangzhou Huaduo Network Technology Co Ltd | Live broadcasting method and apparatus, and electronic device
CN109618181A * | 2018-11-28 | 2019-04-12 | NetEase (Hangzhou) Network Co Ltd | Live broadcast interaction method and apparatus, electronic device, and storage medium
CN109922352A * | 2019-02-26 | 2019-06-21 | Li Gangjiang | Data processing method and apparatus, electronic device, and readable storage medium
WO2021004221A1 * | 2019-07-09 | 2021-01-14 | Beijing ByteDance Network Technology Co Ltd | Display processing method and apparatus for special effects, and electronic device



Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication (application publication date: 2020-02-18)