CN109348274B - Live broadcast interaction method and device and storage medium

Live broadcast interaction method and device and storage medium

Info

Publication number
CN109348274B
CN109348274B (application CN201811063369.1A)
Authority
CN
China
Prior art keywords
target user
special effect
instruction
information
voice information
Prior art date
Legal status
Active
Application number
CN201811063369.1A
Other languages
Chinese (zh)
Other versions
CN109348274A (en)
Inventor
牛冰峰
Current Assignee
China Mobile Communications Group Co Ltd
MIGU Music Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Music Co Ltd
MIGU Culture Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Music Co Ltd, MIGU Culture Technology Co Ltd
Priority to CN201811063369.1A
Publication of CN109348274A
Priority to PCT/CN2019/105771 (WO2020052665A1)
Application granted
Publication of CN109348274B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439 Processing of audio elementary streams
    • H04N21/4398 Processing of audio elementary streams involving reformatting operations of audio signals
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508 Management of client data or end-user data
    • H04N21/4532 Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a live broadcast interaction method, which comprises the following steps: after receiving a first instruction, acquiring voice information of a target user, wherein the first instruction is used for representing a gift giving instruction triggered by the target user; analyzing the voice information of the target user, and determining acoustic characteristic parameters corresponding to the voice information of the target user; and adjusting the audio track information of the anchor in the live broadcast room currently watched by the target user according to the acoustic characteristic parameters. The invention also discloses a live broadcast interaction device and a storage medium.

Description

Live broadcast interaction method and device and storage medium
Technical Field
The present invention relates to the field of network communications, and in particular, to a live broadcast interaction method, apparatus, and storage medium.
Background
With the rapid development of internet technology, network live streaming has become increasingly popular, and users can watch live programs of their favorite anchors anytime and anywhere through a live Application (APP). While watching a live video, a user can give gifts to a favorite anchor through the network platform so as to interact with the anchor.
Fig. 1 is a schematic diagram illustrating the effect of a gift presented through a conventional live APP. As shown in Fig. 1, the presented gift is usually displayed in the chat window of the live broadcast room in the form of a chat message, or displayed on the live interface in the form of a special effect animation. However, during display the gift may gradually disappear as it is covered by frequent chat content, so the presentation time of the gifted gift is short; in addition, the display effect of the gifted gift has a low degree of association with the live content of the current live broadcast room. As a result, the display effect of gifted gifts during live video playing is monotonous, which reduces the interaction effect between the user and the anchor.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a live broadcast interaction method, apparatus, and storage medium, which at least solve the problem in the related art that it is difficult to effectively improve the interaction effect between a user and an anchor.
In order to achieve the above purpose, the technical solution of the embodiment of the present invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a live broadcast interaction method, where the method includes:
after receiving a first instruction, acquiring voice information of a target user, wherein the first instruction is used for representing a gift giving instruction triggered by the target user;
analyzing the voice information of the target user, and determining acoustic characteristic parameters corresponding to the voice information of the target user;
and adjusting the audio track information of the anchor in the live broadcast room currently watched by the target user according to the acoustic characteristic parameters.
In a second aspect, an embodiment of the present invention further provides a live broadcast interaction apparatus, where the apparatus includes: an acquisition module, an analysis module, and an adjustment module; wherein,
the acquisition module is used for acquiring voice information of a target user after receiving a first instruction, wherein the first instruction is used for representing a gift giving instruction triggered by the target user;
the analysis module is used for analyzing the voice information of the target user and determining acoustic characteristic parameters corresponding to the voice information of the target user;
and the adjusting module is used for adjusting the track information of the anchor in the live broadcast room currently watched by the target user according to the acoustic characteristic parameters.
In a third aspect, an embodiment of the present invention further provides a live broadcast interaction apparatus, including a memory, a processor, and an executable program that is stored in the memory and can be executed by the processor, where the processor executes the steps of the live broadcast interaction method provided in the embodiment of the present invention when executing the executable program.
In a fourth aspect, an embodiment of the present invention further provides a storage medium, where an executable program is stored on the storage medium, and when the executable program is executed by a processor, the steps of the live broadcast interaction method provided in the embodiment of the present invention are implemented.
According to the live broadcast interaction method, device, and storage medium of the embodiments of the present invention, after the first instruction is received, voice information of a target user is collected, the voice information is analyzed to determine the acoustic characteristic parameters corresponding to it, and the audio track information of the anchor in the live broadcast room currently watched by the target user is adjusted according to the acoustic characteristic parameters. In this way, the anchor's audio track information is changed through the acoustic characteristic parameters specific to the target user, which not only enhances the presentation effect of the gifted gift, but also increases the interaction effect between the user and the anchor, improves the interest of the live broadcast and the user stickiness of the network platform, and greatly improves the user experience.
Drawings
FIG. 1 is a diagram illustrating the effect of a gift presented by a conventional live APP;
fig. 2 is a schematic view of an implementation flow of a live broadcast interaction method according to an embodiment of the present invention;
fig. 3 is a schematic view of an implementation flow of another live broadcast interaction method according to an embodiment of the present invention;
fig. 4A to fig. 4C are schematic diagrams illustrating display effects of collecting voice information of a target user according to an embodiment of the present invention;
fig. 5 is a functional structure diagram of a live broadcast interaction device according to an embodiment of the present invention;
fig. 6 is a functional structure diagram of another live broadcast interaction device according to an embodiment of the present invention;
fig. 7 is a schematic hardware structure diagram of a live broadcast interaction apparatus according to an embodiment of the present invention.
Detailed Description
So that the features and aspects of the embodiments of the present invention can be understood in detail, the embodiments briefly summarized above are described more particularly below with reference to the accompanying drawings. It should be understood by those skilled in the art that the technical solutions described in the embodiments of the present invention may be combined arbitrarily provided there is no conflict.
Fig. 2 is a schematic flow chart of an implementation of a live broadcast interaction method according to an embodiment of the present invention, where the live broadcast interaction method is applicable to a server or a terminal device; as shown in fig. 2, an implementation process of the live broadcast interaction method in the embodiment of the present invention may include the following steps:
step 201: and after receiving a first instruction, acquiring voice information of a target user, wherein the first instruction is used for representing a gift giving instruction triggered by the target user.
In the embodiment of the present invention, the first instruction may be a gift giving instruction triggered by the target user through a designated area of the live APP; the gift giving instruction may be triggered by inputting a click operation or a sliding operation in the designated area. Here, the target user can flexibly select among various gift giving manners according to their own requirements.
In an embodiment of the present invention, the gift giving manner includes at least one of the following: a normal gift giving mode and a special effect gift giving mode.
In this embodiment of the present invention, for the voice information of the target user collected in this step 201, the following method may be adopted: analyzing a special effect identifier from the first instruction; selecting to enter a corresponding special effect gift giving mode according to the special effect identification; and acquiring the voice information of the target user by calling audio acquisition equipment in the special-effect gift presenting mode.
It should be noted that, when a normal identifier is parsed from the first instruction, the normal gift giving mode is entered in response to the first instruction; in the normal gift giving mode, the gift given by the target user is displayed in an existing manner, such as in the form of a chat message or a special effect animation, which is not described again here.
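For illustration only, the following minimal Python sketch shows one way the branch described above could be implemented on the client side; the field name "effect_flag", the prompt text, and the helper objects (audio_device, ui) are assumptions introduced here and are not defined by the patent.

    # Minimal sketch, not the claimed implementation; names are hypothetical.
    NORMAL_IDENTIFIER = "normal"
    SPECIAL_EFFECT_IDENTIFIER = "voice_effect"

    def handle_first_instruction(instruction, audio_device, ui):
        identifier = instruction.get("effect_flag")
        if identifier == SPECIAL_EFFECT_IDENTIFIER:
            # Enter the special effect gift giving mode and call the audio
            # acquisition device to collect the target user's voice information.
            ui.prompt("Please record a piece of voice through the microphone")
            return audio_device.record(max_seconds=10)
        # Normal identifier: display the gift in the existing manner
        # (chat message or special effect animation) and collect no voice.
        ui.show_gift(instruction.get("gift_id"))
        return None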
Step 202: and analyzing the voice information of the target user, and determining the acoustic characteristic parameters corresponding to the voice information of the target user.
In an embodiment of the present invention, the acoustic feature parameter includes acoustic spectrum information.
It should be noted that, since sounds of the same timbre may have a plurality of different waveforms while the acoustic spectrum information of the same timbre is often the same, the embodiment of the present invention uses the acoustic spectrum information as the main basis for distinguishing the timbre of a sound. That is to say, in order to achieve the effect of changing the anchor's voice through the special-effect speech, the embodiment of the present invention needs to extract the acoustic spectrum information of the target user from the collected voice information of the target user in advance.
Here, after the determining the acoustic feature parameter corresponding to the voice information of the target user, the method further includes: and associating the acoustic frequency spectrum information corresponding to the voice information of the target user with the user account of the target user.
Specifically, after the acoustic spectrum information of the target user is determined, the association relationship between the acoustic spectrum information of the target user and the user account of the target user is stored in an acoustic spectrum information base of the server, so that the accuracy of the extracted acoustic spectrum information can be further determined through the user account of the target user.
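As a purely illustrative sketch, an in-memory stand-in for the server's acoustic spectrum information base described above could look as follows; the class and method names are hypothetical.

    # Minimal sketch of an acoustic spectrum information base keyed by user account.
    class SpectrumLibrary:
        def __init__(self):
            self._store = {}

        def associate(self, user_account, spectrum_info):
            # Store the association so that the extracted spectrum can later be
            # checked against the target user's account.
            self._store[user_account] = spectrum_info

        def lookup(self, user_account):
            return self._store.get(user_account)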
Step 203: and adjusting the audio track information of the anchor in the live broadcast room currently watched by the target user according to the acoustic characteristic parameters.
In this embodiment of the present invention, when the acoustic characteristic parameter is acoustic spectrum information, for adjusting the track information of the anchor in the live broadcast room currently watched by the target user according to the acoustic characteristic parameter in this step 203, the following method may be adopted:
analyzing the special effect duration from the first instruction; and adjusting the track information of the anchor in the live broadcast room currently watched by the target user according to the special effect duration and the acoustic frequency spectrum information.
It should be noted that, after the first instruction is received and the acoustic spectrum information corresponding to the voice information of the target user is determined, the live broadcast room corresponding to the first instruction, i.e. the live broadcast room currently watched by the target user, and the anchor of that room are first determined according to the live broadcast room identifier carried in the first instruction; the audio track information of the anchor of that room is then adjusted according to the determined acoustic spectrum information and the special effect duration.
Here, the track information of the live room anchor currently being viewed by the target user within a set time period may be adjusted. The set time period may be set according to actual conditions, and is not particularly limited herein.
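For illustration only, the sketch below shows one simple way such an adjustment could be realized in Python with NumPy: the anchor's audio within the effect duration is reshaped by the target user's magnitude spectrum envelope. This is an assumption about one possible signal-processing approach, not the method prescribed by the patent.

    import numpy as np

    def adjust_anchor_track(anchor_audio, user_spectrum, effect_duration, rate=44100):
        # anchor_audio: 1-D array of samples; user_spectrum: 1-D magnitude envelope.
        n = min(len(anchor_audio), int(effect_duration * rate))
        segment = np.asarray(anchor_audio[:n], dtype=np.float64)
        spectrum = np.fft.rfft(segment)
        # Resample the target user's magnitude envelope to the segment's bins,
        # normalize it, and use it to reshape the anchor's magnitude spectrum
        # while keeping the anchor's phase.
        envelope = np.interp(np.linspace(0.0, 1.0, len(spectrum)),
                             np.linspace(0.0, 1.0, len(user_spectrum)),
                             user_spectrum)
        envelope /= (np.max(np.abs(envelope)) + 1e-9)
        shaped = np.abs(spectrum) * envelope * np.exp(1j * np.angle(spectrum))
        adjusted = np.fft.irfft(shaped, n=n)
        # Only the set time period is adjusted; the rest of the track is untouched.
        rest = np.asarray(anchor_audio[n:], dtype=np.float64)
        return np.concatenate([adjusted, rest])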
In another optional example of the present invention, after the adjusting of the audio track information of the anchor of the live broadcast room currently watched by the target user, the method further comprises:
analyzing a special effect use range from the first instruction, wherein the special effect use range corresponds to a gift type given; and outputting the adjusted audio track information according to the special effect application range.
Here, the special effect use range includes at least one of the following: all users in the live broadcast room that the target user is currently watching, or a portion of the users, for example only the target user who triggered the gift giving instruction.
By adopting the technical solution of the embodiment of the present invention, the voice information of the target user is analyzed, the acoustic characteristic parameters corresponding to the voice information are determined, and the audio track information of the anchor in the live broadcast room currently watched by the target user is adjusted accordingly according to the acoustic characteristic parameters. This achieves the effect of changing the anchor's audio track information through the target user's voice special-effect gift, enhances the presentation effect of the gift and the interaction effect between the user and the anchor, and improves the interest of the live broadcast and the user stickiness of the network platform.
The specific implementation process of the live broadcast interaction method provided by the embodiment of the invention is further described in detail below.
Fig. 3 is a schematic flow chart illustrating an implementation process of another live broadcast interaction method according to an embodiment of the present invention, where the live broadcast interaction method is applicable to a server (e.g., a live broadcast server) or a terminal device; as shown in fig. 3, a specific implementation flow of the live broadcast interaction method may include the following steps:
step 301: and after receiving the first instruction, acquiring the voice information of the target user.
In this embodiment of the present invention, the first instruction is used to characterize a gift giving instruction triggered by the target user; specifically, the first instruction may be a gift giving instruction triggered by the target user through a designated area of the live APP, and it may be triggered by inputting a click operation or a sliding operation in the designated area. Here, the target user can flexibly select among various gift giving manners according to their own requirements.
In an embodiment of the present invention, the gift giving manner includes at least one of the following: a normal gift giving mode and a special effect gift giving mode.
For example, when the gift giving mode selected by the target user is the normal gift giving mode, the first instruction carries a normal identifier; in this case the normal identifier is parsed from the first instruction, the normal gift giving mode is entered in response to the first instruction, and the gift given by the target user is displayed in an existing manner, such as the chat message form shown in Fig. 1 or a special effect animation, which is not described again here. When the gift giving mode selected by the target user is the special effect gift giving mode, the first instruction carries a special effect identifier, i.e. a voice special effect identifier; in this case the special effect identifier is parsed from the first instruction, the special effect gift giving mode is entered in response to the first instruction, and in this mode the terminal device calls the audio acquisition device and pops up prompt information to guide the target user to input audio information.
Figs. 4A to 4C are schematic views illustrating the display effect of collecting voice information of a target user according to an embodiment of the present invention. As shown in Fig. 4A, the target user first clicks a virtual key that triggers "present a gift"; the live APP then pops up the gift giving mode selection interface shown in Fig. 4B and pops up the corresponding operation interface according to the gift giving mode selected by the target user on that interface, such as the normal gift giving mode or the special effect gift giving mode. In Fig. 4B, when the target user selects the special effect gift giving mode, for example, the live APP pops up the voice input guidance interface shown in Fig. 4C, such as "please record a piece of voice through the microphone", to guide the target user to input voice.
Step 302: and analyzing the voice information of the target user, and determining acoustic spectrum information corresponding to the voice information of the target user.
Here, sound is a wave (a sound wave) with a certain oscillation frequency, and the sound wave has physical parameters such as oscillation frequency, amplitude and waveform; it is these different physical parameters that give sounds their various auditory effects. Classified according to the sound characteristics of various musical instruments, sound has several different attributes, such as pitch, volume, timbre and tone quality. Pitch is related to the oscillation frequency of the sound wave and is proportional to it, i.e. the higher the oscillation frequency of the sound wave, the higher the pitch, and the lower the oscillation frequency, the lower the pitch. Volume is related to the oscillation amplitude of the sound wave and is proportional to the amplitude, i.e. the larger the amplitude, the larger the volume, and conversely, the smaller the amplitude, the smaller the volume.
Timbre refers to the perceptual characteristic of a sound, and the voices of different users are distinguished by timbre. Even when singing the same song in the same female treble, different users have different timbres. Timbre is determined by the waveform of the sound wave. A standard waveform is a sine wave, such as that of alternating current. However, the waveform of a user's voice, of various musical instruments, and of various sounds in nature is usually a complex shape, and it is precisely these waveforms of different shapes that determine the timbres of different sounds. The timbre of a sound can be represented by its waveform (the waveform is a time-domain representation of the sound) or by its acoustic spectrum (the spectrum is a frequency-domain representation of the sound), and the acoustic spectrum information corresponding to a waveform can be obtained by performing a Fourier transform on the waveform.
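For illustration, a recorded waveform can be converted to acoustic spectrum information with a fast Fourier transform; the short NumPy sketch below shows the idea (the function name and sampling rate are assumptions introduced here).

    import numpy as np

    def acoustic_spectrum(waveform, rate=44100):
        # Return (frequencies in Hz, magnitude spectrum) of a 1-D waveform.
        spectrum = np.fft.rfft(waveform)
        freqs = np.fft.rfftfreq(len(waveform), d=1.0 / rate)
        return freqs, np.abs(spectrum)

    # Example: a pure 440 Hz tone shows a peak near 440 Hz in the magnitude spectrum.
    t = np.arange(0, 1.0, 1.0 / 44100)
    freqs, mags = acoustic_spectrum(np.sin(2 * np.pi * 440 * t))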
It should be noted that, since sounds of the same timbre may have a plurality of different waveforms while the acoustic spectrum information of the same timbre is often the same, the embodiment of the present invention uses the acoustic spectrum information as the main basis for distinguishing the timbre of a sound. That is to say, in order to achieve the effect of changing the anchor's voice through the special-effect speech, the embodiment of the present invention needs to extract the acoustic spectrum information of the target user from the collected voice information of the target user in advance.
Here, when acquiring the acoustic spectrum information, only 20 basic acoustic spectra need to be acquired; more than 400 acoustic spectrum combinations can be formed from these 20 basic spectra, so the sound of the target user can be simulated by means of these combinations.
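The patent does not specify how the 20 basic acoustic spectra are combined; purely as an illustrative assumption, one could approximate a user's spectrum as a weighted combination of a base-spectrum bank, for example by least squares, as sketched below.

    import numpy as np

    def combination_weights(user_spectrum, base_spectra):
        # base_spectra: array of shape (20, n_bins); user_spectrum: shape (n_bins,).
        # Solve for the weights whose combination of base spectra best
        # approximates the user's spectrum in the least-squares sense.
        weights, *_ = np.linalg.lstsq(base_spectra.T, user_spectrum, rcond=None)
        return weights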
Here, after the determining the acoustic feature parameter corresponding to the voice information of the target user, the method further includes: and associating the acoustic frequency spectrum information corresponding to the voice information of the target user with the user account of the target user.
It should be noted that, when the target user gives the voice special effect gift, different special effect durations or special effect application ranges may be selected as needed.
Step 303: and analyzing the special effect duration from the first instruction.
Step 304: and adjusting the track information of the anchor in the live broadcast room currently watched by the target user according to the special effect duration and the acoustic frequency spectrum information.
In the embodiment of the present invention, after the first instruction is received and the acoustic spectrum information corresponding to the voice information of the target user is determined, the live broadcast room corresponding to the first instruction, i.e. the live broadcast room currently watched by the target user, and the anchor of that room are first determined according to the live broadcast room identifier carried in the first instruction; the audio track information of the anchor of that room is then adjusted according to the determined acoustic spectrum information and the special effect duration. Here, the audio track information of the anchor of the live broadcast room currently watched by the target user may be adjusted within a set time period. The set time period may be set according to actual conditions and is not specifically limited here.
Step 305: and analyzing a special effect use range from the first instruction, and outputting the adjusted track information according to the special effect use range.
Here, the special effect use range corresponds to the type of gift given. The special effect use range includes at least one of the following: all users in the live broadcast room that the target user is currently watching, or a portion of the users, for example only the target user who triggered the gift giving instruction.
For example, the voice special-effect gifts are divided into two types, "A" and "B". The use range of the "A" type voice special-effect gift covers all users in the live broadcast room; that is, when the target user chooses to send an "A" type voice special-effect gift, all users in the live broadcast room can hear the voice effect. When the target user chooses to send a "B" type voice special-effect gift, only the gift giver, i.e. the target user, can hear the voice effect, and other users cannot hear it.
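For illustration only, the "A"/"B" routing described above might be expressed as in the sketch below; the room and viewer objects and their methods are hypothetical.

    # Minimal sketch; not the claimed implementation.
    def output_adjusted_track(gift_type, adjusted_track, room, target_user):
        if gift_type == "A":
            # Use range: every user in the live broadcast room hears the effect.
            recipients = room.all_viewers()
        else:
            # Gift type "B": only the gift giver (the target user) hears it.
            recipients = [target_user]
        for viewer in recipients:
            viewer.play_audio(adjusted_track)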
By adopting the technical solution of the embodiment of the present invention, when a target user watches a live video through a live APP and is satisfied with the anchor's live content, that is, the satisfaction degree is greater than a set threshold, the target user can choose to give the anchor a voice special-effect gift. The target user inputs a piece of voice information through a voice input device (such as a microphone), the voice information is analyzed to determine the corresponding acoustic spectrum information, and the anchor's audio track information in the next set time period is adjusted according to the acoustic spectrum information. This achieves the effect of changing the anchor's voice through the voice special-effect gift, enhances the presentation effect of the gift and the interaction effect between the user and the anchor, and improves the interest of the live broadcast and the user stickiness of the network platform.
In order to implement the above live broadcast interaction method, an embodiment of the present invention further provides a live broadcast interaction apparatus, where the live broadcast interaction apparatus may be applied to a server or a terminal device, and fig. 5 is a functional structure diagram of the live broadcast interaction apparatus provided in the embodiment of the present invention; as shown in fig. 5, the live interaction apparatus includes: an acquisition module 51, an analysis module 52 and an adjustment module 53. The functions of the program modules will be described in detail below.
The acquisition module 51 is configured to acquire voice information of a target user after receiving a first instruction, where the first instruction is used to represent a gift giving instruction triggered by the target user;
the analysis module 52 is configured to analyze the voice information of the target user, and determine an acoustic feature parameter corresponding to the voice information of the target user;
and the adjusting module 53 is configured to adjust, according to the acoustic characteristic parameters, the audio track information of the anchor in the live broadcast room currently watched by the target user.
In the embodiment of the present invention, for the acquisition module 51 to acquire the voice information of the target user, the following method may be adopted: analyzing a special effect identifier from the first instruction; selecting to enter a corresponding special effect gift giving mode according to the special effect identification; and acquiring the voice information of the target user by calling audio acquisition equipment in the special-effect gift presenting mode.
Here, the acoustic feature parameter may include acoustic spectrum information.
As an implementation manner, fig. 6 is a functional structure diagram of another live broadcast interaction device provided in an embodiment of the present invention; as shown in fig. 6, the live interaction apparatus may further include:
the associating module 54 is configured to associate, after the parsing module 52 determines the acoustic feature parameter corresponding to the voice information of the target user, the acoustic spectrum information corresponding to the voice information of the target user with the user account of the target user.
For the adjustment module 53 to adjust the track information of the live broadcast anchor currently watched by the target user according to the acoustic feature parameter, the following method may be specifically adopted: analyzing the special effect duration from the first instruction; and adjusting the track information of the anchor in the live broadcast room currently watched by the target user according to the special effect duration and the acoustic frequency spectrum information.
In this embodiment of the present invention, the parsing module 52 is further configured to parse out a special effect usage range from the first instruction after the adjusting module 53 adjusts the track information of the live broadcast anchor currently being watched by the target user, where the special effect usage range corresponds to a gift type to be gifted;
the live broadcast interaction device further comprises:
and the output module 55 is configured to output the adjusted track information according to the special effect use range.
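As an illustrative sketch only (hypothetical names, not the claimed apparatus), the program modules listed above could be composed as follows:

    # Minimal sketch of one possible composition of the program modules.
    class LiveInteractionApparatus:
        def __init__(self, acquisition, analysis, adjustment, association, output):
            self.acquisition = acquisition    # collects the target user's voice
            self.analysis = analysis          # determines acoustic feature parameters
            self.adjustment = adjustment      # adjusts the anchor's audio track
            self.association = association    # binds spectrum info to the user account
            self.output = output              # outputs the adjusted track per use range

        def handle(self, first_instruction):
            voice = self.acquisition.collect(first_instruction)
            features = self.analysis.determine(voice)
            self.association.bind(first_instruction["user_account"], features)
            track = self.adjustment.adjust(first_instruction, features)
            self.output.emit(track, first_instruction)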
It should be noted that: in the live broadcast interaction device provided in the above embodiment, when the live broadcast interaction between the user and the anchor is implemented, the division of the program modules is merely used as an example, and in practical applications, the processing distribution may be completed by different program modules as needed, that is, the internal structure of the live broadcast interaction device is divided into different program modules to complete all or part of the processing described above. In addition, the live broadcast interaction device and the live broadcast interaction method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in detail in the method embodiments and are not described in detail herein.
In practical applications, the acquisition module 51, the analysis module 52, the adjustment module 53 and the association module 54 in the live broadcast interaction device may be implemented by a Central Processing Unit (CPU), a microprocessor Unit (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like located on a server or a terminal device; the output module 55 can be implemented in practical applications by a communication module (including a basic communication suite, an operating system, a communication module, a standardized interface, a protocol, etc.) and a transceiver antenna.
In order to implement the above live broadcast interaction method, the embodiment of the present invention also provides a hardware structure of the live broadcast interaction device. A live broadcast interaction apparatus implementing an embodiment of the present invention will now be described with reference to the drawings. The live broadcast interaction apparatus may be implemented as various types of computer devices, such as a server (e.g., a live server) or a terminal device (e.g., a desktop computer, a notebook computer, or a smart phone). The hardware structure of the live broadcast interaction device according to the embodiment of the present invention is further described below; it is to be understood that Fig. 7 only shows an exemplary structure, not the whole structure, and part or all of the structure shown in Fig. 7 may be implemented as needed.
Referring to fig. 7, fig. 7 is a schematic diagram of a hardware structure of a live broadcast interaction apparatus according to an embodiment of the present invention, which may be applied to various servers or terminal devices running application programs in practical applications, where the live broadcast interaction apparatus 700 shown in fig. 7 includes: at least one processor 701, memory 702, user interface 703, and at least one network interface 704. The various components of the live interaction device 700 are coupled together by a bus system 705. It will be appreciated that the bus system 705 is used to enable communications among the components. The bus system 705 includes a power bus, a control bus, and a status signal bus in addition to a data bus. But for clarity of illustration the various busses are labeled in figure 7 as the bus system 705.
The user interface 703 may include, among other things, a display, a keyboard, a mouse, a trackball, a click wheel, a key, a button, a touch pad, or a touch screen.
It will be appreciated that the memory 702 can be either volatile memory or nonvolatile memory, and can include both volatile and nonvolatile memory.
The memory 702 in the present embodiment is used to store various types of data to support the operation of the live interactive device 700. Examples of such data include any computer program for operating on the live interactive device 700, such as an executable program 7021 and an operating system 7022; the program implementing the live broadcast interaction method of the embodiment of the present invention may be included in the executable program 7021.
The live broadcast interaction method disclosed by the embodiment of the invention can be applied to the processor 701, or can be realized by the processor 701. The processor 701 may be an integrated circuit chip having signal processing capabilities. In the implementation process, the steps of the live broadcast interaction method may be implemented by hardware integrated logic circuits or instructions in the form of software in the processor 701. The processor 701 described above may be a general purpose processor, a DSP, or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. Processor 701 may implement or perform the live interaction methods, steps, and logic blocks provided in embodiments of the present invention. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the live broadcast interaction method provided by the embodiment of the invention can be directly embodied as the execution of a hardware decoding processor, or the execution of the combination of hardware and software modules in the decoding processor. The software modules may be located in a storage medium located in the memory 702, and the processor 701 reads information in the memory 702, and completes the steps of the live broadcast interaction method provided by the embodiment of the present invention in combination with hardware thereof.
In the embodiment of the present invention, the live interactive device 700 includes a memory 702, a processor 701, and an executable program 7021 stored on the memory 702 and capable of being executed by the processor 701, and when the processor 701 executes the executable program 7021, the processor 701 implements: after receiving a first instruction, acquiring voice information of a target user, wherein the first instruction is used for representing a gift giving instruction triggered by the target user; analyzing the voice information of the target user, and determining acoustic characteristic parameters corresponding to the voice information of the target user; and adjusting the audio track information of the anchor in the live broadcast room currently watched by the target user according to the acoustic characteristic parameters.
As an embodiment, when the processor 701 runs the executable program 7021, it implements: analyzing a special effect identifier from the first instruction; selecting to enter a corresponding special effect gift giving mode according to the special effect identification; and acquiring the voice information of the target user by calling audio acquisition equipment in the special-effect gift presenting mode.
As an embodiment, when the processor 701 runs the executable program 7021, it implements: the acoustic characteristic parameters comprise acoustic spectrum information; and after determining the acoustic characteristic parameters corresponding to the voice information of the target user, associating the acoustic spectrum information corresponding to the voice information of the target user with the user account of the target user.
As an embodiment, when the processor 701 runs the executable program 7021, it implements: analyzing the special effect duration from the first instruction; and adjusting the track information of the anchor in the live broadcast room currently watched by the target user according to the special effect duration and the acoustic frequency spectrum information.
As an embodiment, when the processor 701 runs the executable program 7021, it implements: after the adjustment of the track information of the live broadcast anchor currently watched by the target user, analyzing a special effect using range from the first instruction, wherein the special effect using range corresponds to a gift type given; and outputting the adjusted audio track information according to the special effect application range.
In an exemplary embodiment, an embodiment of the present invention further provides a storage medium, which may be a storage medium such as an optical disc, a flash memory, or a magnetic disc, and may be a non-transitory storage medium. In the embodiment of the present invention, the storage medium stores an executable program 7021, and when executed by the processor 701, the executable program 7021 implements: after receiving a first instruction, acquiring voice information of a target user, wherein the first instruction is used for representing a gift giving instruction triggered by the target user; analyzing the voice information of the target user, and determining acoustic characteristic parameters corresponding to the voice information of the target user; and adjusting the audio track information of the anchor in the live broadcast room currently watched by the target user according to the acoustic characteristic parameters.
As an embodiment, the executable program 7021 when executed by the processor 701 implements: analyzing a special effect identifier from the first instruction; selecting to enter a corresponding special effect gift giving mode according to the special effect identification; and acquiring the voice information of the target user by calling audio acquisition equipment in the special-effect gift presenting mode.
As an embodiment, the executable program 7021 when executed by the processor 701 implements: the acoustic characteristic parameters comprise acoustic spectrum information; and after determining the acoustic characteristic parameters corresponding to the voice information of the target user, associating the acoustic spectrum information corresponding to the voice information of the target user with the user account of the target user.
As an embodiment, the executable program 7021 when executed by the processor 701 implements: analyzing the special effect duration from the first instruction; and adjusting the track information of the anchor in the live broadcast room currently watched by the target user according to the special effect duration and the acoustic frequency spectrum information.
As an embodiment, the executable program 7021 when executed by the processor 701 implements: after the adjustment of the track information of the live broadcast anchor currently watched by the target user, analyzing a special effect using range from the first instruction, wherein the special effect using range corresponds to a gift type given; and outputting the adjusted audio track information according to the special effect application range.
In summary, the live broadcast interaction method provided by the embodiment of the present invention changes the audio track information of the anchor through the acoustic characteristic parameters specific to the target user, so that not only can the presentation effect of the gifted gift be enhanced, but also the interaction effect between the user and the anchor can be increased, and the interest of the live broadcast and the user stickiness of the network platform can be improved.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or executable program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of an executable program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and executable program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by executable program instructions. These executable program instructions may be provided to a general purpose computer, a special purpose computer, an embedded processor, or a processor of another programmable data processing apparatus to produce a machine, such that the instructions, which execute via the computer or the processor of the programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These executable program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These executable program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only exemplary of the present invention and should not be taken as limiting the scope of the present invention, and any modifications, equivalents, improvements, etc. that are within the spirit and principle of the present invention should be included in the present invention.

Claims (10)

1. A live interaction method, comprising:
after receiving a first instruction, acquiring voice information of a target user, wherein the first instruction is used for representing a gift giving instruction triggered by the target user;
analyzing the voice information of the target user, and determining acoustic characteristic parameters corresponding to the voice information of the target user;
adjusting the audio track information of the anchor of the live broadcast room currently watched by the target user according to the acoustic characteristic parameters; wherein,
after the adjusting of the audio track information of the anchor of the live broadcast room currently watched by the target user, the method further comprises:
analyzing a special effect use range from the first instruction; the special effect application range corresponds to the gift type to be given;
and outputting the adjusted audio track information according to the special effect application range.
2. The live interaction method of claim 1, wherein the collecting voice information of the target user comprises:
analyzing a special effect identifier from the first instruction;
selecting to enter a corresponding special effect gift giving mode according to the special effect identification;
and acquiring the voice information of the target user by calling audio acquisition equipment in the special-effect gift presenting mode.
3. The live interaction method of claim 1, wherein the acoustic feature parameters comprise acoustic spectrum information;
after the determining the acoustic characteristic parameters corresponding to the voice information of the target user, the method further includes:
and associating the acoustic frequency spectrum information corresponding to the voice information of the target user with the user account of the target user.
4. The live interaction method of claim 3, wherein the adjusting of the track information of the live room anchor currently being watched by the target user according to the acoustic feature parameter comprises:
analyzing the special effect duration from the first instruction;
and adjusting the track information of the anchor in the live broadcast room currently watched by the target user according to the special effect duration and the acoustic frequency spectrum information.
5. A live interaction device, the device comprising: an acquisition module, an analysis module and an adjustment module; wherein,
the acquisition module is used for acquiring voice information of a target user after receiving a first instruction, wherein the first instruction is used for representing a gift giving instruction triggered by the target user;
the analysis module is used for analyzing the voice information of the target user and determining acoustic characteristic parameters corresponding to the voice information of the target user;
the adjusting module is used for adjusting the track information of the anchor in the live broadcast room currently watched by the target user according to the acoustic characteristic parameters; wherein,
the analysis module is further configured to analyze a special effect use range from the first instruction after the adjustment module adjusts the track information of the anchor in the live broadcast room currently watched by the target user; the special effect application range corresponds to the gift type to be given;
the device further comprises:
and the output module is used for outputting the adjusted audio track information according to the special effect application range.
6. The live interaction device of claim 5, wherein the capture module is specifically configured to:
analyzing a special effect identifier from the first instruction;
selecting to enter a corresponding special effect gift giving mode according to the special effect identification;
and acquiring the voice information of the target user by calling audio acquisition equipment in the special-effect gift presenting mode.
7. The live interaction device of claim 5, wherein the acoustic feature parameters comprise acoustic spectrum information;
the device further comprises:
and the association module is used for associating the acoustic spectrum information corresponding to the voice information of the target user with the user account of the target user after the analysis module determines the acoustic characteristic parameters corresponding to the voice information of the target user.
8. The live interaction device of claim 7, wherein the adjustment module is specifically configured to:
analyzing the special effect duration from the first instruction;
and adjusting the track information of the anchor in the live broadcast room currently watched by the target user according to the special effect duration and the acoustic frequency spectrum information.
9. A live interaction device comprising a memory, a processor and an executable program stored on the memory and executable by the processor, wherein the processor executes the executable program to perform the steps of the live interaction method as claimed in any one of claims 1 to 4.
10. A storage medium having stored thereon an executable program, the executable program when executed by a processor implementing the steps of the live interaction method of any of claims 1 to 4.
CN201811063369.1A 2018-09-12 2018-09-12 Live broadcast interaction method and device and storage medium Active CN109348274B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811063369.1A CN109348274B (en) 2018-09-12 2018-09-12 Live broadcast interaction method and device and storage medium
PCT/CN2019/105771 WO2020052665A1 (en) 2018-09-12 2019-09-12 Live broadcast interaction method and apparatus, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811063369.1A CN109348274B (en) 2018-09-12 2018-09-12 Live broadcast interaction method and device and storage medium

Publications (2)

Publication Number Publication Date
CN109348274A CN109348274A (en) 2019-02-15
CN109348274B (en) 2021-03-23

Family

ID=65305258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811063369.1A Active CN109348274B (en) 2018-09-12 2018-09-12 Live broadcast interaction method and device and storage medium

Country Status (2)

Country Link
CN (1) CN109348274B (en)
WO (1) WO2020052665A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109348274B (en) * 2018-09-12 2021-03-23 咪咕音乐有限公司 Live broadcast interaction method and device and storage medium
CN110119264B (en) * 2019-05-21 2023-03-31 北京达佳互联信息技术有限公司 Sound effect adjusting method, device and storage medium
CN111988655A (en) * 2019-05-22 2020-11-24 西安诺瓦星云科技股份有限公司 Program playing method and device and program playing system
CN110989910A (en) * 2019-11-28 2020-04-10 广州虎牙科技有限公司 Interaction method, system, device, electronic equipment and storage medium
CN111314788A (en) * 2020-03-13 2020-06-19 广州华多网络科技有限公司 Voice password returning method and presenting method, device and equipment for voice gift
CN112533053B (en) * 2020-11-30 2022-08-23 北京达佳互联信息技术有限公司 Live broadcast interaction method and device, electronic equipment and storage medium
CN113014477A (en) * 2021-03-18 2021-06-22 广州市百果园信息技术有限公司 Gift processing method, device and equipment of voice platform and storage medium
CN113596596A (en) * 2021-07-27 2021-11-02 百果园技术(新加坡)有限公司 Gift rewarding system, method, device and medium for live broadcast application
CN113613033B (en) * 2021-08-03 2024-05-28 广州繁星互娱信息科技有限公司 Live broadcast interaction method and device for audience and anchor, electronic equipment and medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102881283A (en) * 2011-07-13 2013-01-16 三星电子(中国)研发中心 Method and system for processing voice
CN107682729A (en) * 2017-09-08 2018-02-09 广州华多网络科技有限公司 It is a kind of based on live interactive approach and live broadcast system, electronic equipment
CN108040285A (en) * 2017-11-15 2018-05-15 上海掌门科技有限公司 Net cast picture adjusting method, computer equipment and storage medium

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5479823B2 (en) * 2009-08-31 2014-04-23 ローランド株式会社 Effect device
US9037467B2 (en) * 2012-01-02 2015-05-19 International Business Machines Corporation Speech effects
CN105488135B (en) * 2015-11-25 2019-11-15 广州酷狗计算机科技有限公司 Live content classification method and device
CN105872838A (en) * 2016-04-28 2016-08-17 徐文波 Sending method and device of special media effects of real-time videos
CN106331736A (en) * 2016-08-24 2017-01-11 武汉斗鱼网络科技有限公司 Live client speech processing system and processing method thereof
CN106375864B (en) * 2016-08-25 2019-04-26 广州华多网络科技有限公司 Virtual objects distribute control method, device and mobile terminal
CN106507207B (en) * 2016-10-31 2019-07-05 北京小米移动软件有限公司 The method and device interacted in live streaming application
CN107093421A (en) * 2017-04-20 2017-08-25 深圳易方数码科技股份有限公司 A kind of speech simulation method and apparatus
CN107483986A (en) * 2017-06-30 2017-12-15 武汉斗鱼网络科技有限公司 A kind of method and system of gifts
CN107277637A (en) * 2017-08-18 2017-10-20 上海东方明珠新媒体股份有限公司 Medium living broadcast shopping interactive device, medium living broadcast shopping interactive system and method
CN107396177B (en) * 2017-08-28 2020-06-02 北京小米移动软件有限公司 Video playing method, device and storage medium
CN107481735A (en) * 2017-08-28 2017-12-15 ***通信集团公司 A kind of method, server and the computer-readable recording medium of transducing audio sounding
CN107818792A (en) * 2017-10-25 2018-03-20 北京奇虎科技有限公司 Audio conversion method and device
CN107767879A (en) * 2017-10-25 2018-03-06 北京奇虎科技有限公司 Audio conversion method and device based on tone color
CN107959882B (en) * 2017-12-12 2019-12-13 广东小天才科技有限公司 Voice conversion method, device, terminal and medium based on video watching record
CN108198566B (en) * 2018-01-24 2021-07-20 咪咕文化科技有限公司 Information processing method and device, electronic device and storage medium
CN109348274B (en) * 2018-09-12 2021-03-23 咪咕音乐有限公司 Live broadcast interaction method and device and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102881283A (en) * 2011-07-13 2013-01-16 三星电子(中国)研发中心 Method and system for processing voice
CN107682729A (en) * 2017-09-08 2018-02-09 广州华多网络科技有限公司 It is a kind of based on live interactive approach and live broadcast system, electronic equipment
CN108040285A (en) * 2017-11-15 2018-05-15 上海掌门科技有限公司 Net cast picture adjusting method, computer equipment and storage medium

Also Published As

Publication number Publication date
WO2020052665A1 (en) 2020-03-19
CN109348274A (en) 2019-02-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant