CN111757136A - Webpage audio live broadcast method, device, equipment and storage medium - Google Patents

Webpage audio live broadcast method, device, equipment and storage medium

Info

Publication number
CN111757136A
CN111757136A
Authority
CN
China
Prior art keywords
audio
live
playing
web
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010607621.1A
Other languages
Chinese (zh)
Inventor
常炎隆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010607621.1A priority Critical patent/CN111757136A/en
Publication of CN111757136A publication Critical patent/CN111757136A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8166Monomedia components thereof involving executable data, e.g. software
    • H04N21/8193Monomedia components thereof involving executable data, e.g. software dedicated tools, e.g. video decoder software or IPMP tool
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/858Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application discloses a webpage audio live broadcast method, device, equipment and storage medium, and relates to the technical fields of cloud computing and web page live broadcast. The specific implementation scheme is as follows: receiving live audio frames sent by a server in real time; and playing the live audio frames in one audio context through the web page audio player Web Audio while writing newly received live audio frames into another audio context. The method and device reduce the delay of web page audio live broadcast and improve its real-time performance.

Description

Webpage audio live broadcast method, device, equipment and storage medium
Technical Field
The application relates to the technical field of internet, in particular to the technical field of cloud computing and webpage live broadcast, and specifically relates to a webpage audio live broadcast method, device, equipment and storage medium.
Background
With the development of internet technology, live broadcasting has been widely applied in many areas of work and life. A user can log in through a client, select a live broadcast room from a live broadcast page, and enter the room to interact with the host. For example, a listener may enter an audio live broadcast room through a web browser to listen to the host speak.
Disclosure of Invention
The disclosure provides a method, a device, equipment and a storage medium for webpage audio live broadcast.
According to an aspect of the present disclosure, a method for live broadcasting webpage audio is provided, including:
receiving a live broadcast audio frame sent by a server in real time;
and playing the live audio frames in one audio context through the web page audio player Web Audio, and writing the received live audio frames into another audio context.
According to an aspect of the present disclosure, a method for live broadcasting webpage audio is provided, including:
determining a live broadcast audio frame according to audio data acquired from a live broadcast stream pushing end;
sending the live audio frames to the web page client in real time to instruct the web page client to execute the following steps: playing the live audio frames in one audio context through the web page audio player Web Audio, and writing the received live audio frames into another audio context.
According to an aspect of the present disclosure, there is provided a web page audio live broadcasting device, including:
the audio frame receiving module is used for receiving live audio frames sent by the server in real time;
and the audio playing module is used for playing the live audio frames in one audio context through the web page audio player Web Audio and writing the received live audio frames into another audio context.
According to an aspect of the present disclosure, there is provided a web page audio live broadcasting device, including:
the audio frame determining module is used for determining a live broadcast audio frame according to audio data acquired from a live broadcast stream pushing end;
an audio frame sending module, configured to send the live audio frames to the web page client in real time, so as to instruct the web page client to execute the following steps: playing the live audio frames in one audio context through the web page audio player Web Audio, and writing the received live audio frames into another audio context.
According to another aspect, there is provided an electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein:
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the web page audio live broadcast method described in any embodiment of the present application.
According to another aspect, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the web page audio live broadcast method described in any embodiment of the present application.
According to the technology of the application, the delay of the live broadcast of the webpage audio is reduced, and the live broadcast real-time performance is improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1a is a schematic flowchart of a method for live broadcasting web page audio provided in an embodiment of the present application;
fig. 1b is a schematic flowchart of an HLS-based audio live broadcast method in the related art;
fig. 1c is a schematic structural diagram of a web page audio live broadcasting system according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for live audio broadcasting of a web page according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a method for live broadcasting web page audio according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a method for live broadcasting web page audio according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a web page audio live broadcasting device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a web page audio live broadcasting device according to an embodiment of the present application;
fig. 7 is a block diagram of an electronic device for implementing a web page audio live broadcasting method according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
Fig. 1a is a schematic flowchart of a web page audio live broadcasting method provided in an embodiment of the present application. The embodiment is applicable to the condition of listening to audio live broadcast through the network. The webpage audio live broadcasting method disclosed by the embodiment can be executed by electronic equipment, and specifically can be executed by a webpage audio live broadcasting device, and the device can be realized by software and/or hardware and is configured in the electronic equipment. Referring to fig. 1a, the method for live broadcasting web page audio provided by this embodiment includes:
and S110, receiving live audio frames sent by the server in real time.
Fig. 1b is a schematic flow chart of an audio live broadcast method based on HLS (HTTP Live Streaming) in the related art. Referring to fig. 1b, a live streaming client records audio data and sends it to a server. The server slices the received audio data, using a preset duration as the slicing period, to obtain TS (Transport Stream) slices; it determines a storage address for each TS slice, writes the slice into the access area corresponding to that address, and updates the slice's storage address in an m3u8 index file. The server sends the m3u8 index file to the web page client, which downloads the TS slice data from the storage addresses listed in the index file and plays it. Since the HLS protocol divides slice periods in units of time such as 10 s or 20 s, the TS slices generated by the server are delayed by at least one slice period. In addition, the HLS client must frequently establish HTTP (HyperText Transfer Protocol) connections with the server, which increases network overhead. After downloading the TS slice data, the web page client also has to decapsulate it; given the relatively weak JavaScript processing capability of a browser, decapsulation adds further time overhead and may increase the live broadcast delay even more.
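The slice-period delay floor described above can be made concrete with a small illustrative sketch (the 10 s slice period and the overhead figure are example values, not figures from this application):

```javascript
// The server must finish recording an entire slice before the web page
// client can even begin downloading it, so the HLS delay is bounded
// below by one slice period, plus download and decapsulation overhead.
function hlsMinDelayMs(slicePeriodSeconds, overheadMs) {
  return slicePeriodSeconds * 1000 + overheadMs;
}

console.log(hlsMinDelayMs(10, 200)); // 10200 ms for a 10 s slice period
```

Even with zero transport overhead, the delay cannot drop below the slice period itself, which is why the application moves to frame-level delivery.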
Fig. 1c is a schematic structural diagram of a web page audio live broadcasting system according to an embodiment of the present application. Referring to fig. 1c, the live streaming client records audio data and sends the recorded audio data to the server, and the server determines a live audio frame according to the audio data of the live streaming client and sends the live audio frame to the web page client, i.e., the web page listening end in real time.
In the embodiment of the application, a live audio frame may be unencapsulated audio data. The server does not encapsulate the audio data; it sends frame-level live audio data to the web page client in real time, which avoids the transmission delay introduced by server-side slicing and can greatly reduce the live broadcast delay.
And S120, playing the live audio frames in one audio context through the web page audio player Web Audio, and writing the received live audio frames into another audio context.
Web Audio is an audio playback interface supported by browsers. An audio context (AudioContext) is a set of audio nodes together with the connections between them; it allows one or more audio signals, connected in an arbitrary topology, to terminate at an audio playback device node. Because Web Audio by itself only supports playing audio data of fixed length, it cannot be applied directly to live scenarios.
The embodiment of the application therefore constructs two audio contexts for Web Audio: while Web Audio is playing the live audio frames of one audio context, each newly received live audio frame is written into the queue of the other audio context. In other words, audio data can still be written while Web Audio is playing. Through the cooperation of Web Audio and the two audio contexts, Web Audio gains support for continuously written audio data and can therefore be applied to live scenarios.
The lengths of the two audio context queues are not specifically limited in the embodiment of the present application; they may be the same or different and may be adjusted according to service requirements. While the audio frames of one audio context are being played, newly received audio frames are written into the queue of the other audio context, not into the queue of the currently playing one. That is, the queue length of the audio context in the playing state does not change during playback, which is what allows the audio to be played through Web Audio.
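The application itself gives no code for the two-context mechanism; the following is a minimal pure-logic sketch of the queue discipline described above, with real browser AudioContext playback replaced by plain arrays so only the write/play/switch mechanics are shown:

```javascript
// Sketch of the two-context write/play discipline: while one queue is
// being played, every newly received frame goes into the *other* queue,
// so the queue in the playing state only ever shrinks.
class DualContextQueues {
  constructor() {
    this.queues = [[], []]; // stand-ins for the two audio-context queues
    this.playing = 0;       // index of the context currently playing
  }

  receiveFrame(frame) {
    // New frames are always written to the non-playing context's queue.
    this.queues[1 - this.playing].push(frame);
  }

  playNextFrame() {
    // Returns the next frame of the playing context, or null when that
    // queue is drained, at which point the play channel is switched.
    const q = this.queues[this.playing];
    if (q.length === 0) {
      this.playing = 1 - this.playing; // switch play channels
      return null;
    }
    return q.shift();
  }
}
```

During a play period the playing queue only drains; writes accumulate in the idle queue, which becomes the playing queue once the current one is empty.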
According to the above technical scheme, the server sends audio frames to the web page client in real time, avoiding the transmission delay caused by server-side slicing and greatly reducing the live broadcast delay, and live playback is achieved through the cooperation of Web Audio and the two audio contexts.
Fig. 2 is a schematic flowchart of a method for live audio broadcasting of a web page according to an embodiment of the present application. The present embodiment is an alternative proposed on the basis of the above-described embodiments. Referring to fig. 2, the method for live broadcasting web page audio provided by this embodiment includes:
s210, receiving live audio frames sent by the server in real time.
The live broadcast audio frame is audio data which is not packaged by the server, and the server sends the live broadcast audio frame to the webpage client in real time, so that the live broadcast delay is greatly reduced.
In an alternative embodiment, S210 includes: establishing a long connection with the server by using Web Socket; and receiving the live audio frames sent by the server in real time through the long connection. Specifically, the web page client establishes a long connection with the server using Web Socket, and the server responds to the web page client's live broadcast request by sending audio frames over that connection. Compared with frequently establishing HTTP connections between the web page client and the server, this reduces network overhead.
In an alternative embodiment, the live audio frame is an original live audio frame in Pulse Code Modulation (PCM) format or a compressed live audio frame in Advanced Audio Coding (AAC) format. Specifically, the server may compress the original live audio data to obtain compressed live audio frames. Supporting both forms makes the scheme compatible with frame- and stream-format audio, simplifies the logic, and reduces delay.
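Web Audio consumes floating-point samples in the range [-1, 1], so if the live audio frames arrive as raw signed 16-bit PCM, the web page client would convert them along these lines (a sketch; the 16-bit sample format is an assumption, not specified above):

```javascript
// Convert signed 16-bit PCM samples into the Float32 range [-1, 1]
// expected by a Web Audio AudioBuffer channel.
function pcmInt16ToFloat32(int16Samples) {
  const out = new Float32Array(int16Samples.length);
  for (let i = 0; i < int16Samples.length; i++) {
    out[i] = int16Samples[i] / 32768; // 32768 = 2^15
  }
  return out;
}
```

In the browser, the resulting Float32Array would be copied into an AudioBuffer channel before the buffer is scheduled for playback.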
S220, in a first playing period, playing the live broadcast Audio frame in the first Audio context through Web Audio, and writing the received live broadcast Audio frame into a second Audio context.
And S230, in a second playing period, playing the live broadcast Audio frame in the second Audio context through the Web Audio, and writing the received live broadcast Audio frame into the first Audio context.
In the embodiment of the present application, the durations of the first playing period and the second playing period are not specifically limited, and the durations of the first playing period and the second playing period may be the same or different, and may be determined according to a service requirement. The first playing period and the second playing period are alternately switched and do not overlap.
Specifically, the web page client loads Web Audio and initializes it. In the initial playing period, the audio frames received from the server within a preset duration (e.g., 1 s) may be written into the queue of the first audio context and played; during playback, newly received audio frames are written into the queue of the second audio context, and when the initial playing period ends, playback switches to the second playing period. Live broadcasting through Web Audio is subsequently achieved by alternately switching between the first and second playing periods.
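With the illustrative 1 s initial buffer above, and an assumed AAC frame of 1024 samples at 48 kHz (assumptions for illustration, not figures from this application), the size of the initial queue can be estimated as follows:

```javascript
// Number of frames the web page client would write into the first
// audio context's queue during the initial play period.
function initialQueueFrames(prebufferSeconds, samplesPerFrame, sampleRate) {
  const frameSeconds = samplesPerFrame / sampleRate; // duration of one frame
  return Math.ceil(prebufferSeconds / frameSeconds);
}

console.log(initialQueueFrames(1, 1024, 48000)); // 47 frames ≈ 1 s of audio
```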
In an optional implementation, when the first playing period or the second playing period ends, a playing channel switching instruction is generated so that Web Audio switches the playing channel. Specifically, in the first playing period, if no unplayed audio remains in the queue of the first audio context, the first playing period is determined to have ended; correspondingly, in the second playing period, if no unplayed audio remains in the queue of the second audio context, the second playing period is determined to have ended. By generating a channel switching instruction at the end of each playing period, Web Audio switches the playing channel, achieving continuous playback of the live audio data.
In addition, parameters such as the volume and amplitude of the live audio can be adjusted through Web Audio, further improving the live broadcast quality. The web page client logic is simple and easy to operate: no audio data needs to be decapsulated, which reduces the load on the browser and improves the stability of the audio live broadcast.
According to the technical scheme of the embodiment of the application, the continuity of Audio live broadcast of the Web Audio is further enhanced through the alternate switching of the first playing period and the second playing period, so that the live broadcast delay is reduced; the server side and the webpage client side are connected in a long mode, and therefore live broadcast network overhead is reduced.
Fig. 3 is a schematic flowchart of a web page audio live broadcasting method provided in an embodiment of the present application. The embodiment is applicable to the condition of listening to audio live broadcast through the network. The webpage audio live broadcasting method disclosed by the embodiment can be executed by electronic equipment, and specifically can be executed by a webpage audio live broadcasting device, and the device can be realized by software and/or hardware and is configured in the electronic equipment. Referring to fig. 3, the method for live broadcasting webpage audio provided by this embodiment may include:
s310, determining a live broadcast audio frame according to the audio data acquired from the live broadcast stream pushing end.
Specifically, the live streaming push end collects audio data and sends the collected audio data to the server. The format of the audio data is not specifically limited in the embodiments of the present application, and may be bare stream audio data.
In an alternative embodiment, the live audio frame is an original live audio frame in Pulse Code Modulation (PCM) format or a compressed live audio frame in Advanced Audio Coding (AAC) format. Specifically, the server may compress the original live audio data to obtain compressed live audio frames. Supporting both forms makes the scheme compatible with frame- and stream-format audio, simplifies the logic, and reduces delay.
S320, sending the live audio frames to the web page client in real time to instruct the web page client to execute the following steps: playing the live audio frames in one audio context through the web page audio player Web Audio, and writing the received live audio frames into another audio context.
Specifically, the server may directly use the audio data received from the stream pushing end as audio frames, that is, send the original audio stream to the web page client in real time; the server may also compress the audio data, for example into an AAC-format audio stream, and send that stream to the web page client in real time. The server does not encapsulate the audio data; it sends frame-level live audio data to the web page client in real time, which avoids the transmission delay introduced by server-side slicing and can greatly reduce the live broadcast delay.
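Sending frames "in real time" implies the server paces its sends at the audio frame rate; the arithmetic can be sketched as follows (the 1024-sample frame size and 48 kHz sample rate are assumptions for illustration, and `sendFrame` is a hypothetical callback that writes one frame to the long-lived connection):

```javascript
// One AAC frame carries samplesPerFrame samples of audio, so the server
// emits roughly one frame every frameIntervalMs milliseconds.
function frameIntervalMs(samplesPerFrame, sampleRate) {
  return (samplesPerFrame / sampleRate) * 1000;
}

// Minimal pacing loop sketch; returns the timer so it can be stopped.
function startPacing(sendFrame, samplesPerFrame, sampleRate) {
  return setInterval(sendFrame, frameIntervalMs(samplesPerFrame, sampleRate));
}

console.log(frameIntervalMs(1024, 48000).toFixed(1)); // "21.3" ms per frame
```

This per-frame interval, rather than a 10 s or 20 s slice period, is what bounds the push-side delay in the scheme.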
In an optional implementation, the server establishes a long connection with the web page client by using Web Socket, and sends the live audio frames to the web page client in real time through the long connection. Specifically, the server responds to the web page client's live broadcast request by sending audio frames over the long connection. Compared with frequently establishing HTTP connections between the web page client and the server, this reduces network overhead.
According to the technical scheme, the server sends the audio frames to the webpage client in real time, live broadcast delay can be greatly reduced compared with TS (transport stream) fragmentation, and long connection is established between the server and the webpage client, so that live broadcast network overhead can be reduced.
Fig. 4 is a schematic flowchart of a method for live broadcasting web page audio according to an embodiment of the present application. Referring to fig. 4, the method for live broadcasting web page audio provided by this embodiment includes:
s410, the stream pushing client collects audio data and sends the audio data to the server.
And S420, the server determines an audio frame according to the audio data.
The audio frames may be original live audio frames in a pulse code modulation format or compressed live audio frames in an advanced audio coding format.
And S430, the web page client establishes a long connection with the server by using Web Socket.
And S440, the server responds to the live broadcast request of the web page client and sends the audio frames to the web page client in real time through the long connection.
And S450, the web page client loads and initializes Web Audio.
And S460, in the first playing period, the webpage client plays the live Audio frame in the first Audio context through the Web Audio, and writes the received live Audio frame into the second Audio context.
And S470, in the second playing period, the webpage client plays the live Audio frame in the second Audio context through the Web Audio, and writes the received live Audio frame into the first Audio context.
The first playing period and the second playing period are not overlapped and are alternately switched. Specifically, when any one play cycle is finished, a play channel switching instruction is generated, so that the Web Audio switches the play channel.
According to the technical scheme of this embodiment, the server achieves frame-level push and reception delay by sending the generated AAC compressed audio stream or the PCM original audio stream in real time; since one AAC frame is about 20 ms, the delay improves greatly over the second-level delay of HLS, and live broadcast granularity reaches the frame level. The ability to play live streaming media continuously is achieved through dynamic switching and dynamic updating of the dual-channel queues. The scheme is compatible with both frame- and stream-format audio, and a PCM stream can skip the encoding step, saving logic delay. In addition, a long connection is established using the Web Socket protocol, so there is no need to frequently establish costly HTTP connections as in HLS.
Fig. 5 is a schematic structural diagram of a web page audio live broadcasting device according to an embodiment of the present application. Referring to fig. 5, an embodiment of the present application discloses a web page audio live broadcasting apparatus 500, where the apparatus 500 may be configured in a web page client, and the apparatus 500 includes:
an audio frame receiving module 501, configured to receive a live audio frame sent by a server in real time;
and the audio playing module 502, configured to play the live audio frames in one audio context through the web page audio player Web Audio and write the received live audio frames into another audio context.
Optionally, the audio playing module 502 includes:
the first playing unit is used for playing a live broadcast Audio frame in a first Audio context through Web Audio in a first playing period and writing the received live broadcast Audio frame into a second Audio context;
and the second playing unit is used for playing the live broadcast Audio frame in the second Audio context through the Web Audio in a second playing period and writing the received live broadcast Audio frame into the first Audio context.
Optionally, the audio playing module 502 further includes:
and the channel switching unit is used for generating a playing channel switching instruction when the first playing period or the second playing period is finished, so that the Web Audio switches the playing channel.
Optionally, the audio frame receiving module 501 includes:
the long connection unit is used for establishing long connection with the server by using Web Socket;
and the audio frame receiving unit is used for receiving the live audio frame sent by the server side in real time through the long connection.
Optionally, the live audio frame is an original live audio frame in a pulse code modulation format or a compressed live audio frame in an advanced audio coding format.
According to the technical scheme of the embodiment of the application, the server sends the audio frames to the webpage client in real time, so that compared with TS (transport stream) fragmentation, the live broadcast delay can be greatly reduced; the capability of continuously playing the live streaming media is realized through dynamic switching and dynamic updating of the dual-channel queue; and long connection is established between the server side and the webpage client side, so that the live broadcast network overhead can be reduced.
Fig. 6 is a schematic structural diagram of a web page audio live broadcasting device according to an embodiment of the present application. Referring to fig. 6, an embodiment of the present application discloses a web page audio live broadcasting apparatus 600, where the apparatus 600 may be configured in a server, and the apparatus 600 may include:
an audio frame determining module 601, configured to determine a live audio frame according to audio data obtained from a live streaming end;
an audio frame sending module 602, configured to send the live audio frames to the web page client in real time, so as to instruct the web page client to perform the following steps: playing the live audio frames in one audio context through the web page audio player Web Audio, and writing the received live audio frames into another audio context.
Optionally, the live audio frame is an original live audio frame in a pulse code modulation format or a compressed live audio frame in an advanced audio coding format.
According to the technical scheme, the server sends the audio frames to the webpage client in real time, live broadcast delay can be greatly reduced compared with TS (transport stream) fragmentation, and long connection is established between the server and the webpage client, so that live broadcast network overhead can be reduced.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 7 is a block diagram of an electronic device according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 7, the electronic apparatus includes: one or more processors 701, a memory 702, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 7, one processor 701 is taken as an example.
The memory 702 is a non-transitory computer-readable storage medium provided herein. The memory stores instructions executable by at least one processor, so that the at least one processor performs the web page audio live broadcasting method provided by the present application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform the web page audio live broadcasting method provided herein.
The memory 702, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the web page audio live broadcasting method in the embodiments of the present application (e.g., the audio frame receiving module 501 and the audio playing module 502 shown in fig. 5, and the audio frame determining module 601 and the audio frame sending module 602 shown in fig. 6). The processor 701 executes various functional applications and data processing of the server by running the non-transitory software programs, instructions, and modules stored in the memory 702, thereby implementing the web page audio live broadcasting method of the above method embodiments.
The memory 702 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the electronic device for web page audio live broadcasting, and the like. Further, the memory 702 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 702 may optionally include memory located remotely from the processor 701; such remote memory may be connected to the electronic device for web page audio live broadcasting via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for the web page audio live broadcasting method may further include: an input device 703 and an output device 704. The processor 701, the memory 702, the input device 703, and the output device 704 may be connected by a bus or in other manners; a bus connection is taken as an example in fig. 7.
The input device 703 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device for web page audio live broadcasting, and may be, for example, a touch screen, keypad, mouse, track pad, touch pad, pointing stick, one or more mouse buttons, trackball, joystick, or other input device. The output device 704 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibrating motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, ASICs (application-specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solution of the embodiments of the present application, the server side achieves frame-level push and receive delay by sending the generated AAC compressed audio stream and PCM raw audio stream in real time; since one AAC frame is about 20 ms, the delay is reduced substantially relative to HLS's second-level delay, and the live broadcast granularity reaches the frame level. The ability to play the live media stream continuously is achieved through dynamic switching and dynamic updating of the dual-channel queue. Both frame and stream formats are compatible, and a PCM stream can skip the coding step, thereby saving logic delay. In addition, a long link is established using the Web Socket protocol, so that costly HTTP connections do not need to be established frequently as with HLS.
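The frame-level delay figure can be sanity-checked with simple arithmetic. The following is a minimal sketch, assuming the common AAC frame size of 1024 samples and a 5-second TS segment (both assumptions, not stated in the patent); the ~20 ms figure in the text corresponds to this order of magnitude:

```javascript
// Frame-level vs segment-level latency, a back-of-envelope sketch.
// Assumptions (not from the patent): 1024 samples per AAC frame,
// a 44.1 kHz sample rate, and 5-second HLS/TS segments.
const SAMPLES_PER_AAC_FRAME = 1024;

function aacFrameMs(sampleRateHz) {
  return (SAMPLES_PER_AAC_FRAME / sampleRateHz) * 1000;
}

const frameMs = aacFrameMs(44100);    // ≈ 23.2 ms, on the order of the ~20 ms cited
const hlsSegmentMs = 5 * 1000;        // a typical TS segment duration
const ratio = hlsSegmentMs / frameMs; // segment granularity is hundreds of times coarser

console.log(frameMs.toFixed(1), ratio.toFixed(0));
```

At 48 kHz the frame shortens to about 21.3 ms; either way, per-frame delivery is two orders of magnitude finer-grained than per-segment delivery.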
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present application is not limited herein as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (16)

1. A webpage audio live broadcasting method comprises the following steps:
receiving a live broadcast audio frame sent by a server in real time;
and playing the live Audio frame in the Audio context through a Web Audio player Web Audio, and writing the received live Audio frame into another Audio context.
2. The method of claim 1, wherein the playing a live Audio frame in an Audio context through a Web Audio player, Web Audio, and writing the received live Audio frame into another Audio context comprises:
in a first playing period, playing a live Audio frame in a first Audio context through Web Audio, and writing a received live Audio frame into a second Audio context;
and in a second playing period, playing the live broadcast Audio frame in the second Audio context through the Web Audio, and writing the received live broadcast Audio frame into the first Audio context.
3. The method of claim 2, further comprising:
and generating a playing channel switching instruction when the first playing period or the second playing period ends, so that the Web Audio switches the playing channel.
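The dual audio context switching of claims 2 and 3 amounts to a ping-pong buffer. The following minimal simulation is an illustrative sketch, not the patented implementation; the class and method names are invented, and a real web client would schedule the frames through the browser's Web Audio AudioContext:

```javascript
// Minimal ping-pong model of claims 2-3: in each period, one queue is
// drained for playback while incoming frames are written to the other;
// at the period boundary a switch instruction swaps the roles.
class DualContextPlayer {
  constructor() {
    this.queues = [[], []]; // stand-ins for the two audio contexts
    this.playing = 0;       // index of the context currently being played
  }
  writeFrame(frame) {
    // received frames always go to the context NOT being played
    this.queues[1 - this.playing].push(frame);
  }
  playPeriod() {
    // drain the playing context for this period
    return this.queues[this.playing].splice(0);
  }
  switchChannels() {
    // the "playing channel switching instruction" of claim 3
    this.playing = 1 - this.playing;
  }
}

const p = new DualContextPlayer();
p.writeFrame('f1'); p.writeFrame('f2'); // first period: buffer into the idle context
p.switchChannels();                     // period ends, roles swap
console.log(p.playPeriod());            // now plays f1, f2 from that context
p.writeFrame('f3');                     // meanwhile buffering into the other one
```

Because writes never touch the queue being drained, playback can continue without interruption while new frames arrive, which is the stated purpose of the dual-channel queue.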
4. The method of claim 1, wherein the receiving live audio frames sent by the server in real-time comprises:
establishing long connection with the server by using Web Socket;
and receiving the live audio frame sent by the server side in real time through the long connection.
5. The method of claim 1, wherein the live audio frames are raw live audio frames in a pulse code modulation format or compressed live audio frames in an advanced audio coding format.
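Claim 5's two formats trade client-side work against bandwidth: raw PCM needs no decoding on the web client, while AAC compresses the stream. A rough comparison with illustrative numbers (CD-quality PCM versus an assumed 128 kbps AAC stream, neither figure taken from the patent):

```javascript
// Trade-off behind claim 5 (illustrative numbers, not from the patent):
// raw PCM skips the coding step on the client but costs far more
// bandwidth than an AAC stream at a typical live bitrate.
function pcmKbps(sampleRateHz, bitsPerSample, channels) {
  return (sampleRateHz * bitsPerSample * channels) / 1000;
}

const pcm = pcmKbps(44100, 16, 2); // 1411.2 kbps for CD-quality stereo
const aac = 128;                   // an assumed typical AAC live bitrate, kbps
console.log(pcm, (pcm / aac).toFixed(1)); // PCM is roughly 11x the bandwidth
```

Which format is preferable therefore depends on whether the deployment is constrained by client CPU (favoring PCM) or by network bandwidth (favoring AAC).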
6. A webpage audio live broadcasting method comprises the following steps:
determining a live broadcast audio frame according to audio data acquired from a live broadcast stream pushing end;
sending the live audio frame to the webpage client in real time to instruct the webpage client to execute the following steps: and playing the live Audio frame in the Audio context through a Web Audio player Web Audio, and writing the received live Audio frame into another Audio context.
7. The method of claim 6, wherein the live audio frames are raw live audio frames in a pulse code modulation format or compressed live audio frames in an advanced audio coding format.
8. A web page audio live device, comprising:
the audio frame receiving module is used for receiving live audio frames sent by the server in real time;
and the Audio playing module is used for playing the live broadcast Audio frame in the Audio context through a Web Audio player Web Audio and writing the received live broadcast Audio frame into another Audio context.
9. The apparatus of claim 8, wherein the audio playback module comprises:
the first playing unit is used for playing a live broadcast Audio frame in a first Audio context through Web Audio in a first playing period and writing the received live broadcast Audio frame into a second Audio context;
and the second playing unit is used for playing the live broadcast Audio frame in the second Audio context through the Web Audio in a second playing period and writing the received live broadcast Audio frame into the first Audio context.
10. The apparatus of claim 9, the audio playback module further comprising:
and the channel switching unit is used for generating a playing channel switching instruction when the first playing period or the second playing period is finished, so that the Web Audio switches the playing channel.
11. The apparatus of claim 8, wherein the audio frame receiving module comprises:
the long connection unit is used for establishing long connection with the server by using Web Socket;
and the audio frame receiving unit is used for receiving the live audio frame sent by the server side in real time through the long connection.
12. The apparatus of claim 8, wherein the live audio frames are raw live audio frames in a pulse code modulation format or compressed live audio frames in an advanced audio coding format.
13. A web page audio live device, comprising:
the audio frame determining module is used for determining a live broadcast audio frame according to audio data acquired from a live broadcast stream pushing end;
an audio frame sending module, configured to send the live audio frame to a web page client in real time, so as to instruct the web page client to execute the following steps: and playing the live Audio frame in the Audio context through a Web Audio player Web Audio, and writing the received live Audio frame into another Audio context.
14. The apparatus of claim 13, wherein the live audio frames are raw live audio frames in a pulse code modulation format or compressed live audio frames in an advanced audio coding format.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.
CN202010607621.1A 2020-06-29 2020-06-29 Webpage audio live broadcast method, device, equipment and storage medium Pending CN111757136A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010607621.1A CN111757136A (en) 2020-06-29 2020-06-29 Webpage audio live broadcast method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN111757136A true CN111757136A (en) 2020-10-09

Family

ID=72678077

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010607621.1A Pending CN111757136A (en) 2020-06-29 2020-06-29 Webpage audio live broadcast method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111757136A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6269122B1 (en) * 1998-01-02 2001-07-31 Intel Corporation Synchronization of related audio and video streams
CN1885830A (en) * 2006-07-04 2006-12-27 华为技术有限公司 Method and gateway for transmitting voice stream based on network load in wireless packet network
CN101247432A (en) * 2007-07-18 2008-08-20 北京高信达网络科技有限公司 VoIP voice data real-time monitoring method and device
US20180361237A1 (en) * 2007-12-05 2018-12-20 Sony Interactive Entertainment America Llc Method for Multicasting Views of Real-Time Streaming Interactive Video
US20090192803A1 (en) * 2008-01-28 2009-07-30 Qualcomm Incorporated Systems, methods, and apparatus for context replacement by audio level
US20090192791A1 (en) * 2008-01-28 2009-07-30 Qualcomm Incorporated Systems, methods and apparatus for context descriptor transmission
US20170133038A1 (en) * 2015-11-11 2017-05-11 Apptek, Inc. Method and apparatus for keyword speech recognition
CN105516739A (en) * 2015-12-22 2016-04-20 腾讯科技(深圳)有限公司 Video live broadcasting method and system, transcoding server and webpage client
CN107483972A (en) * 2017-07-24 2017-12-15 平安科技(深圳)有限公司 Live processing method, storage medium and a kind of mobile terminal of a kind of audio frequency and video
CN109947978A (en) * 2017-07-28 2019-06-28 杭州海康威视数字技术股份有限公司 A kind of audio storage, playback method and device
CN110858919A (en) * 2018-08-24 2020-03-03 北京字节跳动网络技术有限公司 Data processing method and device in media file playing process and storage medium

Similar Documents

Publication Publication Date Title
WO2020078165A1 (en) Video processing method and apparatus, electronic device, and computer-readable medium
US20190341078A1 (en) Gapless video looping
CN108055304B (en) Remote data synchronization method, device, server, equipment and storage medium
CN111866567B (en) Multimedia playing method, device, equipment and storage medium
EP2793475A1 (en) Distribution control system, distribution control method, and computer-readable storage medium
CN111432248A (en) Quality monitoring method and device for live video stream
WO2019134499A1 (en) Method and device for labeling video frames in real time
WO2023093322A1 (en) Live broadcast method and device
WO2015180446A1 (en) System and method for maintaining connection channel in multi-device interworking service
CN114244821B (en) Data processing method, device, equipment, electronic equipment and storage medium
CN104349199A (en) Information synchronization method and device
CN104113778A (en) Video stream decoding method and device
CN105812439A (en) Audio transmission method and device
CN110636409A (en) Audio sharing method and device, microphone and storage medium
CN113079386B (en) Video online playing method and device, electronic equipment and storage medium
CN110636338A (en) Video definition switching method and device, electronic equipment and storage medium
CN111541905B (en) Live broadcast method and device, computer equipment and storage medium
CN111757136A (en) Webpage audio live broadcast method, device, equipment and storage medium
CN205105345U (en) Audio frequency and video playback devices
CN103648027A (en) Digital media terminal and media file play method
CN114339415B (en) Client video playing method and device, electronic equipment and readable medium
CN103179449A (en) Media file playing method, electronic device and virtual machine framework
CN107342981B (en) Sensor data transmission method and device and virtual reality head-mounted equipment
CN113766266B (en) Audio and video processing method, device, equipment and storage medium
CN112073727B (en) Transcoding method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20201009)