CN112218210B - Display device, audio playing method and device - Google Patents

Display device, audio playing method and device

Info

Publication number
CN112218210B
CN112218210B (application number CN201910710346.3A)
Authority
CN
China
Prior art keywords
audio
audio data
data
channel
processing circuit
Prior art date
Legal status
Active
Application number
CN201910710346.3A
Other languages
Chinese (zh)
Other versions
CN112218210A (en)
Inventor
李见
黄飞
Current Assignee
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd
Priority to PCT/CN2020/070890 (WO2021004046A1)
Priority to PCT/CN2020/070887 (WO2021004045A1)
Priority to PCT/CN2020/070902 (WO2021004048A1)
Priority to PCT/CN2020/070929 (WO2021004049A1)
Priority to PCT/CN2020/070891 (WO2021004047A1)
Publication of CN112218210A
Application granted
Publication of CN112218210B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2420/00 Details of connection covered by H04R, not provided for in its groups
    • H04R2420/03 Connection circuits to selectively connect loudspeakers or headphones to amplifiers

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)

Abstract

The invention provides a display device, an audio playing method and an audio playing apparatus. The display device comprises an audio processing circuit, an auxiliary audio processing circuit and multiple channel playing circuits. The output end of the audio processing circuit is connected with the input end of the auxiliary audio processing circuit through at least two audio data transmission lines, the output end of the auxiliary audio processing circuit is connected with the multiple channel playing circuits, and the number of channels corresponding to the audio data that the at least two audio data transmission lines can transmit is less than the number of channels of the multi-channel audio data. The audio processing circuit rearranges the received multi-channel audio data to obtain at least two paths of audio transmission data and sends the at least two paths of audio transmission data to the auxiliary audio processing circuit through the at least two audio data transmission lines. The auxiliary audio processing circuit disassembles the at least two paths of audio transmission data to obtain the audio data of the channels in the multi-channel audio data and sends the audio data of each channel to the corresponding channel playing circuit for playing, thereby realizing multi-channel surround sound.

Description

Display device, audio playing method and device
Technical Field
The present invention relates to electronic device technologies, and in particular, to a display device, an audio playing method and an audio playing device.
Background
A sound channel refers to an audio signal that is collected or played back at a particular spatial position when sound is recorded or played; the number of channels is therefore the number of sound sources during recording or the number of corresponding speakers during playback. The larger the number of channels, the more realistic the reproduced sound.
Fig. 1 is a first schematic diagram of a conventional television playing audio data. As shown in fig. 1, most televisions have two built-in speakers, which may be arranged at the two ends of the television and output sound downward or forward. With the two built-in speakers, the television can achieve two-channel stereo. Fig. 2 is a second schematic diagram of a conventional television playing audio data. As shown in fig. 2, some televisions create a multi-channel surround sound effect by connecting external speaker devices distributed at different positions in space, which highly restores the sense of presence of the sound. However, when multi-channel surround sound is realized in this way, the built-in speakers of the television no longer play audio data, so a strongly correlated audio-visual experience cannot be formed for the user. In addition, this implementation requires multiple external speaker devices, so the equipment overhead is high.
Therefore, how to realize the sound effect of multi-channel surround sound is an urgent problem to be solved.
Disclosure of Invention
The invention provides a display device, an audio playing method and an audio playing device, which are used for solving the technical problem of how to realize the sound effect of multi-channel surround sound of a television.
A first aspect of the present invention provides a display device including: an audio processing circuit, an auxiliary audio processing circuit and multiple channel playing circuits; the output end of the audio processing circuit is connected with the input end of the auxiliary audio processing circuit through at least two audio data transmission lines, the output end of the auxiliary audio processing circuit is connected with the multiple channel playing circuits, and the number of channels corresponding to the audio data that can be transmitted by the at least two audio data transmission lines is less than the number of channels corresponding to the multi-channel audio data;
the audio processing circuit is used for rearranging the received multi-channel audio data to obtain at least two paths of audio transmission data, and sending the at least two paths of audio transmission data to the auxiliary audio processing circuit through the at least two audio data transmission lines;
the auxiliary audio processing circuit is used for disassembling the at least two paths of audio transmission data to obtain the audio data of the channels in the multi-channel audio data, and sending the audio data of each channel to the corresponding channel playing circuit for playing.
As a possible implementation manner, the audio processing circuit is specifically configured to decode the multi-channel audio data to obtain the audio data of each channel, and rearrange the audio data of each channel according to a preset audio data sampling bit width and a preset audio data arrangement manner to obtain the at least two paths of audio transmission data; the preset audio data sampling bit width is larger than the sampling bit width of the audio data of one channel, and the sampling frequency of the at least two paths of audio transmission data is the same as the sampling frequency of the audio data of one channel. Correspondingly, the auxiliary audio processing circuit is specifically configured to disassemble the at least two paths of audio transmission data according to the preset audio data sampling bit width and the preset audio data arrangement manner to obtain the audio data of each channel. Optionally, the at least two paths of audio transmission data share a clock, or each path of audio transmission data corresponds to one clock.
As a possible implementation manner, the audio processing circuit is specifically configured to decode the multi-channel audio data to obtain the audio data of each channel, and rearrange the audio data of each channel according to a preset audio data sampling frequency and a preset audio data arrangement manner to obtain the at least two paths of audio transmission data; the preset audio data sampling frequency is greater than the sampling frequency of the audio data of one channel, and the sampling bit width of the at least two paths of audio transmission data is the same as the sampling bit width of the audio data of one channel. Correspondingly, the auxiliary audio processing circuit is specifically configured to disassemble the at least two paths of audio transmission data according to the preset audio data sampling frequency and the preset audio data arrangement manner to obtain the audio data of each channel. Optionally, the at least two paths of audio transmission data share a clock, or each path of audio transmission data corresponds to one clock. In this implementation manner, the preset audio data sampling frequency is a single-edge acquisition frequency of the serial clock of the audio data, or the preset audio data sampling frequency is the sum of the double-edge acquisition frequencies of the serial clock of the audio data.
For example, the audio processing circuit is specifically configured to rearrange the audio data of each channel according to the preset audio data sampling frequency, a preset audio data sampling bit width, and the preset audio data arrangement manner to obtain the at least two paths of audio transmission data; the preset audio data sampling bit width is greater than the sampling bit width of the audio data of one channel. Correspondingly, the auxiliary audio processing circuit is specifically configured to disassemble the at least two paths of audio transmission data according to the preset audio data sampling frequency, the preset audio data sampling bit width, and the preset audio data arrangement manner to obtain the audio data of each channel. Optionally, the at least two paths of audio transmission data share a clock, or each path of audio transmission data corresponds to one clock. In this implementation manner, the preset audio data sampling frequency is a single-edge acquisition frequency of the serial clock of the audio data, or the preset audio data sampling frequency is the sum of the double-edge acquisition frequencies of the serial clock of the audio data.
A second aspect of the present invention provides an audio playing apparatus, which may include: an audio processing circuit, an auxiliary audio processing circuit and multiple channel playing circuits; the output end of the audio processing circuit is connected with the input end of the auxiliary audio processing circuit through at least two audio data transmission lines, the output end of the auxiliary audio processing circuit is connected with the multiple channel playing circuits, and the number of channels corresponding to the audio data that can be transmitted by the at least two audio data transmission lines is less than the number of channels corresponding to the multi-channel audio data;
the audio processing circuit is used for rearranging the received multi-channel audio data to obtain at least two paths of audio transmission data, and sending the at least two paths of audio transmission data to the auxiliary audio processing circuit through the at least two audio data transmission lines;
the audio processing circuit is used for disassembling at least two paths of audio transmission data to obtain audio data of multiple sound channels in the multi-channel audio data, and sending the audio data of each sound channel to the corresponding sound channel playing circuit for playing.
A third aspect of the present invention provides an audio playing method, including: receiving multi-channel audio data; rearranging the multi-channel audio data to obtain at least two paths of audio transmission data; and sending the at least two paths of audio transmission data to an auxiliary audio processing circuit through at least two audio data transmission lines, so that the auxiliary audio processing circuit disassembles the at least two paths of audio transmission data to obtain the audio data of the channels in the multi-channel audio data and sends the audio data of each channel to the corresponding channel playing circuit for playing.
When the number of channels corresponding to the audio data which can be transmitted by at least two audio data transmission lines between the audio processing circuit and the auxiliary audio processing circuit is less than the number of channels corresponding to the multi-channel audio data, the audio processing circuit can rearrange the received multi-channel audio data to obtain the audio transmission data which can be transmitted by the at least two audio data transmission lines. Correspondingly, after receiving at least two paths of audio transmission data transmitted by the audio processing circuit through at least two paths of audio data transmission lines, the auxiliary audio processing circuit can disassemble the at least two paths of audio transmission data to obtain audio data of multiple sound channels in the multi-channel audio data, and sends the audio data of each sound channel to the corresponding sound channel playing circuit for playing, so that the sound effect of multi-channel surround sound is realized. By the mode, when the number of the channels corresponding to the audio data which can be transmitted by the audio processing circuit is less than that of the channels corresponding to the multi-channel audio data, the sound effect of multi-channel surround sound can be realized.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the following briefly introduces the drawings needed to be used in the description of the embodiments or the prior art, and obviously, the drawings in the following description are some embodiments of the present invention, and those skilled in the art can obtain other drawings according to the drawings without inventive labor.
FIG. 1 is a first schematic diagram of a conventional television for playing audio data;
FIG. 2 is a diagram illustrating a second example of playing audio data by a conventional TV;
FIG. 3 is a schematic diagram of an I2S formatted signal;
FIG. 4 is a schematic diagram of an audio playback circuit of a conventional television;
FIG. 5 is a schematic structural diagram of a display device according to the present invention;
fig. 6 is a schematic structural diagram of a television according to the present invention;
FIG. 7 is a diagram illustrating audio data played by a television according to the present invention;
FIG. 8 is a diagram illustrating conventional audio data processing;
FIG. 9 is a schematic diagram of an audio data processing method according to the present invention;
FIG. 10 is a schematic diagram of another audio data processing provided by the present invention;
FIG. 11 is a schematic diagram of another audio data processing method according to the present invention;
FIG. 12 is a schematic diagram of still another audio data processing provided by the present invention;
FIG. 13 is a schematic diagram of another audio data processing method according to the present invention;
fig. 14 is a flowchart illustrating an audio data processing method according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
In order to facilitate understanding of the embodiments of the present invention, the I2S bus and the signals transmitted on the I2S bus are first described below:
the integrated circuit embeds audio frequency (Inter-IC Sound, abbreviated as I2S) bus: the bus is used for data transmission between audio devices. The I2S bus adopts the design of clock and data signals transmitted along independent wires, and distortion caused by time difference is avoided by separating the data and clock signals.
Fig. 3 is a schematic diagram of an I2S formatted signal. As shown in fig. 3, one I2S bus is composed of 3 serial conductors, 1 is a clock line, 1 is a word select line, and 1 is a Time Division Multiplexing (TDM) data line.
The TDM data line is used for transmitting the serial data SDATA, i.e., audio data represented in two's complement.
The clock line is used for transmitting the serial clock SCLK, which may also be referred to as the bit clock (BCLK). SCLK has one pulse for each bit of audio data transmitted on the TDM data line, which makes it convenient for the receiver to extract the audio data. The frequency of SCLK equals 2 times the product of the sampling frequency and the number of sampling bits (which may also be referred to as the sampling bit width), the factor of 2 corresponding to the two channels. Optionally, the clock line also transmits a master clock MCLK for better synchronization between systems. MCLK, which may also be referred to as the system clock (Sys Clock), is 256 times or 384 times the sampling frequency.
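As a worked example of these clock relations (illustrative only, assuming the 16-bit, 48Khz streams used in the examples later in this description), the following C sketch computes SCLK and the two common MCLK choices:

```c
#include <stdio.h>

int main(void) {
    const unsigned fs = 48000;   /* sampling frequency (Hz) */
    const unsigned bits = 16;    /* sampling bit width per channel */

    /* SCLK (BCLK) = 2 channels x sampling frequency x sampling bit width */
    unsigned long sclk = 2UL * fs * bits;     /* 1,536,000 Hz */
    /* MCLK is commonly 256 or 384 times the sampling frequency */
    unsigned long mclk256 = 256UL * fs;       /* 12,288,000 Hz */
    unsigned long mclk384 = 384UL * fs;       /* 18,432,000 Hz */

    printf("SCLK = %lu Hz, MCLK = %lu Hz or %lu Hz\n", sclk, mclk256, mclk384);
    return 0;
}
```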
The word select line is used for transmitting the frame clock WS (also called LRCK). WS indicates whether the audio data currently being transmitted on the TDM data line belongs to the left channel or the right channel: when WS is "1", the data on the TDM data line is right-channel audio data; when WS is "0", it is left-channel audio data. The frequency of WS equals the sampling frequency. Through WS, one I2S bus can transmit the audio data of two channels. For convenience of description, the data transmitted by one I2S bus is referred to below as one path of I2S audio data.
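For illustration, the following minimal C sketch (hypothetical frame type and buffers, not tied to any particular I2S controller or driver) shows how a receiver could route the frames of a standard two-channel I2S stream to the left or right channel according to WS:

```c
#include <stdint.h>
#include <stddef.h>

/* One frame as it arrives on the TDM data line: WS selects the channel. */
typedef struct {
    int ws;          /* 0 = left channel, 1 = right channel */
    int16_t sample;  /* 16-bit two's-complement audio sample */
} i2s_frame_t;

/* Route frames to per-channel buffers according to WS. */
static void demux_stereo(const i2s_frame_t *frames, size_t n,
                         int16_t *left, int16_t *right)
{
    size_t li = 0, ri = 0;
    for (size_t i = 0; i < n; i++) {
        if (frames[i].ws == 0)
            left[li++] = frames[i].sample;
        else
            right[ri++] = frames[i].sample;
    }
}
```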
Fig. 4 is a schematic diagram of an audio playing circuit of a conventional television. As shown in fig. 4, when the conventional television realizes two-channel stereo, a main chip of the television is connected to a power amplifier circuit through an I2S bus, and the power amplifier circuit is connected to a speaker 1 for playing left channel audio data and a speaker 2 for playing right channel audio data, respectively.
After acquiring the sound source, the main chip of the television can decode it to obtain the audio data of the left channel and the audio data of the right channel. Then, the main chip transmits the audio data of the left channel and the audio data of the right channel to the power amplifier circuit through the I2S bus in the transmission manner of the I2S signal shown in fig. 3. Based on WS and SCLK transmitted on the I2S bus, the power amplifier circuit extracts the audio data of the left channel from the TDM data line and sends it to loudspeaker 1 for playing, and extracts the audio data of the right channel and sends it to loudspeaker 2 for playing, thereby realizing the sound effect of two-channel stereo. It should be understood that, in addition to playing audio data, the main chip of the television may also have television functions such as displaying images.
In view of considerations such as multiplexing of chip pins, audio playing requirements, and cost, the main chip of a currently designed television supports at most 3 I2S buses. For example, the main chip of some televisions supports 1 I2S bus, that of others supports 2 I2S buses, and that of still others supports 3 I2S buses. Because one I2S bus can transmit the audio data of 2 channels, the main chip of the television can transmit the audio data of at most 6 channels.
Multi-channel surround sound may also be referred to as Dolby Atmos, 5.1.2 panoramic sound, and the like. A sound source capable of multi-channel surround sound carries compressed audio data of at least 8 channels. Taking 8 channels as an example, the 8 channels may be: a front left channel, a front right channel, a center channel, a built-in surround left channel, a built-in surround right channel, a built-in top left channel, a built-in top right channel, and a subwoofer channel. For convenience of the following description, a sound source capable of realizing multi-channel surround sound is simply referred to as multi-channel audio data.
At present, although a television can receive multi-channel audio data, the main chip cannot output the audio data of each channel independently because it can transmit the audio data of at most 6 channels. Most televisions therefore embed 2 speakers, mix the multi-channel audio data down to two-channel audio data, and transmit the two-channel audio data to the 2 speakers for playing, realizing only a two-channel stereo effect. That is, even if the television can receive multi-channel audio data, the Dolby Atmos effect of the multi-channel audio data cannot be achieved.
Therefore, when the number of channels corresponding to audio data that can be transmitted by the main chip of the television is less than the number of channels corresponding to multi-channel audio data, how the television realizes the sound effect of multi-channel surround sound is a problem to be solved urgently.
In view of the above, the present invention provides a display device capable of solving the above problems. The technical solution of the present invention will be described in detail with reference to specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 5 is a schematic structural diagram of a display device according to the present invention. As shown in fig. 5, the display device may include: an audio processing circuit, an auxiliary audio processing circuit, and a plurality of channel playing circuits. The output end of the audio processing circuit is connected with the input end of the auxiliary audio processing circuit through at least two audio data transmission lines, and the output end of the auxiliary audio processing circuit is connected with the plurality of channel playing circuits.
In this embodiment, the number of channels corresponding to the audio data that can be transmitted by the at least two audio data transmission lines is less than the number of channels corresponding to the multi-channel audio data. That is, the audio processing circuit is capable of transmitting audio data corresponding to fewer channels than the number of channels corresponding to multi-channel audio data. For example, the audio processing circuit can transmit 4-channel audio data, and the multi-channel audio data is 6-channel audio data, or the audio processing circuit can transmit 6-channel audio data, and the multi-channel audio data is 8-channel audio data, and the like.
Therefore, in a scenario where the number of channels of the audio data that can be transmitted by the audio processing circuit is less than the number of channels of the multi-channel audio data, the audio processing circuit may rearrange the received multi-channel audio data to obtain at least two paths of audio transmission data, and send the at least two paths of audio transmission data to the auxiliary audio processing circuit through the at least two audio data transmission lines.
That is, the audio processing circuit may rearrange and combine the audio data of the multiple channels into as many paths of audio data as it can transmit, so that it can transmit the audio data of the multiple channels to the auxiliary audio processing circuit through the at least two audio data transmission lines. In other words, in a scenario where the number of channels of the audio data that the audio processing circuit can transmit is less than the number of channels of the multi-channel audio data, the audio processing circuit may rearrange and combine the multi-channel audio data into audio data occupying fewer paths, so as to match its own transmission capability. For example, if the audio processing circuit can transmit the audio data of 6 channels and the multi-channel audio data is 8-channel audio data, the audio processing circuit may arrange and combine the audio data of the 8 channels into 6 paths of audio data, so that it can send the audio data of the 8 channels to the auxiliary audio processing circuit.
Correspondingly, after receiving the at least two paths of audio transmission data transmitted by the audio processing circuit through the at least two audio data transmission lines, the auxiliary audio processing circuit can disassemble the at least two paths of audio transmission data to obtain the audio data of the channels in the multi-channel audio data, and send the audio data of each channel to the corresponding channel playing circuit for playing, thereby realizing the sound effect of multi-channel surround sound. In this way, the sound effect of multi-channel surround sound can be realized even when the number of channels corresponding to the audio data that the audio processing circuit can transmit is less than the number of channels corresponding to the multi-channel audio data.
It should be understood that the audio processing circuit involved in the display device described above may be any circuit having audio data processing capability. Optionally, the audio processing circuit may also have image processing and display capabilities, etc., in addition to the audio data processing and audio data transmission capabilities described above. Taking a television as an example of the display device, the audio processing circuit may be the main chip of the television.
The audio data transmission line related to the display device may be any bus used for data transmission between audio devices, such as an I2S bus.
The auxiliary audio processing circuit related to the display device may be any circuit having audio processing capability, which is not limited here.
The number of the channel playing circuits related to the display device can be determined according to the number of channels of the multi-channel audio data required to be played. Each channel playback circuit may include, for example: power amplifier circuit and at least one speaker. Each speaker may play one channel of audio data. The power amplifier circuit is used for transmitting the received audio data to the corresponding loudspeaker for playing.
The following describes the display device provided by the present invention by taking a television as an example:
Fig. 6 is a schematic structural diagram of a television according to the present invention, and fig. 7 is a schematic diagram of audio data played by the television according to the present invention. As shown in fig. 6 and fig. 7, taking the multi-channel audio data being 8-channel audio data as an example, it is assumed that the audio processing circuit is the main chip of the television, the audio data transmission lines are I2S buses, the channel playing circuits are as shown in fig. 6, and the position of the speaker of each channel playing circuit on the television can be as shown in fig. 7. The loudspeaker for the subwoofer channel is not shown in fig. 7 because of the viewing angle. It should be understood that the installation positions of the speakers shown in fig. 7 are merely illustrative; any installation positions that can realize multi-channel surround sound may be used.
In this example, the main chip of the television is connected to the auxiliary audio processing circuit via a 3-way I2S bus. As mentioned previously, 1 I2S bus can transmit the audio data of 2 channels. That is, the main chip of the television can transmit the audio data of at most 6 channels, which is less than 8 channels. Fig. 8 is a schematic diagram of conventional audio data processing. As shown in fig. 8, assume that the audio data of each channel of the 8-channel audio data is 16bit@48Khz audio data, where 16bit is the sampling bit width of the audio data and 48Khz is the sampling frequency of the audio data. According to the conventional processing method, the 3-way I2S bus can only transmit the audio data of 6 channels, for example, the audio data of channels 0 to 5 shown in fig. 8, and the audio data of the remaining 2 channels cannot be transmitted to the auxiliary audio processing circuit. That is, even if the television can receive multi-channel audio data, the Dolby Atmos effect of the multi-channel audio data cannot be achieved.
In this embodiment, after the main chip of the television acquires the audio data of the 8 channels, it may rearrange the audio data of the 8 channels and combine the audio data of some channels so as to obtain 6 paths of audio data, that is, audio data that can be transmitted by the 3-way I2S bus. Then, the main chip of the television can send these audio data to the auxiliary audio processing circuit through the 3-way I2S bus, with each I2S bus transmitting the audio data of 2 paths. Correspondingly, after receiving the audio data transmitted through the 3-way I2S bus, the auxiliary audio processing circuit can disassemble the audio transmission data, restore the audio data of the 8 channels, and send the audio data of each channel to the corresponding channel playing circuit for playing, thereby realizing the sound effect of multi-channel surround sound.
For example, when an upper-layer application opens content with a multi-channel surround sound effect, the main chip of the television may notify the middleware and the driver layer. The middleware can enable the code related to multi-channel surround sound, and the driver layer can enable the auxiliary audio processing circuit connected at the bottom layer.
After receiving the multi-channel audio data, the main chip of the television can decode the multi-channel audio data to obtain the audio data of each channel. At this time, the audio data of each channel is located at the bottom layer. The driver layer may then ask the middleware whether the data needs to be sent to the auxiliary audio processing circuit. Accordingly, the middleware may ask the upper-layer application whether to send out the received audio data. When the upper-layer application notifies the middleware to send out the received audio data, the middleware may send the processing method of the multi-channel data (including the audio processing method (e.g., Dolby Atmos), the dynamic compression range, and the transmission method of data rearrangement according to this embodiment) to the driver layer. After processing the audio data at the bottom layer, the driver layer sends the audio data to the auxiliary audio processing circuit through the I2S bus. It should be understood that the actions of the upper layer, the middleware, and the driver layer in this example may all be implemented by the main chip of the television.
Comparing fig. 4 and fig. 6, it can be seen that, by adding the auxiliary audio processing circuit between the main chip and the power amplifier circuits of the television, when the number of channels of the audio data that the main chip can transmit is less than the number of channels of the multi-channel audio data, the multi-channel audio data can be rearranged and combined into audio data occupying fewer paths and transmitted onward to the power amplifier circuits, so that the playing of the multi-channel audio data is realized and the sound effect of multi-channel surround sound is achieved. In this way, the sound effect is no longer limited by the audio data transmission capability of the main chip of the television.
The following describes how the audio processing circuit rearranges the multi-channel audio data in detail, which may specifically include the following ways:
the first mode is as follows: and rearranging the audio data of each channel by improving the sampling bit width of the audio data. In this manner, the sampling frequency of the audio data is not changed.
In this manner, the audio processing circuit may decode the multi-channel audio data to obtain audio data of each channel. Then, the audio processing circuit may rearrange the audio data of each channel according to a preset audio data sampling bit width and a preset audio data arrangement manner, so as to obtain at least two channels of the audio transmission data. The preset audio data sampling bit width is greater than that of one sound channel audio data, and the sampling frequency of at least two paths of audio transmission data is the same as that of one sound channel audio data. Optionally, before the audio processing circuit rearranges the audio data of each channel, the audio processing circuit may further perform sound effect processing on the audio data of each channel, which may be specifically referred to in the prior art and is not described herein again.
The preset audio data sampling bit width can be determined according to an audio data transmission line between the audio processing circuit and the cooperative audio processing circuit and the number of channels of multi-channel audio data.
Taking the case that the multi-channel audio data is 9-channel audio data and the audio processing circuit and the auxiliary audio processing circuit are connected through a 3-way I2S bus as an example, assume that the audio data of each channel in the multi-channel audio data is 16bit@48Khz audio data, where 16bit is the sampling bit width of the audio data and 48Khz is the sampling frequency of the audio data, and that the preset audio data sampling bit width is 24 bits.
Fig. 9 is a schematic diagram of audio data processing according to the present invention. As shown in fig. 9, in this example, the audio processing circuit may first decode the 9-channel audio data to obtain the audio data of each channel, i.e., the audio data shown in the left box in fig. 9. At this time, the audio data of each channel obtained by decoding has a sampling bit width of 16 bits and a sampling frequency of 48Khz. Then, the audio processing circuit may rearrange the audio data of each channel according to the 24-bit sampling bit width and the preset audio data arrangement manner to obtain the 3 paths of audio transmission data shown in the right box in fig. 9, where each path of audio transmission data has a sampling bit width of 24 bits and a sampling frequency of 48Khz.
Since the sampling bit width is adjusted to 24 bits, each I2S bus can transmit 24-bit left-channel audio data and 24-bit right-channel audio data. Taking the I2S0 bus as an example, the left-channel data transmitted on the I2S0 bus includes not only the 16-bit audio data of channel 0 but also the lower 8 bits of the audio data of channel 1; correspondingly, the right-channel data transmitted on the I2S0 bus includes not only the 16-bit audio data of channel 2 but also the upper 8 bits of the audio data of channel 1. That is, when one I2S bus is used to transmit the audio transmission data, 24 bits of left-channel data can be collected when WS (LRCK) equals 0, and 24 bits of right-channel data can be collected when WS (LRCK) equals 1. In other words, by increasing the sampling bit width of the audio transmission data, the number of bits of the left-channel data and the right-channel data is increased from 16 to 24, so that one I2S bus can carry the audio data of 3 channels instead of 2, and the 3 I2S buses can be used to transmit the audio data of at most 9 channels.
When audio data of fewer than 9 channels, for example the audio data of 8 channels, is transmitted in this manner, the vacant positions in the right box in fig. 9 need to be filled with redundant data (for example, 0).
It should be understood that the audio data arrangement and the preset audio data sample bit width shown in fig. 9 are only an illustration, and in a specific implementation, other data arrangement and sample bit width may also be used.
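For illustration only, the following C sketch shows one possible realization of the Fig. 9 arrangement for a single sampling instant. Placing the full 16-bit sample in the upper 16 bits of each 24-bit slot and the split channel's bytes in the lower 8 bits is an assumption; the description above only fixes which channels share a slot.

```c
#include <stdint.h>

/*
 * Fig. 9 layout sketch: pack 9 channels of 16-bit samples (one sampling
 * instant) into 3 I2S lanes with 24-bit slots. Lane k carries channel 3k plus
 * the low byte of channel 3k+1 in its left (WS = 0) slot, and channel 3k+2
 * plus the high byte of channel 3k+1 in its right (WS = 1) slot. Each 24-bit
 * slot is held in the low 24 bits of a uint32_t.
 */
static void pack_9ch_to_3lanes_24bit(const int16_t ch[9],
                                     uint32_t left24[3], uint32_t right24[3])
{
    for (int k = 0; k < 3; k++) {
        uint16_t a = (uint16_t)ch[3 * k];       /* full 16-bit sample          */
        uint16_t b = (uint16_t)ch[3 * k + 1];   /* sample split over two slots */
        uint16_t c = (uint16_t)ch[3 * k + 2];   /* full 16-bit sample          */

        left24[k]  = ((uint32_t)a << 8) | (b & 0xFFu);         /* a | low byte of b  */
        right24[k] = ((uint32_t)c << 8) | ((b >> 8) & 0xFFu);  /* c | high byte of b */
    }
}
```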
Taking the case that the multi-channel audio data is 8-channel audio data and the audio processing circuit and the auxiliary audio processing circuit are connected through a 2-way I2S bus as an example, assume that the audio data of each channel in the multi-channel audio data is 16bit@48Khz audio data and that the preset audio data sampling bit width is 32 bits.
Fig. 10 is a schematic diagram of another audio data processing method according to the present invention. As shown in fig. 10, in this example, the audio processing circuit may first decode the 8-channel audio data to obtain the audio data of each channel, i.e., the audio data shown in the left box in fig. 10. At this time, the audio data of each channel obtained by decoding has a sampling bit width of 16 bits and a sampling frequency of 48Khz. Then, the audio processing circuit may rearrange the audio data of each channel according to the 32-bit sampling bit width and the preset audio data arrangement manner to obtain the 2 paths of audio transmission data shown in the right box in fig. 10, where each path of audio transmission data has a sampling bit width of 32 bits and a sampling frequency of 48Khz.
Since the sampling bit width is adjusted to 32 bits, each I2S bus can transmit 32-bit left-channel audio data and 32-bit right-channel audio data. Taking the I2S0 bus as an example, the left-channel data transmitted on the I2S0 bus includes not only the 16-bit audio data of channel 0 but also the 16-bit audio data of channel 1; correspondingly, the right-channel data transmitted on the I2S0 bus includes not only the 16-bit audio data of channel 2 but also the 16-bit audio data of channel 3. That is, when one I2S bus is used to transmit the audio transmission data, 32 bits of left-channel data can be collected when WS (LRCK) equals 0, and 32 bits of right-channel data can be collected when WS (LRCK) equals 1. In other words, by increasing the sampling bit width of the audio transmission data, the number of bits of the left-channel data and the right-channel data is increased from 16 to 32, so that one I2S bus can carry the audio data of 4 channels instead of 2, and the 2 I2S buses can be used to transmit the audio data of at most 8 channels.
When audio data of fewer than 8 channels, for example the audio data of 6 channels, is transmitted in this manner, the vacant positions in the right box in fig. 10 need to be filled with redundant data (for example, 0).
It should be understood that the audio data arrangement and the preset audio data sample bit width shown in fig. 10 are only an illustration, and in a specific implementation, other data arrangement and sample bit width may also be used.
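A corresponding sketch for the Fig. 10 layout is shown below, again assuming that the first-listed channel of each pair occupies the upper 16 bits of the 32-bit slot (the exact bit ordering is not fixed by the description):

```c
#include <stdint.h>

/*
 * Fig. 10 layout sketch: 8 channels of 16-bit samples packed into 2 I2S lanes
 * with 32-bit slots. slot[k][0] is the left (WS = 0) slot of lane k and
 * slot[k][1] is its right (WS = 1) slot.
 */
static void pack_8ch_to_2lanes_32bit(const int16_t ch[8], uint32_t slot[2][2])
{
    for (int k = 0; k < 2; k++) {
        slot[k][0] = ((uint32_t)(uint16_t)ch[4 * k]     << 16) | (uint16_t)ch[4 * k + 1];
        slot[k][1] = ((uint32_t)(uint16_t)ch[4 * k + 2] << 16) | (uint16_t)ch[4 * k + 3];
    }
}
```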
As can be seen from the examples in fig. 9 and fig. 10, for a television, as long as the number of channels corresponding to the audio data that the main chip can transmit over the I2S buses is less than the number of channels of the multi-channel audio data, the data of multiple channels can be merged into one path of audio transmission data by changing the sampling bit width of the audio data, so that the I2S buses can carry the audio data of more channels. In this way, the playing of multi-channel audio data, and hence the sound effect of multi-channel surround sound, is realized even though the transmission capability of the main chip is limited.
After the audio processing circuit rearranges the audio data of each channel by increasing the sampling bit width to obtain the at least two paths of audio transmission data and sends them to the auxiliary audio processing circuit, the auxiliary audio processing circuit can disassemble the at least two paths of audio transmission data in the following manner to obtain the audio data of each channel in the multi-channel audio data. Specifically:
the coaural audio processing circuit can disassemble at least two paths of audio transmission data according to a preset audio data sampling bit width and a preset audio data arrangement mode to obtain audio data of each sound channel. That is, how the audio processing circuit arranges the multi-channel audio data, how the audio processing circuit disassembles the multi-channel audio data, and then the multi-channel audio data is restored. Specifically, the following two cases may be included:
in case 1, the at least two audio transmission data share a clock, that is, the same clock is used when the audio processing circuit sends the at least two audio transmission data to the audio processing circuit through the at least two audio data transmission lines. In this scenario, the audio processing circuit needs to synchronously send the at least two paths of audio transmission data to the audio processing circuit. Taking the audio data transmission line as an I2S bus as an example, assuming that the audio processing circuit sends 3 audio transmission data to the audio processing circuit through the 3I 2S buses, the 3 audio transmission data may share one clock. The clock may include at least one of a system clock, a serial clock, and a frame clock.
In this scenario, the auxiliary audio processing circuit may disassemble the received at least two paths of audio transmission data while receiving them, so as to obtain the audio data of each channel.
Taking the example shown in fig. 9, the auxiliary audio processing circuit may first intercept the front 16 bits of the left-channel and right-channel data on each I2S bus and restore the audio data of channel 0, channel 2, channel 3, channel 5, channel 6, and channel 8. Then, the auxiliary audio processing circuit may combine the rear 8 bits received on the left and right channels of the I2S0 bus to obtain the audio data of channel 1, combine the rear 8 bits received on the left and right channels of the I2S1 bus to obtain the audio data of channel 4, and combine the rear 8 bits received on the left and right channels of the I2S2 bus to obtain the audio data of channel 7. In this example, the resulting audio data of each channel is 16bit@48Khz audio data. The auxiliary audio processing circuit can then send the audio data of each channel to the corresponding channel playing circuit for playing, thereby realizing the sound effect of multi-channel surround sound. Taking the television shown in fig. 6 as an example, the auxiliary audio processing circuit can generate a 48Khz clock signal, transmit the audio data of each channel to the corresponding power amplifier circuit based on the clock signal, and the power amplifier circuit sends the audio data to the corresponding loudspeaker for playing.
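As an illustration of this disassembly, the following C sketch is the inverse of the Fig. 9 packing sketch shown earlier, using the same assumed bit ordering (full sample in the upper 16 bits of each 24-bit slot, split byte in the lower 8 bits):

```c
#include <stdint.h>

/*
 * Recover 9 channels from 3 lanes, each delivering one 24-bit left slot
 * (left24[k]) and one 24-bit right slot (right24[k]) per sampling instant.
 */
static void unpack_3lanes_24bit_to_9ch(const uint32_t left24[3],
                                       const uint32_t right24[3],
                                       int16_t ch[9])
{
    for (int k = 0; k < 3; k++) {
        /* The front 16 bits of each slot are complete samples. */
        ch[3 * k]     = (int16_t)(left24[k]  >> 8);
        ch[3 * k + 2] = (int16_t)(right24[k] >> 8);
        /* The rear 8 bits of the two slots are re-joined into the split channel. */
        ch[3 * k + 1] = (int16_t)(((right24[k] & 0xFFu) << 8) | (left24[k] & 0xFFu));
    }
}
```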
It should be noted that, in the scenario shown in fig. 9, that is, when the audio data of some channels needs to be split before being arranged, the audio data of a secondary channel may be the one that is split, for example the audio data of the center channel or of the subwoofer channel. In this way, when the at least two paths of audio transmission data share a clock, the audio data of the main channels collected by the auxiliary audio processing circuit will not be lost or corrupted by clock interference or an abnormal clock. The main channels referred to here may be, for example: the front left channel, the front right channel, the built-in top left channel, the built-in top right channel, the built-in surround left channel, the built-in surround right channel, and so on.
Case 2: each path of audio transmission data corresponds to one clock. That is, when the audio processing circuit sends the at least two paths of audio transmission data to the auxiliary audio processing circuit through the at least two audio data transmission lines, the clocks used by the audio data transmission lines are different. In this scenario, the audio processing circuit may send the at least two paths of audio transmission data synchronously or asynchronously. Taking the audio data transmission line being an I2S bus as an example, assuming that the audio processing circuit sends 3 paths of audio transmission data to the auxiliary audio processing circuit through 3 I2S buses, the clocks corresponding to the paths of audio transmission data are different, and each clock may be at least one of the following: a system clock, a serial clock, and a frame clock.
In this scenario, the auxiliary audio processing circuit may disassemble the audio transmission data after all of it has been received, so as to obtain the audio data of each channel.
Taking the example shown in fig. 9, the auxiliary audio processing circuit may buffer the audio transmission data transmitted by the 3 I2S buses and then, based on its own clock and the first valid MCLK signal of the audio transmission data on each I2S bus, collect the front 16 bits of the left-channel and right-channel data on each I2S bus and restore the audio data of channel 0, channel 2, channel 3, channel 5, channel 6, and channel 8. Then, based on the first valid MCLK signal of the audio transmission data transmitted on the I2S0 bus, the auxiliary audio processing circuit can combine the rear 8 bits received on the left and right channels of the I2S0 bus to obtain the audio data of channel 1; based on the first valid MCLK signal of the audio transmission data transmitted on the I2S1 bus, combine the rear 8 bits received on the left and right channels of the I2S1 bus to obtain the audio data of channel 4; and based on the first valid MCLK signal of the audio transmission data transmitted on the I2S2 bus, combine the rear 8 bits received on the left and right channels of the I2S2 bus to obtain the audio data of channel 7. In this example, the resulting audio data of each channel is 16bit@48Khz audio data. The auxiliary audio processing circuit can then send the audio data of each channel to the corresponding channel playing circuit for playing, thereby realizing the sound effect of multi-channel surround sound.
In the manner in which each path of audio transmission data corresponds to one clock, even if the clock transmitted on one audio data transmission line is abnormal, the audio transmission data transmitted on the other audio data transmission lines is not affected, so the probability of data errors can be reduced.
The second mode is as follows: the audio data of each channel is rearranged by increasing the sampling frequency of the audio data. In this mode, the sampling bit width of the audio data is unchanged.
In this manner, the audio processing circuit may decode the multi-channel audio data to obtain audio data of each channel. Then, the audio processing circuit may rearrange the audio data of each channel according to a preset audio data sampling frequency and a preset audio data arrangement mode to obtain at least two channels of the audio transmission data. The preset audio data sampling frequency is greater than the sampling frequency of one channel audio data, and the sampling bit width of the at least two paths of audio transmission data is the same as the sampling bit width of one channel audio data.
Taking the audio data transmission line being an I2S bus as an example, the preset audio data sampling frequency may be the single-edge acquisition frequency of the serial clock of the audio data, that is, the audio processing circuit still samples data on the rising edge of BCLK, but the single-edge acquisition frequency of BCLK is increased to reach the preset audio data sampling frequency. Alternatively, the preset audio data sampling frequency is the sum of the double-edge acquisition frequencies of the serial clock of the audio data; for example, the audio processing circuit keeps the frequency of BCLK unchanged but collects data on both the rising edge and the falling edge of BCLK. Or the audio processing circuit both increases the frequency of BCLK and uses double-edge acquisition to reach the preset audio data sampling frequency.
Taking the case that the multi-channel audio data is 12-channel audio data and the audio processing circuit and the auxiliary audio processing circuit are connected through a 3-way I2S bus as an example, assume that the audio data of each channel in the multi-channel audio data is 16bit@48Khz audio data and that the preset audio data sampling frequency is 96Khz.
Fig. 11 is a schematic diagram of another audio data processing method according to the present invention. As shown in fig. 11, in this example, the audio processing circuit may first decode the 12-channel audio data to obtain the audio data of each channel, i.e., the audio data shown in the left box in fig. 11. At this time, the audio data of each channel obtained by decoding has a sampling bit width of 16 bits and a sampling frequency of 48Khz.
Then, the audio processing circuit may rearrange the audio data of each channel according to the 96Khz sampling frequency and the preset audio data arrangement manner to obtain the 3 paths of audio transmission data shown in the right box in fig. 11, where each path of audio transmission data has a sampling bit width of 16 bits and a sampling frequency of 96Khz.
As mentioned previously, the frequency of SCLK equals 2 times the product of the sampling frequency and the number of sampling bits (the sampling bit width). In this example, the frequency of SCLK is 2 times the product of 96Khz and 16, which is 3.072Mhz, and the amount of data collected per WS half-period rises from 16 bits to 32 bits. The sampling frequency may be raised in either of two ways: one is to keep sampling on the rising edge of BCLK and increase the frequency of BCLK; the other is to keep the frequency of BCLK unchanged but use double-edge acquisition. Both ways can achieve a sampling frequency of 96Khz.
Fig. 11 takes double-edge acquisition as an example, that is, on one I2S bus both the rising edge and the falling edge of each BCLK pulse correspond to one bit of audio data. Therefore, the number of bits of audio data transmitted by one I2S bus is doubled at the same frame rate, so that on one I2S bus the left channel can transmit 32 bits of data and the right channel can transmit 32 bits of data. That is, on one I2S bus, 32 bits of left-channel data can be collected when WS (LRCK) equals 0, and 32 bits of right-channel data can be collected when WS (LRCK) equals 1. In other words, by increasing the sampling frequency of the audio transmission data, the number of bits of left-channel and right-channel data that can be transmitted on one I2S bus is increased from 16 to 32, so that one I2S bus can carry the audio data of 4 channels instead of 2, and the 3 I2S buses can be used to transmit the audio data of at most 12 channels.
In this manner, when audio data of fewer than 12 channels is transmitted, for example the audio data of 8 channels, the vacant positions in the right box in fig. 11 need to be filled with redundant data (for example, 0).
It should be understood that the audio data arrangement and the predetermined audio data sampling frequency shown in fig. 11 are only an example, and in a specific implementation, other data arrangement and sampling frequencies may be used.
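The following minimal C sketch illustrates the Fig. 11 style of rearrangement for one sampling instant. The exact word order within a frame (channels 4k to 4k+3 in sequence, two 16-bit words per WS half-period) is an assumption; the description only fixes that each lane carries four channels at the doubled effective rate.

```c
#include <stdint.h>

/*
 * Fig. 11 sketch: each lane keeps 16-bit slots, but the effective transfer
 * rate is doubled (higher BCLK or double-edge capture), so one 48 kHz
 * sampling instant yields four 16-bit words per lane.
 */
static void pack_12ch_to_3lanes_96k(const int16_t ch[12], int16_t lane_words[3][4])
{
    for (int k = 0; k < 3; k++)
        for (int j = 0; j < 4; j++)
            lane_words[k][j] = ch[4 * k + j];  /* words 0-1: WS = 0, words 2-3: WS = 1 */
}
```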
Taking the case that the multi-channel audio data is 8-channel audio data and the audio processing circuit and the auxiliary audio processing circuit are connected through a 2-way I2S bus as an example, assume that the audio data of each channel in the multi-channel audio data is 16bit@48Khz audio data and that the preset audio data sampling frequency is 96Khz.
Fig. 12 is a schematic diagram of another audio data processing method according to the present invention. As shown in fig. 12, in this example, the audio processing circuit may rearrange the audio data of each channel according to the 96Khz sampling frequency and the preset audio data arrangement manner to obtain the 2 paths of audio transmission data shown in the right box in fig. 12, where each path of audio transmission data has a sampling bit width of 16 bits and a sampling frequency of 96Khz. For the specific implementation, reference may be made to the description of the example shown in fig. 11, which is similar.
That is, when the audio processing circuit and the auxiliary audio processing circuit are connected via a 2-way I2S bus, the audio data of at most 8 channels can be transmitted over the 2-way I2S bus by increasing the sampling frequency of the audio transmission data. In this case, when audio data of fewer than 8 channels, for example the audio data of 6 channels, is transmitted, the vacant positions in the right box in fig. 12 need to be filled with redundant data (for example, 0).
It should be understood that the audio data arrangement and the predetermined audio data sampling frequency shown in fig. 12 are only illustrative, and other data arrangements and sampling frequencies may be used in specific implementations.
As can be seen from the examples in fig. 11 and fig. 12, for a television, as long as the number of channels corresponding to the audio data that the main chip can transmit over the I2S buses is less than the number of channels of the multi-channel audio data, the data of multiple channels can be merged into one path of audio transmission data by changing the sampling frequency of the audio data, so that the I2S buses can carry the audio data of more channels. In this way, the playing of multi-channel audio data, and hence the sound effect of multi-channel surround sound, is realized even though the transmission capability of the main chip is limited.
After the audio processing circuit rearranges the audio data of each channel by increasing the sampling frequency to obtain the at least two paths of audio transmission data and sends them to the auxiliary audio processing circuit, the auxiliary audio processing circuit can disassemble the at least two paths of audio transmission data in the following manner to obtain the audio data of each channel in the multi-channel audio data. Specifically:
the audio processing circuit can disassemble at least two paths of audio transmission data according to the preset audio data sampling frequency and the preset audio data arrangement mode to obtain the audio data of each sound channel. That is, how the audio processing circuit arranges the multi-channel audio data, how the audio processing circuit disassembles the multi-channel audio data, and then the multi-channel audio data is restored. Specifically, the following two cases may be included:
Case 1: the at least two paths of audio transmission data share one clock. For a description of the shared clock, reference may be made to the foregoing description of the shared clock.
In this scenario, the auxiliary audio processing circuit may receive and disassemble the at least two paths of audio transmission data at the same time, so as to obtain the audio data of each channel.
Taking the example shown in fig. 11 as an example, the auxiliary audio processing circuit may collect the audio data transmitted on the I2S bus according to the clock waveform transmitted on the I2S bus. For example, if the audio processing circuit transmits the audio data by keeping rising-edge sampling of BCLK but changing the frequency of BCLK, the auxiliary audio processing circuit also collects the audio data transmitted on the I2S bus in this manner. If the audio processing circuit transmits the audio data by keeping the frequency of BCLK unchanged and using dual-edge acquisition, the auxiliary audio processing circuit also collects the audio data transmitted on the I2S bus in this manner.
Taking dual-edge acquisition as an example, the auxiliary audio processing circuit may use dual-edge sampling to extract the 16-bit data of 4 channels from the audio transmission data transmitted on the I2S0 bus, the 16-bit data of 4 channels from the audio transmission data transmitted on the I2S1 bus, and the 16-bit data of 4 channels from the audio transmission data transmitted on the I2S2 bus. In this example, the resulting audio data of each channel is 16bit@48kHz audio data. Then, the auxiliary audio processing circuit can send the audio data of each channel to the playing circuit corresponding to that channel for playing, so as to realize the sound effect of multi-channel surround sound.
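A minimal Python sketch of this shared-clock disassembly is given below: the receiving circuit walks each bus's word stream with the same slot layout the sender used and regroups the 16-bit words into per-channel 48 kHz sample lists. The function name and the assumption that consecutive slots on a bus belong to consecutive channels are illustrative only, not taken from fig. 11.

def unpack_shared_clock(buses, slots_per_bus=4):
    """buses: per-bus lists of 16-bit words as captured off the I2S lines
    (for example by dual-edge BCLK sampling). Returns per-channel sample lists."""
    channels = [[] for _ in range(len(buses) * slots_per_bus)]
    for b, words in enumerate(buses):
        for i, word in enumerate(words):
            ch = b * slots_per_bus + (i % slots_per_bus)   # slot position -> channel
            channels[ch].append(word)                      # back to 16bit@48kHz per channel
    return channels

if __name__ == "__main__":
    # three buses, as in the fig. 11 style example, one 48 kHz period each
    buses = [[0x00, 0x01, 0x02, 0x03],
             [0x10, 0x11, 0x12, 0x13],
             [0x20, 0x21, 0x22, 0x23]]
    for ch, samples in enumerate(unpack_shared_clock(buses)):
        print(f"channel {ch}: {[hex(s) for s in samples]}")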
Case 2: each path of audio transmission data corresponds to one clock. In this scenario, the audio processing circuit may send the at least two paths of audio transmission data synchronously or asynchronously. For a description of each path of audio transmission data corresponding to one clock, reference may be made to the foregoing description.
In this scenario, the auxiliary audio processing circuit may disassemble all of the audio transmission data after receiving it, so as to obtain the audio data of each channel.
Taking the example shown in fig. 11 as an example, after buffering the audio transmission data transmitted on the 3 paths of I2S buses, the auxiliary audio processing circuit may, based on its own clock, collect the first valid MCLK signal of the audio transmission data transmitted on each of the 3 paths of I2S buses. If the audio processing circuit transmits the audio data by keeping rising-edge sampling of BCLK and changing the frequency of BCLK, then after collecting the first valid MCLK signal on one path of I2S bus, the auxiliary audio processing circuit collects the audio data on that path of I2S bus in this manner. If the audio processing circuit transmits the audio data by keeping the frequency of BCLK unchanged and using dual-edge acquisition, the auxiliary audio processing circuit also collects the audio data on that path of I2S bus in this manner.
Taking dual-edge acquisition as an example, the auxiliary audio processing circuit may use dual-edge sampling to extract the 16-bit data of 4 channels from the audio transmission data transmitted on the I2S0 bus based on the first valid MCLK signal in that data; extract the 16-bit data of 4 channels from the audio transmission data transmitted on the I2S1 bus based on the first valid MCLK signal in that data; and extract the 16-bit data of 4 channels from the audio transmission data transmitted on the I2S2 bus based on the first valid MCLK signal in that data. In this example, the resulting audio data of each channel is 16bit@48kHz audio data. Then, the auxiliary audio processing circuit can send the audio data of each channel to the playing circuit corresponding to that channel for playing, so as to realize the sound effect of multi-channel surround sound.
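The per-line-clock case can be modelled in the same way, except that each line is buffered on its own clock and parsing starts only after that line's first valid alignment point. In the Python sketch below, the detection of the first valid MCLK signal is simplified to a search for a hypothetical sync word; that marker and the function name are assumptions made only for illustration.

SYNC_WORD = 0xFFFF   # hypothetical stand-in for the first valid MCLK / frame marker

def align_and_unpack(raw_buses, slots_per_bus=4):
    """raw_buses: per-line word buffers, each captured on that line's own clock.
    Parsing of a line starts only after its own alignment marker, so an abnormal
    clock on one line leaves the data recovered from the other lines untouched."""
    channels = [[] for _ in range(len(raw_buses) * slots_per_bus)]
    for b, words in enumerate(raw_buses):
        try:
            start = words.index(SYNC_WORD) + 1    # first valid marker on this line
        except ValueError:
            continue                              # no valid clock/marker seen: skip only this line
        for i, word in enumerate(words[start:]):
            channels[b * slots_per_bus + (i % slots_per_bus)].append(word)
    return channels

if __name__ == "__main__":
    raw = [[0xAAAA, SYNC_WORD, 0x00, 0x01, 0x02, 0x03],   # I2S0: noise, marker, then data
           [SYNC_WORD, 0x10, 0x11, 0x12, 0x13],           # I2S1: marker, then data
           [0xBBBB, 0xCCCC]]                               # I2S2: never aligned, ignored
    print(align_and_unpack(raw))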
In the manner in which each path of audio transmission data corresponds to one clock, even if the clock transmitted on one path of audio data transmission line is abnormal, the audio transmission data transmitted on the other paths of audio data transmission lines is not affected, so the probability of data errors can be reduced.
The third mode is as follows: rearranging the audio data of each channel by increasing both the sampling bit width and the sampling frequency of the audio data.
In this manner, the audio processing circuit may decode the multi-channel audio data to obtain audio data of each channel. Then, the audio processing circuit may rearrange the audio data of each channel according to the preset audio data sampling frequency, the preset audio data sampling bit width, and the preset audio data arrangement manner, so as to obtain the at least two channels of audio transmission data. The preset audio data sampling frequency is greater than the sampling frequency of one channel audio data, and the preset audio data sampling bit width is greater than the sampling bit width of one channel audio data. For how to implement the preset audio data sampling frequency, reference may be made to the description of the preset audio sampling frequency part in the foregoing embodiments, and details are not described herein again.
Taking the example in which the multi-channel audio data is 8-channel audio data and the audio processing circuit and the auxiliary audio processing circuit are connected through a 2-way I2S bus, it is assumed that the audio data of each channel in the multi-channel audio data is 16bit@48kHz audio data, the preset audio data sampling bit width is 24 bits, and the preset audio data sampling frequency is 96 kHz.
Fig. 13 is a schematic diagram of another audio data processing method according to the present invention. As shown in fig. 13, in this example, the audio processing circuit may first decode the 8-channel audio data to obtain the audio data of each channel, that is, the audio data shown in the left block diagram in fig. 13. At this time, the audio data of each channel obtained by decoding has a sampling bit width of 16 bits and a sampling frequency of 48 kHz.
Then, the audio processing circuit may rearrange the audio data of each channel according to the sampling bit width of 24 bits, the sampling frequency of 96 kHz, and the preset audio data arrangement manner, so as to obtain the 2 paths of audio transmission data shown in the right block diagram in fig. 13. The sampling bit width of each path of audio transmission data is 24 bits, and the sampling frequency is 96 kHz. In this case, within one period of the original 48 kHz sampling, each I2S bus can transmit 48 bits of left-channel (WS=0) data and 48 bits of right-channel (WS=1) data.
Taking one path of the audio transmission data shown in the right block diagram in fig. 13 as an example, when the audio transmission data is transmitted using the I2S bus, 48 bits of left-channel data can be collected when WS (LRCK) is equal to 0, and 48 bits of right-channel data can be collected when WS (LRCK) is equal to 1. That is to say, by increasing both the sampling frequency and the sampling bit width of the audio transmission data, the number of bits of left-channel data and of right-channel data that can be transmitted on one I2S bus is increased from 16 to 48 each, so that one I2S bus can transmit the audio data of 6 channels instead of 2 channels, and the 2-way I2S bus can be used to transmit the audio data of at most 12 channels. However, since this example transmits audio data of only 8 channels over the 2-way I2S bus, redundant data (for example, 0) is used to fill up the vacant positions in the right block diagram in fig. 13.
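The following Python sketch models this 24-bit / 96 kHz packing for the 8-channel case: each bus gets a WS=0 side and a WS=1 side of 48 bits per original 48 kHz period, each side holding two useful 16-bit samples and one padded slot. The channel-to-bus mapping follows the example described below for fig. 13 (channels 0, 1, 4, 5 on I2S0 and channels 2, 3, 6, 7 on I2S1), but the order of the slots inside a 48-bit side and the position of the padding are assumptions made only for illustration.

CHANNEL_MAP = {0: (0, 1, 4, 5), 1: (2, 3, 6, 7)}   # I2S bus index -> channels it carries

def pack_mode3(frame):
    """frame: one 48 kHz period of eight 16-bit samples, indexed by channel.
    Returns, per bus, the WS=0 side and the WS=1 side as 48-bit integers:
    two useful 16-bit slots followed by one padded slot (redundant data = 0)."""
    buses = {}
    for b, carried in CHANNEL_MAP.items():
        first_pair, second_pair = carried[:2], carried[2:]
        side0 = (frame[first_pair[0]] << 32) | (frame[first_pair[1]] << 16)    # pad slot stays 0
        side1 = (frame[second_pair[0]] << 32) | (frame[second_pair[1]] << 16)
        buses[b] = [side0, side1]
    return buses

if __name__ == "__main__":
    frame = [0x0000, 0x1111, 0x2222, 0x3333, 0x4444, 0x5555, 0x6666, 0x7777]
    for b, sides in pack_mode3(frame).items():
        print(f"I2S{b}: {[f'{s:012x}' for s in sides]}")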
It should be understood that the audio data arrangement manner, the preset audio data sampling frequency, and the preset audio data sampling bit width shown in fig. 13 are only illustrative, and other data arrangement manners, sampling frequencies, and sampling bit widths may be used in specific implementations. In addition, the position of the redundant data in fig. 13 is merely an example and is not limited thereto.
After the audio processing circuit rearranges the audio data of each channel by increasing the sampling bit width and the sampling frequency of the audio data to obtain the at least two paths of audio transmission data and sends them to the auxiliary audio processing circuit, the auxiliary audio processing circuit can disassemble the at least two paths of audio transmission data in the following manner to obtain the audio data of each channel in the multi-channel audio data, specifically:
the auxiliary audio processing circuit can disassemble the at least two paths of audio transmission data according to the preset audio data sampling frequency, the preset audio data sampling bit width, and the preset audio data arrangement manner to obtain the audio data of each channel. That is, the auxiliary audio processing circuit disassembles the data in the same manner in which the audio processing circuit arranged the multi-channel audio data, so that the multi-channel audio data is restored. Specifically, the following two cases may be included:
Case 1: the at least two paths of audio transmission data share one clock. For a description of the shared clock, reference may be made to the foregoing description of the shared clock.
In this scenario, the auxiliary audio processing circuit may receive and disassemble the at least two paths of audio transmission data at the same time, so as to obtain the audio data of each channel.
Taking the example shown in fig. 13 as an example, the auxiliary audio processing circuit may collect the audio data transmitted on the I2S bus according to the clock waveform transmitted on the I2S bus. For example, if the audio processing circuit transmits the audio data by keeping rising-edge sampling of BCLK but changing the frequency of BCLK, the auxiliary audio processing circuit also collects the audio data transmitted on the I2S bus in this manner. If the audio processing circuit transmits the data by keeping the frequency of BCLK unchanged and using dual-edge acquisition, the auxiliary audio processing circuit also collects the audio data transmitted on the I2S bus in this manner.
Taking dual-edge acquisition as an example, the auxiliary audio processing circuit may use dual-edge sampling to extract the 16-bit data of channel 0, channel 1, channel 4, and channel 5 from the audio transmission data transmitted on the I2S0 bus, extract the 16-bit data of channel 2, channel 3, channel 6, and channel 7 from the audio transmission data transmitted on the I2S1 bus, and discard the redundant data collected from the I2S0 bus and the I2S1 bus. In this example, the resulting audio data of each channel is 16bit@48kHz audio data. Then, the auxiliary audio processing circuit can send the audio data of each channel to the playing circuit corresponding to that channel for playing, so as to realize the sound effect of multi-channel surround sound.
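As a counterpart to the packing sketch above, the following Python sketch disassembles such data: it splits each 48-bit side back into 16-bit slots, drops the padded slot, and regroups the samples per channel using the same mapping (I2S0 carrying channels 0, 1, 4, 5 and I2S1 carrying channels 2, 3, 6, 7). The slot order inside a side and the amount of padding per side are assumptions for illustration only.

CHANNEL_MAP = {0: (0, 1, 4, 5), 1: (2, 3, 6, 7)}   # I2S bus index -> channels it carries

def unpack_mode3(buses, pad_slots_per_side=1):
    """buses: per-bus lists of 48-bit sides (integers), alternating WS=0 / WS=1.
    Each side holds up to three 16-bit slots; the padded slots are discarded."""
    channels = {ch: [] for carried in CHANNEL_MAP.values() for ch in carried}
    for b, sides in buses.items():
        carried = CHANNEL_MAP[b]
        for i, side in enumerate(sides):
            # split the 48-bit side into three 16-bit slots, most significant first
            slots = [(side >> shift) & 0xFFFF for shift in (32, 16, 0)]
            useful = slots[:3 - pad_slots_per_side]           # discard redundant padding
            # WS=0 sides carry the first two channels of this bus, WS=1 sides the rest
            targets = carried[:2] if i % 2 == 0 else carried[2:]
            for ch, sample in zip(targets, useful):
                channels[ch].append(sample)                   # 16bit@48kHz again
    return channels

if __name__ == "__main__":
    # one 48 kHz period as produced by the packing sketch above
    buses = {0: [0x000011110000, 0x444455550000],
             1: [0x222233330000, 0x666677770000]}
    print(unpack_mode3(buses))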
In the scenario shown in fig. 13, when 12 channels of audio data are transmitted, that is, when the audio data of some channels need to be split before being arranged, the audio data of a secondary channel may be the data that is split, for example, the audio data of the center channel or of the subwoofer (heavy bass) channel. In this way, when the at least two paths of audio transmission data share one clock, the audio data of the primary channels collected by the auxiliary audio processing circuit will not be lost or corrupted because the clock is interfered with or becomes abnormal. The primary channels referred to herein may be, for example: the front left channel, the front right channel, the built-in top left channel, the built-in top right channel, the built-in surround left channel, the built-in surround right channel, and the like.
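The rule can be expressed as a small priority-based slot assignment, sketched below in Python. The channel names beyond the six primary channels listed above, the number of whole slots, and the number of split fragments are all hypothetical values chosen only to illustrate the idea that a primary channel is never split.

PRIMARY = ["front_left", "front_right", "top_left", "top_right",
           "surround_left", "surround_right"]
SECONDARY = ["rear_left", "rear_right", "wide_left", "wide_right",
             "center", "subwoofer"]            # lowest-priority channels listed last

def assign_slots(whole_slots, split_fragments):
    """Give every primary channel one whole slot on a single line; only the
    lowest-priority channels are assigned the split fragments."""
    layout, slots, frags = {}, list(whole_slots), list(split_fragments)
    for ch in PRIMARY:
        layout[ch] = [slots.pop(0)]                            # never split a primary channel
    for ch in SECONDARY:
        layout[ch] = [slots.pop(0)] if slots else [frags.pop(0), frags.pop(0)]
    return layout

if __name__ == "__main__":
    # hypothetical layout: ten whole 16-bit slots plus four fragments over two lines
    print(assign_slots([f"slot{i}" for i in range(10)],
                       [f"frag{i}" for i in range(4)]))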
Case 2: each path of audio transmission data corresponds to one clock. In this scenario, the audio processing circuit may send the at least two paths of audio transmission data synchronously or asynchronously. For a description of each path of audio transmission data corresponding to one clock, reference may be made to the foregoing description.
In this scenario, the auxiliary audio processing circuit may disassemble all of the audio transmission data after receiving it, so as to obtain the audio data of each channel.
Taking the example shown in fig. 13 as an example, after buffering the audio transmission data transmitted on the 2 paths of I2S buses, the auxiliary audio processing circuit may, based on its own clock, collect the first valid MCLK signal of the audio transmission data transmitted on each of the 2 paths of I2S buses. If the audio processing circuit transmits the audio data by keeping rising-edge sampling of BCLK but changing the frequency of BCLK, then after collecting the first valid MCLK signal, the auxiliary audio processing circuit collects the audio data transmitted on the I2S bus in this manner. If the audio processing circuit transmits the data by keeping the frequency of BCLK unchanged and using dual-edge acquisition, the auxiliary audio processing circuit also collects the audio data transmitted on the I2S bus in this manner.
Taking dual-edge acquisition as an example, the auxiliary audio processing circuit may use dual-edge sampling to extract the 16-bit data of channel 0, channel 1, channel 4, and channel 5 from the audio transmission data transmitted on the I2S0 bus based on the first valid MCLK signal in that data, and to extract the 16-bit data of channel 2, channel 3, channel 6, and channel 7 from the audio transmission data transmitted on the I2S1 bus based on the first valid MCLK signal in that data. It should be understood that the redundant data collected from the I2S0 bus and the I2S1 bus may be discarded. In this example, the resulting audio data of each channel is 16bit@48kHz audio data. Then, the auxiliary audio processing circuit can send the audio data of each channel to the playing circuit corresponding to that channel for playing, so as to realize the sound effect of multi-channel surround sound.
In the manner in which each path of audio transmission data corresponds to one clock, even if the clock transmitted on one path of audio data transmission line is abnormal, the audio transmission data transmitted on the other paths of audio data transmission lines is not affected, so the probability of data errors can be reduced.
In the display device provided by the invention, when the number of channels corresponding to the audio data which can be transmitted by the at least two audio data transmission lines between the audio processing circuit and the auxiliary audio processing circuit is less than the number of channels corresponding to the multi-channel audio data, the audio processing circuit can rearrange the received multi-channel audio data to obtain the audio transmission data which can be transmitted by the at least two audio data transmission lines. Correspondingly, after receiving at least two paths of audio transmission data transmitted by the audio processing circuit through at least two paths of audio data transmission lines, the auxiliary audio processing circuit can disassemble the at least two paths of audio transmission data to obtain audio data of multiple sound channels in the multi-channel audio data, and sends the audio data of each sound channel to the corresponding sound channel playing circuit for playing, so that the sound effect of multi-channel surround sound is realized. By the mode, when the number of the channels corresponding to the audio data which can be transmitted by the audio processing circuit is less than that of the channels corresponding to the multi-channel audio data, the sound effect of multi-channel surround sound can be realized.
Fig. 14 is a flowchart of an audio data processing method according to the present invention. The method may be executed by the audio processing circuit in the display device described above. As shown in fig. 14, the method includes:
S101, receiving multi-channel audio data.
S102, rearranging the multi-channel audio data to obtain at least two paths of audio transmission data.
S103, sending the at least two paths of audio transmission data to the auxiliary audio processing circuit through the at least two paths of audio data transmission lines, so that the auxiliary audio processing circuit disassembles the at least two paths of audio transmission data to obtain the audio data of the multiple channels in the multi-channel audio data, and sends the audio data of each channel to the corresponding channel playing circuit for playing.
For example, the multi-channel audio data is decoded to obtain audio data of each channel, and the audio data of each channel is rearranged according to a preset audio data sampling bit width and a preset audio data arrangement mode to obtain at least two paths of audio transmission data; the preset audio data sampling bit width is greater than that of one sound channel audio data, and the sampling frequency of at least two paths of audio transmission data is the same as that of one sound channel audio data. Optionally, at least two paths of the audio transmission data share a clock, or each path of the audio transmission data corresponds to one clock.
Correspondingly, the auxiliary audio processing circuit may disassemble the at least two paths of audio transmission data according to the preset audio data sampling bit width and the preset audio data arrangement manner to obtain the audio data of each channel.
For another example, the multi-channel audio data is decoded to obtain audio data of each channel, and the audio data of each channel is rearranged according to a preset audio data sampling frequency and a preset audio data arrangement mode to obtain at least two paths of audio transmission data; the preset audio data sampling frequency is greater than the sampling frequency of one channel audio data, and the sampling bit width of at least two paths of audio transmission data is the same as the sampling bit width of one channel audio data. Optionally, at least two paths of the audio transmission data share a clock, or each path of the audio transmission data corresponds to one clock. Optionally, the preset audio data sampling frequency is a single-edge acquisition frequency of a serial clock of the audio data, or the preset audio data sampling frequency is a sum of double-edge acquisition frequencies of the serial clock of the audio data.
Correspondingly, the auxiliary audio processing circuit can disassemble the at least two paths of audio transmission data according to the preset audio data sampling frequency and the preset audio data arrangement manner to obtain the audio data of each channel.
For another example, the multi-channel audio data is decoded to obtain audio data of each channel, and the audio data of each channel is rearranged according to the preset audio data sampling frequency, the preset audio data sampling bit width and the preset audio data arrangement mode to obtain the at least two channels of audio transmission data; the preset audio data sampling frequency is greater than the sampling frequency of one channel audio data, and the preset audio data sampling bit width is greater than the sampling bit width of one channel audio data. Optionally, at least two paths of the audio transmission data share a clock, or each path of the audio transmission data corresponds to one clock. Optionally, the preset audio data sampling frequency is a single-edge acquisition frequency of a serial clock of the audio data, or the preset audio data sampling frequency is a sum of double-edge acquisition frequencies of the serial clock of the audio data.
Correspondingly, the auxiliary audio processing circuit may disassemble the at least two paths of audio transmission data according to the preset audio data sampling frequency, the preset audio data sampling bit width, and the preset audio data arrangement manner, so as to obtain the audio data of each channel.
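The three variants above differ only in which preset parameters are raised; the audio processing circuit and the auxiliary audio processing circuit simply have to agree on the same preset so that the disassembly mirrors the rearrangement. The Python sketch below compares the per-line channel capacity of the three variants; the concrete preset values (32-bit slots for the bit-width-only variant, 96 kHz for the frequency variants) are illustrative assumptions, not values prescribed by the method.

PRESETS = {
    "wider_slots":      {"slot_bits": 32, "frame_khz": 48},   # bit width raised, frequency kept
    "faster_frames":    {"slot_bits": 16, "frame_khz": 96},   # frequency raised, bit width kept
    "wider_and_faster": {"slot_bits": 24, "frame_khz": 96},   # both raised, as in fig. 13
}

def channels_per_line(preset_name, src_bits=16, src_khz=48):
    """How many 16bit@48kHz channels one audio data line can carry under a preset."""
    p = PRESETS[preset_name]
    samples_per_side = (p["slot_bits"] * (p["frame_khz"] // src_khz)) // src_bits
    return 2 * samples_per_side            # one WS=0 side plus one WS=1 side per frame

if __name__ == "__main__":
    for name in PRESETS:
        print(f"{name}: {channels_per_line(name)} channels per audio data line")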
In the audio playing method provided in the embodiment of the present invention, when the number of channels corresponding to the audio data that can be transmitted by the at least two audio data transmission lines between the audio processing circuit and the auxiliary audio processing circuit is less than the number of channels corresponding to the multi-channel audio data, the audio processing circuit may rearrange the received multi-channel audio data to obtain audio transmission data that can be transmitted by the at least two audio data transmission lines. Correspondingly, after receiving the at least two paths of audio transmission data transmitted by the audio processing circuit through the at least two paths of audio data transmission lines, the auxiliary audio processing circuit can disassemble the at least two paths of audio transmission data to obtain the audio data of multiple channels in the multi-channel audio data, and send the audio data of each channel to the corresponding channel playing circuit for playing, thereby realizing the sound effect of multi-channel surround sound. In this way, the sound effect of multi-channel surround sound can be realized even when the number of channels corresponding to the audio data that the audio processing circuit can transmit is less than the number of channels corresponding to the multi-channel audio data.
In another aspect, an embodiment of the present invention further provides an audio playing apparatus, where the audio playing apparatus may include: an audio processing circuit, an auxiliary audio processing circuit, and multiple channel playing circuits; the output end of the audio processing circuit is connected with the input end of the auxiliary audio processing circuit through at least two audio data transmission lines, the output end of the auxiliary audio processing circuit is connected with the multiple channel playing circuits, and the number of channels corresponding to the audio data that can be transmitted by the at least two audio data transmission lines is less than the number of channels corresponding to the multi-channel audio data;
the audio processing circuit is used for rearranging the received multi-channel audio data to obtain at least two paths of audio transmission data, and sending the at least two paths of audio transmission data to the auxiliary audio processing circuit through the at least two paths of audio data transmission lines;
the auxiliary audio processing circuit is used for disassembling the at least two paths of audio transmission data to obtain the audio data of multiple channels in the multi-channel audio data, and sending the audio data of each channel to the corresponding channel playing circuit for playing.
The audio playing apparatus provided by this embodiment can realize the sound effect of multi-channel surround sound when the number of channels corresponding to the audio data that the audio processing circuit can transmit is less than the number of channels corresponding to the multi-channel audio data. The audio playing apparatus may be any device having an audio playing function, such as a home theater system or a speaker. The implementation principle and technical effect are similar to those of the display device described above, and are not described in detail herein.
In another aspect of the embodiments of the present invention, a chip is further provided, where a computer program is stored on the chip, and when the computer program is executed by the chip, the functions of the foregoing audio processing circuit or of the foregoing auxiliary audio processing circuit can be implemented.
In another aspect of the embodiments of the present invention, a computer-readable storage medium is provided, which stores a computer program or instructions, and when the computer program or instructions run on a computer, the computer is caused to perform the actions of the foregoing audio processing circuit or the actions of the foregoing auxiliary audio processing circuit.
In the description herein, reference to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples" or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The terms "include" and "comprise," as well as derivatives thereof, mean inclusion without limitation.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and these modifications or substitutions do not depart from the spirit of the corresponding technical solutions of the embodiments of the present invention.

Claims (9)

1. A display device, comprising: an audio processing circuit, an auxiliary audio processing circuit, and multiple channel playing circuits; wherein the output end of the audio processing circuit is connected with the input end of the auxiliary audio processing circuit through at least two audio data transmission lines, the output end of the auxiliary audio processing circuit is connected with the multiple channel playing circuits, and the number of channels corresponding to the audio data that can be transmitted by the at least two audio data transmission lines is less than the number of channels corresponding to the multi-channel audio data;
the audio processing circuit is used for rearranging the received multi-channel audio data to obtain at least two paths of audio transmission data, and sending the at least two paths of audio transmission data to the auxiliary audio processing circuit through the at least two paths of audio data transmission lines;
the auxiliary audio processing circuit is used for disassembling the at least two paths of audio transmission data to obtain the audio data of multiple channels in the multi-channel audio data, and sending the audio data of each channel to the corresponding channel playing circuit for playing;
the audio processing circuit is specifically configured to decode the multi-channel audio data to obtain audio data of multiple channels, and rearrange the audio data of each channel according to a preset audio data sampling frequency and a preset audio data arrangement manner to obtain at least two channels of audio transmission data;
the preset audio data sampling frequency is greater than the sampling frequency of one channel audio data, and the sampling bit width of at least two paths of audio transmission data is the same as the sampling bit width of one channel audio data;
at least two paths of audio transmission data share a clock, or each path of audio transmission data corresponds to one clock.
2. The apparatus of claim 1, wherein:
the audio processing circuit is specifically configured to decode the multi-channel audio data to obtain audio data of multiple channels, and rearrange the audio data of the multiple channels according to a preset audio data sampling bit width and a preset audio data arrangement manner to obtain at least two channels of audio transmission data;
the preset audio data sampling bit width is greater than that of one sound channel audio data, and the sampling frequency of at least two paths of audio transmission data is the same as that of one sound channel audio data.
3. The apparatus of claim 2, wherein:
the auxiliary audio processing circuit is specifically configured to disassemble the at least two paths of audio transmission data according to the preset audio data sampling bit width and the preset audio data arrangement mode to obtain the audio data of the multiple channels.
4. The apparatus of claim 1, wherein:
the auxiliary audio processing circuit is specifically configured to disassemble the at least two paths of audio transmission data according to the preset audio data sampling frequency and the preset audio data arrangement mode to obtain the audio data of the multiple channels.
5. The apparatus of claim 1, wherein:
the audio processing circuit is specifically configured to rearrange the audio data of each channel according to the preset audio data sampling frequency, a preset audio data sampling bit width, and the preset audio data arrangement manner, so as to obtain the at least two channels of audio transmission data;
wherein the preset audio data sampling bit width is larger than the sampling bit width of one channel audio data.
6. The apparatus of claim 5, wherein:
the auxiliary audio processing circuit is specifically configured to disassemble the at least two paths of audio transmission data according to the preset audio data sampling frequency, the preset audio data sampling bit width, and the preset audio data arrangement mode, so as to obtain the audio data of the multiple channels.
7. The apparatus according to any one of claims 1-6, wherein the preset audio data sampling frequency is a single-edge acquisition frequency of the serial clock of the audio data, or the preset audio data sampling frequency is a sum of double-edge acquisition frequencies of the serial clock of the audio data.
8. An audio playing apparatus, comprising: an audio processing circuit, an auxiliary audio processing circuit, and multiple channel playing circuits; wherein the output end of the audio processing circuit is connected with the input end of the auxiliary audio processing circuit through at least two audio data transmission lines, the output end of the auxiliary audio processing circuit is connected with the multiple channel playing circuits, and the number of channels corresponding to the audio data that can be transmitted by the at least two audio data transmission lines is less than the number of channels corresponding to the multi-channel audio data;
the audio processing circuit is used for rearranging the received multi-channel audio data to obtain at least two paths of audio transmission data, and sending the at least two paths of audio transmission data to the auxiliary audio processing circuit through the at least two paths of audio data transmission lines;
the auxiliary audio processing circuit is used for disassembling the at least two paths of audio transmission data to obtain the audio data of multiple channels in the multi-channel audio data, and sending the audio data of each channel to the corresponding channel playing circuit for playing;
the audio processing circuit is specifically configured to decode the multi-channel audio data to obtain audio data of multiple channels, and rearrange the audio data of each channel according to a preset audio data sampling frequency and a preset audio data arrangement manner to obtain at least two channels of audio transmission data;
the preset audio data sampling frequency is greater than the sampling frequency of one channel audio data, and the sampling bit width of at least two paths of audio transmission data is the same as the sampling bit width of one channel audio data;
at least two paths of audio transmission data share a clock, or each path of audio transmission data corresponds to one clock.
9. An audio playing method, the method comprising: receiving multi-channel audio data;
rearranging the multi-channel audio data to obtain at least two paths of audio transmission data;
sending the at least two paths of audio transmission data to an auxiliary audio processing circuit through at least two paths of audio data transmission lines, so that the auxiliary audio processing circuit disassembles the at least two paths of audio transmission data to obtain the audio data of multiple channels in the multi-channel audio data, and sends the audio data of each channel to the corresponding channel playing circuit for playing;
the rearranging the multi-channel audio data to obtain at least two paths of audio transmission data comprises:
decoding the multi-channel audio data to obtain audio data of a plurality of channels, and rearranging the audio data of each channel according to a preset audio data sampling frequency and a preset audio data arrangement mode to obtain at least two paths of audio transmission data;
the preset audio data sampling frequency is greater than the sampling frequency of one channel audio data, and the sampling bit width of at least two paths of audio transmission data is the same as the sampling bit width of one channel audio data;
at least two paths of audio transmission data share a clock, or each path of audio transmission data corresponds to one clock.
CN201910710346.3A 2019-07-09 2019-08-02 Display device, audio playing method and device Active CN112218210B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
PCT/CN2020/070890 WO2021004046A1 (en) 2019-07-09 2020-01-08 Audio processing method and apparatus, and display device
PCT/CN2020/070887 WO2021004045A1 (en) 2019-07-09 2020-01-08 Method for transmitting audio data of multichannel platform, apparatus thereof, and display device
PCT/CN2020/070902 WO2021004048A1 (en) 2019-07-09 2020-01-08 Display device and audio data transmission method
PCT/CN2020/070929 WO2021004049A1 (en) 2019-07-09 2020-01-08 Display device, and audio data transmission method and device
PCT/CN2020/070891 WO2021004047A1 (en) 2019-07-09 2020-01-08 Display device and audio playing method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910613160 2019-07-09
CN2019106131606 2019-07-09

Publications (2)

Publication Number Publication Date
CN112218210A CN112218210A (en) 2021-01-12
CN112218210B true CN112218210B (en) 2023-01-31

Family

ID=74048659

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910710346.3A Active CN112218210B (en) 2019-07-09 2019-08-02 Display device, audio playing method and device

Country Status (1)

Country Link
CN (1) CN112218210B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113507633B (en) * 2021-05-26 2023-08-22 海信视像科技股份有限公司 Sound data processing method and device
CN115278458B (en) * 2022-07-25 2023-03-24 邓剑辉 Multi-channel digital audio processing system based on PCIE interface

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN204406122U (en) * 2015-02-15 2015-06-17 科大讯飞股份有限公司 Audio signal processor
CN105139865A (en) * 2015-06-19 2015-12-09 中央电视台 Method and device for determining left-right channel audio correlation coefficient
CN107135301A (en) * 2016-02-29 2017-09-05 宇龙计算机通信科技(深圳)有限公司 A kind of audio data processing method and device
CN108520763A (en) * 2018-04-13 2018-09-11 广州醇美电子有限公司 A kind of date storage method, device, equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11012736B2 (en) * 2014-09-29 2021-05-18 Sony Corporation Transmission apparatus, transmission method, reception apparatus, and reception method


Similar Documents

Publication Publication Date Title
CA2557993C (en) Frequency-based coding of audio channels in parametric multi-channel coding systems
US8705780B2 (en) Audio apparatus, audio signal transmission method, and audio system
US7742832B1 (en) Method and apparatus for wireless digital audio playback for player piano applications
JP5174527B2 (en) Acoustic signal multiplex transmission system, production apparatus and reproduction apparatus to which sound image localization acoustic meta information is added
US8311240B2 (en) Audio signal processing apparatus and audio signal processing method
WO2020182020A1 (en) Audio signal playback method and display device
CN112218210B (en) Display device, audio playing method and device
WO2006024981A1 (en) Audio/visual apparatus with ultrasound
CN1126431C (en) A mono-stereo conversion device, an audio reproduction system using such a device and a mono-stereo conversion method
CN112218020B (en) Audio data transmission method and device for multi-channel platform
US20140369503A1 (en) Simultaneous broadcaster-mixed and receiver-mixed supplementary audio services
US6711270B2 (en) Audio reproducing apparatus
US8605564B2 (en) Audio mixing method and audio mixing apparatus capable of processing and/or mixing audio inputs individually
JP7434792B2 (en) Transmitting device, receiving device, and sound system
KR101634387B1 (en) Apparatus and system for reproducing multi channel audio signal
KR20110049083A (en) Portable multimedia apparatus, audio reproducing apparatus and audio system for reproducing digital audio signal
KR102370348B1 (en) Apparatus and method for providing the audio metadata, apparatus and method for providing the audio data, apparatus and method for playing the audio data
CN112346694A (en) Display device
CN113709630B (en) Video processing device and video processing method thereof
KR102529400B1 (en) Apparatus and method for providing the audio metadata, apparatus and method for providing the audio data, apparatus and method for playing the audio data
KR20130049611A (en) Apparatus and method for playing multimedia
KR20100083477A (en) Multi-channel surround speaker system
KR20080034253A (en) Apparatus and method for multi-channel sounding in portable terminal
JP2000236599A (en) Multichannel stereo sound field reproduction/ transmission system
KR100516733B1 (en) Dolby prologic audio apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant