CN113923264A - Scene-based audio channel metadata and generation method, device and storage medium - Google Patents


Publication number: CN113923264A
Authority: CN (China)
Prior art keywords: audio, scene, audio channel, metadata, channel
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202111021066.5A
Other languages: Chinese (zh)
Inventor: 吴健
Current Assignee: Saiyinxin Micro Beijing Electronic Technology Co ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Saiyinxin Micro Beijing Electronic Technology Co ltd
Application filed by Saiyinxin Micro Beijing Electronic Technology Co ltd
Priority to CN202111021066.5A
Publication of CN113923264A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/06: Notations for structuring of protocol data, e.g. abstract syntax notation one [ASN.1]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04S: STEREOPHONIC SYSTEMS
    • H04S3/00: Systems employing more than two channels, e.g. quadraphonic

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Stereophonic System (AREA)

Abstract

The present disclosure relates to scene-based audio channel metadata and a generation method, device, and storage medium. The scene-based audio channel metadata comprises: an attribute zone comprising an audio channel name, an audio channel identifier, and audio channel type description information; and a sub-element zone comprising at least one audio block format, which indicates the time-domain division of the audio channel, and audio cut-off frequency information, wherein each audio block format comprises an audio block identifier and a scene sub-element, and the scene sub-element comprises scene component description information. When the audio data is rendered, three-dimensional sound can be reproduced in space, thereby improving the quality of the sound scene.

Description

Scene-based audio channel metadata and generation method, device and storage medium
Technical Field
The present disclosure relates to the field of audio processing technologies, and in particular to scene-based audio channel metadata and a generation method, device, and storage medium.
Background
With the development of technology, audio has become more and more complex. Early mono audio gave way to stereo, and attention shifted to the correct handling of the left and right channels. With the arrival of surround sound, processing became more complex still. The surround 5.1 speaker system imposes an ordering constraint on multiple channels, and the surround 6.1 and 7.1 speaker systems and the like diversify audio processing further: the correct signal must be delivered to the appropriate speaker so that the channels work together. Thus, as sound becomes more immersive and interactive, the complexity of audio processing also increases greatly.
Audio channels (also called sound channels) are mutually independent audio signals that are captured or played back at different spatial locations when sound is recorded or played. The number of channels is the number of sound sources during recording, or the number of corresponding speakers during playback. For example, a surround 5.1 speaker system comprises audio signals at 6 different spatial locations, each separate audio signal driving a speaker at the corresponding spatial location; a surround 7.1 speaker system comprises audio signals at 8 different spatial positions, each separate audio signal driving a speaker at the corresponding spatial position.
Therefore, the effect achieved by current loudspeaker systems depends on the number and spatial positions of the loudspeakers. For example, a two-speaker system cannot achieve the effect of a surround 5.1 speaker system.
The present disclosure provides audio channel metadata and a construction method thereof in order to provide metadata capable of solving the above technical problems.
Disclosure of Invention
The present disclosure is directed to a scene-based audio channel metadata generation method, device and storage medium, so as to solve one of the above technical problems.
To achieve the above object, a first aspect of the present disclosure provides scene-based audio channel metadata, including:
the attribute zone comprises an audio channel name, an audio channel identifier and audio channel type description information;
and the sub-element zone comprises at least one audio block format, which indicates the time-domain division of the audio channel, and audio cut-off frequency information, wherein the audio block format comprises an audio block identifier and a scene sub-element, and the scene sub-element comprises scene component description information. Optionally, the scene component description information includes HOA (Higher Order Ambisonics) component description information.
To achieve the above object, a second aspect of the present disclosure provides a method for generating audio channel metadata, including:
generating, in response to a setting operation by a user, the scene-based audio channel metadata described in the first aspect.
To achieve the above object, a third aspect of the present disclosure provides an electronic device, including: a memory and one or more processors;
the memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to generate audio data comprising the scene-based audio channel metadata described in the first aspect.
To achieve the above object, a fourth aspect of the present disclosure provides a storage medium containing computer-executable instructions which, when executed by a computer processor, generate audio data comprising the scene-based audio channel metadata described in the first aspect.
From the above, the scene-based audio channel metadata of the present disclosure comprises: an attribute zone comprising an audio channel name, an audio channel identifier, and audio channel type description information; and a sub-element zone comprising at least one audio block format, which indicates the time-domain division of the audio channel, and audio cut-off frequency information, wherein the audio block format comprises an audio block identifier and a scene sub-element, the scene sub-element comprising HOA component description information. The scene-based audio channel metadata describes the coefficient signals of the HOA in scene-based audio, enabling three-dimensional sound to be reproduced in space and thereby improving the quality of the sound scene.
Drawings
Fig. 1 is a schematic diagram of a three-dimensional acoustic audio production model provided in embodiment 1 of the present disclosure;
fig. 2 is a flowchart of a method for generating audio channel metadata according to embodiment 2 of the present disclosure;
fig. 3 is a schematic structural diagram of an electronic device provided in embodiment 3 of the present disclosure;
fig. 4 is a diagram of the polar patterns of the ambient (ambisonic) sound components in embodiment 1 of the present disclosure.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
As shown in fig. 1, a three-dimensional audio production model is composed of a set of production elements each describing one stage of audio production, and includes a content production section and a format production section.
The content production section includes: audio program, audio content, audio object, and audio track unique identification.
The audio program includes narration, sound effects, and background music, and the audio program references one or more audio contents that are combined together to construct a complete audio program.
The audio content describes the content of a component of an audio program, such as background music, and relates the content to its format by reference to one or more audio objects.
The audio objects are used to establish the relationship between content, format, and assets using audio track unique identification elements, and to determine the unique identification of the actual audio tracks.
The format making part comprises: audio packet format, audio channel format, audio stream format, audio track format.
The audio packet format is the format used when the audio objects and the original audio data are packed into packets channel by channel.
The audio channel format represents a single sequence of audio samples on which certain operations may be performed, such as movement of rendering objects in a scene.
An audio stream is a combination of audio tracks needed to render a channel, an object, a higher-order ambisonic component, or a packet. The audio stream format establishes a relationship between a set of audio track formats and a set of audio channel formats or audio packet formats.
The audio track format corresponds to a set of samples or data in a single audio track of the storage medium, and describes the format of the original audio data and of the signal decoded for the renderer. The audio track format is derived from an audio stream format and identifies the combination of audio tracks required to successfully decode the audio track data.
After the original audio data has been produced through the three-dimensional acoustic audio production model, synthetic audio data containing metadata is generated.
Metadata is information describing the characteristics of data; the functions it supports include indicating storage locations, recording historical data, resource lookup, and file records.
And after the synthesized audio data is transmitted to the far end in a communication mode, the far end renders the synthesized audio data based on the metadata to restore the original sound scene.
Example 1
The present disclosure provides and describes in detail audio channel metadata in a three-dimensional acoustic audio model.
The channel-based audio type used in the prior art directly transmits each channel's audio signal to the corresponding speaker without any signal modification. For example, mono, stereo, surround 5.1, surround 7.1, and surround 22.2 are all channel-based audio formats, each channel feeding one loudspeaker. Although channel-based audio types are already in use, adding corresponding audio channel metadata facilitates audio processing, and tagging each channel with an appropriate identifier ensures that the audio is directed to the correct speaker.
The audio channel format represents a single sequence of audio samples on which certain operations may be performed, such as movement of rendering objects in a scene. The disclosed embodiments describe audio channel formats with audio channel metadata. The audio channel format of the scene type is explained.
The audio channel metadata includes a property region and a sub-element region.
The attribute zone comprises an audio channel name, an audio channel identifier and audio channel type description information;
and the sub-element zone comprises at least one audio block format and audio cut-off frequency information which are used for indicating the time domain division of the audio channel, wherein the audio block format comprises an audio block format identifier and a scene sub-element, and the scene sub-element comprises scene component description information. Optionally, the scene component description information includes HOA component description information.
Wherein an audio channel format comprises a set of one or more audio block formats that subdivide the audio channel format in the time domain.
The property region includes the generic definition of the audio channel metadata. The audio channel name may be a name set for the audio channel; the user can identify the audio channel by its name. The audio channel identification is the identifier symbol of the audio channel. The audio channel type description information may be a descriptor of the audio channel type and/or description information of the audio channel type; the channel type may be defined using a type definition and/or a type label. The type definition of an audio channel format specifies the audio type it describes and determines which parameters are used in the audio block format sub-elements. In the disclosed embodiment, the audio types may include: channel, matrix, object, scene, and binaural channel types. The type label may be a numerical code, with each channel type having a corresponding numerical code; for example, the channel for the HOA scene type is denoted 0004.
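The type definitions and numeric type labels above can be sketched as a small lookup, for example in Python. Only the scene label (0004) is stated explicitly in the text; the other numeric codes below are assumptions patterned after the same scheme.

```python
from enum import Enum

class ChannelFormatType(Enum):
    """Audio channel format types named in the text, with numeric type labels.

    Only SCENE = "0004" is given in the source; the remaining codes are
    hypothetical placeholders following the same 4-digit pattern.
    """
    CHANNEL = "0001"   # assumed label
    MATRIX = "0002"    # assumed label
    OBJECT = "0003"    # assumed label
    SCENE = "0004"     # stated in the text: HOA scene type
    BINAURAL = "0005"  # assumed label

def type_label(t: ChannelFormatType) -> str:
    """Return the numeric type label for a channel format type."""
    return t.value

print(type_label(ChannelFormatType.SCENE))  # -> 0004
```

Such a mapping lets a generator fill in either the type definition ("scene") or the type label ("0004"), since the text requires at least one of the two to be set.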
The audio channel identification may include: an audio type identifier indicating the type of audio contained in the audio channel, and an audio stream identifier indicating the format of the audio stream contained in the audio channel. Alternatively, the audio channel identifier may comprise an 8-digit hexadecimal number, the first four digits representing the type of audio contained in the channel and the last four digits matching the audio stream format. For example, with the audio channel identifier written AC_yyyyxxxx, yyyy represents the type of audio contained in the channel and xxxx matches the audio stream format digitally, as shown in Table 1 below.
TABLE 1
[Table 1: image not reproduced; it lists the attributes of the audio channel metadata and whether each is required.]
In Table 1, the requirement column indicates whether an attribute must be set when generating the audio channel metadata: "yes" marks a mandatory attribute and "optional" an optional one, and at least one of the type definition and the type label must be set.
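The AC_yyyyxxxx identifier layout described above can be sketched as a small parser, for instance in Python (the helper name is an assumption, not part of the disclosure):

```python
import re

# An identifier of the form AC_yyyyxxxx: "AC_" followed by two 4-hex-digit
# fields, where yyyy encodes the audio type contained in the channel and
# xxxx matches the audio stream format.
ID_PATTERN = re.compile(r"^AC_([0-9A-Fa-f]{4})([0-9A-Fa-f]{4})$")

def parse_channel_id(channel_id: str) -> tuple:
    """Split an audio channel identifier into (audio type, stream match)."""
    m = ID_PATTERN.match(channel_id)
    if m is None:
        raise ValueError(f"not a valid audio channel identifier: {channel_id!r}")
    return (m.group(1), m.group(2))

# A scene-type (0004) channel paired with stream format 0001:
audio_type, stream_part = parse_channel_id("AC_00040001")
print(audio_type, stream_part)  # 0004 0001
```

A renderer could use the first field to dispatch on channel type and the second to locate the matching audio stream format.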
The sub-element zone includes at least one audio block format; the audio blocks carry the dynamic metadata for the time-domain partitions of the channel. The sub-element zone may also include audio cut-off frequency information, which may be set to indicate a high cut-off and/or a low cut-off audio frequency, as shown in Table 2 below.
TABLE 2
[Table 2: image not reproduced; it lists the sub-elements of the audio channel format and the permitted quantity of each.]
The quantity column in Table 2 indicates how many of each sub-element may be set. An audio channel may include at least one audio block, so the number of audio block sub-elements in the audio channel format is an integer greater than 0. The audio cut-off frequency information is optional: its quantity is 0 when it is not set, 1 when one of the low and high cut-off frequencies is set, and 2 when both are set.
Each audio block format is provided with an audio block identifier, which may include an index indicating the audio block within the audio channel. The audio block identifier may include 8 hexadecimal digits serving as the index of the block within the channel; for example, in the identifier AB_00010001_00000001, the last 8 hexadecimal digits are the index of the block within the channel, and the index of the first audio block in the channel may start from 00000001. The audio block format may also include the start time of the block and the duration of the block. If the start time is not set, the audio block is considered to start at 00:00:00.0000; the time format "hh:mm:ss.zzzz" may be used, where "hh" represents hours, "mm" minutes, "ss" the integer part of seconds, and "zzzz" the fractional seconds (for example, milliseconds). If the duration is not set, the block lasts for the duration of the entire audio channel. If there is only one audio block format in the audio channel format, it is assumed to be a "static" object lasting for the duration of the audio channel, so the start time and the duration of the block should be ignored. If multiple audio block formats are included in an audio channel format, they are assumed to be "dynamic" objects, so both the start time and the duration of each block should be used. The audio block format attribute settings are shown in Table 3.
TABLE 3
[Table 3: images not reproduced; they list the attributes and sub-elements of the audio block format and whether each is required.]
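The block-identifier indexing and the static/dynamic rule above can be sketched as follows (class and field names are assumptions for illustration, not the disclosure's own API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AudioBlockFormat:
    """One audio block within a channel, per the rules in the text."""
    block_id: str                   # e.g. "AB_00010001_00000001"
    start: Optional[str] = None     # "hh:mm:ss.zzzz"; None -> 00:00:00.0000
    duration: Optional[str] = None  # None -> lasts for the whole channel

    @property
    def index(self) -> int:
        # The last 8 hex digits index the block within the channel.
        return int(self.block_id.rsplit("_", 1)[1], 16)

def is_static(blocks: list) -> bool:
    """A channel with exactly one block is a 'static' object; its start
    time and duration are then ignored. More than one block -> 'dynamic'."""
    return len(blocks) == 1

first = AudioBlockFormat("AB_00010001_00000001")
print(first.index, is_static([first]))  # 1 True
```

A generator would emit indices starting from 00000001 and omit start/duration for a static channel, as the text allows.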
The types of audio channel formats may include: sound bed (channel), matrix, object, scene, and binaural channel. Embodiments of the present disclosure describe the audio channel format metadata for the scene type.
The audio channel type description information in the attribute zone is set to the scene type; the type definition may be "scene". The information in the sub-element zone is likewise set for the type definition "scene". For an audio block format whose type is defined as "scene", the audio block format defines, in addition to the information described above, scene sub-elements as its sub-elements, where the scene sub-element comprises HOA component description information.

The audio channel format metadata for the scene type applies to scene-based audio, such as HOA-based audio. In scene-based audio, a sound scene is represented by a set of coefficient signals. These coefficient signals are the linear weights of spatially orthogonal basis functions, such as spherical harmonics or circular harmonics. The scene can then be reproduced by rendering these coefficient signals to a target speaker layout or to headphones. Production is thereby separated from reproduction: mixed program material is created without knowledge of the number and locations of the target speakers.

Scene-based audio includes ambient sound (ambisonics) and HOA. Each component channel represents the sound field independently of the loudspeakers, rather than a single loudspeaker. The polar pattern of each component channel for ambisonic orders 0, 1, 2, and 3 (order 0 at the top, order 3 at the bottom) is shown in fig. 4. To convert HOA into loudspeaker signals (i.e. into channel-based signals), a set of decoding equations is used. In the audio channel format metadata of the scene type, each component may be described by a combination of degree, order, and normalization value, or by an equation. When there is no equation, the HOA component is defined by the degree, the order, and the normalization value.
The equation field, when present, is used only to provide information; it is not used for parsing when degree, order, and normalization values are present. A C-language mathematical function (e.g., "cos(a) * sin(e)") may be used for the equation element. The purpose is to allow the description of custom or experimental HOA components that cannot be described by the order, degree, and normalization parameters alone. The sub-elements defined by the audio block format for the scene type are shown in Table 4.
TABLE 4
[Table 4: images not reproduced; they list the sub-elements defined by the audio block format for the scene type and whether each is required.]
In Table 4, an element whose requirement is "yes" is mandatory, an element whose requirement is "optional" is optional, and an item whose content is "empty" need not be set.
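The order/degree bookkeeping behind the HOA component description can be illustrated with standard ambisonic arithmetic: a scene of ambisonic order N carries (N + 1)^2 coefficient channels, and each component is identified by an order n and a degree m with -n <= m <= n. The ACN channel numbering acn = n(n + 1) + m used below is one common convention and an assumption here; the text itself only says components are described by order, degree, and normalization.

```python
def num_hoa_channels(order: int) -> int:
    """Number of coefficient channels for an ambisonic scene of given order."""
    return (order + 1) ** 2

def acn_index(n: int, m: int) -> int:
    """ACN channel index for the component of order n and degree m."""
    if not -n <= m <= n:
        raise ValueError("degree m must satisfy -n <= m <= n")
    return n * (n + 1) + m

print(num_hoa_channels(3))  # 16 coefficient channels for a 3rd-order scene
print(acn_index(0, 0))      # 0: the omnidirectional (W) component
print(acn_index(1, -1))     # 1: first 1st-order component in ACN ordering
```

A metadata generator could use such helpers to validate that the order/degree pairs written into the scene sub-elements actually cover the channels of the HOA stream.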
The disclosed embodiments describe scene-based audio channels by audio channel metadata, wherein the audio channel metadata describes coefficient signals of scene components in the scene-based audio to enable reproduction of three-dimensional sound in space, thereby improving the quality of sound scenes.
Example 2
The present disclosure also provides an embodiment adapted to the above embodiment, namely a method for generating audio channel metadata. Terms with the same names and meanings are explained as in the above embodiment and have the same technical effects; details are not repeated here.
A method for generating audio channel metadata, as shown in fig. 2, comprises the following steps:
step S110, in response to a setting operation of a user on audio channel metadata, generating audio channel metadata, where the audio channel metadata includes:
the attribute zone comprises an audio channel name, an audio channel identifier and audio channel type description information;
and the sub-element zone comprises at least one audio block format and audio cut-off frequency information which are used for indicating the time domain division of the audio channel, wherein the audio block format comprises an audio block identifier and a scene sub-element, and the scene sub-element comprises scene component description information. Optionally, the scene component description information includes HOA component description information.
The setting operation of the user on the audio channel metadata may be an operation in which the user sets the relevant attributes of the audio channel metadata, for example by entering the attributes item by item; or the audio channel metadata may be generated automatically according to the user's operation of a preset metadata generation program, where the program sets all attributes of the audio channel metadata to system defaults; or the program may set some attributes to system defaults and then receive the remaining attributes from the user.
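The defaults-plus-overrides flow of step S110 can be sketched as follows. All field names here are assumptions chosen for illustration; the disclosure does not fix a serialization.

```python
from typing import Optional

# Hypothetical system-default attributes for a scene-type channel.
DEFAULTS = {
    "audioChannelName": "Scene_HOA",
    "audioChannelID": "AC_00040001",  # scene type (0004) + stream match
    "typeDefinition": "scene",
    "typeLabel": "0004",
}

def generate_channel_metadata(user_settings: Optional[dict] = None) -> dict:
    """Build channel metadata from system defaults, then apply the
    attributes the user set; user-supplied values take precedence."""
    metadata = dict(DEFAULTS)
    metadata.update(user_settings or {})
    return metadata

meta = generate_channel_metadata({"audioChannelName": "Ambience_Order3"})
print(meta["audioChannelName"], meta["typeLabel"])  # Ambience_Order3 0004
```

This covers all three variants in the text: pass every attribute (user sets everything), pass nothing (pure defaults), or pass a subset (defaults plus remaining user input).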
Optionally, the audio channel identifier includes: an audio type identifier for indicating a type of audio contained in the audio channel and an audio stream identifier for indicating a format of an audio stream contained in the audio channel.
Optionally, the audio channel type description information includes a type tag and/or a type definition.
Optionally, the audio block identifier includes an index indicating an audio block within an audio channel.
Optionally, the audio cut-off frequency information comprises audio frequencies indicating a high cut-off and/or a low cut-off.
Optionally, the scene component description information includes: the order of the scene component and the sound intensity of the scene component.
Optionally, the scene component description information further includes at least one of the following:
an equation for describing the scene component, standardized information for representing the scene component, reference distance information for representing speaker settings for near-field compensation, and screen-related information for indicating a correlation of the scene component with the screen.
The audio channel metadata generated by the method for generating the audio channel metadata in the embodiment of the disclosure can realize the reproduction of three-dimensional sound in space based on the audio channel of the scene, thereby improving the quality of the sound scene.
Example 3
Fig. 3 is a schematic structural diagram of an electronic device provided in embodiment 3 of the present disclosure. As shown in fig. 3, the electronic device includes: a processor 30, a memory 31, an input device 32, and an output device 33. The electronic device may have one or more processors 30, one processor 30 being taken as an example in fig. 3, and one or more memories 31, one memory 31 being taken as an example in fig. 3. The processor 30, the memory 31, the input device 32, and the output device 33 may be connected by a bus or other means; fig. 3 takes a bus connection as an example. The electronic device may be a computer, a server, or the like. The embodiment of the present disclosure is described in detail taking a server as the electronic device; the server may be an independent server or a cluster server.
Memory 31 is provided as a computer-readable storage medium that may be used to store software programs, computer-executable programs, and modules, such as program instructions/modules for generating audio channel metadata as described in any embodiment of the present disclosure. The memory 31 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the device, and the like. Further, the memory 31 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory 31 may further include memory located remotely from the processor 30, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 32 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function controls of the electronic device, and may include a camera for capturing images and a sound pickup device for capturing audio data. The output device 33 may include an audio device such as a speaker. The specific composition of the input device 32 and the output device 33 can be set according to actual conditions.
The processor 30 executes various functional applications of the device and data processing, i.e. generating audio channel metadata, by running software programs, instructions and modules stored in the memory 31.
Example 4
Embodiment 4 of the disclosure also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, generate audio data comprising the audio channel metadata described in embodiment 1.
Of course, the storage medium provided by the embodiments of the present disclosure contains computer-executable instructions, and the computer-executable instructions are not limited to the operations of the electronic method described above, and may also perform related operations in the electronic method provided by any embodiments of the present disclosure, and have corresponding functions and advantages.
From the above description of the embodiments, it is obvious for a person skilled in the art that the present disclosure can be implemented by software and necessary general hardware, and certainly can be implemented by hardware, but in many cases, the former is a better embodiment. Based on such understanding, the technical solutions of the present disclosure may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a robot, a personal computer, a server, or a network device) to execute the electronic method according to any embodiment of the present disclosure.
It should be noted that, in the electronic device, the units and modules included in the electronic device are merely divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be realized; in addition, specific names of the functional units are only used for distinguishing one functional unit from another, and are not used for limiting the protection scope of the present disclosure.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description herein, references to the description of the term "in an embodiment," "in yet another embodiment," "exemplary" or "in a particular embodiment," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the disclosure. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although the present disclosure has been described in detail hereinabove with respect to general description, specific embodiments and experiments, it will be apparent to those skilled in the art that some modifications or improvements may be made based on the present disclosure. Accordingly, such modifications and improvements are intended to be within the scope of this disclosure, as claimed.

Claims (10)

1. A scene-based audio channel metadata, comprising:
the attribute zone comprises an audio channel name, an audio channel identifier and audio channel type description information;
and the sub-element zone comprises at least one audio block format and audio cut-off frequency information which are used for indicating the time domain division of the audio channel, wherein the audio block format comprises an audio block identifier and a scene sub-element, and the scene sub-element comprises scene component description information.
2. The scene-based audio channel metadata according to claim 1, wherein the audio channel identifier comprises: an audio type identifier indicating the type of audio contained in the audio channel, and an audio stream identifier indicating the format of the audio stream contained in the audio channel.
3. The scene-based audio channel metadata according to claim 1, wherein the audio channel type description information comprises a type tag (typeLabel) and/or a type definition.
4. The scene-based audio channel metadata according to claim 1, wherein the audio block identifier comprises an index indicating the position of the audio block within the audio channel.
5. The scene-based audio channel metadata according to claim 1, wherein the audio cut-off frequency information indicates a high cut-off frequency and/or a low cut-off frequency.
6. The scene-based audio channel metadata according to claim 1, wherein the scene component description information comprises: the order of the scene component and the sound intensity of the scene component.
7. The scene-based audio channel metadata according to claim 6, wherein the scene component description information further comprises at least one of:
an equation describing the scene component, normalization information for the scene component, reference distance information representing a loudspeaker setting for near-field compensation, and screen-related information indicating a correlation of the scene component with the screen.
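The data structure defined by claims 1-7 closely parallels the audioChannelFormat element of the ITU-R BS.2076 Audio Definition Model with a scene-based (HOA) type. A minimal sketch of that structure in Python follows; the class and field names (SceneComponent, AudioBlockFormat, etc.) are chosen here for illustration and are not taken from the patent, and the mapping of "sound intensity" to an ADM-style degree field is an assumption:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SceneComponent:
    """Scene-component description carried in each audio block (claims 6-7)."""
    order: int                            # order of the scene component
    degree: int                           # "sound intensity" of the component (assumed ADM-style degree)
    equation: Optional[str] = None        # optional equation describing the component
    normalization: Optional[str] = None   # e.g. "SN3D", "N3D" (normalization information)
    nfc_ref_dist: Optional[float] = None  # near-field-compensation reference distance, metres
    screen_ref: Optional[bool] = None     # correlation of the component with the screen

@dataclass
class AudioBlockFormat:
    """One time-domain division of the audio channel (claims 1 and 4)."""
    block_id: str          # audio block identifier, includes an index within the channel
    scene: SceneComponent  # the scene sub-element

@dataclass
class SceneChannelMetadata:
    """Scene-based audio channel metadata (claim 1)."""
    # attribute region
    name: str
    channel_id: str                 # encodes audio type and stream format (claim 2)
    type_label: str = "0004"        # "0004" denotes scene-based audio in BS.2076
    type_definition: str = "HOA"
    # sub-element region
    blocks: List[AudioBlockFormat] = field(default_factory=list)
    freq_low: Optional[float] = None   # low cut-off frequency, Hz (claim 5)
    freq_high: Optional[float] = None  # high cut-off frequency, Hz
```

Under this sketch, one channel carries a sequence of time-ordered blocks, each describing a single scene (Ambisonics) component by its order and degree.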
8. A method of generating audio channel metadata, configured to generate the scene-based audio channel metadata of any one of claims 1 to 7.
9. An electronic device, comprising: a memory and one or more processors;
the memory being configured to store one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to generate audio data comprising the scene-based audio channel metadata of any one of claims 1 to 7.
10. A storage medium containing computer-executable instructions which, when executed by a computer processor, generate the scene-based audio channel metadata of any one of claims 1 to 7.
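Claims 8-10 cover generating such metadata. In the ITU-R BS.2076 model this kind of metadata is serialized as XML; the sketch below shows one plausible serialization using element and attribute names modeled on the ADM's audioChannelFormat/audioBlockFormat vocabulary. It is an illustrative assumption, not the patent's actual encoder, and the function name and tuple layout are invented for the example:

```python
import xml.etree.ElementTree as ET

def generate_channel_metadata_xml(name, channel_id, blocks):
    """Serialize scene-based channel metadata to an ADM-style XML fragment.

    `blocks` is an iterable of (block_id, order, degree) tuples. Element and
    attribute names follow ITU-R BS.2076 conventions; this is a sketch only.
    """
    chan = ET.Element("audioChannelFormat", {
        "audioChannelFormatID": channel_id,
        "audioChannelFormatName": name,
        "typeLabel": "0004",        # scene-based (HOA) type in BS.2076
        "typeDefinition": "HOA",
    })
    for block_id, order, degree in blocks:
        blk = ET.SubElement(chan, "audioBlockFormat",
                            {"audioBlockFormatID": block_id})
        ET.SubElement(blk, "order").text = str(order)
        ET.SubElement(blk, "degree").text = str(degree)
    return ET.tostring(chan, encoding="unicode")
```

For example, `generate_channel_metadata_xml("Ambi_W", "AC_00040001", [("AB_00040001_00000001", 0, 0)])` yields a single-block channel description for the zeroth-order component.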
CN202111021066.5A 2021-09-01 2021-09-01 Scene-based audio channel metadata and generation method, device and storage medium Pending CN113923264A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111021066.5A CN113923264A (en) 2021-09-01 2021-09-01 Scene-based audio channel metadata and generation method, device and storage medium


Publications (1)

Publication Number Publication Date
CN113923264A true CN113923264A (en) 2022-01-11

Family

ID=79233665

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111021066.5A Pending CN113923264A (en) 2021-09-01 2021-09-01 Scene-based audio channel metadata and generation method, device and storage medium

Country Status (1)

Country Link
CN (1) CN113923264A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106104679A (en) * 2014-04-02 2016-11-09 杜比国际公司 Utilize the metadata redundancy in immersion audio metadata
CN107925797A (en) * 2015-08-25 2018-04-17 高通股份有限公司 Transmit the voice data of coding


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ITU-R: "Audio definition model", Recommendation ITU-R BS.2076-1, BS Series, Broadcasting service (sound), pages 3-5 *

Similar Documents

Publication Publication Date Title
JP2014142475A (en) Acoustic signal description method, acoustic signal preparation device, and acoustic signal reproduction device
WO2014160717A1 (en) Using single bitstream to produce tailored audio device mixes
CN114915874B (en) Audio processing method, device, equipment and medium
CN113905321A (en) Object-based audio channel metadata and generation method, device and storage medium
CN113923264A (en) Scene-based audio channel metadata and generation method, device and storage medium
Kares et al. Streaming immersive audio content
CN113905322A (en) Method, device and storage medium for generating metadata based on binaural audio channel
CN114512152A (en) Method, device and equipment for generating broadcast audio format file and storage medium
CN114339297A (en) Audio processing method, device, electronic equipment and computer readable storage medium
CN113923584A (en) Matrix-based audio channel metadata and generation method, equipment and storage medium
CN114121036A (en) Audio track unique identification metadata and generation method, electronic device and storage medium
CN113938811A (en) Audio channel metadata based on sound bed, generation method, equipment and storage medium
CN114051194A (en) Audio track metadata and generation method, electronic equipment and storage medium
CN114143695A (en) Audio stream metadata and generation method, electronic equipment and storage medium
CN114023339A (en) Audio-bed-based audio packet format metadata and generation method, device and medium
CN114203189A (en) Method, apparatus and medium for generating metadata based on binaural audio packet format
CN114203188A (en) Scene-based audio packet format metadata and generation method, device and storage medium
Ando Preface to the Special Issue on High-reality Audio: From High-fidelity Audio to High-reality Audio
CN113889128A (en) Audio production model and generation method, electronic equipment and storage medium
CN114530157A (en) Audio metadata channel allocation block generation method, apparatus, device and medium
CN115190412A (en) Method, device and equipment for generating internal data structure of renderer and storage medium
CN114023340A (en) Object-based audio packet format metadata and generation method, apparatus, and medium
CN114360556A (en) Serial audio metadata frame generation method, device, equipment and storage medium
CN115038029A (en) Rendering item processing method, device and equipment of audio renderer and storage medium
KR20190081163A (en) Method for selective providing advertisement using stereoscopic content authoring tool and application thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination