TWI317247B - Removing time delays in signal paths - Google Patents

Removing time delays in signal paths Download PDF

Info

Publication number
TWI317247B
TWI317247B TW095136564A TW95136564A TWI317247B TW I317247 B TWI317247 B TW I317247B TW 095136564 A TW095136564 A TW 095136564A TW 95136564 A TW95136564 A TW 95136564A TW I317247 B TWI317247 B TW I317247B
Authority
TW
Taiwan
Prior art keywords
signal
domain
downmix
downmix signal
spatial information
Prior art date
Application number
TW095136564A
Other languages
Chinese (zh)
Other versions
TW200723932A (en
Inventor
Yang Won Jung
Hee Suk Pang
Hyen O Oh
Dong Soo Kim
Jae Hyun Lim
Original Assignee
Lg Electronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020060078223A external-priority patent/KR20070037986A/en
Priority claimed from KR1020060078219A external-priority patent/KR20070074442A/en
Application filed by Lg Electronics Inc filed Critical Lg Electronics Inc
Publication of TW200723932A publication Critical patent/TW200723932A/en
Application granted granted Critical
Publication of TWI317247B publication Critical patent/TWI317247B/en

Links

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S5/00Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation 
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/16Vocoder architecture
    • G10L19/167Audio streaming, i.e. formatting and decoding of an encoded audio signal representation into a data stream for transmission or storage purposes
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/16Vocoder architecture
    • G10L19/18Vocoders using multiple modes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S7/00Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30Control circuits for electronic adaptation of the sound field

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Stereophonic System (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Oscillators With Electromechanical Resonators (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Synchronisation In Digital Transmission Systems (AREA)

Description

1317247 九、發明說明: 【發明所屬之技術領域】 本發明係關於一種訊號的處理方法,特別係有關一種音頻訊 號的處理方法。 【先前技術】 多通道音頻編碼(通常稱為空間音頻編碼)係從傳輸的降混 汛唬中擷取多通道音頻訊號的空間影像,編碼為一組壓縮的空間 參數’用於組合高品質的多通道表示。 多通道音頻系統支援若干編碼方案,其中,由於進行訊號處 J如4域至頻域之轉換)’降混訊號可相對於其他降混訊號與 /或對應的空間參數發生延遲。 【發明内容】 間貪訊係藉由轉換降混訊 【實施方式】 本發明的主要目的在於提供一種音頻訊號之處理方法,係於 :時域之降混訊號及頻域之空間f訊後,將時域之降混訊號轉 少=頻域之降混訊號’接著補償經键之降混峨及空間資訊至 =中之時序延遲’其中此時序延遲係祕轉換降混訊號所 〜,取触合_叙降混職妓_訊,財麵合之空 號所耗#的_而產生時序延遲 1317247 因為音頻訊號的訊號處理可能位於若干定義域中,尤其是時 域,所以需要考慮時序校準以適當地處理音頻訊號。 因此,音頻訊號之定義域(d〇main)可於音頻訊號處理中進行轉 換。音頻訊號之定義域轉換可包含時/頻(Time/Frequency,t/f) (complexity domain conversion) 〇 efx 頻域轉換至少包含時域峨至頻域訊號的轉換及賴訊號至時域 訊號的轉換其巾之-。_度域轉絲示賴音頻峨處理的作 業複雜度而進行的域轉換。此外,複雜度域轉換還包含實數頻域 之訊號轉換為複數躺之訊號,複數頻域之峨轉換為實數頻域 之訊號等。如果處理音頻峨料考慮時序解,將可能導致音 頻品質的退化。延遲處理可執行時序校準卫作。延遲處理二 少包含編碼延遲轉碼延遲。編碼輯表示鮮真^綱所引起 的訊號延遲。解碼延遲係表示於訊號解蘭間所導人的即時時序 延遲。 解釋本發明之前,本發明之·書中使用的術語定義如下。 “純輸入域’,(d_mix input domainM系表示可被多通道解石馬 單元所接受之降混訊制絲域,其〇通鞠解元用^ 多通道音頻訊號。 “殘餘輸人域,,(—input domain)係表示可被多通道解『 元所接受之殘餘訊號的定義域。 ”、、早 “時序串列資料,’(time-Series data)係表示資料必須與多通道音 1317247 頻訊號時序同步或者需要時序校準。“g夺序串列資料,,的實例包含 用於動態影像(moving picture )、靜態影像(still image )及文字(text) 專的資料。 “前導’’(leading)表示將訊號提前一特定時序之過程。 “滯後’’(lagging)表示將訊號延遲一特定時序之過程。 “空間資訊’’(spatial information)表示用以合成多通道音頻訊號 的資訊。空間資訊可為空間參數,.其包含通道位準差值(channei level difference,CLD )、通道間同調(inter_channei coherences; ICC ) 及通道預測係數(channel prediction coefficients ; CPC)等,但非 用以限疋本發明之應用範嘴。通道位準差值(chapel ievei difference ; CLD)表示兩個通道之間的能量差值;通道間同調 (inter-channel coherences ; ICC)表示兩個通道之間的相關性;通 道預測係數(channel prediction coefficients ; CPC),係用以由兩個 通道產生三個通道之預測係數。 本說明書所描述的音頻訊號解碼技術係為可自本發明獲益之 訊號處理的實例。本發明還可應用於其他類型的訊號處理(例如 視頻訊號處理)。本說明書描述的實施例可修改為包含各種數目的 訊號’其中各訊號可以任何種類的定義域表示,包含時域 正交鏡相濾、波器(Quadrature Mirror Filter ; QMF )、修正式離散餘 弦轉換(Modified Discreet Cosine Transform ; MDCT)及複雜声 (complexity)等,但非用以限定本發明之應用範疇。 1317247 本發明實關之音舰號之紐方法包含透馳合降混訊號 及寧間資訊以產生多通道音頻訊號。其中可利用複數個定義域以 表不降混訊號(例如時域、正交鏡相濾波器或改進離散餘弦轉 換)。由於定義域之間的轉換將於聲混訊號的訊號路經中引入時序 延遲’因此需要補償步驟以補償降混峨與對應於降混訊號的空 間資訊之時序同步錄。補償時辆步差值包含延遲降混訊號或 空間資訊。下面將結合_描_於補償兩個職之間與/或訊 號及參數之間的時序同步差值的若干實施例。 本乳明書所述之“裝置,,並不限制所描述的實施例為硬體。本 說明書描述的實施例可以硬體、軟體、勤體或任何上述之組合而 實施。 ° 本說明書職之實施例可藉由電腦可讀取顧上的指令而執 行’當此指令透過處理器(例如,電腦處理器)執行時,將可使 得處理器完成碱處鱗業。術語“電腦可讀取髓”係指參與提 供指令至處理器以供執行的任何媒體,包含非揮發性媒體(例如 光盤或磁盤)、揮發性舰(例如記憶體)以及傳輸介質等,㈣ 用以限定本發明之應用_。其中,傳輸介質包含同轴電豐 (_ial cable)、銅線以及光纖’但非用以限定本發明之應用範 .。此外’傳輸介質還可採用聲波、光波或無線電波等形式。 「第1圖」麻係為倾本發明實施例之解辭頻訊號之袭 置之方塊圖。 .!317247 請茶考「第1圖」,依照本發明實施例,解碼音頻訊號的裝置 I 3降此解碼單元1〇〇以及多通道解碼單元2〇〇。 降混解碼單元1GG包含正交鏡相濾波器域料域轉換單元 0圖式之只例中’降混解碼單元1〇〇傳輸經正交鏡相濾波器域 處理的降混訊號XQ1至多通道解碼單元·,無魏—步訊號處 理。降混解碼單元100帛可傳輸時域的降混訊號XT1至多通道解 碼單元200,並使用轉換單元11〇將降混訊號XQ1從正交鏡相濾 波為域轉換至時域,從*產生時_降混減XT1。將音頻訊號 攸正父鏡相舰II域轉換至時域的技術眾所周知,且已加入公開 的曰頻訊號處理標準(例如mpeg)中。 多通道解碼單it200利用降混訊號χτι或XQ1以及空間資訊 SI1或SI2以產生多通道音頻訊號又^^。1317247 IX. Description of the Invention: [Technical Field] The present invention relates to a method for processing a signal, and more particularly to a method for processing an audio signal. [Prior Art] Multi-channel audio coding (commonly referred to as spatial audio coding) is the acquisition of a spatial image of a multi-channel audio signal from a transmitted downmix, encoded as a set of compressed spatial parameters 'for combining high quality Multi-channel representation. The multi-channel audio system supports a number of coding schemes in which the down-mix signal can be delayed relative to other down-mix signals and/or corresponding spatial parameters due to the conversion of the signal location (such as the 4-domain to the frequency domain). SUMMARY OF THE INVENTION The present invention aims to provide an audio signal processing method, which is: after the down-mixing signal in the time domain and the spatial frequency in the frequency domain, Turn down the down-mix signal in the time domain = the down-mix signal in the frequency domain' and then compensate the delay of the key and the spatial information to the timing delay in = 'where the timing delay is the secret conversion downmix signal~, touch _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ Process audio signals appropriately. Therefore, the domain of the audio signal (d〇main) can be converted in the audio signal processing. The definition of the audio signal can include time/frequency (t/f) (complexity domain conversion). The 〇efx frequency domain conversion includes at least the conversion from the time domain to the frequency domain signal and the conversion of the signal to the time domain signal. Its towel -. The domain conversion is performed by the processing complexity of the audio processing. In addition, the complexity domain conversion also includes the signal converted into a complex number in the real frequency domain, and converted into a real frequency domain signal in the complex frequency domain. If the audio data is processed in consideration of the timing solution, it may cause degradation of the audio quality. Delay processing can perform timing calibration. Delay processing 2 contains code delay delay transcoding delay. The coded series indicates the signal delay caused by the simplification. The decoding delay is expressed as the instantaneous timing delay of the signal introduced by the signal. Prior to the explanation of the present invention, the terms used in the book of the present invention are defined as follows. "Pure input field", (d_mix input domainM indicates the downmixed silk domain that can be accepted by the multi-channel solution stone unit, and the multi-channel audio signal is used by the 〇 鞠 鞠 “. Input domain) indicates the domain of the residual signal that can be decoded by the multi-channel. The early "time-series data" indicates that the data must be synchronized with the multi-channel 1317247 frequency signal. Synchronization or timing calibration is required. Examples of "g-sequence data" include data for moving pictures, still images, and text. "leading" The process of advancing a signal by a specific timing. "Lagging" means the process of delaying a signal by a specific timing. "Spatial information" means information used to synthesize a multi-channel audio signal. Spatial parameters, which include channel level difference (CCN), inter-channel coherence (ICC), and channel prediction coefficient (channel prediction) Coefficients; CPC), etc., but not limited to the application of the present invention. The channel position difference (CLD) represents the energy difference between two channels; inter-channel coherences ; ICC) represents the correlation between two channels; channel prediction coefficients (CPC) are used to generate three channels of prediction coefficients from two channels. The audio signal decoding technique described in this specification is Examples of signal processing that may benefit from the present invention. The invention is also applicable to other types of signal processing (e.g., video signal processing). The embodiments described herein may be modified to include various numbers of signals 'where each signal may be any The definition of the domain of the category, including time domain orthogonal mirror filter, Qapex, Modified Discreet Cosine Transform (MDCT) and complex sound, but not used The application scope of the present invention is limited. 1317247 The method of the sound ship number of the present invention includes a turbo-combination and a mixed-mix signal and Ningjian Information to generate multi-channel audio signals, which can utilize multiple fields to represent downmix signals (such as time domain, quadrature mirror filter, or modified discrete cosine transform). Since the transition between the domains introduces a timing delay in the signal path of the sound mixed signal, a compensation step is required to compensate for the timing synchronization of the downmix and the spatial information corresponding to the downmix signal. The step difference value during compensation includes delayed downmix signal or spatial information. Several embodiments of the timing synchronization difference between the two jobs and/or signals and parameters will be combined below. The "apparatus" described in this nipple does not limit the described embodiments to hardware. The embodiments described in this specification can be implemented in hardware, software, work, or any combination of the above. The embodiment can be executed by a computer readable command "when the instruction is executed by a processor (for example, a computer processor), the processor can be used to complete the alkali squad. The term "computer readable pith" "" means any medium that participates in providing instructions to the processor for execution, including non-volatile media (such as optical or magnetic disks), volatile ships (such as memory), and transmission media, etc., (d) to define the application of the present invention. The transmission medium includes a coaxial cable, a copper wire, and an optical fiber, but is not intended to limit the application of the present invention. In addition, the transmission medium may also be in the form of sound waves, light waves, or radio waves. Figure 1 is a block diagram of the attack signal of the embodiment of the present invention. 317247 Please refer to the "Fig. 1", in accordance with an embodiment of the present invention, the apparatus for decoding an audio signal I 3 is lowered by the decoding unit 1 and the multi-channel decoding unit 2A. The downmix decoding unit 1GG includes an orthogonal mirror phase filter domain region conversion unit 0. In the example of the example, the downmix decoding unit 1 transmits the downmixed signal XQ1 to multichannel decoding processed by the orthogonal mirror phase filter domain. Unit·, no Wei-step signal processing. The downmix decoding unit 100 can transmit the down-mix signal XT1 to the multi-channel decoding unit 200 in the time domain, and use the conversion unit 11 to phase-filter the down-mix signal XQ1 from the orthogonal mirror to the domain to the time domain, from the time of generation_ Reduce and reduce XT1. The technique of converting the audio signal to the time domain of the 父 父 父 父 父 众所周知 众所周知 众所周知 II II II II 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 The multi-channel decoding unit it200 uses the downmix signal χτι or XQ1 and the spatial information SI1 or SI2 to generate a multi-channel audio signal.

「第2圖」所示係為依照本發明另—實補之解碼音頻訊號 的裝置之方塊圖。 U μ參考第2目」’依照本發明另—實施例,解碼音頻訊號的 裝置包含降混解碼單幻⑻a、多通道解解元施以及修正式離 散餘弦轉換域至JL交目錄H_解元施。 降混解碼單元職包含修正式離散餘___ 元腕。圖式之實例中’降混解碼單元⑽a輸出修正式離散· 轉換域處理的降混訊號如。降混解碼單元驗還輸出時域的哼 混訊號XT2,並制修正式離散餘轉換域至時域魏單元⑽ 1317247 將修正式離散餘弦轉換域的Xm轉換至時域,從而產生時域的降 混訊號XT2。 時域的降混訊號XT2傳輸至多通道解碼單元臟,而修正式 離散餘弦轉換域的降混訊號Xm則通過域轉換單元施,並轉換 為正交鏡減波輯的降混訊號XQ2。_經過轉換的降驗號 XQ2將傳輸至多通道解碼單元200a。 多通道解碼單元200a使用經傳輸的降混訊號χτ2或耶以 及空間資訊SI3或SI4以產生多通道音頻訊號。 「第3圖」所示係為依照本發明另一實施例之解碼音頻訊號 的裝置之方塊圖。 請參考「第3圖」’依照本發明另一實施例,解碼音頻訊號的 裝置包含降此解碼單元l〇〇b,多通道解碼單元200b,殘餘解碼單 元400b以及修正式離散餘弦轉換域至正交鏡相濾波器域轉換單元 500b 〇 降混解碼單元l〇〇b包含正交鏡相濾波器域至時域轉換單元 110b。降混解碼單元i〇〇b傳輸經正交鏡相濾波器域處理的降混訊 旒XQ3至多通道解碼單元2〇〇b,無需進一步訊號處理。降混解碼 單元100b尚傳輸降混訊號χΤ3至多通道解碼單元2〇〇b,並使用 正交鏡相濾波器域至時域轉換單元110b將降混訊號Xq3從正交 鏡相濾波器域轉換至時域,從而產生降混訊號ΧΤ3。 實施例中’經過編碼的殘餘訊號RB輸入殘餘解碼單元4〇〇b 10 1317247 中進打處理。本實施例中,經過處理的殘餘訊號RM係為修正式 離散餘弦轉換域的訊號。例如,殘餘訊號可為音頻編碼應用(例 如MPEG)中經常使用的預測誤差訊號。 接下來彡過修正式離散餘弦轉換域至正交鏡相遽波器域轉 換單兀5〇〇b將修正式離散餘弦轉換域的殘餘訊號跑轉換為正交 鏡相滤波购麵職RQ,織傳輸至錢道解碼單元鳩。 如果殘餘解碼單元魏+所處理及輸出之殘餘訊號的定義域 ,為殘餘輸域,則歧猶_細臟可雜以通道解瑪 單元200b,無需進行定義域轉換程序。 ^考第3圖」’貫施例中,修正式離散餘弦轉換域至正交 鏡相濾波器域轉換單元編將修正式離散餘弦轉換域的殘餘訊號 RM轉換為正交鏡減波器域的殘餘訊號RQ。尤其是,修正式離 散餘弦轉換域至正錢補波輯轉解元5㈣翻㈣輸出自 2餘解瑪單元娜的殘餘訊細轉換為正交鏡相濾波器域的殘 餘訊號RQ。 如前所述,因為複數個降混訊號定義域之存在,導致降混訊 叙Φ及功與空間資訊SI5及服之間的時序同步差值,所以 必須進行補償。以下·述用以爾鱗同步差值的各種實施例。 依照本發明實關,可藉由解碼經編·音触號以產生多 ^逼音頻訊號’其中此經編碼的音頻訊號包含降混訊號及空間資 1317247 解碼過財’岐《及如資贿财_纽,可造成 不同的時序延遲。 、 編=過程中,降混訊號及空間資訊可編竭為時序同步。 本貝加例中’降混訊號經過降混解碼單元⑽、黯或藤 的處理被傳輸至多通道解石馬單元·、2Q()a 〇_,__ 訊倾細定義域以時序同步化降混訊號及空間資訊。 實施财’降混編碼識別碼可包含於經過編碼的音頻訊號 中,用以勤j降混訊號與空間資訊之間的時序同步匹配所在的定 義域。此實施例中,降混編碼識別碼可指示降混訊號的解碼方案。 例如^果降混編碼識別碼識別出先進音頻編碼 AuchoCodmg ; AAC)的解碼轉’ _編侧音親號則可透過 先進音頻解碼器進行解碼。 —實補t,降混編碼識酬還可用於判斷使降混訊號及空間 資訊之間時序同步匹配之定義域。 本發明實施_音親號的處财法中,降混峨可於不同 時序同姐輯定義域巾進行處理,魄傳輸以通道解碼單元 200、200a或200b。本實施例中’解石馬單元細施或篇將 對降混訊號及空間資訊之間的時序同步進行補償,以產生多通道 音頻訊號xm、XM2及XM3 〇Fig. 2 is a block diagram showing an apparatus for decoding an audio signal in accordance with the present invention. U μ refers to the second item "In accordance with another embodiment of the present invention, the apparatus for decoding an audio signal includes a downmix decoding single magic (8) a, a multi-channel solution element, and a modified discrete cosine transform domain to a JL intersection H_demodulation Shi. The downmix decoding unit includes a modified discrete residual ___ yuan wrist. In the example of the figure, the downmix decoding unit (10)a outputs the reduced-difference signal processed by the modified discrete transform domain. The downmix decoding unit verifies the 时 mixed signal XT2 of the output time domain, and converts the modified discrete residual transform domain to the time domain Wei unit (10) 1317247. Converts the Xm of the modified discrete cosine transform domain to the time domain, thereby generating a time domain drop. Mixed signal XT2. The down-mix signal XT2 in the time domain is transmitted to the multi-channel decoding unit dirty, and the down-mix signal Xm of the modified discrete cosine transform domain is applied by the domain conversion unit and converted into the down-mix signal XQ2 of the orthogonal mirror subtraction series. The converted converted test number XQ2 is transmitted to the multi-channel decoding unit 200a. The multi-channel decoding unit 200a uses the transmitted downmix signal χτ2 or the spatial and spatial information SI3 or SI4 to generate a multi-channel audio signal. Fig. 3 is a block diagram showing an apparatus for decoding an audio signal in accordance with another embodiment of the present invention. Please refer to "FIG. 3". According to another embodiment of the present invention, an apparatus for decoding an audio signal includes dropping the decoding unit 100b, the multi-channel decoding unit 200b, the residual decoding unit 400b, and the modified discrete cosine transform domain to positive The mirror phase filter domain converting unit 500b 〇 downmix decoding unit 100b includes an orthogonal mirror phase filter domain to a time domain converting unit 110b. The downmix decoding unit i〇〇b transmits the downmix 旒XQ3 to multichannel decoding unit 2〇〇b processed by the orthogonal mirror phase filter domain without further signal processing. The downmix decoding unit 100b also transmits the downmix signal χΤ3 to the multichannel decoding unit 2〇〇b, and uses the orthogonal mirror phase filter domain to the time domain converting unit 110b to convert the downmix signal Xq3 from the orthogonal mirror phase filter domain to Time domain, resulting in a downmix signal ΧΤ3. In the embodiment, the encoded residual signal RB is input to the residual decoding unit 4〇〇b 10 1317247 for processing. In this embodiment, the processed residual signal RM is a signal of a modified discrete cosine transform domain. For example, the residual signal can be a prediction error signal that is often used in audio coding applications such as MPEG. Next, the modified discrete cosine transform domain to the orthogonal mirror phase chopper domain conversion unit 兀5〇〇b converts the residual signal of the modified discrete cosine transform domain into orthogonal mirror phase filtering. Transfer to the money channel decoding unit 鸠. If the domain of the residual signal processed and output by the residual decoding unit is the residual input domain, the channel can be mixed with the channel decoding unit 200b without performing a domain conversion procedure. ^考图3"" In the example, the modified discrete cosine transform domain to the orthogonal mirror phase filter domain conversion unit converts the residual signal RM of the modified discrete cosine transform domain into the orthogonal mirror subtractor domain Residual signal RQ. In particular, the modified discrete cosine transform domain is converted to a positive residual wave. The residual signal of the 2nd solution is converted to the residual signal RQ of the orthogonal mirror phase filter domain. As mentioned above, because of the existence of a plurality of downmix signal definition domains, the downmix syntax and the timing synchronization difference between the power and spatial information SI5 and the service are required, so compensation must be performed. The following describes various embodiments of the sigma synchronization difference. According to the present invention, the encoded audio signal can be generated by decoding the warp-knitted touch signal, wherein the encoded audio signal includes the downmix signal and the space resource 1317247 decoded the rich '岐' and the bribe _ New, can cause different timing delays. In the process of editing, the downmix signal and spatial information can be exhausted into timing synchronization. In the Benbega example, the 'downmix signal' is processed by the downmix decoding unit (10), 黯 or 藤, and is transmitted to the multi-channel calculus unit, 2Q()a 〇_, __ Signal and space information. The implementation of the degraded code identification code may be included in the encoded audio signal, and used to define the definition field in which the timing synchronization between the mixed signal and the spatial information is matched. In this embodiment, the downmix coded identification code may indicate a decoding scheme of the downmix signal. For example, the degraded code identification code identifies the advanced audio code AuchoCodmg; AAC) decoding turn _ _ side tone number can be decoded by the advanced audio decoder. - Real complement t, downmix code recognition can also be used to determine the domain that matches the timing synchronization between the downmix signal and the spatial information. In the method of implementing the _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ In this embodiment, the 'synthesis unit" will compensate for the timing synchronization between the downmix signal and the spatial information to generate multi-channel audio signals xm, XM2 and XM3.

下面結合「第1圖i及「篦4闻 A 弟4圖」以解釋降混訊號與空間資 訊之間時序同步差值之補償方法。 12 .^17247 「第4圖」係為「第1圖」所示的多通道解碼單S 200的方 塊圖。 月參考第1圖」及帛4圖」,本發明實施例的音頻訊號的 處理方法中,降混解碼單元100 (參考「第i圖」)處理降混訊號, 然後以兩種定義域之-的形式傳輪至多通道解碼單元200。本實施 2中’假設降混訊號及空間資訊係時序同步匹配於正交鏡相遽波 器域。也可能是其它的定義域。 第4圖」所不之貫施例中,經正交鏡相濾波器域處理的降 混訊號XQ1傳輸至多通道解碼單元200以進行訊號處理。 轉輸的降混訊號XQ1與空間資訊組合於多通道產生單元 230 ’並產生多通道音頻訊號XjVQ。 ^本貫把例中’空間資訊SI1經過時序延遲後與降混訊號XQ1 ' 此日守序延遲對應於編碼程序的時序同步。此延遲可為編碼 7遲。由於空間資訊SI1與降混訊號χφ已於編石馬過程進行時序 同步匹配’因此無韻殊關步匹配程序就可產生多通道音頻訊 唬。換言之,本實施例中,空間資訊SI1無需進行解碼延遲。 除了降混訊號卿之外,經時域處理崎混訊號XT1也傳輸 至夕通道解碼單元200以進行訊號處理。如「第i圖」所示,透 =交鏡相滤波器域至時域轉換單力110將正交鏡相滤波器域的 降混崎XQ1轉換树域崎混減XT1,鹏域的降混訊號 XT1被傳送至多通道解碼單元200 〇 13 1317247 請參考「第4圖」’透過時域至正交鏡相濾波器域轉換單元別 將經傳輸的降混訊號xT1轉換為正交鏡相濾波器域的降混訊號 Xql。 時域的降混訊號XT1傳輸至多通道解碼單元2〇〇時,至少降 混訊號xql及空間資訊SI2其中之一可於時序延遲補償完成後被 傳送至多通道產生單元230。 口口透過組合經傳輸的降混訊號Xql,與空間資訊犯,,多通道產 生單元230可產生多通道音頻訊號幻^丨。 由於空間資訊與降混訊號之間係於編碼軸行正交鏡相滤波 益域之時序同步匹配’所以至少降混訊號邮及空間資訊犯其 中之-應該完成時序延遲補償。經過定義域轉換的降混訊號邮 於訊號延遲處理單元220中補償失配的時序同步差值後,可輸入 多通道產生單元230。 補償時序时紐財法係透過時相步差贿導降混訊號 邮。本實施例中,時序同步差值係為正交鏡相舰器域至時域轉 換早元no所產生的延遲時序_域至正域減波輯轉換單 元210所產生的延遲時序之和。 也可透過補償空間資訊SI2的時序延遲來補償時序同步差 值例如,空間#訊SI2於空間資訊延遲處理單元⑽中進行 序同步差值之遲滞,絲被傳輪至多通道產生單元咖。 實質上被㈣簡延遲㈣對胁失崎序同步差 14 1317247 •,值與匹配時序同步的延遲時序之和。換言之,經過延遲的空間資 訊,過編碼延遲以及解碼延遲而被延遲。這個和也對應於降混解 ,碼早:100 (參考「第1圖」)中降混訊號與空間資訊之間的時序 同步差值與在多通道解碼單元勘中所產生㈣序同步差值之和。 ▲可根據濾波器(例如正交鏡相濾波器或混合濾波器組)的性 能及延遲決定實質上被延遲的空間資訊SI2之延遲值。 例如,考慮到濾波器的性能及延遲,空間資訊的延遲值可為 961個%序取樣。分析空間資訊的延遲值時,如果降混解碼單元 100所產生的時序同步差值為257個時序取樣,則多通道解碼單元 200所產生的時序同步差值為7〇4個時序取樣。雖然可用時序取樣 單位表示延遲值,也可用時槽單位表示。 第5圖」所示係為「第2圖」所示之多通道解媽單元2〇〇a 的方塊圖。 叫參考「第2圖」和「第5圖」’本發明實施例之音頻訊號的 處理方法中,降混解碼單元100a處理的降混訊號可以兩種定義域 其中之一的形式傳輸至多通道解碼單元2〇〇a。本實施例中,假設 降混汛號及空間資訊係時序同步匹配於正交鏡相滤波器域。也可 能是其它的定義域。如果音頻訊號的降混訊號及空間資訊係匹配 於不同於時域之定義域’此種訊號也可進行處理。 如「第2圖」所示,經時域處理的降混訊號χτ2傳輸至多通 道解碼單元200a以進行訊號處理。 15 1317247 透過修正式離散餘弦轉換域至時域轉換單元ll〇a,修正式離 散餘弦轉換域的降混訊號Xm將被轉換為時域的降混訊號χΤ2。 然後,經過轉換的降混訊號ΧΤ2將被傳輸至多通道解碼單元 200a。 透過%r域至正父鏡相濾波器域轉換單元,經傳輸的降混 訊號XT2將被轉換為正交鏡相濾波器域的降混訊號,然後傳 輸至多通道產生單元23〇^ 經傳輸的降混訊號邮與空間資訊SI3組合於多通道產生單 兀230a,以產生多通道音頻訊號χΜ2。 本只靶例中’空間資訊SB經過與編碼時序同步相對應的時 序L遲處理後’與降混訊號Xq2進行組合。此延遲可為編碼延遲。 口為門資Λ SB與p奢混訊號Xq2已於編碼程序進行時序同步匹 =所以2而特殊的同步匹配程序即可產生多通道音頻訊號。換 。之本只知例中,空間資訊SI3不需要進行解碼延遲。 本只%例中之正父鏡相濾波器域處理的降混訊號Xq2 將傳:至多通道解解元聽以進行訊號處理。 -4離政餘弦轉換域處理的降混訊號細係自降混解碼 =00^!^ __離散餘弦轉換域至正交鏡相滤波器 1祕I^300a以將輪出的降混訊號Xm轉換為正交鏡相濾、波器 :、孔遗XQ2。然S,經過轉換的降混訊號則被傳輪至 多通道解碼單元如㈨。 16 1317247 , 當正交鏡相濾波器域的降混訊號XQ2傳輸至多通道解碼單元 -200a時’至少降混訊號XQ2與空間資訊SM其中之一可於完成時 序延遲補償後,被傳輸至多通道產生單元23〇a。 多通運產生單元230a透過將經傳輪的降混訊號Xq2,及空間 資訊SI4’組合在一起,以產生多通道音頻訊號_2。 由於空間貢訊與降混訊號之間係於編碼程序時進行時域之時 序同步匹配’所以至少降混訊號XQ2與空間資訊SI4其中之一應 已完成時序延遲補償。經過定義域轉換的降混訊號耶於訊號延 遲處理單元220a中補償失配的時序同步差值後,可輸入至多通道 產生單元230a。 補償時序同步差_方法鱗過時序畔差偏延遲降混訊 號项2。本實施例中,時序同步差值係為修正式離散餘弦轉換域 至時域轉換單S110a所產生的延遲時序與時域至正交鏡她皮哭 轉換單元職所產生的延遲時序之和,與修正式離散餘弦轉換 域至正交鏡相濾波輯轉換單元3術所產生的延遲時序之間的差 值0 ,還可能透過蠕空财訊⑽㈣柄遲來娜時序同步差 ^例如’空間資訊SM於空間資訊延遲處理單元鳩中進行時 序同步差值之前導,然後傳輸至多通道產生單元23〇&。 實質上延遲的㈣資訊之延遲值储應於統時相 與匹配時序同步的延遲時序 奐自 / 、、工延遲的空間資訊SI4, 17 1317247 係透過編碼延遲以及解碼延遲而獲得延遲。 斗依'、'、本㈣之實施例’―種音麵號的處财法包含編碼立 頻訊號並解碼經過编瑪沾立^ 匕3、·扁竭音 宰咖_ 頻域,射透麟雌麵解碼方 案从匹崎混訊號與空_訊之_時辆步。 弓方 現有基於品質(例如高品f先物_ 如低複雜度先進音頻編瑪) 安 力羊(例 干貫例。高品質解碼 道音頻職,其音齡糾低功麵财案的音頻 二加出色。低功率解碼方案的功率消耗相對較低,因為宜组 恶>又有高品質解碼方案的組態複雜。 '' —下面的描述中,同時透過高品質以及低功率的解碼方案作為 貝例^解釋本㈣。其他解碼謂亦可顧於本發明之實施例。The following is a description of the compensation method for the timing synchronization difference between the downmix signal and the spatial information by combining "1st image i" and "篦4 smell A brother 4 picture". 12 .^17247 "4th picture" is a block diagram of the multi-channel decoding single S 200 shown in "1st picture". Referring to FIG. 1 and FIG. 4, in the audio signal processing method according to the embodiment of the present invention, the downmix decoding unit 100 (refer to "i") processes the downmix signal, and then uses two types of domains - The form passes to the multi-channel decoding unit 200. In the present embodiment 2, it is assumed that the downmix signal and the spatial information system timing synchronization are matched to the orthogonal mirror phase chopper domain. It may also be another domain. In the example of Fig. 4, the downmix signal XQ1 processed by the orthogonal mirror phase filter domain is transmitted to the multichannel decoding unit 200 for signal processing. The transferred downmix signal XQ1 and spatial information are combined in the multi-channel generating unit 230' to generate a multi-channel audio signal XjVQ. ^ In the example, the spatial information SI1 is delayed by the timing and the downmix signal XQ1'. The current sequence delay corresponds to the timing synchronization of the encoding program. This delay can be delayed by code 7. Since the spatial information SI1 and the downmix signal χφ have been time-synchronously matched in the weaving process, the multi-channel audio signal can be generated without the gem-step matching program. In other words, in the present embodiment, the spatial information SI1 does not need to perform a decoding delay. In addition to the downmix signal, the time domain processed hash signal XT1 is also transmitted to the day channel decoding unit 200 for signal processing. As shown in the "i-th image", the transmissive phase filter filter domain to the time domain conversion single force 110 will reduce the subsurface XS1 conversion tree domain of the orthogonal mirror phase filter domain XT1, the downmix of the Peng domain The signal XT1 is transmitted to the multi-channel decoding unit 200 〇13 1317247 Please refer to "Fig. 4" to convert the transmitted down-mix signal xT1 into an orthogonal mirror filter through the time domain to orthogonal mirror phase filter domain conversion unit. The downmix signal Xql of the domain. When the down-mix signal XT1 of the time domain is transmitted to the multi-channel decoding unit 2, at least one of the down-mix signal xq1 and the spatial information SI2 may be transmitted to the multi-channel generating unit 230 after the timing delay compensation is completed. The multi-channel generating unit 230 can generate a multi-channel audio signal by combining the transmitted downmix signal Xql with the spatial information. Since the spatial information and the downmix signal are matched by the timing synchronization of the coded axis line orthogonal mirror phase filtering, so at least the downmix signal postal and spatial information is committed - the timing delay compensation should be completed. After the domain-converted downmix signal is sent to the signal delay processing unit 220 to compensate for the mismatched timing synchronization difference, the multi-channel generating unit 230 can be input. When compensating for the timing, the New York legal system passes the phase step bribe to guide the downmix signal. In this embodiment, the timing synchronization difference is the sum of the delay timings generated by the delay timing_domain to the positive domain subtraction conversion unit 210 generated by the orthogonal mirror domain to the time domain conversion early no. The timing synchronization difference value can also be compensated by compensating for the timing delay of the spatial information SI2. For example, the space #SI2 performs the hysteresis of the sequence synchronization difference in the spatial information delay processing unit (10), and the wire is transmitted to the multi-channel generation unit. Substantially (4) Jane Delay (4) Pair of Threshold Synchronization Difference 14 1317247 • The sum of the value and the delay timing of the matching timing synchronization. In other words, delayed spatial information, delayed encoding delay, and delayed decoding are delayed. This sum also corresponds to the downmix solution. The code is early: 100 (refer to "1"). The timing synchronization difference between the downmix signal and the spatial information is compared with the (4) sequence synchronization difference generated in the multichannel decoding unit. Sum. ▲ The delay value of the substantially delayed spatial information SI2 can be determined based on the performance and delay of the filter (e.g., an orthogonal mirror filter or a hybrid filter bank). For example, considering the performance and delay of the filter, the spatial information delay value can be sampled at 961 %. When the delay value of the spatial information is analyzed, if the timing synchronization difference generated by the downmix decoding unit 100 is 257 timing samples, the timing synchronization difference generated by the multichannel decoding unit 200 is 7 〇 4 timing samples. Although the delay value can be expressed in units of time series sampling, it can also be expressed in time slot units. Figure 5 is a block diagram of the multi-channel solution unit 2〇〇a shown in Figure 2. Referring to the processing method of the audio signal in the embodiment of the present invention, the downmix signal processed by the downmix decoding unit 100a can be transmitted to the multi-channel decoding in one of two defined domains. Unit 2〇〇a. In this embodiment, it is assumed that the downmix nickname and the spatial information system timing synchronization are matched to the orthogonal mirror phase filter domain. It may also be another domain. If the audio signal's downmix signal and spatial information match a domain other than the time domain', the signal can also be processed. As shown in Fig. 2, the down-converted signal χτ2 processed in the time domain is transmitted to the multi-channel decoding unit 200a for signal processing. 15 1317247 Through the modified discrete cosine transform domain to the time domain conversion unit lla, the downmix signal Xm of the modified discrete cosine transform domain will be converted to the time domain downmix signal χΤ2. Then, the converted downmix signal ΧΤ2 is transmitted to the multi-channel decoding unit 200a. Through the %r domain to the positive-parent mirror phase domain conversion unit, the transmitted downmix signal XT2 is converted into a down-mix signal of the orthogonal mirror phase filter domain, and then transmitted to the multi-channel generating unit 23〇 The downmix signal postal and space information SI3 is combined in a multi-channel generation unit 230a to generate a multi-channel audio signal χΜ2. In the present target example, the spatial information SB is combined with the downmix signal Xq2 after the delay of the time L corresponding to the coding timing synchronization. This delay can be an encoding delay. The port is a door Λ SB and p luxury mixed signal number Xq2 has been synchronized in the encoding program. Therefore, a special synchronization matching program can generate multi-channel audio signals. Change. In this example, the spatial information SI3 does not require a decoding delay. The downmix signal Xq2 processed by the positive parent mirror filter field in this % example will pass: to the multi-channel solution to listen to the signal processing. -4 The downmix signal processed by the cosine transform domain is self-downmix decoding = 00^!^ __ discrete cosine transform domain to the orthogonal mirror filter 1 I ^ 300a to round out the downmix signal Xm Converted to orthogonal mirror phase filter, wave:: hole XQ2. However, the converted downmix signal is transmitted to the multi-channel decoding unit such as (9). 16 1317247, when the downmix signal XQ2 of the orthogonal mirror filter domain is transmitted to the multichannel decoding unit -200a, at least one of the downmix signal XQ2 and the spatial information SM can be transmitted to the multichannel after the timing delay compensation is completed. Unit 23〇a. The multi-pass transmission generating unit 230a generates the multi-channel audio signal_2 by combining the down-mixed signal Xq2 of the passing wheel and the spatial information SI4'. Since the spatial synchronization and the downmix signal are time-synchronously matched in the time domain when encoding the program, at least one of the downmix signal XQ2 and the spatial information SI4 should have completed the timing delay compensation. After the down-mix signal of the domain-converted signal is compensated for the mismatched timing synchronization difference in the signal delay processing unit 220a, it can be input to the multi-channel generating unit 230a. Compensation timing synchronization difference _ method scales the timing difference delay delay downmix signal item 2. In this embodiment, the timing synchronization difference is the sum of the delay timing generated by the modified discrete cosine transform domain to the time domain conversion sequence S110a and the delay timing generated by the time domain to the orthogonal mirror. The difference between the delay time of the modified discrete cosine transform domain to the orthogonal mirror phase filter conversion unit 3 is 0, and it is also possible to pass the creep signal (10) (four) handle late arrival timing synchronization difference ^ for example, 'spatial information SM The timing synchronization difference preamble is performed in the spatial information delay processing unit ,, and then transmitted to the multi-channel generating unit 23 〇 & The delay value of the (4) information that is substantially delayed is stored in the delay sequence synchronized with the matching timing. The spatial information SI4, 17 1317247 from the /, and the delay is delayed by the encoding delay and the decoding delay. Douyi', ', and (4)'s example '---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- The female-side decoding scheme is stepped from the 崎 混 mixed signal and the empty _ _ _. The bow is currently based on quality (such as high-quality f first-class _ such as low-complexity advanced audio editing) An Liyang (example of dry case. High-quality decoding of audio audio, its audio age correction low-cost audio two plus Excellent. The power consumption of the low-power decoding scheme is relatively low, because the configuration of the group should be complicated. The configuration of the high-quality decoding scheme is complicated. '' - In the following description, the high-quality and low-power decoding scheme is adopted as the shell. Example 4 explains this (4). Other decodings can also take into account embodiments of the present invention.

「第6圖」所示係為触本發明另—實施例之音頻解碼訊號 方法之方塊圖P U 。月多考第ό圖」,本發明的解碼裝置包含降混解碼單元100c 以及多通道解碼單元2〇〇c。 實施例中’、經降混解碼單元觸c處理的降混訊號χτ4被傳輪 至夕通道解石馬單元2〇〇c ’其中此訊號與空間資訊SI7或⑽進行 組合以產生多通這音頻訊號M1或奶。本實施例中,經過處理的 降混訊號XT4係為時域的降混訊號。 經過編碼的降混訊號DB被傳輸至降混解碼單元1〇〇c進行處 理。經過處理的降混訊號XT4被傳輪至多通道解碼單元2〇〇c,依 18 1317247 照兩種解碼方案其中之一以產生多通道音頻訊號,兩種解碼方案 係為咼品質解碼方案以及低功率解碼方案。 如果經過處理的降混訊號XT4係採用低功率解碼方案解碼, 降混訊號XT4則沿路徑P2傳輸及解碼。透過時域至實數正交鏡 相慮波裔域轉換單元240c,經過處理的降混訊號X丁4將轉換為實 數正交鏡相濾波器域的訊號XRQ。 透過實數至複數正交鏡相濾波器域轉換單元25〇c,經過轉換 的降混訊號XRQ將被轉換為複數正交鏡相濾波器域的訊號 XQC2。使XRQ降混訊號轉換至XQC2降混訊號係為複雜度域轉 換之實例。 接下來’複數正交鏡相濾、波器域的訊號XQC2與空間資訊SI8 組合於多通道產生單元260c中,以產生多通道音頻訊號M2。 因此,採用低功率解碼方案解碼降混訊號XT4時,不需要單 獨的延遲處理程序。這是因為依照低功率解碼方案,音頻訊號於 編碼程序時’降混訊號與空間資訊之間的時序同步已經匹配。換 言之’本實施例中’降混訊號XRQ不需要進行解碼延遲。 如果採用高品質解碼方案解碼經過處理的降混訊號XT4,降 混訊號XT4則沿路徑P1傳輸及解碼。透過時域至複數正交鏡相 濾波器域轉換單元210c,經過處理的降混訊號XT4將被轉換為複 數正交鏡相濾波器域的訊號XCQ1。 然後,於訊號延遲處理單元220c中,透過降混訊號XCQ1與 19 !317247 空間資訊sn之間的時序延遲差值以延遲經過轉換的降混訊號 XCQ1。 夕接下來,經過延遲的降混訊號XCQ1,與空間資訊si7組合於 多通道產生單元班巾,减生㈣道音頻訊號M1。 …降混峨XCQlit過訊號輯處理單元2施伽為進行音頻 ^虎之編雜料假設採祕功麵碼方案,卿料致產生降 混訊號XCQ1與空間資訊SI7之間的時序同步差值。 時序同步差值係為時序延遲差值,取決於所使用的解碼方 案。例如,由於低功率解碼方案的解碼程序不同於高品質解碼方 二、解1¾序’所以會產生時序延遲差值。因為組合降混訊號與 二間貝Λ叫間點後’可能不需要同步化降混峨與空間資訊, 所以直雜合降混訊號與m資訊的時财才考射序 值。 睛參考「第6圖」,直到組合降混訊號XCQ2與空間資訊诹 =時間點喊生第—延遲時序,朗組合降混訊號XCQ1,與空間 貝戒SI7的時間點時產生第二延遲時序,時序同步差值係為第— (遲^序與乐二延遲時序之間的差值。本實施例中,—個時序取 樣單元或0铸單元可作树序延遲之單位。 。士如果時域至複航交鏡減波器域轉鮮元2池所產生的延 遲h序等於時域至實數正交鏡赠波賊轉解元.所產生的 (遲時序、將相使訊艇遲處理單元瑜娜實數至複數正 20 1317247 父鏡相濾A器域轉換單元挪所產生的延遲時序來延遲降混訊號 XCQ1 〇 请茶考「第6圖」所示之實施例,多通道解碼單元2〇〇c包含 兩種解瑪方案。另外,多通道解碼單元紙也可僅包含—種解碼 方案。 本發明之上述實施例中,降混訊號與空間資訊之間的時序同 v係依k低功率解碼方案產生匹配。此外,本發明更包含依照高 品質解碼方案使降混訊號與空間資訊之間產生時序同步匹配之實 例。本實施例中,以相對於採用低功率解碼方案產生時序同步匹 配的方式而前導降混訊號。 第7圖」所示係為依照本發明另一實施例之音頻訊號解碼 方法之方塊圖。 请麥考「第7圖」’本發明之解碼裝置包含降混解碼單元1〇〇d 以及多通道解碼單元2〇〇d。 經降混解碼單元100d處理的降混訊號χτ4被傳輸至多通道解 碼單元200d,其中降混訊號與空間資訊sn,或SI8將進行組合, 以產生^通道音頻訊號M3或]\/[2。本實施例中,經過處理的降混 訊號XT4係為時域的訊號。 經過編碼的降混訊號DB被傳輸至降混解碼單元1〇〇d以進行 處理。經過處理的降混訊號XT4被傳輸至多通道解碼單元2〇〇d , 依照兩種解碼方案其中之一而產生多通道音頻訊號,兩種解碼方 21 1317247 案係為兩品質解碼方案以及低功率解碼方案。 如果採用低功率解碼方案解碼經過處理的降混訊號χτ4,則 降混訊號ΧΤ4沿路徑Ρ4傳輸及解碼。透過時域至實數正交鏡相 濾波器域轉換單元240d ’經過處理的降混訊號χτ4被轉換為實數 正交鏡相濾波器域的訊號XRQ。 透過實數至複數正交鏡相濾波器域轉換單元250d,經過轉換 的降混虎XRQ被轉換為複數正交鏡相濾波器域的訊號XCQ2。 使XRQ降混訊號轉換iXCQ2降混訊號之程序係為複雜度域轉換 之實例。 接下來’複數正交鏡相濾波器域的訊號XCq2與空間資訊⑽ 組合於多通道產生單元260d中,以產生多通道音頻訊號M2。 因此,採用低功率解碼方案解碼降混訊號χΤ4時,不需要單 獨的延遲程序。這是由於依照低功率解碼方案,在音頻訊號編碼 日守,降混§孔5虎與空間資訊之間係以時序同步匹配。換言之,本實 加例中,空間資§孔SI8不需要進行解碼延遲。 如果採用咼品質解碼方案解碼經過處理的降混訊號χτ4,降 混訊號ΧΤ4則沿路徑Ρ3傳輸及解碼。透過時域至複數正交鏡相 濾波益域轉換單元210d ’經過處理的降混訊號χτ4將被轉換為複 數正交鏡相遽波器域的訊號XCQ1。 經過轉換的降混訊號XCQ1傳輸至多通道產生單元23〇d,並 與空間> §fl SI7組合以產生多通道音頻訊號M3。本實施例中,由 22 1317247 於工門貝。fi sn通過空間資訊延遲處理單元:廳,所以空間資訊 SI7’係為經過時序延遲補償之空間資訊。、 工間貝訊SI7通過空間資訊延遲處理單元22〇d係因為進行音 頻訊號之編碼程序時假設採用低功率解碼方案,所以將導致產生 降混訊號XCQ1與空間資訊SI7之間的時序同步差值。 時序同步差值係為時序延遲差值,取決於所使用的解碼方 案例如’由於低功率解石馬方案的解碼程序不同於高品質解碼方 案的解碼程序,所崎產生鱗延遲缝。因為組合降混訊號盘 空間資訊㈣間點後,不需要同步化降混訊號與空間資訊,所以 直到2合降混訊賴郎魏的_科树慮鱗延遲差值。 明參考「第7圖」’直到組合降混訊號XCQ2與空間資訊弧 的時間點時產生第—延遲時序,直顺合降混訊號xcQi與空間 資訊防,的時間點時產生第二延遲時序,時序同步差值係為第一 延_序與第二延遲時序之間的差值。本實施例中…個時序取 樣單位或時槽單位可作為時序延遲之單位。 如果時域至複數正交鏡相渡波器域轉換單元210d所產生的延 遲時序等於時域至實數正交鏡域波_轉解W所產生的 延遲時序,财間資訊延遲處理單元應足崎過實數至複數正 父鏡相濾波器域轉換單元2观所產生的延遲時序前導空 SI7。 、 圖式之實例中,多通道解碼單元2_包含_解碼方案。另 23 1317247 外夕通道%碼卓元200d也可僅包含一種解石馬方案。 本發明之上述實施例中,降混訊號與空間資訊之間的時序同 步係依加低功率解碼方案產生匹配。此外,本發明更包含依照高 品質解碼方案以使降混訊號與空間資訊之間產生時序同步匹配之 實例。本實施例中,以相對於採用低功率解碼方案產生時序同步 匹配的方式以延遲降混訊號。 雖然如「第6圖」以及「第7圖」所示,訊號延遲處理單元 220c以及空職訊延遲處理單元22Qd僅其中之—包含於多通道解 碼單元職或200钟,但是本發明尚包含空間資訊延遲處理單 元2观以及訊號延遲處理單元2施同時包含於多通道解碼單元 2〇〇c或麵中之實例。本實施例中,空間資訊延遲處理單元胸 的延遲補餅賴峨延遲處料元2施顺獅餅序之和係 荨於時序同步差值。 士以上所闡述係為由於複數個降混輸入定義域之存在㈣起的 日可序同步差值之補償方法以及由於複數個解碼方案之存在而引起 的時序同步差值之補儂方法。 下面將解釋—種由於複數個降混輸人域之存在以及複數個解 瑪方案之存在帥摘時相步絲之補償方法。 第8目」所讀為細本發明之實關之音頻喊解瑪方 法的方塊圖。 請參考「第請」,本發明之解碼裝置包錯鱗碼單元職 24 1317247 以及多通道解碼單元200e。 依’知、本發明另一實施例之音頻訊號的處理方法中,降混解石馬 早兀100e中所處理的降混訊號可以兩種定義域其申之一的方式傳 輸至多通這解碼單元施巾。本發明實施例中,假設採用低功率 解碼方案’冑彳降混訊號與訊之間的時序同步係匹配於正交 鏡相濾波$域。另外,各種修正的方案也可應齡本發明。 下面將解釋正交鏡相遽波器域處理的降混訊號XQ5透過傳輪 至多通道解碼單元職以進行訊號處理的方法。本實施例中,降 虎XQ5可為複數正交鏡相濾波器訊號XCQ5以及實數正交鏡 相濾波器纖XRQ5之其巾任—。採用高品f解碼方案於降混解 碼單7C 100e中處理複數正交鏡相濾波器域的降混訊號xcQ5。採 用低功率解碼方餘降混解解元廟e巾處理實數正交鏡相據波 器域的降混訊號XRQ5。 本發明實關巾’假設降混解碼單元職巾高品質解碼方案 處理的訊號係連接於高品質解碼方案的多通道解瑪單元施,且 降混解碼單元職中低辨解碼方案處理的訊雜連接於低功率 解碼方案的多通雜碼單元施。另外,各種修正的方案也可應 用於本發明。 叙设採用低功率解碼方案解碼經過處理的降混訊號XQ5,則 降混喊XQ5沿路徑P6傳輪及解碼。本實施例中,降混訊號Xq5 係為實數正交鏡相濾波器域的降混訊號双^5。 25 1317247 务/tb Λ號XRQ5與空間資訊sil 0組合於多通道產生單元231 e 中’以產生多通道音頻訊號M5。 因此’採用低功率解碼方案解碼降混訊號耶時,不需要單 獨的延遲處理程序。這是因為依照低功率解碼方案進行音頻訊號 之編碼程序時,降混訊號與空間資訊之間的時序同步係已匹配。 女果採用同扣質解碼方案解碼經過處理的降混訊號時, 則降此減XQ5沿路徑p5傳輸及解碼。本實關巾,降混訊號 耶係為複數正交馳驗1域轉混職XCQ5。降混訊號 XCQ5與空間資訊SI9組合於多通道產生單元2施中,以產生多 通道音頻訊號M4。 以下將解釋經正交鏡相濾波器域至時域轉換單元職進行時 域轉換處理的降混訊號XT5傳輸至多通道解碼單元施以進行訊 號處理之實施例。 經降混解碼單元1 OOe處理崎混訊號ΧΤ5被傳輸至多通道解 碼單元2〇〇e,並與空間資訊SI11或SIU進行組合以產生多通道 音頻訊號M6或M7 〇 降混訊说XT5傳輸至乡通道解碼單元細ϋ照^種解碼方 案其中之-而產生乡通道音頻峨,兩種解碼方案係為高品質解 碼方案以及低功率解碼方案。 如杲採用低功率解碼方案解碼經過處理的降混訊號ΧΤ5,則 降混訊號ΧΤ5沿路徑Ρ8傳輸及解碼。透過時域至實數正交鏡相 26 1317247 濾波器域轉換單元241e,經過處理的降混訊號XT5將轉換為實數 正交鏡相濾波器域的訊號XR。 透過實數正交鏡相濾波器域至複數正交鏡相濾波器域轉換單 元251e ’經過轉換的降混訊號xr將被轉換為複數正交鏡相濾波 器域的訊號XC2。使XR降混訊號轉換至xC2降混訊號係為複雜 度域轉換之實例。 接下来,複數正交鏡相濾波器域的訊號XC2與空間資訊SI 12, 組合於多通道產生單元233e中,以產生多通道音頻訊號M7。 本實施例中’由於空間資訊SI12通過空間資訊延遲處理單元 270e,所以空間資訊SI12’係為經過時序延遲補償之空間資訊。 空間資訊SI12之所以通過空間資訊延遲處理單元27〇e係因為 假設降混訊號與空間資訊之間係已時序同步匹配於正交鏡相濾、波 益域,由於採用低功率解碼方案以完成音頻訊號編碼,以致降混 訊號XC2與空間資訊SI12之間將產生時序同步差值。經過延遲的 空間資訊SI12’係透過編碼延遲以及解碼延遲而被延遲。 如果採用高品質解碼方案解碼經過處理的降混訊號χτ5,降 混§fl號ΧΤ5則沿路徑Ρ7傳輸及解碼。透過時域至複數正交鏡相 濾波益域轉換單元240e將經過處理的降混訊號χΤ5轉換為複數正 交鏡相濾波器域的訊號XC1。 藉由降混訊號XC1以及空間資訊sii 1之間的時序同步差值, 經過轉換的降混訊號XC1及空間資訊sill則各自於訊號延遲處理 27 1317247 早7L25〇e及空财訊延遲處理單元麻巾進行時序延遲補償。 、…接下來’經過時序延遲補償的降混訊號χα,與經過時序延遲 補危的空間資訊SI11,組合於多通道產生單元孤中,以生 通道音頻訊號M6。 因此’降混訊號χα通過訊號延遲處理單元25〇e,且空間資 訊sm通過空間資訊延遲處理單元施。這是因為假設採^低功 率解碼方案,更假設降混峨雜職訊之間㈣相步係已匹 配於正交鏡域波ϋ域,目此音頻減的編碼將產生降混訊號 xci與二間為訊sm之間的時序同步差值。 「第9圖」係為依照本發明實施例之音頻訊號之解碼方法的 方塊圖。 請參考「第9圖」,本發明之解碼裝置包含降混解碼單元匿 以及多通道解碼單元2〇〇f。 經過編碼的降混訊號腦傳輸至降混解碼單元證以進行處 理三降混峨DB1進行編碼程序時將考慮兩種降混解碼方案,包 含第一降混解碼方案以及第二降混解碼方案。 依照-種降混解碼方案於降混解碼單元随巾處理降混訊號 DB1.。此種降混解碼方案可為第一降混解馬方宰。 經過處理的降混訊號灯6傳輸至多通道解碼單元肅,以產 生多通道音頻訊號Mf。 經過處理的降混訊號XT6,透過解碼延遲而於訊號延遲處理單 28 .1317247 ' 元21〇f中被延遲。因此,降混訊號XT6,可透過解碼延遲而被延 遲。延遲降混訊號XT6的原因,在於編碼時的降混解碼方案不同 於解碼時所使用的降混解碼方案。 因此’需要依照不同的情況以升取樣降混訊號χΤ6,。 經過延遲的降混訊號ΧΤ6’升取樣於升取樣單元22〇f中。升取 樣降混訊號XT6,的原因在於降混訊號X丁6,的取樣數目不同於空 間資訊SI13的取樣數目。 降混訊號XT6的延遲處理以及降混訊號ΧΤ6,的升取樣處理 之順序係可互換。 經過升取樣的降混訊號UXT6係於定義域處理單元23〇f中進 行定義域轉換。降混訊號UXT6的定義域轉換可包含頻/時域轉 換以及複雜度域轉換。 接下來,經過定義域轉換的降混訊號UXTD6與空間資訊sn3 組合於多通道產生單元篇中,以產生多通道音頻訊號隱。 以上之闡述係為降混訊號與空間資訊之間產生的時序同步差 值的補償方法。 ^ 下面將闡述時序串列資料與透過前述方法之一所產生的多通 ϋ頻訊號m的挪时差值之補償方法。 「第1〇圖」係為本發明實施例之音頻訊號解碼裝置之方塊圖。 請參考「第10圖」,依照本發明實施例,音頻訊號的解碼裝 置包含時序串列解碼單元10以及多通道音頻訊號處理單元20 / 29 1317247 如多通道音頻訊號處理單元20包含降混解碼單元21、多通道解 馬單元22以及時序延遲補償單元&。 降犯位兀流IN2,係為經過編瑪的降混訊號之實例,將被輸入 至降混解碼單元21以進行解碼。 本實施例中,降混位元流聰可以兩種定義域的方式被解碼 炎輪出可用以輸出之定義域包含時域以及正交鏡相爐波器域。 ^考標號‘5〇,麵啤衫柄碼及触崎混減,參考標號 表不以正父鏡相濾波器域方式解碼及輸出的降混訊號。雖然本 毛月之’、施例巾僅描述兩種定義域,但是本發明尚可包含以其他 種類疋義域解碼及輸出的降混訊號。 降此訊號50及51傳輪至多通道解碼單元22後,然後依昭兩 ,解碼方案观以及进各自進行解碼程序。本實施例中,參考 標就‘22Η’表示高品質解碼方案’‘孤,表示低功轉碼方案。 本發明之實施例中,雖僅採用兩種解碼方案’但是,本發明 亦可採用更多的解碼方案。 以時域方式進行解碼及輪出的降混訊號5〇係依照選擇路徑 p9舁Pi〇其中之一而進行解碼。本實施例中,路裎p9表示採用 高品質解碼方案22H之解碼路徑,而路徑plQ表示_低功率解 碼方案22L之解碼路徑。 依照高品質解碼方案22H,沿路徑p9雜的降混訊號%與 二間貝訊SI進行組合,以產生多通道音頻訊號贿T。依照低功 30 1317247 解馬方木22L /口路徑pi〇傳輸的降混訊號π與空間資訊幻 進行組合,以產生多通道音頻訊號MLT。 另-以正交鏡相濾波器域方式解碼及輸出的降混訊號51係依 照選擇路徑P11與P12其中之一而進行解碼。本實施例中,路徑 阳表示採用高品質解碼方案22H之解碼路徑,而路徑pi2表示 採用低功率解碼方案22L之解碼路徑。 *依“、、同σσ質解碼方案22H,沿路徑pn j專輸的降混訊號51與 μ門貝λ si進行組合’以產生多通道音頻訊號mhq。依照低功 率解碼方案22L,沿路徑Ρ12傳輸的降混訊號&與空間資訊si 進仃組合,以產生多通道音頻訊號MLQ。 C方法所產生夕通道音頻訊號Mht、MHQ、mlT以及 =Q’至少其中之一係於時序延遲補償單元23中完成時序延遲補 ‘耘序’然後輸出為時序串列資料OUT2、OUT3、OUT4或0UT5。 本貝知例中’假设降混位元流取丨經由時序串列解碼單元1〇 解碼且獅之時序串列資料Ο™與上述的多通道音頻訊號ΜΗΤ 之間辦相步匹配’因鱗序延賴償料能触止時序延遲 $於比較時序同步失配的多通道音頻訊號MHQ、MLT或MLQ與 夕通道音頻訊號ΜΗΤ時產生。當然,如果時序串列資料 '、夕通道s步員訊號(除了上述的多通道音頻訊號MHT) MHQ、 、及]VILQ其中之一之間係時序同步匹配,則透過補償時序 同步失配的殘餘多通道音頻訊號其中之—的時序延遲以匹配時序 31 1317247 串列資料的時序同步。 如果時序串列資料0UT1與多通道音頻訊號ΜΗΤ、MHQ、 MLT或MLQ並非-同進行處理’實施例尚可完成時序延遲之補 償處理。例如’使用多通道音頻訊號⑽的比較結果,補償且防 止發生多通道音頻訊號的時序延遲。這可以多種方式實現。 熟悉本領域藝人貞’在不本發明之精神和範圍内, 顯然可作出多種更動與_。因此,申請專利範圍内所作之更動 與潤飾均屬本發明之專利保護範圍之内。 工業應用 因此,本發明提如下效益或優點。 首先’如料混峨錢_訊之間產生時辆步差值 發明透過補償時辆步差值之方式哪止音齡質退化。 其次’本發明能_償時序串列資料與待處理的多通道音頻 ^以及、价#細_物麵之間的時序 【圖式簡單說明】 訊號圖所糊編照本靖關之解糊 K方塊圖; 法之:::所不係為第1圖所示之多通道解碼單元之訊號細 弟圖所不係為第2圖所示之多通道解碼單元之訊號處理力 32 1317247 法之方塊圖;以及 —實施例之解石馬 第6圖至第10圖所示係為依照解釋本發明另 音頻訊號方法之方塊圖。 【主要元件符號說明】 100、 100a、 l〇〇b、 100c、 110、110b、ll〇e 110a 〇〇d l〇〇e、、21降混解碼單元 正父鏡相濾波器域至時域轉換單元 210、210a 修正式離散餘弦轉換域至時域轉換單元 時域至正交鏡相濾波器域轉換單元 210c ^ 210d > 240e 喊至複數正交鏡相驗ϋ域轉換單元 240c > 240d > 241e 時域至實數正交鏡相濾波器域轉換單元 250c、250d、251e 實數正交鏡相濾波器域至複數正交鏡相 濾、波器域轉換單元 300a ' 500b 修正式離散餘弦轉換域至正交鏡相濾波 器域轉換單元 200、200a、200b、200c、200d、200e、200f、22 多通道解碼單元 220、220a、220c、250e、21〇f 訊號延遲處理單元 230、230a、230c、230d、230e、260c、260d、231e、232e、233e、 240f 多通道產生單元 240、240a、22〇d、260e、270e 空間資訊延遲處理單元 Χφ、XII、Xm、XT2、XQ2、XQ2,、XT3、XQ3、Xq卜 Xql,、 33 1317247Fig. 6 is a block diagram P U of the audio decoding signal method of another embodiment of the present invention. The multi-channel decoding unit 100c and the multi-channel decoding unit 2〇〇c are included in the decoding apparatus of the present invention. In the embodiment, the downmix signal χτ4 processed by the down-mix decoding unit is transmitted to the eve channel smashing horse unit 2〇〇c ', wherein the signal is combined with the spatial information SI7 or (10) to generate the multi-pass audio. Signal M1 or milk. In this embodiment, the processed downmix signal XT4 is a downmix signal in the time domain. The encoded downmix signal DB is transmitted to the downmix decoding unit 1〇〇c for processing. The processed downmix signal XT4 is transmitted to the multi-channel decoding unit 2〇〇c, according to one of the two decoding schemes to generate multi-channel audio signals according to 18 1317247, and the two decoding schemes are a quality decoding scheme and low power. Decoding scheme. If the processed downmix signal XT4 is decoded using a low power decoding scheme, the downmix signal XT4 is transmitted and decoded along path P2. The processed downmix signal X D4 is converted to the signal XRQ of the real orthogonal mirror filter domain by the time domain to real orthogonal mirror. The converted downmix signal XRQ is converted to the signal XQC2 of the complex orthogonal mirror phase filter domain by the real number to the complex orthogonal mirror phase filter domain converting unit 25〇c. Converting the XRQ downmix signal to the XQC2 downmix signal is an example of complexity domain conversion. Next, the complex orthogonal phase filter, the signal XQC2 of the wave domain and the spatial information SI8 are combined in the multi-channel generating unit 260c to generate a multi-channel audio signal M2. Therefore, when the downmix signal XT4 is decoded using a low power decoding scheme, a separate delay handler is not required. This is because, according to the low power decoding scheme, the timing synchronization between the downmix signal and the spatial information of the audio signal during the encoding process has been matched. In other words, the 'downmix signal XRQ' in this embodiment does not require a decoding delay. If the processed downmix signal XT4 is decoded using a high quality decoding scheme, the downmix signal XT4 is transmitted and decoded along path P1. The processed downmix signal XT4 is converted to the signal XCQ1 of the complex orthogonal mirror filter domain through the time domain to complex orthogonal mirror phase filter domain conversion unit 210c. Then, in the signal delay processing unit 220c, the timing delay difference between the downmix signal XCQ1 and the 19!317247 spatial information sn is transmitted to delay the converted downmix signal XCQ1. In the evening, the delayed downmix signal XCQ1 is combined with the spatial information si7 in the multi-channel generating unit to reduce the (four) channel audio signal M1. ...downmixing XCQlit signal processing unit 2 Shi Jia for the audio ^ tiger's choreography hypothesis mining secret surface code scheme, the Qing material caused the timing synchronization difference between the downmix signal XCQ1 and the spatial information SI7. The timing synchronization difference is the timing delay difference, depending on the decoding scheme used. For example, since the decoding procedure of the low power decoding scheme is different from the high quality decoding scheme, the timing delay difference is generated. Because the combination of the downmix signal and the second bayonet call point may not need to synchronize the downmix and the spatial information, so the coincidence of the downmix signal and the m information will only be tested. The eye is referred to in "Picture 6" until the combined downmix signal XCQ2 and the spatial information 诹 = time point screaming the first delay sequence, the lang combination downmix signal XCQ1, and the space 贝 SI SI7 time point produces a second delay timing, The timing synchronization difference is the difference between the (the delay sequence and the delay sequence). In this embodiment, the timing sampling unit or the 0 casting unit can be used as the unit of the tree sequence delay. The delay h sequence generated by the reversing mirror subtractor domain is equal to the time domain to real orthogonal mirror gift thief transfer solution. The delayed sequence will be delayed by the carrier. Yuna real number to complex positive 20 1317247 The delay sequence generated by the parent mirror phase filter A-field conversion unit is delayed to delay the down-mix signal XCQ1. Please refer to the embodiment shown in "Figure 6", multi-channel decoding unit 2〇 〇c includes two kinds of solution solutions. In addition, the multi-channel decoding unit paper may also include only one decoding scheme. In the above embodiment of the present invention, the timing between the down-mix signal and the spatial information is the same as the v-based power. The decoding scheme produces a match. Furthermore, the invention further includes The quality decoding scheme causes an example of timing synchronization matching between the downmix signal and the spatial information. In this embodiment, the downmix signal is pre-processed in a manner that produces timing synchronization matching using a low power decoding scheme. Figure 7 A block diagram of an audio signal decoding method according to another embodiment of the present invention. Please refer to "Figure 7". The decoding device of the present invention includes a downmix decoding unit 1 〇〇d and a multi-channel decoding unit 2 〇〇d The downmix signal χτ4 processed by the downmix decoding unit 100d is transmitted to the multi-channel decoding unit 200d, wherein the downmix signal and the spatial information sn, or SI8, are combined to generate a channel audio signal M3 or ]\/[2. In this embodiment, the processed downmix signal XT4 is a time domain signal. The encoded downmix signal DB is transmitted to the downmix decoding unit 1〇〇d for processing. The processed downmix signal XT4 is transmitted. The multi-channel decoding unit 2〇〇d generates a multi-channel audio signal according to one of the two decoding schemes, and the two decoders 21 1317247 are two quality decoding schemes and low power. The code scheme: If the processed downmix signal χτ4 is decoded by the low power decoding scheme, the downmix signal ΧΤ4 is transmitted and decoded along the path Ρ4. The processed time domain to real orthogonal mirror phase filter domain conversion unit 240d 'processed down The mixed signal χτ4 is converted into a signal XRQ of the real-numbered orthogonal mirror filter domain. Through the real-to-complex-orthogonal phase filter domain conversion unit 250d, the converted downmixed tiger XRQ is converted into a complex orthogonal mirror filter. The signal XCQ2 of the domain is used to convert the XRQ downmix signal into the iXCQ2 downmix signal. The following is an example of complex domain conversion. Next, the signal XCq2 and spatial information (10) of the complex orthogonal mirror filter domain are combined in the multi-channel generating unit. In 260d, a multi-channel audio signal M2 is generated. Therefore, when the downmix signal χΤ4 is decoded using a low power decoding scheme, a separate delay procedure is not required. This is because in accordance with the low-power decoding scheme, the audio signal is encoded in the day-to-day, and the down-mixing 55 tiger and the spatial information are matched in time series. In other words, in this practical example, the space resource SI hole SI8 does not need to perform a decoding delay. If the processed downmix signal χτ4 is decoded using the 咼 quality decoding scheme, the downmix signal ΧΤ4 is transmitted and decoded along path Ρ3. The processed downmix signal χτ4 transmitted through the time domain to complex orthogonal mirror phase filtering domain conversion unit 210d' is converted into the signal XCQ1 of the complex orthogonal mirror phase chopper domain. The converted downmix signal XCQ1 is transmitted to the multi-channel generating unit 23〇d and combined with the space > §fl SI7 to generate a multi-channel audio signal M3. In this embodiment, it is 22 1317247 in Gongmenbei. Fi sn uses the spatial information delay processing unit: hall, so the spatial information SI7' is the spatial information compensated by the timing delay. The inter-station SI7 passes the spatial information delay processing unit 22〇d because the low-power decoding scheme is assumed when the audio signal encoding process is performed, which will result in the timing synchronization difference between the downmix signal XCQ1 and the spatial information SI7. . The timing synchronization difference is the timing delay difference, which depends on the decoding scheme used, e.g., because the decoding procedure of the low power solution solution scheme is different from the decoding procedure of the high quality decoding scheme, the scale delay crack is generated. Because the combination of the downmix signal space information (4) point, there is no need to synchronize the downmix signal and the spatial information, so until the 2nd fall and fall news, Lai Langwei's _ ke tree scales the delay difference. Referring to "Fig. 7", until the time point of the combined downmix signal XCQ2 and the spatial information arc is generated, a first delay timing is generated, and a second delay timing is generated at a time point when the mixed mixed signal xcQi and the spatial information are prevented. The timing synchronization difference is the difference between the first delay sequence and the second delay timing. In this embodiment, a timing sampling unit or a time slot unit can be used as a unit of timing delay. If the delay timing generated by the time domain to complex orthogonal mirror phase domain converter unit 210d is equal to the delay timing generated by the time domain to real orthogonal mirror wave_transformation W, the inter-bank information delay processing unit should have a real number The delay timing leading to the complex positive-synchronized mirror phase domain conversion unit 2 is leading the null SI7. In the example of the figure, the multi-channel decoding unit 2_ includes a decoding scheme. Another 23 1317247 The outer channel % code Zhuo Yuan 200d can also contain only one solution. In the above embodiment of the present invention, the timing synchronization between the downmix signal and the spatial information is matched by the low power decoding scheme. In addition, the present invention further includes an example of a timing synchronization match between a downmix signal and spatial information in accordance with a high quality decoding scheme. In this embodiment, the downmix signal is delayed in a manner that produces a timing synchronization match with a low power decoding scheme. Although the signal delay processing unit 220c and the empty service delay processing unit 22Qd are only included in the multi-channel decoding unit or 200 clocks as shown in "FIG. 6" and "FIG. 7", the present invention still includes space. The information delay processing unit 2 and the signal delay processing unit 2 are simultaneously included in the multi-channel decoding unit 2〇〇c or the surface. In this embodiment, the delay delay of the spatial information delay processing unit chest is delayed by the sum of the elements of the Shi Shun Shi cake sequence. The above description is based on the method of compensating for the day-to-order synchronous difference due to the existence of a plurality of downmix input domain (4) and the method of complementing the timing synchronization difference due to the existence of a plurality of decoding schemes. In the following, we will explain the compensation method for the phase-stepped filaments due to the existence of a plurality of down-mixed input domains and the existence of a plurality of solutions. The eighth item is read as a block diagram of the audio spoofing method of the invention. Please refer to "please", the decoding device of the present invention includes a wrong scalar unit 24 1317247 and a multi-channel decoding unit 200e. According to the processing method of the audio signal according to another embodiment of the present invention, the downmix signal processed in the downmixing solution may be transmitted to the multi-pass decoding unit in a manner of one of the two defined domains. Apply towel. In the embodiment of the present invention, it is assumed that the low-power decoding scheme 'the timing synchronization between the down-mix signal and the signal is matched to the orthogonal mirror phase filtering $ domain. In addition, various modified solutions are also available to the present invention. The method of performing signal processing by the down-mix signal XQ5 processed by the orthogonal mirror phase chopper field through the transmission-to-multi-channel decoding unit will be explained below. In this embodiment, the descending tiger XQ5 can be a complex orthogonal mirror phase filter signal XCQ5 and a real orthogonal mirror filter fiber XRQ5. The downmix signal xcQ5 of the complex orthogonal mirror filter domain is processed in the downmix decoding block 7C 100e using a high-quality f decoding scheme. The low-power decoding square residual-mixing solution is used to process the down-mix signal XRQ5 of the real-time orthogonal mirror phase domain. According to the present invention, the signal processed by the high-quality decoding scheme of the downmix decoding unit is connected to the multi-channel decoding unit of the high-quality decoding scheme, and the signal of the down-mix decoding unit is processed by the low-definition decoding scheme. A multi-pass code unit connected to a low power decoding scheme. In addition, various modified solutions are also applicable to the present invention. The low-power decoding scheme is used to decode the processed downmix signal XQ5, and the downmix XQ5 is transmitted and decoded along the path P6. In this embodiment, the downmix signal Xq5 is a downmix signal double^5 of the real-numbered orthogonal mirror filter domain. 25 1317247 The service /tb nickname XRQ5 is combined with the spatial information sil 0 in the multi-channel generating unit 231 e to generate the multi-channel audio signal M5. Therefore, when the downmix signal is decoded using a low power decoding scheme, a separate delay handler is not required. This is because the timing synchronization between the downmix signal and the spatial information is matched when the audio signal encoding process is performed in accordance with the low power decoding scheme. When the female fruit uses the same deduction quality decoding scheme to decode the processed downmix signal, it is reduced and XQ5 is transmitted and decoded along the path p5. This is the actual closing towel, the downmix signal. The yeah is the complex orthogonal 驰1 domain to the mixed XCQ5. The downmix signal XCQ5 and the spatial information SI9 are combined in the multi-channel generating unit 2 to generate a multi-channel audio signal M4. An embodiment in which the downmix signal XT5 is transmitted to the multichannel decoding unit for signal processing by the orthogonal mirror phase filter domain to the time domain conversion unit for time domain conversion processing will be explained below. The downmix signal decoding unit 1 OOe processes the reverberation signal ΧΤ5 and transmits it to the multi-channel decoding unit 2〇〇e, and combines it with the spatial information SI11 or SIU to generate a multi-channel audio signal M6 or M7. The down-mixing signal says XT5 transmission to the township. The channel decoding unit generates a video channel of the home channel in detail, and the two decoding schemes are a high quality decoding scheme and a low power decoding scheme. If the processed downmix signal ΧΤ5 is decoded using a low power decoding scheme, the downmix signal ΧΤ5 is transmitted and decoded along path Ρ8. Through the time domain to real orthogonal mirror phase 26 1317247 filter domain conversion unit 241e, the processed downmix signal XT5 is converted to the signal XR of the real quadrature mirror phase filter domain. The converted downmix signal xr through the real-numbered orthogonal mirror phase filter domain to the complex-orthogonal mirror phase filter domain conversion unit 251e' will be converted to the signal XC2 of the complex quadrature mirror filter domain. Converting the XR downmix signal to the xC2 downmix signal is an example of a complexity domain conversion. Next, the signal XC2 of the complex orthogonal phase filter domain and the spatial information SI 12 are combined in the multi-channel generating unit 233e to generate a multi-channel audio signal M7. In the present embodiment, since the spatial information SI12 passes through the spatial information delay processing unit 270e, the spatial information SI12' is the spatial information subjected to the timing delay compensation. The space information SI12 passes through the spatial information delay processing unit 27〇e because it is assumed that the downmix signal and the spatial information have been time-synchronized and matched to the orthogonal mirror phase filtering and the Boyi domain, and the audio is completed by using a low power decoding scheme. The signal is encoded such that a timing synchronization difference is generated between the downmix signal XC2 and the spatial information SI12. The delayed spatial information SI12' is delayed by the coding delay and the decoding delay. If the processed downmix signal χτ5 is decoded using a high quality decoding scheme, the downmix §fl ΧΤ5 is transmitted and decoded along path Ρ7. The processed down-conversion signal e5 is converted into the signal XC1 of the complex directional phase filter domain by the time domain to complex orthogonal mirror phase filtering benefit domain conversion unit 240e. By the timing synchronization difference between the downmix signal XC1 and the spatial information sii 1, the converted downmix signal XC1 and the spatial information sill are respectively 7L25〇e and the empty financial delay processing unit in the signal delay processing 27 1317247 The towel performs timing delay compensation. Then, the down-mix signal χα subjected to the timing delay compensation and the spatial information SI11 subjected to the timing delay compensation are combined in the multi-channel generating unit to generate the channel audio signal M6. Therefore, the downmix signal χα passes through the signal delay processing unit 25〇e, and the spatial information sm is applied through the spatial information delay processing unit. This is because it is assumed that the low power decoding scheme is adopted, and it is assumed that the phase difference between the downmix and the mixed traffic is matched to the orthogonal mirror domain, and the audio subtraction code will produce the downmix signal xci and the second. The timing synchronization difference between the messages is sm. Fig. 9 is a block diagram showing a decoding method of an audio signal according to an embodiment of the present invention. Referring to Fig. 9, the decoding apparatus of the present invention includes a downmix decoding unit and a multichannel decoding unit 2〇〇f. The coded downmix signal brain is transmitted to the downmix decoding unit to process the three downmix DB1 for encoding. Two downmix decoding schemes are considered, including a first downmix decoding scheme and a second downmix decoding scheme. The downmix signal DB1 is processed in the downmix decoding unit according to the downmix decoding scheme. Such a downmix decoding scheme can be the first downmix solution. The processed downmix signal lamp 6 is transmitted to the multi-channel decoding unit to generate a multi-channel audio signal Mf. The processed downmix signal XT6 is delayed by the decoding delay in the signal delay processing unit 28 .1317247 'yuan 21〇f. Therefore, the downmix signal XT6 can be delayed by the decoding delay. The reason for delaying the downmix signal XT6 is that the downmix decoding scheme at the time of encoding is different from the downmix decoding scheme used in decoding. Therefore, it is necessary to sample the downmix signal χΤ6 according to different situations. The delayed downmix signal ΧΤ 6' liter is sampled in the upsampling unit 22〇f. The reason for the sample downmix signal XT6 is that the number of samples of the downmix signal X D6 is different from the number of samples of the space information SI13. The sequence of the delay processing of the downmix signal XT6 and the upsampling process of the downmix signal ΧΤ6 are interchangeable. The upsampled downmix signal UXT6 is subjected to a domain conversion in the domain processing unit 23〇f. The domain conversion of the downmix signal UXT6 can include frequency/time domain conversion as well as complexity domain conversion. Next, the down-converted signal UXTD6 and the spatial information sn3, which are converted by the domain, are combined in the multi-channel generating unit to generate a multi-channel audio signal. The above description is a compensation method for the timing synchronization difference generated between the downmix signal and the spatial information. ^ The compensation method of the time series data and the time difference of the multi-channel frequency signal m generated by one of the foregoing methods will be explained below. The "first diagram" is a block diagram of an audio signal decoding apparatus according to an embodiment of the present invention. Please refer to FIG. 10 , in accordance with an embodiment of the present invention, an audio signal decoding apparatus includes a timing serial decoding unit 10 and a multi-channel audio signal processing unit 20 / 29 1317247. The multi-channel audio signal processing unit 20 includes a down-mix decoding unit. 21. A multi-channel de-horse unit 22 and a timing delay compensation unit & The descending spur stream IN2, which is an example of a mashed downmix signal, is input to the downmix decoding unit 21 for decoding. In this embodiment, the downmixing bit stream can be decoded in two ways. The defined domain includes the time domain and the orthogonal mirror phase furnace domain. ^ The test label ‘5〇, the face handle and the touch-and-salt mix, the reference label does not decode and output the downmix signal in the positive filter phase filter domain. Although the present invention only describes two domains, the present invention may also include downmix signals decoded and outputted in other types of domain. After the signals 50 and 51 are transmitted to the multi-channel decoding unit 22, then the decoding scheme and the decoding scheme are performed. In this embodiment, the reference symbol "22" indicates a high quality decoding scheme ''orphan, indicating a low power transcoding scheme. In the embodiment of the present invention, only two decoding schemes are employed. However, the present invention can also adopt more decoding schemes. The downmix signal 5, which is decoded in the time domain manner and rotated out, is decoded according to one of the selection paths p9舁Pi〇. In the present embodiment, the path p9 indicates the decoding path using the high quality decoding scheme 22H, and the path plQ indicates the decoding path of the low power decoding scheme 22L. According to the high quality decoding scheme 22H, the downmix signal % along the path p9 is combined with the two Bayesian SIs to generate a multichannel audio signal. According to the low-power 30 1317247 solution, the downmix signal π transmitted by the square tree 22L / port path pi〇 is combined with the spatial information illusion to generate a multi-channel audio signal MLT. Further, the downmix signal 51 decoded and outputted in the orthogonal mirror phase filter domain manner is decoded in accordance with one of the selection paths P11 and P12. In the present embodiment, the path yang indicates the decoding path using the high quality decoding scheme 22H, and the path pi2 indicates the decoding path using the low power decoding scheme 22L. * According to ",, with the σσ quality decoding scheme 22H, the downmix signal 51 and the μ gate λ si dedicated along the path pn j are combined to generate a multi-channel audio signal mhq. According to the low power decoding scheme 22L, along the path Ρ12 The transmitted downmix signal & is combined with the spatial information si to generate a multi-channel audio signal MLQ. At least one of the evening channel audio signals Mht, MHQ, mlT and =Q' generated by the C method is tied to the timing delay compensation unit. In 23, the timing delay is complemented by 'sequence' and then output as the timing serial data OUT2, OUT3, OUT4 or OUT5. In this example, 'assuming the downmix bit stream is taken and decoded by the timing series decoding unit 1〇 and the lion The timing serial data ΟTM is matched with the multi-channel audio signal 上述 above. 'The multi-channel audio signal MHQ, MLT or the timing synchronization mismatch can be stopped due to the timing delay. MLQ and eve channel audio signals are generated. Of course, if the timing serial data ', the s-channel s-step signal (except the above-mentioned multi-channel audio signal MHT) MHQ, , and ]VILQ is one of the timing synchronization matches , Then, the timing delay of the residual multi-channel audio signal of the compensated timing synchronization mismatch is matched to the timing synchronization of the serial data of the sequence 31 1317247. If the timing serial data 0UT1 and the multi-channel audio signal ΜΗΤ, MHQ, MLT or MLQ are not - The same processing is performed. The embodiment can still complete the timing delay compensation processing. For example, the comparison result using the multi-channel audio signal (10) compensates and prevents the timing delay of the multi-channel audio signal. This can be implemented in various ways. It is obvious that various changes and modifications can be made without departing from the spirit and scope of the invention. Therefore, the modifications and refinements made within the scope of the patent application are within the scope of the patent protection of the present invention. Benefits or advantages. First of all, 'when the mixed money is generated, the difference between the steps of the vehicle is generated by the method of compensating the difference in the step size of the vehicle. Secondly, the invention can compensate the time series data and The multi-channel audio of the pending ^ and the timing between the price _ fine _ object surface [simple description of the diagram] The K-block diagram of the closed solution; the method::: The signal-sharpness of the multi-channel decoding unit not shown in Figure 1 is not the signal processing power of the multi-channel decoding unit shown in Figure 2. 32 1317247 Block diagram of the method; and - Figure 6 to Figure 10 of the embodiment is a block diagram of another audio signal method according to the explanation of the present invention. [Description of main component symbols] 100, 100a, l〇 〇b, 100c, 110, 110b, ll〇e 110a 〇〇 dl 〇〇 e, 21 downmix decoding unit positive parent mirror phase filter domain to time domain conversion unit 210, 210a modified discrete cosine transform domain to time domain Conversion unit time domain to orthogonal mirror phase filter domain conversion unit 210c^210d > 240e yokes to complex orthogonal mirror phase domain conversion unit 240c > 240d > 241e time domain to real orthogonal mirror filter domain Conversion unit 250c, 250d, 251e real orthogonal mirror filter domain to complex orthogonal mirror phase filter, wave domain transform unit 300a '500b modified discrete cosine transform domain to orthogonal mirror phase filter domain conversion unit 200, 200a , 200b, 200c, 200d, 200e, 200f, 22 multi-channel Code unit 220, 220a, 220c, 250e, 21〇f signal delay processing unit 230, 230a, 230c, 230d, 230e, 260c, 260d, 231e, 232e, 233e, 240f multi-channel generating unit 240, 240a, 22〇d, 260e, 270e spatial information delay processing unit Χφ, XII, Xm, XT2, XQ2, XQ2, XT3, XQ3, Xq Bu Xql,, 33 1317247

Xq2、DB、XT4、XCQ卜 XCQl,、XCQ2、XRQ、XT5、XQ5、 XC1、XC1,、XR、XC2、XCQ5、XRQ5、XT6、XT6,、DB1、UXT6、 UXTD6、50、51 降混訊號 ΧΐνΠ、XM2、XM3、Mb M2、M3、M4、M5、M6、M7、Mf、 MHT、MHQ、MLT、MLQ 多通道音頻訊號 Sib SI2、SI3、SI4、SI2,、SI4’、SI5、SI6、SI7、SI8、SI7,、SI9、 SI10、SIH、SI12、Sill,、SI12,、SI13、SI 空間資訊 400b 殘餘解碼單元 RB、RM、RQ 殘餘訊號 PI、P2、P3、P4、P5、P6、P7、P8、P9、P10、Pll、P12 路徑 220f 升取樣單元 230f 定義域處理單元 10 20 22H 22L 23 INI > IN2 時序串列解碼單元 多通道音頻訊號處理單元 兩品質解碼方案 低功率解碼方案 時序延遲補償單元 降混位元流 OUT1、OUT2、OUT3、OUT4、OUT5 時序串列資料 34Xq2, DB, XT4, XCQ, XCQ1, XCQ2, XRQ, XT5, XQ5, XC1, XC1, XR, XC2, XCQ5, XRQ5, XT6, XT6, DB1, UXT6, UXTD6, 50, 51 Downmix signal ΧΐνΠ , XM2, XM3, Mb M2, M3, M4, M5, M6, M7, Mf, MHT, MHQ, MLT, MLQ multi-channel audio signals Sib SI2, SI3, SI4, SI2, SI4', SI5, SI6, SI7, SI8, SI7, SI9, SI10, SIH, SI12, Sill, SI12, SI13, SI spatial information 400b residual decoding unit RB, RM, RQ residual signal PI, P2, P3, P4, P5, P6, P7, P8 , P9, P10, P11, P12 path 220f up sampling unit 230f domain processing unit 10 20 22H 22L 23 INI > IN2 timing serial decoding unit multi-channel audio signal processing unit two quality decoding scheme low power decoding scheme timing delay compensation unit Downmix bit stream OUT1, OUT2, OUT3, OUT4, OUT5 timing series data 34

Claims (1)

1317247 十、申請專利範圍: h 一種音頻訊號之解碼方法,包含有下列步驟: 經由一電腦可讀取媒體接收一音# , 頊汛號’該音頻訊號包括 有-時域之-降混訊號及-空間資訊,該空間資訊係在該音頻 訊號中被延遲,該電腦可讀取媒體係為非揮發性媒體、揮發性 媒體、傳輸媒體及其組合其中之一; 根據該電腦可讀取媒體所接收到之指令,由—處理器將該 時域之該降混訊號轉換為-實數正交鏡像輯之一降混 訊號; 根據該電腦可讀取媒體所接_之指令,由該處理器將該 實數正交鏡減波ϋ域之該降綠轉換為—複數正交鏡相 濾波器域之一降混訊號; ▲根據該電腦可讀取媒體所接_之齡,由該處理器組合 該複數正交鏡相濾波器域之該降混訊號與該空間資訊; ▲根據該電腦可讀取舰所接_之指令,使肋組合後之 "亥降混§fl號與该空間資訊由該處理器產生一多通道音頻訊 遮,該多通道音頻訊號包括至少兩個通道訊號;以及 經由一輸出單元輸出該多通道音頻訊號; 其中,接收该音頻序號之前,該空間資訊係被延遲一時 間’該時間包括將該時域之該降混訊號轉換為該實數正交鏡像 濾波為域之该降混讯號所需之時間以及將該實數正交鏡相濾 波盗域之該降混訊號轉換為該複數正交鏡相濾波器域之該降 35 1317247 混訊號所需之時間。 2. 如申請專利範圍第1項所述之音頻訊號之解碼方法,其中該空 間資訊之該延遲時間係由該正交鏡相濾波器決定。 3. —種音頻訊號之解碼系統,包含有: 一處理器;以及 一電腦可讀取媒體,係與該處理器耦合’該電腦可讀取媒 體儲存有指令’當該電腦可讀取媒體被該處理器執行時,致使 該處理器執行下列操作: 接收一音頻訊號,該音頻訊號包括有一時域之一降混 訊號及一空間資訊,該空間資訊係在該音頻訊號中被延 遲; 將該時域之該降混訊號轉換為一實數正交鏡像濾波 器域之一降混訊號; 該實數正交鏡相濾波器域之該降混訊號轉換為一複 數正交鏡相濾波器域之一降混訊號; 組合該複數正交鏡相濾波器域之該降混訊號與該空 間資訊; 使用該組合後之該降混訊號與該空間資訊由該處理 器產生一多通道音頻訊號,該多通道音頻訊號包括至少兩 個通道訊號;以及 輪出該多通道音頻訊號; 36 1317247 其中’接收該音頻序號之前,該空間資訊係被延遲一 時間’該時間包括將該時域之該降混訊號轉換為該實數正 交鏡像濾波器域之該降混訊號所需之時間以及將該實數 正交鏡相濾波器域之該降混訊號轉換為該複數正交鏡相 濾波器域之該降混訊號所需之時間。 4.如申請專利範圍第4項所述之音頻訊號之解碼系統,其中該空 間資訊之該延遲時間係由該正交鏡相濾波器決定。 5· —種音頻訊號之處理系統,包含: 一音頻说號接收裝置’該接收裝置經由一電腦可讀取媒體 接收一時域之一降混訊號以及一頻域之一空間資訊,該空間資 訊係在該音頻訊號中被延遲’該電腦可讀取媒體係為非揮發性 媒體、揮發性媒體、傳輸媒體及其組合其中之一; 一第一轉換裝置,用以根據該電腦可讀取媒體所接收到之 指令,將該時域之該降混訊號轉換為一實數正交鏡像遽波器域 之一降混訊號; 一第二訊號裝置’用以根據該電腦可讀取媒體所接收到之 指令,將該實數正交鏡相濾波器域之該降混訊號轉換為—複數 正交鏡相濾波器域之一降混訊號; 一組合裝置,用以根據該電腦可讀取媒體所接收到之指 令,組合該複數正交鏡相濾波器域之該降混訊號及該空間資 訊; 37 1317247 多通逼音頻訊號產生裝置,用以根據該電腦可讀取媒體 所接收到之指令’使用該組合後之該降混訊號與該空間資訊由 。亥處理器產生一多通道音頻訊號,該多通道音頻訊號包括至少 兩個通道訊號; —輸出裝置,用以經由一輸出單元輸出該多通道音頻訊 號; 其中,接收該音頻序號之前,該空間資訊係被延遲一時 間,该時間包括將該時域之該降混訊號轉換為該實數正交鏡像 滤波為域之該降混訊號所需之時間以及將該實數正交鏡相濾 波益域之該降混訊號轉換為該複數正交鏡相濾波器域之該降 混訊號所需之時間。 6.如申請專利範圍第5項所述之音頻訊號之處理系統,其中該空 間貧訊之該延遲時間係由該正交鏡相濾波器決定。 381317247 X. Patent application scope: h A method for decoding an audio signal, comprising the following steps: receiving a sound # via a computer readable medium, the nickname 'the audio signal includes a time-domain-downmix signal and - spatial information, the spatial information is delayed in the audio signal, the computer readable medium is one of non-volatile media, volatile media, transmission media and combinations thereof; according to the computer readable media Receiving the instruction, the processor converts the downmix signal in the time domain into one of the real quadrature mirror series downmix signals; according to the instruction of the computer readable medium, the processor will The green reduction of the real-number orthogonal mirror subtraction domain is converted into a down-mix signal of the complex-mirror phase filter domain; ▲ according to the age of the computer-readable medium, the processor combines the The down-mixed signal and the spatial information of the complex orthogonal phase-phase filter domain; ▲ according to the computer-readable ship-connected instruction, the rib combination is combined with the quot; The processor generates Multi-channel audio mask, the multi-channel audio signal includes at least two channel signals; and outputting the multi-channel audio signal via an output unit; wherein the spatial information is delayed for a time before receiving the audio serial number' Converting the downmix signal of the time domain to the time required for the real quadrature mirror filtering to be the downmix signal of the domain and converting the downmix signal of the real orthogonal phase filter to the complex number The time required for the cross-phase filter field to drop 35 1317247 mixed signal. 2. The method of decoding an audio signal according to claim 1, wherein the delay time of the spatial information is determined by the orthogonal mirror filter. 3. A decoding system for an audio signal, comprising: a processor; and a computer readable medium coupled to the processor 'the computer readable medium storing instructions' when the computer readable medium is When the processor is executed, the processor is caused to: receive an audio signal, the audio signal includes a downmix signal and a spatial information in a time domain, and the spatial information is delayed in the audio signal; The downmix signal in the time domain is converted into a downmix signal of a real quadrature mirror filter domain; the downmix signal of the real orthogonal mirror filter domain is converted into one of a complex orthogonal mirror phase filter domain Downmixing signal; combining the downmix signal of the complex orthogonal mirror filter domain with the spatial information; using the combined downmix signal and the spatial information to generate a multi-channel audio signal by the processor, The channel audio signal includes at least two channel signals; and the multi-channel audio signal is rotated; 36 1317247 wherein the spatial information is delayed by one before receiving the audio serial number Time 'this time includes converting the downmix signal of the time domain to the time required for the downmix signal of the real quadrature mirror filter domain and converting the downmix signal of the real orthogonal phase filter domain The time required for the downmix signal of the complex orthogonal phase phase filter domain. 4. The decoding system for an audio signal according to claim 4, wherein the delay time of the spatial information is determined by the orthogonal mirror filter. A processing system for an audio signal, comprising: an audio speaker receiving device, wherein the receiving device receives a downmix signal in one time domain and a spatial information in a frequency domain via a computer readable medium, the spatial information system Delayed in the audio signal 'The computer readable medium is one of non-volatile media, volatile media, transmission media, and a combination thereof; a first conversion device for reading media according to the computer Receiving an instruction to convert the downmix signal of the time domain into a downmix signal of a real orthogonal mirror chopper domain; a second signal device 'for receiving according to the computer readable medium An instruction to convert the downmix signal of the real-numbered orthogonal mirror filter domain into one of the complex-orthogonal phase filter filter domains; a combination device for receiving the received medium according to the computer An instruction to combine the downmix signal and the spatial information of the complex orthogonal phase filter filter domain; 37 1317247 multi-channel forced audio signal generating device for receiving according to the computer readable medium After the instruction to 'the use of the combination of the downmix signal and the spatial information. The multi-channel audio signal includes at least two channel signals; the output device outputs the multi-channel audio signal via an output unit; wherein the spatial information is received before receiving the audio serial number Is delayed by a time, the time includes converting the downmix signal of the time domain into a time required for the real quadrature mirror filtering to be the downmix signal of the domain and filtering the real-domain orthogonal mirror phase The time required for the downmix signal to be converted to the downmix signal of the complex orthogonal mirror filter domain. 6. The processing system of an audio signal according to claim 5, wherein the delay time of the space lag is determined by the orthogonal mirror filter. 38
TW095136564A 2005-10-24 2006-10-02 Removing time delays in signal paths TWI317247B (en)

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
US72922505P 2005-10-24 2005-10-24
US75700506P 2006-01-09 2006-01-09
US78674006P 2006-03-29 2006-03-29
US79232906P 2006-04-17 2006-04-17
KR1020060078223A KR20070037986A (en) 2005-10-04 2006-08-18 Method and apparatus method for processing multi-channel audio signal
KR1020060078218A KR20070037983A (en) 2005-10-04 2006-08-18 Method for decoding multi-channel audio signals and method for generating encoded audio signal
KR1020060078225A KR20070037987A (en) 2005-10-04 2006-08-18 Method and apparatus for decoding multi-channel audio signal
KR1020060078219A KR20070074442A (en) 2006-01-09 2006-08-18 Apparatus and method for recovering multi-channel audio signal, and computer-readable medium storing a program performed in the apparatus
KR1020060078222A KR20070037985A (en) 2005-10-04 2006-08-18 Method and apparatus method for decoding multi-channel audio signals
KR1020060078221A KR20070037984A (en) 2005-10-04 2006-08-18 Method and apparatus for decoding multi-channel audio signals

Publications (2)

Publication Number Publication Date
TW200723932A TW200723932A (en) 2007-06-16
TWI317247B true TWI317247B (en) 2009-11-11

Family

ID=44454038

Family Applications (6)

Application Number Title Priority Date Filing Date
TW095136562A TWI317246B (en) 2005-10-24 2006-10-02 Removing time delays in signal paths
TW095136559A TWI317245B (en) 2005-10-24 2006-10-02 Removing time delays in signal paths
TW095136563A TWI317244B (en) 2005-10-24 2006-10-02 Removing time delays in signal paths
TW095136566A TWI310544B (en) 2005-10-24 2006-10-02 Removing time delays in signal paths
TW095136561A TWI317243B (en) 2005-10-24 2006-10-02 Removing time delays in signal paths
TW095136564A TWI317247B (en) 2005-10-24 2006-10-02 Removing time delays in signal paths

Family Applications Before (5)

Application Number Title Priority Date Filing Date
TW095136562A TWI317246B (en) 2005-10-24 2006-10-02 Removing time delays in signal paths
TW095136559A TWI317245B (en) 2005-10-24 2006-10-02 Removing time delays in signal paths
TW095136563A TWI317244B (en) 2005-10-24 2006-10-02 Removing time delays in signal paths
TW095136566A TWI310544B (en) 2005-10-24 2006-10-02 Removing time delays in signal paths
TW095136561A TWI317243B (en) 2005-10-24 2006-10-02 Removing time delays in signal paths

Country Status (11)

Country Link
US (8) US7716043B2 (en)
EP (6) EP1952673A1 (en)
JP (6) JP5270357B2 (en)
KR (7) KR101186611B1 (en)
CN (6) CN101297595A (en)
AU (1) AU2006306942B2 (en)
BR (1) BRPI0617779A2 (en)
CA (1) CA2626132C (en)
HK (1) HK1126071A1 (en)
TW (6) TWI317246B (en)
WO (6) WO2007049864A1 (en)

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7116787B2 (en) * 2001-05-04 2006-10-03 Agere Systems Inc. Perceptual synthesis of auditory scenes
US7644003B2 (en) * 2001-05-04 2010-01-05 Agere Systems Inc. Cue-based audio coding/decoding
US7805313B2 (en) * 2004-03-04 2010-09-28 Agere Systems Inc. Frequency-based coding of channels in parametric multi-channel coding systems
US7720230B2 (en) * 2004-10-20 2010-05-18 Agere Systems, Inc. Individual channel shaping for BCC schemes and the like
US8204261B2 (en) * 2004-10-20 2012-06-19 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Diffuse sound shaping for BCC schemes and the like
US7761304B2 (en) * 2004-11-30 2010-07-20 Agere Systems Inc. Synchronizing parametric coding of spatial audio with externally provided downmix
US8340306B2 (en) * 2004-11-30 2012-12-25 Agere Systems Llc Parametric coding of spatial audio with object-based side information
US7787631B2 (en) * 2004-11-30 2010-08-31 Agere Systems Inc. Parametric coding of spatial audio with cues based on transmitted channels
US7903824B2 (en) * 2005-01-10 2011-03-08 Agere Systems Inc. Compact side information for parametric coding of spatial audio
CN101253556B (en) * 2005-09-02 2011-06-22 松下电器产业株式会社 Energy shaping device and energy shaping method
US7716043B2 (en) 2005-10-24 2010-05-11 Lg Electronics Inc. Removing time delays in signal paths
CN102394063B (en) * 2006-07-04 2013-03-20 韩国电子通信研究院 MPEG surround decoder and method for restoring multi-channel audio signal
FR2911031B1 (en) * 2006-12-28 2009-04-10 Actimagine Soc Par Actions Sim AUDIO CODING METHOD AND DEVICE
FR2911020B1 (en) * 2006-12-28 2009-05-01 Actimagine Soc Par Actions Sim AUDIO CODING METHOD AND DEVICE
JP5018193B2 (en) * 2007-04-06 2012-09-05 ヤマハ株式会社 Noise suppression device and program
GB2453117B (en) * 2007-09-25 2012-05-23 Motorola Mobility Inc Apparatus and method for encoding a multi channel audio signal
JPWO2009050896A1 (en) * 2007-10-16 2011-02-24 パナソニック株式会社 Stream synthesizing apparatus, decoding apparatus, and method
TWI407362B (en) * 2008-03-28 2013-09-01 Hon Hai Prec Ind Co Ltd Playing device and audio outputting method
WO2010005224A2 (en) * 2008-07-07 2010-01-14 Lg Electronics Inc. A method and an apparatus for processing an audio signal
EP2144230A1 (en) * 2008-07-11 2010-01-13 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Low bitrate audio encoding/decoding scheme having cascaded switches
EP2144231A1 (en) * 2008-07-11 2010-01-13 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Low bitrate audio encoding/decoding scheme with common preprocessing
BRPI0905069A2 (en) * 2008-07-29 2015-06-30 Panasonic Corp Audio coding apparatus, audio decoding apparatus, audio coding and decoding apparatus and teleconferencing system
TWI503816B (en) * 2009-05-06 2015-10-11 Dolby Lab Licensing Corp Adjusting the loudness of an audio signal with perceived spectral balance preservation
US20110153391A1 (en) * 2009-12-21 2011-06-23 Michael Tenbrock Peer-to-peer privacy panel for audience measurement
EP2862168B1 (en) 2012-06-14 2017-08-09 Dolby International AB Smooth configuration switching for multichannel audio
EP2757559A1 (en) * 2013-01-22 2014-07-23 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for spatial audio object coding employing hidden objects for signal mixture manipulation
US9715880B2 (en) * 2013-02-21 2017-07-25 Dolby International Ab Methods for parametric multi-channel encoding
EP3044790B1 (en) 2013-09-12 2018-10-03 Dolby International AB Time-alignment of qmf based processing data
US10152977B2 (en) * 2015-11-20 2018-12-11 Qualcomm Incorporated Encoding of multiple audio signals
US9978381B2 (en) * 2016-02-12 2018-05-22 Qualcomm Incorporated Encoding of multiple audio signals
JP6866071B2 (en) * 2016-04-25 2021-04-28 ヤマハ株式会社 Terminal device, terminal device operation method and program
KR101687741B1 (en) 2016-05-12 2016-12-19 김태서 Active advertisement system and control method thereof based on traffic signal
KR101687745B1 (en) 2016-05-12 2016-12-19 김태서 Advertisement system and control method thereof for bi-directional data communication based on traffic signal
EP4336497A3 (en) * 2018-07-04 2024-03-20 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Multisignal encoder, multisignal decoder, and related methods using signal whitening or signal post processing

Family Cites Families (152)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6096079A (en) 1983-10-31 1985-05-29 Matsushita Electric Ind Co Ltd Encoding method of multivalue picture
US4661862A (en) 1984-04-27 1987-04-28 Rca Corporation Differential PCM video transmission system employing horizontally offset five pixel groups and delta signals having plural non-linear encoding functions
US4621862A (en) 1984-10-22 1986-11-11 The Coca-Cola Company Closing means for trucks
JPS6294090A (en) 1985-10-21 1987-04-30 Hitachi Ltd Encoding device
JPS6294090U (en) 1985-12-02 1987-06-16
US4725885A (en) * 1986-12-22 1988-02-16 International Business Machines Corporation Adaptive graylevel image compression system
JPH0793584B2 (en) 1987-09-25 1995-10-09 株式会社日立製作所 Encoder
NL8901032A (en) 1988-11-10 1990-06-01 Philips Nv CODER FOR INCLUDING ADDITIONAL INFORMATION IN A DIGITAL AUDIO SIGNAL WITH A PREFERRED FORMAT, A DECODER FOR DERIVING THIS ADDITIONAL INFORMATION FROM THIS DIGITAL SIGNAL, AN APPARATUS FOR RECORDING A DIGITAL SIGNAL ON A CODE OF RECORD. OBTAINED A RECORD CARRIER WITH THIS DEVICE.
US5243686A (en) 1988-12-09 1993-09-07 Oki Electric Industry Co., Ltd. Multi-stage linear predictive analysis method for feature extraction from acoustic signals
JP2811369B2 (en) 1989-01-27 1998-10-15 ドルビー・ラボラトリーズ・ライセンシング・コーポレーション Short-time delay conversion coder, decoder and encoder / decoder for high quality audio
DE3912605B4 (en) 1989-04-17 2008-09-04 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Digital coding method
US6289308B1 (en) * 1990-06-01 2001-09-11 U.S. Philips Corporation Encoded wideband digital transmission signal and record carrier recorded with such a signal
NL9000338A (en) 1989-06-02 1991-01-02 Koninkl Philips Electronics Nv DIGITAL TRANSMISSION SYSTEM, TRANSMITTER AND RECEIVER FOR USE IN THE TRANSMISSION SYSTEM AND RECORD CARRIED OUT WITH THE TRANSMITTER IN THE FORM OF A RECORDING DEVICE.
GB8921320D0 (en) 1989-09-21 1989-11-08 British Broadcasting Corp Digital video coding
AU653582B2 (en) * 1991-01-08 1994-10-06 Dolby Laboratories Licensing Corporation Encoder/decoder for multidimensional sound fields
CA2075156A1 (en) * 1991-08-02 1993-02-03 Kenzo Akagiri Digital encoder with dynamic quantization bit allocation
DE4209544A1 (en) 1992-03-24 1993-09-30 Inst Rundfunktechnik Gmbh Method for transmitting or storing digitized, multi-channel audio signals
JP3104400B2 (en) 1992-04-27 2000-10-30 ソニー株式会社 Audio signal encoding apparatus and method
JP3123286B2 (en) * 1993-02-18 2001-01-09 ソニー株式会社 Digital signal processing device or method, and recording medium
US5481643A (en) * 1993-03-18 1996-01-02 U.S. Philips Corporation Transmitter, receiver and record carrier for transmitting/receiving at least a first and a second signal component
US5563661A (en) 1993-04-05 1996-10-08 Canon Kabushiki Kaisha Image processing apparatus
US6125398A (en) * 1993-11-24 2000-09-26 Intel Corporation Communications subsystem for computer-based conferencing system using both ISDN B channels for transmission
US5508942A (en) 1993-11-24 1996-04-16 Intel Corporation Intra/inter decision rules for encoding and decoding video signals
US5640159A (en) 1994-01-03 1997-06-17 International Business Machines Corporation Quantization method for image data compression employing context modeling algorithm
RU2158970C2 (en) 1994-03-01 2000-11-10 Сони Корпорейшн Method for digital signal encoding and device which implements said method, carrier for digital signal recording, method for digital signal decoding and device which implements said method
US5550541A (en) 1994-04-01 1996-08-27 Dolby Laboratories Licensing Corporation Compact source coding tables for encoder/decoder system
DE4414445A1 (en) * 1994-04-26 1995-11-09 Heidelberger Druckmasch Ag Tacting roll for transporting sheets into a sheet processing machine
JP3498375B2 (en) * 1994-07-20 2004-02-16 ソニー株式会社 Digital audio signal recording device
US6549666B1 (en) * 1994-09-21 2003-04-15 Ricoh Company, Ltd Reversible embedded wavelet system implementation
JPH08123494A (en) 1994-10-28 1996-05-17 Mitsubishi Electric Corp Speech encoding device, speech decoding device, speech encoding and decoding method, and phase amplitude characteristic derivation device usable for same
JPH08130649A (en) * 1994-11-01 1996-05-21 Canon Inc Data processing unit
KR100209877B1 (en) * 1994-11-26 1999-07-15 윤종용 Variable length coding encoder and decoder using multiple huffman table
JP3371590B2 (en) 1994-12-28 2003-01-27 ソニー株式会社 High efficiency coding method and high efficiency decoding method
JP3484832B2 (en) 1995-08-02 2004-01-06 ソニー株式会社 Recording apparatus, recording method, reproducing apparatus and reproducing method
KR100219217B1 (en) 1995-08-31 1999-09-01 전주범 Method and device for losslessly encoding
US5956674A (en) 1995-12-01 1999-09-21 Digital Theater Systems, Inc. Multi-channel predictive subband audio coder using psychoacoustic adaptive bit allocation in frequency, time and over the multiple channels
US6047027A (en) 1996-02-07 2000-04-04 Matsushita Electric Industrial Co., Ltd. Packetized data stream decoder using timing information extraction and insertion
JP3088319B2 (en) 1996-02-07 2000-09-18 松下電器産業株式会社 Decoding device and decoding method
US6399760B1 (en) 1996-04-12 2002-06-04 Millennium Pharmaceuticals, Inc. RP compositions and therapeutic and diagnostic uses therefor
KR100430328B1 (en) 1996-04-18 2004-07-14 노키아 모빌 폰즈 리미티드 Video data encoders and decoders
US5970152A (en) * 1996-04-30 1999-10-19 Srs Labs, Inc. Audio enhancement system for use in a surround sound environment
KR100206786B1 (en) * 1996-06-22 1999-07-01 구자홍 Multi-audio processing device for a dvd player
EP0827312A3 (en) 1996-08-22 2003-10-01 Marconi Communications GmbH Method for changing the configuration of data packets
US5912636A (en) * 1996-09-26 1999-06-15 Ricoh Company, Ltd. Apparatus and method for performing m-ary finite state machine entropy coding
US5893066A (en) 1996-10-15 1999-04-06 Samsung Electronics Co. Ltd. Fast requantization apparatus and method for MPEG audio decoding
TW429700B (en) 1997-02-26 2001-04-11 Sony Corp Information encoding method and apparatus, information decoding method and apparatus and information recording medium
US6134518A (en) 1997-03-04 2000-10-17 International Business Machines Corporation Digital audio signal coding using a CELP coder and a transform coder
US6639945B2 (en) 1997-03-14 2003-10-28 Microsoft Corporation Method and apparatus for implementing motion detection in video compression
US6131084A (en) 1997-03-14 2000-10-10 Digital Voice Systems, Inc. Dual subframe quantization of spectral magnitudes
US5924930A (en) * 1997-04-03 1999-07-20 Stewart; Roger K. Hitting station and methods related thereto
TW405328B (en) 1997-04-11 2000-09-11 Matsushita Electric Ind Co Ltd Audio decoding apparatus, signal processing device, sound image localization device, sound image control method, audio signal processing device, and audio signal high-rate reproduction method used for audio visual equipment
US5890125A (en) * 1997-07-16 1999-03-30 Dolby Laboratories Licensing Corporation Method and apparatus for encoding and decoding multiple audio channels at low bit rates using adaptive selection of encoding method
TW432372B (en) 1997-09-17 2001-05-01 Matsushita Electric Ind Co Ltd Optical disc, video data editing apparatus, computer -readable recording medium storing an editing program, reproduction apparatus for the optical disc, and computer -readable recording medium storing an reproduction program
US6130418A (en) 1997-10-06 2000-10-10 U.S. Philips Corporation Optical scanning unit having a main lens and an auxiliary lens
US5966688A (en) 1997-10-28 1999-10-12 Hughes Electronics Corporation Speech mode based multi-stage vector quantizer
JP2005063655A (en) 1997-11-28 2005-03-10 Victor Co Of Japan Ltd Encoding method and decoding method of audio signal
NO306154B1 (en) * 1997-12-05 1999-09-27 Jan H Iien PolstringshÕndtak
JP3022462B2 (en) 1998-01-13 2000-03-21 興和株式会社 Vibration wave encoding method and decoding method
DE69926821T2 (en) 1998-01-22 2007-12-06 Deutsche Telekom Ag Method for signal-controlled switching between different audio coding systems
JPH11282496A (en) 1998-03-30 1999-10-15 Matsushita Electric Ind Co Ltd Decoding device
AUPP272898A0 (en) * 1998-03-31 1998-04-23 Lake Dsp Pty Limited Time processed head related transfer functions in a headphone spatialization system
US6016473A (en) 1998-04-07 2000-01-18 Dolby; Ray M. Low bit-rate spatial coding method and system
US6360204B1 (en) 1998-04-24 2002-03-19 Sarnoff Corporation Method and apparatus for implementing rounding in decoding an audio signal
US6339760B1 (en) * 1998-04-28 2002-01-15 Hitachi, Ltd. Method and system for synchronization of decoded audio and video by adding dummy data to compressed audio data
JPH11330980A (en) 1998-05-13 1999-11-30 Matsushita Electric Ind Co Ltd Decoding device and method and recording medium recording decoding procedure
CA2336411C (en) 1998-07-03 2007-11-13 Dolby Laboratories Licensing Corporation Transcoders for fixed and variable rate data streams
GB2340351B (en) 1998-07-29 2004-06-09 British Broadcasting Corp Data transmission
MY118961A (en) 1998-09-03 2005-02-28 Sony Corp Beam irradiation apparatus, optical apparatus having beam irradiation apparatus for information recording medium, method for manufacturing original disk for information recording medium, and method for manufacturing information recording medium
US6298071B1 (en) 1998-09-03 2001-10-02 Diva Systems Corporation Method and apparatus for processing variable bit rate information in an information distribution system
US6148283A (en) 1998-09-23 2000-11-14 Qualcomm Inc. Method and apparatus using multi-path multi-stage vector quantizer
US6553147B2 (en) * 1998-10-05 2003-04-22 Sarnoff Corporation Apparatus and method for data partitioning to improving error resilience
US6556685B1 (en) 1998-11-06 2003-04-29 Harman Music Group Companding noise reduction system with simultaneous encode and decode
JP3346556B2 (en) 1998-11-16 2002-11-18 日本ビクター株式会社 Audio encoding method and audio decoding method
US6757659B1 (en) 1998-11-16 2004-06-29 Victor Company Of Japan, Ltd. Audio signal processing apparatus
US6195024B1 (en) 1998-12-11 2001-02-27 Realtime Data, Llc Content independent data compression method and system
US6208276B1 (en) 1998-12-30 2001-03-27 At&T Corporation Method and apparatus for sample rate pre- and post-processing to achieve maximal coding gain for transform-based audio encoding and decoding
US6631352B1 (en) 1999-01-08 2003-10-07 Matushita Electric Industrial Co. Ltd. Decoding circuit and reproduction apparatus which mutes audio after header parameter changes
JP4610087B2 (en) 1999-04-07 2011-01-12 ドルビー・ラボラトリーズ・ライセンシング・コーポレーション Matrix improvement to lossless encoding / decoding
JP3323175B2 (en) 1999-04-20 2002-09-09 松下電器産業株式会社 Encoding device
US6421467B1 (en) * 1999-05-28 2002-07-16 Texas Tech University Adaptive vector quantization/quantizer
KR100307596B1 (en) 1999-06-10 2001-11-01 윤종용 Lossless coding and decoding apparatuses of digital audio data
JP2000352999A (en) * 1999-06-11 2000-12-19 Nec Corp Audio switching device
US6226616B1 (en) 1999-06-21 2001-05-01 Digital Theater Systems, Inc. Sound quality of established low bit-rate audio coding systems without loss of decoder compatibility
JP2001006291A (en) 1999-06-21 2001-01-12 Fuji Film Microdevices Co Ltd Encoding system judging device of audio signal and encoding system judging method for audio signal
JP3762579B2 (en) 1999-08-05 2006-04-05 株式会社リコー Digital audio signal encoding apparatus, digital audio signal encoding method, and medium on which digital audio signal encoding program is recorded
JP2002093055A (en) * 2000-07-10 2002-03-29 Matsushita Electric Ind Co Ltd Signal processing device, signal processing method and optical disk reproducing device
US20020049586A1 (en) * 2000-09-11 2002-04-25 Kousuke Nishio Audio encoder, audio decoder, and broadcasting system
US6636830B1 (en) * 2000-11-22 2003-10-21 Vialta Inc. System and method for noise reduction using bi-orthogonal modified discrete cosine transform
JP4008244B2 (en) 2001-03-02 2007-11-14 松下電器産業株式会社 Encoding device and decoding device
JP3566220B2 (en) 2001-03-09 2004-09-15 三菱電機株式会社 Speech coding apparatus, speech coding method, speech decoding apparatus, and speech decoding method
US6504496B1 (en) * 2001-04-10 2003-01-07 Cirrus Logic, Inc. Systems and methods for decoding compressed data
US7292901B2 (en) 2002-06-24 2007-11-06 Agere Systems Inc. Hybrid multi-channel/cue coding/decoding of audio signals
US7583805B2 (en) 2004-02-12 2009-09-01 Agere Systems Inc. Late reverberation-based synthesis of auditory scenes
US7644003B2 (en) * 2001-05-04 2010-01-05 Agere Systems Inc. Cue-based audio coding/decoding
JP2002335230A (en) 2001-05-11 2002-11-22 Victor Co Of Japan Ltd Method and device for decoding audio encoded signal
JP2003005797A (en) 2001-06-21 2003-01-08 Matsushita Electric Ind Co Ltd Method and device for encoding audio signal, and system for encoding and decoding audio signal
GB0119569D0 (en) * 2001-08-13 2001-10-03 Radioscape Ltd Data hiding in digital audio broadcasting (DAB)
EP1308931A1 (en) * 2001-10-23 2003-05-07 Deutsche Thomson-Brandt Gmbh Decoding of a digital audio signal organised in frames comprising a header
KR100480787B1 (en) 2001-11-27 2005-04-07 삼성전자주식회사 Encoding/decoding method and apparatus for key value of coordinate interpolator node
JP2005510925A (en) * 2001-11-30 2005-04-21 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Signal coding
TW569550B (en) 2001-12-28 2004-01-01 Univ Nat Central Method of inverse-modified discrete cosine transform and overlap-add for MPEG layer 3 voice signal decoding and apparatus thereof
EP1833262A1 (en) 2002-01-18 2007-09-12 Kabushiki Kaisha Toshiba Video encoding method and apparatus and video decoding method and apparatus
US7212247B2 (en) * 2002-01-31 2007-05-01 Thomson Licensing Audio/video system providing variable delay
JP2003233395A (en) 2002-02-07 2003-08-22 Matsushita Electric Ind Co Ltd Method and device for encoding audio signal and encoding and decoding system
EP1484841B1 (en) 2002-03-08 2018-12-26 Nippon Telegraph And Telephone Corporation DIGITAL SIGNAL ENCODING METHOD, DECODING METHOD, ENCODING DEVICE, DECODING DEVICE and DIGITAL SIGNAL DECODING PROGRAM
WO2003085644A1 (en) 2002-04-11 2003-10-16 Matsushita Electric Industrial Co., Ltd. Encoding device and decoding device
DE10217297A1 (en) 2002-04-18 2003-11-06 Fraunhofer Ges Forschung Device and method for coding a discrete-time audio signal and device and method for decoding coded audio data
US7275036B2 (en) 2002-04-18 2007-09-25 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for coding a time-discrete audio signal to obtain coded audio data and for decoding coded audio data
CN1663257A (en) 2002-04-19 2005-08-31 德国普莱特科技公司 Wavelet transform system, method and computer program product
DE60311794C5 (en) 2002-04-22 2022-11-10 Koninklijke Philips N.V. SIGNAL SYNTHESIS
AU2003216686A1 (en) 2002-04-22 2003-11-03 Koninklijke Philips Electronics N.V. Parametric multi-channel audio representation
JP2004004274A (en) * 2002-05-31 2004-01-08 Matsushita Electric Ind Co Ltd Voice signal processing switching equipment
KR100486524B1 (en) * 2002-07-04 2005-05-03 엘지전자 주식회사 Shortening apparatus for delay time in video codec
KR100981699B1 (en) 2002-07-12 2010-09-13 코닌클리케 필립스 일렉트로닉스 엔.브이. Audio coding
US7542896B2 (en) 2002-07-16 2009-06-02 Koninklijke Philips Electronics N.V. Audio coding/decoding with spatial parameters and non-uniform segmentation for transients
BR0311601A (en) 2002-07-19 2005-02-22 Nec Corp Audio decoder device and method to enable computer
ATE341923T1 (en) 2002-08-07 2006-10-15 Dolby Lab Licensing Corp AUDIO CHANNEL CONVERSION
JP2004085945A (en) * 2002-08-27 2004-03-18 Canon Inc Sound output device and its data transmission control method
US7536305B2 (en) 2002-09-04 2009-05-19 Microsoft Corporation Mixed lossless audio compression
US7502743B2 (en) 2002-09-04 2009-03-10 Microsoft Corporation Multi-channel audio encoding and decoding with multi-channel transform selection
TW567466B (en) 2002-09-13 2003-12-21 Inventec Besta Co Ltd Method using computer to compress and encode audio data
US8306340B2 (en) 2002-09-17 2012-11-06 Vladimir Ceperkovic Fast codec with high compression ratio and minimum required resources
JP4084990B2 (en) 2002-11-19 2008-04-30 株式会社ケンウッド Encoding device, decoding device, encoding method and decoding method
JP2004220743A (en) 2003-01-17 2004-08-05 Sony Corp Information recording device, information recording control method, information reproducing device, information reproduction control method
JP3761522B2 (en) * 2003-01-22 2006-03-29 パイオニア株式会社 Audio signal processing apparatus and audio signal processing method
WO2004072956A1 (en) 2003-02-11 2004-08-26 Koninklijke Philips Electronics N.V. Audio coding
US7787632B2 (en) 2003-03-04 2010-08-31 Nokia Corporation Support of a multichannel audio extension
US20040199276A1 (en) * 2003-04-03 2004-10-07 Wai-Leong Poon Method and apparatus for audio synchronization
ATE355590T1 (en) 2003-04-17 2006-03-15 Koninkl Philips Electronics Nv AUDIO SIGNAL SYNTHESIS
DE602004005846T2 (en) * 2003-04-17 2007-12-20 Koninklijke Philips Electronics N.V. AUDIO SIGNAL GENERATION
JP2005086486A (en) * 2003-09-09 2005-03-31 Alpine Electronics Inc Audio system and audio processing method
US7447317B2 (en) * 2003-10-02 2008-11-04 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V Compatible multi-channel coding/decoding by weighting the downmix channel
EP1683133B1 (en) * 2003-10-30 2007-02-14 Koninklijke Philips Electronics N.V. Audio signal encoding or decoding
US20050137729A1 (en) * 2003-12-18 2005-06-23 Atsuhiro Sakurai Time-scale modification stereo audio signals
SE527670C2 (en) 2003-12-19 2006-05-09 Ericsson Telefon Ab L M Natural fidelity optimized coding with variable frame length
US7394903B2 (en) * 2004-01-20 2008-07-01 Fraunhofer-Gesellschaft Zur Forderung Der Angewandten Forschung E.V. Apparatus and method for constructing a multi-channel output signal or for generating a downmix signal
US20050174269A1 (en) 2004-02-05 2005-08-11 Broadcom Corporation Huffman decoder used for decoding both advanced audio coding (AAC) and MP3 audio
US7272567B2 (en) * 2004-03-25 2007-09-18 Zoran Fejzo Scalable lossless audio codec and authoring tool
BRPI0509113B8 (en) * 2004-04-05 2018-10-30 Koninklijke Philips Nv multichannel encoder, method for encoding input signals, encoded data content, data bearer, and operable decoder for decoding encoded output data
CN1947407A (en) * 2004-04-09 2007-04-11 日本电气株式会社 Audio communication method and device
JP4579237B2 (en) * 2004-04-22 2010-11-10 三菱電機株式会社 Image encoding apparatus and image decoding apparatus
JP2005332449A (en) 2004-05-18 2005-12-02 Sony Corp Optical pickup device, optical recording and reproducing device and tilt control method
TWM257575U (en) 2004-05-26 2005-02-21 Aimtron Technology Corp Encoder and decoder for audio and video information
JP2006012301A (en) * 2004-06-25 2006-01-12 Sony Corp Optical recording/reproducing method, optical pickup device, optical recording/reproducing device, method for manufacturing optical recording medium, and semiconductor laser device
US8204261B2 (en) 2004-10-20 2012-06-19 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Diffuse sound shaping for BCC schemes and the like
JP2006120247A (en) 2004-10-21 2006-05-11 Sony Corp Condenser lens and its manufacturing method, exposure apparatus using same, optical pickup apparatus, and optical recording and reproducing apparatus
SE0402650D0 (en) 2004-11-02 2004-11-02 Coding Tech Ab Improved parametric stereo compatible coding or spatial audio
US7573912B2 (en) * 2005-02-22 2009-08-11 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschunng E.V. Near-transparent or transparent multi-channel encoder/decoder scheme
US7991610B2 (en) 2005-04-13 2011-08-02 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Adaptive grouping of parameters for enhanced coding efficiency
CZ300251B6 (en) 2005-07-20 2009-04-01 Oez S. R. O. Switching apparatus, particularly power circuit breaker
US7716043B2 (en) 2005-10-24 2010-05-11 Lg Electronics Inc. Removing time delays in signal paths
JP4876574B2 (en) * 2005-12-26 2012-02-15 ソニー株式会社 Signal encoding apparatus and method, signal decoding apparatus and method, program, and recording medium

Also Published As

Publication number Publication date
JP5399706B2 (en) 2014-01-29
JP5249039B2 (en) 2013-07-31
TWI317246B (en) 2009-11-11
KR100888973B1 (en) 2009-03-17
KR100888971B1 (en) 2009-03-17
US7716043B2 (en) 2010-05-11
EP1952670A4 (en) 2012-09-26
WO2007049864A1 (en) 2007-05-03
KR20080096603A (en) 2008-10-30
WO2007049863A3 (en) 2007-06-14
WO2007049861A1 (en) 2007-05-03
US20070094010A1 (en) 2007-04-26
US20070094011A1 (en) 2007-04-26
TWI317245B (en) 2009-11-11
US20100329467A1 (en) 2010-12-30
JP2009513084A (en) 2009-03-26
KR20090018131A (en) 2009-02-19
KR101186611B1 (en) 2012-09-27
TWI317244B (en) 2009-11-11
BRPI0617779A2 (en) 2011-08-09
CA2626132C (en) 2012-08-28
CN101297598A (en) 2008-10-29
US20070094014A1 (en) 2007-04-26
US7653533B2 (en) 2010-01-26
AU2006306942B2 (en) 2010-02-18
KR20080040785A (en) 2008-05-08
KR100928268B1 (en) 2009-11-24
JP5270357B2 (en) 2013-08-21
JP2009512901A (en) 2009-03-26
CN101297594B (en) 2014-07-02
JP2009512899A (en) 2009-03-26
TW200723247A (en) 2007-06-16
EP1952675A4 (en) 2010-09-29
TW200723931A (en) 2007-06-16
TW200718259A (en) 2007-05-01
US7840401B2 (en) 2010-11-23
US8095357B2 (en) 2012-01-10
WO2007049866A1 (en) 2007-05-03
US7761289B2 (en) 2010-07-20
EP1952672B1 (en) 2016-04-27
CN101297597A (en) 2008-10-29
JP2009512900A (en) 2009-03-26
US20070092086A1 (en) 2007-04-26
JP5249038B2 (en) 2013-07-31
EP1952670A1 (en) 2008-08-06
EP1952672A2 (en) 2008-08-06
KR20080050442A (en) 2008-06-05
KR100888972B1 (en) 2009-03-17
EP1952674A1 (en) 2008-08-06
WO2007049863A2 (en) 2007-05-03
EP1952674B1 (en) 2015-09-09
KR20080050445A (en) 2008-06-05
TWI310544B (en) 2009-06-01
TW200719747A (en) 2007-05-16
WO2007049863A8 (en) 2007-08-02
KR20080050444A (en) 2008-06-05
TWI317243B (en) 2009-11-11
CN101297596A (en) 2008-10-29
US8095358B2 (en) 2012-01-10
KR100888974B1 (en) 2009-03-17
EP1952674A4 (en) 2010-09-29
EP1952671A4 (en) 2010-09-22
CA2626132A1 (en) 2007-05-03
HK1126071A1 (en) 2009-08-21
US20070094012A1 (en) 2007-04-26
EP1952673A1 (en) 2008-08-06
CN101297594A (en) 2008-10-29
CN101297595A (en) 2008-10-29
CN101297596B (en) 2012-11-07
EP1952671A1 (en) 2008-08-06
CN101297597B (en) 2013-03-27
US20100324916A1 (en) 2010-12-23
CN101297598B (en) 2011-08-17
JP2009512902A (en) 2009-03-26
KR100875428B1 (en) 2008-12-22
KR20080050443A (en) 2008-06-05
WO2007049865A1 (en) 2007-05-03
US20070094013A1 (en) 2007-04-26
WO2007049862A8 (en) 2007-08-02
EP1952672A4 (en) 2010-09-29
AU2006306942A1 (en) 2007-05-03
CN101297599A (en) 2008-10-29
WO2007049862A1 (en) 2007-05-03
EP1952675A1 (en) 2008-08-06
US7742913B2 (en) 2010-06-22
JP2009513085A (en) 2009-03-26
TW200723932A (en) 2007-06-16
JP5270358B2 (en) 2013-08-21

Similar Documents

Publication Publication Date Title
TWI317247B (en) Removing time delays in signal paths
KR100875429B1 (en) How to compensate for time delays in signal processing
TWI450603B (en) Removing time delays in signal paths
RU2389155C2 (en) Elimination of time delays on signal processing channels

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees