JP3139602B2 - Acoustic signal encoding method and decoding method - Google Patents

Acoustic signal encoding method and decoding method

Info

Publication number
JP3139602B2
JP3139602B2 (application numbers JP07065622A, JP6562295A)
Authority
JP
Japan
Prior art keywords
signal
code
encoding
frequency
band
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
JP07065622A
Other languages
Japanese (ja)
Other versions
JPH08263096A (en)
Inventor
Akio Jin
Takehiro Moriya
Satoshi Miki
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nippon Telegraph and Telephone Corp
Original Assignee
Nippon Telegraph and Telephone Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corp filed Critical Nippon Telegraph and Telephone Corp
Priority to JP07065622A priority Critical patent/JP3139602B2/en
Publication of JPH08263096A publication Critical patent/JPH08263096A/en
Application granted granted Critical
Publication of JP3139602B2 publication Critical patent/JP3139602B2/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Abstract

PURPOSE: To encode speech at a high compression rate and to encode musical tones with high quality by combining a CELP system and a transform coding system. CONSTITUTION: An input signal 11 with sampling frequency f_S = 24 kHz is converted into a low-band signal with f_S = 16 kHz by a converter 22_1 and encoded by a CELP coder 24_1, which outputs a code C_1. The code C_1 is decoded by a decoder 25_1, the decoded signal is converted to f_S = 24 kHz by a converter 26 and subtracted from the input signal 11, and the resulting high-band plus quantization-error signal is encoded by a transform coder 24_2, which outputs a code C_2. Either the code C_1 alone, or both C_1 and C_2, are decoded for use.

Description

DETAILED DESCRIPTION OF THE INVENTION

[0001]
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to an encoding method for hierarchically encoding an acoustic signal, such as a musical tone or speech, by dividing it into bands in the frequency domain, and to a corresponding decoding method.

[0002]
2. Description of the Related Art
Sub-band coding is one method of encoding an acoustic signal by dividing it into frequency bands. In sub-band coding, the input signal is split into a plurality of frequency bands with a QMF (Quadrature Mirror Filter) bank, and each band is encoded independently while appropriate bits are allocated to it.
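For illustration only (not part of the patent text), the following is a minimal Python sketch of the two-band analysis step used in sub-band coding; a plain FIR half-band pair stands in for a true QMF bank, and the filter length is an arbitrary assumption.

```python
import numpy as np
from scipy.signal import firwin, lfilter

def split_two_bands(x, num_taps=63):
    """Split x into a low and a high half-band and decimate each by 2.

    Stand-in for the analysis stage of sub-band coding: a linear-phase
    low-pass prototype h and its spectral mirror g[n] = h[n] * (-1)^n.
    """
    h = firwin(num_taps, 0.5)                 # low-pass prototype, cutoff at fs/4
    g = h * (-1.0) ** np.arange(num_taps)     # mirrored high-pass filter
    low = lfilter(h, [1.0], x)[::2]           # filter, then decimate by 2
    high = lfilter(g, [1.0], x)[::2]
    return low, high

# Each decimated band would then be quantized and encoded independently,
# with a bit allocation chosen per band.
```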

[0003]
At present there are many encoding methods for acoustic signals such as musical tones and speech, chosen according to the purpose of use, the decoding quality, the encoding speed and so on, but a given acoustic signal is usually encoded by only one method rather than by several. One can, however, also conceive of the following layered method. As shown in FIG. 1A, the acoustic signal 11 is divided on the frequency axis into three sub-bands SB_1, SB_2 and SB_3 from the low-frequency side. As shown in FIG. 2, the lowest layer (layer 1), sub-band SB_1, is encoded at a high compression rate with a method of low coding quality, i.e. one whose decoded sound has a narrow frequency band and a large quantization error, for example code-excited linear prediction (CELP). Conversely, the highest layer (layer 3), sub-band SB_3, is encoded at a low compression rate with a method of high coding quality, i.e. one whose decoded sound has a wide frequency band and a small quantization error, for example a transform coding method such as discrete cosine transform coding. The middle layer (layer 2), sub-band SB_2, is encoded with a method intermediate between the lower-layer and upper-layer methods. Depending on the user's request, only layer 1, or layers 1 and 2, or all layers are encoded and transmitted.

[0004]
Alternatively, various musical tone or speech signals hierarchically encoded into three layers as described above may be provided, for example, in a database that users can access. A user receives a desired signal and, depending on the user's decoder, decodes only the layer-1 code to obtain a narrow-band, low-quality reproduced sound with a large quantization error, or decodes the layer-1 and layer-2 codes, or all of the layer-1, -2 and -3 codes, to obtain a wide-band, high-quality reproduced sound with a small quantization error.

[0005]
Or, for example, a wide-band acoustic signal in which speech is dominant may be encoded in two layers; decoding only the lower-layer code cleanly reproduces the mainly speech-like components of the signal, while decoding both the lower-layer and upper-layer codes also reproduces the non-speech components. In these cases a user may receive only the lower-layer code, shortening the occupation time of the transmission path or using a path of small transmission capacity, and decode it in real time; or the user may also receive the upper-layer code over a longer time, store it once, and then decode it afterwards to obtain a high-quality decoded signal.

[0006]
Alternatively, in these cases, after all the lower- and upper-layer codes have been stored once, only the lower-layer code may be decoded in real time by a small, economical decoder with a short delay; when high-quality sound is desired, the upper-layer code may also be decoded, taking more time, by a large decoder with a long delay, and the result reproduced afterwards all at once.

[0007]
An encoding method that offers such selectivity in decoding quality and compression rate is called a scalable hierarchical encoding method. The sub-band coding scheme shown in FIG. 1A can be considered as one such method: encoding method 1 encodes the frequency band of sub-band SB_1, and bands SB_2 and SB_3 are likewise encoded by independent encoding methods 2 and 3. On decoding, as shown in FIG. 1B, when a wide-band decoded sound is not needed, only the SB_1 code is decoded by the decoder of encoding method 1 to obtain a decoded signal 12_1 covering only the SB_1 band; when a wide-band decoded sound is needed, the codes of SB_1, SB_2 and SB_3 are each decoded by the decoders corresponding to encoding methods 1, 2 and 3 to obtain decoded signals 12_1, 12_2 and 12_3, and their composite signal 12 is output.

[0008]
SUMMARY OF THE INVENTION
However, in hierarchical coding by such a sub-band coding method, the quantization error arising in each band (i.e. in each layer) — that is, the error between the encoder input signal and the output of its local decoder, a decoded signal unaffected by the transmission path — remains in each of the bands SB_1, SB_2 and SB_3 as the quantization errors 13_1, 13_2 and 13_3 shown in FIG. 1C. Distortion and noise therefore arise independently in every band of the full-band decoded signal 12. Consequently, even when the entire band is decoded (i.e. decoding up to the highest layer), the large quantization error 13_1 of the lower layer appears unchanged, and high quality cannot be obtained. To obtain a high-quality wide-band decoded signal, the quantization noise can only be reduced by lowering the compression rate in each of the sub-bands SB_1, SB_2 and SB_3. Such a hierarchical coding method therefore cannot realize scalable coding.

[0009]
The fact that the conventional sub-band coding method cannot achieve scalable coding is explained more concretely with reference to FIG. 3. The band of the original acoustic signal 11 is divided in two; the first layer (low-frequency region) is encoded by the CELP method and the second layer (high-frequency region) by a transform coding method. In the first layer, CELP coding with high compression efficiency for speech is used, so the quantization error signal 13_1 of its local decoded signal 12_1 (FIG. 3B) is relatively large, as shown in FIG. 3C. In the second layer, transform coding capable of encoding a wide variety of waveforms is used, so its local decoded signal 12_2 is close to the original signal 11 as shown in FIG. 3B, and the quantization error signal 13_2 is small as shown in FIG. 3C. However, even if the first-layer and second-layer codes are each decoded to obtain a wide-band decoded signal, the low-frequency part 14_1 of the quantization error of that decoded signal is no different from the first-layer quantization error 13_1, as shown in FIG. 3D. In other words, even when decoding up to the second layer, the decoding quality in the low-frequency band remains limited by the coding performance of the CELP coding method.

[0010]
An object of the present invention is to provide a scalable encoding method, and a corresponding decoding method, in which the decoding quality of the upper layers is not limited by the decoding quality of the lower layers, so that a high decoding quality is obtained.

[0011]
According to the invention of claim 1, in an encoding method that divides an acoustic input signal such as a musical tone or speech whose highest frequency is f_n into n sections (n being an integer of 2 or more) at frequencies f_1, f_2, ..., f_(n-1) (f_1 < f_2 < ... < f_(n-1) < f_n) and encodes them, a first band signal with frequencies of f_1 or below is selected from the input signal and encoded by a first encoding method to output a first code; from the codes up to the (i-1)-th (i = 2, 3, ..., n) an (i-1)-th decoded signal with frequencies of f_(i-1) or below is obtained; an i-th band signal with frequencies of f_i or below is selected from the input signal; the (i-1)-th decoded signal is subtracted from the i-th band signal to obtain an i-th difference signal; and the i-th difference signal is encoded by an i-th encoding method to output an i-th code.
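For illustration only, a minimal Python sketch of the layered structure of claim 1 follows. The per-layer coders `encoders[i]`/`decoders[i]` (e.g. CELP for the first layer, transform coding above it) are placeholders assumed to return signals of the same length as their input, and `scipy.signal.resample_poly` stands in for the sample rate converters; none of these names come from the patent.

```python
from scipy.signal import resample_poly

def layered_encode(x, rates, encoders, decoders):
    """Hierarchical encoding in the style of claim 1 (sketch).

    x        : input signal sampled at rates[-1] (= 2*f_n)
    rates    : sampling frequencies 2*f_1 < ... < 2*f_n of the n layers
    encoders : encoders[i] / decoders[i] encode and locally decode layer i+1
    Returns the codes C_1 ... C_n.
    """
    codes, prev = [], None                       # prev: running (i-1)-th decoded signal
    for i, rate in enumerate(rates):
        band = resample_poly(x, rate, rates[-1])   # i-th band signal (frequencies <= f_i)
        if prev is None:
            target = band                          # layer 1 encodes the band itself
        else:
            n = min(len(band), len(prev))
            target = band[:n] - prev[:n]           # layer i encodes the difference signal
        codes.append(encoders[i](target))
        local = decoders[i](codes[-1])             # local decode (assumed same length as target)
        decoded = local if prev is None else prev[:len(local)] + local
        if i + 1 < len(rates):                     # convert the running decoded signal
            prev = resample_poly(decoded, rates[i + 1], rate)   # up to the next layer's rate
    return codes
```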

[0012]
The (i-1)-th decoded signal is obtained, for example, by decoding the (i-1)-th code, adding the result to the (i-2)-th decoded signal, and converting the sum to a signal with sampling frequency 2f_i. According to the encoding method of claim 3, an acoustic input signal such as a musical tone or speech whose highest frequency is f_n is divided at frequencies f_1, f_2, ..., f_(n-1) (f_1 < f_2 < ... < f_(n-1) < f_n) (n being an integer of 2 or more) and each section is encoded as follows: a first band acoustic signal with sampling frequency 2f_1 is obtained from the input signal and encoded by a first encoding method to output a first code; the coding error left by the (i-1)-th encoding is obtained as the (i-1)-th error signal (i = 2, 3, ..., n) and converted into an (i-1)-th converted error signal with sampling frequency 2f_i; an i-th band signal with frequency band f_(i-1) to f_i and sampling frequency 2f_i is obtained from the input signal; the (i-1)-th converted error signal and the i-th band signal are added to obtain an i-th added signal; and the i-th added signal is encoded by an i-th encoding method to output an i-th code.

[0013]
According to the invention of claim 4, in any of the inventions of claims 1 to 3, a code-excited linear predictive coding method is used as the first encoding method and a transform coding method is used as the n-th encoding method. According to the invention of claim 5, in any of the inventions of claims 1 to 3, transform coding methods are used as all of the first through n-th encoding methods. In the invention of claim 6, in any of the inventions of claims 1 to 5, psychoacoustically weighted quantization is performed in the encoding process of the i-th code, using as the weighting reference the spectral envelope of substantially all components of the acoustic input signal at frequencies of f_i and below.

[0014]
According to the decoding method of the invention of claim 7, an input code is separated into first through n-th codes (n being an integer of 2 or more); the first code is decoded to output a first decoded signal with sampling frequency 2f_1; the (i-1)-th decoded signal (i = 2, 3, ..., n) is converted into an (i-1)-th converted decoded signal with sampling frequency 2f_i; the i-th code is decoded to obtain an i-th decoded signal with sampling frequency 2f_i; and the i-th decoded signal and the (i-1)-th converted decoded signal are added to output an i-th added signal.
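A matching sketch of the claim-7 decoder (again illustration only, with placeholder per-layer decoders and `scipy.signal.resample_poly` as the sample rate converter); decoding can be stopped after any number of layers to trade bandwidth and quality against decoder complexity.

```python
from scipy.signal import resample_poly

def layered_decode(codes, rates, decoders, n_layers=None):
    """Hierarchical decoding in the style of claim 7 (sketch).

    codes    : [C_1, ..., C_n] from the layered encoder
    rates    : sampling frequencies 2*f_1 < ... < 2*f_n
    decoders : decoders[i] decodes layer i+1
    n_layers : number of layers to decode (None = all layers)
    """
    n_layers = len(codes) if n_layers is None else n_layers
    out = decoders[0](codes[0])                   # first decoded output at 2*f_1
    for i in range(1, n_layers):
        out = resample_poly(out, rates[i], rates[i - 1])   # (i-1)-th converted decoded signal
        layer = decoders[i](codes[i])                      # i-th decoded signal at 2*f_i
        n = min(len(out), len(layer))
        out = out[:n] + layer[:n]                          # i-th added (decoded) output
    return out
```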

[0015]
DESCRIPTION OF THE EMBODIMENTS
FIG. 4A shows an example of an encoder to which an embodiment of the encoding method of claim 1 is applied. In this example the original sound signal is encoded in two frequency bands, i.e. two-layer encoding is performed. The original sound input signal 11 from the input terminal 21 is a digital signal with a sampling frequency of 24 kHz, i.e. a highest frequency f_2 of 12 kHz. This input signal is converted by a sample rate converter 22_1, serving as first band selecting means, into a signal with a sampling frequency of 16 kHz, from which the first band signal 23 is taken. This sample rate conversion is so-called down-sampling: for example, samples are removed at intervals corresponding to the sampling-frequency ratio and the result is passed through a digital low-pass filter. The first band signal 23 taken from the sample rate converter 22_1, containing frequencies of f_1 = 8 kHz and below, is encoded by a first encoder 24_1 using the first encoding method, in this example CELP (code-excited linear prediction) coding. The first code C_1 resulting from this encoding is output.

[0016]
In this embodiment the first code C_1 is decoded by a local decoder 25_1 to obtain a first decoded signal 12_1 containing frequencies of f_1 and below, and this decoded signal 12_1 is converted by a first sample rate converter 26_1 into a converted decoded signal 27 with a sampling frequency of 24 kHz. The sample rate converter 26_1 performs so-called up-sampling: for example, zero-valued samples are inserted at intervals corresponding to the frequency-conversion ratio and the result is passed through a digital low-pass filter. A difference circuit 28 subtracts this converted decoded signal 27 from the input signal 11, and the difference signal 29 is encoded by a second encoder 24_2 using the second encoding method, in this embodiment transform coding such as a modified discrete cosine transform. The resulting second code C_2 is output. The first code C_1 and the second code C_2 are time-division multiplexed for each coding frame by a multiplexing circuit 31, for example as shown in FIG. 4B, and output as the coded output C. Depending on the user's request, only the first code C_1 may be output.
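For illustration, a minimal sketch of the rational sample-rate conversion performed by converters such as 22_1 and 26_1 (e.g. 24 kHz to 16 kHz and back), combining the zero-insertion and digital low-pass filtering described above; the filter length is an arbitrary assumption, and in practice `scipy.signal.resample_poly(x, up, down)` performs the same polyphase conversion.

```python
import numpy as np
from scipy.signal import firwin, lfilter

def rational_resample(x, up, down, num_taps=121):
    """Change the sampling rate of x by the rational factor up/down
    (24 kHz -> 16 kHz uses up=2, down=3; 16 kHz -> 24 kHz uses up=3, down=2).
    """
    y = np.zeros(len(x) * up)
    y[::up] = x                              # insert up-1 zero samples between samples
    cutoff = 1.0 / max(up, down)             # pass the narrower of the two Nyquist bands
    h = firwin(num_taps, cutoff) * up        # low-pass filter (gain up compensates the zeros)
    y = lfilter(h, [1.0], y)
    return y[::down]                         # keep every down-th sample
```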

[0017]
The frequency spectrum of the original sound input signal 11 with its 24 kHz sampling frequency is shown, for example, in FIG. 5A. The components of this signal 11 at 8 kHz and below are input, as the signal 23 with a 16 kHz sampling frequency (FIG. 5B), to the lower-layer first encoder 24_1 and encoded with high compression efficiency. In the decoded signal 12_1 obtained by decoding the code C_1 with the local decoder 25_1 (FIG. 5B), a considerable quantization error 13_1 arises with respect to the lower-layer input signal 23, as shown in FIG. 5C. The signal 29 from the difference circuit 28, consisting of this error signal 13_1 and the high-band components 33 of the original sound input signal 11 above 8 kHz, is input to the upper-layer second encoder 24_2 and is encoded by a transform coding method capable of encoding input signals of any character with high quality.

[0018]
Thus, in this embodiment, the lower-layer code C_1 does not encode the original sound very faithfully, but the upper layer encodes the lower layer's quantization error as well. As will become clear below, when decoding is carried up to the upper layer, the lower band can therefore also be reproduced with high fidelity. In other words, the lower layer is encoded with high compression efficiency, and yet a high-quality decoded signal is obtained when the upper layer is also decoded.

[0019]
In particular, since the above embodiment uses the CELP method for the lower layer, when the encoding target is speech, decoding only the lower-layer first code C_1 already yields relatively good quality; moreover the computational load is small and real-time processing is easy. If the lower-layer first code C_1 and the upper-layer second code C_2 are both decoded, the decoded signal of the upper-layer transform code C_2 compensates the coding error of the lower-layer CELP code C_1, so that even when the encoding target is a musical tone, a high-quality decoded signal is obtained over a wide band.

[0020]
When encoding, it is common to take human psychoacoustics into account — for example, masking by high-level spectral components — and to apply psychoacoustic weighting so that the quantization error is suppressed perceptually and the encoding is efficient. For example, in the CELP coding method of the encoder 24_1, as shown in FIG. 6, a vector of the period (pitch) specified by a control unit 35 is taken from an adaptive codebook 36 and a specified noise vector is taken from a noise (stochastic) codebook 37; each is given a gain, and they are summed and fed to a linear predictive synthesis filter 38 as an excitation vector. Meanwhile, the input signal from the sample rate converter 22_1 of FIG. 4A is subjected to linear predictive analysis in a linear predictive analysis unit 39 for each coding frame, the linear prediction coefficients are quantized by a quantization unit 41, and the filter coefficients of the synthesis filter 38 are set according to the quantized linear prediction coefficients. An auditory weighting coefficient calculator 43 derives the filter coefficients for psychoacoustic weighting from the spectral envelope given by the linear prediction coefficients and sets them in an auditory weighting filter 42. The synthesized signal from the synthesis filter 38 is subtracted from the input signal from the sample rate converter 22_1, the difference signal is passed through the auditory weighting filter 42, and the control unit 35 selects entries from the adaptive codebook 36 and the noise codebook 37 so that the energy of the filter output is minimized.
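For illustration, a minimal sketch of how the weighting-coefficient calculator 43 and weighting filter 42 of FIG. 6 can be realized in Python: linear prediction coefficients are obtained from a frame by the Levinson-Durbin recursion, and a weighting filter of the common CELP form W(z) = A(z/g1)/A(z/g2) is applied to the error before its energy is measured. The filter form and the constants g1, g2 are ordinary CELP practice assumed here; the patent itself only specifies that the weights are derived from the LPC spectral envelope.

```python
import numpy as np
from scipy.signal import lfilter

def lpc(frame, order=10):
    """LPC coefficients a = [1, a1, ..., ap] via the Levinson-Durbin recursion."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0] + 1e-9
    for i in range(1, order + 1):
        k = -np.dot(a[:i], r[i:0:-1]) / err          # reflection coefficient
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1][:i]
        err *= 1.0 - k * k
    return a

def weighted_error_energy(target, synthesized, a, g1=0.9, g2=0.6):
    """Energy of the error filtered by W(z) = A(z/g1)/A(z/g2); the codebook
    entries and gains minimizing this value would be selected."""
    num = a * g1 ** np.arange(len(a))                # A(z/g1)
    den = a * g2 ** np.arange(len(a))                # A(z/g2)
    weighted = lfilter(num, den, target - synthesized)
    return float(np.sum(weighted ** 2))
```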

[0021]
In the transform coding method of the transform encoder 24_2, for example as shown in FIG. 7, the output of the difference circuit 28 is transformed into frequency-domain coefficients by a discrete cosine transformer 45; the spectral envelope is obtained by linear predictive analysis of these coefficients in a linear predictive analysis unit 46; the output coefficients of the transformer 45 are divided by this spectral envelope and thereby normalized; the normalized coefficients are given auditory weights in an auditory weighting unit 47; and they are then quantized, for example vector quantized, in a quantization unit 48. To obtain the auditory weighting coefficients in this embodiment, the original sound input signal 11 from the input terminal 21 is transformed into the frequency domain by a discrete cosine transformer 49, auditory weighting coefficients are computed by a coefficient calculator 51 from the spectral envelope of those transform coefficients and supplied to the auditory weighting unit 47, and the normalized coefficients are multiplied by the weights for the corresponding components.
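For illustration, a minimal sketch of the normalization-and-weighting idea of FIG. 7. A moving-average magnitude envelope stands in for the linear-prediction envelope of unit 46, a uniform scalar quantizer stands in for the vector quantizer 48, and the exponent applied to the original signal's envelope is an arbitrary assumption; in the real coder the envelope would itself be coded and sent as side information.

```python
import numpy as np
from scipy.fft import dct, idct

def envelope(coeffs, width=16):
    """Crude spectral envelope: moving average of |coeffs| (the patent derives
    it by linear-prediction analysis of the coefficients)."""
    return np.convolve(np.abs(coeffs), np.ones(width) / width, mode="same") + 1e-9

def transform_encode(diff_frame, orig_frame, step=0.05):
    """DCT the difference signal 29, normalize by its envelope, weight with an
    envelope taken from the ORIGINAL input 11 (FIG. 5D), then quantize."""
    c = dct(diff_frame, norm="ortho")                  # frequency-domain coefficients
    env = envelope(c)
    weights = envelope(dct(orig_frame, norm="ortho")) ** 0.5
    indices = np.round(c / env * weights / step).astype(int)
    return indices, env, weights

def transform_decode(indices, env, weights, step=0.05):
    """Invert the quantization, weighting and normalization, then inverse DCT."""
    return idct(indices * step / weights * env, norm="ortho")
```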

[0022]
That is, the upper-layer second encoder 24_2 encodes the signal 29 whose spectrum is shown in FIG. 5C, but the auditory weighting is not based on the spectral envelope of this signal 29; instead, the spectral envelope of the original sound input signal 11 (FIG. 5D) is obtained and the auditory-weighted encoding is based on it. Next, an embodiment of the decoding method of the present invention is described with reference to FIG. 8. This embodiment applies to decoding the code produced by the encoding method shown in FIG. 4. The input code applied to an input terminal 55 is separated by a separation circuit 56 into the first code C_1 and the second code C_2. The first code C_1 is decoded by a first decoder 57_1, in this example by a CELP decoding method, into a first decoded signal 58_1 with a highest signal frequency of f_1 (sampling frequency 16 kHz), which is output as the lower-layer (low-band) decoded output 63_1.

[0023]
The first decoded output 58_1 is converted by a sample rate converter 59 into a converted decoded signal 61_1 with a highest signal frequency of f_2 (sampling frequency 24 kHz). Meanwhile, the second code C_2 from the separation circuit 56 is decoded by a second decoder 57_2, in this example by transform decoding, to obtain a second decoded signal 58_2 with a highest signal frequency of f_2 (sampling frequency 24 kHz). This second decoded signal 58_2 is added to the first converted decoded signal 61_1 by an adder 62_2 and output as the upper-layer (full-band) decoded output 63_2.

[0024]
That is, in the ideal case the decoded signal 12_1 of FIG. 5B is obtained as the lower-layer decoded output 63_1. Ideally, the decoded signal 58_2 of the second decoder 57_2 consists, as shown in FIG. 5E, of a decoded signal 60_1 of the lower-layer (low-band) quantization error signal 13_1 and a decoded signal 64_2 of the high-band signal 33. Therefore, in the decoded output 63_2 from the adder 62_2, the decoded signal 60_1 corresponding to the quantization error 13_1 is added to the low-band decoded signal 58_1, so that the quantization error is greatly reduced; and since the high-band decoded signal 64_2 is of high fidelity, the decoded output 63_2 obtained from the adder 62_2 by decoding up to the upper layer is remarkably close to the original sound input signal 11, and its quantization error signal is extremely small over the entire band, for example as shown in FIG. 5F.

[0025]
Next, as an example of applying the encoding method of this invention to n-layer (n-band) division coding, the case n = 4 is described with reference to FIG. 9, in which parts corresponding to FIG. 4A carry the same reference numerals. In this example the original sound input signal 11 has a highest frequency f_n = f_4 and a sampling frequency 2f_4. It is converted by a first sample rate converter (first band selecting means) 22_1 into a signal 23_1 with sampling frequency 2f_1 (where f_1 < f_2 < f_3 < f_4); that is, the first band signal 23_1 containing frequencies of f_1 and below is selected. The first band signal 23_1 is encoded by a first encoder 24_1 and output as the first code C_1; the first code C_1 is also decoded by a first decoder 25_1 into a signal with sampling frequency 2f_1, and the decoded signal 12_1 is converted by a first sample rate converter 26_1 into a first converted decoded signal with sampling frequency 2f_2. Meanwhile the input signal 11 is converted by a sample rate converter 22_2, serving as second band selecting means, into a signal with sampling frequency 2f_2, from which a second band signal 23_2 containing frequencies of f_2 and below is taken. The first converted decoded signal from the first sample rate converter 26_1 is subtracted from this second band signal 23_2 in a second difference circuit 28_2, and the second difference signal 29_2 is encoded by a second encoder 24_2 to output the second code C_2.

[0026]
The same processing is repeated below; the processing that yields the third code C_3 is described taking i = 3 as an example (i = 2, 3, ..., n, with n = 4 in this example). The (i-1)-th (= second) code C_(i-1) (= C_2) is decoded by the (i-1)-th (= second) decoder 25_2 to obtain an (i-1)-th (= second) decoded signal with sampling frequency 2f_(i-1) (= 2f_2); this (i-1)-th (= second) decoded signal and the (i-2)-th (= first) converted decoded signal from the (i-2)-th (= first) sample rate converter 26_(i-2) (= 26_1) are summed by an adder 60_(i-1) (= 60_2), and the sum signal is converted by the (i-1)-th (= second) sample rate converter 26_(i-1) (= 26_2) into an (i-1)-th (= second) converted decoded signal with sampling frequency 2f_i (= 2f_3), i.e. covering frequencies up to f_i (= f_3). Meanwhile, a sample rate converter 22_i (= 22_3) serving as the i-th (= third) band selecting means takes from the input signal 11 an i-th (= third) band signal 23_i (= 23_3) with frequencies up to f_i (= f_3) and sampling frequency 2f_i (= 2f_3). The converted decoded signal from the (i-1)-th (= second) sample rate converter 26_(i-1) (= 26_2) is subtracted from this i-th (= third) band signal 23_i (= 23_3) in an i-th (= third) difference circuit 28_i (= 28_3), and the i-th (= third) difference signal 29_3 is encoded by an i-th (= third) encoder 24_i (= 24_3) to output the i-th (= third) code C_i (= C_3). The (i-1)-th (= second) decoder 25_(i-1) (= 25_2), the adder 60_(i-1) (= 60_2) and the (i-1)-th (= second) sample rate converter 26_(i-1) (= 26_2) together constitute (i-1)-th (= second) decoding means 40_(i-1) (= 40_2). In the first decoding means 40_1, however, there is no (i-2)-th layer, i.e. no still lower band to handle, so the adder 60_1 is omitted. Also, since the band signal 23_n (= 23_4) of the uppermost layer is a signal of frequencies f_n (= f_4) and below, there is no need to provide a sample rate converter 22_n (= 22_4) as the n-th (= fourth) band selecting means.

[0027]
In this way the invention can be applied to the case where the input signal band is divided into n sections and encoded. The first through n-th (= fourth) codes C_1 to C_n (= C_4) are multiplexed frame by frame in the multiplexing circuit 31 and output as the coded output C. In this case the multiplexing circuit 31 can select and output either the first code alone or the first through i-th codes. If the first through n-th (= fourth) encoders 24_1 to 24_n (= 24_4) are used such that the compression rate decreases as the index i of the encoder 24_i increases, wide-band, high-quality encoding results. As long as this is satisfied, the encoding methods may, for example, all be transform coding.

[0028]
When auditory-weighted encoding is performed in the first through fourth encoders 24_1 to 24_4, the signals of frequencies f_1, f_2 and f_3 and below from the sample rate converters 22_1, 22_2 and 22_3 are supplied to auditory weighting coefficient calculators 72_1, 72_2 and 72_3 respectively, each of which computes auditory weighting coefficients based on the spectral envelope of its signal; the input signal is likewise supplied to an auditory weighting coefficient calculator 72_4, which computes auditory weighting coefficients in the same way. The auditory weighting coefficients computed by the calculators 72_1 to 72_4 are supplied to the first through fourth encoders 24_1 to 24_4, and auditory-weighted encoding is performed as described above.

[0029]
FIG. 10 shows the case n = 4 as another example of applying the encoding method of this invention to n-layer division coding. In this example, too, the original sound input signal 11 has a highest frequency f_n = f_4 and a sampling frequency 2f_4. It is converted by a first sample rate converter (first band selecting means) 22_1 into a signal 23_1 with sampling frequency 2f_1 (where f_1 < f_2 < f_3 < f_4); that is, the first band signal 23_1 containing frequencies of f_1 and below is selected. The first band signal 23_1 is encoded by a first encoder 24_1 and output as the first code C_1; the first code C_1 is also decoded by a first decoder 25_1 into a signal with sampling frequency 2f_1, the difference between the decoded signal 12_1 and the first band signal 23_1 is taken by a first difference circuit 65_1, and this difference signal (first error signal) 13_1 is converted by a first sample rate converter 26_1 into a first converted error signal with sampling frequency 2f_2.

[0030]
Meanwhile, second band selecting means 66_2 takes from the input signal 11 a second band signal 23_2 with frequency band f_1 to f_2 and sampling frequency 2f_2. For example, the input signal 11 is converted by a sample rate converter 22_2 into a signal with sampling frequency 2f_2, and that signal is passed through a high-pass filter 67_2 with cutoff frequency f_1 to obtain the second band signal 23_2. This second band signal 23_2 and the first converted error signal from the first sample rate converter 26_1 are added by a second adder 68_2, and the second addition signal 29_2 is encoded by a second encoder 24_2 to output the second code C_2.

[0031]
The same processing is repeated below; the processing that yields the third code C_3 is described taking i = 3 as an example (i = 2, 3, ..., n, up to 4 in this example). The (i-1)-th (= second) code C_(i-1) (= C_2) is decoded by the (i-1)-th (= second) decoder 25_2 to obtain an (i-1)-th (= second) decoded signal with sampling frequency 2f_(i-1) (= 2f_2); the difference between this (i-1)-th (= second) decoded signal and the (i-1)-th (= second) addition signal 29_(i-1) (= 29_2) from the (i-1)-th (= second) adder 68_(i-1) (= 68_2) is taken by a difference circuit 65_(i-1) (= 65_2), and the resulting (i-1)-th (= second) error signal 13_2 is converted by the (i-1)-th (= second) sample rate converter 26_(i-1) (= 26_2) into an (i-1)-th (= second) converted error signal with sampling frequency 2f_i (= 2f_3). Meanwhile, i-th (= third) band selecting means 66_i (= 66_3) takes from the input signal 11 an i-th (= third) band signal 23_i (= 23_3) with frequency band f_(i-1) to f_i (= f_2 to f_3) and sampling frequency 2f_i (= 2f_3). This i-th (= third) band signal 23_i (= 23_3) and the (i-1)-th (= second) converted error signal are added by an i-th (= third) adder 68_i (= 68_3), and the i-th (= third) addition signal 29_3 is encoded by an i-th (= third) encoder 24_i (= 24_3) to output the i-th (= third) code C_i (= C_3).
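For illustration, a minimal Python sketch of this FIG. 10 (claim 3) structure: each layer above the first encodes its own band-pass slice of the input plus the up-sampled coding error left over from the layer below. The per-layer `encoders`/`decoders` are placeholders assumed to return signals of the input length, and the filter length is an arbitrary assumption.

```python
import numpy as np
from scipy.signal import resample_poly, firwin, lfilter

def layered_encode_fig10(x, rates, encoders, decoders, num_taps=101):
    """Hierarchical encoding in the style of FIG. 10 / claim 3 (sketch)."""
    codes, prev_err, prev_rate = [], None, None
    for i, rate in enumerate(rates):
        band = resample_poly(x, rate, rates[-1])        # input limited to f_i
        if i == 0:
            target = band                               # first band signal 23_1
        else:
            hp = firwin(num_taps, rates[i - 1] / rate,  # keep only the f_{i-1}..f_i band
                        pass_zero=False)
            band = lfilter(hp, [1.0], band)
            err_up = resample_poly(prev_err, rate, prev_rate)   # (i-1)-th converted error
            n = min(len(band), len(err_up))
            target = band[:n] + err_up[:n]              # i-th addition signal 29_i
        codes.append(encoders[i](target))
        prev_err = target - decoders[i](codes[-1])      # coding error fed to the next layer
        prev_rate = rate
    return codes
```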

[0032]
In this way the invention can be applied to the case where the input signal band is divided into n sections and encoded. The n-th (= fourth) band selecting means 66_n (= 66_4), which selects the uppermost band of frequencies f_(n-1) to f_n (= f_3 to f_4), may simply be a high-pass filter 67_n (= 67_4) with cutoff frequency f_(n-1) (= f_3). The first through n-th (= fourth) codes C_1 to C_n (= C_4) are multiplexed frame by frame by the multiplexing circuit 31 and output as the coded output C. In this case the multiplexing circuit 31 can select and output either the first code alone or the first through i-th codes.

[0033]
If the first through n-th (= fourth) encoders 24_1 to 24_n (= 24_4) are used such that the compression rate decreases as the index i of the encoder 24_i increases, wide-band, high-quality encoding results. As long as this is satisfied, the encoding methods may, for example, all be transform coding. When auditory-weighted encoding is performed in the first through fourth encoders 24_1 to 24_4, the input signal is converted by sample rate converters 71_1, 71_2 and 71_3 into signals with sampling frequencies 2f_1, 2f_2 and 2f_3 respectively, so that signals of frequencies f_1, f_2 and f_3 and below are taken from the input signal 11 and supplied to auditory weighting coefficient calculators 72_1, 72_2 and 72_3, each of which computes auditory weighting coefficients based on the spectral envelope of its signal; the input signal is likewise supplied to an auditory weighting coefficient calculator 72_4, which computes auditory weighting coefficients in the same way. The auditory weighting coefficients computed by the calculators 72_1 to 72_4 are supplied to the first through fourth encoders 24_1 to 24_4, and auditory-weighted encoding is performed as described above.

[0034]
As an example of a decoder applying the decoding method of this invention, the case n = 4, i.e. an input code consisting of the first through fourth codes C_1 to C_4, is shown in FIG. 11, with the same reference numerals given to parts corresponding to FIG. 8. The input code C is separated by code separating means 56 into the first through fourth codes C_1 to C_4, which are supplied to first through fourth decoders 57_1 to 57_4 respectively. The first decoded signal 58_1 of the first decoder 57_1 is output as the first decoded output 63_1 and is also converted by a sample rate converter 59_1 into a first converted decoded signal 61_1 with sampling frequency 2f_2; this first converted decoded signal 61_1 is added to the second decoded signal 58_2 from the second decoder 57_2 by a second adder 62_2, output as the second decoded output 63_2, and converted by a second sample rate converter 59_2 into a converted decoded signal with sampling frequency 2f_3. In general, the (i-1)-th (= second) decoded output 63_(i-1) (= 63_2) from the (i-1)-th adder 62_(i-1) (= 62_2) (i = 2, 3, ..., n; e.g. i = 3) is converted by the (i-1)-th (= second) sample rate converter 59_(i-1) (= 59_2) into an (i-1)-th (= second) converted decoded signal 61_(i-1) (= 61_2) with sampling frequency 2f_i (= 2f_3), and this (i-1)-th (= second) converted decoded signal 61_(i-1) (= 61_2) and the i-th (= third) decoded signal 58_i (= 58_3) from the i-th (= third) decoder 57_i (= 57_3) are added by an i-th (= third) adder 62_i (= 62_3) to obtain the i-th (= third) decoded output 63_i (= 63_3), which is output.

[0035]
EFFECTS OF THE INVENTION
As described above, according to the present invention, the quantization error of a lower layer is encoded in the upper layer of the hierarchical coding method, so that even when the layers are built from encoding methods with different compression characteristics, such as a CELP coding method and a transform coding method, the coding quality of the decoded signal up to the upper layer is not degraded. Furthermore, by repeating the operation of encoding the quantization error of a lower layer in the upper layer, the quantization error in a multi-layer configuration can be reduced according to the number of layers. With such an encoding method, the perceptual decoding quality is optimal at whichever layer decoding stops, and scalable hierarchical coding can be realized.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 shows an example of the original sound (A), the encoded and reproduced sound (B), and the quantization error (C) when the sub-band coding method is realized by dividing the signal into three frequency bands.

FIG. 2 is a diagram for explaining the characteristics of a hierarchical coding method having a scalable hierarchical structure.

FIG. 3 shows the original sound, the decoded signal and the quantization error when hierarchical coding is realized by the sub-band coding method.

FIG. 4A is a block diagram showing an example of an encoder in which the encoding method of the present invention is applied to two-layer coding; FIG. 4B shows an example of the multiplexed code.

FIGS. 5A to 5D show examples of the original sound, the decoded signal, the upper-layer coding input and the upper-layer auditory-weighting reference in the encoding operation of FIG. 4A; FIGS. 5E and 5F show examples of the upper-layer decoded signal and of the quantization error when decoding up to the upper layer.

FIG. 6 is a block diagram schematically showing a CELP encoder.

FIG. 7 is a block diagram schematically showing a transform encoder.

FIG. 8 is a block diagram showing an example of a decoder in which the decoding method of the present invention is applied to decoding a two-layer code.

FIG. 9 is a block diagram showing an example of an encoder realizing the encoding method of the present invention as a four-layer encoding method.

FIG. 10 is a block diagram showing another example of an encoder realizing the four-layer encoding method according to the present invention.

FIG. 11 is a block diagram showing an example of a decoder realizing the decoding method of the present invention for the four-layer encoding method.

Continuation of the front page
(56) References: JP-A-8-46517 (JP, A); JP-A-63-201700 (JP, A); JP-A-4-104617 (JP, A); JP-A-6-197084 (JP, A); JP-A-1-233496 (JP, A)
(58) Fields searched (Int. Cl.7, DB name): G10L 19/02

Claims (7)

(57) [Claims]
1. An acoustic signal encoding method for encoding an acoustic input signal, such as a musical tone or speech, whose highest frequency is f_n, by dividing it into n sections (n being an integer of 2 or more) at frequencies f_1, f_2, ..., f_(n-1) (f_1 < f_2 < ... < f_(n-1) < f_n), the method comprising: a first band selecting step of selecting from the input signal a first band signal with frequencies of f_1 or below; a first encoding step of encoding the first band signal by a first encoding method and outputting a first code; an (i-1)-th decoding step of obtaining, from the codes up to the (i-1)-th (i = 2, 3, ..., n), an (i-1)-th decoded signal with frequencies of f_(i-1) or below; an i-th selecting step of selecting from the input signal an i-th band signal with frequencies of f_i or below; an i-th difference step of subtracting the (i-1)-th decoded signal from the i-th band signal to obtain an i-th difference signal; and an i-th encoding step of encoding the i-th difference signal by an i-th encoding method and outputting an i-th code.
2. The acoustic signal encoding method according to claim 1, wherein the (i-1)-th decoding step comprises: a step of decoding the (i-1)-th code; a step of adding the decoded signal to the (i-2)-th decoded signal; and a step of converting the summed signal into a signal with sampling frequency 2f_i to obtain the (i-1)-th decoded signal.
3. An acoustic signal encoding method for encoding an acoustic input signal, such as a musical tone or speech, whose highest frequency is f_n, by dividing it at frequencies f_1, f_2, ..., f_(n-1) (f_1 < f_2 < ... < f_(n-1) < f_n) (n being an integer of 2 or more) and encoding each section, the method comprising: a first band selecting step of obtaining from the input signal a first band signal with sampling frequency 2f_1; a first encoding step of encoding the first band signal by a first encoding method and outputting a first code; an (i-1)-th error extracting step of obtaining an (i-1)-th error signal as the coding error of the (i-1)-th encoding step (i = 2, 3, ..., n); an (i-1)-th converting step of converting the (i-1)-th error signal into an (i-1)-th converted error signal with sampling frequency 2f_i; an i-th band selecting step of obtaining from the input acoustic signal an i-th band signal with frequency band f_(i-1) to f_i and sampling frequency 2f_i; an i-th adding step of adding the (i-1)-th converted error signal and the i-th band signal to obtain an i-th added signal; and an i-th encoding step of encoding the i-th added signal by an i-th encoding method and outputting an i-th code.
4. The acoustic signal encoding method according to any one of claims 1 to 3, wherein the first encoding method is a code-excited linear predictive coding method and the n-th encoding method is a transform coding method.
5. The acoustic signal encoding method according to any one of claims 1 to 3, wherein each of the first through n-th encoding methods is a transform coding method.
6. The acoustic signal encoding method according to any one of claims 1 to 5, wherein psychoacoustically weighted quantization is performed in the i-th encoding step, using as the weighting reference the spectral envelope of substantially all components of the acoustic input signal at frequencies of f_i and below.
7. An acoustic signal decoding method comprising: a separating step of separating an input code into first through n-th codes (n being an integer of 2 or more); a first decoding step of decoding the first code and outputting a first decoded signal with sampling frequency 2f_1 as a first decoded output; an (i-1)-th converting step of converting the (i-1)-th decoded output (i = 2, 3, ..., n) into an (i-1)-th converted decoded output with sampling frequency 2f_i; an i-th decoding step of decoding the i-th code to obtain an i-th decoded signal with sampling frequency 2f_i; and an i-th adding step of adding the i-th decoded signal and the (i-1)-th converted decoded output and outputting the result as an i-th decoded output.
JP07065622A 1995-03-24 1995-03-24 Acoustic signal encoding method and decoding method Expired - Lifetime JP3139602B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP07065622A JP3139602B2 (en) 1995-03-24 1995-03-24 Acoustic signal encoding method and decoding method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP07065622A JP3139602B2 (en) 1995-03-24 1995-03-24 Acoustic signal encoding method and decoding method

Publications (2)

Publication Number Publication Date
JPH08263096A JPH08263096A (en) 1996-10-11
JP3139602B2 2001-03-05

Family

ID=13292314

Family Applications (1)

Application Number Title Priority Date Filing Date
JP07065622A Expired - Lifetime JP3139602B2 (en) 1995-03-24 1995-03-24 Acoustic signal encoding method and decoding method

Country Status (1)

Country Link
JP (1) JP3139602B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009513992A (en) * 2003-06-25 2009-04-02 ドルビー スウェーデン アクチボラゲット Apparatus and method for encoding audio signal and apparatus and method for decoding encoded audio signal

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3134817B2 (en) 1997-07-11 2001-02-13 日本電気株式会社 Audio encoding / decoding device
DE19743662A1 (en) * 1997-10-02 1999-04-08 Bosch Gmbh Robert Bit rate scalable audio data stream generation method
JP3541680B2 (en) 1998-06-15 2004-07-14 日本電気株式会社 Audio music signal encoding device and decoding device
US6549147B1 (en) 1999-05-21 2003-04-15 Nippon Telegraph And Telephone Corporation Methods, apparatuses and recorded medium for reversible encoding and decoding
US6446037B1 (en) * 1999-08-09 2002-09-03 Dolby Laboratories Licensing Corporation Scalable coding method for high quality audio
SE0202159D0 (en) 2001-07-10 2002-07-09 Coding Technologies Sweden Ab Efficientand scalable parametric stereo coding for low bitrate applications
US8605911B2 (en) 2001-07-10 2013-12-10 Dolby International Ab Efficient and scalable parametric stereo coding for low bitrate audio coding applications
ES2237706T3 (en) 2001-11-29 2005-08-01 Coding Technologies Ab RECONSTRUCTION OF HIGH FREQUENCY COMPONENTS.
JP4290917B2 (en) 2002-02-08 2009-07-08 株式会社エヌ・ティ・ティ・ドコモ Decoding device, encoding device, decoding method, and encoding method
EP1484841B1 (en) 2002-03-08 2018-12-26 Nippon Telegraph And Telephone Corporation DIGITAL SIGNAL ENCODING METHOD, DECODING METHOD, ENCODING DEVICE, DECODING DEVICE and DIGITAL SIGNAL DECODING PROGRAM
CN100346392C (en) 2002-04-26 2007-10-31 松下电器产业株式会社 Device and method for encoding, device and method for decoding
JP3881943B2 (en) * 2002-09-06 2007-02-14 松下電器産業株式会社 Acoustic encoding apparatus and acoustic encoding method
SE0202770D0 (en) 2002-09-18 2002-09-18 Coding Technologies Sweden Ab Method of reduction of aliasing is introduced by spectral envelope adjustment in real-valued filterbanks
KR100513729B1 (en) 2003-07-03 2005-09-08 삼성전자주식회사 Speech compression and decompression apparatus having scalable bandwidth and method thereof
US7844451B2 (en) 2003-09-16 2010-11-30 Panasonic Corporation Spectrum coding/decoding apparatus and method for reducing distortion of two band spectrums
JP4679049B2 (en) 2003-09-30 2011-04-27 パナソニック株式会社 Scalable decoding device
US7949518B2 (en) 2004-04-28 2011-05-24 Panasonic Corporation Hierarchy encoding apparatus and hierarchy encoding method
WO2006030865A1 (en) * 2004-09-17 2006-03-23 Matsushita Electric Industrial Co., Ltd. Scalable encoding apparatus, scalable decoding apparatus, scalable encoding method, scalable decoding method, communication terminal apparatus, and base station apparatus
KR20070061818A (en) * 2004-09-17 2007-06-14 마츠시타 덴끼 산교 가부시키가이샤 Audio encoding apparatus, audio decoding apparatus, communication apparatus and audio encoding method
WO2006057626A1 (en) * 2004-11-29 2006-06-01 National University Of Singapore Perception-aware low-power audio decoder for portable devices
WO2006096099A1 (en) * 2005-03-09 2006-09-14 Telefonaktiebolaget Lm Ericsson (Publ) Low-complexity code excited linear prediction encoding
KR100818268B1 (en) * 2005-04-14 2008-04-02 삼성전자주식회사 Apparatus and method for audio encoding/decoding with scalability
FR2888699A1 (en) * 2005-07-13 2007-01-19 France Telecom HIERACHIC ENCODING / DECODING DEVICE
CN101385079B (en) * 2006-02-14 2012-08-29 法国电信公司 Device for perceptual weighting in audio encoding/decoding
EP2017830B9 (en) * 2006-05-10 2011-02-23 Panasonic Corporation Encoding device and encoding method
JP5403949B2 (en) * 2007-03-02 2014-01-29 パナソニック株式会社 Encoding apparatus and encoding method
WO2009093466A1 (en) * 2008-01-25 2009-07-30 Panasonic Corporation Encoding device, decoding device, and method thereof
US20120041761A1 (en) 2009-03-13 2012-02-16 Panasonic Corporation Voice decoding apparatus and voice decoding method
JPWO2010103854A1 (en) 2009-03-13 2012-09-13 パナソニック株式会社 Speech coding apparatus, speech decoding apparatus, speech coding method, and speech decoding method
CA2759914A1 (en) 2009-05-29 2010-12-02 Nippon Telegraph And Telephone Corporation Encoding device, decoding device, encoding method, decoding method and program therefor
JP5031006B2 (en) * 2009-09-04 2012-09-19 パナソニック株式会社 Scalable decoding apparatus and scalable decoding method
CN109215670B (en) * 2018-09-21 2021-01-29 西安蜂语信息科技有限公司 Audio data transmission method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
JPH08263096A (en) 1996-10-11

Similar Documents

Publication Publication Date Title
JP3139602B2 (en) Acoustic signal encoding method and decoding method
JP3881943B2 (en) Acoustic encoding apparatus and acoustic encoding method
JP4934427B2 (en) Speech signal decoding apparatus and speech signal encoding apparatus
JP4871894B2 (en) Encoding device, decoding device, encoding method, and decoding method
JP3391686B2 (en) Method and apparatus for decoding an encoded audio signal
JP3566652B2 (en) Auditory weighting apparatus and method for efficient coding of wideband signals
KR101303145B1 (en) A system for coding a hierarchical audio signal, a method for coding an audio signal, computer-readable medium and a hierarchical audio decoder
JP4662673B2 (en) Gain smoothing in wideband speech and audio signal decoders.
JP4081447B2 (en) Apparatus and method for encoding time-discrete audio signal and apparatus and method for decoding encoded audio data
JP4958780B2 (en) Encoding device, decoding device and methods thereof
KR19990077753A (en) Audio signal coding apparatus, audio signal decoding apparatus, and audio signal coding and decoding apparatus
JP3344962B2 (en) Audio signal encoding device and audio signal decoding device
US8036390B2 (en) Scalable encoding device and scalable encoding method
JP5236040B2 (en) Encoding device, decoding device, encoding method, and decoding method
US9230551B2 (en) Audio encoder or decoder apparatus
US20100250260A1 (en) Encoder
JP3186007B2 (en) Transform coding method, decoding method
JP2002330075A (en) Subband adpcm encoding/decoding method, subband adpcm encoder/decoder and wireless microphone transmitting/ receiving system
Jayant et al. Coding of wideband speech
JP2004302259A (en) Hierarchical encoding method and hierarchical decoding method for sound signal
JPH09127985A (en) Signal coding method and device therefor
JP4373693B2 (en) Hierarchical encoding method and hierarchical decoding method for acoustic signals
JPH09127987A (en) Signal coding method and device therefor
JPH09127998A (en) Signal quantizing method and signal coding device
JP3504485B2 (en) Tone encoding device, tone decoding device, tone encoding / decoding device, and program storage medium

Legal Events

Date Code Title Description
FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20071215

Year of fee payment: 7

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20081215

Year of fee payment: 8

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20091215

Year of fee payment: 9

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20101215

Year of fee payment: 10

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20111215

Year of fee payment: 11

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20121215

Year of fee payment: 12

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20131215

Year of fee payment: 13

S531 Written request for registration of change of domicile

Free format text: JAPANESE INTERMEDIATE CODE: R313531

R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350

EXPY Cancellation because of completion of term