JPH08263096A - Acoustic signal encoding method and decoding method - Google Patents

Acoustic signal encoding method and decoding method

Info

Publication number
JPH08263096A
Authority
JP
Japan
Prior art keywords
signal
code
decoded
band
frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP7065622A
Other languages
Japanese (ja)
Other versions
JP3139602B2 (en)
Inventor
Akio Jin
明夫 神
Takehiro Moriya
健弘 守谷
Satoshi Miki
聡 三樹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nippon Telegraph and Telephone Corp
Original Assignee
Nippon Telegraph and Telephone Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corp
Priority to JP07065622A
Publication of JPH08263096A
Application granted
Publication of JP3139602B2
Anticipated expiration
Legal status: Expired - Lifetime


Abstract

PURPOSE: To encode speech at a high compression rate and to encode musical tones with high quality by combining a CELP scheme with a transform coding scheme. CONSTITUTION: An input signal 11 with sampling frequency fs = 24 kHz is converted into a low-band signal with fs = 16 kHz by a converter 22₁ and encoded by a CELP coder 24₁, which outputs a code C₁. The code C₁ is decoded by a decoder 25₁, the decoded signal is converted back to fs = 24 kHz by a converter 26 and subtracted from the input signal 11, and the resulting high-band and quantization-error signal is encoded by a transform coder 24₂, which outputs a code C₂. Either the code C₁ alone, or both C₁ and C₂, are decoded for use.

Description

[Detailed Description of the Invention]

[0001] [Field of Industrial Application] The present invention relates to a coding method for hierarchically coding an acoustic signal such as music or speech by dividing it into bands in the frequency domain, and to a corresponding decoding method.

[0002] [Prior Art] One method of coding an acoustic signal by dividing it into frequency bands is subband coding. In subband coding, the input signal is split into a plurality of frequency bands with a QMF (Quadrature Mirror Filter) bank, and each band is encoded independently while an appropriate bit allocation is made to each band.

[0003] At present there are many coding methods for acoustic signals such as music and speech, chosen according to the intended use, the decoding quality, the coding speed and so on, but it is usual to encode a given acoustic signal with only one coding method rather than to produce several encodings of it. One can, however, also conceive of the following coding scheme. As shown in Fig. 1A, the acoustic signal 11 is divided on the frequency axis into three subbands SB₁, SB₂, SB₃ from the low-frequency side and organized into layers as shown in Fig. 2. The lowest layer (layer 1), subband SB₁, is encoded at a high compression rate with a method of low coding quality, that is, one whose decoded sound has a narrow frequency band and a large quantization error, for example code-excited linear prediction (CELP). Conversely, the highest layer (layer 3), subband SB₃, is encoded at a low compression rate with a method of high coding quality, that is, one whose decoded sound has a wide frequency band and a small quantization error, for example a transform coding method such as discrete cosine transform coding. The middle layer (layer 2), subband SB₂, is encoded with a method intermediate between those of the lower and upper layers. Depending on the user's request, only layer 1 is encoded and transmitted, or layers 1 and 2, or all of the layers.

[0004] Alternatively, various music or speech signals hierarchically coded into three layers as described above may be provided, for example, as a database. A user accesses the database, receives the desired music signal and, depending on the user's decoder, decodes only the layer-1 code to obtain a narrow-band, low-quality reproduced sound with a large quantization error, or decodes the codes of layers 1 and 2, or of all of layers 1, 2 and 3, to obtain a wide-band, high-quality reproduced sound with a small quantization error.

[0005] Or, for example, a wide-band acoustic signal in which speech is dominant may be coded in two layers: decoding only the lower-layer code cleanly reproduces the mainly speech-like components of the signal, while decoding both the lower-layer and upper-layer codes also reproduces the non-speech-like components. In these cases one may receive only the lower-layer code, shortening the time the transmission channel is occupied or using a channel of small capacity, and decode in real time; or one may take a longer time to receive the upper-layer code as well, store it once, and then decode it again for reproduction, thereby obtaining a high-quality decoded signal.

[0006] Alternatively, in these cases, after all the lower-layer and upper-layer codes have been stored, the lower-layer code alone can be decoded in real time by a small, economical decoder with a short delay, while when high-quality sound is desired the upper-layer code can be included and decoded, taking more time, by a large decoder with a long delay, the result then being reproduced all at once.

[0007] A coding method that, as described above, offers a choice of decoding quality and coding compression rate is called a scalable hierarchical coding method. The subband coding method shown in Fig. 1A can be considered as such a scalable hierarchical coding method: the frequency band of subband SB₁ is encoded by coding method 1, and the bands SB₂ and SB₃ are likewise encoded by independent coding methods 2 and 3. As shown in Fig. 1B, at decoding time, when a wide-band decoded sound is not required, only the code of subband SB₁ is decoded by the decoder of coding method 1 to obtain a decoded signal 12₁ covering only the band of subband SB₁; when a wide-band decoded sound is required, the codes of subbands SB₁, SB₂, SB₃ are decoded by the decoders corresponding to coding methods 1, 2 and 3 to obtain decoded signals 12₁, 12₂, 12₃, and their combined signal 12 is output.

[0008] [Problems to Be Solved by the Invention] However, in hierarchical coding based on such a subband coding method, the quantization error arising in each band (that is, in each layer), namely the error between the encoder input signal and the output of its local decoder (the decoded signal unaffected by the transmission channel and the like), is preserved in each band SB₁, SB₂, SB₃ as quantization errors 13₁, 13₂, 13₃ respectively, as shown in Fig. 1C, so that the decoded signal 12 over the full frequency band contains distortion and noise generated independently in each band. Consequently, even when the full band is decoded (that is, decoding up to the highest layer), the large quantization error 13₁ of the lower layer appears unchanged, and high quality cannot be obtained. To obtain a high-quality wide-band decoded signal, the quantization noise cannot be reduced unless the coding compression rate in each subband SB₁, SB₂, SB₃ is made small. Such a hierarchical coding method therefore cannot realize scalable coding.

[0009] That scalable coding cannot be achieved with the conventional subband coding method is explained more concretely with reference to Fig. 3. The band of the original acoustic signal 11 is divided in two; the first layer (low-frequency region) is encoded by the CELP method and the second layer (high-frequency region) by a transform coding method. In the first layer, CELP coding, which compresses speech efficiently, is performed, so the quantization error signal 13₁ of its local decoded signal 12₁ (Fig. 3B) is relatively large, as shown in Fig. 3C. In the second layer, transform coding capable of encoding a wide variety of waveforms is performed, so its local decoded signal 12₂ is close to the original signal 11 as shown in Fig. 3B and the quantization error signal 13₂ is small as shown in Fig. 3C. However, even if the first-layer code and the second-layer code are each decoded to obtain a wide-band decoded signal, the low-frequency part 14₁ of the quantization error of that decoded signal, shown in Fig. 3D, is no smaller than the quantization error 13₁ of the first layer. That is, in the low-frequency band the decoding quality obtained up to the second layer still depends on the coding performance of the CELP coding method. Hence, to obtain high coding quality with hierarchical coding based on the subband coding method, every layer must be encoded with a high-quality coding method having a small compression rate or a large amount of computation.

[0010] An object of the present invention is to provide a scalable coding method, and a corresponding decoding method, in which the lower layer is encoded at a high compression rate with low decoding quality and yet the decoded signal obtained up to the upper layers is of high quality, unaffected by the low decoding quality of the lower layer.

[0011] [Means for Solving the Problems] According to the invention of claim 1, in a coding method in which an acoustic input signal of music, speech or the like, whose highest frequency is fₙ, is divided at frequencies f₁, f₂, ..., fₙ₋₁ (f₁ < f₂ < ... < fₙ₋₁ < fₙ) into n segments (n being an integer of 2 or more) and encoded: a first band signal with frequencies not exceeding f₁ is selected from the input signal and encoded by a first coding method to output a first code; from the codes up to the (i−1)-th code (i = 2, 3, ..., n), an (i−1)-th decoded signal with frequencies not exceeding fᵢ₋₁ is obtained; an i-th band signal with frequencies not exceeding fᵢ is selected from the input signal; the (i−1)-th decoded signal is subtracted from the i-th band signal to obtain an i-th difference signal; and the i-th difference signal is encoded by an i-th coding method to output an i-th code.

[0012] The (i−1)-th decoded signal is obtained, for example, by adding the signal obtained by decoding the (i−1)-th code to the (i−2)-th decoded signal and converting the sum into a signal with sampling frequency 2fᵢ. According to the coding method of the invention of claim 3, in a coding method in which an acoustic input signal of music, speech or the like, whose highest frequency is fₙ, is divided at frequencies f₁, f₂, ..., fₙ₋₁ (f₁ < f₂ < ... < fₙ₋₁ < fₙ) (n being an integer of 2 or more) and each segment is encoded: a first band acoustic signal with sampling frequency 2f₁ is obtained from the acoustic input signal and encoded by a first coding method to output a first code; the coding error of the (i−1)-th code is obtained as an (i−1)-th error signal (i = 2, 3, ..., n); the (i−1)-th error signal is converted into an (i−1)-th converted error signal with sampling frequency 2fᵢ; an i-th band signal whose frequency band is fᵢ₋₁ to fᵢ and whose sampling frequency is 2fᵢ is obtained from the acoustic input signal; the (i−1)-th converted error signal and the i-th band signal are added to obtain an i-th sum signal; and the i-th sum signal is encoded by an i-th coding method to output an i-th code.

[0013] According to the invention of claim 4, in any one of the inventions of claims 1 to 3, the first coding method is a code-excited linear predictive coding method and the n-th coding method is a transform coding method. In the invention of claim 5, in any one of the inventions of claims 1 to 4, psychoacoustically weighted quantization is performed in the coding process of the i-th code, using as the weighting reference the spectral envelope of substantially the whole range of components of the acoustic input signal at frequencies not exceeding fᵢ.

[0014] According to the decoding method of the invention of claim 6, an input code is separated into first to n-th codes (n being an integer of 2 or more); the first code is decoded and a first decoded signal with sampling frequency 2f₁ is output; the (i−1)-th decoded signal (i = 2, 3, ..., n) is converted into an (i−1)-th converted decoded signal with sampling frequency 2fᵢ; the i-th code is decoded to obtain an i-th decoded signal with sampling frequency 2fᵢ; and the i-th decoded signal and the (i−1)-th converted decoded signal are added and the i-th sum signal is output.

[0015] [Embodiments] Fig. 4A shows an example of an encoder to which an embodiment of the coding method of claim 1 is applied. In this example the original signal is divided into two frequency bands and encoded, that is, coded in two layers. The original input signal 11 from the input terminal 21 is a digital signal with a sampling frequency of 24 kHz, i.e. a highest frequency f₂ of 12 kHz. This input signal is converted by a sample-rate converter 22₁, serving as first band selecting means, into a signal with a sampling frequency of 16 kHz, from which the first band signal 23 is taken. This sample-rate conversion is so-called downsampling; for example, samples are removed at intervals determined by the ratio of the sampling frequencies and the result is passed through a digital low-pass filter. The first band signal 23 taken from the sample-rate converter 22₁, containing frequencies not exceeding f₁ = 8 kHz, is encoded by a first encoder 24₁ using a first coding method; in this example the first encoder 24₁ encodes it by the CELP (code-excited linear prediction) coding method. The first code C₁ resulting from this encoding is output.
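As a minimal sketch of this 24 kHz to 16 kHz band selection (assuming scipy is available; the patent itself does not prescribe a particular resampler), a polyphase resampler performs the low-pass filtering and the 2/3 rate change in one step:

```python
from scipy.signal import resample_poly

def select_first_band(x_24k):
    # Downsample 24 kHz -> 16 kHz (ratio 2/3); the built-in anti-aliasing
    # filter keeps only components below about 8 kHz, i.e. f1.
    return resample_poly(x_24k, up=2, down=3)
```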

[0016] In this embodiment the first code C₁ is decoded by a local decoder 25₁ to obtain a first decoded signal 12₁ with frequencies not exceeding f₁, and that decoded signal 12₁ is converted by a first sample-rate converter 26₁ into a converted decoded signal 27 with a sampling frequency of 24 kHz. This sample-rate converter 26₁ performs so-called upsampling; for example, zero-valued samples are inserted at intervals determined by the ratio of the sampling frequencies and the result is passed through a digital low-pass filter. In a difference circuit 28 this converted decoded signal 27 is subtracted from the input signal 11, and the difference signal 29 is encoded by a second encoder 24₂ using a second coding method; in this embodiment the second encoder 24₂ encodes it by transform coding, such as the modified discrete cosine transform. The second code C₂ resulting from this encoding is output. The first code C₁ and the second code C₂ are multiplexed by a multiplexing circuit 31, for example time-divisionally for each coding frame as shown in Fig. 4B, and output as the coded code C. Depending on the user's request, only the first code C₁ may be output.
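The wiring of Fig. 4A can be summarized by the following sketch; celp_encode, celp_decode and transform_encode are hypothetical stand-ins for the coder 24₁, the local decoder 25₁ and the transform coder 24₂, and the frame-wise multiplexing of C₁ and C₂ by the circuit 31 is left out.

```python
from scipy.signal import resample_poly

def two_layer_encode(x24, celp_encode, celp_decode, transform_encode):
    x16 = resample_poly(x24, 2, 3)       # converter 22_1: first band signal 23 (16 kHz)
    c1 = celp_encode(x16)                # first code C1 (lower layer)
    local = celp_decode(c1)              # local decoded signal 12_1
    up24 = resample_poly(local, 3, 2)    # converter 26_1: converted decoded signal 27 (24 kHz)
    n = min(len(x24), len(up24))
    diff = x24[:n] - up24[:n]            # difference signal 29: CELP error + high band
    c2 = transform_encode(diff)          # second code C2 (upper layer)
    return c1, c2
```

A decoder that uses only C₁ reproduces the narrow-band signal; adding the decoded C₂ restores both the missing high band and the CELP quantization error, as described below.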

[0017] The frequency spectrum of the original input signal 11 with a sampling frequency of 24 kHz is shown, for example, in Fig. 5A. The components of this signal 11 below 8 kHz are fed, as the signal 23 with a sampling frequency of 16 kHz (Fig. 5B), to the lower-layer first encoder 24₁ and encoded with high compression efficiency. The decoded signal 12₁ obtained by decoding its code C₁ in the local decoder 25₁, shown in Fig. 5B, exhibits a not insignificant quantization error 13₁ with respect to the lower-layer input signal 23, as shown in Fig. 5C. The signal 29 from the difference circuit 28, consisting of this error signal 13₁ and the high-band components 33 of the original input signal 11 above 8 kHz, is fed to the upper-layer second encoder 24₂ and encoded by a transform coding method capable of encoding input signals of any character with high quality.

[0018] Thus, in this embodiment the lower-layer code C₁ does not encode the original sound very faithfully, but since the upper layer encodes the quantization error of the lower layer as well, the lower band too can be decoded and reproduced with high fidelity when decoding is carried out up to the upper layer, as will become clear later. In other words, the lower layer is encoded with high compression efficiency, and yet a high-quality decoded signal is obtained when the upper layer is also decoded.

[0019] In particular, since the above embodiment uses the CELP method for the lower-layer coding, relatively good quality is obtained when the material is speech even if only the lower-layer first code C₁ is decoded, and the amount of computation is small, so real-time processing is easy. When both the first and second codes C₁ and C₂ are decoded, a high-quality decoded signal is obtained over a wide band even if the material is music, owing to the decoding of the upper-layer transform code and the compensation of the coding error of the lower-layer CELP code.

[0020] When coding, it is common practice to perform psychoacoustically weighted coding that takes human auditory perception into account, for example the masking caused by high-level spectral components, so that the quantization error is suppressed perceptually and the coding is efficient. For example, in the CELP coding method of the encoder 24₁, as shown in Fig. 6, a vector of the period (pitch) specified by a control section 35 is taken from an adaptive codebook 36 and a noise vector is taken from the specified entry of a noise codebook 37; gains are applied to each, and they are combined and fed to a linear prediction synthesis filter 38 as an excitation vector. Meanwhile, the input signal from the sample-rate converter 22₁ of Fig. 4A is subjected to linear prediction analysis in a linear prediction analysis section 39 for each coding frame, the linear prediction coefficients are quantized in a quantization section 41, and the filter coefficients of the synthesis filter 38 are set according to the quantized linear prediction coefficients. An auditory weighting coefficient computation section 43 derives filter coefficients for psychoacoustic weighting from the spectral envelope obtained from the linear prediction coefficients and sets them in an auditory weighting filter 42. The synthesized signal from the synthesis filter 38 is subtracted from the input signal from the sample-rate converter 22₁, the difference is passed through the auditory weighting filter 42, and the control section 35 makes its selections from the adaptive codebook 36 and the noise codebook 37 so that the energy of the filter output is minimized.
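The auditory weighting filter 42 is derived from the LPC spectral envelope; one common construction, shown below as an assumption since the patent does not fix the exact filter form, is W(z) = A(z/γ₁)/A(z/γ₂) built from bandwidth-expanded copies of the prediction polynomial A(z), applied to the error signal inside the analysis-by-synthesis loop.

```python
import numpy as np
from scipy.signal import lfilter
from scipy.linalg import solve_toeplitz

def lpc(frame, order=10):
    # Autocorrelation-method LPC; returns A(z) = 1 - sum(a_k z^-k).
    r = np.correlate(frame, frame, mode='full')[len(frame) - 1:len(frame) + order]
    r[0] += 1e-9                              # guard against a silent frame
    a = solve_toeplitz((r[:-1], r[:-1]), r[1:])
    return np.concatenate(([1.0], -a))

def perceptual_weighting(err, a, g1=0.9, g2=0.6):
    # W(z) = A(z/g1) / A(z/g2): de-emphasizes error near spectral peaks,
    # where masking makes it less audible (the values of g1, g2 are assumptions).
    num = a * g1 ** np.arange(len(a))
    den = a * g2 ** np.arange(len(a))
    return lfilter(num, den, err)
```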

[0021] In the transform coding method of the transform encoder 24₂, for example as shown in Fig. 7, the output of the difference circuit 28 is transformed by a discrete cosine transformer 45 into coefficients in the frequency domain; the spectral envelope is obtained by linear prediction analysis in a linear prediction analysis section 46; the output coefficients of the transformer 45 are divided by that spectral envelope and thereby normalized; and the normalized (flattened) coefficients are auditorily weighted in an auditory weighting section 47 and then quantized, for example vector quantized, in a quantization section 48. To obtain the auditory weighting coefficients in this embodiment, the original input signal 11 from the input terminal 21 is transformed into the frequency domain by a discrete cosine transformer 49, auditory weighting coefficients are computed in a coefficient computation section 51 on the basis of the spectral envelope of those transform coefficients and supplied to the auditory weighting section 47, and the corresponding components of the normalized coefficients are multiplied by them.
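A much simplified stand-in for the transform coder of Fig. 7 is sketched below; it uses a DCT, a crude smoothed-magnitude envelope in place of the LPC-derived envelope, and uniform quantization in place of vector quantization, and the per-coefficient weight vector is assumed to be supplied externally (in the embodiment it is derived from the spectral envelope of the original input signal, as explained in the next paragraph).

```python
import numpy as np
from scipy.fft import dct

def transform_encode(diff_frame, weight, step=0.05):
    X = dct(diff_frame, norm='ortho')                                # transformer 45
    env = np.convolve(np.abs(X), np.ones(16) / 16, 'same') + 1e-12   # crude spectral envelope
    q = np.round((X / env) * weight / step)                          # normalize, weight, quantize
    return q.astype(np.int32), env                                   # env itself would also be coded
```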

[0022] That is, the upper-layer second encoder 24₂ encodes the signal 29 whose spectrum is shown in Fig. 5C, but the auditory weighting is not based on the spectral envelope of this signal 29; instead, the spectral envelope of the original input signal 11 (Fig. 5D) is obtained and the auditorily weighted coding is based on it. Next, an embodiment of the decoding method of this invention is described with reference to Fig. 8. This embodiment is applied to the decoding of a code produced by the coding method shown in Fig. 4. The input code supplied from an input terminal 55 is separated by a separation circuit 56 into the first code C₁ and the second code C₂; the first code C₁ is decoded by a first decoder 57₁, in this example by the CELP decoding method, into a first decoded signal 58₁ with highest signal frequency f₁ (sampling frequency 16 kHz), which is output as the lower-layer (low-band) decoded output 63₁.

[0023] This first decoded output 58₁ is converted by a sample-rate converter 59 into a converted decoded signal 61₁ with highest signal frequency f₂ (sampling frequency 24 kHz). Meanwhile, the second code C₂ from the separation circuit 56 is decoded in a second decoder 57₂, in this example by transform decoding, to obtain a second decoded signal 58₂ with highest signal frequency f₂ (sampling frequency 24 kHz); this second decoded signal 58₂ is added to the first converted decoded signal 61₁ in an adder 62₂ and output as the upper-layer (full-band) decoded output 63₂.

[0024] That is, in the ideal case the decoded signal 12₁ of Fig. 5B is obtained as the lower-layer decoded output 63₁. The decoded signal 58₂ of the second decoder 57₂ ideally consists, as shown in Fig. 5E, of a decoded version 60₁ of the lower-layer (low-band) quantization error signal 13₁ and a decoded version 64₂ of the high-band signal 33. In the decoded output 63₂ from the adder 62₂, therefore, the decoded signal 60₁ corresponding to the quantization error 13₁ is added to the low-band decoded signal 58₁, so that the quantization error is greatly reduced, and the high-band decoded signal 64₂ is of high fidelity. Hence the decoded output 63₂ up to the upper layer obtained from the adder 62₂ is remarkably close to the original input signal 11, and its quantization error signal is extremely small over the entire band, as shown for example in Fig. 5F.
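The corresponding two-layer decoding of Fig. 8 is sketched below; celp_decode and transform_decode are hypothetical stand-ins for the decoders 57₁ and 57₂, and the demultiplexing by the separation circuit 56 is assumed to have been done already.

```python
from scipy.signal import resample_poly

def two_layer_decode(c1, c2, celp_decode, transform_decode, want_wideband=True):
    low = celp_decode(c1)                 # first decoded signal 58_1 (16 kHz)
    if not want_wideband or c2 is None:
        return low                        # lower-layer (low-band) output 63_1
    low24 = resample_poly(low, 3, 2)      # converter 59: converted decoded signal 61_1 (24 kHz)
    resid = transform_decode(c2)          # second decoded signal 58_2 (24 kHz)
    n = min(len(low24), len(resid))
    return low24[:n] + resid[:n]          # adder 62_2: full-band output 63_2
```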

[0025] Next, as an example in which the coding method of this invention is applied to n-layer (n-band) division coding, the case n = 4 is described with reference to Fig. 9, in which parts corresponding to those in Fig. 4A carry the same reference numerals. In this example the original input signal 11 has a highest frequency fₙ = f₄ and a sampling frequency 2f₄; it is converted by a first sample-rate converter (first band selecting means) 22₁ into an input signal 23₁ with sampling frequency 2f₁ (where f₁ < f₂ < f₃ < f₄), that is, the first band signal 23₁ with frequencies not exceeding f₁ is selected. This first band signal 23₁ is encoded by the first encoder 24₁ and output as the first code C₁, and the first code C₁ is also decoded by the first decoder 25₁ into a signal with sampling frequency 2f₁; that decoded signal 12₁ is converted by the first sample-rate converter 26₁ into a first converted decoded signal with sampling frequency 2f₂. Meanwhile, the input signal 11 is converted by a sample-rate converter 22₂, serving as second band selecting means, into a signal with sampling frequency 2f₂, from which the second band signal 23₂ with frequencies not exceeding f₂ is taken. The first converted decoded signal from the first sample-rate converter 26₁ is subtracted from this second band signal 23₂ in a second difference circuit 28₂, and the second difference signal 29₂ is encoded by the second encoder 24₂ to output the second code C₂.

【0026】以下同様の処理を行うが、第3符号C3
得る処理を、i=3(i=2,3,……,n、この例で
は4まで)を例として説明する。第i−1(=第2)符
号C i-1 (=C2 )が第i−1(=第2)復号器252
で復号されて標本化周波数2fi-1 (=2f2 )の第i
−1(=第2)復号信号を得、この第i−1(=第2)
復号信号と第i−2(=第1)サンプルレート変換器2
i-2 (=261 )よりの第i−2(=第1)変換復号
信号との和が加算器60i-1 (=602 )でとられ、そ
の和信号は第i−1(=第2)サンプルレート変換器2
i-1 (=26 2 )で標本化周波数2fi (=2
3 )、周波数がfi-1 (=f2 )以下の第i−1(=
第2)変換復号信号に変換される。一方、第i(=第
3)帯域選択手段としてのサンプルレート変換器22i
(=223 )により入力信号11から、周波数がf
i (=f3 )、標本化周波数が2fi (=2f3 )の第
i(=第3)帯域信号23i (=233 )が取出され、
その第i(=第3)帯域信号23i (=233 )は第i
−1(=第2)サンプルレート変換器26i-1 (=26
2 )よりの変換復号信号が第i(=第3)差回路28i
(283 )で減算され、その第i(=第3)減算信号2
3 が第i(=第3)符号器24i (=243 )で符号
化され、第i(=第3)符号Ci (=C3 )を出力す
る。なお、第i−1(=第2)復号器25i-1 (=25
2 )と、加算器60i-1 (=602 )と第i−1(=第
2)サンプルレート変換器26i-1 (=262 )は第i
−1(=第2)復号化手段40i-1 (=402 )を構成
する。ただ第1復号化手段401 は第i−2層が存在せ
ず加算器600 は省略される。また最上位層、この例で
は第i(=第4)帯域信号234 は周波数f4 以下の信
号であるため第i帯域選択手段としてのサンプルレート
変換器224 は省略される。
The same processing is performed thereafter, but the third code C3To
The process to obtain is i = 3 (i = 2, 3, ..., N, in this example
Up to 4) as an example. I-1 (= second) mark
Issue C i-1(= C2) Is the i−1th (= second) decoder 252
Sampling frequency 2f decoded byi-1(= 2f2) Of the i
−1 (= second) decoded signal is obtained, and this i−1 (= second)
Decoded signal and i-2th (= first) sample rate converter 2
6i-2(= 261) -Th i-2 (= first) conversion decoding from
The sum of the signal and the adder 60i-1(= 602)
Of the sum signal is the (i-1) th (= second) sample rate converter 2
6i-1(= 26 2) With sampling frequency 2fi(= 2
f3), The frequency is fi-1(= F2) The following i-1 (=
Second) converted into a decoded signal. On the other hand, the i-th (= the
3) Sample rate converter 22 as band selection meansi
(= 223), The frequency is f from the input signal 11
i(= F3), The sampling frequency is 2fi(= 2f3) Of
i (= third) band signal 23i(= 233) Is taken out,
The i-th (= third) band signal 23i(= 233) Is the i
-1 (= second) sample rate converter 26i-1(= 26
2) Is the converted decoded signal from the i-th (= third) difference circuit 28.i
(283), The i-th (= third) subtraction signal 2
93Is the i-th (= third) encoder 24i(= 243) Sign
And the i-th (= third) code Ci(= C3) Is output
It Note that the i−1th (= second) decoder 25i-1(= 25
2) And the adder 60i-1(= 602) And the i-1 (= the
2) Sample rate converter 26i-1(= 262) Is the i
-1 (= second) decoding means 40i-1(= 402) Configured
I do. Only the first decryption means 401Is the i-2 layer
Adder 600Is omitted. Also the top layer, in this example
Is the i-th (= fourth) band signal 23FourIs the frequency fFourThe following beliefs
Sample rate as the i-th band selection means
Converter 22FourIs omitted.

[0027] In this way the invention can be applied to the case where the input signal band is divided into n segments and encoded. The first to n-th (= fourth) codes C₁ to Cₙ (= C₄) are multiplexed frame by frame in the multiplexing circuit 31 and output as the coded code C. In this case the multiplexing circuit 31 is arranged so that either the first code alone or the first to i-th codes can be selected and output. If the first to n-th (= fourth) encoders 24₁ to 24ₙ (= 24₄) are used in such a way that the compression rate becomes smaller as the index i of the encoder 24ᵢ becomes larger, wide-band, high-quality coding is obtained. Provided this condition is satisfied, the coding methods may, for example, all be transform coding.

[0028] When auditorily weighted coding is performed in the first to fourth encoders 24₁ to 24₄, the signals with frequencies not exceeding f₁, f₂ and f₃ from the sample-rate converters 22₁, 22₂ and 22₃ are supplied to auditory weighting coefficient computation sections 72₁, 72₂ and 72₃ respectively, where auditory weighting coefficients are computed on the basis of the respective spectral envelopes; the input signal is likewise supplied to an auditory weighting coefficient computation section 72₄, where auditory weighting coefficients are computed in the same way. The auditory weighting coefficients computed in these sections 72₁ to 72₄ are supplied to the first to fourth encoders 24₁ to 24₄, and auditorily weighted coding is performed as described above.

[0029] Fig. 10 shows the case n = 4 as another example of applying the coding method of this invention to n-layer division coding. In this example too the original input signal 11 has a highest frequency fₙ = f₄ and a sampling frequency 2f₄. It is converted by a first sample-rate converter (first band selecting means) 22₁ into an input signal 23₁ with sampling frequency 2f₁ (where f₁ < f₂ < f₃ < f₄), that is, the first band signal 23₁ with frequencies not exceeding f₁ is selected. This first band signal 23₁ is encoded by the first encoder 24₁ and output as the first code C₁; the first code C₁ is also decoded by the first decoder 25₁ into a signal with sampling frequency 2f₁, the difference between that decoded signal 12₁ and the first band signal 23₁ is taken in a first difference circuit 65₁, and the difference signal (first error signal) 13₁ is converted by the first sample-rate converter 26₁ into a first converted error signal with sampling frequency 2f₂.

[0030] Meanwhile, a second band signal 23₂ whose frequency band is f₁ to f₂ and whose sampling frequency is 2f₂ is taken from the input signal 11 by second band selecting means 66₂. For example, the input signal 11 is converted by a sample-rate converter 22₂ into a signal with sampling frequency 2f₂, and that signal is passed through a high-pass filter 67₂ with cutoff frequency f₁ to obtain the second band signal 23₂. This second band signal 23₂ is added in a second adder 68₂ to the first converted error signal from the first sample-rate converter 26₁, and the second sum signal 29₂ is encoded by the second encoder 24₂ to output the second code C₂.

【0031】以下同様の処理を行うが、第3符号C3
得る処理を、i=3(i=2,3,…,n、この例では
4まで)を例として説明する。第i−1(=第2)符号
i- 1 (=C2 )が第i−1(=第2)復号器252
復号されて標本化周波数2f i-1 (=2f2 )の第i−
1(=第2)復号信号を得、この第i−1(=第2)復
号信号と第i−1(=第2)加算器68i-1 (=6
2 )より第i−1(=第2)加算信号29i-1 (=2
2 )との差が差回路65i-1 (=652 )でとられ、
その第i−1(=第2)誤差信号132 は第i−1(=
第2)サンプルレート変換器26i-1 (=262 )で標
本化周波数2fi (=2f3 )の第i−1(=第2)変
換誤差信号に変換される。一方、第i(=第3)帯域選
択手段66i(=663 )により入力信号11から、帯
域がfi-1 〜fi (=f2 〜f3 )、標本化周波数が2
i (=f3 )の第i(=第3)帯域信号23i (=2
3 )が取出され、その第i(=第3)帯域信号23i
(=233 )は第i−1(=第2)変換誤差信号と第i
(=第3)加算器68i (=683 )で加算され、その
第i(=第3)加算信号293 が第i(=第3)符号器
24i (=243 )で符号化され、第i(=第3)符号
i (=C3 )を出力する。
Similar processing is performed thereafter, but the third code C3To
The process to obtain is i = 3 (i = 2, 3, ..., N, in this example,
Up to 4) as an example. I-1th (= second) code
Ci- 1(= C2) Is the i−1th (= second) decoder 252so
Decoded and sampling frequency 2f i-1(= 2f2) I-
1 (= second) decoded signal is obtained, and this i−1 (= second) decoded signal is obtained.
Signal and i-1 (= second) adder 68i-1(= 6
82) From the i-th (= second) addition signal 29i-1(= 2
92) Difference circuit 65i-1(= 652),
The i−1th (= second) error signal 132Is the i-1 (=
Second) sample rate converter 26i-1(= 262)
Main frequency 2fi(= 2f3) I−1 (= second) variation
It is converted into a conversion error signal. On the other hand, i-th (= third) band selection
Selection means 66i(= 663) From the input signal 11
Area fi-1~ Fi(= F2~ F3), The sampling frequency is 2
fi(= F3) I-th (= third) band signal 23i(= 2
Three3) Is extracted and its i-th (= third) band signal 23i
(= 233) Is the i-1 (= second) conversion error signal and the i-th
(= Third) adder 68i(= 683) Is added and then
I-th (= third) addition signal 293Is the i-th (= third) encoder
24i(= 243), I-th (= third) code
Ci(= C3) Is output.

[0032] In this way the invention can be applied to the case where the input signal band is divided into n segments and encoded. The n-th (= fourth) band selecting means 66ₙ (= 66₄), which selects the topmost layer, that is, the band from fₙ₋₁ to fₙ (= f₃ to f₄), may simply be a high-pass filter 67ₙ (= 67₄) with cutoff frequency fₙ₋₁ (= f₃). The first to n-th (= fourth) codes C₁ to Cₙ (= C₄) are multiplexed frame by frame in the multiplexing circuit 31 and output as the coded code C. In this case the multiplexing circuit 31 is arranged so that either the first code alone or the first to i-th codes can be selected and output.
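The error-feed-forward structure of Fig. 10 (the claim 3 scheme) can be summarized as follows; band_select, encoders, decoders and resample are hypothetical stubs for the band selecting means 66ᵢ, the encoders 24ᵢ, the local decoders 25ᵢ and the sample-rate converters 26ᵢ, and the length trimming is an assumption.

```python
def error_feedforward_encode(x, edges, band_select, encoders, decoders, resample):
    """edges = [f1, ..., fn] (Hz); band_select(x, i) returns the i-th band signal
    at rate 2*edges[i]; encoders/decoders are hypothetical per-layer codec stubs."""
    codes, err = [], None                 # err: previous layer's coding error (fed forward)
    for i, fi in enumerate(edges):
        band = band_select(x, i)                          # i-th band signal 23_i
        if err is None:
            target = band                                 # layer 1: the band itself
        else:
            n = min(len(band), len(err))
            target = band[:n] + err[:n]                   # adder 68_i: i-th sum signal 29_i
        codes.append(encoders[i](target))                 # i-th code C_i
        if i + 1 < len(edges):
            dec = decoders[i](codes[-1])                  # local decoding (decoder 25_i)
            m = min(len(target), len(dec))
            err = resample(target[:m] - dec[:m],          # difference circuit 65_i
                           2 * fi, 2 * edges[i + 1])      # converter 26_i
    return codes
```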

[0033] If the first to n-th (= fourth) encoders 24₁ to 24ₙ (= 24₄) are used in such a way that the compression rate becomes smaller as the index i of the encoder 24ᵢ becomes larger, wide-band, high-quality coding is obtained. Provided this condition is satisfied, the coding methods may, for example, all be transform coding. When auditorily weighted coding is performed in the first to fourth encoders 24₁ to 24₄, the input signal is converted by sample-rate converters 71₁, 71₂ and 71₃ into signals with sampling frequencies 2f₁, 2f₂ and 2f₃, so that signals with frequencies not exceeding f₁, f₂ and f₃ are taken from the input signal 11 and supplied to auditory weighting coefficient computation sections 72₁, 72₂ and 72₃ respectively, where auditory weighting coefficients are computed on the basis of the respective spectral envelopes; the input signal is likewise supplied to an auditory weighting coefficient computation section 72₄, where auditory weighting coefficients are computed in the same way. The auditory weighting coefficients computed in these sections 72₁ to 72₄ are supplied to the first to fourth encoders 24₁ to 24₄, and auditorily weighted coding is performed as described above.

[0034] As an example of a decoder applying the general decoding method according to this invention, Fig. 11 shows the case n = 4, that is, the case where the input code consists of the first to fourth codes C₁ to C₄; parts corresponding to those in Fig. 8 carry the same reference numerals. In code separating means 56 the input code C is separated into the first to fourth codes C₁ to C₄, which are supplied to first to fourth decoders 57₁ to 57₄ respectively. The first decoded signal 58₁ of the first decoder 57₁ is output as the first decoded output 63₁ and is also converted by a sample-rate converter 59₁ into a first converted decoded signal 61₁ with sampling frequency 2f₂; that first converted decoded signal 61₁ is added in a second adder 62₂ to the second decoded signal 58₂ from the second decoder 57₂ and output as the second decoded output 63₂, which in turn is converted by a second sample-rate converter 59₂ into a converted decoded signal with sampling frequency 2f₃. In general, the (i−1)-th (i = 2, 3, ..., n; for example i = 3) decoded output 63ᵢ₋₁ (= 63₂) from the adder 62ᵢ₋₁ (= 62₂) is converted by an (i−1)-th (= second) sample-rate converter 59ᵢ₋₁ (= 59₂) into an (i−1)-th (= second) converted decoded signal 61ᵢ₋₁ (= 61₂) with sampling frequency 2fᵢ (= 2f₃); that (i−1)-th (= second) converted decoded signal 61ᵢ₋₁ (= 61₂) and the i-th (= third) decoded signal 58ᵢ (= 58₃) from the i-th (= third) decoder 57ᵢ (= 57₃) are added in an i-th (= third) adder 62ᵢ (= 62₃) to obtain the i-th (= third) decoded output 63ᵢ (= 63₃), which is output.
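The general decoding of claim 6 (Fig. 11 for n = 4) reduces to a short accumulation loop; decoders and resample are the same kind of hypothetical stubs used in the earlier sketches, and edges = [f₁, ..., fₙ] as before.

```python
def layered_decode(codes, edges, decoders, resample, layers=None):
    """Decode the first `layers` codes (default: all) and return that decoded output."""
    layers = len(codes) if layers is None else layers
    out = None                                  # (i-1)-th decoded output 63_{i-1}
    for i in range(layers):
        dec = decoders[i](codes[i])             # i-th decoded signal 58_i at rate 2*edges[i]
        if out is None:
            out = dec                           # first decoded output 63_1
        else:
            up = resample(out, 2 * edges[i - 1], 2 * edges[i])  # converter 59_{i-1}
            n = min(len(up), len(dec))
            out = up[:n] + dec[:n]              # adder 62_i: i-th decoded output 63_i
    return out
```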

[0035] [Effects of the Invention] As described above, according to this invention the quantization error of the lower layer is encoded in the upper layer of the hierarchical coding method, so that even when the layers are built from coding methods with different compression schemes, such as a CELP coding method and a transform coding method, the coding quality of the decoded signal obtained up to the upper layers is not degraded. Furthermore, by repeating the operation of encoding the lower layer's quantization error in the upper layer, the quantization error in a multi-layer configuration can be reduced in accordance with the number of layers. In addition, with such a coding method the perceptual decoding quality is optimal at whichever layer decoding is performed, and scalable hierarchical coding can be realized.

[Brief Description of the Drawings]

FIG. 1 shows an example of the original sound (A), the coded and reproduced sound (B) and the quantization error (C) when the subband coding method is realized by dividing the signal into three frequency bands.

FIG. 2 is a diagram for explaining the features of a hierarchical coding method having a scalable hierarchical structure.

FIG. 3 shows the original sound, the decoded signal and the quantization error when hierarchical coding is realized by the subband coding method.

FIG. 4A is a block diagram showing an example of an encoder when the coding method according to this invention is applied to a two-layer coding method, and FIG. 4B shows an example of the multiplexed code.

FIGS. 5A to 5D show examples of the original sound, the decoded signal, the upper-layer coding input and the upper-layer auditory weighting reference in the coding operation of FIG. 4A, and FIGS. 5E and 5F show examples of the upper-layer decoded signal and the quantization error of decoding up to the upper layer.

FIG. 6 is a block diagram outlining a CELP encoder.

FIG. 7 is a block diagram outlining a transform encoder.

FIG. 8 is a block diagram showing an example of a decoder in which the decoding method of this invention is applied to the decoding of two-layer coding.

FIG. 9 is a block diagram showing an example of an encoder when the coding method of this invention is realized as a four-layer coding method.

FIG. 10 is a block diagram showing another example of an encoder realizing the four-layer coding method according to this invention.

FIG. 11 is a block diagram showing an example of a decoder when the decoding method of this invention is realized for the four-layer coding method.

Claims (6)

1. An acoustic signal coding method in which an acoustic input signal of music, speech or the like, whose highest frequency is fₙ, is divided at frequencies f₁, f₂, ..., fₙ₋₁ (f₁ < f₂ < ... < fₙ₋₁ < fₙ) into n segments (n being an integer of 2 or more) and encoded, the method comprising: a first band selection step of selecting from said input signal a first band signal with frequencies not exceeding f₁; a first coding step of encoding said first band signal by a first coding method and outputting a first code; an (i−1)-th decoding step of obtaining, from the codes up to the (i−1)-th code (i = 2, 3, ..., n), an (i−1)-th decoded signal with frequencies not exceeding fᵢ₋₁; an i-th selection step of selecting from said input signal an i-th band signal with frequencies not exceeding fᵢ; an i-th difference step of subtracting said (i−1)-th decoded signal from said i-th band signal to obtain an i-th difference signal; and an i-th coding step of encoding said i-th difference signal by an i-th coding method and outputting an i-th code.

2. The acoustic signal coding method according to claim 1, wherein said (i−1)-th decoding step comprises: a step of decoding the (i−1)-th code; a step of adding the decoded signal to the (i−2)-th decoded signal; and a step of converting the sum signal into a signal with sampling frequency 2fᵢ to obtain said (i−1)-th decoded signal.

3. An acoustic signal coding method in which an acoustic input signal of music, speech or the like, whose highest frequency is fₙ, is divided at frequencies f₁, f₂, ..., fₙ₋₁ (f₁ < f₂ < ... < fₙ₋₁ < fₙ) (n being an integer of 2 or more) and each segment is encoded, the method comprising: a first band selection step of obtaining from said input signal a first band signal with sampling frequency 2f₁; a first coding step of encoding said first band signal by a first coding method and outputting a first code; an (i−1)-th error extraction step of obtaining an (i−1)-th error signal as the coding error of the (i−1)-th coding step (i = 2, 3, ..., n); an (i−1)-th conversion step of converting said (i−1)-th error signal into an (i−1)-th converted error signal with sampling frequency 2fᵢ; an i-th band selection step of obtaining from said acoustic input signal an i-th band signal whose frequency band is fᵢ₋₁ to fᵢ and whose sampling frequency is 2fᵢ; an i-th addition step of adding said (i−1)-th converted error signal and said i-th band signal to obtain an i-th sum signal; and an i-th coding step of encoding said i-th sum signal by an i-th coding method and outputting an i-th code.

4. The acoustic signal coding method according to any one of claims 1 to 3, wherein said first coding method is a code-excited linear predictive coding method and said n-th coding method is a transform coding method.

5. The acoustic signal coding method according to any one of claims 1 to 4, wherein psychoacoustically weighted quantization is performed in said i-th coding step, using as the weighting reference the spectral envelope of substantially the whole range of components of said acoustic input signal at frequencies not exceeding fᵢ.

6. An acoustic signal decoding method comprising: a separation step of separating an input code into first to n-th codes (n being an integer of 2 or more); a first decoding step of decoding said first code and outputting a first decoded signal with sampling frequency 2f₁ as a first decoded output; an (i−1)-th conversion step of converting the (i−1)-th decoded output (i = 2, 3, ..., n) into an (i−1)-th converted decoded output with sampling frequency 2fᵢ; an i-th decoding step of decoding said i-th code to obtain an i-th decoded signal with sampling frequency 2fᵢ; and an i-th addition step of adding said i-th decoded signal and said (i−1)-th converted decoded output and outputting an i-th decoded output.
JP07065622A 1995-03-24 1995-03-24 Acoustic signal encoding method and decoding method Expired - Lifetime JP3139602B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP07065622A JP3139602B2 (en) 1995-03-24 1995-03-24 Acoustic signal encoding method and decoding method


Publications (2)

Publication Number Publication Date
JPH08263096A (en) 1996-10-11
JP3139602B2 JP3139602B2 (en) 2001-03-05

Family

ID=13292314

Family Applications (1)

Application Number Title Priority Date Filing Date
JP07065622A Expired - Lifetime JP3139602B2 (en) 1995-03-24 1995-03-24 Acoustic signal encoding method and decoding method

Country Status (1)

Country Link
JP (1) JP3139602B2 (en)

WO2010137692A1 (en) 2009-05-29 2010-12-02 日本電信電話株式会社 Coding device, decoding device, coding method, decoding method, and program therefor
US7949518B2 (en) 2004-04-28 2011-05-24 Panasonic Corporation Hierarchy encoding apparatus and hierarchy encoding method
US9218818B2 (en) 2001-07-10 2015-12-22 Dolby International Ab Efficient and scalable parametric stereo coding for low bitrate audio coding applications
US9542950B2 (en) 2002-09-18 2017-01-10 Dolby International Ab Method for reduction of aliasing introduced by spectral envelope adjustment in real-valued filterbanks
US9792919B2 (en) 2001-07-10 2017-10-17 Dolby International Ab Efficient and scalable parametric stereo coding for low bitrate applications
CN109215670A (en) * 2018-09-21 2019-01-15 西安蜂语信息科技有限公司 Transmission method, device, computer equipment and the storage medium of audio data

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10328777A1 (en) * 2003-06-25 2005-01-27 Coding Technologies Ab Apparatus and method for encoding an audio signal and apparatus and method for decoding an encoded audio signal

Cited By (88)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0890943A3 (en) * 1997-07-11 1999-12-22 Nec Corporation Voice coding and decoding system
US6208957B1 (en) 1997-07-11 2001-03-27 Nec Corporation Voice coding and decoding system
EP0890943A2 (en) * 1997-07-11 1999-01-13 Nec Corporation Voice coding and decoding system
JP2001519552A (en) * 1997-10-02 2001-10-23 シーメンス アクチエンゲゼルシヤフト Method and apparatus for generating a bit rate scalable audio data stream
US6865534B1 (en) 1998-06-15 2005-03-08 Nec Corporation Speech and music signal coder/decoder
WO1999066497A1 (en) * 1998-06-15 1999-12-23 Nec Corporation Voice/music signal encoder and decoder
US6549147B1 (en) 1999-05-21 2003-04-15 Nippon Telegraph And Telephone Corporation Methods, apparatuses and recorded medium for reversible encoding and decoding
JP2003506763A (en) * 1999-08-09 2003-02-18 ドルビー・ラボラトリーズ・ライセンシング・コーポレーション Scaleable encoding method for high quality audio
JP4731774B2 (en) * 1999-08-09 2011-07-27 ドルビー・ラボラトリーズ・ライセンシング・コーポレーション Scaleable encoding method for high quality audio
US10540982B2 (en) 2001-07-10 2020-01-21 Dolby International Ab Efficient and scalable parametric stereo coding for low bitrate audio coding applications
US10297261B2 (en) 2001-07-10 2019-05-21 Dolby International Ab Efficient and scalable parametric stereo coding for low bitrate audio coding applications
US9865271B2 (en) 2001-07-10 2018-01-09 Dolby International Ab Efficient and scalable parametric stereo coding for low bitrate applications
US9799341B2 (en) 2001-07-10 2017-10-24 Dolby International Ab Efficient and scalable parametric stereo coding for low bitrate applications
US9799340B2 (en) 2001-07-10 2017-10-24 Dolby International Ab Efficient and scalable parametric stereo coding for low bitrate audio coding applications
US9792919B2 (en) 2001-07-10 2017-10-17 Dolby International Ab Efficient and scalable parametric stereo coding for low bitrate applications
US10902859B2 (en) 2001-07-10 2021-01-26 Dolby International Ab Efficient and scalable parametric stereo coding for low bitrate audio coding applications
US9218818B2 (en) 2001-07-10 2015-12-22 Dolby International Ab Efficient and scalable parametric stereo coding for low bitrate audio coding applications
US9779746B2 (en) 2001-11-29 2017-10-03 Dolby International Ab High frequency regeneration of an audio signal with synthetic sinusoid addition
US9761236B2 (en) 2001-11-29 2017-09-12 Dolby International Ab High frequency regeneration of an audio signal with synthetic sinusoid addition
US9761234B2 (en) 2001-11-29 2017-09-12 Dolby International Ab High frequency regeneration of an audio signal with synthetic sinusoid addition
US7469206B2 (en) 2001-11-29 2008-12-23 Coding Technologies Ab Methods for improving high frequency reconstruction
US11238876B2 (en) 2001-11-29 2022-02-01 Dolby International Ab Methods for improving high frequency reconstruction
US8447621B2 (en) 2001-11-29 2013-05-21 Dolby International Ab Methods for improving high frequency reconstruction
WO2003046891A1 (en) * 2001-11-29 2003-06-05 Coding Technologies Ab Methods for improving high frequency reconstruction
US9812142B2 (en) 2001-11-29 2017-11-07 Dolby International Ab High frequency regeneration of an audio signal with synthetic sinusoid addition
US9818417B2 (en) 2001-11-29 2017-11-14 Dolby International Ab High frequency regeneration of an audio signal with synthetic sinusoid addition
US9431020B2 (en) 2001-11-29 2016-08-30 Dolby International Ab Methods for improving high frequency reconstruction
US9792923B2 (en) 2001-11-29 2017-10-17 Dolby International Ab High frequency regeneration of an audio signal with synthetic sinusoid addition
US9818418B2 (en) 2001-11-29 2017-11-14 Dolby International Ab High frequency regeneration of an audio signal with synthetic sinusoid addition
US8112284B2 (en) 2001-11-29 2012-02-07 Coding Technologies Ab Methods and apparatus for improving high frequency reconstruction of audio and speech signals
US8019612B2 (en) 2001-11-29 2011-09-13 Coding Technologies Ab Methods for improving high frequency reconstruction
US9761237B2 (en) 2001-11-29 2017-09-12 Dolby International Ab High frequency regeneration of an audio signal with synthetic sinusoid addition
US10403295B2 (en) 2001-11-29 2019-09-03 Dolby International Ab Methods for improving high frequency reconstruction
US7406410B2 (en) 2002-02-08 2008-07-29 Ntt Docomo, Inc. Encoding and decoding method and apparatus using rising-transition detection and notification
US7599835B2 (en) 2002-03-08 2009-10-06 Nippon Telegraph And Telephone Corporation Digital signal encoding method, decoding method, encoding device, decoding device, digital signal encoding program, and decoding program
US8311815B2 (en) 2002-03-08 2012-11-13 Nippon Telegraph And Telephone Corporation Method, apparatus, and program for encoding digital signal, and method, apparatus, and program for decoding digital signal
US8209188B2 (en) 2002-04-26 2012-06-26 Panasonic Corporation Scalable coding/decoding apparatus and method based on quantization precision in bands
US7752052B2 (en) 2002-04-26 2010-07-06 Panasonic Corporation Scalable coder and decoder performing amplitude flattening for error spectrum estimation
US7996233B2 (en) 2002-09-06 2011-08-09 Panasonic Corporation Acoustic coding of an enhancement frame having a shorter time length than a base frame
CN100454389C (en) * 2002-09-06 2009-01-21 松下电器产业株式会社 Sound encoding apparatus and sound encoding method
CN101425294A (en) * 2002-09-06 2009-05-06 松下电器产业株式会社 Sound encoding apparatus and sound encoding method
WO2004023457A1 (en) * 2002-09-06 2004-03-18 Matsushita Electric Industrial Co., Ltd. Sound encoding apparatus and sound encoding method
US10157623B2 (en) 2002-09-18 2018-12-18 Dolby International Ab Method for reduction of aliasing introduced by spectral envelope adjustment in real-valued filterbanks
US10685661B2 (en) 2002-09-18 2020-06-16 Dolby International Ab Method for reduction of aliasing introduced by spectral envelope adjustment in real-valued filterbanks
US11423916B2 (en) 2002-09-18 2022-08-23 Dolby International Ab Method for reduction of aliasing introduced by spectral envelope adjustment in real-valued filterbanks
US10418040B2 (en) 2002-09-18 2019-09-17 Dolby International Ab Method for reduction of aliasing introduced by spectral envelope adjustment in real-valued filterbanks
US9842600B2 (en) 2002-09-18 2017-12-12 Dolby International Ab Method for reduction of aliasing introduced by spectral envelope adjustment in real-valued filterbanks
US10013991B2 (en) 2002-09-18 2018-07-03 Dolby International Ab Method for reduction of aliasing introduced by spectral envelope adjustment in real-valued filterbanks
US10115405B2 (en) 2002-09-18 2018-10-30 Dolby International Ab Method for reduction of aliasing introduced by spectral envelope adjustment in real-valued filterbanks
US9990929B2 (en) 2002-09-18 2018-06-05 Dolby International Ab Method for reduction of aliasing introduced by spectral envelope adjustment in real-valued filterbanks
US9542950B2 (en) 2002-09-18 2017-01-10 Dolby International Ab Method for reduction of aliasing introduced by spectral envelope adjustment in real-valued filterbanks
US7624022B2 (en) 2003-07-03 2009-11-24 Samsung Electronics Co., Ltd. Speech compression and decompression apparatuses and methods providing scalable bandwidth structure
JP2011154378A (en) * 2003-07-03 2011-08-11 Samsung Electronics Co Ltd Speech compression and decompression apparatuses and methods having scalable bandwidth structure
US8571878B2 (en) 2003-07-03 2013-10-29 Samsung Electronics Co., Ltd. Speech compression and decompression apparatuses and methods providing scalable bandwidth structure
JP2005025203A (en) * 2003-07-03 2005-01-27 Samsung Electronics Co Ltd Speech compression and decompression apparatus having scalable bandwidth structure and its method
US8738372B2 (en) 2003-09-16 2014-05-27 Panasonic Corporation Spectrum coding apparatus and decoding apparatus that respectively encodes and decodes a spectrum including a first band and a second band
US7844451B2 (en) 2003-09-16 2010-11-30 Panasonic Corporation Spectrum coding/decoding apparatus and method for reducing distortion of two band spectrums
US8374884B2 (en) 2003-09-30 2013-02-12 Panasonic Corporation Decoding apparatus and decoding method
US8195471B2 (en) 2003-09-30 2012-06-05 Panasonic Corporation Sampling rate conversion apparatus, coding apparatus, decoding apparatus and methods thereof
JP2005107255A (en) * 2003-09-30 2005-04-21 Matsushita Electric Ind Co Ltd Sampling rate converting device, encoding device, and decoding device
JP4679049B2 (en) * 2003-09-30 2011-04-27 パナソニック株式会社 Scalable decoding device
US7756711B2 (en) 2003-09-30 2010-07-13 Panasonic Corporation Sampling rate conversion apparatus, encoding apparatus decoding apparatus and methods thereof
US7949518B2 (en) 2004-04-28 2011-05-24 Panasonic Corporation Hierarchy encoding apparatus and hierarchy encoding method
WO2006030864A1 (en) * 2004-09-17 2006-03-23 Matsushita Electric Industrial Co., Ltd. Audio encoding apparatus, audio decoding apparatus, communication apparatus and audio encoding method
JP4963963B2 (en) * 2004-09-17 2012-06-27 パナソニック株式会社 Scalable encoding device, scalable decoding device, scalable encoding method, and scalable decoding method
US7783480B2 (en) 2004-09-17 2010-08-24 Panasonic Corporation Audio encoding apparatus, audio decoding apparatus, communication apparatus and audio encoding method
US8712767B2 (en) 2004-09-17 2014-04-29 Panasonic Corporation Scalable encoding apparatus, scalable decoding apparatus, scalable encoding method, scalable decoding method, communication terminal apparatus, and base station apparatus
JP2010244078A (en) * 2004-09-17 2010-10-28 Panasonic Corp Spectrum envelope information quantization device, spectrum envelope information decoding device, spectrum envelope information quantizatization method, and spectrum envelope information decoding method
JPWO2006030865A1 (en) * 2004-09-17 2008-05-15 松下電器産業株式会社 Scalable encoding apparatus, scalable decoding apparatus, scalable encoding method, scalable decoding method, communication terminal apparatus, and base station apparatus
WO2006030865A1 (en) * 2004-09-17 2006-03-23 Matsushita Electric Industrial Co., Ltd. Scalable encoding apparatus, scalable decoding apparatus, scalable encoding method, scalable decoding method, communication terminal apparatus, and base station apparatus
US7848925B2 (en) 2004-09-17 2010-12-07 Panasonic Corporation Scalable encoding apparatus, scalable decoding apparatus, scalable encoding method, scalable decoding method, communication terminal apparatus, and base station apparatus
JP2008522214A (en) * 2004-11-29 2008-06-26 ナショナル ユニバーシティ オブ シンガポール Perceptually conscious low-power audio decoder for portable devices
JP2008533522A (en) * 2005-03-09 2008-08-21 テレフオンアクチーボラゲット エル エム エリクソン(パブル) Low complexity code-excited linear predictive coding
JP2006293375A (en) * 2005-04-14 2006-10-26 Samsung Electronics Co Ltd Apparatus and method of encoding audio data and apparatus and method of decoding encoded audio data
JP2009501351A (en) * 2005-07-13 2009-01-15 フランス テレコム Hierarchical encoding / decoding device
JP2009527017A (en) * 2006-02-14 2009-07-23 フランス テレコム Apparatus for perceptual weighting in audio encoding / decoding
WO2007129728A1 (en) * 2006-05-10 2007-11-15 Panasonic Corporation Encoding device and encoding method
JP5190359B2 (en) * 2006-05-10 2013-04-24 パナソニック株式会社 Encoding apparatus and encoding method
US8121850B2 (en) 2006-05-10 2012-02-21 Panasonic Corporation Encoding apparatus and encoding method
JP2009042740A (en) * 2007-03-02 2009-02-26 Panasonic Corp Encoding device
JP5448850B2 (en) * 2008-01-25 2014-03-19 パナソニック株式会社 Encoding device, decoding device and methods thereof
US8422569B2 (en) 2008-01-25 2013-04-16 Panasonic Corporation Encoding device, decoding device, and method thereof
WO2009093466A1 (en) * 2008-01-25 2009-07-30 Panasonic Corporation Encoding device, decoding device, and method thereof
WO2010103855A1 (en) 2009-03-13 2010-09-16 パナソニック株式会社 Voice decoding apparatus and voice decoding method
WO2010103854A2 (en) 2009-03-13 2010-09-16 パナソニック株式会社 Speech encoding device, speech decoding device, speech encoding method, and speech decoding method
WO2010137692A1 (en) 2009-05-29 2010-12-02 日本電信電話株式会社 Coding device, decoding device, coding method, decoding method, and program therefor
JP2010020333A (en) * 2009-09-04 2010-01-28 Panasonic Corp Scalable coder and scalable decoder
CN109215670A (en) * 2018-09-21 2019-01-15 西安蜂语信息科技有限公司 Transmission method, device, computer equipment and the storage medium of audio data

Also Published As

Publication number Publication date
JP3139602B2 (en) 2001-03-05

Similar Documents

Publication Publication Date Title
JP3139602B2 (en) Acoustic signal encoding method and decoding method
JP4781153B2 (en) Audio data encoding and decoding apparatus, and audio data encoding and decoding method
Neuendorf et al. MPEG unified speech and audio coding-the ISO/MPEG standard for high-efficiency audio coding of all content types
JP4934427B2 (en) Speech signal decoding apparatus and speech signal encoding apparatus
JP5357055B2 (en) Improved digital audio signal encoding / decoding method
KR101303145B1 (en) A system for coding a hierarchical audio signal, a method for coding an audio signal, computer-readable medium and a hierarchical audio decoder
US5778335A (en) Method and apparatus for efficient multiband celp wideband speech and music coding and decoding
JP4081447B2 (en) Apparatus and method for encoding time-discrete audio signal and apparatus and method for decoding encoded audio data
KR100304092B1 (en) Audio signal coding apparatus, audio signal decoding apparatus, and audio signal coding and decoding apparatus
JP5123303B2 (en) Method and apparatus for reversibly encoding an original signal using a lossy encoded data stream and a lossless decompressed data stream
Herre et al. Overview of MPEG-4 audio and its applications in mobile communications
WO2003091989A1 (en) Coding device, decoding device, coding method, and decoding method
JPH10282999A (en) Method and device for coding audio signal, and method and device decoding for coded audio signal
CA2704807A1 (en) Audio coding apparatus and method thereof
JP3344962B2 (en) Audio signal encoding device and audio signal decoding device
JP2012518194A (en) Audio signal encoding and decoding method and apparatus using adaptive sinusoidal coding
WO2006120931A1 (en) Encoder, decoder, and their methods
US9230551B2 (en) Audio encoder or decoder apparatus
US20100250260A1 (en) Encoder
JP5730860B2 (en) Audio signal encoding and decoding method and apparatus using hierarchical sinusoidal pulse coding
JP2002330075A (en) Subband adpcm encoding/decoding method, subband adpcm encoder/decoder and wireless microphone transmitting/ receiving system
Ramprashad A two stage hybrid embedded speech/audio coding structure
EP0919989A1 (en) Audio signal encoder, audio signal decoder, and method for encoding and decoding audio signal
JP4281131B2 (en) Signal encoding apparatus and method, and signal decoding apparatus and method
Song et al. Harmonic enhancement in low bitrate audio coding using an efficient long-term predictor

Legal Events

Date Code Title Description
FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20071215

Year of fee payment: 7

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20081215

Year of fee payment: 8

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20091215

Year of fee payment: 9

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20101215

Year of fee payment: 10

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20101215

Year of fee payment: 10

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20111215

Year of fee payment: 11

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20111215

Year of fee payment: 11

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20121215

Year of fee payment: 12

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20121215

Year of fee payment: 12

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20131215

Year of fee payment: 13

S531 Written request for registration of change of domicile

Free format text: JAPANESE INTERMEDIATE CODE: R313531

R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350

EXPY Cancellation because of completion of term