JP2001166800A - Voice encoding method and voice decoding method - Google Patents

Voice encoding method and voice decoding method

Info

Publication number
JP2001166800A
Authority
JP
Japan
Prior art keywords
frequency component
speech
low
linear prediction
noise code
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP34985799A
Other languages
Japanese (ja)
Other versions
JP3510168B2 (en)
Inventor
Yuusuke Hiwazaki
Kazunori Mano
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nippon Telegraph and Telephone Corp
Original Assignee
Nippon Telegraph and Telephone Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corp filed Critical Nippon Telegraph and Telephone Corp
Priority to JP34985799A priority Critical patent/JP3510168B2/en
Publication of JP2001166800A publication Critical patent/JP2001166800A/en
Application granted granted Critical
Publication of JP3510168B2 publication Critical patent/JP3510168B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current


Abstract

PROBLEM TO BE SOLVED: To achieve more efficient quantization of the speech waveform in unvoiced sections for linear predictive coding that uses a noise sequence as the excitation signal. SOLUTION: In this speech encoding method, a linear prediction analysis of the speech signal is performed for each frame to obtain linear prediction coefficients, and a code is determined by quantizing feature quantities of the residual signal used to drive a linear prediction synthesis filter whose filter coefficients are based on those analysis coefficients. The periodicity of the residual signal is evaluated as a feature quantity; when the periodicity falls below a predetermined threshold, the residual signal is band-split into a low-frequency component and a high-frequency component. A noise code corresponding to the noise codevector closest to the low-frequency component is selected to produce the codevector's code, and the mean power of the high-frequency component is computed for each subframe of the frame to produce an unvoiced-part code based on that mean power.

Description

DETAILED DESCRIPTION OF THE INVENTION

[0001]

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a high-efficiency speech encoding method and decoding method that digitally encode a speech signal sequence with a small amount of information in the processing of unvoiced sections. In particular, the present invention realizes high-quality speech coding at bit rates of 2.0 kbit/s or less, the domain of the conventional speech analysis-synthesis systems known as vocoders.

[0002]

2. Description of the Related Art

Conventional techniques related to the present invention include the linear predictive vocoder and code-excited linear prediction (CELP) coding. The linear predictive vocoder has been widely used as a speech coding method in the low-bit-rate region of 4.8 kbit/s and below; examples include the PARCOR method and the line spectrum pair (LSP) method. These methods are described in detail in, for example, Saito and Nakata, "Fundamentals of Speech Information Processing" (Ohmsha). A linear predictive vocoder consists of an all-pole filter representing the spectral envelope of speech and an excitation signal that drives it. A pulse sequence is used as the excitation signal in voiced sections, and white noise in unvoiced sections. In a linear predictive vocoder, however, a white-noise excitation signal cannot adequately reproduce the characteristics of the speech waveform, in particular both plosives and fricatives, so it is difficult to obtain synthesized speech with high naturalness.
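For illustration only (this sketch is not part of the patent), the following Python fragment shows the classical LPC vocoder structure just described: an all-pole filter 1/A(z) driven by a pulse train in voiced sections and by white noise in unvoiced sections. All function names and default values are illustrative assumptions.

```python
# Illustrative sketch of a classical LPC vocoder synthesizer
# (assumed parameter values; not taken from the patent text).
import numpy as np
from scipy.signal import lfilter

def lpc_vocoder_frame(a, gain, n_samples, voiced, pitch_period=80):
    """Synthesize one frame through the all-pole filter 1/A(z),
    where a = [1, a1, ..., ap] are the prediction coefficients."""
    if voiced:
        excitation = np.zeros(n_samples)
        excitation[::pitch_period] = 1.0        # pulse train at the pitch period
    else:
        excitation = np.random.randn(n_samples)  # white noise
    # Match the excitation energy to the transmitted gain.
    excitation *= gain / (np.linalg.norm(excitation) + 1e-12)
    return lfilter([1.0], a, excitation)         # drive 1/A(z)
```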

[0003] In code-excited linear prediction coding, on the other hand, speech is synthesized by using a noise sequence as the excitation signal to drive two all-pole filters that represent the short-term correlation and the pitch correlation of speech. The noise sequences are prepared in advance as a set of code patterns, and the code pattern that minimizes the error between the input speech waveform and the synthesized speech waveform is selected from among them. For details, see Schroeder: "Code-Excited Linear Prediction (CELP): High Quality Speech at Very Low Bit Rates," Proc. IEEE ICASSP, pp. 937-940, 1985. In code-excited linear prediction coding, the reproduction accuracy depends on the number of code patterns. Therefore, if many code patterns are provided, the reproduction accuracy of the speech waveform increases and the quality improves accordingly. However, when the bit rate of the speech coder is reduced to 4 kbit/s or less, the number of code patterns is limited, and as a result sufficient speech quality cannot be obtained. An information rate of about 4.8 kbit/s is said to be necessary to obtain good speech quality.
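As a hedged illustration of the codebook search just described (a simplified sketch, not the patent's own procedure; it omits the pitch filter and perceptual weighting), each stored noise pattern is filtered through the synthesis filter and the pattern minimizing the waveform error, together with its least-squares gain, is kept:

```python
# Sketch of an analysis-by-synthesis codebook search in the spirit of CELP.
import numpy as np
from scipy.signal import lfilter

def search_codebook(codebook, a, target):
    """Return (index, gain) of the codevector minimizing the squared
    error between the synthesized waveform and the target."""
    best, best_err = (None, 0.0), np.inf
    for i, c in enumerate(codebook):
        y = lfilter([1.0], a, c)                         # synthesize candidate
        g = float(y @ target) / (float(y @ y) + 1e-12)   # least-squares gain
        err = float(np.sum((target - g * y) ** 2))
        if err < best_err:
            best, best_err = (i, g), err
    return best
```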

[0004]

SUMMARY OF THE INVENTION

An object of the present invention is to provide a more efficient method of quantizing the speech waveform in unvoiced sections for linear predictive coding that uses a noise sequence as the excitation signal. Furthermore, in vocoder systems the voiced/unvoiced decision inevitably contains errors, and driving a voiced section with white noise matched only in power causes severe quality degradation.

[0005]

MEANS FOR SOLVING THE PROBLEMS

To solve the above problems, the invention of claim 1 is a speech encoding method in which a linear prediction analysis is performed on a speech signal for each frame to obtain linear prediction coefficients, and a code is determined by quantizing feature quantities of the residual signal used to drive a linear prediction synthesis filter whose filter coefficients are based on the linear prediction coefficients. As a feature quantity, the periodicity of the residual signal is evaluated; when the periodicity is lower than a predetermined threshold, the residual signal is band-split into a low-frequency component and a high-frequency component, the noise code corresponding to the noise codevector at minimum distance from the low-frequency component is selected, and for the high-frequency component the average power is computed for each of the subframes constituting the frame.

[0006] The invention of claim 2 is the speech encoding method of claim 1, wherein the computed average power is normalized to yield a normalized average power and its scale factor is also computed. The invention of claim 3 is the speech encoding method of claim 1 or 2, wherein the low-frequency component is obtained by decimating the sample points of the low-frequency component waveform of the residual signal, and the noise code corresponding to the noise codevector at minimum distance from this low-frequency component is selected.

[0007] The invention of claim 4 is a speech decoding method in which an excitation source is input to a linear prediction synthesis filter to decode a speech signal. The noise code and average power generated by the speech encoding method of claim 1 are input; the low-frequency component waveform is decoded from a codebook based on the noise code, while the high-frequency component waveform is synthesized by passing white noise through a high-pass filter and multiplying it by a per-subframe gain based on the quantized average power. The waveforms of these two bands are summed to form the excitation source of the linear prediction synthesis filter.

[0008] The invention of claim 5 is a speech decoding method in which an excitation source is input to a linear prediction synthesis filter to decode a speech signal. The normalized average power, scale factor, and noise code generated by the speech encoding method of claim 2 are input; the low-frequency component waveform is decoded from a codebook based on the noise code, while the high-frequency component waveform is synthesized by multiplying the normalized average power by the scale factor. The waveforms of these two bands are summed to form the excitation source of the linear prediction synthesis filter.

[0009] The invention of claim 6 is the speech decoding method of claim 4 or 5, wherein the selected noise code corresponding to the noise codevector at minimum distance from the low-frequency component, which was obtained in the encoding method by decimating the sample points of the low-frequency component waveform of the residual signal, is input, and the sample points removed by decimation are recomputed by sampling conversion based on the noise code. With the above configuration, the present invention band-splits the noise sequence of unvoiced speech sections and combines waveform coding for the low band with average-power quantization for the high band, thereby quantizing more efficiently at a low bit rate and improving quality.

[0010]

DESCRIPTION OF THE PREFERRED EMBODIMENTS

Embodiment 1

FIG. 1 shows the functional configuration of an encoder to which the quantization method of this invention is applied. The encoder performs the following procedure once for every frame of N samples. For frame i, the linear prediction coefficient calculator 1 computes the p-th order linear prediction coefficients (LPC) α_j (j = 0, 1, ..., p-1) of the input speech signal s(t) supplied at input terminal TI1. The linear prediction coefficients α are quantized by the linear prediction coefficient quantizer 2 and transmitted as the linear prediction coefficient code I1. Details of the quantization of the linear prediction coefficients α are described in "Speech Linear Prediction Parameter Coding Method" (Japanese Patent Application No. 3-180819; JP-A-5-27798). The linear prediction coefficient code I1 from the linear prediction coefficient quantizer 2 is decoded, the filter coefficients of the linear prediction inverse filter 3 are set from the decoded linear prediction coefficients α', and the input speech signal s(t) is passed through this inverse filter to obtain the residual signal r(t). The linear prediction inverse filter 3 is realized by a digital filter A(z) with the following transfer characteristic.

[0011]

A(z) = 1 + α_1·z^(-1) + … + α_p·z^(-p)   (1)

The correlation (partial autocorrelation function) ρ of the residual signal r obtained here is computed by the correlation calculator 4, and its maximum value is denoted ρ_max. The periodicity decision unit 5 then determines whether the input speech signal s(t) is voiced or unvoiced, for example by comparison with a threshold θ (0.5 to 1.0) as follows, and outputs the periodicity code I2:

k_1/2 + ρ_max > θ : voiced
k_1/2 + ρ_max < θ : unvoiced   (2)

Here k_1 is the first-order partial autocorrelation (PARCOR) coefficient obtained by the linear prediction coefficient calculator 1.
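A minimal sketch of the decision rule of equation (2), assuming ρ is the normalized autocorrelation of the residual searched over plausible pitch lags, and reading k_1/2 as half the first PARCOR coefficient; the lag range and the default threshold are assumptions:

```python
import numpy as np

def is_voiced(residual, k1, theta=0.7, min_lag=20, max_lag=160):
    """Equation (2): voiced if k1/2 + rho_max exceeds the threshold theta."""
    r = residual - np.mean(residual)
    ac = np.correlate(r, r, mode='full')[len(r) - 1:]
    ac = ac / (ac[0] + 1e-12)                 # normalize so ac[0] = 1
    rho_max = float(np.max(ac[min_lag:max_lag]))  # peak over pitch lag range
    return k1 / 2.0 + rho_max > theta
```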

[0012] When the periodicity decision unit 5 judges the section to be voiced, switch SW1 is set to the voiced section quantizer 6, the residual signal r(t) is quantized by the voiced section quantizer 6, and the coded output I3 is produced. Details of voiced section quantization are described in "Speech Coding Method" (Japanese Patent Application No. 11-108161). When the periodicity decision unit 5 judges the section to be unvoiced, switch SW1 is set to the unvoiced section quantizer 7, the residual signal r(t) is quantized by the unvoiced section quantizer 7, and the coded output I4 is produced.

[0013] FIG. 2 shows unvoiced section quantizer A in detail. From the linear prediction residual r supplied by the linear prediction inverse filter 3, the average power calculator 9 computes the average power p of the residual (an element of the matrix representation P of the average power sequence) for each subframe of length N_sfr = N/n_sfr, obtained by dividing the frame into n_sfr parts along the time axis. The following equation is used for this computation.

[0014]

(Equation 1: the average power p_i of the residual over subframe i; the equation image is not reproduced in this record.)

Here 0 ≤ i < n_sfr, where n_sfr is the number of subframes in one frame. One frame's worth of the average power P obtained in this way is vector-quantized by the average power quantizer 10 and output as the unvoiced-part code I4-1. The selection measure d used when choosing the vector quantization code is given by equation (4) below.

[0015]

d = ‖P − c_i‖²   (4)

Here P is the vector of the average power sequence and c_i is the codevector under evaluation. The linear prediction residual signal r is also processed by the low-pass filter unit 11 with cutoff frequency f_c Hz to compute the low-band-only linear prediction residual r_l. A cutoff frequency f_c of 500 Hz to 1000 Hz is used.
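The sketch below combines the steps of paragraphs [0013] to [0015] under stated assumptions: the subframe power is taken as the mean squared residual (the exact form of Equation 1 is not preserved in this record), the low-pass filter is an assumed Butterworth design, and the codebook search uses the Euclidean distances of equations (4) and (5):

```python
import numpy as np
from scipy.signal import butter, lfilter

def subframe_powers(r, n_sfr):
    """Assumed form of Equation 1: mean squared residual per subframe."""
    sub = r.reshape(n_sfr, -1)            # n_sfr subframes of length N/n_sfr
    return np.mean(sub ** 2, axis=1)

def lowband_residual(r, fc=800.0, fs=8000.0):
    """Low-pass the residual at cutoff fc (500-1000 Hz per the text)."""
    b, a = butter(4, fc, btype='low', fs=fs)  # assumed 4th-order Butterworth
    return lfilter(b, a, r)

def nearest_codevector(x, codebook):
    """Equations (4)/(5): index of the codevector at minimum distance."""
    d = np.sum((codebook - x) ** 2, axis=1)   # ||x - c_i||^2 for every i
    return int(np.argmin(d))
```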

[0016] Next, the low-band prediction residual is vector-quantized. The distance between the low-band prediction residual signal r_l, generated from the residual signal r by the low-pass filter unit 11, and a noise codevector c (= c_i) from the noise codebook 12 is computed in the distance calculator 13 using equation (5):

d = ‖r_l − c_i‖²   (5)

Here r_l and c_i are the matrix representations of the low-band prediction residual waveform and the codevector under evaluation, respectively.

[0017] The distance calculator 13 then selects the noise code corresponding to the codevector c at minimum distance, and the code of the selected codevector is output as I4-2. FIG. 5 shows a decoder A to which an embodiment of the decoding method corresponding to the encoding method of FIGS. 1 and 2 is applied. The codes I1 to I4 (i.e., the bit stream) input at input terminal TI2 are separated and decoded by the demultiplexer 29, after which the voiced section excitation synthesizer 21 and the unvoiced section excitation synthesizer 22 generate excitation signals from the voiced/unvoiced parameters I3 and I4.

[0018] Switch SW2 is controlled by the periodicity code I2: when I2 indicates an unvoiced section, the synthesized excitation signal from the unvoiced section excitation synthesizer 22 is used, and when I2 indicates a voiced section, the synthesized excitation signal e from the voiced section excitation synthesizer 21 is used to drive the linear prediction synthesis filter 23, and the output speech is obtained at output terminal TO2. The linear prediction coefficient code I1 is decoded by the linear prediction coefficient decoder 20 and supplied to the linear prediction synthesis filter 23.

[0019] Voiced sections are decoded using, for example, the method described in "Speech Coding Method" (Japanese Patent Application No. 11-108161). FIG. 6 shows unvoiced section excitation synthesizer A in detail. In an unvoiced section, the average power decoder 24 first decodes the average power sequence p_i from the unvoiced-part code I4-1. White noise generated by the white noise generator 25 is multiplied by the average power p in multiplier 26 and processed by the high-pass filter unit 27 with cutoff frequency f_c Hz, and the gain is adjusted so that the resulting sequence becomes p~_i, obtained by multiplying each coefficient p_i of the average power sequence by f_c/4000, yielding the unvoiced high-band excitation signal e_h. This processing is used because the band up to f_c Hz is generated from the noise signal codebook 28, so the signal power of the remaining band from f_c Hz to 4000 Hz is approximately f_c/4000 times p_i.
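A hedged sketch of the high-band path of FIG. 6, assuming p_i denotes a mean-square power and reading the gain adjustment as setting each subframe's mean power to p_i·f_c/4000; the filter design is an assumption, since the text only specifies a high-pass filter at f_c Hz:

```python
import numpy as np
from scipy.signal import butter, lfilter

def unvoiced_highband(p, n_sub, fc=800.0, fs=8000.0):
    """Unvoiced high-band excitation e_h: per-subframe mean power is
    adjusted toward p_i * fc/4000, as described for FIG. 6."""
    b, a = butter(4, fc, btype='high', fs=fs)    # assumed Butterworth design
    noise = lfilter(b, a, np.random.randn(len(p) * n_sub))
    e_h = np.empty_like(noise)
    for i, pi in enumerate(p):
        seg = noise[i * n_sub:(i + 1) * n_sub]
        target = pi * fc / 4000.0                # assumed target mean power
        g = np.sqrt(target / (np.mean(seg ** 2) + 1e-12))
        e_h[i * n_sub:(i + 1) * n_sub] = g * seg
    return e_h
```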

[0020] The unvoiced low-band excitation e_l is decoded from the noise codebook 28 based on the codevector code I4-2. The unvoiced high-band excitation signal e_h and the unvoiced low-band excitation e_l are then summed in adder 35 and input to the linear prediction synthesis filter as the unvoiced excitation e, yielding the output speech.

Embodiment 2

FIG. 3 shows unvoiced section quantizer B of an embodiment in which the average power of FIG. 1 is normalized before quantization.

[0021] The average power sequence p_i obtained from the residual signal r by the average power calculator 9 is normalized as

p~_i = p_i / s_p   (6)

where the scale factor s_p is computed from the following equation.

[0022]

(Equation 2: the scale factor s_p computed from the average power sequence; the equation image is not reproduced in this record.)

The scale factor s_p is scalar-quantized by the scale factor quantizer 16, and the normalized sequence p~_i is vector-quantized by the normalized average power quantizer 15, producing the codes I4-3 and I4-4, respectively. The distance measure d used when selecting the vector quantization code for P~ is given by equation (8) below.

[0023]

d = ‖P~ − c_i‖²   (8)

Here P~ is the matrix representation of the normalized average power sequence and c_i is the codevector under evaluation. FIG. 7 shows the decoding of the average power sequence p_i in the decoder B corresponding to this embodiment. I4-3 and I4-4 are decoded by the scale factor decoder 31 and the normalized average power decoder 30, respectively, and multiplier 32 multiplies p~_i by s_p to recompute the average power sequence p_i.
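A small sketch of Embodiment 2's normalization, showing encoder and decoder sides together; since the image of Equation 2 is not preserved, the scale factor is assumed here to be the Euclidean norm of the power sequence:

```python
import numpy as np

def normalize_powers(p):
    """Encoder side, equation (6): p~_i = p_i / s_p. The form of s_p is an
    assumption (Equation 2 image lost); the Euclidean norm is used here."""
    s_p = float(np.linalg.norm(p)) + 1e-12
    return p / s_p, s_p

def denormalize_powers(p_norm, s_p):
    """Decoder side (FIG. 7): recover p_i = p~_i * s_p."""
    return p_norm * s_p
```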

[0024]

Embodiment 3

FIG. 4 shows unvoiced section quantizer C for the case in which the low-band linear prediction residual r_l output by the low-pass filter unit 11 of FIG. 3 is time-contracted to reduce the computational cost of quantization, and the computed average power is further used to improve quantization efficiency. The low-band linear prediction residual r_l, obtained as in the embodiments above, is decimated by the time contraction unit 17 to contract it along the time axis. For the resulting signal r~_l, the noise code corresponding to the noise codevector at minimum distance is selected using the normalized noise codebook 18 and the distance calculator 19, the code of that codevector is generated, and vector quantization is thereby performed.

[0025] If the speech signal s is sampled at a sampling frequency of 8000 Hz, this low-band linear prediction residual r_l (f_c: 500 Hz to 1000 Hz) carries only 1/8 to 1/4 of the information of the full band. By the sampling theorem, even if 3 to 7 samples are decimated and represented by a single sample, the original low-band linear prediction residual r_l can still be recovered, so the low-band prediction residual can be quantized with a small amount of computation. Next, to raise the quantization efficiency, a scale w_i is computed for each subframe from the p_i obtained by the average power calculator 9, and the white noise of each subframe is weighted by multiplying it by w_i in a multiplier.

[0026] The scale w_i is obtained from the following equation (9).

[0027]

(Equation 3, i.e., equation (9): the per-subframe scale w_i derived from p_i; the equation image is not reproduced in this record.)

The distance between the low-band prediction residual signal r_l and a codebook vector c (= c_i) is computed using equation (10):

d = ‖r~_l − w·c_i‖²   (10)

Here r~_l, c_i, and w are the matrix representations of the time-contracted low-band prediction residual, the codevector under evaluation, and the scale sequence w_i, respectively. The code I4-5 of the codevector selected from the normalized noise codebook 18 is then output.
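The sketch below illustrates Embodiment 3's time contraction and the weighted codebook search of equation (10) under assumptions: the decimation factor is a placeholder consistent with the 3-to-7-sample range stated above, and w is the per-sample expansion of the subframe scale (the exact form of Equation 3 is not preserved):

```python
import numpy as np

def time_contract(r_l, factor=4):
    """Keep one of every `factor` samples (3 to 7 samples are decimated
    per the text; the factor here is an assumed example)."""
    return r_l[::factor]

def weighted_vq(r_contracted, codebook, w):
    """Equation (10): d = ||r~_l - w * c_i||^2 with elementwise weights w."""
    d = np.sum((r_contracted - w * codebook) ** 2, axis=1)
    return int(np.argmin(d))
```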

[0028] FIG. 8 shows the decoder C corresponding to this embodiment. The low-band residual signal decoded from the codevector code I4-5 undergoes sampling conversion in the time expansion unit 34, which computes the intermediate points that were decimated by the encoder. The sampling conversion is performed according to equation (11) below.

[0029]

(Equation 4, i.e., equation (11): the interpolation formula that reconstructs intermediate points from the decimated samples; the equation image is not reproduced in this record.)

Here x(nT_s) is the original signal, x(t) is the intermediate point to be obtained, and T_s is the sampling period. The gain of the signal obtained above is then adjusted to match the average power sequence p_i, producing the unvoiced low-band excitation signal e_l.
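The image of equation (11) is not preserved, but the surrounding text (the sampling theorem, x(nT_s), intermediate points x(t), sampling period T_s) indicates the standard Shannon/sinc interpolation; a direct sketch, assuming an upsampling factor equal to the encoder's decimation factor:

```python
import numpy as np

def sinc_expand(x, factor=4):
    """Reconstruct intermediate points from decimated samples by sinc
    interpolation: x(t) = sum_n x(nTs) * sinc((t - nTs) / Ts)."""
    n = np.arange(len(x))                      # input sample indices (units of Ts)
    t = np.arange(len(x) * factor) / factor    # output time grid (units of Ts)
    # np.sinc is the normalized sinc, sin(pi*u) / (pi*u).
    return np.sinc(t[:, None] - n[None, :]) @ x
```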

[0030]

EFFECTS OF THE INVENTION

As described above, the unvoiced section speech coding method of this invention achieves higher quantization efficiency than the method used in conventional vocoders, which simply quantizes only the power information of unvoiced sections and drives them with white noise. The voiced/unvoiced decision, which has been a problem, is also made less critical: by separating the speech into a low band and a high band, the periodicity of the speech waveform is retained to some extent even when the high band is driven with white noise, as long as the low-band waveform is coded directly. Consequently, even if the voiced/unvoiced decision is wrong, the quality is not severely degraded.

[0031] To examine the effect of the speech encoding and decoding of the present invention, an analysis-synthesis speech experiment was performed under the following conditions. As input speech, signals in the 0-4 kHz band were sampled at 8.0 kHz and then passed through an IRS characteristic filter corresponding to telephone handset characteristics. The encoder had the configuration of Embodiment 2. The speech signal was multiplied every 25 ms (200 samples) by a Hamming window with an analysis window length of 30 ms, and linear prediction analysis by the autocorrelation method was performed with an analysis order of 12 to obtain 12 prediction coefficients. The prediction coefficients were vector-quantized using the Euclidean distance of the LSP parameters. Seven bits were assigned to I1, I2, and I3 of the above embodiments, and in the arrangement of Embodiment 3, 7 bits were assigned to each of I4-3, I4-4, and I4-5, so that in either arrangement the number of bits for an unvoiced section was 21. Furthermore, n_sfr was set to 10.
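To make the analysis conditions concrete, here is a hedged sketch of 12th-order LPC analysis by the autocorrelation method with a Hamming analysis window, matching the experimental settings stated above (details beyond those stated, such as the recursion itself, follow the standard Levinson-Durbin procedure):

```python
import numpy as np

def lpc_autocorrelation(frame, order=12):
    """Autocorrelation-method LPC via the Levinson-Durbin recursion.
    Returns a = [1, a1, ..., ap] such that A(z) = 1 + a1*z^-1 + ... + ap*z^-p."""
    w = frame * np.hamming(len(frame))          # 30 ms analysis window
    R = np.correlate(w, w, mode='full')[len(w) - 1:len(w) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = R[0]
    for m in range(1, order + 1):
        k = -(R[m] + np.dot(a[1:m], R[m - 1:0:-1])) / err  # reflection coeff
        a[1:m] = a[1:m] + k * a[m - 1:0:-1]                # order update
        a[m] = k
        err *= 1.0 - k * k                                 # prediction error
    return a
```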

[0032] Under the above conditions, an improvement in perceptual quality was observed compared with simply encoding the unvoiced sections using 21 bits of white-noise information.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram showing the configuration of the encoder.

FIG. 2 is a block diagram showing the configuration of unvoiced section quantizer A.

FIG. 3 is a block diagram showing the configuration of unvoiced section quantizer B.

FIG. 4 is a block diagram showing the configuration of unvoiced section quantizer C.

FIG. 5 is a block diagram showing the configuration of the decoder.

FIG. 6 is a block diagram showing the configuration of unvoiced section excitation synthesizer A.

FIG. 7 is a block diagram showing the configuration of unvoiced section excitation synthesizer B.

FIG. 8 is a block diagram showing the configuration of unvoiced section excitation synthesizer C.

EXPLANATION OF SYMBOLS

1 Linear prediction coefficient calculator
2 Linear prediction coefficient quantizer
3 Linear prediction inverse filter
4 Correlation calculator
5 Periodicity decision unit
6 Voiced section quantizer
7 Unvoiced section quantizer
8 Multiplexer
9 Average power calculator
10 Average power quantizer
11 Low-pass filter unit
12 Noise codebook
13, 19 Distance calculators
14 Average power sequence normalizer
15 Normalized average power quantizer
16 Scale factor quantizer
17 Time contraction unit
18 Normalized noise codebook
20 Linear prediction coefficient decoder
21 Voiced section excitation synthesizer
22 Unvoiced section excitation synthesizer
23 Linear prediction synthesis filter
24 Average power decoder
25 White noise generator
27 High-pass filter unit
28 Noise codebook
29 Demultiplexer
31 Scale factor decoder
34 Time expansion unit

Claims (6)

[Claims]

1. A speech encoding method in which a linear prediction analysis is performed on a speech signal for each frame to obtain linear prediction coefficients, and a code is determined by quantizing feature quantities of the residual signal used to drive a linear prediction synthesis filter whose filter coefficients are based on the linear prediction coefficients, wherein: as a feature quantity, the periodicity of the residual signal is evaluated; when the periodicity is lower than a predetermined threshold, the residual signal is band-split into a low-frequency component and a high-frequency component; the noise code corresponding to the noise codevector at minimum distance from the low-frequency component is selected; and, for the high-frequency component, the average power is computed for each of the subframes constituting the frame.
2. The speech encoding method according to claim 1, wherein the computed average power is normalized to yield a normalized average power, and its scale factor is computed as well.
3. The speech encoding method according to claim 1 or 2, wherein the low-frequency component is obtained by decimating the sample points of the low-frequency component waveform of the residual signal, and the noise code corresponding to the noise codevector at minimum distance from this low-frequency component is selected.
4. A speech decoding method in which a linear prediction synthesis filter is driven by an excitation source to decode a speech signal, wherein: the noise code and average power generated by the speech encoding method according to claim 1 are input; the low-frequency component waveform is decoded from a codebook based on the noise code; the high-frequency component waveform is synthesized by passing white noise through a high-pass filter and multiplying it, for each subframe, by a gain based on the quantized average power; and the waveforms of these two bands are summed to form the excitation source of the linear prediction synthesis filter.
5. A speech decoding method in which a linear prediction synthesis filter is driven by an excitation source to decode a speech signal, wherein: the normalized average power, scale factor, and noise code generated by the speech encoding method according to claim 2 are input; the low-frequency component waveform is decoded from a codebook based on the noise code; the high-frequency component waveform is synthesized by multiplying the normalized average power by the scale factor; and the waveforms of these two bands are summed to form the excitation source of the linear prediction synthesis filter.
6. The speech decoding method according to claim 4 or 5, wherein the selected noise code corresponding to the noise codevector at minimum distance from the low-frequency component, which was obtained in the speech encoding method by decimating the sample points of the low-frequency component waveform of the residual signal, is input, and the sample points removed by decimation are recomputed by sampling conversion based on the noise code.
JP34985799A 1999-12-09 1999-12-09 Audio encoding method and audio decoding method Expired - Fee Related JP3510168B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP34985799A JP3510168B2 (en) 1999-12-09 1999-12-09 Audio encoding method and audio decoding method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP34985799A JP3510168B2 (en) 1999-12-09 1999-12-09 Audio encoding method and audio decoding method

Publications (2)

Publication Number Publication Date
JP2001166800A (en) 2001-06-22
JP3510168B2 JP3510168B2 (en) 2004-03-22

Family

ID=18406599

Family Applications (1)

Application Number Title Priority Date Filing Date
JP34985799A Expired - Fee Related JP3510168B2 (en) 1999-12-09 1999-12-09 Audio encoding method and audio decoding method

Country Status (1)

Country Link
JP (1) JP3510168B2 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7283967B2 (en) 2001-11-02 2007-10-16 Matsushita Electric Industrial Co., Ltd. Encoding device decoding device
US7328160B2 (en) 2001-11-02 2008-02-05 Matsushita Electric Industrial Co., Ltd. Encoding device and decoding device
US7392176B2 (en) 2001-11-02 2008-06-24 Matsushita Electric Industrial Co., Ltd. Encoding device, decoding device and audio data distribution system
WO2006009074A1 (en) * 2004-07-20 2006-01-26 Matsushita Electric Industrial Co., Ltd. Audio decoding device and compensation frame generation method
US8725501B2 (en) 2004-07-20 2014-05-13 Panasonic Corporation Audio decoding device and compensation frame generation method
WO2015188627A1 (en) * 2014-06-12 2015-12-17 华为技术有限公司 Method, device and encoder of processing temporal envelope of audio signal
US9799343B2 (en) 2014-06-12 2017-10-24 Huawei Technologies Co., Ltd. Method and apparatus for processing temporal envelope of audio signal, and encoder
US10170128B2 (en) 2014-06-12 2019-01-01 Huawei Technologies Co., Ltd. Method and apparatus for processing temporal envelope of audio signal, and encoder
US10580423B2 (en) 2014-06-12 2020-03-03 Huawei Technologies Co., Ltd. Method and apparatus for processing temporal envelope of audio signal, and encoder
CN113345406A (en) * 2021-05-19 2021-09-03 苏州奇梦者网络科技有限公司 Method, apparatus, device and medium for speech synthesis of neural network vocoder
CN113345406B (en) * 2021-05-19 2024-01-09 苏州奇梦者网络科技有限公司 Method, device, equipment and medium for synthesizing voice of neural network vocoder

Also Published As

Publication number Publication date
JP3510168B2 (en) 2004-03-22

Similar Documents

Publication Publication Date Title
JP4550289B2 (en) CELP code conversion
JP5373217B2 (en) Variable rate speech coding
US7280959B2 (en) Indexing pulse positions and signs in algebraic codebooks for coding of wideband signals
JP4270866B2 (en) High performance low bit rate coding method and apparatus for non-speech speech
JP3602593B2 (en) Audio encoder and audio decoder, and audio encoding method and audio decoding method
JPH08328591A (en) Method for adaptation of noise masking level to synthetic analytical voice coder using short-term perception weightingfilter
JP2010181892A (en) Gain smoothing for speech coding
JPH09127996A (en) Voice decoding method and device therefor
JP2004537739A (en) Method and system for estimating pseudo high band signal in speech codec
JP2007034326A (en) Speech coder method and system
JPH1097296A (en) Method and device for voice coding, and method and device for voice decoding
EP1597721B1 (en) 600 bps mixed excitation linear prediction transcoding
JP2008503786A (en) Audio signal encoding and decoding
JP3558031B2 (en) Speech decoding device
JP3531780B2 (en) Voice encoding method and decoding method
WO2004090864A2 (en) Method and apparatus for the encoding and decoding of speech
US20040181398A1 (en) Apparatus for coding wide-band low bit rate speech signal
JP3510168B2 (en) Audio encoding method and audio decoding method
JP3583945B2 (en) Audio coding method
CN101496097A (en) Systems and methods for including an identifier with a packet associated with a speech signal
JP3552201B2 (en) Voice encoding method and apparatus
JP3232701B2 (en) Audio coding method
Yu et al. Harmonic+ noise coding using improved V/UV mixing and efficient spectral quantization
JP3296411B2 (en) Voice encoding method and decoding method
JP2853170B2 (en) Audio encoding / decoding system

Legal Events

Date Code Title Description
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20031209

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20031224

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20090109

Year of fee payment: 5

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20100109

Year of fee payment: 6

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20110109

Year of fee payment: 7

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20120109

Year of fee payment: 8

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130109

Year of fee payment: 9

LAPS Cancellation because of no payment of annual fees