GB2364870A - Vector quantization system for speech encoding/decoding


Info

Publication number
GB2364870A
Authority
GB
United Kingdom
Prior art keywords
lsf
estimate
prediction error
current
mean value
Prior art date
Legal status
Withdrawn
Application number
GB0017145A
Other versions
GB0017145D0 (en)
Inventor
Jonathan Alastair Gibbs
Mark A Jasiuk
Alun Christopher Evans
Aaron Smith
Current Assignee
Motorola Solutions Inc
Original Assignee
Motorola Inc
Priority date
Filing date
Publication date
Application filed by Motorola Inc filed Critical Motorola Inc
Priority to GB0017145A priority Critical patent/GB2364870A/en
Publication of GB0017145D0 publication Critical patent/GB0017145D0/en
Priority to EP01116530A priority patent/EP1172803A3/en
Publication of GB2364870A publication Critical patent/GB2364870A/en
Withdrawn legal-status Critical Current


Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 - Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04 - Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/06 - Determination or coding of the spectral characteristics, e.g. of the short-term prediction coefficients
    • G10L19/07 - Line spectrum pair [LSP] vocoders

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

A method of providing robust quantization of speech spectral parameters tolerant to spectral balance and speaker variations, includes the steps of, for each of a plurality of line spectral frequencies (LSFs) of a speech spectrum, quantizing (14) the displacement (12) of the LSF from an estimate (42) of its long-term mean, and, at an encoder (10) and a decoder (20), reconstructing (22,34) an estimate of the LSF from the quantized displacement and the long-term LSF mean estimate, and filtering (26,30,32) the reconstructed LSF estimate, thereby providing a subsequent long-term LSF mean estimate. In an alternative (fig 2), a prediction error, derived from the LSF from which a current short-term LSF mean value and a current moving average predicted LSF estimate have been subtracted, is quantized and, at an encoder and a decoder, the prediction error is dequantized and a next-current short-term LSF mean value and a next-current moving average predicted LSF estimate are determined.

Description

Vector Quantization System and Method of Operation
FIELD OF THE INVENTION
The present invention relates to speech encoding and decoding systems in general, and more particularly to systems and methods for vector quantization of line spectral frequencies.
BACKGROUND OF THE INVENTION
One of the challenges of designing vector quantizers for linear predictive coding (LPC) speech filters is making them robust to variations in spectral balance, as well as variations between speakers. Spectral balance variations may have several sources, but the dominant sources are the spectral responses of microphones and of anti-aliasing filters which can vary quite considerably. In order to account for these variations it is common to train a quantizer for an LPC filter for use with a wide variety of speech input, recorded from many different sources.
Conventional vector quantization (VQ) training methods use speech from as many different sources as possible in an attempt to provide robust performance for many different input spectra. However, this approach is disadvantageous in that training is relatively slow and complex as many speech samples are required. This approach furthermore generally results in a quantizer which is not optimal for any one filtering condition.
One method for handling spectral balance variations is the Microphone/Speaker Adaptation (MSA) method taught by Aarskog et al. in which the average spectrum of speech input presented to a speech coder is compensated for by an MSA filter prior to further compression by an inverse filter. The speech is subsequently filtered by a complementary filter after decoding. This method is disadvantageous in that it requires two stages of inverse filtering, thus increasing the complexity of the quantizer due to the required autocorrelation function calculations. Furthermore, two LPC filter quantizers are needed, one for the MSA filter and one for the conventional LPC filter. An additional slow-speed data path is also needed to convey the quantized MSA filter parameters from encoder to decoder.
The following publications are believed to be descriptive of the current state of the art of speech encoding and decoding systems in general, and vector quantization of line spectral frequencies technologies in particular, and terms related thereto:
A. Aarskog, A. Nilsen, O. Berg, and H. C. Gruen, "A Long-Term Predictive ADPCM Coder with Short-Term Prediction and Vector Quantization," 1991 International Conference on Acoustics, Speech, and Signal Processing, ICASSP-91, vol. 1, pp. 37-40; A. Aarskog and H. C. Gruen, "Predictive Coding of Speech Using Microphone/Speaker Adaptation and Vector Quantization," IEEE Transactions on Speech and Audio Processing, April 1994, vol. 22, pp. 266-273; W.B. Kleijn and K.K. Paliwal, "Speech Coding and Synthesis," Elsevier Press, 1995.
The disclosures of all patents, patent applications, and other publications mentioned in this specification, and of the patents, patent applications, and other publications cited therein, are hereby incorporated by reference.
SUMMARY OF THE INVENTION
The present invention seeks to provide improved systems and methods for vector quantization that account for spectral balance variations while avoiding the limitations of the prior art.
A quantization system and method are disclosed that achieve similar objective performance, in terms of mean spectral distortion and outliers, for speech within and outside the training database, and similar quantizer performance for different types of speech largely irrespective of the spectral balance. The present invention exploits properties of line-spectrum pairs to yield a robust quantizer with superior performance under various conditions. The present invention further discloses a more error-robust system and method for deriving adaptable mean values based upon previous quantizer decisions in a uniform gain moving average fashion.
The present invention is an extension to mean-removed vector quantization and is equally applicable to both auto-regressive and moving average predictive vector quantization. A system and method are disclosed for slow averaging of the positions of the inverse quantized line spectral frequencies (LSFs) using a series of simple filters (one per LSF) with one or more long time constants.
There is thus provided in accordance with a preferred embodiment of the present invention a method of providing robust quantization of speech spectral parameters tolerant to spectral balance and speaker variations, the method including the steps of, for each of a plurality of line spectral frequencies (LSFs) of a speech spectrum, quantizing the displacement of the LSF from an estimate of its long-term mean, reconstructing an estimate of the LSF from the quantized displacement and the long-term LSF mean estimate, and filtering the reconstructed LSF estimate, thereby providing a subsequent long-term LSF mean estimate.
Further in accordance with a preferred embodiment of the present invention the filtering step includes filtering the reconstructed LSF estimate using a first-order recursive filter.
Still further in accordance with a preferred embodiment of the present invention the first-order recursive filter is of unity gain and employs a time constant of about 1 second for the LSF.
There is also provided in accordance with a preferred embodiment of the present invention a method of quantizing speech spectral parameters that is tolerant to spectral balance and speaker variations, the method including the steps of, for each of a plurality of line spectral frequencies (LSFs) of a speech spectrum, at an encoder a) quantizing the difference between the LSF and a current LSF mean value estimate, and at the encoder and a decoder b) dequantizing the difference, c) adding the dequantized difference to a current LSF mean value estimate, thereby providing an approximation of the LSF, and d) filtering the quantized LSF together with the current LSF mean value estimate, thereby providing a new current LSF mean value estimate.
There is additionally provided in accordance with a preferred embodiment of the present invention a method of quantizing speech spectral parameters that is tolerant to spectral balance and speaker variations, the method including the steps of, for each of a plurality of line spectral frequencies (LSFs) of a speech spectrum, at an encoder a) quantizing a prediction error derived from the LSF from which a current short-term LSF mean value and a current moving average predicted LSF estimate have been subtracted, and at the encoder and a decoder b) dequantizing the prediction error, c) determining a next-current short-term LSF mean value from the dequantized prediction error and at least one previously dequantized prediction error, and d) determining a next-current moving average predicted LSF estimate from the dequantized prediction error and at least one previously dequantized prediction error.
Further in accordance with a preferred embodiment of the present invention the next-current short-term LSF mean value is the sum of a training data derived mean and a moving average of a plurality of previously dequantized prediction error values.
Still further in accordance with a preferred embodiment of the present invention equal gains are assigned to each dequantized prediction error value.
There is also provided in accordance with a preferred embodiment of the present invention apparatus for providing robust quantization of speech spectral parameters tolerant to spectral balance and speaker variations, the apparatus including means for quantizing the displacement of a line spectral frequency (LSF) from an estimate of its long-term mean, means for reconstructing an estimate of the LSF from the quantized displacement and the long-term LSF mean estimate, and means for filtering the reconstructed LSF estimate, thereby providing a subsequent long-term LSF mean estimate.
Further in accordance with a preferred embodiment of the present invention the filtering means includes a first-order recursive filter.
Still further in accordance with a preferred embodiment of the present invention the first-order recursive filter is of unity gain and employs a time constant of about 1 second for the LSF.
There is additionally provided in accordance with a preferred embodiment of the present invention apparatus for quantizing speech spectral parameters that is tolerant to spectral balance and speaker variations, the apparatus including an encoder including means for quantizing the difference between a line spectral frequency (LSF) and a current LSF mean value estimate, means for dequantizing the difference, means for adding the dequantized difference to a current LSF mean value estimate, thereby providing an approximation of the LSF, and means for filtering the quantized LSF together with the current LSF mean value estimate, thereby providing a new current LSF mean value estimate, and a decoder including means for dequantizing the difference, means for adding the dequantized difference to a current LSF mean value estimate, thereby providing an approximation of the LSF, and means for filtering the quantized LSF together with the current LSF mean value estimate, thereby providing a new current LSF mean value estimate.
There is also provided in accordance with a preferred embodiment of the present invention apparatus for quantizing speech spectral parameters that is tolerant to spectral balance and speaker variations, the apparatus including an encoder including means for quantizing a prediction error derived from the LSF from which a current short-term LSF mean value and a current moving average predicted LSF estimate have been subtracted, means for dequantizing the prediction error, means for determining a next-current short-term LSF mean value from the dequantized prediction error and at least one previously dequantized prediction error, and means for determining a next-current moving average predicted LSF estimate from the dequantized prediction error and at least one previously dequantized prediction error and the current short-term LSF mean value, and a decoder including means for dequantizing the prediction error, means for determining a next-current short-term LSF mean value from the dequantized prediction error and at least one previously dequantized prediction error, and means for determining a next-current moving average predicted LSF estimate from the dequantized prediction error and at least one previously dequantized prediction error.
Further in accordance with a preferred embodiment of the present invention the next-current short-term LSF mean value is the sum of a training data derived mean and a moving average of a plurality of previously dequantized prediction error values.
Still further in accordance with a preferred embodiment of the present invention equal gains are assigned to each dequantized prediction error value.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention will be understood and appreciated more fully from the following detailed description taken in conjunction with the appended drawings in which:
Fig. 1 is a simplified illustration of a system for backwards-adaptive vector quantization of line spectral frequencies (LSF), constructed and operative in accordance with a preferred embodiment of the present invention; Fig. 2 is a simplified illustration of a system for backwards-adaptive vector quantization of line spectral frequencies (LSF), constructed and operative in accordance with another preferred embodiment of the present invention; and Fig. 3 is a simplified graph illustration showing mean spectral distortion performance (dB) of the systems of Figs. 1 and 2 with fixed means, moving average mean adaptation and backwards adapted means.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
Reference is now made to Fig. 1 which is a simplified illustration of a system for backwards-adaptive vector quantization of line spectral frequencies (LSF), constructed and operative in accordance with a preferred embodiment of the present invention. In the system of Fig. 1 LSFs are quantized with their previous long-term mean values removed, using any conventional VQ technique, such as memoryless, AR predictive, MA predictive, or other suitable technique. The same long-term mean value is used during encoding and decoding. As each quantization process is performed, the long-term average value of the LSF changes at both the encoder and decoder. In this way, the quantizer adapts to long-term variations in the LSFs.
In the system of Fig. 1 line spectral frequencies of a speech spectrum are provided to an encoder, generally referenced 10. A subtractor 12 subtracts the current estimate of the mean value associated with the LSF from the LSF input. A quantizer 14 then quantizes the difference of the LSF from its mean value by selecting an appropriate codebook index in accordance with any known and suitable quantization means. The quantization index is then provided to inverse quantizers 16 and 18 at encoder 10 and a decoder, generally referenced 20, respectively. Inverse quantizer 18 dequantizes the quantization index using any known and suitable means to determine an associated LSF. An adder 22 adds the current estimate of the mean value associated with the LSF back into the LSF determined at inverse quantizer 18, thus providing an approximation of the LSF input to encoder 10.
The quantized LSF from adder 22, in addition to being used during subsequent speech encoding and decoding, is provided to a simple, first-order filter where the LSF is multiplied by a filter value λ at a multiplier 26. The previous estimate of the LSF mean value, held at a delay 28, is then multiplied by a filter value 1-λ at a multiplier 30. The result of multiplier 30 is then added to the result from multiplier 26 at an adder 32. The result from adder 32 represents the current estimate of the LSF mean value and is stored in delay 28.
The time constant used in the system of Fig. 1 may in principle take any value; however, the filter value λ is preferably determined such that the time constant of the filter is relatively long compared to the maximum duration of steady-state vowels. This ensures that the filter removes the slowly varying spectral shape rather than the utterance-to-utterance variations, i.e., the fast spectral variations of normal speech, which typically do not exceed a few hundred milliseconds. Too long a time constant, however, lengthens the time needed to adapt to new speakers. Experimentation has shown that a time constant of approximately 1 second, corresponding to a filter value of λ = 0.037, provides satisfactory performance. Where errors may occur in quantizer index transmission, the time constant is preferably selected to minimize error propagation stemming from the use of an infinite memory recursive filter.
Inverse quantizer 16 likewise dequantizes the quantization index to determine an associated LSF which is then provided to an adder 34 and a simple, first-order filter which includes a multiplier 38, an adder 40, a delay 42, and a multiplier 44, all of which operate in the manner described hereinabove for adder 22, multiplier 26, delay 28, multiplier 30, and adder 32, with the notable exception that the estimate of the LSF mean value in delay 42 is provided to subtractor 12 in addition to being provided to adder 34.
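By way of illustration only, the following Python sketch mirrors the Fig. 1 arrangement for a single LSF, assuming a scalar stand-in codebook for quantizer 14 and inverse quantizers 16 and 18; the class and helper names are invented for the example, and only the mean removal and the unity-gain first-order recursive update with λ = 0.037 are taken from the description above. One instance would run at the encoder and an identical instance at the decoder, each driven solely by the transmitted index so that their mean estimates remain in step.

    import numpy as np

    LAMBDA = 0.037  # filter value corresponding to a time constant of roughly 1 second


    class BackwardsAdaptiveMeanVQ:
        """Mean-removed quantization of one LSF with a backwards-adapted mean,
        sketching elements 12, 14, 16/18, 22/34, 26-32 and 38-44 of Fig. 1."""

        def __init__(self, codebook, initial_mean):
            self.codebook = np.asarray(codebook, dtype=float)  # stand-in scalar codebook
            self.mean = float(initial_mean)  # current long-term LSF mean estimate

        def encode(self, lsf):
            # Subtractor 12 then quantizer 14: code the displacement from the mean.
            index = int(np.argmin(np.abs(self.codebook - (lsf - self.mean))))
            self.decode(index)  # keep the encoder-side state identical to the decoder's
            return index

        def decode(self, index):
            # Inverse quantizer plus adder: reconstruct the LSF estimate.
            lsf_hat = self.codebook[index] + self.mean
            # Unity-gain first-order recursive filter:
            # new_mean = lambda * lsf_hat + (1 - lambda) * old_mean.
            self.mean = LAMBDA * lsf_hat + (1.0 - LAMBDA) * self.mean
            return lsf_hat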
Reference is now made to Fig. 2 which is a simplified illustration of a system for backwards-adaptive vector quantization of line spectral frequencies (LSF), constructed and operative in accordance with another preferred embodiment of the present invention. In the system of Fig. 2 the LSF means are derived from a relatively long moving average predictor, in order to overcome the problems associated with infinite error propagation, and are incorporated within a conventional third-order (short) moving average predictive vector quantizer. The system of Fig. 2 may be implemented using a rectangular window moving average predictor for the calculation of the LSF means, such as one that is about 750 ms long (a relatively long predictor). This may be easily achieved by employing a circular buffer containing the quantizer indices from previous decisions.
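As an illustrative sketch of how such a rectangular-window mean could be maintained with a circular buffer, the Python fragment below keeps the last n dequantized values and a running sum so that each frame costs a single add and subtract. The class name is invented, and the choice n = 37 (roughly 750 ms if the frame length is 20 ms, a figure assumed here rather than taken from the description) is for illustration only.

    from collections import deque


    class RunningLsfMean:
        """Training-data mean plus an equal-gain (1/n) rectangular-window moving
        average of the last n dequantized values, held in a circular buffer."""

        def __init__(self, n, trained_mean):
            self.n = n
            self.trained_mean = trained_mean          # initial estimate of the LSF mean
            self.window = deque([0.0] * n, maxlen=n)  # circular buffer of past values
            self.total = 0.0                          # running sum of the window

        def update(self, dequantized_value):
            # The oldest entry leaves the window as the newest enters with gain 1/n.
            self.total += dequantized_value - self.window[0]
            self.window.append(dequantized_value)
            return self.trained_mean + self.total / self.n  # current short-term mean

Storing the quantizer indices themselves, as the description suggests, would work identically provided the inverse quantizer is applied when the running sum is updated.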
In the system of Fig. 2 line spectral frequencies of a speech spectrum are provided to an encoder, generally referenced 50. A subtractor 52 receives the current short-term mean value associated with the LSF from an adder 54 and subtracts it from the LSF. A subtractor 56 then receives the current moving average predicted estimate of the LSF from an adder 58 and subtracts it from the output of subtractor 52. The output of subtractor 56 is then divided by a tap t0 of the short MA predictor at a divider 92 to provide a prediction error which is then quantized at a quantizer 60 using any known and suitable quantization means. The quantization index is then provided to inverse quantizers 62 and 64 at encoder 50 and a decoder, generally referenced 66, respectively. Inverse quantizer 62 dequantizes the quantization index using any known and suitable means to determine an associated LSF. The taps of the short moving average predictor (t0, t1, and t2) may be determined by any reasonable technique, but are ideally jointly optimized with the relatively long moving average LSF mean predictor in operation.
The current output of inverse quantizer 62 is multiplied by t0 at a multiplier 68 and provided to an adder 70. The previous output of inverse quantizer 62, stored at a delay 72, is multiplied by a tap t1 at a multiplier 74 and provided to adder 70. The twice-previous output of inverse quantizer 62, stored at a delay 76, is multiplied by a tap t2 at a multiplier 78 and provided to adder 70. Adder 70 adds all three inputs and provides the result to an adder 80. The output of adder 70 represents the current quantization error component of the output LSF.
The previous output of inverse quantizer 62, stored at delay 72, is multiplied by tap t1 at a multiplier 82 and provided to adder 58. The twice-previous output of inverse quantizer 62, stored at delay 76, is multiplied by tap t2 at a multiplier 84 and provided to adder 58. Adder 58 adds the two inputs and provides the result to subtractor 56. The output of adder 58 represents the current predicted estimate of the LSF.
The current output of inverse quantizer 62 is also provided to an ordered series of n delays 86, with each delay storing an nth previous output of inverse quantizer 62. Each previous value is then multiplied by 1/n by a series of multipliers 88, thereby providing equal gain for each value, and provided to adder 54, where they are added together with a predetermined estimate of the mean of the LSF stored at a delay 90. This predetermined estimate may be determined from training data and represents an initial estimate of the LSF means. The output of adder 54 is then provided to adder 80 as well as subtractor 52. The output of adder 54 represents the current short-term mean value associated with the LSF. The current quantization error component of the LSF is preferably added to the current short-term LSF mean value at adder 80 to provide an approximation of the LSF input.
At decoder 66, elements 68' - 90' preferably operate in the manner described hereinabove for correspondingly-numbered elements 68 - 90 with the notable exceptions that adder 54' provides input only to adder 80', delay 72' provides input only to multiplier 74', and delay 76' provides input only to multiplier 78'.
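For illustration, a minimal end-to-end Python sketch of the Fig. 2 arrangement for one LSF follows; the scalar codebook, the tap values t0, t1 and t2, the 37-frame equal-gain window and all names are assumptions of the example rather than values taken from the patent. As in Fig. 1, the encoder and decoder each hold an instance that is updated from the transmitted index alone, so their short-term means and predictor memories stay synchronized.

    import numpy as np
    from collections import deque

    T0, T1, T2 = 1.0, 0.65, 0.35  # short MA predictor taps (illustrative values only)
    N_MEAN = 37                   # long rectangular window, ~750 ms at an assumed 20 ms frame


    class MAPredictiveAdaptiveMeanVQ:
        """Third-order MA predictive quantization of one LSF whose mean is the
        training-data mean plus an equal-gain average of past dequantized errors."""

        def __init__(self, codebook, trained_mean):
            self.codebook = np.asarray(codebook, dtype=float)
            self.trained_mean = trained_mean                    # delay 90
            self.errs = deque([0.0, 0.0], maxlen=2)             # delays 72 and 76
            self.window = deque([0.0] * N_MEAN, maxlen=N_MEAN)  # delays 86

        def _short_term_mean(self):
            # Adder 54: training mean plus 1/n-weighted sum (multipliers 88).
            return self.trained_mean + sum(self.window) / N_MEAN

        def _prediction(self):
            # Adder 58: t1*e[n-1] + t2*e[n-2] (multipliers 82 and 84).
            return T1 * self.errs[-1] + T2 * self.errs[-2]

        def encode(self, lsf):
            # Subtractors 52 and 56, then divider 92 and quantizer 60.
            target = (lsf - self._short_term_mean() - self._prediction()) / T0
            index = int(np.argmin(np.abs(self.codebook - target)))
            self.decode(index)  # keep the encoder state in step with the decoder
            return index

        def decode(self, index):
            e = self.codebook[index]  # inverse quantizer 62/64
            # Adders 70 and 80: quantized residual plus the current short-term mean.
            lsf_hat = (self._short_term_mean()
                       + T0 * e + T1 * self.errs[-1] + T2 * self.errs[-2])
            self.errs.append(e)      # shift the short predictor memory
            self.window.append(e)    # shift the long equal-gain window
            return lsf_hat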
Experimentation with the system of Fig. 2 has shown that there is little degradation in performance when the moving-average derived adaptive mean method is applied to a third-order moving average predictive VQ as compared with a conventional third-order MA predictive VQ without mean adaptation.
Experimentation has shown that the application of the systems of Figs. 1 and 2 leads to significant gains in performance since the long-term averaging of the LSF means removes some of the speaker and microphone/anti-aliasing spectral variation which is present in the input. Such performance gains are shown in Fig. 3 which is a simplified graph illustration showing Mean Spectral Distortion Performance (dB) of a conventional third-order MA Predictive LSF VQ with Fixed Means, represented by a plot 100, the Moving Average Mean Adaptation of Fig. 2, represented by a plot 102, and the Backwards Adapted Means of Fig. 1, represented by a plot 104. Fig. 3 shows the spectral distortion figures for three identical third-order moving average predictive quantizers (MA-PVQs) plotted with and without adaptation of the mean values as described hereinabove. The test file that was used comprised 8,000 frames each of flat filtered speech, Intermediate Reference System (IRS) filtered speech, and modified IRS filtered speech. The training data that was used for both quantizers was IRS filtered.
While the methods and apparatus disclosed herein may or may not have been described with reference to specific hardware or software, the methods and apparatus have been described in a manner sufficient to enable persons of ordinary skill in the art to readily adapt commercially available hardware and software as may be needed to reduce any of the embodiments of the present invention to practice without undue experimentation and using conventional techniques.
While the present invention has been described with reference to a few specific embodiments, the description is intended to be illustrative of the invention as a whole and is not to be construed as limiting the invention to the embodiments shown. It is appreciated that various modifications may occur to those skilled in the art that, while not specifically shown herein, are nevertheless within the true spirit and scope of the invention.

Claims (14)

  What is claimed is:
    1. A method of providing robust quantization of speech spectral parameters tolerant to spectral balance and speaker variations, the method comprising the steps of:
    for each of a plurality of line spectral frequencies (LSFs) of a speech spectrum:
    quantizing (14) the displacement (12) of said LSF from an estimate (28) of its long-term mean; reconstructing (22) an estimate of said LSF from said quantized displacement and said long-term LSF mean estimate; and filtering (26,30,32) said reconstructed LSF estimate, thereby providing a subsequent long-term LSF mean estimate (28).
    2. A method according to claim 1 wherein said filtering step comprises filtering said reconstructed LSF estimate using a first-order recursive filter (26,30,32).
    3. A method according to claim 2 wherein said first-order recursive filter is of unity gain and employs a time constant of about 1 second for said LSF.
    4. A method of quantizing speech spectral parameters that is tolerant to spectral balance and speaker variations, the method comprising the steps of: for each of a plurality of line spectral frequencies (LSFs) of a speech spectrum: at an encoder (10): a) quantizing (14) the difference (12) between said LSF and a current LSF mean value estimate; at said encoder (10) and a decoder (20):
    b) dequantizing (16,18) said difference; c) adding (22,34) said dequantized difference to a current LSF mean value estimate (28,42), thereby providing an approximation of said LSF; and d) filtering (26,30,32) (38,40,44) said quantized LSF together with said current LSF mean value estimate (28,42), thereby providing a new current LSF mean value estimate.
    5. A method of quantizing speech spectral parameters that is tolerant to spectral balance and speaker variations, the method comprising the steps of: for each of a plurality of line spectral frequencies (LSFs) of a speech spectrum: at an encoder (50):
    a) quantizing (60) a prediction error (92) derived from said LSF from which a current short-term LSF mean value (52) and a current moving average predicted LSF estimate (56) have been subtracted; and at said encoder (50) and a decoder (66):
    b) dequantizing (62,64) said prediction error; c) determining a next-current short-term LSF mean value (54) from said dequantized prediction error and at least one previously dequantized prediction error (86); and d) determining a next-current moving average predicted LSF estimate (58) from said dequantized prediction error (72) and at least one previously dequantized prediction error (76).
    6. A method according to claim 5 wherein the next-current short-term LSF mean value (54) is the sum of a training data derived mean (90) and a moving average (88) of a plurality of previously dequantized prediction error values (86).
    7. A method according to claim 6 wherein equal gains (88) are assigned to each dequantized prediction error value (86).
    8. Apparatus for providing robust quantization of speech spectral parameters tolerant to spectral balance and speaker variations, said apparatus comprising:
    means (14) for quantizing the displacement (12) of a line spectral frequency (LSF) from an estimate (28) of its long-term mean; means (22) for reconstructing an estimate of said LSF from said quantized displacement and said long-term LSF mean estimate; and means (26,30,32) for filtering said reconstructed LSF estimate, thereby providing a subsequent long-term LSF mean estimate (28).
    9. Apparatus according to claim 8 wherein said filtering means comprises a first-order recursive filter (26,30,32).
    10. Apparatus according to claim 9 wherein said first-order recursive filter is of unity gain and employs a time constant of about 1 second for said LSF.
    11. Apparatus for quantizing speech spectral parameters that is tolerant to spectral balance and speaker variations, said apparatus comprising:
    an encoder (10) comprising:
    means (14) for quantizing the difference (12) between a line spectral frequency (LSF) and a current LSF mean value estimate; means (16) for dequantizing said difference; means (34) for adding said dequantized difference to a current LSF mean value estimate (42), thereby providing an approximation of said LSF; and means (38,40,44) for filtering said quantized LSF together with said current LSF mean value estimate (42), thereby providing a new current LSF mean value estimate; and a decoder comprising:
    means (18) for dequantizing said difference; means (22) for adding said dequantized difference to a current LSF mean value estimate (28), thereby providing an approximation of said LSF; and means (26,30,32) for filtering said quantized LSF together with said current LSF mean value estimate (28), thereby providing a new current LSF mean value estimate.
    12. Apparatus for quantizing speech spectral parameters that is tolerant to spectral balance and speaker variations, said apparatus comprising:
    an encoder (50) comprising:
    means (60) for quantizing a prediction error (92) derived from said LSF from which a current short-term LSF mean value (52) and a current moving average predicted LSF estimate (56) have been subtracted; means (62) for dequantizing said prediction error; means (54) for determining a next-current short-term LSF mean value from said dequantized prediction error and at least one previously dequantized prediction error (86); and means (58) for determining a next-current moving average predicted LSF estimate from said dequantized prediction error (72) and at least one previously dequantized prediction error (76); and a decoder (66) comprising: means (64) for dequantizing said prediction error; means (54') for determining a next-current short-term LSF mean value from said dequantized prediction error and at least one previously dequantized prediction error (86'); and means (58') for determining a next-current moving average predicted LSF estimate from said dequantized prediction error (72') and at least one previously dequantized prediction error (76').
    13. Apparatus according to claim 12 wherein the next-current short-term LSF mean value (54) is the sum of a training data derived mean (90) and a moving average (88) of a plurality of previously dequantized prediction error values (86).
    14. Apparatus according to claim 13 wherein equal gains (88) are assigned to each dequantized prediction error value (86).
GB0017145A 2000-07-13 2000-07-13 Vector quantization system for speech encoding/decoding Withdrawn GB2364870A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB0017145A GB2364870A (en) 2000-07-13 2000-07-13 Vector quantization system for speech encoding/decoding
EP01116530A EP1172803A3 (en) 2000-07-13 2001-07-09 Vector quantization system and method of operation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0017145A GB2364870A (en) 2000-07-13 2000-07-13 Vector quantization system for speech encoding/decoding

Publications (2)

Publication Number Publication Date
GB0017145D0 GB0017145D0 (en) 2000-08-30
GB2364870A true GB2364870A (en) 2002-02-06

Family

ID=9895545

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0017145A Withdrawn GB2364870A (en) 2000-07-13 2000-07-13 Vector quantization system for speech encoding/decoding

Country Status (2)

Country Link
EP (1) EP1172803A3 (en)
GB (1) GB2364870A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0375551A2 (en) * 1988-12-22 1990-06-27 Kokusai Denshin Denwa Co., Ltd A speech coding/decoding system
WO1996031873A1 (en) * 1995-04-03 1996-10-10 Universite De Sherbrooke Predictive split-matrix quantization of spectral parameters for efficient coding of speech
US6081776A (en) * 1998-07-13 2000-06-27 Lockheed Martin Corp. Speech coding system and method including adaptive finite impulse response filter

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5966688A (en) * 1997-10-28 1999-10-12 Hughes Electronics Corporation Speech mode based multi-stage vector quantizer

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0375551A2 (en) * 1988-12-22 1990-06-27 Kokusai Denshin Denwa Co., Ltd A speech coding/decoding system
WO1996031873A1 (en) * 1995-04-03 1996-10-10 Universite De Sherbrooke Predictive split-matrix quantization of spectral parameters for efficient coding of speech
US6081776A (en) * 1998-07-13 2000-06-27 Lockheed Martin Corp. Speech coding system and method including adaptive finite impulse response filter

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Proc 1998 IEEE Int Conf Acoustics, Speech & Signal Processing, May 1998, Seattle, vol 1, pages 41-44 *
Proc 1999 IEEE Workshop on Speech Coding, June 1999, Finland, pages 46-48 *

Also Published As

Publication number Publication date
EP1172803A3 (en) 2004-01-14
GB0017145D0 (en) 2000-08-30
EP1172803A2 (en) 2002-01-16

Similar Documents

Publication Publication Date Title
EP0503684B1 (en) Adaptive filtering method for speech and audio
EP0673014B1 (en) Acoustic signal transform coding method and decoding method
CA2483791C (en) Method and device for efficient frame erasure concealment in linear predictive based speech codecs
KR101041892B1 (en) Updating of decoder states after packet loss concealment
EP1356454B1 (en) Wideband signal transmission system
US5140638A (en) Speech coding system and a method of encoding speech
NO340674B1 (en) Information signal encoding
CA2262787C (en) Methods and devices for noise conditioning signals representative of audio information in compressed and digitized form
JP3254687B2 (en) Audio coding method
JPH02155313A (en) Coding method
US5913187A (en) Nonlinear filter for noise suppression in linear prediction speech processing devices
EP1301018A1 (en) Apparatus and method for modifying a digital signal in the coded domain
US6104994A (en) Method for speech coding under background noise conditions
EP1208413A2 (en) Coded domain noise control
US20130268268A1 (en) Encoding of an improvement stage in a hierarchical encoder
EP1172803A2 (en) Vector quantization system and method of operation
Härmä et al. Backward adaptive warped lattice for wideband stereo coding
Lee An enhanced ADPCM coder for voice over packet networks
EP1386311A1 (en) Inverse filtering method, synthesis filtering method, inverse filter device, synthesis filter device and devices comprising such filter devices
JPH04301900A (en) Audio encoding device
KR100392258B1 (en) Implementation method for reducing the processing time of CELP vocoder
JP3504485B2 (en) Tone encoding device, tone decoding device, tone encoding / decoding device, and program storage medium
Aarskog et al. Predictive coding of speech using microphone/speaker adaptation and vector quantization
Cheung Application of CVSD with delayed decision to narrowband/wideband tandem
Ning A comparison of adaptive predictors in sub-band coding

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)