WO2006096099A1 - Low-complexity code excited linear prediction encoding - Google Patents

Low-complexity code excited linear prediction encoding

Info

Publication number
WO2006096099A1
Authority
WO
WIPO (PCT)
Prior art keywords
signal
excitation
pulse locations
candidate
signals
Prior art date
Application number
PCT/SE2005/000349
Other languages
French (fr)
Inventor
Anisse Taleb
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to JP2008500663A priority Critical patent/JP5174651B2/en
Priority to AT05722196T priority patent/ATE513290T1/en
Priority to CN2005800489816A priority patent/CN101138022B/en
Priority to PCT/SE2005/000349 priority patent/WO2006096099A1/en
Priority to BRPI0520115A priority patent/BRPI0520115B1/en
Priority to KR1020077023047A priority patent/KR101235425B1/en
Priority to EP05722196A priority patent/EP1859441B1/en
Priority to TW094144472A priority patent/TW200639801A/en
Publication of WO2006096099A1 publication Critical patent/WO2006096099A1/en

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00: Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008: Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • G10L19/04: using predictive techniques
    • G10L19/08: Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
    • G10L19/09: Long term prediction, i.e. removing periodical redundancies, e.g. by using adaptive codebook or pitch predictor
    • G10L19/10: the excitation function being a multipulse excitation
    • G10L19/107: Sparse pulse excitation, e.g. by using algebraic codebook
    • G10L19/12: the excitation function being a code excitation, e.g. in code excited linear prediction [CELP] vocoders

Definitions

  • the present invention relates in general to audio coding, and in particular to code excited linear prediction coding.
  • ICP inter-channel prediction
  • AMR-NB / AMR-WB: Adaptive Multi-Rate Narrow Band and Adaptive Multi-Rate Wide Band
  • an excitation signal at an input of a short-term LP synthesis filter is constructed by adding two excitation vectors from adaptive and fixed (innovative) codebooks, respectively.
  • the speech is synthesized by feeding the two properly chosen vectors from these codebooks through the short-term synthesis filter.
  • the optimum excitation sequence in a codebook is chosen using an analysis-by-synthesis search procedure in which the error between the original and synthesized speech is minimized according to a perceptually weighted distortion measure.
  • a first type of codebook is the so-called stochastic codebook.
  • Such a codebook often involves substantial physical storage. Given the index in a codebook, the excitation vector is obtained by conventional table lookup. The size of the codebook is therefore limited by the bit-rate and the complexity.
  • a second type of codebook is an algebraic codebook.
  • algebraic codebooks are not random and require virtually no storage.
  • An algebraic codebook is a set of indexed code vectors whose pulse amplitudes and positions, constituting the k-th code vector, are derived directly from the corresponding index k. This requires virtually no memory. Therefore, the size of algebraic codebooks is not limited by memory requirements. Additionally, algebraic codebooks are well suited for efficient search procedures.
  • the amount of bits allocated to the fixed codebook procedures ranges from 36% up to 76%.
  • a general object of the present invention is thus to provide improved methods and devices for speech coding.
  • a subsidiary object of the present invention is to provide CELP methods and devices having reduced requirement in terms of bit rates and encoder complexity.
  • excitation signals of a first signal encoded by CELP are used to derive a limited set of candidate excitation signals for a second signal.
  • the second signal is correlated with the first signal.
  • the limited set of candidate excitation signals is derived by a rule, which was selected from a predetermined set of rules based on the encoded first signal and/or the second signal.
  • pulse locations of the excitation signals of the first encoded signal are used for determining the set of candidate excitation signals. More preferably, the pulse locations of the set of candidate excitation signals are positioned in the vicinity of the pulse locations of the excitation signals of the first encoded signal.
  • the first and second signals may be multi-channel signals of a common speech or audio signal. However, the first and second signals may also be identical, whereby the coding of the second signal can be utilized for re-encoding at a lower bit rate.
  • One advantage with the present invention is that the coding complexity is reduced. Furthermore, in the case of multi-channel signals, the required bit rate for transmitting coded signals is reduced. Also, the present invention may be efficiently applied to re-encoding the same signal at a lower rate.
  • Another advantage of the invention is the compatibility with mono signals and the possibility to be implemented as an extension to existing speech codecs with very few modifications.
  • FIG. 1A is a schematic illustration of a code excited linear prediction model
  • FIG. 1B is a schematic illustration of a process of deriving an excitation signal
  • FIG. 1C is a schematic illustration of an embodiment of an excitation signal for use in a code excited linear prediction model
  • FIG. 2 is a block scheme of an embodiment of an encoder and decoder according to the code excited linear prediction model;
  • FIG. 3A is a diagram illustrating one embodiment of a principle of selecting candidate excitation signals according to the present invention
  • FIG. 3B is a diagram illustrating another embodiment of a principle of selecting candidate excitation signals according to the present invention.
  • FIG. 4 illustrates a possibility to reduce required data entities according to an embodiment of the present invention
  • FIG. 5A is a block scheme of an embodiment of encoders and decoders for two signals according to the present invention.
  • FIG. 5B is a block scheme of another embodiment of encoders and decoders for two signals according to the present invention
  • FIG. 6 is a block scheme of an embodiment of encoders and decoders for re-encoding of a signal according to the present invention
  • FIG. 7 is a block scheme of an embodiment of encoders and decoders for parallel encoding of a signal for different bit rates according to the present invention
  • FIG. 8 is a diagram illustrating the perceptual quality achieved by embodiments of the present invention.
  • FIG. 9 is a flow diagram of the main steps of an embodiment of an encoding method according to the present invention.
  • FIG. 10 is a flow diagram of the main steps of another embodiment of an encoding method according to the present invention.
  • FIG. 11 is a flow diagram of the main steps of an embodiment of a decoding method according to the present invention.
  • a general CELP speech synthesis model is depicted in Fig. 1A.
  • a fixed codebook 10 comprises a number of candidate excitation signals 30, characterized by a respective index k. In the case of an algebraic codebook, the index k alone characterizes the corresponding candidate excitation signal 30 completely.
  • Each candidate excitation signal 30 comprises a number of pulses 32 having a certain position and amplitude.
  • An index k determines a candidate excitation signal 30 that is amplified in an amplifier 11 giving rise to an output excitation signal Ck(n) 12.
  • the excitation signal Ck(n) and the adaptive signal v(n) are summed in an adder 17, giving a composite excitation signal u(n).
  • the composite excitation signal u(n) influences the adaptive codebook for subsequent signals, as indicated by the dashed line 13.
  • the composite excitation signal u(n) is used as input signal to a transform 1/A(z) in a linear prediction synthesis section 20, resulting in a "predicted" signal s(n) 21, which, typically after post-processing 22, is provided as the output from the CELP synthesis procedure.
  • the CELP speech synthesis model is used for analysis-by-synthesis coding of the speech signal of interest.
  • a target signal s(n), i.e. the signal that is to be approximated, is provided.
  • the remaining difference is the target for the fixed codebook excitation signal, whereby a codebook index k corresponding to an entry Ck should minimize the difference, typically according to an objective function, e.g. a mean square measure.
  • the algebraic codebook is searched by minimizing the mean square error between the weighted input speech and the weighted synthesis speech.
  • the fixed codebook search aims to find the algebraic codebook entry ck corresponding to index k, such that
  • the matrix H is a filtering matrix whose elements are derived from the impulse response of a weighting filter.
  • y 2 is a vector of components which are dependent on the signal to be encoded.
  • This fixed codebook procedure can be illustrated as in Fig. 1B, where an index k selects an entry Ck from the fixed codebook 10 as excitation signal 12.
  • the index k typically serves as an input to a table look-up, while in an algebraic fixed codebook, the excitation signal 12 is derived directly from the index k.
  • the multi-pulse excitation can be written as:
  • Fig. 1C illustrates an example of a candidate excitation signal 30 of the fixed codebook 10.
  • the candidate excitation signal 30 is characterized by a number of pulses 32, in this example 8 pulses.
  • the pulses 32 are characterized by their position P(1)-P(8) and their amplitude, which in a typical algebraic fixed codebook is either +1 or −1.
  • the CELP model is typically implemented as illustrated in Fig. 2.
  • the different parts corresponding to the different functions of the CELP synthesis model of Fig. 1A are given the same reference numbers, since the parts are mainly characterized by their function and typically not to the same degree by their actual implementation. For instance, error weighting filters, usually present in an actual implementation of a linear prediction analysis-by-synthesis, are not represented.
  • a signal to be encoded s(n) 33 is provided to an encoder unit 40.
  • the encoder unit comprises a CELP synthesis block 25 according to the above discussed principles. (Post-processing is omitted in order to facilitate the reading of the figure.)
  • the output from the CELP synthesis block 25 is compared with the signal s(n) in a comparator block 31.
  • a difference 37, which may be weighted by a weighting filter, is provided to a codebook optimization block 35, which is arranged according to any prior-art principles to find an optimum or at least reasonably good excitation signal Ck(n) 12.
  • the codebook optimization block 35 provides the fixed codebook 10 with the corresponding index k.
  • the index k and the delay ⁇ of the adaptive codebook 12 are encoded in an index encoder 38 to provide an output signal 45 representing the index k and the delay ⁇ .
  • the representation of the index k and the delay ⁇ is provided to a decoder unit 50.
  • the decoder unit comprises a CELP synthesis block 25 according to the above discussed principles. (Post-processing is also here omitted in order to facilitate the reading of the figure.)
  • the representation of index k and delay δ are decoded in an index decoder 53, and index k and delay δ are provided as input parameters to the fixed codebook and the adaptive codebook, respectively, resulting in a synthesized signal s(n) 21, which is supposed to resemble the original signal s(n).
  • the representation of the index k and the delay δ can be stored for a shorter or longer time anywhere between the encoder and decoder, enabling e.g. storage of audio recordings with relatively small storage capacity requirements.
  • the present invention is related to speech and in general audio coding.
  • it deals with cases where a main signal sM(n) has been encoded according to the CELP technique and the desire is to encode another signal ss(n).
  • This invention is thus directly applicable to stereo and in general multichannel coding for speech in teleconferencing applications.
  • the application of this invention can also include audio coding as part of an open-loop or closed-loop content dependent encoding.
  • the main signal sM(n) is often chosen as the sum signal and ss(n) as the difference signal of the left and right channels.
  • the presumption of the present invention is that the main signal sM(n) is available in a CELP encoded representation.
  • One basic idea of the present invention is to limit the search in the fixed codebook during the encoding of the other signal ss(n) to a subset of candidate excitation signals. This subset is selected dependent on the CELP encoding of the main signal.
  • the pulses of the candidate excitation signals of the subset are restricted to a set of pulse positions that are dependent on the pulse positions of the main signal. This is equivalent to defining constrained candidate pulse locations.
  • the set of available pulse positions can typically be set to the pulse positions of the main signal plus neighboring pulse positions.
  • a main channel and a side channel can be constructed by
  • the main channel is the first encoded channel and the pulse locations for the fixed codebook excitation for that encoding are available.
  • gp·v(n) is the adaptive codebook excitation and sc(n) is the target signal for the fixed codebook search.
  • the number of potential pulse positions of the candidate excitation signals is defined relative to the main signal pulse positions. Since they are only a fraction of all possible positions, the amount of bits required for encoding the side signal with an excitation signal within this limited set of candidate excitation signals is therefore largely reduced, compared with the case where all pulse positions may occur.
  • the selection of the candidate pulse positions relative to the main pulse positions is fundamental in determining the complexity as well as the required bit-rate.
  • pulse positions for the side signal are set equal to the pulse positions of the main signal. Then no encoding of the pulse positions is needed and only encoding of the pulse amplitudes is needed. In the case of algebraic codebooks with pulses having +1/−1 amplitudes, only the signs (N bits) need to be encoded.
  • the pulse positions of candidate excitation signals for the side signal are selected based on the main signal pulse positions and possible additional parameters.
  • the additional parameters may consist of time delay between the two channels and/or difference of adaptive codebook index.
  • J(i,k) denotes some delay index.
  • each mono pulse position generates a set of pulse positions used for constructing the candidate excitation signals for the side signal pulse search procedure.
  • PM denotes the pulse positions of the excitation signal for the main signal
  • PS* denotes possible pulse positions of the candidate excitation signals for the side signal analysis.
  • the delay index may be made dependent on the effective delay between the two channels and/or the adaptive codebook index.
  • kmax = 3
  • J(i,k) = j(k) ∈ {−1, 0, +1}.
  • the rules for how to select the pulse positions can be constructed in many different manners.
  • the actual rule to use may be adapted to the actual implementation.
  • the important characteristic is, however, that the candidate pulse positions are selected dependent on the pulse positions resulting from the main signal analysis, following a certain rule.
  • This rule may be unique and fixed or may be selected from a set of predetermined rules dependent on e.g. the degree of correlation between the two channels and/or the delay between the two channels.
  • the set of pulse candidates of the side signal is constructed.
  • the set of the side signal pulse candidates is in general very small compared to the entire frame length. This allows reformulating the objective maximization problem based on a decimated frame.
  • the pulses are searched by using, for example, the depth-first algorithm described in [5] or by using an exhaustive search if the number of candidate pulses is really small. However, even with a small number of candidates it is recommended to use a fast search procedure.
  • a backward filtered signal is in general pre-computed using
  • PS*(i) are the candidate pulse positions and p is their number. It should be noted that p is always less than, and typically much less than, the frame length L.
  • Φ2 is symmetric and positive definite.
  • The summary of these decimation operations is illustrated in Fig. 4.
  • a reduction of an algebraic codebook 10 of ordinary size to a reduced size codebook 10' is illustrated.
  • a reduction of a weighting filter covariance matrix 60 of ordinary size to a reduced weighting filter covariance matrix 60' is illustrated.
  • a reduction of a backward filtered target 62 of ordinary size to a reduced size backward filtered target 62' is illustrated.
  • Maximizing the objective function on the decimated signals has several advantages.
  • One of them is the reduction of memory requirements; for instance, the matrix Φ2 requires less memory.
  • Another advantage is the fact that because the main signal pulse locations are in all cases transmitted to the receiver, the indices of the decimated signals are always available to the decoder. This in turn allows the encoding of the other (side) signal pulse positions relative to the main signal pulse positions, which requires far fewer bits.
  • Another advantage is the reduction in computational complexity since the maximization is performed on decimated signals.
  • In FIG. 5A, an embodiment of a system of encoders 40A, 40B and decoders 50A, 50B according to the present invention is illustrated.
  • a main signal 33A sM(n) is provided to a first encoder 40A.
  • the first encoder 40A operates according to any prior art CELP encoding model, producing an index km for the fixed codebook and a delay measure δm for the adaptive codebook. The details of this encoding are not of any importance for the present invention and are omitted in order to facilitate the understanding of Fig. 5A.
  • the parameters km and δm are encoded in a first index encoder 38A, giving representations k*m and δ*m of the parameters that are sent to a first decoder 50A.
  • the representations k*m and δ*m are decoded into parameters km and δm in a first index decoder 53A. From these parameters, the original signal is reproduced according to any prior art CELP decoding model. The details of this decoding are not of any importance for the present invention and are omitted in order to facilitate the understanding of Fig. 5A.
  • a reproduced first output signal 21A sM(n) is provided.
  • a side signal 33B ss(n) is provided as an input signal to a second encoder 40B.
  • the second encoder 40B is for the most part similar to the encoder of Fig. 2.
  • the signals are now given an index "s" to distinguish them from any signals used for encoding the main signal.
  • the second encoder 40B comprises a CELP synthesis block 25.
  • the index km or a representation thereof is provided from the first encoder 40A to an input 45 of the fixed codebook 10 of the second encoder 40B.
  • the index km is used by a candidate deriving means 47 to extract a reduced fixed codebook 10' according to the above presented principles.
  • the synthesis of the CELP synthesis block 25' of the second encoder 40B is thus based on indices k's representing excitation signals c'k'(n) from the reduced fixed codebook 10'.
  • An index k's is thus found to represent a best choice of the CELP synthesis.
  • the parameters k's and δs are encoded in a second index encoder 38B, giving representations k'*s and δ*s of the parameters that are sent to a second decoder 50B.
  • the representations k'*s and δ*s are decoded into parameters k's and δs in a second index decoder 53B.
  • the index parameter km is available from the first decoder 50A and is provided to the input 55 of the fixed codebook 10 of the second decoder 50B, in order to enable an extraction by a candidate deriving means 57 of a reduced fixed codebook 10' equal to what was used in the second encoder 40B.
  • the original side signal is reproduced according to ordinary CELP decoding models 25". The details of this decoding are performed essentially in analogy with Fig. 2, but using the reduced fixed codebook 10' instead.
  • a reproduced side output signal 21B ss(n) is thus provided.
  • Selection of the rule to construct the set of candidate pulses can advantageously be made adaptive and dependent on additional inter-channel characteristics, such as delay parameters, degree of correlation, etc.
  • the encoder preferably has to transmit to the decoder which rule has been selected for deriving the set of candidate pulses for encoding the other signal.
  • the rule selection could for instance be performed by a closed-loop procedure, where a number of rules are tested and the one giving the best result is finally selected.
  • Fig. 5B illustrates an embodiment, using the rule selection approach.
  • the mono signal sM(n) and preferably also the side signal ss(n) are here additionally provided to a rule selecting unit 39.
  • the parameter km representing the mono signal can be used.
  • In the rule selection unit 39, the signals are analysed, e.g. with respect to delay parameters or degree of correlation.
  • a rule, e.g. represented by an index r, is selected from a set of predefined rules.
  • the index of the selected rule is provided to the candidate deriving means 47 for determining how the candidate sets should be derived.
  • the rule index r is also provided to the second index encoder 38B giving a representation r* of the index, which subsequently is sent to the second decoder 50B.
  • the second index decoder 53B decodes the rule index r, which then is used to govern the operation of the candidate deriving means 57.
  • the specific rule used as well as the resulting number of candidate side signal pulses are the main parameters governing the bit rate and the complexity of the algorithm.
  • FIG. 6 illustrates an embodiment where different parts of a transmission path allow for different bit rates. It is thus applicable as part of a rate transcoding solution.
  • a signal s(n) is provided as an input signal 33A to a first encoder 40A, which produces representations k* and δ* of parameters that are transmitted according to a first bit rate. At a certain place, the available bit rate is reduced, and a re-encoding for lower bit rates has to be performed.
  • a first decoder 50A uses the representations k* and δ* of the parameters for producing a reproduced signal 21A s(n).
  • This reproduced signal 21A s(n) is provided to a second encoder 40B as an input signal 33B. Also the index k from the first decoder 50A is provided to the second encoder 40B. The index k is, in analogy with Fig. 5A, used for extracting a reduced fixed codebook 10'.
  • the second encoder 40B encodes the signal s(n) for a lower bit rate, giving an index k' representing the selected excitation signal c'k'(n).
  • this index k' is of little use in a distant decoder, since the decoder does not have the information necessary to construct a corresponding reduced fixed codebook.
  • the index k' thus has to be associated with an index k, referring to the original codebook 10.
  • a first encoding is made with a bit rate n and the second encoding is made with a bit rate m, where n>m.
  • Fig. 7 illustrates a system, where a signal s(n) is provided to both a first encoder 4OA and a second encoder 4OB.
  • the second encoder provides a reduced fixed codebook 10' based on an index ka representing the first encoding.
  • the second encoding is here denoted by the index "b".
  • the second encoder 40B thus becomes independent of the first decoder 50A.
  • Most other parts are in analogy with Fig. 6, however, with adapted indexing.
  • the present invention offers a substantial reduction in complexity thus allowing the implementation of these applications with low cost hardware.
  • An embodiment of the above-described algorithm has been implemented in association with an AMR-WB speech codec.
  • the same adaptive codebook index is used as is used for encoding the mono excitation.
  • the LTP gain as well as the innovation vector gain was not quantized.
  • the algorithm for the algebraic codebook was based on the mono pulse positions. As described in e.g. [6], the codebook may be structured in tracks.
  • the number of tracks is equal to 4.
  • the candidate pulse positions are as follows (track: pulses: positions):
  • Track 1: i0, i4, i8: 0, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60
  • Track 2: i1, i5, i9: 1, 5, 9, 13, 17, 21, 25, 29, 33, 37, 41, 45, 49, 53, 57, 61
  • the implemented algorithm retains all the mono pulses as the pulse positions of the side signal, i.e. the pulse positions are not encoded. Only the signs of the pulses are encoded (a small sketch of this sign-only encoding is given after this list).
  • each pulse will consume only 1 bit for encoding the sign, which leads to a total bit rate equal to the number of mono pulses.
  • there are 12 pulses per sub-frame and this leads to a total bit rate equal to 12 bits × 4 × 50 = 2.4 kbps for encoding the innovation vector. This is the same number of bits as required for the very lowest AMR-WB mode (2 pulses for the 6.6 kbps mode), but in this case we have a higher pulse density.
  • Fig. 8 shows the results obtained with PEAQ [4] for evaluating the perceptual quality.
  • PEAQ has been chosen since, to the best of our knowledge, it is the only tool that provides objective quality measures for stereo signals. From the results, it is clearly seen that the stereo 100 does in fact provide a quality lift with respect to the mono signal 102.
  • the sound items used were quite varied: sound 1, S1, is an extract from a movie with background noise; sound 2, S2, is a 1 min radio recording; sound 3, S3, is from a cart racing sport event; and sound 4, S4, is a real two-microphone recording.
  • Fig. 9 illustrates an embodiment of an encoding method according to the present invention.
  • the procedure starts in step 200.
  • a representation of a CELP excitation signal for a first audio signal is provided. Note that it is not absolutely necessary to provide the entire first audio signal, just the representation of the CELP excitation signal.
  • a second audio signal is provided, which is correlated with the first audio signal.
  • a set of candidate excitation signals is derived in step 214 depending on the first CELP excitation signal.
  • the pulse positions of the candidate excitation signals are related to the pulse positions of the CELP excitation signal of the first audio signal.
  • In step 216, a CELP encoding is performed on the second audio signal, using the reduced set of candidate excitation signals derived in step 214.
  • the representation, i.e. typically an index, of the CELP excitation signal for the second audio signal is encoded, using references to the reduced candidate set. The procedure ends in step 299.
  • Fig. 10 illustrates another embodiment of an encoding method according to the present invention.
  • the procedure starts in step 200.
  • In step 211, an audio signal is provided.
  • In step 213, a representation of a first CELP excitation signal for the same audio signal is provided.
  • a set of candidate excitation signals is derived in step 215 depending on the first CELP excitation signal.
  • the pulse positions of the candidate excitation signals are related to the pulse positions of the CELP excitation signal of the first audio signal.
  • a CELP re-encoding is performed on the audio signal, using the reduced set of candidate excitation signals derived in step 215.
  • the representation, i.e. typically an index, of the second CELP excitation signal for the audio signal is encoded, using references to the non-reduced candidate set, i.e. the set used for the first CELP encoding.
  • the procedure ends in step 299.
  • Fig. 11 illustrates an embodiment of a decoding method according to the present invention.
  • the procedure starts in step 200.
  • a representation of a first CELP excitation signal for a first audio signal is provided.
  • a representation of a second CELP excitation signal for a second audio signal is provided.
  • a second excitation signal is derived from the representation of the second CELP excitation signal, with knowledge of the first excitation signal.
  • a reduced set of candidate excitation signals is derived depending on the first CELP excitation signal, from which a second excitation signal is selected by use of an index for the second CELP excitation signal.
  • the second audio signal is reconstructed using the second excitation signal.
  • the procedure ends in step 299.
  • the invention allows a dramatic reduction of complexity (both memory and arithmetic operations) as well as bit-rate when encoding multiple audio channels by using algebraic codebooks and CELP.
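To make the sign-only embodiment described above concrete (all mono pulse positions retained for the side signal, only one sign bit spent per pulse), the following Python sketch encodes a side-signal innovation. It is an illustrative reading of the scheme, not the reference implementation: the function name, the use of a backward-filtered target, and the per-pulse sign rule are assumptions.

```python
import numpy as np

def encode_side_signs(d_side, mono_positions):
    """Reuse the mono pulse positions for the side-signal excitation and
    choose only the pulse signs (1 bit per pulse).  The per-pulse rule used
    here (sign of the backward-filtered target at each position) is an
    assumption; a joint search over the signs could also be used."""
    signs = np.sign(d_side[mono_positions])
    signs[signs == 0] = 1.0
    excitation = np.zeros_like(d_side)
    excitation[mono_positions] = signs          # +/-1 pulses at the mono positions
    bits = len(mono_positions)                  # one sign bit per pulse
    return signs, excitation, bits

# Figures quoted above: 12 pulses per sub-frame, 4 sub-frames per frame,
# 50 frames per second  ->  12 x 4 x 50 = 2400 bit/s for the innovation vector.
rng = np.random.default_rng(0)
d = rng.standard_normal(64)                                          # sub-frame of length 64
mono_pos = np.array([0, 4, 9, 13, 18, 22, 27, 31, 36, 40, 45, 49])   # hypothetical mono positions
_, _, bits_per_subframe = encode_side_signs(d, mono_pos)
print(bits_per_subframe * 4 * 50, "bit/s")
```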

Abstract

Information (km) about excitation signals of a first signal (sm(n)) encoded by CELP is used to derive a limited set (10') of candidate excitation signals for a second, correlated signal (ss(n)). Preferably, pulse locations of the excitation signals of the first encoded signal (sm(n)) are used for determining the set (10') of candidate excitation signals. More preferably, the pulse locations of the set of candidate excitation signals are positioned in the vicinity of the pulse locations of the excitation signals of the first encoded signal (sm(n)). The first and second signals (sm(n), ss(n)) may be multi-channel signals of a common speech or audio signal. However, the first and second signals (sm(n), ss(n)) may also be identical, whereby the coding of the second signal (ss(n)) can be utilized for re-encoding at a lower bit rate.

Description

LOW-COMPLEXITY CODE EXCITED LINEAR PREDICTION
ENCODING
TECHNICAL FIELD
The present invention relates in general to audio coding, and in particular to code excited linear prediction coding.
BACKGROUND
Existing stereo, or in general multi-channel, coding techniques require a rather high bit-rate. Parametric stereo is often used at very low bit-rates. However, these techniques are designed for a wide class of generic audio material, i.e. music, speech and mixed content.
In multi-channel speech coding, very little has been done. Most work has focused on an inter-channel prediction (ICP) approach. ICP techniques utilize the fact that there is correlation between a left and a right channel. Many different methods that reduce this redundancy in the stereo signal are described in the literature, e.g. in [1][2][3].
The ICP approach models quite well the case where there is only one speaker, however it fails to model multiple speakers and diffuse sound sources (e.g. diffuse background noises). Therefore, encoding a residual of ICP is a must in several cases and puts quite high demands on the required bit-rate.
Most existing speech codecs are monophonic and are based on the code-excited linear predictive (CELP) coding model. Examples include AMR-NB and AMR-WB (Adaptive Multi-Rate Narrow Band and Adaptive Multi-Rate
Wide Band). In this model, i.e. CELP, an excitation signal at an input of a short-term LP synthesis filter is constructed by adding two excitation vectors from adaptive and fixed (innovative) codebooks, respectively. The speech is synthesized by feeding the two properly chosen vectors from these codebooks through the short-term synthesis filter. The optimum excitation sequence in a codebook is chosen using an analysis-by-synthesis search procedure in which the error between the original and synthesized speech is minimized according to a perceptually weighted distortion measure.
There are two types of fixed codebooks. A first type of codebook is the so-called stochastic codebook. Such a codebook often involves substantial physical storage. Given the index in a codebook, the excitation vector is obtained by conventional table lookup. The size of the codebook is therefore limited by the bit-rate and the complexity.
A second type of codebook is an algebraic codebook. By contrast to the stochastic codebooks, algebraic codebooks are not random and require virtually no storage. An algebraic codebook is a set of indexed code vectors where the amplitudes and positions of the pulses constituting the k-th code vector are derived directly from the corresponding index k. This means virtually no memory requirements. Therefore, the size of algebraic codebooks is not limited by memory requirements. Additionally, the algebraic codebooks are well suited for efficient search procedures.
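The principle that an algebraic codebook needs no stored table can be illustrated with a toy index-to-pulses mapping in Python. The track layout and bit packing below are invented for the illustration and are deliberately simpler than the interleaved single-pulse permutation codes of real codecs such as AMR-WB.

```python
def decode_toy_algebraic_index(index, num_tracks=4):
    """Toy algebraic codebook: one pulse per track; each track consumes
    4 position bits and 1 sign bit from the index.  The code vector is
    computed directly from the index, so no table has to be stored."""
    pulses = []
    for track in range(num_tracks):
        pos_bits = index & 0xF            # 4 bits: position within the track
        index >>= 4
        sign = -1 if (index & 1) else +1  # 1 bit: pulse sign
        index >>= 1
        position = track + num_tracks * pos_bits   # interleaved track layout, 0..63
        pulses.append((position, sign))
    return pulses

# 20-bit toy index -> four (position, sign) pulses
print(decode_toy_algebraic_index(0b1_0011_0_0101_1_0000_0_0010))
```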
It is important to note that a substantial and often also major part of the speech codec available bits are allocated to the fixed codebook excitation encoding. For instance, in the AMR-WB standard, the amount of bits allocated to the fixed codebook procedures ranges from 36% up to 76%.
Additionally, it is the fixed codebook excitation search that represents most of the encoder complexity.
In [7], a multi-part fixed codebook including an individual fixed codebook for each channel and a shared codebook common to all channels is used. With this strategy it is possible to have a good representation of the inter-channel correlations. However, this comes at the expense of increased complexity as well as storage. Additionally, the required bit rate to encode the fixed codebook excitations is quite large because in addition to each channel codebook index one needs also to transmit the shared codebook index. In [8] and [9], similar methods for encoding multi-channel signals are described where the encoding mode is made dependent on the degree of correlation of the different channels. These techniques are already well known from
Left/Right and Mid/Side encoding, where switching between the two encoding modes is dependent on a residual, thus dependent on correlation.
In [10], a method for encoding multichannel signals is described which generalizes different elements of a single channel linear predictive codec. The method has the disadvantage of requiring an enormous amount of computations rendering it unusable in real-time applications such as conversational applications. Another disadvantage of this technology is the amount of bits needed in order to encode the various decorrelation filters used for encoding.
Another disadvantage with the previously cited solutions is their incompatibility with existing standardized monophonic conversational codecs, in the sense that no monophonic signal is separately encoded, thus prohibiting the ability to directly decode a monophonic-only signal.
SUMMARY
A general problem with prior art speech coding is that it requires high bit rates and complex encoders.
A general object of the present invention is thus to provide improved methods and devices for speech coding. A subsidiary object of the present invention is to provide CELP methods and devices having reduced requirement in terms of bit rates and encoder complexity. The above objects are achieved by methods and devices according to the enclosed patent claims. In general words, excitation signals of a first signal encoded by CELP are used to derive a limited set of candidate excitation signals for a second signal. Preferably, the second signal is correlated with the first signal. In a particular embodiment, the limited set of candidate excitation signals is derived by a rule, which was selected from a predetermined set of rules based on the encoded first signal and/ or the second signal. Preferably, pulse locations of the excitation signals of the first encoded signal are used for determining the set of candidate excitation signals. More preferably, the pulse locations of the set of candidate excitation signals are positioned in the vicinity of the pulse locations of the excitation signals of the first encoded signal. The first and second signals may be multi-channel signals of a common speech or audio signal. However, the first and second signals may also be identical, whereby the coding of the second signal can be utilized for re-encoding at a lower bit rate.
One advantage with the present invention is that the coding complexity is reduced. Furthermore, in the case of multi-channel signals, the required bit rate for transmitting coded signals is reduced. Also, the present invention may be efficiently applied to re-encoding the same signal at a lower rate.
Another advantage of the invention is the compatibility with mono signals and the possibility to be implemented as an extension to existing speech codecs with very few modifications.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention, together with further objects and advantages thereof, may best be understood by making reference to the following description taken together with the accompanying drawings, in which: FIG. 1A is a schematic illustration of a code excited linear prediction model;
FIG. 1B is a schematic illustration of a process of deriving an excitation signal; FIG. 1C is a schematic illustration of an embodiment of an excitation signal for use in a code excited linear prediction model;
FIG. 2 is a block scheme of an embodiment of an encoder and decoder according to the code excited linear prediction model; FIG. 3A is a diagram illustrating one embodiment of a principle of selecting candidate excitation signals according to the present invention;
FIG. 3B is a diagram illustrating another embodiment of a principle of selecting candidate excitation signals according to the present invention;
FIG. 4 illustrates a possibility to reduce required data entities according to an embodiment of the present invention;
FIG. 5A is a block scheme of an embodiment of encoders and decoders for two signals according to the present invention;
FIG. 5B is a block scheme of another embodiment of encoders and decoders for two signals according to the present invention; FIG. 6 is a block scheme of an embodiment of encoders and decoders for re-encoding of a signal according to the present invention;
FIG. 7 is a block scheme of an embodiment of encoders and decoders for parallel encoding of a signal for different bit rates according to the present invention; FIG. 8 is a diagram illustrating the perceptual quality achieved by embodiments of the present invention;
FIG. 9 is a flow diagram of the main steps of an embodiment of an encoding method according to the present invention;
FIG. 10 is a flow diagram of the main steps of another embodiment of an encoding method according to the present invention; and
FIG. 11 is a flow diagram of the main steps of an embodiment of a decoding method according to the present invention.
DETAILED DESCRIPTION
A general CELP speech synthesis model is depicted in Fig. 1A. A fixed codebook 10 comprises a number of candidate excitation signals 30, characterized by a respective index k. In the case of an algebraic codebook, the index k alone characterizes the corresponding candidate excitation signal 30 completely. Each candidate excitation signal 30 comprises a number of pulses 32 having a certain position and amplitude. An index k determines a candidate excitation signal 30 that is amplified in an amplifier 11 giving rise to an output excitation signal Ck(n) 12. An adaptive codebook 14, which is not the primary subject of the present invention, provides an adaptive signal v(n), via an amplifier 15. The excitation signal Ck(n) and the adaptive signal v(n) are summed in an adder 17, giving a composite excitation signal u(n). The composite excitation signal u(n) influences the adaptive codebook for subsequent signals, as indicated by the dashed line 13.
The composite excitation signal u(n) is used as input signal to a transform 1/A(z) in a linear prediction synthesis section 20, resulting in a "predicted" signal s(n) 21, which, typically after post-processing 22, is provided as the output from the CELP synthesis procedure.
The CELP speech synthesis model is used for analysis-by-synthesis coding of the speech signal of interest. A target signal s(n), i.e. the signal that is to be approximated, is provided. A long-term prediction is made by use of the adaptive codebook, adjusting a previous coding to the present target signal, giving an adaptive signal v(n) = gp·u(n−δ). The remaining difference is the target for the fixed codebook excitation signal, whereby a codebook index k corresponding to an entry Ck should minimize the difference, typically according to an objective function, e.g. a mean square measure. In general, the algebraic codebook is searched by minimizing the mean square error between the weighted input speech and the weighted synthesis speech. The fixed codebook search aims to find the algebraic codebook entry ck corresponding to index k, such that
Qk = (y2ᵀ H ck)² / (ckᵀ Hᵀ H ck)
is maximized. The matrix H is a filtering matrix whose elements are derived from the impulse response of a weighting filter. y2 is a vector of components which are dependent on the signal to be encoded.
This fixed codebook procedure can be illustrated as in Fig. 1B, where an index k selects an entry Ck from the fixed codebook 10 as excitation signal 12. In a stochastic fixed codebook, the index k typically serves as an input to a table look-up, while in an algebraic fixed codebook, the excitation signal 12 is derived directly from the index k. In general the multi-pulse excitation can be written as:
ck(n) = Σi=0..P−1 bi,k δ(n − pi,k),  n = 0, ..., L−1,
where pi,k are the pulse positions for index k, bi,k are the individual pulse amplitudes, P is the number of pulses, and δ is the unit pulse function:
δ(0) = 1, δ(n) = 0 for n ≠ 0.
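A minimal Python sketch of this multi-pulse construction (the function and variable names are chosen for the illustration):

```python
import numpy as np

def multipulse_excitation(positions, amplitudes, frame_length):
    """Build c_k(n) = sum_i b_{i,k} * delta(n - p_{i,k}) for one codebook entry."""
    c = np.zeros(frame_length)
    for p, b in zip(positions, amplitudes):
        c[p] += b                     # delta(n - p) contributes only at sample p
    return c

# 8 pulses with +/-1 amplitudes, as in the example of Fig. 1C (positions are made up)
c_k = multipulse_excitation(
    positions=[3, 10, 17, 24, 31, 38, 45, 52],
    amplitudes=[+1, -1, +1, +1, -1, +1, -1, +1],
    frame_length=64,
)
```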
Fig. 1C illustrates an example of a candidate excitation signal 30 of the fixed codebook 10. The candidate excitation signal 30 is characterized by a number of pulses 32, in this example 8 pulses. The pulses 32 are characterized by their position P(1)-P(8) and their amplitude, which in a typical algebraic fixed codebook is either +1 or −1.
In an encoder/decoder system for a single channel, the CELP model is typically implemented as illustrated in Fig. 2. The different parts corresponding to the different functions of the CELP synthesis model of Fig. 1A are given the same reference numbers, since the parts are mainly characterized by their function and typically not to the same degree by their actual implementation. For instance, error weighting filters, usually present in an actual implementation of a linear prediction analysis-by-synthesis, are not represented.
A signal to be encoded s(n) 33 is provided to an encoder unit 40. The encoder unit comprises a CELP synthesis block 25 according to the above discussed principles. (Post-processing is omitted in order to facilitate the reading of the figure.) The output from the CELP synthesis block 25 is compared with the signal s(n) in a comparator block 31. A difference 37, which may be weighted by a weighting filter, is provided to a codebook optimization block 35, which is arranged according to any prior-art principles to find an optimum or at least reasonably good excitation signal Ck(n) 12. The codebook optimization block 35 provides the fixed codebook 10 with the corresponding index k. When the final excitation signal is found, the index k and the delay δ of the adaptive codebook 12 are encoded in an index encoder 38 to provide an output signal 45 representing the index k and the delay δ.
The representation of the index k and the delay δ is provided to a decoder unit 50. The decoder unit comprises a CELP synthesis block 25 according to the above discussed principles. (Post-processing is also here omitted in order to facilitate the reading of the figure.) The representation of index k and delay δ are decoded in an index decoder 53, and index k and delay δ are provided as input parameters to the fixed codebook and the adaptive codebook, respectively, resulting in a synthesized signal s(n) 21, which is supposed to resemble the original signal s(n).
The representation of the index k and the delay δ can be stored for a shorter or longer time anywhere between the encoder and decoder, enabling e.g. storage of audio recordings with relatively small storage capacity requirements.
The present invention is related to speech and in general audio coding. In a typical case, it deals with cases where a main signal sM(n) has been encoded according to the CELP technique and the desire is to encode another signal ss(n). The other signal could be the same main signal, ss(n) = sM(n), e.g. during re-encoding at a lower bit rate, or an encoded (synthesized) version of the main signal, ss(n) = ŝM(n), or a signal corresponding to another channel, e.g. stereo, multi-channel 5.1, etc.
This invention is thus directly applicable to stereo and in general multichannel coding for speech in teleconferencing applications. The application of this invention can also include audio coding as part of an open-loop or closed-loop content dependent encoding.
There should preferably exist a correlation between the main signal and the other signal, in order for the present invention to operate in optimal conditions. However, the existence of such correlation is not a mandatory requirement for the proper operation of the invention. In fact, the invention can be operated adaptively and made dependent on the degree of correlation between the main signal and the other signal. Since there exists no causal relationship between a left and right channel in stereo applications, the main signal sM(n) is often chosen as the sum signal and ss(n) as the difference signal of the left and right channels.
The presumption of the present invention is that the main signal sM(n) is available in a CELP encoded representation. One basic idea of the present invention is to limit the search in the fixed codebook during the encoding of the other signal ss(n) to a subset of candidate excitation signals. This subset is selected dependent on the CELP encoding of the main signal. In a preferred embodiment, the pulses of the candidate excitation signals of the subset are restricted to a set of pulse positions that are dependent on the pulse positions of the main signal. This is equivalent to defining constrained candidate pulse locations. The set of available pulse positions can typically be set to the pulse positions of the main signal plus neighboring pulse positions.
This reduction of the number of candidate pulses dramatically reduces the computational complexity of the encoder. Below, an illustrative example is given for the general case of two channel signals. However, this is easily extended to multiple channels. In the case of multiple channels, the targets may differ, given different weighting filters on each channel, and the targets on the different channels may also be delayed with respect to each other.
A main channel and a side channel can be constructed by
sM(n) = ( sL(n) + sR(n) ) / 2
ss(n) = ( sL(n) − sR(n) ) / 2
where sL(n) and sR(n) are the inputs of the left and right channel, respectively. One can clearly see that even if the left and right channel were a delayed version of each other, this would not be the case for the main and the side channel, since in general these would contain information from both channels.
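In code, this main/side construction is a per-sample half-sum and half-difference of the two input channels (a direct transcription of the formula; the array names are assumptions):

```python
import numpy as np

def main_side_split(s_left, s_right):
    """Main channel = half-sum, side channel = half-difference of left/right."""
    s_left, s_right = np.asarray(s_left, float), np.asarray(s_right, float)
    s_main = 0.5 * (s_left + s_right)
    s_side = 0.5 * (s_left - s_right)
    return s_main, s_side
```

The left and right channels are recovered at the decoder as s_main + s_side and s_main − s_side, respectively.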
In the following, it is assumed that the main channel is the first encoded channel and that the pulse locations for the fixed codebook excitation for that encoding are available.
The target for the side signal fixed codebook excitation encoding is computed as the difference between the side signal and the adaptive codebook excitation:
sc(n) = ss(n) − gp·v(n),  n = 0, ..., L−1,
where gp·v(n) is the adaptive codebook excitation and sc(n) is the target signal for the fixed codebook search. In the present embodiment, the number of potential pulse positions of the candidate excitation signals is defined relative to the main signal pulse positions. Since they are only a fraction of all possible positions, the amount of bits required for encoding the side signal with an excitation signal within this limited set of candidate excitation signals is therefore largely reduced, compared with the case where all pulse positions may occur.
The selection of the candidate pulse positions relative to the main pulse positions is fundamental in determining the complexity as well as the required bit-rate.
For example, if the frame length is L and if the number of pulses in the main signal encoding is N, then one would need roughly N*log2(L) bits to encode the pulse positions. However for encoding the side signal, if one retains only the main signal pulse positions as candidates, and the number of pulses in candidate excitation signals for the side signal is P, then one needs roughly P*log2(N) bits. For reasonable numbers for N, P and L, this corresponds to quite a reduction in bit rate requirements.
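As a purely illustrative calculation (the numbers are not taken from the embodiment): with a frame length L = 64, N = 8 pulses in the main-signal excitation and P = 8 side-signal pulses, unconstrained positions would cost roughly 8·log2(64) = 48 bits, whereas restricting the side-signal pulses to the 8 main-signal positions costs roughly 8·log2(8) = 24 bits, i.e. half the position bits.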
One interesting aspect is when the pulse positions for the side signal are set equal to the pulse positions of the main signal. Then no encoding of the pulse positions is needed and only encoding of the pulse amplitudes is needed. In the case of algebraic codebooks with pulses having +1/−1 amplitudes, only the signs (N bits) need to be encoded.
Denote by PM(i), i = 1, ..., n, the main signal pulse positions. The pulse positions of candidate excitation signals for the side signal are selected based on the main signal pulse positions and possible additional parameters. The additional parameters may consist of the time delay between the two channels and/or the difference of adaptive codebook index.
In this embodiment, the set of pulse positions for the side signal candidate excitation signal is constructed as { PM(i) + J(i,k), k = 1, ..., kmax,i, i = 1, ..., n },
where J(i,k) denotes some delay index. This means that each mono pulse position generates a set of pulse positions used for constructing the candidate excitation signals for the side signal pulse search procedure. This is illustrated in Fig. 3A. Here, PM denotes the pulse positions of the excitation signal for the main signal, and PS* denotes possible pulse positions of the candidate excitation signals for the side signal analysis.
This of course is optimal for highly correlated signals. For low-correlated or uncorrelated signals the inverse strategy would be adopted. This consists in taking as pulse candidates all pulses not belonging to the set
{ PM(i) + J(i,k), k = 1, ..., kmax,i, i = 1, ..., n }.
Since this is a complementary case, it is easily understood by those skilled in the art that both strategies are similar and only the correlated case will be described in more detail.
It is easily seen that the position and number of pulse candidates is dependent on the delay index J(i,k). The delay index may be made dependent on the effective delay between the two channels and/or the adaptive codebook index. In Fig. 3A, kmax = 3, and J(i,k) = j(k) ∈ {−1, 0, +1}.
In Fig. 3B, another slightly different selection of pulse positions is made. Here kmax = 3, but J(i,k) = j(k) ∈ {0, +1, +2}.
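The constrained candidate set { PM(i) + J(i,k) } can be built as in the short Python sketch below, here with the simple per-pulse offset rules of Figs. 3A and 3B; the function and argument names are assumptions for the illustration:

```python
def candidate_positions(main_positions, offsets=(-1, 0, +1), frame_length=64):
    """Constrained candidate pulse positions for the side signal: every
    main-signal pulse position plus a small set of offsets, clipped to the
    frame, duplicates removed and sorted in ascending order."""
    candidates = set()
    for p in main_positions:
        for j in offsets:
            if 0 <= p + j < frame_length:
                candidates.add(p + j)
    return sorted(candidates)

print(candidate_positions([5, 20, 41], offsets=(-1, 0, +1)))   # Fig. 3A style rule
print(candidate_positions([5, 20, 41], offsets=(0, +1, +2)))   # Fig. 3B style rule
```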
Anyone skilled in the art realizes that the rules for how to select the pulse positions can be constructed in many different manners. The actual rule to use may be adapted to the actual implementation. The important characteristic is, however, that the candidate pulse positions are selected dependent on the pulse positions resulting from the main signal analysis, following a certain rule. This rule may be unique and fixed or may be selected from a set of predetermined rules dependent on e.g. the degree of correlation between the two channels and/or the delay between the two channels.
Dependent on the rule used, the set of pulse candidates of the side signal is constructed. The set of the side signal pulse candidates is in general very small compared to the entire frame length. This allows reformulating the objective maximization problem based on a decimated frame.
In the general case, the pulses are searched by using, for example, the depth-first algorithm described in [5] or by using an exhaustive search if the number of candidate pulses is really small. However, even with a small number of candidates it is recommended to use a fast search procedure.
A backward filtered signal is in general pre-computed using
d^T = y_2^T H.
The matrix Φ = H^T H is the matrix of correlations of h(n) (the impulse response of a weighting filter), the elements of which are computed by
Φ(i,j) = Σ_{l=j}^{L-1} h(l-i) h(l-j),   i = 0,...,L-1, j = 0,...,L-1.
The objective function can therefore be written as
Q_k = (d^T c_k)^2 / (c_k^T Φ c_k).
Given the set of possible candidate pulse positions for the side signal, only a subset of the indices of the backward filtered vector d and of the matrix Φ is needed. The set of candidate pulses can be sorted in ascending order:
{P_M(i) + J(i,k), k = 1,...,k_max,i, i = 1,...,n} = {P*_S(i), i = 1,...,p},
where P*_S(i) are the candidate pulse positions and p is their number. It should be noted that p is always less than, and typically much less than, the frame length L.
Denote the decimated backward filtered signal by
d_2(i) = d(P*_S(i)),   i = 1,...,p,
and the decimated correlation matrix Φ_2 by
Φ_2(i,j) = Φ(P*_S(i), P*_S(j)),   i = 1,...,p, j = 1,...,p.
Φ_2 is symmetric and positive definite. The objective function can then be written directly on the decimated quantities as
Q_k' = (d_2^T c'_k')^2 / (c'_k'^T Φ_2 c'_k'),
where c'_k' is the new algebraic code vector. The index becomes k', which is a new entry in a reduced-size codebook.
These decimation operations are summarized in Fig. 4. At the top of the figure, the reduction of an algebraic codebook 10 of ordinary size to a reduced-size codebook 10' is illustrated. In the middle, the reduction of a weighting filter covariance matrix 60 of ordinary size to a reduced weighting filter covariance matrix 60' is illustrated. Finally, in the bottom part, the reduction of a backward filtered target 62 of ordinary size to a reduced-size backward filtered target 62' is illustrated. Anyone skilled in the art realizes the reduction in complexity that results from such a reduction.
Maximizing the objective function on the decimated signals has several advantages. One is the reduction of memory requirements; for instance, the matrix Φ_2 requires less memory. Another advantage is that, because the main signal pulse locations are in any case transmitted to the receiver, the indices of the decimated signals are always available to the decoder. This in turn allows the pulse positions of the other (side) signal to be encoded relative to the main signal pulse positions, which consumes far fewer bits. A further advantage is the reduction in computational complexity, since the maximization is performed on decimated signals.
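The following sketch illustrates the decimated search described above: it extracts d_2 and Φ_2 from the full vectors and exhaustively maximizes the objective over signed unit pulses placed on the candidate positions. It is a simplified illustration only; actual codecs use the track structure and faster (e.g. depth-first) searches, and the function and variable names are not taken from the description.

```python
import numpy as np
from itertools import combinations, product

def decimated_search(d, Phi, cand_pos, num_pulses):
    """Maximize (d2^T c)^2 / (c^T Phi2 c) over codevectors c built from
    `num_pulses` signed +1/-1 pulses restricted to the candidate positions."""
    d2 = d[cand_pos]                         # decimated backward-filtered target
    Phi2 = Phi[np.ix_(cand_pos, cand_pos)]   # decimated correlation matrix
    best_q, best_pos, best_signs = -1.0, None, None
    for idx in combinations(range(len(cand_pos)), num_pulses):
        for signs in product((+1.0, -1.0), repeat=num_pulses):
            c = np.zeros(len(cand_pos))
            c[list(idx)] = signs
            num = float(d2 @ c) ** 2
            den = float(c @ Phi2 @ c)
            if den > 0.0 and num / den > best_q:
                best_q = num / den
                best_pos = [cand_pos[i] for i in idx]   # absolute frame positions
                best_signs = signs
    return best_q, best_pos, best_signs
```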
In Fig. 5A, an embodiment of a system of encoders 40A, 40B and decoders 50A, 50B according to the present invention is illustrated. Many details are similar to those illustrated in Fig. 2 and will therefore not be discussed in detail again, as their functions are essentially unaltered. A main signal 33A sm(n) is provided to a first encoder 40A. The first encoder 40A operates according to any prior art CELP encoding model, producing an index km for the fixed codebook and a delay measure δm for the adaptive codebook. The details of this encoding are not of any importance for the present invention and are omitted in order to facilitate the understanding of Fig. 5A. The parameters km and δm are encoded in a first index encoder 38A, giving representations k*m and δ*m of the parameters that are sent to a first decoder 50A. In the first decoder, the representations k*m and δ*m are decoded into parameters km and δm in a first index decoder 53A. From these parameters, the original signal is reproduced according to any prior art CELP decoding model. The details of this decoding are not of any importance for the present invention and are omitted in order to facilitate the understanding of Fig. 5A. A reproduced first output signal 21A sm(n) is provided.
A side signal 33B ss(n) is provided as an input signal to a second encoder 40B. The second encoder 40B is for the most part similar to the encoder of Fig. 2. The signals are now given an index "s" to distinguish them from any signals used for encoding the main signal. The second encoder 40B comprises a CELP synthesis block 25'. According to the present invention, the index km or a representation thereof is provided from the first encoder 40A to an input 45 of the fixed codebook 10 of the second encoder 40B. The index km is used by a candidate deriving means 47 to extract a reduced fixed codebook 10' according to the principles presented above. The synthesis of the CELP synthesis block 25' of the second encoder 40B is thus based on indices k's representing excitation signals c'k'(n) from the reduced fixed codebook 10'. An index k's is thus found to represent a best choice of the CELP synthesis. The parameters k's and δs are encoded in a second index encoder 38B, giving representations k'*s and δ*s of the parameters that are sent to a second decoder 50B.
In the second decoder 50B, the representations k'*s and δ*s are decoded into parameters k's and δs in a second index decoder 53B. Furthermore, the index parameter km is available from the first decoder 50A and is provided to the input 55 of the fixed codebook 10 of the second decoder 50B, in order to enable extraction by a candidate deriving means 57 of a reduced fixed codebook 10' equal to the one used in the second encoder 40B. From the parameters k's and δs and the reduced fixed codebook 10', the original side signal is reproduced according to ordinary CELP decoding models 25". The details of this decoding are performed essentially in analogy with Fig. 2, but using the reduced fixed codebook 10' instead. A reproduced side output signal 21B ss(n) is thus provided.
Selection of the rule for constructing the set of candidate pulses, e.g. the indexing function J(i,k), can advantageously be made adaptive and dependent on additional inter-channel characteristics, such as delay parameters, degree of correlation, etc. In this case, i.e. adaptive rule selection, the encoder preferably has to transmit to the decoder which rule has been selected for deriving the set of candidate pulses used for encoding the other signal. The rule selection could for instance be performed by a closed-loop procedure, where a number of rules are tested and the one giving the best result is finally selected.
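A closed-loop selection of this kind could look roughly as follows. The sketch assumes that each rule maps the main-signal pulse positions to a candidate set and that a helper encode_side returns the search objective and codebook index for a given candidate set; these helpers are hypothetical and not part of the description.

```python
def select_rule_closed_loop(rules, main_pulses, encode_side):
    """Try every candidate-derivation rule, run the side-signal search on the
    resulting candidate set, and keep the rule giving the best objective.
    The selected rule index r is transmitted together with the codebook index."""
    best_r, best_q, best_k = None, float("-inf"), None
    for r, rule in enumerate(rules):
        cand_pos = rule(main_pulses)
        q, k = encode_side(cand_pos)   # assumed: returns (objective value, index k')
        if q > best_q:
            best_r, best_q, best_k = r, q, k
    return best_r, best_k

# Example rule set in the spirit of Figs. 3A and 3B (offsets are illustrative):
rules = [
    lambda pm: sorted({p + j for p in pm for j in (-1, 0, +1)}),
    lambda pm: sorted({p + j for p in pm for j in (0, +1, +2)}),
]
```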
Fig. 5B illustrates an embodiment using the rule selection approach. The mono signal sm(n), and preferably also the side signal ss(n), are here additionally provided to a rule selecting unit 39. Alternatively to the mono signal, the parameter km representing the mono signal can be used. In the rule selection unit 39, the signals are analysed, e.g. with respect to delay parameters or degree of correlation. Depending on the results, a rule, e.g. represented by an index r, is selected from a set of predefined rules. The index of the selected rule is provided to the candidate deriving means 47 for determining how the candidate sets should be derived. The rule index r is also provided to the second index encoder 38B, giving a representation r* of the index, which is subsequently sent to the second decoder 50B. The second index decoder 53B decodes the rule index r, which is then used to govern the operation of the candidate deriving means 57.
In this manner, a set of rules can be provided that is suitable for different types of signals. Further flexibility is thus achieved, at the cost of adding only a single rule index to the transferred data.
The specific rule used as well as the resulting number of candidate side signal pulses are the main parameters governing the bit rate and the complexity of the algorithm.
As stated further above, exactly the same principles can equally well be applied to re-encoding of one and the same channel. Fig. 6 illustrates an embodiment where different parts of a transmission path allow for different bit rates. It is thus applicable as part of a rate transcoding solution. A signal s(n) is provided as an input signal 33A to a first encoder 40A, which produces representations k* and δ* of parameters that are transmitted according to a first bit rate. At a certain point, the available bit rate is reduced, and a re-encoding for a lower bit rate has to be performed. A first decoder 50A uses the representations k* and δ* of the parameters for producing a reproduced signal 21A s(n). This reproduced signal 21A s(n) is provided to a second encoder 40B as an input signal 33B. Also the index k from the first decoder 50A is provided to the second encoder 40B. The index k is, in analogy with the previous embodiment, used for extracting a reduced fixed codebook 10'. The second encoder 40B encodes the signal s(n) for a lower bit rate, giving an index k' representing the selected excitation signal c'k'(n). However, this index k' is of little use in a distant decoder, since that decoder does not have the information necessary to construct a corresponding reduced fixed codebook. The index k' thus has to be associated with an index k, referring to the original codebook 10. This is preferably performed in connection with the fixed codebook 10 and is represented in Fig. 6 by the arrows 41 and 43, illustrating the input of k' and the output of k. The encoding of the index k is then performed with reference to a full set of candidate excitation signals.
In a typical case, a first encoding is made with a bit rate n and the second encoding is made with a bit rate m, where n>m.
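The re-indexing step mentioned above can be sketched as follows: the low-rate search works with indices into the reduced candidate set, which must be mapped back to absolute positions (and hence to an index in the original codebook 10) before transmission. The function name and values are illustrative only.

```python
def remap_to_full_codebook(reduced_pulse_indices, cand_pos):
    """Map pulse positions expressed as indices into the reduced candidate set
    back to absolute frame positions understood by a standard decoder."""
    return [cand_pos[i] for i in reduced_pulse_indices]

# Example: candidate positions derived from the first encoding, and a low-rate
# excitation that selected the 1st, 4th and 6th candidates.
cand_pos = [2, 3, 4, 16, 17, 18, 41, 42, 43]
print(remap_to_full_codebook([0, 3, 5], cand_pos))   # -> [2, 16, 18]
```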
In certain applications, for instance real-time transmission of live content through different types of networks with different capacities (for example teleconferencing), it may also be of interest to provide parallel encodings with differing bit rates, e.g. in situations where real-time encoding of the same signal at several different bit rates is needed in order to accommodate the different types of networks, so-called parallel multirate encoding. Fig. 7 illustrates a system where a signal s(n) is provided to both a first encoder 40A and a second encoder 40B. In analogy with previous embodiments, the second encoder provides a reduced fixed codebook 10' based on an index ka representing the first encoding. The second encoding is here denoted by the index "b". The second encoder 40B thus becomes independent of the first decoder 50A. Most other parts are in analogy with Fig. 6, however with adapted indexing.
For these two applications, re-encoding of the same signal at a lower rate and parallel multirate encoding, the present invention offers a substantial reduction in complexity, thus allowing the implementation of these applications with low-cost hardware.
An embodiment of the above-described algorithm has been implemented in association with an AMR-WB speech codec. For encoding the side signal, the same adaptive codebook index is used as for encoding the mono excitation. The LTP gain as well as the innovation vector gain were not quantized.
The algorithm for the algebraic codebook was based on the mono pulse positions. As described in e.g. [6], the codebook may be structured in tracks.
Except for the lowest mode, the number of tracks is equal to 4. For each mode, a certain number of pulse positions is used. For example, for mode 5, i.e. 15.85 kbps, the candidate pulse positions are as follows:
Track   Pulses          Positions
1       i0, i4, i8      0, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60
2       i1, i5, i9      1, 5, 9, 13, 17, 21, 25, 29, 33, 37, 41, 45, 49, 53, 57, 61
3       i2, i6, i10     2, 6, 10, 14, 18, 22, 26, 30, 34, 38, 42, 46, 50, 54, 58, 62
4       i3, i7, i11     3, 7, 11, 15, 19, 23, 27, 31, 35, 39, 43, 47, 51, 55, 59, 63
Table 1. Candidate pulse positions.
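The interleaved structure of Table 1 can be generated programmatically; a small sketch (assuming a sub-frame length of 64 and 4 tracks, as in the table) is shown below.

```python
def track_positions(track, subframe_len=64, num_tracks=4):
    """Positions available to a given track (1-based), as in Table 1:
    track t may place its pulses at t-1, t-1+4, t-1+8, ..."""
    return list(range(track - 1, subframe_len, num_tracks))

print(track_positions(1))   # [0, 4, 8, ..., 60]
print(track_positions(4))   # [3, 7, 11, ..., 63]
```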
The implemented algorithm retains all the mono pulses as the pulse positions of the side signal, i.e. the pulse positions are not encoded. Only the signs of the pulses are encoded.
Table 2. Side and mono signal pulses (reproduced as an image in the original publication).
Thus, each pulse will consume only 1 bit for encoding the sign, which leads to a number of bits per sub-frame equal to the number of mono pulses. In the above example, there are 12 pulses per sub-frame, and this leads to a total bit rate of 12 bits x 4 x 50 = 2.4 kbps for encoding the innovation vector. This is the same number of bits as required for the very lowest AMR-WB mode (2 pulses for the 6.6 kbps mode), but in this case we have a higher pulse density.
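The bit-rate figure above follows directly from the framing; a one-line check (using the 4 sub-frames per 20 ms frame, i.e. 50 frames per second, assumed in the example) is shown below.

```python
def innovation_bit_rate(pulses_per_subframe, subframes_per_frame=4, frames_per_second=50):
    """Sign-only encoding: 1 bit per retained mono pulse, positions not sent."""
    return pulses_per_subframe * subframes_per_frame * frames_per_second  # bits per second

print(innovation_bit_rate(12))   # 12 x 4 x 50 = 2400 bps = 2.4 kbps
```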
It should be noted that no additional algorithmic delay is needed for encoding the stereo signal.
Fig. 8 shows the results obtained with PEAQ [4] for evaluating the perceptual quality. PEAQ was chosen since, to the best of our knowledge, it is the only tool that provides objective quality measures for stereo signals. From the results, it is clearly seen that the stereo encoding 100 does in fact provide a quality lift with respect to the mono signal 102. The sound items used were quite varied: sound 1, S1, is an extract from a movie with background noise; sound 2, S2, is a 1 min radio recording; sound 3, S3, is a cart racing sport event; and sound 4, S4, is a real two-microphone recording.
Fig. 9 illustrates an embodiment of an encoding method according to the present invention. The procedure starts in step 200. In step 210, a representation of a CELP excitation signal for a first audio signal is provided. Note that it is not absolutely necessary to provide the entire first audio signal, just the representation of the CELP excitation signal. In step 212, a second audio signal is provided, which is correlated with the first audio signal. A set of candidate excitation signals is derived in step 214 depending on the first CELP excitation signal. Preferably, the pulse positions of the candidate excitation signals are related to the pulse positions of the CELP excitation signal of the first audio signal. In step 216, a CELP encoding is performed on the second audio signal, using the reduced set of candidate excitation signals derived in step 214. Finally, the representation, i.e. typically an index, of the CELP excitation signal for the second audio signal is encoded, using references to the reduced candidate set. The procedure ends in step 299.
Fig. 10 illustrates another embodiment of an encoding method according to the present invention. The procedure starts in step 200. In step 211, an audio signal is provided. In step 213, a representation of a first CELP excitation signal for the same audio signal is provided. A set of candidate excitation signals is derived in step 215 depending on the first CELP excitation signal. Preferably, the pulse positions of the candidate excitation signals are related to the pulse positions of the first CELP excitation signal. In step 217, a CELP re-encoding is performed on the audio signal, using the reduced set of candidate excitation signals derived in step 215. Finally, the representation, i.e. typically an index, of the second CELP excitation signal for the audio signal is encoded, using references to the non-reduced candidate set, i.e. the set used for the first CELP encoding. The procedure ends in step 299.
Fig. 11 illustrates an embodiment of a decoding method according to the present invention. The procedure starts in step 200. In step 210, a representation of a first CELP excitation signal for a first audio signal is provided. In step 252, a representation of a second CELP excitation signal for a second audio signal is provided. In step 254, a second excitation signal is derived from the representation of the second CELP excitation signal, with knowledge of the first excitation signal. Preferably, a reduced set of candidate excitation signals is derived depending on the first CELP excitation signal, from which a second excitation signal is selected by use of an index for the second CELP excitation signal. In step 256, the second audio signal is reconstructed using the second excitation signal. The procedure ends in step 299.
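For the implemented embodiment described earlier (side-signal pulse positions equal to the mono pulse positions, only signs transmitted), the derivation of the second excitation signal in step 254 reduces to the following sketch; the frame length and names are illustrative.

```python
import numpy as np

def decode_side_excitation(main_pulses, sign_bits, frame_len=64):
    """Rebuild the side-signal innovation c'(n) from the decoded mono pulse
    positions and the received sign bits (1 bit per pulse)."""
    c = np.zeros(frame_len)
    for pos, bit in zip(main_pulses, sign_bits):
        c[pos] += 1.0 if bit else -1.0
    return c   # subsequently passed through the synthesis filter

# Example: three mono pulses and their received sign bits
exc = decode_side_excitation([3, 17, 42], [1, 0, 1])
print(np.nonzero(exc)[0])   # -> [ 3 17 42]
```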
The embodiments described above are to be understood as a few illustrative examples of the present invention. It will be understood by those skilled in the art that various modifications, combinations and changes may be made to the embodiments without departing from the scope of the present invention. In particular, different part solutions in the different embodiments can be combined in other configurations, where technically possible. The scope of the present invention is, however, defined by the appended claims.
The invention allows a dramatic reduction of complexity (both memory and arithmetic operations) as well as bit-rate when encoding multiple audio channels by using algebraic codebooks and CELP.
REFERENCES
[1] H. Fuchs, "Improving joint stereo audio coding by adaptive inter-channel prediction", in Proc. IEEE WASPAA, Mohonk, NY, Oct. 1993.
[2] S. A. Ramprashad, "Stereophonic CELP coding using cross channel prediction", in Proc. IEEE Workshop Speech Coding, pp. 136-138, Sept. 2000.
[3] T. Liebchen, "Lossless audio coding using adaptive multichannel prediction", in Proc. AES 113th Conv., Los Angeles, CA, Oct. 2002.
[4] ITU-R BS.1387
[5] WO 96/28810.
[6] 3GPP TS 26.190, p. 28, table 7.
[7] US 2004/0044524 A1.
[8] US 2004/0109471 A1.
[9] US 2003/0191635 A1.
[10] US 6,393,392 B1.

Claims

1. Method for encoding audio signals, comprising the steps of: providing a representation (k, km, ka) of a first excitation signal of a code excited linear prediction of a first audio signal (33, 33A); providing a second audio signal (33, 33B); deriving a set (10') of candidate excitation signals (c'(n)) based on said first excitation signal; and performing a code excited linear prediction encoding of said second audio signal (33, 33B) using said set (10') of candidate excitation signals (c'(n)).
2. Method according to claim 1, wherein said second audio signal (33, 33B) being correlated to said first audio signal (33, 33A).
3. Method according to claim 1 or 2, wherein said step of deriving said set
(10') of candidate excitation signals (c'(n)) comprises selecting a rule out of a predetermined set of rules based on said first excitation signal and/ or said second audio signal, whereby said set (10') of candidate excitation signals (c'(n)) being derived according to said selected rule.
4. Method according to any of the claims 1 to 3, wherein said first excitation signal having n pulse locations (PM) out of a set of N possible pulse locations; said candidate excitation signals (c'(n)) having pulse locations (P*s) only at a subset of said N possible pulse locations; and said subset of pulse locations (P*s) being selected based on the n pulse locations (PM) of said first excitation signal.
5. Method according to claim 4, wherein pulse locations (P*s) of said subset of pulse locations are positioned at positions pj, where index j is within intervals {i+L, i+K}, where i is an index of said n pulse locations, K and L are integers and K>L.
6. Method according to claim 5, wherein K=1 and L=-1.
7. Method according to any of the claims 1 to 6, wherein said code excited linear prediction of said second audio signal (33, 33B) is performed with a global search within said set (10') of candidate excitation signals.
8. Method according to any of the claims 1 to 7, comprising the further steps of: encoding a second excitation signal of said code excited linear prediction of said second audio signal (33, 33B) with reference to said set (10') of candidate excitation signals; and providing said encoded second excitation signal together with said representation (k, km, ka) of said first excitation signal.
9. Method according to claim 3 and claim 8, comprising the further step of providing data representing an identification of said selected rule together with said representation (k, km, ka) of said first excitation signal.
10. Method according to any of the claims 1 to 7, comprising the further step of: encoding a second excitation signal of said code excited linear prediction of said second audio signal (33, 33B) with reference to a set (10) of candidate excitation signals having N possible pulse locations.
11. Method according to claim 10, wherein the second audio signal (33) is the same as the first audio signal (33).
12. Method according to any of the claims 1 to 11, wherein the second excitation signal has m pulse locations, where m<n.
13. Method for decoding of audio signals (33A, 33B), comprising the steps of: providing a representation (k, km, ka) of a first excitation signal of a code excited linear prediction of a first audio signal (33A); providing a representation (k's) of a second excitation signal of a code excited linear prediction of a second audio signal (33B); said second excitation signal being one of a set (10') of candidate excitation signals; said set (10') of candidate excitation signals being based on said first excitation signal; deriving said second excitation signal (c'k'(n)) from said representation (k's) of said second excitation signal and based on information related to said set (10') of candidate excitation signals; and reconstructing said second audio signal (ss(n)) by prediction filtering said second excitation signal (c'k'(n)).
14. Method according to claim 13, wherein said second audio signal (33B) being correlated to said first audio signal (33A).
15. Method according to claim 13 or 14, wherein said information related to said set (10') of candidate excitation signals comprises identification of a rule out of a pre-determined set of rules, said rule determining derivation of said set (10') of candidate excitation signals.
16. Method according to any of the claims 13 to 15, wherein said first excitation signal having n pulse locations (PM) out of a set of N possible pulse locations; said candidate excitation signals having pulse locations (P*s) only at a subset of said N possible pulse locations; and said subset of pulse locations (P*s) being selected based on the n pulse locations (PM) of said first excitation signal.
17. Method according to claim 16, wherein pulse locations (P*s) of said subset of pulse locations are positioned at positions pj, where index j is within intervals {i+L, i+K}, where i is an index of said n pulse locations, K and L are integers and K>L.
18. Method according to claim 17, wherein K=1 and L=-1.
19. Encoder (40B) for audio signals, comprising: means (45) for providing a representation (k, km, ka) of a first excitation signal of a code excited linear prediction of a first audio signal (33, 33A); means for providing a second audio signal (33, 33B); means (47) for deriving a set (10') of candidate excitation signals, connected to receive said representation (k, km, ka) of said first excitation signal, said set (10') of candidate excitation signals being based on said first excitation signal; and means (25') for performing a code excited linear prediction connected to receive said second audio signal (33, 33B) and a representation of said set
(10') of candidate excitation signals, said means (25') for performing a code excited linear prediction being arranged for performing a code excited linear prediction of said second audio signal (33, 33B) using said set (10') of candidate excitation signals.
20. Encoder according to claim 19, wherein said second audio signal (33, 33B) being correlated to said first audio signal (33, 33A).
21. Encoder according to claim 19 or 20, wherein said means (47) for deriving a set (10') of candidate excitation signals being arranged to select a rule out of a predetermined set of rules based on said first excitation signal and/ or said second audio signal and to derive said set (10') of candidate excitation signals (c'(n)) according to said selected rule.
22. Encoder according to any of the claims 19 to 21, wherein said first excitation signal having n pulse locations (PM) out of a set of N possible pulse locations; said candidate excitation signals having pulse locations (P*s) only at a subset of said N possible pulse locations; and said subset of pulse locations (P*s) being selected based on the n pulse locations (PM) of said first excitation signal.
23. Encoder according to claim 22, wherein pulse locations (P*s) of said subset of pulse locations are positioned at positions pj, where index j is within intervals {i+L, i+K}, where i is an index of said n pulse locations, K and L are integers and K>L.
24. Encoder according to claim 23, wherein K=1 and L=-1.
25. Encoder according to any of the claims 19 to 24, wherein said means (25') for performing code excited linear prediction of said second audio signal (33, 33B) is arranged to perform a global search within said set (10') of candidate excitation signals.
26. Encoder according to any of the claims 19 to 25, further comprising: means (38B) for encoding a second excitation signal of said code excited linear prediction of said second audio signal (33B) with reference to said set (10') of candidate excitation signals; and means for providing said encoded second excitation signal together with said representation (k, km, ka) of said first excitation signal.
27. Encoder according to claim 26 and 21, further comprising: means for providing data representing an identification of said selected rule together with said representation (k, km, ka) of said first excitation signal.
28. Encoder according to any of the claims 19 to 25, further comprising: means (38B) for encoding a second excitation signal of said code excited linear prediction of said second audio signal (33, 33B) with reference to a set (10) of candidate excitation signals having N possible pulse locations.
29. Encoder according to claim 28, wherein the second audio signal (33) is the same as the first audio signal (33), whereby said encoder is a re-encoder.
30. Encoder according to any of the claims 19 to 29, wherein the second excitation signal has m pulse locations, where m<n.
31. Decoder (50B) for audio signals, comprising: means (55) for providing a representation (km) of a first excitation signal of a code excited linear prediction of a first audio signal (33A); means (53B) for providing a representation (k's) of a second excitation signal of a code excited linear prediction of a second audio signal (33B); said second excitation signal being one of a set (10') of candidate excitation signals; said set (10') of candidate excitation signals being based on said first excitation signal; means (57) for deriving said second excitation signal, connected to receive information associated with said representation (km) of a first excitation signal and said representation (k's) of said second excitation signal, said means (57) for deriving being arranged to derive said second excitation signal (c'k'(n)) from said representation (k's) of a second excitation signal and based on information related to said set (10') of candidate excitation signals; and means (25") for reconstructing said second audio signal (ss(n)) by prediction filtering said second excitation signal (c'k'(n)).
32. Decoder according to claim 31, wherein said second audio signal (33B) being correlated to said first audio signal (33A).
33. Decoder according to claim 31 or 32, wherein said information related to said set (10') of candidate excitation signals comprises identification of a rule out of a pre-determined set of rules, said rule determining derivation of said set (10') of candidate excitation signals.
34. Decoder according to any of the claims 31 to 33, wherein said first excitation signal having n pulse locations (PM) out of a set of N possible pulse locations; said candidate excitation signals having pulse locations (P*s) only at a subset of said N possible pulse locations; and said subset of pulse locations (P*s) being selected based on the n pulse locations (PM) of said first excitation signal.
35. Decoder according to claim 34, wherein pulse locations (P*s) of said subset of pulse locations are positioned at positions pj, where index j is within intervals {i+L, i+K}, where i is an index of said n pulse locations, K and L are integers and K>L.
36. Decoder according to claim 35, wherein K=1 and L=-1.
PCT/SE2005/000349 2005-03-09 2005-03-09 Low-complexity code excited linear prediction encoding WO2006096099A1 (en)

Priority Applications (8)

Application Number Priority Date Filing Date Title
JP2008500663A JP5174651B2 (en) 2005-03-09 2005-03-09 Low complexity code-excited linear predictive coding
AT05722196T ATE513290T1 (en) 2005-03-09 2005-03-09 LESS COMPLEX CODE EXCITED LINEAR PREDICTION CODING
CN2005800489816A CN101138022B (en) 2005-03-09 2005-03-09 Low-complexity code excited linear prediction encoding and decoding method and device
PCT/SE2005/000349 WO2006096099A1 (en) 2005-03-09 2005-03-09 Low-complexity code excited linear prediction encoding
BRPI0520115A BRPI0520115B1 (en) 2005-03-09 2005-03-09 methods for encoding and decoding audio signals and encoder and decoder for audio signals
KR1020077023047A KR101235425B1 (en) 2005-03-09 2005-03-09 Low-complexity code excited linear prediction encoding
EP05722196A EP1859441B1 (en) 2005-03-09 2005-03-09 Low-complexity code excited linear prediction encoding
TW094144472A TW200639801A (en) 2005-03-09 2005-12-15 Low-complexity code excited linear prediction encoding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/SE2005/000349 WO2006096099A1 (en) 2005-03-09 2005-03-09 Low-complexity code excited linear prediction encoding

Publications (1)

Publication Number Publication Date
WO2006096099A1 true WO2006096099A1 (en) 2006-09-14

Family

ID=36953623

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2005/000349 WO2006096099A1 (en) 2005-03-09 2005-03-09 Low-complexity code excited linear prediction encoding

Country Status (8)

Country Link
EP (1) EP1859441B1 (en)
JP (1) JP5174651B2 (en)
KR (1) KR101235425B1 (en)
CN (1) CN101138022B (en)
AT (1) ATE513290T1 (en)
BR (1) BRPI0520115B1 (en)
TW (1) TW200639801A (en)
WO (1) WO2006096099A1 (en)

Cited By (2)

Publication number Priority date Publication date Assignee Title
JP2013148913A (en) * 2007-04-29 2013-08-01 Huawei Technologies Co Ltd Encoding method, decoding method, encoder, and decoder
US8959018B2 (en) 2010-06-24 2015-02-17 Huawei Technologies Co.,Ltd Pulse encoding and decoding method and pulse codec

Citations (3)

Publication number Priority date Publication date Assignee Title
US6192334B1 (en) * 1997-04-04 2001-02-20 Nec Corporation Audio encoding apparatus and audio decoding apparatus for encoding in multiple stages a multi-pulse signal
EP1132893A2 (en) * 2000-02-15 2001-09-12 Lucent Technologies Inc. Constraining pulse positions in CELP vocoding
US20040024595A1 (en) * 1997-01-27 2004-02-05 Toshiyuki Nomura Speech coder/decoder

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
JP3139602B2 (en) * 1995-03-24 2001-03-05 日本電信電話株式会社 Acoustic signal encoding method and decoding method
JPH1097295A (en) * 1996-09-24 1998-04-14 Nippon Telegr & Teleph Corp <Ntt> Coding method and decoding method of acoustic signal
JP3622365B2 (en) * 1996-09-26 2005-02-23 ヤマハ株式会社 Voice encoding transmission system
JP3329216B2 (en) * 1997-01-27 2002-09-30 日本電気株式会社 Audio encoding device and audio decoding device
JP3134817B2 (en) * 1997-07-11 2001-02-13 日本電気株式会社 Audio encoding / decoding device
US6161086A (en) * 1997-07-29 2000-12-12 Texas Instruments Incorporated Low-complexity speech coding with backward and inverse filtered target matching and a tree structured mutitap adaptive codebook search
SE521225C2 (en) * 1998-09-16 2003-10-14 Ericsson Telefon Ab L M Method and apparatus for CELP encoding / decoding
JP3343082B2 (en) * 1998-10-27 2002-11-11 松下電器産業株式会社 CELP speech encoder
JP2004302259A (en) * 2003-03-31 2004-10-28 Matsushita Electric Ind Co Ltd Hierarchical encoding method and hierarchical decoding method for sound signal

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
US20040024595A1 (en) * 1997-01-27 2004-02-05 Toshiyuki Nomura Speech coder/decoder
US6192334B1 (en) * 1997-04-04 2001-02-20 Nec Corporation Audio encoding apparatus and audio decoding apparatus for encoding in multiple stages a multi-pulse signal
EP1132893A2 (en) * 2000-02-15 2001-09-12 Lucent Technologies Inc. Constraining pulse positions in CELP vocoding

Cited By (13)

Publication number Priority date Publication date Assignee Title
US9912350B2 (en) 2007-04-29 2018-03-06 Huawei Technologies Co., Ltd. Coding method, decoding method, coder, and decoder
US10666287B2 (en) 2007-04-29 2020-05-26 Huawei Technologies Co., Ltd. Coding method, decoding method, coder, and decoder
US8988256B2 (en) 2007-04-29 2015-03-24 Huawei Technologies Co., Ltd. Coding method, decoding method, coder, and decoder
JP2013148913A (en) * 2007-04-29 2013-08-01 Huawei Technologies Co Ltd Encoding method, decoding method, encoder, and decoder
US9225354B2 (en) 2007-04-29 2015-12-29 Huawei Technologies Co., Ltd. Coding method, decoding method, coder, and decoder
US9444491B2 (en) 2007-04-29 2016-09-13 Huawei Technologies Co., Ltd. Coding method, decoding method, coder, and decoder
US10425102B2 (en) 2007-04-29 2019-09-24 Huawei Technologies Co., Ltd. Coding method, decoding method, coder, and decoder
US10153780B2 (en) 2007-04-29 2018-12-11 Huawei Technologies Co.,Ltd. Coding method, decoding method, coder, and decoder
US9020814B2 (en) 2010-06-24 2015-04-28 Huawei Technologies Co., Ltd. Pulse encoding and decoding method and pulse codec
US9858938B2 (en) 2010-06-24 2018-01-02 Huawei Technologies Co., Ltd. Pulse encoding and decoding method and pulse codec
US9508348B2 (en) 2010-06-24 2016-11-29 Huawei Technologies Co., Ltd. Pulse encoding and decoding method and pulse codec
US10446164B2 (en) 2010-06-24 2019-10-15 Huawei Technologies Co., Ltd. Pulse encoding and decoding method and pulse codec
US8959018B2 (en) 2010-06-24 2015-02-17 Huawei Technologies Co.,Ltd Pulse encoding and decoding method and pulse codec

Also Published As

Publication number Publication date
CN101138022A (en) 2008-03-05
EP1859441A1 (en) 2007-11-28
CN101138022B (en) 2011-08-10
KR101235425B1 (en) 2013-02-20
JP2008533522A (en) 2008-08-21
TW200639801A (en) 2006-11-16
ATE513290T1 (en) 2011-07-15
BRPI0520115A2 (en) 2009-09-15
BRPI0520115B1 (en) 2018-07-17
EP1859441B1 (en) 2011-06-15
KR20070116869A (en) 2007-12-11
JP5174651B2 (en) 2013-04-03

Similar Documents

Publication Publication Date Title
US8000967B2 (en) Low-complexity code excited linear prediction encoding
US7778827B2 (en) Method and device for gain quantization in variable bit rate wideband speech coding
US8856012B2 (en) Apparatus and method of encoding and decoding signals
US5778335A (en) Method and apparatus for efficient multiband celp wideband speech and music coding and decoding
US9928843B2 (en) Method and apparatus for encoding/decoding speech signal using coding mode
Atal et al. Speech and audio coding for wireless and network applications
JP2006525533A5 (en)
CN101218628A (en) Apparatus and method of encoding and decoding an audio signal
KR20020077389A (en) Indexing pulse positions and signs in algebraic codebooks for coding of wideband signals
US20050258983A1 (en) Method and apparatus for voice trans-rating in multi-rate voice coders for telecommunications
WO2015157843A1 (en) Methods, encoder and decoder for linear predictive encoding and decoding of sound signals upon transition between frames having different sampling rates
US7634402B2 (en) Apparatus for coding of variable bitrate wideband speech and audio signals, and a method thereof
JP3396480B2 (en) Error protection for multimode speech coders
JP2002268686A (en) Voice coder and voice decoder
EP1859441B1 (en) Low-complexity code excited linear prediction encoding
AU2018338424B2 (en) Method and device for efficiently distributing a bit-budget in a CELP codec
US20070276655A1 (en) Method and apparatus to search fixed codebook and method and apparatus to encode/decode a speech signal using the method and apparatus to search fixed codebook
KR100389898B1 (en) Method for quantizing linear spectrum pair coefficient in coding voice
Rutherford Improving the performance of Federal Standard 1016 (CELP)
Zhou et al. A unified framework for ACELP codebook search based on low-complexity multi-rate lattice vector quantization

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 200580048981.6

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2005722196

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2008500663

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

NENP Non-entry into the national phase

Ref country code: RU

WWE Wipo information: entry into national phase

Ref document number: 1020077023047

Country of ref document: KR

WWP Wipo information: published in national office

Ref document number: 2005722196

Country of ref document: EP

ENP Entry into the national phase

Ref document number: PI0520115

Country of ref document: BR

Kind code of ref document: A2