EP1873754B1 - Audio encoder, audio decoder and audio processor having a dynamically variable warping characteristic - Google Patents

Audio encoder, audio decoder and audio processor having a dynamically variable warping characteristic

Info

Publication number
EP1873754B1
EP1873754B1 EP06013604A
Authority
EP
European Patent Office
Prior art keywords
filter
audio
coding
signal
warping
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP06013604A
Other languages
English (en)
French (fr)
Other versions
EP1873754A1 (de)
Inventor
Stefan Wabnik
Gerald Schuller
Jürgen HERRE
Bernhard Grill
Markus Multrus
Stefan Bayer
Ulrich Krämer
Jens Hirschfeld
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Original Assignee
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to EP08014723A priority Critical patent/EP1990799A1/de
Application filed by Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV filed Critical Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Priority to AT06013604T priority patent/ATE408217T1/de
Priority to DE602006002739T priority patent/DE602006002739D1/de
Priority to EP06013604A priority patent/EP1873754B1/de
Priority to MX2008016163A priority patent/MX2008016163A/es
Priority to EP07725316.9A priority patent/EP2038879B1/de
Priority to JP2009516921A priority patent/JP5205373B2/ja
Priority to KR1020087032110A priority patent/KR101145578B1/ko
Priority to RU2009103010/09A priority patent/RU2418322C2/ru
Priority to PL07725316T priority patent/PL2038879T3/pl
Priority to MYPI20085310A priority patent/MY142675A/en
Priority to ES07725316.9T priority patent/ES2559307T3/es
Priority to US12/305,936 priority patent/US8682652B2/en
Priority to AU2007264175A priority patent/AU2007264175B2/en
Priority to BRPI0712625-5A priority patent/BRPI0712625B1/pt
Priority to PCT/EP2007/004401 priority patent/WO2008000316A1/en
Priority to CN2007800302813A priority patent/CN101501759B/zh
Priority to CA2656423A priority patent/CA2656423C/en
Priority to TW096122715A priority patent/TWI348683B/zh
Priority to ARP070102797A priority patent/AR061696A1/es
Publication of EP1873754A1 publication Critical patent/EP1873754A1/de
Priority to HK08103465A priority patent/HK1109817A1/xx
Application granted granted Critical
Publication of EP1873754B1 publication Critical patent/EP1873754B1/de
Priority to IL195983A priority patent/IL195983A/en
Priority to NO20090400A priority patent/NO340436B1/no
Priority to HK09108366.0A priority patent/HK1128811A1/zh
Priority to AU2011200461A priority patent/AU2011200461B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/16 Vocoder architecture
    • G10L19/18 Vocoders using multiple modes
    • G10L19/20 Vocoders using multiple modes using sound class specific coding, hybrid encoders or object based coding
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/26 Pre-filtering or post-filtering
    • G10L19/265 Pre-filtering, e.g. high frequency emphasis prior to encoding

Definitions

  • the present invention relates to audio processing using warped filters and, particularly, to multi-purpose audio coding.
  • general audio coders like MPEG-1 Layer 3 or MPEG-2/4 Advanced Audio Coding (AAC) usually do not perform as well for speech signals at very low data rates as dedicated LPC-based speech coders, due to the lack of exploitation of a speech source model.
  • LPC-based speech coders usually do not achieve convincing results when applied to general music signals because of their inability to flexibly shape the spectral envelope of the coding distortion according to a masking threshold curve. It is the object of the present invention to provide a concept that combines the advantages of both LPC-based coding and perceptual audio coding into a single framework and thus describes unified audio coding that is efficient for both general audio and speech signals.
  • perceptual audio coders use a filterbank-based approach to efficiently code audio signals and shape the quantization distortion according to an estimate of the masking curve.
  • Figure 9 shows the basic block diagram of a monophonic perceptual coding system.
  • An analysis filterbank is used to map the time domain samples into sub sampled spectral components.
  • the system is also referred to as a subband coder (small number of subbands, e.g. 32) or a filterbank-based coder (large number of frequency lines, e.g. 512).
  • a perceptual ("psycho-acoustic") model is used to estimate the actual time dependent masking threshold.
  • the spectral (“subband” or “frequency domain”) components are quantized and coded in such a way that the quantization noise is hidden under the actual transmitted signal and is not perceptible after decoding. This is achieved by varying the granularity of quantization of the spectral values over time and frequency.
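  • as an illustration of the time- and frequency-varying quantization granularity mentioned above, the minimal Python sketch below (band edges and step sizes are hypothetical, not taken from this text) quantizes a toy spectrum with a coarser step in bands where a masking threshold would permit more noise:

        import numpy as np

        def quantize_bands(spectrum, band_edges, step_per_band):
            # Quantize spectral values with a per-band step size ("granularity"):
            # the coarser the step, the more quantization noise is allowed in that band.
            out = np.empty_like(spectrum)
            for (lo, hi), step in zip(zip(band_edges[:-1], band_edges[1:]), step_per_band):
                out[lo:hi] = np.round(spectrum[lo:hi] / step) * step
            return out

        spectrum = np.linspace(1.0, 0.1, 16)                                  # toy spectral values
        quantized = quantize_bands(spectrum, [0, 4, 8, 16], [0.02, 0.1, 0.4])
        print(np.round(quantized - spectrum, 3))                              # per-band error differs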
  • an alternative structure is a perceptual audio coder which separates the aspects of irrelevance reduction (i.e. noise shaping according to perceptual criteria) and redundancy reduction (i.e. obtaining a mathematically more compact representation of information) by using a so-called pre-filter rather than a variable quantization of the spectral coefficients over frequency.
  • the principle is illustrated in the following figure.
  • the input signal is analyzed by a perceptual model to compute an estimate of the masking threshold curve over frequency.
  • the masking threshold is converted into a set of pre-filter coefficients such that the magnitude of its frequency response is inversely proportional to the masking threshold.
  • the pre-filter operation applies this set of coefficients to the input signal which produces an output signal wherein all frequency components are represented according to their perceptual importance ("perceptual whitening").
  • This signal is subsequently coded by any kind of audio coder which produces a "white” quantization distortion, i.e. does not apply any perceptual noise shaping.
  • the transmission / storage of the audio signal includes both the coder's bit-stream and a coded version of the pre-filtering coefficients.
  • the coder bit-stream is decoded into an intermediate audio signal which is then subjected to a post-filtering operation according to the transmitted filter coefficients.
  • since the post-filter performs the inverse filtering process relative to the pre-filter, it applies a spectral weighting to its input signal according to the masking curve. In this way, the spectrally flat ("white") coding noise appears perceptually shaped at the decoder output, as intended.
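  • a minimal Python sketch of the pre-/post-filter principle described above (the FIR coefficients are placeholders; in an actual codec they would be derived from the masking threshold): the encoder-side pre-filter is applied as an FIR filter and the decoder-side post-filter is its exact all-pole inverse, so the cascade restores the input:

        import numpy as np
        from scipy.signal import lfilter

        rng = np.random.default_rng(0)
        x = rng.standard_normal(4096)             # stand-in for a block of audio samples

        # Hypothetical pre-filter coefficients; in the codec |H_pre(f)| would be
        # roughly inversely proportional to the masking threshold.
        b_pre = np.array([1.0, -0.9, 0.25])

        y_pre = lfilter(b_pre, [1.0], x)          # encoder side: perceptual "whitening"
        y_post = lfilter([1.0], b_pre, y_pre)     # decoder side: inverse (all-pole) post-filter

        print(np.max(np.abs(x - y_post)))         # tiny: the post-filter undoes the pre-filter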
  • in order to enable appropriate spectral noise shaping by using pre-/post-filtering techniques, it is important to adapt the frequency resolution of the pre-/post-filter to that of the human auditory system. Ideally, the frequency resolution would follow well-known perceptual frequency scales, such as the BARK or ERB frequency scale [Zwi]. This is especially desirable in order to minimize the order of the pre-/post-filter model and thus the associated computational complexity and side information transmission rate.
  • the adaptation of the pre-/post-filter frequency resolution can be achieved by the well-known frequency warping concept [KHL97].
  • the unit delays within a filter structure are replaced by (first or higher order) allpass filters which leads to a non-uniform deformation ("warping") of the frequency response of the filter.
  • audio coders typically use a filter order between 8 and 20 at common sampling rates like 48kHz or 44.1kHz [WSKH05].
  • other applications of warped filtering include, e.g., the modeling of room impulse responses [HKS00] and the parametric modeling of a noise component in the audio signal (under the equivalent name Laguerre / Kautz filtering) [SOB03].
  • LPC: Linear Predictive Coding
  • MPE: Multi-Pulse Excitation
  • RPE: Regular Pulse Excitation
  • CELP: Code-Excited Linear Prediction
  • Linear Predictive Coding attempts to produce an estimate of the current sample value of a sequence based on the observation of a certain number of past values as a linear combination of the past observations.
  • the encoder LPC filter "whitens" the input signal in its spectral envelope, i.e. its frequency response is a model of the inverse of the signal's spectral envelope.
  • the frequency response of the decoder LPC filter is a model of the signal's spectral envelope.
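  • a minimal Python sketch of this LPC principle (a plain autocorrelation/Toeplitz solve is used here instead of the Levinson-Durbin recursion common in speech coders; order and test signal are arbitrary): the analysis filter A(z) whitens the spectral envelope and the synthesis filter 1/A(z) restores it:

        import numpy as np
        from scipy.linalg import solve_toeplitz
        from scipy.signal import lfilter

        def lpc(frame, order=10):
            # Autocorrelation-method LPC: returns A(z) = [1, -a1, ..., -a_order].
            r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
            a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
            return np.concatenate(([1.0], -a))

        rng = np.random.default_rng(0)
        frame = lfilter([1.0], [1.0, -1.6, 0.8], rng.standard_normal(1024))  # synthetic AR "speech"

        A = lpc(frame, order=10)
        residual = lfilter(A, [1.0], frame)       # encoder: analysis filter (whitened envelope)
        rebuilt = lfilter([1.0], A, residual)     # decoder: synthesis filter (envelope restored)
        print(np.max(np.abs(frame - rebuilt)))    # tiny: synthesis inverts analysis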
  • AR: auto-regressive (as used in linear predictive analysis)
  • narrow-band speech coders, i.e. speech coders with a sampling rate of 8 kHz, typically employ an LPC filter with an order between 8 and 12. Due to the nature of the LPC filter, a uniform frequency resolution is effective across the full frequency range. This does not correspond to a perceptual frequency scale.
  • [TML94] proposes a speech coder that models the speech spectral envelope by cepstral coefficients c(m), which are updated sample by sample according to the time-varying input signal.
  • the frequency scale of the model is adapted to approximate the perceptual MEL scale [Zwi] by using a first order all-pass filter instead of the usual unit delay.
  • a fixed value of 0.31 for the warping coefficient is used at the coder sampling rate of 8kHz.
  • the approach has been developed further to include a CELP coding core for representing the excitation signal in [KTK95], again using a fixed value of 0.31 for the warping coefficient at the coder sampling rate of 8kHz.
  • warped LPC and CELP coding are known, e.g. from [HLM99], where a warping factor of 0.723 is used at a sampling rate of 44.1 kHz.
  • general audio coders are optimized to perfectly hide the quantization noise below the masking threshold, i.e., are optimally adapted to perform an irrelevance reduction. To this end, they have a functionality for accounting for the non-uniform frequency resolution of the human hearing mechanism.
  • however, due to the fact that they are general audio encoders, they cannot specifically make use of any a-priori knowledge about specific kinds of signal patterns, which is what enables the very low bitrates known from, e.g., dedicated speech coders.
  • this object is achieved by an audio encoder of claim 1, an audio decoder of claim 25, an encoded audio signal of claim 43, a method of encoding an audio signal of claim 44, a method of decoding an encoded audio signal of claim 45, an audio processor of claim 46, a method of processing an audio signal of claim 48 or a computer program of claim 49.
  • the present invention is based on the finding that a pre-filter having a variable warping characteristic on the audio encoder side is the key feature for integrating two different coding algorithms into a single encoder framework.
  • the first coding algorithm is adapted to a specific signal pattern such as speech signals, although other specific patterns such as harmonic, pitched or transient patterns are an option as well, while the second coding algorithm is suitable for encoding a general audio signal.
  • the pre-filter on the encoder-side or the post-filter on the decoder-side make it possible to integrate the signal specific coding module and the general coding module within a single encoder/decoder framework.
  • the input for the general audio encoder module or the signal-specific encoder module can be warped to a higher or lower degree, or not at all. This depends on the specific signal and the implementation of the encoder modules. Thus, the interrelation of which warp filter characteristic belongs to which coding module can be signaled. In several cases the result might be that the stronger warping characteristic belongs to the general audio coder and the lighter or no warping characteristic belongs to the signal-specific module. This situation can - in some embodiments - be fixedly set, or it can be the result of dynamically signaling the encoder module for a certain signal portion.
  • since the coding algorithm adapted for specific signal patterns normally does not rely heavily on using the masking threshold for irrelevance reduction, this coding algorithm does not necessarily need any warping pre-processing, or only a "soft" warping pre-processing.
  • the first coding algorithm adapted for a specific signal pattern advantageously uses a-priori knowledge of the specific signal pattern, but does not rely that much on the masking threshold and, therefore, does not need to approximate the non-uniform frequency resolution of the human listening mechanism.
  • the non-uniform frequency resolution of the human listening mechanism is reflected by scale factor bands having different bandwidths along the frequency scale. This non-uniform frequency scale is also known as the BARK or ERB scale.
  • the second coding algorithm, on the other hand, can only produce an acceptable output bitrate together with an acceptable audio quality when a measure is taken which accounts for the non-uniform frequency resolution of the human listening mechanism, so that optimum benefit can be drawn from the masking threshold.
  • therefore, the inventive pre-filter only warps to a strong degree when there is a signal portion not having the specific signal pattern, while for a signal portion having the specific signal pattern, no warping at all or only a small warping characteristic is applied.
  • the pre-filter can perform different tasks using the same filter.
  • the pre-filter works as an LPC analysis filter so that the first encoding algorithm is only related to the encoding of the residual signal or the LPC excitation signal.
  • the pre-filter is controlled to have a strong warping characteristic and, preferably, to perform LPC filtering based on the psycho-acoustic masking threshold so that the pre-filtered output signal is filtered by the frequency-warped filter and is such that psychoacoustically more important spectral portions are amplified with respect to psychoacoustically less important spectral portions.
  • a straight-forward quantizer can be used, or, generally stated, quantization during encoding can take place without having to distribute the coding noise non-uniformly over the frequency range in the output of the warped filter.
  • the noise shaping of the quantization noise will automatically take place through the post-filtering action of the time-varying warped filter on the decoder side. With respect to the warping characteristic, this post-filter is identical to the encoder-side pre-filter and, since it is the inverse of the pre-filter, it automatically produces the noise shaping required to obtain a maximum irrelevance reduction while maintaining a high audio quality.
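  • the noise-shaping effect just described can be illustrated with a small sketch (reusing the hypothetical pre-filter coefficients from the sketch above): flat quantization noise added to the pre-filtered signal passes only through the all-pole post-filter on the decoder side, so the reconstruction error is spectrally shaped like the assumed masking curve:

        import numpy as np
        from scipy.signal import lfilter, welch

        rng = np.random.default_rng(1)
        x = rng.standard_normal(1 << 15)
        b_pre = np.array([1.0, -0.9, 0.25])            # hypothetical masking-derived pre-filter

        y_pre = lfilter(b_pre, [1.0], x)               # encoder: pre-filtering
        noise = 0.01 * rng.standard_normal(x.size)     # model of flat ("white") quantization noise
        y_hat = lfilter([1.0], b_pre, y_pre + noise)   # decoder: post-filtering

        # The reconstruction error is the noise filtered by the all-pole post-filter,
        # i.e. it follows the shape of the assumed masking curve, not a flat spectrum.
        f, err_psd = welch(y_hat - x, nperseg=1024)
        print(err_psd[:3], err_psd[-3:])               # low- vs. high-frequency error differs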
  • Preferred embodiments of the present invention provide a uniform method that allows coding of both general audio signals and speech signals with a coding performance that - at least - matches the performance of the best known coding schemes for both types of signals. It is based on the following considerations:
  • this dilemma is solved by a coding system that includes an encoder filter that can smoothly fade its characteristic between a fully warped operation, as is generally preferable for coding of music signals, and a non-warped operation, as is generally preferable for coding of speech signals.
  • the proposed inventive approach includes a linear filter with a time-varying warping factor. This filter is controlled by an extra input that receives the desired warping factor and modifies the filter operation accordingly.
  • the inverse decoder filtering mechanism is similarly equipped, i.e. it is a linear decoder filter with a time-varying warping factor, and this filter can act as the counterpart of the perceptual pre-filter as well as of the LPC filter.
  • the corresponding decoder works accordingly: It receives the transmitted information, decodes the speech and generic audio parts according to the coding mode information, combines them into a single intermediate signal (e.g. by adding them), and filters this intermediate signal using the coding mode / warping factor and filter coefficients to form the final output signal.
  • the Fig. 1 audio encoder is operative for encoding an audio signal input at line 10.
  • the audio signal is input into a pre-filter 12 for generating a pre-filtered audio signal appearing at line 14.
  • the pre-filter has a variable warping characteristic, the warping characteristic being controllable in response to a time-varying control signal on line 16.
  • the control signal indicates a small or no warping characteristic or a comparatively high warping characteristic.
  • the time-varying warp control signal can be a signal having two different states such as "1" for a strong warp or a "0" for no warping.
  • the intended goal for applying warping is to obtain a frequency resolution of the pre-filter similar to the BARK scale. However, other states of the control signal / warping characteristic setting are also possible.
  • the inventive audio encoder includes a controller 18 for providing the time-varying control signal, wherein the time varying control signal depends on the audio signal as shown by line 20 in Fig. 1 .
  • the inventive audio encoder includes a controllable encoding processor 22 for processing the pre-filtered audio signal to obtain an encoded audio signal output at line 24.
  • the encoding processor 22 is adapted to process the pre-filtered audio signal in accordance with a first coding algorithm adapted to a specific signal pattern, or in accordance with a second, different encoding algorithm suitable for encoding a general audio signal.
  • the encoding processor 22 is adapted to be controlled by the controller 18 preferably via a separate encoder control signal on line 26 so that an audio signal portion being filtered using the comparatively high warping factor is processed using the second encoding algorithm to obtain the encoded signal for this audio signal portion, so that an audio signal portion being filtered using no or only a small warping characteristic is processed using the first encoding algorithm.
  • in some situations when processing an audio signal, no or only a small warp is performed by the filter for a signal being filtered in accordance with the first coding algorithm, while, when a strong and preferably perceptually full-scale warp is applied by the pre-filter, the time portion is processed using the second coding algorithm for general audio signals, which is preferably based on hiding quantization noise below a psycho-acoustic masking threshold.
  • the invention also covers the case that for a further portion of the audio signal, which has the signal-specific pattern, a high warping characteristic is applied while for an even further portion not having the specific signal pattern, a low or no warping characteristic is used.
  • the encoder module control can also be fixedly set depending on the transmitted warping factor or the warping factor can be derived from a transmitted coder module indication.
  • both information items can be transmitted as side information, i.e., the coder module and the warping factor.
  • Fig. 2 illustrates an inventive decoder for decoding an encoded audio signal input at line 30.
  • the encoded audio signal has a first portion encoded in accordance with a first coding algorithm adapted to a specific signal pattern, and has a second portion encoded in accordance with a different second coding algorithm suitable for encoding a general audio signal.
  • the inventive decoder comprises a detector 32 for detecting a coding algorithm underlying the first or the second portion. This detection can take place by extracting side information from the encoded audio signal as illustrated by broken line 34, and/or can take place by examining the bit-stream coming into a decoding processor 36 as illustrated by broken line 38.
  • the decoding processor 36 is for decoding in response to the detector as illustrated by control line 40 so that for both the first and second portions the correct coding algorithm is selected.
  • the decoding processor is operative to use the first coding algorithm for decoding the first time portion and to use the second coding algorithm for decoding the second time portion so that the first and the second decoded time portions are output on line 42.
  • Line 42 carries the input into a post-filter 44 having a variable warping characteristic.
  • the post-filter 44 is controllable using a time-varying warp control signal on line 46 so that this post-filter has only small or no warping characteristic in a first state and has a high warping characteristic in a second state.
  • the post-filter 44 is controlled such that the first time portion decoded using the first coding algorithm is filtered using the small or no warping characteristic and the second time portion of the decoded audio signal is filtered using the comparatively strong warping characteristic so that an audio decoder output signal is obtained at line 48.
  • the first coding algorithm determines the encoder-related steps to be taken in the encoding processor 22 and the corresponding decoder-related steps to be implemented in decoding processor 36. Furthermore, the second coding algorithm determines the encoder-related second coding algorithm steps to be used in the encoding processor and corresponding second coding algorithm-related decoding steps to be used in decoding processor 36.
  • pre-filter 12 and the post-filter 44 are, in general, inverse to each other.
  • the warping characteristics of those filters are controlled such that the post-filter has the same warping characteristic as the pre-filter or at least a similar warping characteristic within a 10 percent tolerance range.
  • when the pre-filter does not apply any warping, the post-filter also does not have to be a warped filter.
  • the pre-filter 12 as well as the post-filter 44 can implement any other pre-filter or post-filter operations required in connection with the first coding algorithm or the second coding algorithm as will be outlined later on.
  • Fig. 3a illustrates an example of an encoded audio signal as obtained on line 24 of Fig. 1 and as can be found on line 30 of Fig. 2 .
  • the encoded audio signal includes a first time portion in encoded form, which has been generated by the first coding algorithm as outlined at 50 and corresponding side information 52 for the first portion.
  • the bit-stream includes a second time portion in encoded form as shown at 54 and side information 56 for the second time portion.
  • the order of the items in Fig. 3a may vary.
  • the side information does not necessarily have to be multiplexed between the main information 50 and 54. Those signals can even come from separate sources as dictated by external requirements or implementations.
  • Fig. 3b illustrates side information for an embodiment of the present invention in which the warping factor and the encoder mode are explicitly signaled; this side information can be used in items 52 and 56 of Fig. 3a.
  • the side information may include a coding mode indication explicitly signaling the first or the second coding algorithm underlying the portion to which the side information belongs.
  • a warping factor can be signaled. Signaling of the warping factor is not necessary, when the whole system can only use two different warping characteristics, i.e., no warping characteristic as the first possibility and a perceptually full-scale warping characteristic as the second possibility. In this case, a warping factor can be fixed and does not necessarily have to be transmitted.
  • the warping factor can have more than these two extreme values so that an explicit signaling of the warping factor such as by absolute values or differentially coded values is used.
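  • a toy Python sketch of these two signaling options (the quantization step size is an assumption, not taken from this text): the warping factor of each portion is transmitted either as an absolute quantized value or only as the difference to the previously transmitted value:

        import numpy as np

        STEP = 1.0 / 64                      # assumed quantization step for the warping factor

        def encode_warp_sequence(factors, differential=True):
            # Per-portion side-information values: absolute or differentially coded.
            q = [int(round(f / STEP)) for f in factors]
            if not differential:
                return q
            return [q[0]] + [b - a for a, b in zip(q, q[1:])]

        def decode_warp_sequence(codes, differential=True):
            q = np.cumsum(codes) if differential else np.asarray(codes)
            return [c * STEP for c in q]

        factors = [0.0, 0.0, 0.33, 0.35, 0.0]        # e.g. speech, speech, music, music, speech
        codes = encode_warp_sequence(factors)
        print(codes, decode_warp_sequence(codes))    # small differences; values restored to 1/64 steps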
  • the pre-filter not only is warped but also implements tasks dictated by the first coding algorithm and the second coding algorithm, which leads to a more efficient functionality of the first and the second coding algorithms.
  • the pre-filter also performs the functionality of the LPC analysis filter and the post-filter on the decoder-side performs the functionality of an LPC synthesis filter.
  • the pre-filter is preferably an LPC filter, which pre-filters the audio signal so that, after pre-filtering, psychoacoustically more important portions are amplified with respect to psychoacoustically less important portions.
  • the post-filter is implemented as a filter for regenerating a situation similar to a situation before pre-filtering, i.e. an inverse filter which amplifies less important portions with respect to more important portions so that the signal after post-filtering is - apart from coding errors - similar to the original audio signal input into the encoder.
  • the filter coefficients for the above described pre-filter are preferably also transmitted via side information from the encoder to the decoder.
  • the pre-filter as well as the post-filter will be implemented as a warped FIR filter, a structure of which is illustrated in Fig. 4 , or as a warped IIR digital filter.
  • the Fig. 4 filter is described in detail in [KHL97].
  • Examples of warped IIR filters are also shown in [KHL97]. All those digital filters have in common that they have warped delay elements 60 and weighting elements with associated weighting coefficients.
  • a filter structure is transformed into a warped filter when a delay element in an unwarped filter structure (not shown here) is replaced by an all-pass filter, such as a first-order all-pass filter D(z), as illustrated on both sides of the filter structures in Fig. 4.
  • the filter structure to the right of Fig. 4 can easily be implemented within the pre-filter as well as within the post-filter, wherein the warping factor is controlled by the parameter λ, while the filter characteristic, i.e., the filter coefficients of the LPC analysis/synthesis or of the pre-filtering or post-filtering for amplifying/damping psycho-acoustically more important portions, is controlled by setting the weighting parameters to appropriate values.
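  • a minimal Python sketch of such a warped FIR filter in the spirit of Fig. 4 and [KHL97] (the tap weights are placeholders; in the codec they would be the masking-threshold-derived or LPC coefficients): each unit delay is replaced by a first-order all-pass section D(z) = (z^-1 - λ)/(1 - λ z^-1), so λ = 0 reduces to an ordinary FIR filter:

        import numpy as np

        def warped_fir(x, taps, lam):
            # Warped FIR filter: the delay line is a chain of first-order all-pass
            # sections w[n] = v[n-1] - lam*v[n] + lam*w[n-1]; lam = 0 gives unit delays.
            order = len(taps) - 1
            prev_in = np.zeros(order)         # v[n-1] of each all-pass section
            prev_out = np.zeros(order)        # w[n-1] of each all-pass section
            y = np.zeros(len(x))
            for n, v in enumerate(x):
                acc = taps[0] * v
                for k in range(order):
                    w = prev_in[k] - lam * v + lam * prev_out[k]
                    prev_in[k], prev_out[k] = v, w
                    acc += taps[k + 1] * w
                    v = w                     # output of this section feeds the next one
                y[n] = acc
            return y

        impulse = np.zeros(16)
        impulse[0] = 1.0
        print(warped_fir(impulse, [1.0, -0.5, 0.1], lam=0.0))   # unwarped: the taps themselves
        print(warped_fir(impulse, [1.0, -0.5, 0.1], lam=0.33))  # warped: stretched impulse response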
  • Fig. 5 illustrates the dependence of the frequency-warping characteristic on the warping factor λ for values of λ between -0.8 and +0.8. No warping at all is obtained when λ is set to 0.0.
  • a psycho-acoustically full-scale warp is obtained by setting λ between 0.3 and 0.4.
  • the optimum warping factor depends on the chosen sampling rate and has a value of between about 0.3 and 0.4 for sampling rates between 32 and 48 kHz.
  • the non-uniform frequency resolution then obtained with the warped filter is similar to the BARK or ERB scale. Substantially stronger warping characteristics can be implemented, but these are only useful in certain situations, which can occur when the controller determines that such higher warping factors are useful.
  • the pre-filter on the encoder side will preferably have positive warping factors λ to increase the frequency resolution in the low-frequency range and to decrease the frequency resolution in the high-frequency range.
  • the post-filter on the decoder-side will also have the positive warping factors.
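  • the frequency mapping behind Fig. 5 follows from the phase response of the all-pass element; the short Python sketch below uses a standard relation from the frequency-warping literature (the explicit formula is an assumption, it is not spelled out in this text) and shows that a positive λ shifts the mapped frequencies upwards, i.e. increases the resolution in the low-frequency range:

        import numpy as np

        def warped_frequency(omega, lam):
            # Phase-derived mapping of the all-pass D(z) = (z^-1 - lam)/(1 - lam*z^-1):
            # a warped filter at input frequency omega behaves like the unwarped
            # prototype evaluated at the returned frequency (both in rad/sample).
            return omega + 2.0 * np.arctan(lam * np.sin(omega) / (1.0 - lam * np.cos(omega)))

        omega = np.linspace(0.0, np.pi, 9)
        for lam in (0.0, 0.33):                  # 0.0: no warp; ~0.33: Bark-like resolution
            print(lam, np.round(warped_frequency(omega, lam) / np.pi, 2))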
  • a preferred inventive time-varying warping filter is shown in Fig. 6 at 70 as a part of the audio processor.
  • the inventive filter is, preferably, a linear filter, which is implemented as a pre-filter or a post-filter for filtering to amplify or damp psycho-acoustically more/less important portions or which is implemented as an LPC analysis/synthesis filter depending on the control signal of the system.
  • the warped filter is a linear filter and does not change the frequency of a component such as a sine wave input into the filter.
  • when the filter before warping is, for example, a low-pass filter, the Fig. 5 diagram has to be interpreted as set out below.
  • the filter would apply - for a warping factor equal to 0.0 - the phase and amplitude weighting defined by the filter impulse response of this unwarped filter.
  • for a sufficiently large positive warping factor, a sine wave having a normalized frequency of 0.6 will be filtered such that the output is weighted by the phase and amplitude weighting which the unwarped filter has for a normalized frequency of 0.97 in Fig. 5. Since this filter is a linear filter, the frequency of the sine wave is not changed.
  • the filter coefficients are derived from the masking threshold. These filter coefficients can be pre- or post-filter coefficients, or LPC analysis/synthesis filter coefficients, or any other filter coefficients useful in connection with any first or second coding algorithms.
  • an audio processor in accordance with the present invention includes, in addition to the filter having a variable warping characteristic, the controller 18 of Fig. 1, or the controller implemented as the coding algorithm detector 32 of Fig. 2, or a general audio input signal analyzer looking for a specific signal pattern in the audio input 10/42, so that a warping characteristic can be set which fits the specific signal pattern, and a time-adapted variable warping of the audio input, be it an encoded or a decoded audio input, can be obtained.
  • the pre-filter coefficients and the post-filter coefficients are identical.
  • the output of the audio processor illustrated in Fig. 6 which consists of the filter 70 and the controller 74 can then be stored for any purposes or can be processed by encoding processor 22, or by an audio reproduction device when the audio processor is on the decoder-side, or can be processed by any other signal processing algorithms.
  • Figs. 7 and 8 show preferred embodiments of the inventive encoder ( Fig. 7 ) and the inventive decoder ( Fig. 8 ).
  • the functionalities of the devices are similar to the Fig. 1 , Fig. 2 devices.
  • Fig. 7 illustrates the embodiment, wherein the first coding algorithm is a speech-coder like coding algorithm, wherein the specific signal pattern is a speech pattern in the audio input 10.
  • the second coding algorithm 22b is a generic audio coder such as the straight-forward filterbank-based audio coder as illustrated and discussed in connection with Fig. 9 , or the pre-filter/post-filter audio coding algorithm as illustrated in Fig. 10 .
  • the first coding algorithm corresponds to the Fig. 11 speech coding system, which, in addition to an LPC analysis/synthesis filter 1100 and 1102 also includes a residual/excitation coder 1104 and a corresponding excitation decoder 1106.
  • the time-varying warped filter 12 in Fig. 7 has the same functionality as the LPC filter 1100, and the LPC analysis implemented in block 1108 in Fig. 11 is implemented in controller 18.
  • the residual/excitation coder 1104 corresponds to the residual/excitation coder kernel 22a in Fig. 7 .
  • the excitation decoder 1106 corresponds to the residual/excitation decoder 36a in Fig. 8.
  • the time-varying warped filter 44 has the functionality of the inverse LPC filter 1102 for a first time portion being coded in accordance with the first coding algorithm.
  • the LPC filter coefficients generated by LPC analysis block 1108 correspond to the filter coefficients shown at 90 in Fig. 7 for the first time portion and the LPC filter coefficients input into block 1102 in Fig. 11 correspond to the filter coefficients on line 92 of Fig. 8 .
  • the Fig. 7 encoder includes an encoder output interface 94, which can be implemented as a bit-stream multiplexer, but which can also be implemented as any other device producing a data stream suitable for transmission and/or storage.
  • the Fig. 8 decoder includes an input interface 96, which can be implemented as a bit-stream demultiplexer for de-multiplexing the specific time portion information as discussed in connection with Fig. 3a and for also extracting the required side-information as illustrated in Fig. 3b .
  • both encoding kernels 22a, 22b have a common input 96, and are controlled by the controller 18 via lines 97a and 97b. This control makes sure that, at a certain time instant, only one of both encoder kernels 22a, 22b outputs main and side information to the output interface.
  • both encoding kernels could work fully in parallel, and the encoder controller 18 would make sure that only the output of the encoding kernel indicated by the coding mode information is input into the bit-stream, while the output of the other encoder is discarded.
  • both decoders can operate in parallel and outputs thereof can be added.
  • this embodiment processes, e.g., a speech portion of a signal, such as a certain frequency range or, generally, a signal part, by the first coding algorithm, and the remainder of the signal by the second, general coding algorithm. The outputs of both coders are then transmitted from the encoder to the decoder side.
  • the decoder-side combination makes sure that the signal is rejoined before being post-filtered.
  • any kind of specific controls can be implemented as long as they make sure that the output encoded audio signal 24 has a sequence of first and second portions as illustrated in Fig. 3 or a correct combination of signal portions such as a speech portion and a general audio portion.
  • the coding mode information is used for decoding each time portion using the correct decoding algorithm, so that a time-staggered pattern of first portions and second portions is obtained at the outputs of the decoder kernels 36a and 36b, which are then multiplexed into a single time-domain signal, as illustrated schematically using the adder symbol 36c. Then, at the output of element 36c, there is a time-domain audio signal, which only has to be post-filtered so that the decoded audio signal is obtained.
  • both the encoder in Fig. 7 and the decoder in Fig. 8 may include an interpolator 100 or 102 so that a smooth transition over a certain time portion, which includes at least two samples, but preferably more than 50 samples and even more than 100 samples, can be implemented. This makes sure that coding artifacts are avoided which might be caused by rapid changes of the warping factor and the filter coefficients. Since the post-filter as well as the pre-filter fully operate in the time domain, there are no problems related to block-based implementations, and the warping factor and the filter coefficients can thus be changed at essentially any point in time.
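  • a toy Python sketch of such an interpolation (a linear cross-fade is an assumption; the text above only requires a smooth transition over at least two, preferably more than 50 samples): the warping factor and the filter coefficients are faded sample by sample between the old and the new state, which is possible because the pre-/post-filter operates purely in the time domain:

        import numpy as np

        def fade_states(warp_a, coeffs_a, warp_b, coeffs_b, fade_len=64):
            # Per-sample warping factors and filter coefficients for a smooth
            # transition from state A to state B (linear cross-fade; assumption).
            t = np.linspace(0.0, 1.0, fade_len)
            warps = (1.0 - t) * warp_a + t * warp_b
            coeffs = np.outer(1.0 - t, coeffs_a) + np.outer(t, coeffs_b)
            return warps, coeffs                 # one (warp, coefficient set) pair per sample

        warps, coeffs = fade_states(0.0, [1.0, -0.5, 0.1], 0.33, [1.0, -0.8, 0.3])
        print(warps[:3], warps[-1])              # 0.0 ... 0.33
        print(coeffs[0], coeffs[-1])             # first and last coefficient sets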
  • the generic audio coder kernel 22b as illustrated in Fig. 7 may be identical to the coder 1000 in Fig. 10 .
  • the pre-filter 12 will also perform the functionality of the pre-filter 1002 in Fig. 10 .
  • the perceptual model 1004 in Fig. 10 will then be implemented within controller 18 of Fig. 7 .
  • the filter coefficients generated by the perceptual model 1004 correspond to the filter coefficients on line 90 in Fig. 7 for a time portion for which the second coding algorithm is active.
  • the decoder 1006 in Fig. 10 is implemented by the generic audio decoder kernel 36b in Fig. 8.
  • the post-filter 1008 is implemented by the time-varying warped filter 44 in Fig. 8 .
  • the preferably coded filter coefficients generated by the perceptual model are received, on the decoder-side, on line 92, so that a line titled "filter coefficients" entering post-filter 1008 in Fig. 10 corresponds to line 92 in Fig. 8 for the second coding algorithm time portion.
  • the inventive encoder devices and the inventive decoder devices only use a single, but controllable filter and perform a discrimination on the input audio signal to find out whether the time portion of the audio signal has the specific pattern or is just a general audio signal.
  • a variety of different implementations can be used for determining, whether a portion of an audio signal is a portion having the specific signal pattern or whether this portion does not have this specific signal pattern, and, therefore, has to be processed using the general audio encoding algorithm.
  • preferably, the specific signal pattern is a speech signal.
  • other signal-specific patterns can also be determined and encoded using corresponding signal-specific first encoding algorithms, such as encoding algorithms for harmonic signals, noise signals, tonal signals, pulse-train-like signals, etc.
  • Straightforward detectors are analysis-by-synthesis detectors which, for example, try different encoding algorithms together with different warping factors in order to find the best warping factor together with the best filter coefficients and the best coding algorithm.
  • Such analysis by synthesis detectors are in some cases quite computationally expensive. This does not matter in a situation, wherein there is a small number of encoders and a high number of decoders, since the decoder can be very simple in that case. This is due to the fact that only the encoder performs this complex computational task, while the decoder can simply use the transmitted side-information.
  • the inventive methods can be implemented in hardware or in software.
  • the implementation can be performed using a digital storage medium, in particular a disk or a CD having electronically readable control signals stored thereon, which can cooperate with a programmable computer system such that the inventive methods are performed.
  • the present invention is, therefore, also a computer program product with a program code stored on a machine-readable carrier, the program code being configured for performing at least one of the inventive methods when the computer program product runs on a computer.
  • in other words, the inventive methods can be realized as a computer program having a program code for performing the inventive methods when the computer program runs on a computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Reduction Or Emphasis Of Bandwidth Of Signals (AREA)
  • Transmission Systems Not Characterized By The Medium Used For Transmission (AREA)
  • Stereophonic System (AREA)

Claims (49)

  1. An audio encoder for encoding an audio signal, comprising:
    a pre-filter (12) for generating a pre-filtered audio signal, the pre-filter (12) having a variable frequency warping characteristic, the frequency warping characteristic being controllable in response to a time-varying control signal, the control signal indicating a small or no frequency warping characteristic or a comparatively high frequency warping characteristic;
    a controller (18) for providing the time-varying control signal, the time-varying control signal depending on the audio signal; and
    a controllable encoding processor (22) for processing the pre-filtered audio signal to obtain an encoded audio signal, the encoding processor (22) being controlled to process the pre-filtered audio signal in accordance with a first coding algorithm adapted to a specific signal pattern, or in accordance with a second, different coding algorithm suitable for encoding a general audio signal.
  2. The audio encoder of claim 1,
    wherein the encoding processor (22) is adapted to use at least part of a speech coding algorithm as the first coding algorithm.
  3. The audio encoder of claim 1, wherein the encoding processor (22) is adapted to use a residual/excitation coding algorithm as part of the first coding algorithm, the residual/excitation coding algorithm comprising a code-excited linear prediction (CELP) coding algorithm, a multi-pulse excitation (MPE) coding algorithm or a regular pulse excitation (RPE) coding algorithm.
  4. The audio encoder of claim 1, wherein the encoding processor (22) is adapted to use a filterbank-based, transform-based or time-domain-based coding algorithm as the second coding algorithm.
  5. The audio encoder of claim 1, further comprising a psycho-acoustic module for providing information on a masking threshold, and
    wherein the pre-filter (12) is operative to perform a filtering operation based on the masking threshold so that, in the pre-filtered audio signal, psycho-acoustically more important portions are amplified with respect to psycho-acoustically less important portions.
  6. The audio encoder of claim 5, wherein the pre-filter (12) is a linear filter having a controllable warping factor, the controllable warping factor being determined by the time-varying control signal, and
    wherein filter coefficients are determined by an analysis based on the masking threshold.
  7. The audio encoder of claim 1, wherein the first coding algorithm comprises a residual or excitation coding step, and the second coding algorithm comprises a general audio coding step.
  8. The audio encoder of claim 1, wherein the encoding processor (22) comprises:
    a first coding kernel (22a) for applying the first coding algorithm to the audio signal;
    a second coding kernel (22b) for applying the second coding algorithm to the audio signal,
    wherein both coding kernels (22a, 22b) have a common input connected to an output of the pre-filter (12), and both coding kernels have separate outputs,
    wherein the audio encoder further comprises an output stage (94) for outputting the encoded signal, and
    wherein the controller (18) is operative to connect only an output of the coding kernel which is indicated by the controller as being active for a time portion to the output stage.
  9. The audio encoder of claim 1, wherein the encoding processor (22) comprises:
    a first coding kernel (22a) for applying the first coding algorithm to the audio signal;
    a second coding kernel (22b) for applying the second coding algorithm to the audio signal,
    wherein both coding kernels (22a, 22b) have a common input connected to an output of the pre-filter (12), and both coding kernels have a separate output, and
    wherein the controller (18) is operative to activate the coding kernel selected by a coding mode indication and to deactivate the coding kernel not selected by the coding mode indication, or to activate both coding kernels for different parts of the same time portion of the audio signal.
  10. The audio encoder of claim 1, further comprising an output stage (94) for outputting the time-varying control signal, or a signal derived from the time-varying control signal by quantization or coding, as side information to the encoded signal.
  11. The audio encoder of claim 6, further comprising an output stage (94) for outputting information on the masking threshold as side information to the encoded audio signal.
  12. The audio encoder of claim 6, wherein the encoding processor (22), when the second coding algorithm is applied, is operative to quantize the pre-filtered audio signal using a quantizer having a quantization characteristic which introduces a quantization noise having a flat spectral distribution.
  13. The audio encoder of claim 12, wherein the encoding processor (22), when a second coding algorithm is applied, is operative to quantize pre-filtered time-domain samples, or subband samples, frequency coefficients or residual samples derived from the pre-filtered audio signal.
  14. The audio encoder of claim 1, wherein the controller (18) is operative to provide the time-varying control signal such that a warping operation increases a frequency resolution in a low-frequency range and decreases a frequency resolution in a high-frequency range for the comparatively high warping characteristic of the pre-filter in comparison to the small or no warping characteristic of the pre-filter.
  15. The audio encoder of claim 1, wherein the controller (18) comprises an audio signal analyzer for analyzing the audio signal in order to determine the time-varying control signal.
  16. The audio encoder of claim 1, wherein the controller (18) is operative to generate a time-varying control signal which has, in addition to a first extreme state indicating no or only a small warping characteristic and a second extreme state indicating the maximum warping characteristic, zero, one or more intermediate states indicating a warping characteristic between the extreme states.
  17. The audio encoder of claim 1, further comprising an interpolator (100), the interpolator being operative to control the pre-filter so that the warping characteristic is faded between two warping states signaled by the time-varying control signal over a fading time period comprising at least two time-domain samples.
  18. The audio encoder of claim 17, wherein the fading time period comprises at least 50 time-domain samples between a filter characteristic causing no or a small warping and a filter characteristic causing a comparatively high warping resulting in a warped frequency resolution similar to a BARK or ERB scale.
  19. The audio encoder of claim 17, wherein the interpolator (100) is operative to use a warping factor resulting in a warping characteristic between the two warping characteristics indicated by the time-varying control signal in the fading time period.
  20. The audio encoder of claim 1, wherein the pre-filter (12) is a digital filter having a warped FIR or warped IIR structure, the structure comprising delay elements (60), wherein a delay element is formed such that the delay element has a first-order or higher-order all-pass filter characteristic.
  21. The audio encoder of claim 20, wherein the all-pass filter characteristic is based on the following filter characteristic:
    (z^-1 - λ) / (1 - λ z^-1),
    wherein z^-1 indicates a delay in the discrete-time domain, and wherein λ is a warping factor indicating a stronger warping characteristic for magnitudes of the warping factor closer to "1" and a smaller warping characteristic for magnitudes of the warping factor closer to "0".
  22. The audio encoder of claim 20, wherein the FIR or IIR structure further comprises weighting elements, each weighting element having an associated weighting factor,
    wherein the weighting factors are determined by the filter coefficients for the pre-filter, the filter coefficients comprising LPC analysis or synthesis filter coefficients or masking-threshold-determined analysis or synthesis filter coefficients.
  23. The audio encoder of claim 20, wherein the pre-filter (12) has a filter order between 6 and 30.
  24. The audio encoder of claim 1, wherein the encoding processor (22) is adapted to be controlled by the controller (18) such that an audio signal portion which is filtered using the comparatively high warping characteristic is processed using the second coding algorithm to obtain the encoded signal, and an audio signal which is filtered using the small or no warping characteristic is processed using the first coding algorithm.
  25. An audio decoder for decoding an encoded audio signal, the encoded audio signal having a first portion encoded in accordance with a first coding algorithm adapted to a specific signal pattern and a second portion encoded in accordance with a different second coding algorithm suitable for encoding a general audio signal, the audio decoder comprising:
    a detector (32) for detecting a coding algorithm underlying the first portion or the second portion;
    a decoding processor (36) for decoding, in response to the detector (32), the first portion using the first coding algorithm to obtain a first decoded time portion, and for decoding the second portion using the second coding algorithm to obtain a second decoded time portion; and
    a post-filter (44) having a variable frequency warping characteristic, which is controllable between a first state having a small or no frequency warping characteristic and a second state having a comparatively high frequency warping characteristic.
  26. The audio decoder of claim 25, wherein the post-filter (44) is set such that the warping characteristic during post-filtering is similar to a warping characteristic used during pre-filtering, within a tolerance range of 10% with respect to a warping strength.
  27. The audio decoder of claim 25, wherein the encoded audio signal comprises a coding mode indicator or warping factor information,
    wherein the detector (32) is operative to extract (34) information on the coding mode or on a warping factor from the encoded audio signal, and
    wherein the decoding processor (36) or the post-filter (44) is operative to be controlled using the extracted information.
  28. The audio decoder of claim 27, wherein a warping factor which is derived from the extracted information and used for controlling the post-filter (44) has a positive sign.
  29. The audio decoder of claim 25, wherein the encoded signal further comprises information on filter coefficients which depend on a masking threshold of an original signal underlying the encoded signal, and
    wherein the detector (32) is operative to extract (34) the information on the filter coefficients from the encoded audio signal, and
    wherein the post-filter (44) is adapted to be controlled based on the extracted information on the filter coefficients such that a post-filtered signal is more similar to an original signal than the signal before post-filtering.
  30. The audio decoder of claim 25, wherein the decoding processor (36) is adapted to use a speech coding algorithm as the first coding algorithm.
  31. The audio decoder of claim 25, wherein the decoding processor (36) is adapted to use a residual/excitation decoding algorithm (36a) as the first coding algorithm.
  32. The audio decoder of claim 25, wherein the residual/excitation decoding algorithm (36a) comprises a portion of the first coding algorithm, the residual/excitation coding algorithm comprising a code-excited linear prediction (CELP) coding algorithm, a multi-pulse excitation (MPE) coding algorithm or a regular pulse excitation (RPE) coding algorithm.
  33. The audio decoder of claim 25, wherein the decoding processor (36) is adapted to use a filterbank-based or transform-based or time-domain-based decoding algorithm (36b) as a second coding algorithm.
  34. The audio decoder of claim 25, wherein the decoding processor (36) comprises a first coding kernel (36a) for applying the first coding algorithm to the encoded audio signal;
    a second coding kernel (36b) for applying a second coding algorithm to the encoded audio signal,
    wherein both coding kernels have an output, each output being connected to a combiner (36c), the combiner having an output connected to an input of the post-filter (44), wherein the coding kernels are controlled such that only a decoded time portion output by a selected coding algorithm is forwarded to the combiner and the post-filter, or different parts of the same time portion of the audio signal are processed by different coding kernels, and the combiner is operative to combine decoded representations of the different parts.
  35. The audio decoder of claim 25, wherein the decoding processor (36), when the second coding algorithm is applied, is operative to dequantize an audio signal which has been quantized using a quantizer having a quantization characteristic which introduces a quantization noise having a flat spectral distribution.
  36. The audio decoder of claim 25, wherein the decoding processor (36), when the second coding algorithm is applied, is operative to dequantize quantized time-domain samples, quantized subband samples, quantized frequency coefficients or quantized residual samples.
  37. The audio decoder of claim 25, wherein the detector (32) is operative to provide a time-varying post-filter control signal (92) such that a warped filter output signal has a decreased frequency resolution in a high-frequency range and an increased frequency resolution in a low-frequency range for the comparatively high warping characteristic of the post-filter, compared to a filter output signal of a post-filter having a small or no warping characteristic.
  38. The audio decoder of claim 25, further comprising an interpolator (102) for controlling the post-filter such that the warping characteristic is faded between two warping states over a fading time period comprising at least two time-domain samples.
  39. The audio decoder of claim 25, wherein the post-filter (44) is a digital filter having a warped FIR or warped IIR structure, the structure comprising delay elements, wherein a delay element is formed such that the delay element has a first-order or higher-order all-pass filter characteristic.
  40. Audio decoder in accordance with claim 25, wherein the all-pass filter characteristic is based on the following filter characteristic:
    (z^(-1) - λ) / (1 - λ·z^(-1)),
    wherein z^(-1) denotes a delay in the discrete-time domain and wherein λ is a warping factor, a warping factor magnitude closer to "1" indicating a stronger warping characteristic and a magnitude of the warping factor closer to "0" indicating a smaller warping characteristic.
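
Written out as a difference equation, the filter characteristic given in claim 40 corresponds to y[n] = −λ·x[n] + x[n−1] + λ·y[n−1]. The Python sketch below is an illustrative direct implementation of that recursion; the function and variable names are assumptions, not terms from the specification.

    from typing import List

    def allpass_first_order(x: List[float], lam: float) -> List[float]:
        # y[n] = -lam*x[n] + x[n-1] + lam*y[n-1]; for lam = 0 this reduces to a
        # plain unit delay z^(-1), i.e. no warping, consistent with the reading
        # of the warping factor in claim 40.
        y: List[float] = []
        x_prev = y_prev = 0.0
        for xn in x:
            yn = -lam * xn + x_prev + lam * y_prev
            y.append(yn)
            x_prev, y_prev = xn, yn
        return y
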
  41. Audio decoder in accordance with claim 25, wherein the warped FIR or warped IIR structure further comprises weighting elements, each weighting element having an associated weighting factor,
    wherein the weighting factors are determined by the filter coefficients for the pre-filter, the filter coefficients comprising LPC analysis or synthesis filter coefficients or masking-threshold-determined analysis or synthesis filter coefficients.
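
Combining claims 39 to 41, a warped FIR structure replaces every unit delay of an ordinary FIR filter with the first-order all-pass sketched above and weights the chain outputs with the pre-filter coefficients (for example coefficients derived from an LPC analysis). The sketch below is one assumed realization for illustration, not the patented structure itself.

    from typing import List

    def warped_fir(x: List[float], coeffs: List[float], lam: float) -> List[float]:
        # Chain of first-order all-pass sections; the taps d_0..d_{K-1} are
        # weighted with the given coefficients. With lam = 0 this is a normal FIR.
        k = len(coeffs)
        prev_in = [0.0] * (k - 1)
        prev_out = [0.0] * (k - 1)
        y: List[float] = []
        for xn in x:
            taps = [xn]                     # d_0[n] = x[n]
            sec_in = xn
            for s in range(k - 1):          # d_1..d_{K-1} through the all-pass chain
                sec_out = -lam * sec_in + prev_in[s] + lam * prev_out[s]
                prev_in[s], prev_out[s] = sec_in, sec_out
                taps.append(sec_out)
                sec_in = sec_out
            y.append(sum(b * d for b, d in zip(coeffs, taps)))
        return y

A routine of this kind could serve as the variable post-filter (44), with λ and the weighting factors taken from the transmitted side information; this is an assumption of the sketch, not a statement of the claims.
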
  42. Audio decoder in accordance with claim 25, wherein the post-filter (44) is controlled such that the first decoded time portion is filtered using the small or no warping characteristic and the second decoded time portion is filtered using a comparatively high warping characteristic.
  43. Encoded audio signal, comprising a first time portion (50) encoded in accordance with a first coding algorithm (22a) adapted to a specific signal pattern, a second time portion (54) encoded in accordance with a different second coding algorithm (22b) suitable for encoding a general audio signal, and, as side information (52, 56), a frequency warping factor indicating a frequency warping strength underlying the first or the second portion of the encoded audio signal, or filter coefficient information indicating a pre-filter used for encoding the audio signal or indicating a post-filter to be used when decoding the audio signal.
  44. Method of encoding an audio signal, comprising the following steps:
    generating (12) a pre-filtered audio signal using a pre-filter, the pre-filter having a variable frequency warping characteristic, the frequency warping characteristic being controllable in response to a time-variable control signal, the control signal indicating a small or no frequency warping characteristic or a comparatively high frequency warping characteristic;
    providing (18) the time-variable control signal, the time-variable control signal depending on the audio signal; and
    processing the pre-filtered audio signal to obtain an encoded audio signal in accordance with a first coding algorithm (22a) adapted to a specific signal pattern or in accordance with a second, different coding algorithm (22b) suitable for encoding a general audio signal, the processing step being performed such that an audio signal portion filtered using a comparatively high frequency warping characteristic is processed using the second coding algorithm, and an audio signal portion filtered using a small or no frequency warping characteristic is processed using the first coding algorithm.
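
The routing rule of the processing step in claim 44 (high warping goes to the second, general-audio coding algorithm; small or no warping goes to the first, pattern-specific algorithm) can be sketched as follows. The threshold value and every name below are assumptions made for this illustration only.

    from typing import Callable, Dict, List

    def encode_portion(prefiltered: List[float], warp_factor: float,
                       encode_first: Callable[[List[float]], bytes],
                       encode_second: Callable[[List[float]], bytes],
                       warp_threshold: float = 0.3) -> Dict:
        # Portions pre-filtered with a comparatively high warping characteristic
        # go to the general-audio coder; the others to the pattern-specific coder.
        use_second = warp_factor > warp_threshold
        return {
            "algorithm": "second" if use_second else "first",
            "warp_factor": warp_factor,   # carried as side information (52, 56)
            "payload": (encode_second if use_second else encode_first)(prefiltered),
        }
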
  45. Method of decoding an encoded audio signal, the encoded audio signal having a first portion encoded in accordance with a first coding algorithm adapted to a specific signal pattern and a second portion encoded in accordance with a different second coding algorithm suitable for encoding a general audio signal, the method comprising the following steps:
    detecting (32) a coding algorithm underlying the first portion or the second portion;
    decoding (36), in response to the detecting step, the first portion using the first coding algorithm to obtain a first decoded time portion, and decoding the second portion using the second coding algorithm to obtain a second decoded time portion; and
    post-filtering (44) using a variable frequency warping characteristic controllable between a first state having a small or no frequency warping characteristic and a second state having a comparatively high frequency warping characteristic.
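
Put together, the three steps of claim 45 amount to the pipeline sketched below. It reuses the illustrative helpers from the earlier sketches (decode_first, decode_second, warped_fir) and assumes that detection simply reads the hypothetical side-information field of each portion; none of this is prescribed by the specification.

    from typing import Iterable, List

    def decode_stream(portions: Iterable, coeffs: List[float]) -> List[float]:
        # portions: EncodedPortion objects (see the sketch after claim 34);
        # coeffs: post-filter weighting factors, e.g. derived from LPC coefficients.
        out: List[float] = []
        for p in portions:
            kernel = decode_first if p.algorithm == "first" else decode_second
            out.extend(warped_fir(kernel(p.payload), coeffs, p.warp_factor))
        return out
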
  46. Audio processor for processing an audio signal (10, 42), comprising:
    a filter (70, 12, 44) for generating a filtered audio signal (14, 48), the filter having a variable frequency warping characteristic, the frequency warping characteristic being controllable in response to a time-variable control signal (72, 16, 46), the control signal indicating a small or no frequency warping characteristic or a comparatively high frequency warping characteristic, wherein the filter is a linear filter which, depending on the control signal, is implemented as a common pre-filter for two different coding algorithms or as a common post-filter for two different decoding algorithms, for filtering in order to amplify or attenuate psychoacoustically more or less important portions, or is implemented as an LPC analysis or synthesis filter; and
    a controller (74, 18, 32) for providing the time-variable control signal (72, 16, 46), the time-variable control signal depending on the audio signal.
  47. Audio processor in accordance with claim 46, wherein the linear filter (70, 12, 44) is a low-pass filter.
  48. Method of processing an audio signal (10, 42), comprising the following steps:
    generating (70, 12, 44) a filtered audio signal (14, 48) using a filter, the filter having a variable frequency warping characteristic, the frequency warping characteristic being controllable in response to a time-variable control signal (72, 16, 46), the control signal indicating a small or no frequency warping characteristic or a comparatively high frequency warping characteristic, wherein the filter is a linear filter which, depending on the control signal, is implemented as a common pre-filter for two different coding algorithms or as a common post-filter for two different decoding algorithms, for filtering in order to amplify or attenuate psychoacoustically more or less important portions, or is implemented as an LPC analysis or synthesis filter; and
    providing (74, 18, 32) the time-variable control signal (72, 16, 46), the time-variable control signal depending on the audio signal.
  49. Computer program having a program code for performing all steps of the method in accordance with claim 44, 45 or 48 when the program runs on a computer.
EP06013604A 2006-06-30 2006-06-30 Audiokodierer, Audiodekodierer und Audioprozessor mit einer dynamisch variablen Warp-Charakteristik Active EP1873754B1 (de)

Priority Applications (25)

Application Number Priority Date Filing Date Title
AT06013604T ATE408217T1 (de) 2006-06-30 2006-06-30 Audiokodierer, audiodekodierer und audioprozessor mit einer dynamisch variablen warp-charakteristik
DE602006002739T DE602006002739D1 (de) 2006-06-30 2006-06-30 Audiokodierer, Audiodekodierer und Audioprozessor mit einer dynamisch variablen Warp-Charakteristik
EP06013604A EP1873754B1 (de) 2006-06-30 2006-06-30 Audiokodierer, Audiodekodierer und Audioprozessor mit einer dynamisch variablen Warp-Charakteristik
EP08014723A EP1990799A1 (de) 2006-06-30 2006-06-30 Audiokodierer, Audiodekodierer und Audioprozessor mit einer dynamisch variablen Warp-Charakteristik
CA2656423A CA2656423C (en) 2006-06-30 2007-05-16 Audio encoder, audio decoder and audio processor having a dynamically variable warping characteristic
JP2009516921A JP5205373B2 (ja) 2006-06-30 2007-05-16 動的可変ワーピング特性を有するオーディオエンコーダ、オーディオデコーダ及びオーディオプロセッサ
KR1020087032110A KR101145578B1 (ko) 2006-06-30 2007-05-16 동적 가변 와핑 특성을 가지는 오디오 인코더, 오디오 디코더 및 오디오 프로세서
RU2009103010/09A RU2418322C2 (ru) 2006-06-30 2007-05-16 Аудиокодер, аудиодекодер и аудиопроцессор, имеющий динамически изменяющуюся характеристику перекоса
PL07725316T PL2038879T3 (pl) 2006-06-30 2007-05-16 Koder audio i dekoder audio mające dynamicznie zmienną charakterystykę odkształcania
MYPI20085310A MY142675A (en) 2006-06-30 2007-05-16 Audio encoder, audio decoder and audio processor having a dynamically variable warping characteristic
ES07725316.9T ES2559307T3 (es) 2006-06-30 2007-05-16 Codificador de audio y decodificador de audio que tiene una característica de deformación dinámicamente variable
US12/305,936 US8682652B2 (en) 2006-06-30 2007-05-16 Audio encoder, audio decoder and audio processor having a dynamically variable warping characteristic
MX2008016163A MX2008016163A (es) 2006-06-30 2007-05-16 Codificador de audio, decodificador de audio y procesador de audio con caracteristicas de warping variable de manera dinamica.
BRPI0712625-5A BRPI0712625B1 (pt) 2006-06-30 2007-05-16 Codificador de áudio, decodificador de áudio, e processador de áudio tendo uma caractéristica de distorção ("warping") dinamicamente variável
PCT/EP2007/004401 WO2008000316A1 (en) 2006-06-30 2007-05-16 Audio encoder, audio decoder and audio processor having a dynamically variable warping characteristic
CN2007800302813A CN101501759B (zh) 2006-06-30 2007-05-16 具有动态可变规整特性的音频编码器、音频解码器和音频处理器
EP07725316.9A EP2038879B1 (de) 2006-06-30 2007-05-16 Audiokodierer und audiodekodierer mit einer dynamisch variablen warping-charakteristik
AU2007264175A AU2007264175B2 (en) 2006-06-30 2007-05-16 Audio encoder, audio decoder and audio processor having a dynamically variable warping characteristic
TW096122715A TWI348683B (en) 2006-06-30 2007-06-23 Audio encoder,audio decoder,audio processor having a dynamically variable warping characteristic,storage medium having stored thereon an encoded audio signal,audio encoding method,audio decoding method,audio processing method and program for executing th
ARP070102797A AR061696A1 (es) 2006-06-30 2007-06-25 Codificador de audio , decodificador de audio y procesador de audio que poseen una caracteristica de distorsion variable dinamicamente
HK08103465A HK1109817A1 (en) 2006-06-30 2008-03-27 Audio encoder, audio decoder and audio processor having a dynamically variable warping characteristic
IL195983A IL195983A (en) 2006-06-30 2008-12-16 Audio encoder, audio decoder and audio processor that have a dynamic variable distortion characteristic
NO20090400A NO340436B1 (no) 2006-06-30 2009-01-27 Audiokoder, audiodekoder og audioprosessor med en dynamisk, variabel forvrengningskarakteristikk
HK09108366.0A HK1128811A1 (zh) 2006-06-30 2009-09-11 具有動態可變規整特性的音頻編碼器和音頻解碼器
AU2011200461A AU2011200461B2 (en) 2006-06-30 2011-02-04 Audio encoder, audio decoder and audio processor having a dynamically variable warping characteristic

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP06013604A EP1873754B1 (de) 2006-06-30 2006-06-30 Audiokodierer, Audiodekodierer und Audioprozessor mit einer dynamisch variablen Warp-Charakteristik

Related Child Applications (1)

Application Number Title Priority Date Filing Date
EP08014723A Division EP1990799A1 (de) 2006-06-30 2006-06-30 Audiokodierer, Audiodekodierer und Audioprozessor mit einer dynamisch variablen Warp-Charakteristik

Publications (2)

Publication Number Publication Date
EP1873754A1 EP1873754A1 (de) 2008-01-02
EP1873754B1 true EP1873754B1 (de) 2008-09-10

Family

ID=37402718

Family Applications (2)

Application Number Title Priority Date Filing Date
EP08014723A Withdrawn EP1990799A1 (de) 2006-06-30 2006-06-30 Audiokodierer, Audiodekodierer und Audioprozessor mit einer dynamisch variablen Warp-Charakteristik
EP06013604A Active EP1873754B1 (de) 2006-06-30 2006-06-30 Audiokodierer, Audiodekodierer und Audioprozessor mit einer dynamisch variablen Warp-Charakteristik

Family Applications Before (1)

Application Number Title Priority Date Filing Date
EP08014723A Withdrawn EP1990799A1 (de) 2006-06-30 2006-06-30 Audiokodierer, Audiodekodierer und Audioprozessor mit einer dynamisch variablen Warp-Charakteristik

Country Status (4)

Country Link
EP (2) EP1990799A1 (de)
AT (1) ATE408217T1 (de)
DE (1) DE602006002739D1 (de)
HK (1) HK1109817A1 (de)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2445719C2 (ru) * 2010-04-21 2012-03-20 Государственное образовательное учреждение высшего профессионального образования Академия Федеральной службы охраны Российской Федерации (Академия ФСО России) Способ улучшения восприятия синтезированной речи при реализации процедуры анализа через синтез в вокодерах с линейным предсказанием
CA3160488C (en) 2010-07-02 2023-09-05 Dolby International Ab Audio decoding with selective post filtering
AU2014211583B2 (en) 2013-01-29 2017-01-05 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for selecting one of a first audio encoding algorithm and a second audio encoding algorithm
EP2980794A1 (de) 2014-07-28 2016-02-03 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Audiocodierer und -decodierer mit einem Frequenzdomänenprozessor und Zeitdomänenprozessor
EP2980801A1 (de) 2014-07-28 2016-02-03 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Verfahren zur Schätzung des Rauschens in einem Audiosignal, Rauschschätzer, Audiocodierer, Audiodecodierer und System zur Übertragung von Audiosignalen
EP2980795A1 (de) 2014-07-28 2016-02-03 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Audiokodierung und -decodierung mit Nutzung eines Frequenzdomänenprozessors, eines Zeitdomänenprozessors und eines Kreuzprozessors zur Initialisierung des Zeitdomänenprozessors
EP3079151A1 (de) * 2015-04-09 2016-10-12 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Audiocodierer und verfahren zur codierung eines audiosignals

Also Published As

Publication number Publication date
EP1873754A1 (de) 2008-01-02
ATE408217T1 (de) 2008-09-15
EP1990799A1 (de) 2008-11-12
DE602006002739D1 (de) 2008-10-23
HK1109817A1 (en) 2008-06-20

Similar Documents

Publication Publication Date Title
US7873511B2 (en) Audio encoder, audio decoder and audio processor having a dynamically variable warping characteristic
US8682652B2 (en) Audio encoder, audio decoder and audio processor having a dynamically variable warping characteristic
EP2038879B1 (de) Audiokodierer und audiodekodierer mit einer dynamisch variablen warping-charakteristik
CA2691993C (en) Audio encoder for encoding an audio signal having an impulse-like portion and stationary portion, encoding methods, decoder, decoding method, and encoded audio signal
RU2485606C2 (ru) Схема кодирования/декодирования аудио сигналов с низким битрейтом с применением каскадных переключений
EP2144171B1 (de) Audiokodierer und -dekodierer zur Kodierung und Dekodierung von Frames eines abgetasteten Audiosignals
US20160225384A1 (en) Post filter
Edler et al. Audio coding using a psychoacoustic pre-and post-filter
EP1873754B1 (de) Audiokodierer, Audiodekodierer und Audioprozessor mit einer dynamisch variablen Warp-Charakteristik
AU2016204672B2 (en) Audio encoder and decoder with multiple coding modes
AU2017276209A1 (en) Pitch Filter for Audio Signals and Method for Filtering an Audio Signal with a Pitch Filter

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20070525

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA HR MK YU

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1109817

Country of ref document: HK

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REF Corresponds to:

Ref document number: 602006002739

Country of ref document: DE

Date of ref document: 20081023

Kind code of ref document: P

REG Reference to a national code

Ref country code: HK

Ref legal event code: GR

Ref document number: 1109817

Country of ref document: HK

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

NLV1 Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act
PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20081210

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20081221

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090210

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090110

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

26N No opposition filed

Effective date: 20090611

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090630

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20081210

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090630

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20081211

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090630

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090630

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090630

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20100630

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20100630

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090311

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20080910

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 11

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 12

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 13

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230512

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230621

Year of fee payment: 18

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20240620

Year of fee payment: 19

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20240617

Year of fee payment: 19