EP2036400B1 - Génération de signaux décorrélés - Google Patents

Génération de signaux décorrélés

Info

Publication number
EP2036400B1
EP2036400B1 EP08735224A
Authority
EP
European Patent Office
Prior art keywords
audio input
input signal
signal
output signal
decorrelator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP08735224A
Other languages
German (de)
English (en)
Other versions
EP2036400A1 (fr)
Inventor
Jürgen HERRE
Karsten Linzmeier
Harald Popp
Jan Plogsties
Harald Mundt
Sascha Disch
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Original Assignee
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV filed Critical Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Publication of EP2036400A1 publication Critical patent/EP2036400A1/fr
Application granted granted Critical
Publication of EP2036400B1 publication Critical patent/EP2036400B1/fr
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S5/00 Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S1/00 Two-channel systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S1/00 Two-channel systems
    • H04S1/002 Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/03 Application of parametric coding in stereophonic audio systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/05 Application of the precedence or Haas effect, i.e. the effect of the first wavefront, in order to improve sound-source localisation

Definitions

  • the present invention relates to an apparatus and a method for generating decorrelated signals, and more particularly to how decorrelated signals can be derived from a signal containing transients such that the reconstruction of a multi-channel audio signal, or a subsequent combination of the decorrelated signal and the transient signal, results in no audible signal degradation.
  • application examples are the stereo up-mix of a mono signal, the multichannel up-mix based on a mono or stereo signal, artificial reverb generation, and broadening of the stereo base.
  • to illustrate the problem, FIGS. 7 and 8 show the use of decorrelators in signal processing. The mono-to-stereo decoder shown in Fig. 7 will first be discussed briefly.
  • the mono-to-stereo decoder serves to transform a fed-in mono signal 14 into a stereo signal 16 consisting of a left channel 16a and a right channel 16b. From the fed-in mono signal 14, the standard decorrelator 10 generates a decorrelated signal 18 (D), which is applied to the inputs of the mix matrix 12 together with the fed-in mono signal 14.
  • the untreated mono signal is often referred to as the "dry" signal, whereas the decorrelated signal D is called the "wet" signal.
  • the mix matrix 12 combines the decorrelated signal 18 and the fed-in mono signal 14 to produce the stereo signal 16.
  • the coefficients of the mix matrix 12 (H) can be fixed, signal-dependent or even dependent on a user input.
  • this mixing process performed by the mix matrix 12 may also be frequency selective. That is, different mixing operations or matrix coefficients can be applied for different frequency ranges (frequency bands).
  • the fed-in mono signal 14 can be pre-processed by a filter bank, so that it is present together with the decorrelated signal 18 in a filter bank representation in which the signal components belonging to different frequency bands are processed separately.
  • the control of the up-mix process can be done through user interaction via a mix control 20.
  • the coefficients of the mix matrix 12 (H) can also be controlled by so-called "side information", which is transmitted together with the fed-in mono signal 14 (the downmix).
  • the side information contains a parametric description of how the multichannel signal is to be generated from the fed-in mono signal 14 (the transmitted signal). This spatial side information is usually generated by an encoder before the actual down-mix, i.e. before the generation of the fed-in mono signal 14.
  • a typical example of a parametric stereo decoder is shown in Fig. 8.
  • in addition to the elements of Fig. 7, Fig. 8 shows an analysis filter bank 30 and a synthesis filter bank 32. This is the case because decorrelation is performed frequency-dependently (in the spectral domain). Therefore, the fed-in mono signal 14 is first split by the analysis filter bank 30 into signal components for different frequency ranges. That is, for each frequency band, a separate decorrelated signal is generated analogously to the example described above.
  • spatial parameters 34 are transmitted which serve to determine or vary the matrix elements of the mix matrix 12 in order to generate a mixed signal which is transformed back into the time domain by means of the synthesis filter bank 32 to form the stereo signal 16.
  • the spatial parameters 34 can optionally be changed via a parameter control 36 in order to generate the up-mix or the stereo signal 16 differently for different reproduction scenarios or to adapt the quality of reproduction optimally to the respective scenario.
  • the spatial parameters 34 can be combined with parameters of the binaural filters in order to form the parameters controlling the mix matrix 12.
  • the parameters may be changed by direct user interaction or by other tools or algorithms (see for example: Breebaart, Jeroen; Herre, Jürgen; Jin, Craig; Kjörling, Kristofer; Koppens, Jeroen; Plogsties, Jan; Villemoes, Lars: Multi-Channel Goes Mobile: MPEG Surround Binaural Rendering. AES 29th International Conference, Seoul, Korea, 2006 September 2-4).
  • the proportion of the decorrelated signal 18 (D) contained in the output signal is set in the mix matrix 12.
  • the mixing ratio is temporally varied based on the transmitted spatial parameters 34.
  • These parameters can be, for example, parameters which describe the correlation between two original signals (parameters of this type are used, for example, in MPEG surround coding, where they are referred to inter alia as ICC).
  • parameters may be transmitted which describe the energy relationships between two originally present channels contained in the fed-in mono signal 14 (ICLD or ICD in MPEG surround).
  • the matrix elements can be varied by direct user input.
  • Parametric Stereo and MPEG Surround use all-pass filters, i.e. filters that pass the entire spectral range but have a frequency-dependent phase characteristic.
  • in Binaural Cue Coding (BCC, Faller and Baumgarte; see for example C. Faller: "Parametric Coding of Spatial Audio", Ph.D. thesis, EPFL, 2004), a "group delay" is proposed for decorrelation. For this, a frequency-dependent group delay is applied to the signal by changing the phases in the DFT spectrum of the signal, so that different frequency ranges are delayed by different amounts. Such a method generally falls under the generic term of phase manipulation.
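As a purely illustrative aside, such a frequency-dependent group delay can be realized by multiplying the DFT spectrum with a frequency-dependent phase term. The following sketch (the function name and the linear delay profile are assumptions of the editor, not taken from the BCC literature) shows one such phase manipulation:

```python
import numpy as np

def group_delay_decorrelate(x, sample_rate, max_delay_s=0.01):
    """Apply a frequency-dependent group delay by modifying DFT phases.

    A delay that rises linearly from 0 at DC to max_delay_s at the
    Nyquist frequency delays higher bands longer than lower ones.
    The linear delay profile is an illustrative choice.
    """
    n = len(x)
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    # Delay in seconds per frequency bin.
    delay = max_delay_s * freqs / (sample_rate / 2.0)
    # A delay of tau seconds is a phase shift of -2*pi*f*tau.
    spectrum *= np.exp(-2j * np.pi * freqs * delay)
    return np.fft.irfft(spectrum, n)
```

Because only phases are changed, the magnitude spectrum (and hence the signal energy) is preserved, which is the defining property of this class of decorrelators.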
  • the US patent application 2006/0053018 describes a synthesizer that generates a decorrelated signal from a plurality of subband signals.
  • Each subband signal is filtered with a reverberation filter.
  • the reverberation-filtered subband signals are combined to form a decorrelated signal.
  • the decorrelation is thus carried out by signal manipulation on a plurality of subband signals.
  • the international patent application WO 2005/086139 describes the decoding of a mono downmix signal obtained from a multi-channel signal.
  • Decorrelated signals used for reconstruction are obtained by dividing the down-mix signal (mixing signal) with a filter bank into subband signals which are subjected to variable phase angles.
  • transient detection is performed so that the decorrelated signals are produced differently in the presence of transient signals.
  • the object of the present invention is to provide an apparatus and a method for decorrelating signals, which improves the signal quality in the presence of transient signals.
  • the present invention is based on the finding that decorrelated output signals can be generated for transient audio input signals by mixing the audio input signal with a representation of the audio input signal delayed by a delay time, such that in a first time interval the first output signal corresponds to the audio input signal and the second output signal corresponds to the delayed representation of the audio input signal, whereas in a second time interval the first output signal corresponds to the delayed representation of the audio input signal and the second output signal corresponds to the audio input signal.
  • two mutually decorrelated signals are derived from an audio input signal by first generating a time-delayed copy of the audio input signal. The two output signals are then generated by alternately using the audio input signal and the delayed representation of the audio input signal as the two output signals.
  • a time delay is used which is frequency-independent and therefore does not smear the attacks of the clapping noise over time.
  • a time delay chain which has a small number of memory elements is a good compromise between the achievable spatial width of a reconstructed signal and the additional memory requirement.
  • the selected delay time is preferably less than 50 ms, particularly preferably less than or equal to 30 ms.
  • the precedence problem is addressed by using the audio input signal directly as the left channel in a first time interval, while in the subsequent second time interval the delayed representation of the audio input signal is used as the left channel. The procedure applies correspondingly to the right channel.
  • the switching time between the individual swap operations is chosen to be greater than the duration of a transient event typically occurring in the signal.
  • the decorrelators of the invention use only an extremely small number of arithmetic operations. In particular, only a single time delay and a small number of multiplications are required to produce decorrelated signals according to the invention.
  • the exchange of individual channels is a simple copy operation and therefore requires no additional computational effort.
  • Optional signal conditioning or post-processing techniques also require only additions or subtractions, that is, operations that can typically be handled by existing hardware. Thus, only a small amount of additional memory is required to implement the delay device or delay line; such memory exists in many systems and can be shared if necessary.
  • Fig. 1 shows an example of a decorrelator according to the invention for generating a first output signal 50 (L ') and a second output signal 52 (R') based on an audio input signal 54 (M).
  • the decorrelator further includes a delay 56 to produce a delayed representation of the audio input signal 58 (M_d).
  • the decorrelator further includes a mixer 60 for combining the delayed representation of the audio input signal 58 with the audio input signal 54 to obtain the first output signal 50 and the second output signal 52.
  • the mixer 60 is formed by the two schematically shown switches, by means of which the audio input signal 54 is alternately switched to the left output signal 50 or the right output signal 52. The same applies to the delayed representation of the audio input signal 58.
  • the mixer 60 of the decorrelator thus functions so that in a first time interval the first output signal 50 corresponds to the audio input signal 54 and the second output signal corresponds to the delayed representation of the audio input signal 58, wherein in a second time interval the first output signal 50 corresponds to the delayed representation of the audio input signal and the second output signal 52 corresponds to the audio input signal 54.
  • a decorrelation is achieved by making a time-delayed copy of the audio input channel 54 and then alternately using the audio input signal 54 and the delayed representation of the audio input signal 58 as output channels.
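The clocked channel swap described above can be sketched in a few lines. This is a minimal illustration, not the patented implementation; the function name is hypothetical, and the default delay (15 ms) and swap interval (100 ms) are merely values from the ranges named elsewhere in the text:

```python
import numpy as np

def swap_decorrelator(m, sample_rate, delay_ms=15.0, interval_ms=100.0):
    """Clocked channel-swap decorrelation of a mono signal m.

    A delayed copy of the input is made, and input and delayed copy
    are alternately assigned to the left and right output channel in
    fixed time intervals.
    """
    delay = int(sample_rate * delay_ms / 1000.0)
    interval = int(sample_rate * interval_ms / 1000.0)
    # Delayed representation: zero-padded, time-shifted copy of m.
    m_d = np.concatenate([np.zeros(delay), m[:len(m) - delay]])
    left = np.empty_like(m)
    right = np.empty_like(m)
    for start in range(0, len(m), interval):
        stop = min(start + interval, len(m))
        if (start // interval) % 2 == 0:   # first time interval
            left[start:stop] = m[start:stop]
            right[start:stop] = m_d[start:stop]
        else:                              # second time interval: swapped
            left[start:stop] = m_d[start:stop]
            right[start:stop] = m[start:stop]
    return left, right
```

Note that only a single delay line and copy operations are required, which matches the low-complexity claim made above.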
  • the components forming the output signals are interchanged in a clocked manner.
  • the length of the time interval after which the signals are swapped, i.e. for which an input signal corresponds to a given output signal, is variable.
  • the time intervals for which the individual components are exchanged can have different lengths. That is, the ratio of those times in which the first output signal 50 consists of the audio input signal 54 and the delayed representation of the audio input signal 58 is variably adjustable.
  • the duration of the time intervals is greater than the average duration of transient components included in the audio input signal 54 in order to obtain a good reproduction of the signal.
  • Suitable durations are in the time interval between 10 ms and 200 ms, with a typical period of time being 100 ms, for example.
  • the duration of the time delay can be adapted to the events of the signal or even be time-variable.
  • the delay times are preferably in an interval of 2 ms to 50 ms. Examples of suitable delay times are 3, 6, 9, 12, 15 or 30 ms.
  • the decorrelator according to the invention can be applied both for continuous audio signals as well as for sampled audio signals, that is, for signals that are present as a result of discrete samples.
  • Fig. 2 illustrates, on the basis of such a signal present as discrete samples, the operation of the decorrelator of Fig. 1.
  • the audio input signal 54 consisting of a sequence of discrete sample values and the delayed representation of the audio input signal 58 are considered.
  • the mixer 60 is shown here only schematically, as two possible connection paths between the audio input signal 54 and the delayed representation of the audio input signal 58 on the one hand and the two output signals 50 and 52 on the other.
  • a first time interval 70 is shown, in which the first output signal 50 corresponds to the audio input signal 54 and the second output signal 52 corresponds to the delayed representation of the audio input signal 58.
  • in a second time interval 72, the first output signal 50 corresponds to the delayed representation of the audio input signal 58 and the second output signal 52 corresponds to the audio input signal 54.
  • the time duration of the first time interval 70 and the second time interval 72 is identical, although this is not a prerequisite, as already mentioned above.
  • the inventive concept for decorrelating signals can be applied in the time domain, ie with the temporal resolution that is given by the sample frequency.
  • Fig. 2a shows another embodiment in which the mixer 60 is arranged such that, in a first time interval, the first output signal 50 is formed from a portion X(t) of the audio input signal 54 and a portion (1-X(t)) of the delayed representation of the audio input signal 58. Accordingly, in the first time interval, the second output signal 52 is formed from a portion X(t) of the delayed representation of the audio input signal 58 and a portion (1-X(t)) of the audio input signal 54.
  • a possible implementation of the function X(t), which could also be called a crossfade function, is shown in Fig. 2b.
  • the mixer 60 functions to combine a representation of the audio input signal 58 delayed by a delay time with the audio input signal 54, providing the first output signal 50 and the second output signal 52 with time-varying portions of the audio input signal 54 and the delayed representation of the audio input signal 58.
  • in a first time interval, the first output signal 50 is formed from a more than 50% proportion of the audio input signal 54 and the second output signal 52 from a more than 50% proportion of the delayed representation of the audio input signal 58.
  • in a second time interval, the first output signal 50 is formed from a more than 50% proportion of the delayed representation of the audio input signal 58 and the second output signal 52 from a more than 50% proportion of the audio input signal.
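The portion-based mixing just described can be sketched as follows; `crossfade_mix` is a hypothetical helper name, and a box-shaped X(t) reproduces the hard channel swap:

```python
import numpy as np

def crossfade_mix(m, m_d, x_of_t):
    """Time-varying mix of direct signal m and delayed signal m_d.

    x_of_t is the crossfade function X(t), sampled once per output
    sample, with values between 0 and 1. X(t) = 1 routes the direct
    signal fully to the first output; X(t) = 0 swaps the channels.
    """
    x = np.asarray(x_of_t)
    first = x * m + (1.0 - x) * m_d    # portion X(t) of the direct signal
    second = x * m_d + (1.0 - x) * m   # portion X(t) of the delayed signal
    return first, second
```

Smoothly varying X(t) between 0 and 1 at the interval boundaries yields the crossfade behaviour of function 69 in Fig. 2b, while a box-shaped X(t) corresponds to function 66.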
  • Fig. 2b shows possible control functions for the mixer 60 shown in Fig. 2a.
  • Plotted on the x-axis is the time t in arbitrary units and on the y-axis the function X (t), which has possible function values from zero to one.
  • other functions X (t) can also be used, which also need not necessarily have a value range of 0 to 1.
  • Other ranges of values for example from 0 to 10, are conceivable.
  • a first function 66, which is box-shaped, corresponds to the case of exchanging the channels (fade-free switching) described in Fig. 2, which is also shown schematically in Fig. 1.
  • in Fig. 2a, the first output signal 50 is completely formed from the audio input signal 54 in the first time interval 62, while in the first time interval 62 the second output signal 52 is completely formed from the delayed representation of the audio input signal 58.
  • in the second time interval 64 the same applies in reverse, whereby the lengths of the time intervals do not necessarily have to be identical.
  • a second, dashed function 68 does not switch the signals completely, generating first and second output signals 50 and 52 which at no time are formed entirely from the audio input signal 54 or the delayed representation of the audio input signal 58. However, in the first time interval 62, the first output signal 50 is formed from a more than 50% proportion of the audio input signal 54; the same applies correspondingly to the second output signal 52.
  • a third function 69 provides fade times 69a to 69c corresponding to the transition times between the first time interval 62 and the second time interval 64, thus marking those times at which the composition of the audio output signals is varied; this achieves a crossfade effect. That is, in a start interval and in an end interval at the beginning and end of the first time interval 62, the first output signal 50 and the second output signal 52 each contain portions of both the audio input signal 54 and the delayed representation of the audio input signal 58.
  • the first output signal 50 corresponds to the audio input signal 54 and the second output signal 52 corresponds to the delayed representation of the audio input signal 58.
  • the steepness of the function 69 at the fade times 69a to 69c can be varied within wide limits in order to adapt the perceived reproduction quality of the audio signal to the circumstances.
  • in a first time interval 62, the first output signal 50 contains a more than 50% proportion of the audio input signal 54 and the second output signal 52 contains a more than 50% proportion of the delayed representation of the audio input signal 58, while in a second time interval 64 the first output signal 50 contains a more than 50% proportion of the delayed representation of the audio input signal 58 and the second output signal 52 contains a more than 50% proportion of the audio input signal.
  • Fig. 3 shows a further embodiment of a decorrelator implementing the inventive concept.
  • the decorrelator shown in Fig. 3 differs from the decorrelator schematically illustrated in Fig. 1 in that the audio input signal 54 and the delayed representation of the audio input signal 58 can be scaled by means of an optional scaling device 74 before they are supplied to the mixer 60.
  • the optional scaling device 74 includes a first scaler 76a and a second scaler 76b, wherein the first scaler 76a may scale the audio input signal 54 and the second scaler 76b may scale the delayed representation of the audio input signal 58.
  • the delay device 56 is fed by the (monophonic) audio input signal 54.
  • the first scaler 76a and the second scaler 76b may optionally vary the intensity of the audio input signal and the delayed representation of the audio input signal.
  • the intensity of the temporally lagging signal (G_lagging), i.e. the delayed representation of the audio input signal 58, is increased and/or the intensity of the leading signal (G_leading), i.e. the audio input signal 54, is lowered.
  • the amplification factors can be chosen so that the total energy is preserved.
  • the gain factors can be defined so that they change signal-dependent.
  • the amplification factors can also be dependent on the side information, so that these are varied depending on the acoustic scenario to be reconstructed.
  • the precedence effect (the effect resulting from the time-delayed repetition of the same signal) can be compensated by varying the intensity of the direct component with respect to the delayed component so as to amplify delayed components and / or attenuate the non-delayed component.
  • the precedence effect caused by the introduced delay can thus be partially compensated for by volume adjustments (intensity adjustments) which are important for spatial hearing.
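One way to derive such an intensity adjustment, assuming the energy-preserving condition mentioned above, is to solve for a gain pair given a desired boost of the lagging signal. The function and its dB parameterization are illustrative assumptions, not coefficients from the patent:

```python
import math

def precedence_gains(boost_db):
    """Energy-preserving gain pair (G_leading, G_lagging).

    Raises the lagging (delayed) signal by boost_db relative to the
    leading one while keeping G_leading**2 + G_lagging**2 = 1, so the
    total energy of the pair is preserved.
    """
    ratio = 10.0 ** (boost_db / 20.0)          # linear amplitude ratio
    g_leading = 1.0 / math.sqrt(1.0 + ratio ** 2)
    g_lagging = ratio * g_leading
    return g_leading, g_lagging
```

For boost_db = 0 both gains equal 1/sqrt(2); increasing the boost attenuates the leading component and amplifies the lagging one, counteracting the precedence effect.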
  • the time interval of the exchange is preferably an integer multiple of the frame length.
  • An example of a typical interchange or permutation period is 100 ms.
  • the first output signal 50 and the second output signal 52 may be output directly as output signals, as shown in Fig. 1.
  • the decorrelator in Fig. 3 additionally has an optional post-processor 80 which combines the first output signal 50 and the second output signal 52 to provide at its output a first post-processed output signal 82 and a second post-processed output signal 84. The post-processor may have several beneficial effects.
  • it can serve to reprocess the signal for further method steps, for example a subsequent up-mix in a multi-channel reconstruction, so that an already existing decorrelator can be replaced by the decorrelator according to the invention without having to modify the rest of the signal processing chain.
  • the decorrelators of FIGS. 1 and 2 can completely replace the prior-art standard decorrelators 10 of FIGS. 7 and 8, whereby the advantages of the decorrelators according to the invention can be easily integrated into existing decoder set-ups.
  • the post-processor 80 is used to reduce the degree of mixing of the direct signal and the delayed signal.
  • the normal combination represented by the above formula can be modified so that, for example, substantially the first output signal 50 is scaled and used as the first post-processed output signal 82, while the second output signal 52 is used as the basis for the second post-processed output signal 84.
  • the post-processor or the mix-matrix describing the post-processor can either be completely bypassed or the matrix coefficients controlling the combination of the signals in the post-processor 80 can be varied such that little or no additional mixing of the signals occurs.
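The post-processor can be viewed as a 2x2 mix matrix H applied to the two output signals; with the identity matrix it is effectively bypassed. The following sketch and the example sum/difference matrix are illustrative assumptions, since the patent leaves the concrete coefficients open:

```python
import numpy as np

def post_process(out1, out2, h=None):
    """Post-processor as a 2x2 mix matrix H applied to two channels.

    h=None (identity) corresponds to bypassing the post-processor, so
    no additional mixing of the signals occurs. Any other H mixes the
    channels, e.g. a normalized sum/difference (mid/side-style) matrix.
    """
    if h is None:
        h = np.eye(2)                      # bypass: identity matrix
    stacked = np.vstack([out1, out2])      # 2 x N channel matrix
    mixed = h @ stacked
    return mixed[0], mixed[1]
```

Varying the coefficients of H toward the identity reduces the degree of mixing of the direct and delayed signal, exactly the behaviour described for the post-processor 80.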
  • Fig. 4 shows another way of avoiding the precedence effect by using a suitably designed decorrelator.
  • here, the first and second scaler units 76a and 76b shown in Fig. 3 are compulsory, whereas the mixer 60 can be omitted.
  • either the audio input signal 54 and / or the delayed representation of the audio input signal 58 is changed or varied in its intensity.
  • the intensity is preferably changed as a function of the delay time of the delay device 56, so that a shorter delay time results in a greater reduction of the intensity of the audio input signal 54.
  • the scaled signals can then be mixed arbitrarily, for example by means of the mid-side coding or one of the other blending algorithms described above.
  • Fig. 5 schematically illustrates an example of a method according to the invention for generating output signals based on an audio input signal 54.
  • a representation of the audio input signal 54 delayed by a delay time is combined with the audio input signal 54 to obtain a first output signal and a second output signal, wherein in a first time interval the first output signal corresponds to the audio input signal 54 and the second output signal corresponds to the delayed representation of the audio input signal, and wherein in a second time interval the first output signal corresponds to the delayed representation of the audio input signal and the second output signal corresponds to the audio input signal.
  • An audio decoder 100 includes a standard decorrelator 102 and a decorrelator 104 that corresponds to one of the above-described decorrelators of the invention.
  • the audio decoder 100 is used to generate a multi-channel output signal 106, which in the case shown has two channels by way of example.
  • the multi-channel output is generated based on an audio input signal 108, which may be a mono signal as shown.
  • the standard decorrelator 102 corresponds to decorrelators known from the prior art, and the audio decoder is arranged to use the standard decorrelator 102 in a standard mode of operation and to alternatively use the decorrelator 104 for a transient audio input signal 108.
  • the multichannel representation generated by the audio decoder thus becomes possible with good quality even in the presence of transient input signals or transient downmix signals.
  • the basic intention is therefore to apply decorrelators according to the invention when highly decorrelated and transient signals are to be processed. If transient signals can be detected, the decorrelator according to the invention can be used as an alternative to a standard decorrelator.
  • decorrelation information (for example an ICC parameter describing the correlation between two output signals of a multichannel up-mix in the MPEG Surround standard) may additionally be used as a decision criterion to decide which decorrelator to use.
  • depending on the signal, the outputs of the decorrelators according to the invention (for example the decorrelators of Figs. 1 and 3) or of standard decorrelators are used to ensure the best possible reproduction quality at all times.
  • the application of the decorrelators according to the invention in the audio decoder 100 is thus signal-dependent.
  • transient signal components can be detected by, for example, LPC prediction in the signal spectrum or a comparison of the energies contained in the low-frequency spectral range of the signal with those in the high-frequency spectral range.
  • these detection mechanisms already exist or can be easily implemented.
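A minimal sketch of the low-band/high-band energy comparison mentioned above might look as follows; the split frequency and threshold are arbitrary illustrative choices, not values from the patent:

```python
import numpy as np

def is_transient(frame, sample_rate, split_hz=4000.0, threshold=1.0):
    """Simple transient indicator for one signal frame.

    Compares the spectral energy above split_hz with the energy below
    it: clicks and claps have broadband spectra, so their high-band
    energy is comparatively large.
    """
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    low = np.sum(spectrum[freqs < split_hz])
    high = np.sum(spectrum[freqs >= split_hz])
    return high > threshold * low
```

An impulse-like frame triggers the detector, while a low-frequency tone does not, which is the behaviour a decoder would need to switch decorrelators.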
  • An example of already existing indicators are the above-mentioned correlation or coherence parameters of a signal.
  • these parameters can be used to control the amount of decorrelation of the output channels produced.
  • Examples of the use of existing transient signal detection algorithms are found in MPEG Surround, where the control information of the STP tool is suitable for detection and the inter-channel coherence parameters (ICC) can be used.
  • the detection can be done both on the encoder and on the decoder side. In the former case, a signal flag or bit should be transmitted which is evaluated by the audio decoder 100 to switch between the various decorrelators. If the signal processing scheme of the audio decoder 100 is based on overlapping windows for reconstruction of the final audio signal and the overlap of the adjacent windows (frames) is large enough, a simple switch between different decorrelators can be made without introducing audible artifacts.
  • a cross-fading technique can be used in which initially both decorrelators are used in parallel.
  • the signal of the standard decorrelator 102 is then faded out in intensity during the transition to the decorrelator 104, while the signal of the decorrelator 104 is simultaneously faded in.
  • hysteresis switching curves can be used when switching back and forth, which ensure that after switching to a decorrelator it is used for a predetermined minimum time, in order to prevent multiple immediate switches back and forth between the different decorrelators.
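Such a hold-time hysteresis can be sketched as a per-frame decision; the frame granularity, the decorrelator labels, and the minimum hold count are illustrative assumptions of the editor:

```python
def select_decorrelator(transient_flags, min_hold=5):
    """Choose a decorrelator per frame with a minimum hold time.

    transient_flags is one boolean per frame. After each switch, the
    chosen decorrelator is held for min_hold frames, preventing rapid
    back-and-forth switching between the two decorrelators.
    """
    current = "standard"
    hold = 0
    choices = []
    for transient in transient_flags:
        desired = "inventive" if transient else "standard"
        if hold > 0:
            hold -= 1              # still within the hold period
        elif desired != current:
            current = desired      # switch and start a new hold period
            hold = min_hold - 1
        choices.append(current)
    return choices
```

In a real decoder, the switch itself would additionally be smoothed by the crossfade between the two decorrelator outputs described above.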
  • the decorrelators according to the invention can produce a particularly "wide" sound field.
  • a certain amount of a decorrelated signal is added to a direct signal.
  • the quantity of the decorrelated signal or the dominance of the decorrelated signal in the generated output signal usually determines the width of the perceived sound field.
  • the matrix coefficients of this mix matrix are usually controlled by the above-mentioned transmitted correlation parameters or other spatial parameters. Therefore, before switching to a decorrelator according to the invention, the width of the sound field can first be artificially increased by changing the coefficients of the mix matrix so that the broad sound impression arises gradually before the switch to the decorrelators according to the invention. In the opposite case of switching away from the decorrelator according to the invention, the width of the sound impression can be reduced in the same way before the actual switchover takes place.
  • the decorrelators according to the invention have a number of advantages over the prior art, which come into play particularly in the reconstruction of applause-like signals, that is to say of signals which have a high transient signal component.
  • an extremely wide sound field is generated without introducing additional artifacts, which is a great advantage, in particular in the case of transient, applause-like signals.
  • the decorrelators according to the invention can be easily integrated into already existing reproduction chains or decoders and even controlled by parameters which already exist within these decoders in order to achieve the best possible reproduction of a signal. Examples of integration into such existing decoder structures have previously been called Parametric Stereo and MPEG-Surround.
  • the concept according to the invention makes it possible to provide decorrelators which only make extraordinarily small demands on the available computing power, so that on the one hand no expensive investment in hardware is required and, on the other hand, the additional energy consumption of the decorrelators according to the invention is negligible.
  • the first and second time intervals are temporally adjacent and follow each other.
  • the scaler 74 is configured to scale the intensity of the audio input signal 54 as a function of the delay time such that a shorter delay time achieves a greater reduction in the intensity of the audio input signal 54.
  • the mixer 60 is configured to use a delayed representation of the audio input signal 58 whose delay time is greater than 2 ms and less than 50 ms.
  • the delay time is 3, 6, 9, 12, 15 or 30 ms.
  • the mixer 60 is configured to combine the audio input signal 54 and the delayed representation of the audio input signal 58 such that the first and second time intervals are the same length.
  • the mixer 60 is configured to perform the combination such that, within the sequence of time intervals, the duration of the time intervals in a first pair of first 70 and second 72 time intervals differs from the duration of the time intervals in a second pair of first and second time intervals.
  • the duration of the first 70 and second 72 time intervals is greater than twice the average duration of the transient signal components contained in the audio input signal 54.
  • the duration of the first 70 and the second 72 time intervals is greater than 10 ms and less than 200 ms.
  • the first output signal 50 corresponds to the audio input signal 54 and the second output signal 52 corresponds to the delayed representation of the audio input signal 58.
  • the first output signal 50 corresponds to the delayed representation of the audio input signal 58 and the second output signal 52 corresponds to the audio input signal 54.
  • in a start interval and in an end interval at the beginning and at the end of the first time interval 70, the first output signal and the second output signal 52 contain portions of both the audio input signal 54 and the delayed representation of the audio input signal 58; in an intermediate interval between the start interval and the end interval of the first time interval, the first output signal corresponds to the audio input signal 54 and the second output signal 52 corresponds to the delayed representation of the audio input signal 58; and in a start interval and in an end interval at the beginning and at the end of the second time interval 72, the first output signal and the second output signal 52 again contain portions of both the audio input signal 54 and the delayed representation of the audio input signal 58, while in an intermediate interval between the start interval and the end interval of the second time interval, the first output signal corresponds to the delayed representation of the audio input signal 58 and the second output signal 52 corresponds to the audio input signal 54.
  • the inventive method of generating output signals can be implemented in hardware or in software.
  • the implementation can be effected on a digital storage medium, in particular a floppy disk or a CD with electronically readable control signals, which can cooperate with a programmable computer system in such a way that the inventive method of generating output signals is executed.
  • the invention thus also consists in a computer program product with a program code stored on a machine-readable carrier for carrying out the method according to the invention, when the computer program product runs on a computer.
  • the invention can thus be realized as a computer program with a program code for carrying out the method when the computer program runs on a computer.
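The interval-based swapping of the direct and delayed signal described in the bullets above can be sketched as follows. This is a hedged illustration of the general idea, not the patented implementation; the function and parameter names (`decorrelate`, `delay_samples`, `interval_samples`) are chosen for the example, and the hard transitions between intervals omit the cross-fades discussed above.

```python
def decorrelate(x, delay_samples, interval_samples):
    """Derive two output channels from a mono input x.

    A copy of x delayed by `delay_samples` is formed; then, in
    alternating intervals of `interval_samples` samples, the direct
    and the delayed signal are swapped between the two outputs, so
    that each output carries the direct signal in one interval and
    the delayed signal in the next.
    """
    # Delayed representation: zero-padded at the start, same length as x.
    delayed = [0.0] * delay_samples + list(x[:len(x) - delay_samples])
    out_l, out_r = [], []
    for n, (direct, late) in enumerate(zip(x, delayed)):
        # Even-numbered intervals: direct -> first output, delayed -> second.
        # Odd-numbered intervals: the roles are swapped.
        if (n // interval_samples) % 2 == 0:
            out_l.append(direct)
            out_r.append(late)
        else:
            out_l.append(late)
            out_r.append(direct)
    return out_l, out_r
```

At a sampling rate of 48 kHz, for instance, a 6 ms delay corresponds to 288 samples and a 20 ms interval to 960 samples, both within the ranges stated above.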

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)

Claims (15)

  1. Decorrelator for generating output signals (50, 52) on the basis of an audio input signal (54), comprising:
    a mixer (60) for combining a representation of the audio input signal (58), delayed by a delay time, with the audio input signal (54), in order to obtain a first (50) and a second (52) output signal with time-varying portions of the audio input signal (54) and of the delayed representation of the audio input signal (58),
    wherein, in a first time interval (70), the first output signal (50) contains a portion of more than 50 percent of the audio input signal (54) and the second output signal (52) contains a portion of more than 50 percent of the delayed representation of the audio input signal (58), and
    wherein, in a second time interval (72), the first output signal (50) contains a portion of more than 50 percent of the delayed representation of the audio input signal (58) and the second output signal (52) contains a portion of more than 50 percent of the audio input signal (54).
  2. Decorrelator according to claim 1, wherein, in the first time interval (70), the first output signal corresponds to the audio input signal (54) and the second output signal (52) corresponds to the delayed representation of the audio input signal (58), and
    wherein, in the second time interval (72), the first output signal (50) corresponds to the delayed representation of the audio input signal (58) and the second output signal (52) corresponds to the audio input signal (54).
  3. Decorrelator according to claim 1, wherein, in a start interval and in an end interval at the beginning and at the end of the first time interval (70), the first output signal and the second output signal (52) contain portions of the audio input signal (54) and of the delayed representation of the audio input signal (58),
    wherein, in an intermediate interval between the start interval and the end interval of the first time interval, the first output signal corresponds to the audio input signal (54) and the second output signal (52) corresponds to the delayed representation of the audio input signal (58); and
    wherein, in a start interval and an end interval at the beginning and at the end of the second time interval (72), the first output signal and the second output signal (52) contain portions of the audio input signal (54) and of the delayed representation of the audio input signal (58),
    wherein, in an intermediate interval between the start interval and the end interval of the second time interval, the first output signal corresponds to the delayed representation of the audio input signal (58) and the second output signal (52) corresponds to the audio input signal (54).
  4. Decorrelator according to one of claims 1 to 3, further comprising a delay device (56) for generating the delayed representation of the audio input signal (58) by delaying the audio input signal (54) in time by the delay time.
  5. Decorrelator according to one of claims 1 to 4, further comprising a modulation device (74) for varying an intensity of the audio input signal (54) and/or of the delayed representation of the audio input signal (58).
  6. Decorrelator according to one of the preceding claims, further comprising a post-processor (80) for combining the first (50) and the second (52) output signal, in order to obtain a first (82) and a second (84) post-processed output signal, both the first (82) and the second (84) post-processed output signal having signal contributions of the first (50) and of the second (52) output signal.
  7. Decorrelator according to claim 6, wherein the post-processor (80) is configured to form the first post-processed output signal M (82) and the second post-processed output signal D (84) from the first output signal L' (50) and the second output signal R' (52) such that the following conditions are fulfilled:

    M = 0.707 × (L' + R'),

    and

    D = 0.707 × (L' − R').
  8. Decorrelator according to one of the preceding claims, wherein the mixer (60) is configured to combine an audio input signal (54) composed of discrete sample values and a delayed representation, composed of discrete sample values, of the audio input signal (58) by exchanging the sample values of the audio input signal (54) and the sample values of the delayed representation of the audio input signal (58).
  9. Decorrelator according to one of the preceding claims, wherein the mixer (60) is configured to perform the combination of the audio input signal (54) and the delayed representation of the audio input signal (58) for a succession of pairs of temporally adjacent first (70) and second (72) time intervals.
  10. Decorrelator according to claim 9, wherein the mixer (60) is configured to omit the combination, with a predetermined probability, for one pair of the succession of pairs of temporally adjacent first (70) and second (72) time intervals, so that in this pair, in the first (70) and the second (72) time interval, the first output signal (50) corresponds to the audio input signal (54) and the second output signal (52) corresponds to the delayed representation of the audio input signal (58).
  11. Method for generating output signals (50, 52) on the basis of an audio input signal (54), comprising the following steps:
    combining a representation of the audio input signal (58), delayed by a delay time, with the audio input signal (54), in order to obtain a first (50) and a second (52) output signal with time-varying portions of the audio input signal (54) and of the delayed representation of the audio input signal (58),
    wherein, in a first time interval (70), the first output signal (50) contains a portion of more than 50 percent of the audio input signal (54) and the second output signal (52) contains a portion of more than 50 percent of the delayed representation of the audio input signal (58), and
    wherein, in a second time interval (72), the first output signal (50) contains a portion of more than 50 percent of the delayed representation of the audio input signal (58) and the second output signal (52) contains a portion of more than 50 percent of the audio input signal (54).
  12. Method according to claim 11, comprising the following additional step:
    varying the intensity of the audio input signal (54) and/or of the delayed representation of the audio input signal (58).
  13. Method according to one of claims 11 to 12, comprising the following additional step:
    combining the first (50) and the second (52) output signal, in order to obtain a first (82) and a second (84) post-processed output signal, both the first (82) and the second (84) post-processed output signal containing contributions of the first (50) and of the second (52) output signal.
  14. Audio decoder for generating a multi-channel output signal on the basis of an audio input signal (54), comprising:
    a decorrelator according to one of claims 1 to 10; and
    a standard decorrelator,
    the audio decoder being configured to use the standard decorrelator in a standard operating mode and to use the decorrelator according to the invention in the case of a transient audio input signal (54).
  15. Computer program with program code for performing the method according to one of claims 11 to 13 when the program is executed on a computer.
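A hedged sketch of the sum/difference post-processing of the kind claim 7 describes, assuming the conventional mid/side matrix M = 0.707 × (L′ + R′) and D = 0.707 × (L′ − R′), where 0.707 ≈ 1/√2 preserves the overall signal energy. The function name `postprocess` is illustrative, not from the patent.

```python
def postprocess(l_prime, r_prime):
    """Mix the two decorrelator output channels L', R' into a sum
    signal M and a difference signal D, each containing contributions
    of both inputs (as required of the post-processor of claim 6)."""
    m = [0.707 * (l + r) for l, r in zip(l_prime, r_prime)]
    d = [0.707 * (l - r) for l, r in zip(l_prime, r_prime)]
    return m, d
```

Applied to the two outputs of the interval-swapping decorrelator, this spreads the hard swap transitions across both post-processed channels instead of confining each of them to one channel.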
EP08735224A 2007-04-17 2008-04-14 Génération de signaux décorrélés Active EP2036400B1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102007018032A DE102007018032B4 (de) 2007-04-17 2007-04-17 Erzeugung dekorrelierter Signale
PCT/EP2008/002945 WO2008125322A1 (fr) 2007-04-17 2008-04-14 Génération de signaux décorrélés

Publications (2)

Publication Number Publication Date
EP2036400A1 EP2036400A1 (fr) 2009-03-18
EP2036400B1 true EP2036400B1 (fr) 2009-12-16

Family

ID=39643877

Family Applications (1)

Application Number Title Priority Date Filing Date
EP08735224A Active EP2036400B1 (fr) 2007-04-17 2008-04-14 Génération de signaux décorrélés

Country Status (16)

Country Link
US (1) US8145499B2 (fr)
EP (1) EP2036400B1 (fr)
JP (1) JP4682262B2 (fr)
KR (1) KR101104578B1 (fr)
CN (1) CN101543098B (fr)
AT (1) ATE452514T1 (fr)
AU (1) AU2008238230B2 (fr)
CA (1) CA2664312C (fr)
DE (2) DE102007018032B4 (fr)
HK (1) HK1124468A1 (fr)
IL (1) IL196890A0 (fr)
MY (1) MY145952A (fr)
RU (1) RU2411693C2 (fr)
TW (1) TWI388224B (fr)
WO (1) WO2008125322A1 (fr)
ZA (1) ZA200900801B (fr)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BRPI0820488A2 (pt) * 2007-11-21 2017-05-23 Lg Electronics Inc método e equipamento para processar um sinal
KR101342425B1 (ko) * 2008-12-19 2013-12-17 돌비 인터네셔널 에이비 다중-채널의 다운믹싱된 오디오 입력 신호에 리버브를 적용하기 위한 방법 및 다중-채널의 다운믹싱된 오디오 입력 신호에 리버브를 적용하도록 구성된 리버브레이터
EP3144932B1 (fr) 2010-08-25 2018-11-07 Fraunhofer Gesellschaft zur Förderung der Angewand Appareil de codage de signal audio à canaux multiples
EP2477188A1 (fr) * 2011-01-18 2012-07-18 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Codage et décodage des positions de rainures d'événements d'une trame de signaux audio
CN105163398B (zh) 2011-11-22 2019-01-18 华为技术有限公司 连接建立方法和用户设备
US9424859B2 (en) * 2012-11-21 2016-08-23 Harman International Industries Canada Ltd. System to control audio effect parameters of vocal signals
US9830917B2 (en) 2013-02-14 2017-11-28 Dolby Laboratories Licensing Corporation Methods for audio signal transient detection and decorrelation control
TWI618051B (zh) 2013-02-14 2018-03-11 杜比實驗室特許公司 用於利用估計之空間參數的音頻訊號增強的音頻訊號處理方法及裝置
TWI618050B (zh) 2013-02-14 2018-03-11 杜比實驗室特許公司 用於音訊處理系統中之訊號去相關的方法及設備
WO2014126689A1 (fr) 2013-02-14 2014-08-21 Dolby Laboratories Licensing Corporation Procédés pour contrôler la cohérence inter-canaux de signaux audio mélangés
CN105359448B (zh) * 2013-02-19 2019-02-12 华为技术有限公司 一种滤波器组多载波波形的帧结构的应用方法及设备
WO2014187987A1 (fr) * 2013-05-24 2014-11-27 Dolby International Ab Procédés de codage et de décodage audio, support lisible par ordinateur correspondant et codeur et décodeur audio correspondants
JP6242489B2 (ja) * 2013-07-29 2017-12-06 ドルビー ラボラトリーズ ライセンシング コーポレイション 脱相関器における過渡信号についての時間的アーチファクトを軽減するシステムおよび方法
JP6479786B2 (ja) * 2013-10-21 2019-03-06 ドルビー・インターナショナル・アーベー オーディオ信号のパラメトリック再構成
EP2866227A1 (fr) * 2013-10-22 2015-04-29 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Procédé de décodage et de codage d'une matrice de mixage réducteur, procédé de présentation de contenu audio, codeur et décodeur pour une matrice de mixage réducteur, codeur audio et décodeur audio
WO2015173423A1 (fr) * 2014-05-16 2015-11-19 Stormingswiss Sàrl Mixage élévateur de signaux audio avec retards temporels exacts
US11234072B2 (en) 2016-02-18 2022-01-25 Dolby Laboratories Licensing Corporation Processing of microphone signals for spatial playback
US10560661B2 (en) 2017-03-16 2020-02-11 Dolby Laboratories Licensing Corporation Detecting and mitigating audio-visual incongruence
CN110740404B (zh) * 2019-09-27 2020-12-25 广州励丰文化科技股份有限公司 一种音频相关性的处理方法及音频处理装置
CN110740416B (zh) * 2019-09-27 2021-04-06 广州励丰文化科技股份有限公司 一种音频信号处理方法及装置

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4792974A (en) * 1987-08-26 1988-12-20 Chace Frederic I Automated stereo synthesizer for audiovisual programs
US6526091B1 (en) * 1998-08-17 2003-02-25 Telefonaktiebolaget Lm Ericsson Communication methods and apparatus based on orthogonal hadamard-based sequences having selected correlation properties
US6175631B1 (en) * 1999-07-09 2001-01-16 Stephen A. Davis Method and apparatus for decorrelating audio signals
AUPQ942400A0 (en) * 2000-08-15 2000-09-07 Lake Technology Limited Cinema audio processing system
US7107110B2 (en) * 2001-03-05 2006-09-12 Microsoft Corporation Audio buffers with audio effects
SE0301273D0 (sv) * 2003-04-30 2003-04-30 Coding Technologies Sweden Ab Advanced processing based on a complex-exponential-modulated filterbank and adaptive time signalling methods
KR101079066B1 (ko) * 2004-03-01 2011-11-02 돌비 레버러토리즈 라이쎈싱 코오포레이션 멀티채널 오디오 코딩
KR101097000B1 (ko) * 2004-03-11 2011-12-20 피에스에스 벨기에 엔브이 사운드 신호들을 프로세싱하는 방법 및 시스템
WO2006008697A1 (fr) * 2004-07-14 2006-01-26 Koninklijke Philips Electronics N.V. Conversion de canal audio
US7508947B2 (en) * 2004-08-03 2009-03-24 Dolby Laboratories Licensing Corporation Method for combining audio signals using auditory scene analysis
TWI393121B (zh) * 2004-08-25 2013-04-11 Dolby Lab Licensing Corp 處理一組n個聲音信號之方法與裝置及與其相關聯之電腦程式
EP1803115A2 (fr) * 2004-10-15 2007-07-04 Koninklijke Philips Electronics N.V. Systeme et procede de donnees audio de traitement, un element de programme et un support visible par ordinateur
SE0402649D0 (sv) * 2004-11-02 2004-11-02 Coding Tech Ab Advanced methods of creating orthogonal signals
EP1829424B1 (fr) 2005-04-15 2009-01-21 Dolby Sweden AB Mise en forme de l'enveloppe temporaire de signaux decorrélés
JP2007065497A (ja) * 2005-09-01 2007-03-15 Matsushita Electric Ind Co Ltd 信号処理装置

Also Published As

Publication number Publication date
TW200904229A (en) 2009-01-16
KR20090076939A (ko) 2009-07-13
CN101543098B (zh) 2012-09-05
ATE452514T1 (de) 2010-01-15
KR101104578B1 (ko) 2012-01-11
US8145499B2 (en) 2012-03-27
US20090326959A1 (en) 2009-12-31
CA2664312A1 (fr) 2008-10-23
CA2664312C (fr) 2014-09-30
JP2010504715A (ja) 2010-02-12
AU2008238230A1 (en) 2008-10-23
MY145952A (en) 2012-05-31
HK1124468A1 (en) 2009-07-10
WO2008125322A1 (fr) 2008-10-23
DE502008000252D1 (de) 2010-01-28
DE102007018032A1 (de) 2008-10-23
JP4682262B2 (ja) 2011-05-11
RU2009116268A (ru) 2010-11-10
CN101543098A (zh) 2009-09-23
RU2411693C2 (ru) 2011-02-10
AU2008238230B2 (en) 2010-08-26
IL196890A0 (en) 2009-11-18
EP2036400A1 (fr) 2009-03-18
TWI388224B (zh) 2013-03-01
DE102007018032B4 (de) 2010-11-11
ZA200900801B (en) 2010-02-24

Similar Documents

Publication Publication Date Title
EP2036400B1 (fr) Génération de signaux décorrélés
DE102006050068B4 (de) Vorrichtung und Verfahren zum Erzeugen eines Umgebungssignals aus einem Audiosignal, Vorrichtung und Verfahren zum Ableiten eines Mehrkanal-Audiosignals aus einem Audiosignal und Computerprogramm
EP2005421B1 (fr) Dispositif et procédé pour la génération d'un signal d'ambiance
EP1854334B1 (fr) Dispositif et procede de production d'un signal stereo code d'un morceau audio ou d'un flux de donnees audio
EP2206113B1 (fr) Dispositif et procédé permettant de générer un signal multicanal par traitement d'un signal vocal
EP1687809B1 (fr) Appareil et procede pour la reconstitution d'un signal audio multi-canaux et pour generer un enregistrement des parametres correspondants
DE602004005020T2 (de) Audiosignalsynthese
DE602004001868T2 (de) Verfahren zum bearbeiten komprimierter audiodaten zur räumlichen wiedergabe
DE602004005846T2 (de) Audiosignalgenerierung
DE602005006385T2 (de) Vorrichtung und verfahren zum konstruieren eines mehrkanaligen ausgangssignals oder zum erzeugen eines downmix-signals
DE69827775T2 (de) Tonkanalsmischung
WO2015049334A1 (fr) Procédé et dispositif de downmixage d'un signal multicanaux et d'upmixage d'un signal downmixé
EP2917908A1 (fr) Codage inverse non linéaire de signaux multicanaux
DE102019135690B4 (de) Verfahren und Vorrichtung zur Audiosignalverarbeitung für binaurale Virtualisierung
EP1123638A2 (fr) Procede et dispositif pour evaluer la qualite de signaux audio a canaux multiples
WO2015128379A1 (fr) Codage et décodage d'un canal basse fréquence dans un signal audio multicanal
EP2120486A1 (fr) Dispositif et procédé destinés à produire un son spatial
DE102017121876A1 (de) Verfahren und vorrichtung zur formatumwandlung eines mehrkanaligen audiosignals

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20090129

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA MK RS

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1124468

Country of ref document: HK

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

RIN2 Information on inventor provided after grant (corrected)

Inventor name: POPP, HARALD

Inventor name: LINZMEIER, KARSTEN

Inventor name: HERRE, JUERGEN

Inventor name: MUNDT, HARALD

Inventor name: PLOGSTIES, JAN

Inventor name: DISCH, SASCHA

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REF Corresponds to:

Ref document number: 502008000252

Country of ref document: DE

Date of ref document: 20100128

Kind code of ref document: P

REG Reference to a national code

Ref country code: NL

Ref legal event code: VDEP

Effective date: 20091216

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091216

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091216

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091216

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100316

REG Reference to a national code

Ref country code: HK

Ref legal event code: GR

Ref document number: 1124468

Country of ref document: HK

LTIE Lt: invalidation of european patent or patent extension

Effective date: 20091216

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091216

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091216

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091216

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091216

REG Reference to a national code

Ref country code: IE

Ref legal event code: FD4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091216

Ref country code: IE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091216

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091216

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100416

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100327

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100316

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091216

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091216

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091216

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100317

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091216

BERE Be: lapsed

Owner name: FRAUNHOFER-GESELLSCHAFT ZUR FORDERUNG DER ANGEWAN

Effective date: 20100430

26N No opposition filed

Effective date: 20100917

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20100430

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091216

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20100430

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091216

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091216

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100516

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20100617

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20100414

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091216

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20120430

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20120430

REG Reference to a national code

Ref country code: AT

Ref legal event code: MM01

Ref document number: 452514

Country of ref document: AT

Kind code of ref document: T

Effective date: 20130414

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20130414

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 9

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 10

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 11

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230512

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230417

Year of fee payment: 16

Ref country code: DE

Payment date: 20230418

Year of fee payment: 16

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20230420

Year of fee payment: 16