EP1600042B1 - Verfahren zum bearbeiten komprimierter audiodaten zur räumlichen wiedergabe - Google Patents

Verfahren zum Bearbeiten komprimierter Audiodaten zur räumlichen Wiedergabe (Method for processing compressed audio data for spatialized reproduction)

Info

Publication number
EP1600042B1
EP1600042B1 (application EP04712070A)
Authority
EP
European Patent Office
Prior art keywords
matrix
signals
filter
sound
spatialization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP04712070A
Other languages
English (en)
French (fr)
Other versions
EP1600042A1 (de)
Inventor
Abdellatif Benjelloun Touimi
Marc Emerit
Jean-Marie Pernaux
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orange SA
Original Assignee
France Telecom SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by France Telecom SA filed Critical France Telecom SA
Publication of EP1600042A1
Application granted
Publication of EP1600042B1
Anticipated expiration
Expired - Lifetime (current legal status)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S3/00Systems employing more than two channels, e.g. quadraphonic
    • H04S3/02Systems employing more than two channels, e.g. quadraphonic of the matrix type, i.e. in which input signals are combined algebraically, e.g. after having been phase shifted with respect to each other
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S3/00Systems employing more than two channels, e.g. quadraphonic
    • H04S3/008Systems employing more than two channels, e.g. quadraphonic in which the audio signals are in digital form, i.e. employing more than two discrete digital channels
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S2400/00Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/01Multi-channel, i.e. more than two input channels, sound reproduction with two speakers wherein the multi-channel information is substantially preserved
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S2420/00Techniques used stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/01Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]

Definitions

  • The invention relates to the processing of sound data for the spatialized reproduction of acoustic signals.
  • Sound spatialization covers two different types of processing. Starting from a monophonic audio signal, the aim is to give a listener the illusion that the sound source(s) occupy precise positions in space (positions one wants to be able to change in real time), and are immersed in a space with particular acoustic properties (reverberation, or other acoustic phenomena such as occlusion). For example, on mobile telecommunication terminals, it is natural to consider sound reproduction over a stereo headset. The most effective sound-source positioning technique is then binaural synthesis.
  • This technique relies on filters modelling the HRTFs ("Head Related Transfer Functions").
  • The HRTFs are thus functions of a spatial position, more particularly of an azimuth angle θ and an elevation angle φ, and of the sound frequency f. This gives, for a given subject, a database of acoustic transfer functions for N positions in space and for each ear, in which a sound can be "placed" (or "spatialized", according to the terminology used hereinafter).
  • A similar spatialization processing is the so-called "transaural" synthesis, in which two or more loudspeakers are simply provided in the playback device (which then takes a form other than a headset with left and right earpieces).
  • This technique is implemented in the so-called "two-channel" form (processing shown schematically in FIG. 1, relating to the prior art).
  • the signal of the source is filtered by the HRTF function of the left ear and by the HRTF function of the right ear.
  • the two left and right channels deliver acoustic signals that are then broadcast to the listener's ears with a stereo headset.
  • This two-channel binaural synthesis is termed "static" hereinafter, because in this case the positions of the sound sources do not evolve in time.
  • To make the position of a source evolve over time, the filters used to model the HRTFs must be modified.
  • Since these filters are mostly of the finite impulse response (FIR) or infinite impulse response (IIR) type, discontinuity problems then appear in the left and right output signals, causing audible clicks.
  • The technical solution conventionally used to overcome this problem is to run two sets of binaural filters in parallel. The first set simulates a position [θ1, φ1] at time t1, the second a position [θ2, φ2] at time t2.
  • The signal giving the illusion of a displacement between these positions between times t1 and t2 is then obtained by cross-fading the left and right signals resulting from the filtering processes for position [θ1, φ1] and for position [θ2, φ2].
  • The complexity of the sound-source positioning system is then doubled (two positions at two instants) compared to the static case.
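The cross-fade technique described above can be sketched as follows (an illustrative sketch with assumed names and random stand-in filters, not the patent's implementation): two binaural filter outputs, one per position, are blended with a linear ramp to avoid audible clicks.

```python
# Sketch of the cross-fade between two binaural filter sets. The filters
# h_old and h_new stand in for the HRTF filters of positions [theta1, phi1]
# and [theta2, phi2]; real HRTFs would come from a measured database.
import numpy as np

def crossfade_binaural(x, h1, h2):
    """Filter mono signal x with two FIR filters and cross-fade the results."""
    y1 = np.convolve(x, h1)[:len(x)]       # output at the old position
    y2 = np.convolve(x, h2)[:len(x)]       # output at the new position
    fade = np.linspace(0.0, 1.0, len(x))   # linear fade ramp over the block
    return (1.0 - fade) * y1 + fade * y2

# toy example: one 256-sample block, two arbitrary 16-tap stand-in filters
rng = np.random.default_rng(0)
x = rng.standard_normal(256)
h_old, h_new = rng.standard_normal(16), rng.standard_normal(16)
y = crossfade_binaural(x, h_old, h_new)
```

At the start of the block the output equals the "old position" signal, at the end the "new position" signal, which is exactly the doubling of filtering cost mentioned above.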
  • Linear decomposition techniques are also of interest in the case of dynamic binaural synthesis (i.e. when the position of the sound sources varies over time). In this configuration, it is no longer the coefficients of the filters that are varied, but only the values of the weighting coefficients and delays, which depend solely on the position.
  • the principle described above of linear decomposition of the sound rendering filters is generalized to other approaches, as will be seen below.
  • the audio and / or speech streams are transmitted in a compressed coded format.
  • Frequency-domain (or transform) coders are considered below, such as those operating according to the MPEG-1 standard (Layer I-II-III), the MPEG-2/4 AAC standard, the MPEG-4 TwinVQ standard, the Dolby AC-2 and AC-3 standards, the ITU-T G.722.1 standard in speech coding, or the Applicant's TDAC coding method.
  • time / frequency transformation can take the form of a bank of filters in frequency sub-bands or a transform of the MDCT type (for "Modified Discrete Cosine Transform").
  • "Subband domain" means a domain defined in a space of frequency sub-bands, a frequency-transformed time domain, or a frequency domain.
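As an illustration of the MDCT-type transform mentioned above, here is a minimal sketch (assumed, simplified code; real coders such as MPEG-2/4 AAC use optimized implementations) showing that a sine-windowed MDCT with 50% overlap-add reconstructs the signal exactly.

```python
# Minimal MDCT analysis/synthesis with the sine (Princen-Bradley) window.
# M subband coefficients are produced from each 2M-sample frame; overlap-add
# of inverse-transformed frames gives perfect reconstruction.
import numpy as np

def mdct(frame, M):
    """MDCT of a 2M-sample frame -> M coefficients."""
    n = np.arange(2 * M)
    w = np.sin(np.pi / (2 * M) * (n + 0.5))                 # sine window
    k = np.arange(M)[:, None]
    basis = np.cos(np.pi / M * (n + 0.5 + M / 2) * (k + 0.5))
    return basis @ (w * frame)

def imdct(coeffs, M):
    """Inverse MDCT -> 2M windowed time samples (to be overlap-added)."""
    n = np.arange(2 * M)
    w = np.sin(np.pi / (2 * M) * (n + 0.5))
    k = np.arange(M)[None, :]
    basis = np.cos(np.pi / M * (n[:, None] + 0.5 + M / 2) * (k + 0.5))
    return (2.0 / M) * w * (basis @ coeffs)

# Perfect-reconstruction check: overlap-add of two adjacent frames
# recovers the middle M input samples exactly.
M = 32
rng = np.random.default_rng(1)
x = rng.standard_normal(3 * M)
y = imdct(mdct(x[:2*M], M), M)[M:] + imdct(mdct(x[M:3*M], M), M)[:M]
```

The time-domain aliasing introduced by each frame cancels in the overlap-add, which is the same aliasing-cancellation property the invention must preserve when filtering directly in the subband domain.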
  • The conventional method consists in first decoding, then performing the sound spatialization processing on the time signals, and then recoding the resulting signals for transmission to a rendering terminal.
  • This tedious sequence of steps is often very expensive in terms of computing power, memory required for processing, and algorithmic delay introduced. It is therefore often unsuited to the constraints imposed by the machines where the processing is carried out and by the communication constraints.
  • Document US 6,470,087 describes a device for rendering a compressed multichannel acoustic signal over two loudspeakers. All calculations are done over the entire frequency band of the input signal, which must therefore be completely decoded.
  • the present invention improves the situation.
  • One of the aims of the present invention is to propose a sound data processing method that combines the compression coding/decoding operations of the audio streams with the spatialization of those streams.
  • Another object of the present invention is to propose a sound data processing method, by spatialization, which adapts to a (dynamically) variable number of sound sources to be positioned.
  • A general object of the present invention is to propose a method for processing sound data, for example by spatialization, allowing a wide dissemination of spatialized sound data, in particular broadcasting to the general public, the playback devices being simply equipped with a decoder of the received signals and playback loudspeakers.
  • Each acoustic signal in step a) of the method according to the invention is at least partially coded in compression and is expressed in the form of a subsignal vector associated with respective frequency sub-bands, and each filtering unit is arranged to perform a matrix filtering applied to each vector, in the frequency subband space.
  • each matrix filtering is obtained by converting, in the space of the frequency sub-bands, an impulse response filter (finite or infinite) defined in the time space.
  • an impulse response filter is preferably obtained by determining an acoustic transfer function depending on a direction of perception of a sound and the frequency of this sound.
  • These transfer functions are expressed by a linear combination of terms dependent on the frequency and weighted by terms dependent on the direction, which makes it possible, as indicated above, on the one hand to process a variable number of acoustic signals in step a) and, on the other hand, to vary the position of each source dynamically in time.
  • Such an expression of the transfer functions "integrates" the interaural delay which, in binaural processing, is conventionally applied to one of the output signals with respect to the other before playback.
  • gain filter matrices associated with each signal are provided.
  • Combining linear decomposition techniques for the HRTFs with filtering techniques in the subband domain makes it possible to exploit the advantages of both techniques, leading to sound spatialization systems of low complexity and reduced memory footprint for multiple coded audio signals.
  • Direct filtering of the signals in the coded domain saves a complete decoding of each audio stream before spatializing the sources, which implies a considerable gain in complexity.
  • the sound spatialization of audio streams can occur at different points of a transmission chain (servers, network nodes or terminals).
  • the nature of the application and the architecture of the communication used may favor one case or another.
  • the spatialization processing is preferably performed at the terminals in a decentralized architecture and, conversely, at the audio bridge (or MCU for "Multipoint Control Unit") in a centralized architecture.
  • the spatialization can be performed either in the server or in the terminal, or during the creation of content.
  • a reduction in the processing complexity and also the memory required for storing the HRTF filters is still appreciated.
  • Preferably, spatialization processing is provided directly at the level of a content server.
  • the present invention can also find applications in the field of the transmission of multiple audio streams included in structured sound scenes, as provided by the MPEG-4 standard.
  • FIG. 1 shows a "two-channel" binaural synthesis processing.
  • This processing consists in filtering the signal of each source (Si) that one wishes to place at a chosen position in space by the left (HRTF_l) and right (HRTF_r) acoustic transfer functions corresponding to the appropriate direction (θi, φi). Two signals are obtained, which are then added to the left and right signals resulting from the spatialization of the other sources, to give the global signals L and R delivered to the left and right ears of a listener. The number of filters needed is then 2.N for static binaural synthesis and 4.N for dynamic binaural synthesis, where N is the number of audio streams to spatialize.
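The two-channel processing just described can be sketched as follows (illustrative code; the `binaural_mix` name and the random stand-in HRTFs are assumptions): each source is convolved with its left- and right-ear HRTF and the per-source outputs are summed, for a cost of 2.N filters in the static case.

```python
# Two-channel ("bi-canal") binaural synthesis of FIG. 1, static case.
# hrtf_l[i] / hrtf_r[i] stand in for the measured HRTF pair of source i's
# direction (theta_i, phi_i).
import numpy as np

def binaural_mix(sources, hrtf_l, hrtf_r):
    """sources: list of mono signals; hrtf_l/hrtf_r: matching FIR lists."""
    T = len(sources[0])
    L = np.zeros(T)
    R = np.zeros(T)
    for s, hl, hr in zip(sources, hrtf_l, hrtf_r):
        L += np.convolve(s, hl)[:T]    # left-ear HRTF filtering
        R += np.convolve(s, hr)[:T]    # right-ear HRTF filtering
    return L, R

rng = np.random.default_rng(2)
N, T, taps = 3, 512, 64
srcs = [rng.standard_normal(T) for _ in range(N)]
hl = [rng.standard_normal(taps) for _ in range(N)]
hr = [rng.standard_normal(taps) for _ in range(N)]
L, R = binaural_mix(srcs, hl, hr)
```

Note that the filtering cost grows linearly with the number of sources N, which is precisely what the multichannel decomposition below avoids.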
  • Each HRTF filter is first decomposed into a minimum-phase filter, characterized by its modulus, and a pure delay τi.
  • the spatial and frequency dependencies of the HRTFs modules are separated by a linear decomposition.
  • These moduli of the HRTF transfer functions are then written as a sum of spatial functions C n (θ, φ) and of reconstruction filters L n (f), as expressed hereafter:

$$|HRTF(\theta, \varphi, f)| = \sum_{n=1}^{P} C_n(\theta, \varphi)\, L_n(f) \qquad \text{(Eq [1])}$$
  • The N signals of all the sources, weighted by the "directional" coefficients C ni, are then summed (for the right channel and the left channel separately), then filtered by the filter corresponding to the nth basis vector.
  • The coefficients C ni correspond to the directional coefficients for the source i at position (θi, φi) and for the reconstruction filter n. They are denoted C for the left channel (L) and D for the right channel (R). The processing principle for the right channel R is the same as for the left channel L; however, the dotted arrows for the processing of the right channel are not shown, for the sake of clarity of the drawing. The part between the two vertical dashed lines of Figure 2 then defines a system denoted I, of the type shown in Figure 3.
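The multichannel structure of system I can be sketched as follows (illustrative code with assumed names and random stand-in data): the sources are first combined with the directional gains C ni and D ni, and only the P basis filters per ear are then run, so the filtering cost no longer grows with the number of sources N.

```python
# "Multichannel" binaural synthesis based on the linear decomposition:
# weighted sums of the N sources first, then P basis filters per ear.
import numpy as np

def multichannel_binaural(sources, C, D, basis_l, basis_r):
    """sources: (N, T); C, D: (P, N) directional gains; basis_*: P FIR filters."""
    T = sources.shape[1]
    L = np.zeros(T)
    R = np.zeros(T)
    for n, (ln, rn) in enumerate(zip(basis_l, basis_r)):
        # C[n] @ sources is the sum over sources weighted by C_ni,
        # then a single basis filter n is applied to that mix.
        L += np.convolve(C[n] @ sources, ln)[:T]
        R += np.convolve(D[n] @ sources, rn)[:T]
    return L, R

rng = np.random.default_rng(3)
N, P, T, taps = 5, 4, 256, 32
S = rng.standard_normal((N, T))
C, D = rng.standard_normal((P, N)), rng.standard_normal((P, N))
Lb = [rng.standard_normal(taps) for _ in range(P)]
Rb = [rng.standard_normal(taps) for _ in range(P)]
L, R = multichannel_binaural(S, C, D, Lb, Rb)
```

Only 2P convolutions are performed regardless of N; moving a source only changes its scalar weights, not the filters.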
  • A first method is based on the so-called Karhunen-Loeve decomposition and is described in particular in WO94/10816.
  • Another method is based on principal component analysis of HRTFs and is described in WO96 / 13962.
  • The more recent document FR-2782228 also describes such an implementation.
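A decomposition of the kind used in these methods can be sketched with a truncated SVD (an illustrative, principal-component-style sketch on random stand-in data, not the exact procedures of the cited documents): keeping P components yields P reconstruction filters L n (f) and, per direction, P spatial weights C n (θ, φ), as in Eq [1].

```python
# Principal-component style decomposition of a set of HRTF magnitudes.
# H is a random stand-in for measured data: one row per direction,
# one column per frequency bin.
import numpy as np

rng = np.random.default_rng(4)
n_dirs, n_freqs, P = 100, 128, 8
H = np.abs(rng.standard_normal((n_dirs, n_freqs)))

U, s, Vt = np.linalg.svd(H, full_matrices=False)
weights = U[:, :P] * s[:P]     # C_n(theta, phi): spatial coefficients
filters = Vt[:P]               # L_n(f): reconstruction filter magnitudes

H_approx = weights @ filters   # rank-P reconstruction of all HRTFs
err = np.linalg.norm(H - H_approx) / np.linalg.norm(H)
```

The rank-P truncation is optimal in the least-squares sense, which is why a small P can represent a large HRTF database.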
  • a step of decoding the N signals is necessary before the actual spatialization processing.
  • This step requires considerable computing resources (which is problematic on current communication terminals, particularly portable ones). Moreover, it introduces a delay on the processed signals, which impairs the interactivity of the communication. If the transmitted sound scene comprises a large number of sources (N), the decoding step may in fact become more expensive in computing resources than the sound spatialization step itself. Indeed, as indicated above, the computation cost of "multichannel" binaural synthesis depends very little on the number of sound sources to spatialize.
  • The decoding of the N coded streams is necessary before the spatialization of the sound sources, which leads to an increase in computation cost and an additional delay due to the decoder processing. Note that in current content servers the original audio sources are usually stored directly in coded format.
  • The number of signals resulting from the spatialization processing is generally greater than two, which further increases the computation cost of completely recoding these signals before their transmission over the communication network.
  • This operation consists mainly in recovering the subband parameters from the encoded audio bitstream. It depends on the initial encoder used; it may for example consist of entropy decoding followed by inverse quantization, as in an MPEG-1 Layer III coder. Once these subband parameters are recovered, the processing is performed in the subband domain, as will be seen below.
  • The overall computation cost of the spatialization of the coded audio streams is then considerably reduced. Indeed, the initial decoding operation of a conventional system is replaced by a partial decoding operation of much lower complexity.
  • The computing load in a system according to the invention becomes substantially constant as a function of the number of audio streams to spatialize. Compared to conventional systems, the gain in computation cost thus becomes proportional to the number of audio streams to spatialize.
  • the partial decoding operation results in a lower processing delay than the full decoding operation, which is particularly interesting in a context of interactive communication.
  • The system implementing the method according to the invention, performing spatialization in the subband domain, is denoted "System II" in Figure 4.
  • the following describes obtaining the parameters in the subband domain from binaural impulse responses.
  • the binaural transfer functions or HRTFs are accessible in the form of temporal impulse responses. These functions generally consist of 256 time samples, at a sampling frequency of 44.1 kHz (typical in the audio field). These impulse responses can be derived from measurements or acoustic simulations.
  • The filter matrices Gi, applied independently to each source, "integrate" the conventional delay operation adding the interaural delay between a signal L i and a signal R i to be rendered.
  • Delay lines τi are conventionally provided (FIG. 2), to be applied to a "left ear" signal with respect to a "right ear" signal.
  • A matrix of filters G i is provided in the subband domain, which additionally makes it possible to adjust the gains (for example in energy) of certain sources with respect to others.
  • The modification of the signal spectrum produced by a filtering in the time domain cannot be carried out directly on the subband signals without taking into account the spectrum-overlap ("aliasing") phenomenon introduced by the analysis filter bank.
  • The dependency relationship between the aliasing components of the different sub-bands is preferably preserved during the filtering operation, so that their cancellation is ensured by the synthesis filter bank.
  • A method is described below for transposing a rational filter S(z) of FIR or IIR type (its z-transform being a quotient of two polynomials), in the case of a linear decomposition of HRTFs or transfer functions of this type, into the subband domain, for a filter bank with M sub-bands and critical sampling, defined respectively by its analysis and synthesis filters H k (z) and F k (z), where 0 ≤ k ≤ M-1.
  • "Critical sampling" means that the total number of output samples of the sub-bands corresponds to the number of input samples. This filter bank is also assumed to satisfy the perfect reconstruction condition.
  • The transposition uses the pseudo-circulant matrix S(z) built from the polyphase components S k (z) of the filter:

$$\mathbf{S}(z) = \begin{pmatrix} S_0(z) & S_1(z) & \cdots & S_{M-1}(z) \\ z^{-1}S_{M-1}(z) & S_0(z) & \cdots & S_{M-2}(z) \\ z^{-1}S_{M-2}(z) & z^{-1}S_{M-1}(z) & \cdots & S_{M-3}(z) \\ \vdots & \vdots & \ddots & \vdots \\ z^{-1}S_1(z) & z^{-1}S_2(z) & \cdots & S_0(z) \end{pmatrix}$$
  • Polyphase matrices E(z) and R(z) corresponding to the analysis and synthesis filter banks are then determined. These matrices are determined once and for all for the filter bank under consideration.
  • The chosen number δ corresponds to the number of bands that overlap sufficiently, on one side, with the bandwidth of a filter of the filter bank. It therefore depends on the type of filter bank used in the chosen coding; it can typically be taken as 2 or 3, or as 1, depending on that filter bank.
  • The result of this transposition of a finite or infinite impulse response filter to the subband domain is a matrix of filters of size M×M.
  • The filters of the main diagonal and of some adjacent sub-diagonals can be used alone to obtain a result similar to that obtained by filtering in the time domain (without altering the quality of the reproduction).
  • The matrix S sb (z) resulting from this transposition, thus reduced, is the one used for the subband filtering.
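Applying the reduced, band-diagonal matrix S sb (z) to subband signals can be sketched as follows (illustrative code; the random filter matrix is a stand-in for one actually obtained by transposition through E(z) and R(z)): only the main diagonal and δ adjacent sub-diagonals are evaluated.

```python
# Band-diagonal subband filtering: each output band j receives contributions
# only from bands k with |j - k| <= delta, bounding the cost per frame.
import numpy as np

def filter_subbands(X, S_sb, delta):
    """X: (M, T) subband signals; S_sb[j][k]: FIR filter from band k to
    band j, used only for |j - k| <= delta (band-diagonal approximation)."""
    M, T = X.shape
    Y = np.zeros((M, T))
    for j in range(M):
        for k in range(max(0, j - delta), min(M, j + delta + 1)):
            Y[j] += np.convolve(X[k], S_sb[j][k])[:T]   # cross-band filtering
    return Y

rng = np.random.default_rng(5)
M, T, taps, delta = 8, 64, 4, 1
X = rng.standard_normal((M, T))
S_sb = [[rng.standard_normal(taps) for _ in range(M)] for _ in range(M)]
Y = filter_subbands(X, S_sb, delta)
```

The off-diagonal terms are what preserve the aliasing-cancellation relationship between adjacent bands mentioned above.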
  • The expression of the polyphase matrices E(z) and R(z) is indicated below for an MDCT filter bank, as used in MPEG-2/4 AAC, Dolby AC-2 & AC-3, or the Applicant's TDAC coding.
  • the following treatment can also be adapted to a Pseudo-QMF filter bank of the MPEG-1/2 Layer I-II coder.
  • The values of the window (-1)^l h(2lM + k) are typically provided, with 0 ≤ k ≤ 2M-1 and 0 ≤ l ≤ m-1.
  • The N compression-coded audio sources S 1 , ..., S i , ..., S N are partially decoded to obtain signals S 1 , ..., S i , ..., S N , preferably corresponding to signal vectors whose coefficients are values each assigned to a sub-band.
  • "Partial decoding" is understood to mean a processing that makes it possible to obtain such signal vectors in the subband domain from the compression-coded signals.
  • Position information is also provided, from which the respective values of the gains G 1 , ..., G i , ..., G N (for binaural synthesis) and of the coefficients C ni (for the left ear) can be obtained.
  • The spatialization processing is conducted directly in the subband domain by applying the 2P matrices L n and R n of basis filters, obtained as indicated above, to the signal vectors S i weighted by the scalar coefficients C ni and D ni , respectively.
  • The signal vectors L and R resulting from the spatialization processing in the subband domain are then expressed by the following relations, in a representation by their z-transforms:

$$L(z) = \sum_{n=1}^{P} L_n(z)\left[\sum_{i=1}^{N} C_{ni}\, S_i(z)\right]$$

$$R(z) = \sum_{n=1}^{P} R_n(z)\left[\sum_{i=1}^{N} D_{ni}\, S_i(z)\right]$$
  • the spatialization processing is carried out in a server connected to a communication network.
  • These L and R signal vectors can be recoded completely in compression, so as to broadcast the compressed L and R signals (left and right channels) over the communication network to the playback terminals.
  • an initial step of partially decoding the coded signals S i is provided before the spatialization processing.
  • this step is much less expensive and faster than the complete decoding operation that was necessary in the prior art (FIG. 3).
  • The signal vectors L and R are already expressed in the subband domain, and the partial recoding of FIG. 4 to obtain the compression-coded L and R signals is faster and less expensive than a complete coding such as shown in Figure 3.
  • The latter document presents a method for transposing a finite impulse response (FIR) filter into the subband domain of the pseudo-QMF filter banks of the MPEG-1 Layer I-II coder and of the MDCT filter bank of the MPEG-2/4 AAC coder.
  • The equivalent filtering operation in the subband domain is represented by a matrix of FIR filters.
  • This proposal is made in the context of a transposition of the HRTF filters directly in their classical form, and not in the form of a linear decomposition, as expressed by equation Eq [1] above, on a filter basis in the sense of the invention.
  • A disadvantage of the method in the sense of the latter document is that the spatialization processing cannot adapt to an arbitrary number of coded sources or audio streams to spatialize.
  • Each HRTF filter (of order 200 for a FIR and order 12 for an IIR) gives rise to a (square) matrix of filters of dimension equal to the number of sub-bands of the filter bank used.
  • An adaptation of a linear decomposition of the HRTFs in the subband domain does not present this problem, since the number (P) of basis filter matrices L n and R n is much smaller. These matrices are then permanently stored in a memory (of the content server or of the rendering terminal) and allow simultaneous spatialization processing of any number of sources, as shown in FIG.
  • A sound rendering system can generally take the form of a real or virtual (simulated) sound recording system consisting of an encoding of the sound field.
  • This phase consists in actually recording p sound signals, or in simulating such signals (virtual encoding), corresponding to a whole sound scene including all the sounds as well as a room effect.
  • The aforementioned system may also take the form of a sound rendering system decoding the sound output signals to suit the sound rendering transducers (such as a plurality of loudspeakers, or stereophonic headphones).
  • the p signals are transformed into n signals which supply the n loudspeakers.
  • binaural synthesis consists of making a real sound recording, using a pair of microphones introduced into the ears of a human head (artificial or real).
  • The recording can also be simulated by convolving a monophonic sound with the pair of HRTFs corresponding to a desired direction of the virtual sound source. From one or more monophonic signals coming from predetermined sources, two signals (left ear and right ear) are obtained, corresponding to a so-called "binaural encoding" phase; these two signals are then simply applied to a two-earpiece headset (such as a stereo headset).
  • N audio streams S j , represented in the subband domain after partial decoding, undergo a spatialization processing, for example an ambisonic encoding, to deliver p signals E i encoded in the subband domain.
  • Such spatialization processing therefore respects the general case governed by equation Eq [2] above.
  • The application to the signals S j of the filter matrices G j (defining the interaural delay ITD) is no longer necessary here, in the ambisonic context.
  • the filters K ji (f) are fixed and depend, at constant frequency, only on the sound rendering system and its arrangement with respect to a listener. This situation is shown in Figure 6 (to the right of the dashed vertical line), in the example of the ambisonic context.
  • The signals E i , spatially encoded in the subband domain, are recoded completely in compression, transmitted over a communication network, recovered in a rendering terminal, and partially decoded in compression to obtain a representation in the subband domain.
  • A processing in the subband domain of the type expressed by equation Eq [3] then makes it possible to recover m signals D j , spatially decoded and ready to be rendered after compression decoding.
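A decoding step of the kind governed by Eq [3] can be sketched as a per-band matrix product (illustrative code; the decoding matrix K is a random stand-in for the K ji coefficients of an actual rendering system): the p spatially encoded subband signals E i are combined into m transducer feeds D j.

```python
# Spatial decoding in the subband domain: each output channel j is a
# weighted combination of the p encoded signals, with weights K[j, p]
# held constant per frequency band here, for simplicity.
import numpy as np

def spatial_decode(E, K):
    """E: (p, M, T) encoded subband signals; K: (m, p) decoding matrix."""
    return np.einsum('jp,pmt->jmt', K, E)

p, m, M, T = 4, 8, 16, 32
rng = np.random.default_rng(6)
E = rng.standard_normal((p, M, T))
K = rng.standard_normal((m, p))
D = spatial_decode(E, K)
```

In a real system K would take different constant values on the frequency bands, as in Tables I and II mentioned below.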
  • decoding systems can be arranged in series, depending on the intended application.
  • The filters K ji (f) take constant numerical values on these two frequency bands, given in Tables I and II below.
  • The coded signals ( S i ) emanate from N remote terminals. They are spatialized at the teleconference server (for example at an audio bridge, in a star architecture as shown in FIG. 8) for each participant. This step, performed in the subband domain after a partial decoding phase, is followed by a partial recoding.
  • The signals thus compression-coded are then transmitted over the network and, upon reception by a rendering terminal, are decoded completely in compression and applied to the two left and right channels l and r, respectively, of the rendering terminal, in the case of a binaural spatialization.
  • The compression decoding process thus delivers two left and right time signals which contain the position information of the N distant speakers and which feed the two respective earpieces of a stereo headset.
  • m channels can be retrieved at the output of the communication server, if the spatialization encoding/decoding is performed by the server.
  • This spatialization can be static or dynamic and, in addition, interactive. Thus, the position of the speakers is either fixed or may vary over time. If the spatialization is not interactive, the position of the different speakers is fixed: the listener cannot modify it. On the other hand, if the spatialization is interactive, each listener can configure his terminal to position the voices of the other N speakers wherever he wishes, substantially in real time.
  • The rendering terminal receives N audio streams (Si), compression-coded (MPEG, AAC, or others), from a communication network.
  • After partial decoding to obtain the signal vectors (Si), the terminal ("System II") processes these signal vectors to spatialize the audio sources, here in binaural synthesis, into two signal vectors L and R, which are then applied to synthesis filter banks for compression decoding.
  • The left and right PCM signals, respectively l and r, resulting from this decoding are then intended to directly feed loudspeakers.
  • This type of processing advantageously adapts to a decentralized teleconferencing system (several terminals connected in point-to-point mode).
  • This scene can be simple, or complex as often in the context of MPEG-4 transmissions where the sound scene is transmitted in a structured format.
  • the client terminal receives, from a multimedia server, a multiplexed bitstream corresponding to each of the coded primitive audio objects, as well as instructions as to their composition for reconstructing the sound scene.
  • "Audio object" means an elementary bitstream obtained by an MPEG-4 Audio encoder.
  • The MPEG-4 System standard provides a special format, called "AudioBIFS" ("BInary Format for Scenes"), to convey these instructions.
  • the role of this format is to describe the spatio-temporal composition of audio objects.
  • these different decoded streams can undergo further processing.
  • a sound spatialization processing step can be performed.
  • the operations to be performed are represented by a graph.
  • the decoded audio signals are provided at the input of the graph.
  • Each node of the graph represents a type of processing to be performed on an audio signal.
  • At the output of the graph are provided the different sound signals to be restored or associated with other media objects (images or other).
  • Transform coders are used mainly for high-quality audio transmission (monophonic and multichannel). This is the case of the AAC and TwinVQ coders, based on the MDCT transform.
  • The low decoding layer delivers its output signals to the nodes of the upper layer, which provide particular processing such as binaural spatialization by HRTF filters.
  • the nodes of the "AudioBIFS" graph which involve a binaural spatialization can be processed directly in the field of subbands (MDCT for example).
  • the filter bank synthesis operation is performed only after this step.
  • Signal processing for spatialization can only be performed at the audio-bridge level. Indeed, terminals TER1, TER2, TER3 and TER4 receive streams that are already mixed, and therefore no spatialization processing can be performed at their level.
  • The audio bridge must perform a spatialization of the speakers from the terminals for each of the N subsets consisting of (N-1) speakers among the N participating in the conference.
  • a treatment in the coded domain brings more benefit.
  • Figure 9 schematically shows the processing system provided in the audio bridge. This processing is thus performed on a subset of (N-1) coded audio signals among the N present at the input of the bridge.
  • The left and right coded audio frames (in the case of a binaural spatialization), or the m coded audio frames (in the case of a general spatialization, for example in ambisonic encoding), as represented in FIG. 9, which result from this processing, are thus transmitted to the remaining terminal which participates in the teleconference but is not included in this subset (corresponding to a "listener terminal").
  • N processings of the type described above are performed in the audio bridge (N subsets of (N-1) coded signals).
  • The partial recoding shown in the figure designates the operation of constructing the coded audio frame after the spatialization processing, for transmission on a channel (left or right).
  • It may consist of a quantization of the L and R signal vectors resulting from the spatialization processing, based on a number of allocated bits calculated according to a selected psychoacoustic criterion.
  • The conventional compression coding processes that follow the application of the analysis filters can therefore be maintained and performed together with the spatialization in the subband domain.
  • the position of the sound source to be spatialised may vary over time, which amounts to varying over time the directional coefficients of the domain of the subbands C ni and D ni .
  • the variation of the value of these coefficients is preferentially done in a discrete manner.
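The weight-then-filter structure described above (directional coefficients C ni and D ni applied per source, followed by shared subband-domain filter units whose outputs are summed into L and R) can be sketched as follows. This is a minimal NumPy illustration, not the patented implementation: the dimensions, the random stand-in subband data, and the random banded matrices standing in for the converted transfer-function filters are all assumptions for the example.

```python
import numpy as np

# Hypothetical dimensions: N sources, P basis filter units, M subbands.
N, P, M = 3, 4, 32
rng = np.random.default_rng(0)

# Subband vectors of the N partially decoded signals (one frame each).
S = rng.standard_normal((N, M))

# Directional weights C[p, i], D[p, i] for the left/right outputs (step a).
C = rng.standard_normal((P, N))
D = rng.standard_normal((P, N))

def banded(m, delta):
    """Random banded matrix standing in for a subband-domain filter:
    only the diagonal and delta adjacent sub-diagonals are nonzero."""
    A = rng.standard_normal((m, m))
    mask = np.abs(np.subtract.outer(np.arange(m), np.arange(m))) <= delta
    return A * mask

# One shared filter matrix per basis direction (step b).
filters = [banded(M, 2) for _ in range(P)]

# Weight, filter, and sum: L and R are linear combinations of the
# weighted, filtered subband vectors, still in the subband domain.
L = sum(filters[p] @ (C[p] @ S) for p in range(P))
R = sum(filters[p] @ (D[p] @ S) for p in range(P))
```

Because the filter units are shared across sources, adding a source only adds one weighting per basis direction, which is what makes processing many streams in the coded domain attractive.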


Claims (26)

  1. Method for processing sound data for a spatialized reproduction of acoustic signals, in which:
    a) for each acoustic signal (Si), at least a first set (Cni) and a second set (Dni) of weighting terms, representative of a direction of perception of the acoustic signal by a listener, are obtained; and
    b) the acoustic signals are applied to at least two sets of filter units arranged in parallel, so as to deliver at least a first output signal (L) and a second output signal (R), each corresponding to a linear combination of the acoustic signals weighted by all the weighting terms of the first set (Cni) and of the second set (Dni), respectively, and filtered by the filter units,
    characterized in that, in step a), each acoustic signal is at least partially compression-coded and expressed in the form of a vector of sub-signals associated with frequency subbands,
    and in that each filter unit is designed to carry out, in the frequency subband domain, a matrix filtering applied to each vector.
  2. Method according to claim 1, characterized in that each matrix filtering is obtained by converting, into the frequency subband domain, a filter represented by an impulse response in the time domain.
  3. Method according to claim 2, characterized in that each impulse-response filter is obtained by determining an acoustic transfer function which depends on a direction of perception of a sound and on the frequency of that sound.
  4. Method according to claim 3, characterized in that the transfer functions are expressed by a linear combination of terms which depend on the frequency, weighted by terms which depend on the direction (Eq[1]).
  5. Method according to one of the preceding claims, characterized in that the weighting terms of the first and second sets depend on the direction of the sound.
  6. Method according to claim 5, characterized in that the direction is defined by an azimuth angle (θ) and an elevation angle (ϕ).
  7. Method according to one of claims 2 and 3, characterized in that the matrix filtering is expressed on the basis of a matrix product involving polyphase matrices (E(z), R(z)) corresponding to analysis and synthesis filter banks, and a transfer matrix (S(z)) whose elements depend on the impulse-response filter.
  8. Method according to one of the preceding claims, characterized in that the matrix of the matrix filtering is of reduced form, having a diagonal and a predetermined number (δ) of adjacent lower and upper sub-diagonals whose elements are not all zero.
  9. Method according to claim 8 in combination with claim 7, characterized in that the rows of the matrix of the matrix filtering are expressed as:
    [0 ... Ssb il(z) ... Ssb ii(z) ... Ssb in(z) ... 0], where
    - i is the index of the (i+1)-th row and lies between 0 and M-1, M corresponding to a total number of subbands,
    - l = i-δ mod[M], where δ corresponds to the number of adjacent sub-diagonals and the notation mod[M] denotes a subtraction operation modulo M,
    - n = i+δ mod[M], where the notation mod[M] denotes an addition operation modulo M,
    - and Ssb ij(z) are the coefficients of the product matrix involving the polyphase matrices of the analysis and synthesis filter banks and the transfer matrix.
  10. Method according to one of claims 7 to 9, characterized in that the product matrix is expressed as Ssb(z) = z^K E(z) S(z) R(z), where
    - z^K is an advance defined by the term K = (L/M) - 1, where L is the length of the impulse response of the analysis and synthesis filters of the filter banks and M is the total number of subbands,
    - E(z) is the polyphase matrix corresponding to the analysis filter bank,
    - R(z) is the polyphase matrix corresponding to the synthesis filter bank, and
    - S(z) corresponds to the transfer matrix.
  11. Method according to one of claims 7 to 10, characterized in that the transfer matrix is expressed as:

    $$S(z) = \begin{bmatrix} S_0(z) & S_1(z) & \cdots & \cdots & S_{M-1}(z) \\ z^{-1}S_{M-1}(z) & S_0(z) & S_1(z) & \cdots & S_{M-2}(z) \\ z^{-1}S_{M-2}(z) & z^{-1}S_{M-1}(z) & S_0(z) & \cdots & S_{M-3}(z) \\ \vdots & & \ddots & \ddots & \vdots \\ z^{-1}S_1(z) & z^{-1}S_2(z) & \cdots & z^{-1}S_{M-1}(z) & S_0(z) \end{bmatrix}$$

    where Sk(z), for k between 0 and M-1, are the polyphase components of the impulse-response filter S(z), and where M corresponds to a total number of subbands.
  12. Method according to one of claims 7 to 11, characterized in that the filter banks operate with critical sampling.
  13. Method according to one of claims 7 to 12, characterized in that the filter banks satisfy a perfect reconstruction property.
  14. Method according to one of claims 2 to 13, characterized in that the impulse-response filter is a rational filter expressed in the form of a fraction of two polynomials.
  15. Method according to claim 14, characterized in that the impulse response is infinite.
  16. Method according to one of claims 8 to 15, characterized in that the predetermined number (δ) of adjacent sub-diagonals depends on the type of filter bank used in the chosen compression coding.
  17. Method according to claim 16, characterized in that the predetermined number (δ) lies between 1 and 5.
  18. Method according to one of claims 7 to 17, characterized in that the matrix elements (Ln, Rn) resulting from the matrix product are stored in a memory and reused for all the partially coded acoustic signals to be spatialized.
  19. Method according to one of the preceding claims, characterized in that it further comprises a step d) consisting in applying a synthesis filter bank to the first (L) and second (R) output signals before their reproduction.
  20. Method according to claim 19, characterized in that it further comprises a step c), prior to step d), consisting in transmitting the first and second signals in coded and spatialized form over a communication network from a remote server to a reproduction device, and in that step b) is carried out in the remote server.
  21. Method according to claim 19, characterized in that it further comprises a step c), prior to step d), consisting in transmitting the first and second signals in coded and spatialized form over a communication network from an audio bridge of a multipoint teleconferencing system with centralized architecture to a reproduction device of the teleconferencing system, and in that step b) is carried out in the audio bridge.
  22. Method according to claim 19, characterized in that it further comprises a step, subsequent to step a), consisting in transmitting the acoustic signals in compression-coded form over a communication network from a remote server to a reproduction terminal, and in that steps b) and d) are carried out in the reproduction terminal.
  23. Method according to one of the preceding claims, characterized in that, in step b), a sound spatialization by binaural synthesis based on a linear decomposition of acoustic transfer functions is applied.
  24. Method according to claim 23, characterized in that, in step b), a matrix of gain filters (Gi) is furthermore applied to each partially coded acoustic signal (Si),
    in that the first and second output signals are intended to be decoded into first and second reproduction signals (l, r),
    and in that applying the matrix of gain filters amounts to applying a selected interaural time delay (ITD) between the first and second reproduction signals.
  25. Method according to one of claims 1 to 22, characterized in that, in step a), more than two sets of weighting terms are obtained, and in that, in step b), more than two sets of filter units are applied to the acoustic signals so as to deliver more than two output signals comprising coded ambisonic signals.
  26. System for processing sound data, characterized in that it comprises means for implementing the method according to one of the preceding claims.
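Claims 8 to 10 describe a reduced ("banded") product matrix Ssb(z) = z^K E(z) S(z) R(z) whose rows are nonzero only between columns l = i-δ mod[M] and n = i+δ mod[M], i.e. a band that wraps around modulo M. The index arithmetic of claim 9 can be sketched as follows; this is an illustrative NumPy sketch of the support pattern only (M and delta are example values), not the patented filter computation.

```python
import numpy as np

# Example values for the total number of subbands M and the number of
# adjacent sub-diagonals delta (claim 17 gives delta between 1 and 5).
M, delta = 8, 2

def band_mask(M, delta):
    """Boolean support mask of the reduced matrix of claims 8-9: a main
    diagonal plus delta adjacent lower and upper sub-diagonals, with the
    band wrapping around modulo M."""
    i = np.arange(M)[:, None]
    j = np.arange(M)[None, :]
    d = (j - i) % M                     # circular column offset of each entry
    return (d <= delta) | (d >= M - delta)

mask = band_mask(M, delta)
# Each row admits exactly 2*delta + 1 nonzero coefficients Ssb_ij(z),
# so the matrix filtering costs O(M*delta) per frame instead of O(M^2).
```

The sparsity is what makes the subband-domain filtering cheap: only the coefficients inside the band need to be computed, stored (claim 18), and applied.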
EP04712070A 2003-02-27 2004-02-18 Verfahren zum bearbeiten komprimierter audiodaten zur räumlichen wiedergabe Expired - Lifetime EP1600042B1 (de)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
FR0302397 2003-02-27
FR0302397A FR2851879A1 (fr) 2003-02-27 2003-02-27 Procede de traitement de donnees sonores compressees, pour spatialisation.
PCT/FR2004/000385 WO2004080124A1 (fr) 2003-02-27 2004-02-18 Procede de traitement de donnees sonores compressees, pour spatialisation

Publications (2)

Publication Number Publication Date
EP1600042A1 EP1600042A1 (de) 2005-11-30
EP1600042B1 true EP1600042B1 (de) 2006-08-09

Family

ID=32843028

Family Applications (1)

Application Number Title Priority Date Filing Date
EP04712070A Expired - Lifetime EP1600042B1 (de) 2003-02-27 2004-02-18 Verfahren zum bearbeiten komprimierter audiodaten zur räumlichen wiedergabe

Country Status (7)

Country Link
US (1) US20060198542A1 (de)
EP (1) EP1600042B1 (de)
AT (1) ATE336151T1 (de)
DE (1) DE602004001868T2 (de)
ES (1) ES2271847T3 (de)
FR (1) FR2851879A1 (de)
WO (1) WO2004080124A1 (de)

Families Citing this family (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100606734B1 (ko) 2005-02-04 2006-08-01 엘지전자 주식회사 삼차원 입체음향 구현 방법 및 그 장치
DE102005010057A1 (de) * 2005-03-04 2006-09-07 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung und Verfahren zum Erzeugen eines codierten Stereo-Signals eines Audiostücks oder Audiodatenstroms
US8345890B2 (en) 2006-01-05 2013-01-01 Audience, Inc. System and method for utilizing inter-microphone level differences for speech enhancement
US8194880B2 (en) 2006-01-30 2012-06-05 Audience, Inc. System and method for utilizing omni-directional microphones for speech enhancement
US8204252B1 (en) 2006-10-10 2012-06-19 Audience, Inc. System and method for providing close microphone adaptive array processing
US8744844B2 (en) 2007-07-06 2014-06-03 Audience, Inc. System and method for adaptive intelligent noise suppression
US9185487B2 (en) 2006-01-30 2015-11-10 Audience, Inc. System and method for providing noise suppression utilizing null processing noise subtraction
KR100754220B1 (ko) 2006-03-07 2007-09-03 삼성전자주식회사 Mpeg 서라운드를 위한 바이노럴 디코더 및 그 디코딩방법
EP1994526B1 (de) * 2006-03-13 2009-10-28 France Telecom Gemeinsame schallsynthese und -spatialisierung
EP1994796A1 (de) * 2006-03-15 2008-11-26 Dolby Laboratories Licensing Corporation Binaurales rendering mit subbandfiltern
FR2899423A1 (fr) * 2006-03-28 2007-10-05 France Telecom Procede et dispositif de spatialisation sonore binaurale efficace dans le domaine transforme.
US8266195B2 (en) * 2006-03-28 2012-09-11 Telefonaktiebolaget L M Ericsson (Publ) Filter adaptive frequency resolution
US8204253B1 (en) 2008-06-30 2012-06-19 Audience, Inc. Self calibration of audio device
US8949120B1 (en) 2006-05-25 2015-02-03 Audience, Inc. Adaptive noise cancelation
US8150065B2 (en) 2006-05-25 2012-04-03 Audience, Inc. System and method for processing an audio signal
US8934641B2 (en) 2006-05-25 2015-01-13 Audience, Inc. Systems and methods for reconstructing decomposed audio signals
US8849231B1 (en) 2007-08-08 2014-09-30 Audience, Inc. System and method for adaptive power control
US8259926B1 (en) 2007-02-23 2012-09-04 Audience, Inc. System and method for 2-channel and 3-channel acoustic echo cancellation
US20080273708A1 (en) * 2007-05-03 2008-11-06 Telefonaktiebolaget L M Ericsson (Publ) Early Reflection Method for Enhanced Externalization
US8189766B1 (en) 2007-07-26 2012-05-29 Audience, Inc. System and method for blind subband acoustic echo cancellation postfiltering
JP2009128559A (ja) * 2007-11-22 2009-06-11 Casio Comput Co Ltd 残響効果付加装置
US8180064B1 (en) 2007-12-21 2012-05-15 Audience, Inc. System and method for providing voice equalization
US8143620B1 (en) 2007-12-21 2012-03-27 Audience, Inc. System and method for adaptive classification of audio sources
US8194882B2 (en) 2008-02-29 2012-06-05 Audience, Inc. System and method for providing single microphone noise suppression fallback
US8355511B2 (en) 2008-03-18 2013-01-15 Audience, Inc. System and method for envelope-based acoustic echo cancellation
US8521530B1 (en) 2008-06-30 2013-08-27 Audience, Inc. System and method for enhancing a monaural audio signal
US8774423B1 (en) 2008-06-30 2014-07-08 Audience, Inc. System and method for controlling adaptivity of signal modification using a phantom coefficient
KR101496760B1 (ko) * 2008-12-29 2015-02-27 삼성전자주식회사 서라운드 사운드 가상화 방법 및 장치
US8639046B2 (en) * 2009-05-04 2014-01-28 Mamigo Inc Method and system for scalable multi-user interactive visualization
CN102577441B (zh) * 2009-10-12 2015-06-03 诺基亚公司 用于音频处理的多路分析
US9838784B2 (en) 2009-12-02 2017-12-05 Knowles Electronics, Llc Directional audio capture
US8786852B2 (en) 2009-12-02 2014-07-22 Lawrence Livermore National Security, Llc Nanoscale array structures suitable for surface enhanced raman scattering and methods related thereto
US8718290B2 (en) 2010-01-26 2014-05-06 Audience, Inc. Adaptive noise reduction using level cues
US9008329B1 (en) 2010-01-26 2015-04-14 Audience, Inc. Noise reduction using multi-feature cluster tracker
US9378754B1 (en) 2010-04-28 2016-06-28 Knowles Electronics, Llc Adaptive spatial classifier for multi-microphone systems
US9395304B2 (en) 2012-03-01 2016-07-19 Lawrence Livermore National Security, Llc Nanoscale structures on optical fiber for surface enhanced Raman scattering and methods related thereto
US9491299B2 (en) * 2012-11-27 2016-11-08 Dolby Laboratories Licensing Corporation Teleconferencing using monophonic audio mixed with positional metadata
US9536540B2 (en) 2013-07-19 2017-01-03 Knowles Electronics, Llc Speech signal separation and synthesis based on auditory scene analysis and speech modeling
FR3009158A1 (fr) * 2013-07-24 2015-01-30 Orange Spatialisation sonore avec effet de salle
DE102013223201B3 (de) * 2013-11-14 2015-05-13 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Verfahren und Vorrichtung zum Komprimieren und Dekomprimieren von Schallfelddaten eines Gebietes
CN107112025A (zh) 2014-09-12 2017-08-29 美商楼氏电子有限公司 用于恢复语音分量的***和方法
US10249312B2 (en) * 2015-10-08 2019-04-02 Qualcomm Incorporated Quantization of spatial vectors
US9820042B1 (en) 2016-05-02 2017-11-14 Knowles Electronics, Llc Stereo separation and directional suppression with omni-directional microphones
US10598506B2 (en) * 2016-09-12 2020-03-24 Bragi GmbH Audio navigation using short range bilateral earpieces
FR3065137B1 (fr) 2017-04-07 2020-02-28 Axd Technologies, Llc Procede de spatialisation sonore

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5583962A (en) * 1991-01-08 1996-12-10 Dolby Laboratories Licensing Corporation Encoder/decoder for multidimensional sound fields
KR100206333B1 (ko) * 1996-10-08 1999-07-01 윤종용 두개의 스피커를 이용한 멀티채널 오디오 재생장치및 방법
US7116787B2 (en) * 2001-05-04 2006-10-03 Agere Systems Inc. Perceptual synthesis of auditory scenes

Also Published As

Publication number Publication date
ES2271847T3 (es) 2007-04-16
DE602004001868D1 (de) 2006-09-21
FR2851879A1 (fr) 2004-09-03
WO2004080124A1 (fr) 2004-09-16
EP1600042A1 (de) 2005-11-30
ATE336151T1 (de) 2006-09-15
DE602004001868T2 (de) 2007-03-08
US20060198542A1 (en) 2006-09-07

Similar Documents

Publication Publication Date Title
EP1600042B1 (de) Verfahren zum bearbeiten komprimierter audiodaten zur räumlichen wiedergabe
EP2374123B1 (de) Verbesserte codierung von mehrkanaligen digitalen audiosignalen
EP2042001B1 (de) Binaurale spatialisierung kompressionsverschlüsselter tondaten
JP5090436B2 (ja) 変換ドメイン内で効率的なバイノーラルサウンド空間化を行う方法およびデバイス
EP1794748B1 (de) Datenverarbeitungsverfahren durch Übergang zwischen verschiedenen Subband-domänen
EP2374124B1 (de) Verwaltete codierung von mehrkanaligen digitalen audiosignalen
EP2005420B1 (de) Einrichtung und verfahren zur codierung durch hauptkomponentenanalyse eines mehrkanaligen audiosignals
EP1992198B1 (de) Optimierung des binauralen raumklangeffektes durch mehrkanalkodierung
EP2304721B1 (de) Raumsynthese mehrkanaliger tonsignale
EP2319037B1 (de) Rekonstruktion von mehrkanal-audiodaten
EP3025514B1 (de) Klangverräumlichung mit raumwirkung
EP1994526B1 (de) Gemeinsame schallsynthese und -spatialisierung
FR3065137A1 (fr) Procede de spatialisation sonore
EP4042418B1 (de) Bestimmung von korrekturen zur anwendung auf ein mehrkanalaudiosignal, zugehörige codierung und decodierung
WO2006075079A1 (fr) Procede d’encodage de pistes audio d’un contenu multimedia destine a une diffusion sur terminaux mobiles
Touimi et al. Efficient method for multiple compressed audio streams spatialization
Pernaux Efficient Method for Multiple Compressed Audio Streams Spatialization

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20050825

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

RIN1 Information on inventor provided before grant (corrected)

Inventor name: PERNAUX, JEAN-MARIE

Inventor name: BENJELLOUN TOUIMI, ABDELLATIF

Inventor name: EMERIT, MARC

DAX Request for extension of the european patent (deleted)
GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060809

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT;WARNING: LAPSES OF ITALIAN PATENTS WITH EFFECTIVE DATE BEFORE 2007 MAY HAVE OCCURRED AT ANY TIME BEFORE 2007. THE CORRECT EFFECTIVE DATE MAY BE DIFFERENT FROM THE ONE RECORDED.

Effective date: 20060809

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060809

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060809

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060809

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060809

Ref country code: IE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060809

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060809

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060809

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: FRENCH

REF Corresponds to:

Ref document number: 602004001868

Country of ref document: DE

Date of ref document: 20060921

Kind code of ref document: P

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20061109

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20061109

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20061109

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20070109

GBT Gb: translation of ep patent filed (gb section 77(6)(a)/1977)

Effective date: 20061220

NLV1 Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act
PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20070228

REG Reference to a national code

Ref country code: IE

Ref legal event code: FD4D

REG Reference to a national code

Ref country code: ES

Ref legal event code: FG2A

Ref document number: 2271847

Country of ref document: ES

Kind code of ref document: T3

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20070510

BERE Be: lapsed

Owner name: FRANCE TELECOM

Effective date: 20070228

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20070228

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20061110

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060809

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080229

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080229

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060809

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20070218

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060809

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20070210

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 12

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 13

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 14

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 15

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230119

Year of fee payment: 20

Ref country code: ES

Payment date: 20230301

Year of fee payment: 20

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: IT

Payment date: 20230120

Year of fee payment: 20

Ref country code: GB

Payment date: 20230121

Year of fee payment: 20

Ref country code: DE

Payment date: 20230119

Year of fee payment: 20

REG Reference to a national code

Ref country code: DE

Ref legal event code: R071

Ref document number: 602004001868

Country of ref document: DE

REG Reference to a national code

Ref country code: ES

Ref legal event code: FD2A

Effective date: 20240226

REG Reference to a national code

Ref country code: GB

Ref legal event code: PE20

Expiry date: 20240217

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20240219

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20240219

Ref country code: GB

Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION

Effective date: 20240217