EP2987339B1 - Method for acoustical reproduction of a numerical audio signal - Google Patents

Method for acoustical reproduction of a numerical audio signal

Info

Publication number
EP2987339B1
Authority
EP
European Patent Office
Prior art keywords
frequency
sound
audio
digital
file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Not-in-force
Application number
EP14721466.2A
Other languages
German (de)
French (fr)
Other versions
EP2987339A1 (en)
Inventor
Jean-Luc HAURAIS
Franck Rosset
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of EP2987339A1 publication Critical patent/EP2987339A1/en
Application granted granted Critical
Publication of EP2987339B1 publication Critical patent/EP2987339B1/en
Not-in-force legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00 Stereophonic arrangements
    • H04R5/04 Circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers, for loudspeaker detection, or for adaptation of settings to personal preferences or hearing impairments
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S5/00 Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S3/00 Systems employing more than two channels, e.g. quadraphonic
    • H04S3/008 Systems employing more than two channels, e.g. quadraphonic in which the audio signals are in digital form, i.e. employing more than two discrete digital channels
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H04S7/302 Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S7/303 Tracking of listener position or orientation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2205/00 Details of stereophonic arrangements covered by H04R5/00 but not provided for in any of its subgroups
    • H04R2205/021 Aspects relating to docking-station type assemblies to obtain an acoustical effect, e.g. the type of connection to external loudspeakers or housings, frequency improvement
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/01 Multi-channel, i.e. more than two input channels, sound reproduction with two speakers wherein the multi-channel information is substantially preserved
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/05 Generation or adaptation of centre channel in multi-channel audio systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/07 Generation or adaptation of the Low Frequency Effect [LFE] channel, e.g. distribution or signal processing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/11 Positioning of individual sound objects, e.g. moving airplane, within a sound field
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/15 Aspects of sound capture and related signal processing for recording or reproduction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H04S7/302 Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S7/303 Tracking of listener position or orientation
    • H04S7/304 For headphones

Definitions

  • the present invention relates to the field of processing audio signals to improve perception during sound reproduction.
  • international patent application WO2012088336 is known, which describes a method of processing an audio sound source to create four-dimensional spatialized sound.
  • a virtual sound source can be moved along a path in a three-dimensional space over a specified period of time to obtain four-dimensional sound localization.
  • the various embodiments described therein provide methods and systems for converting existing mono, 2-channel and/or multi-channel audio signals into spatialized audio signals having two or more audio channels.
  • the various embodiments also describe methods, systems, and apparatus for producing low frequency effects and center channel signals from incoming audio signals having one or more channels.
  • European patent EP2119306 describes an apparatus for processing an audio sound source to create four-dimensional spatialized sound.
  • a virtual sound source can be moved along a path in a three-dimensional space over a specified period of time to obtain four-dimensional sound localization.
  • a binaural filter for a desired spatial point is applied to the audio waveform to produce a spatialized waveform such that, when the spatialized waveform is played from a pair of speakers, the sound appears to come from the chosen point in space rather than from the speakers.
  • a binaural filter for a point in space is simulated by interpolation of the nearest neighbor binaural filters selected from a plurality of predefined binaural filters.
  • the audio waveform can be digitally processed in overlapping data blocks using a short-time Fourier transform.
  • the localized sound can subsequently be processed for Doppler shift and room simulation.
  • the present invention relates to a method for processing an original audio signal of Nx channels, N being greater than 1 and x being greater than or equal to 0, comprising a multichannel processing step of said input audio signal by a multichannel convolution with a predefined imprint, said imprint being elaborated by the capture of a reference sound by a set of speakers arranged in a reference space characterized in that it comprises an additional step of selecting at least one of a plurality of fingerprints previously developed in different sound contexts.
  • the patent application WO2012172264 discloses a method of processing an original audio signal of Nx channels, N being greater than 1 and x being greater than or equal to 0, comprising a multichannel processing step of said input audio signal by a multichannel convolution with a predefined imprint, said fingerprint being developed by the capture of a reference sound by a set of speakers arranged in a reference space characterized in that it comprises an additional step of selecting at least one of a plurality of fingerprints previously developed in different sound contexts.
  • patent application WO9725834 proposes another method and device for processing multichannel audio signals, each channel corresponding to a loudspeaker placed at a particular point in a room so as to give, via headphones, the impression that multiple 'ghost' speakers are distributed in the room.
  • Head Related Transfer Functions (HRTFs) are selected by taking into consideration the height and azimuth of each speaker in relation to the listener.
  • each channel is HRTF-filtered so that, when these channels are combined into the left and right channels and played back over headphones, the listener has the impression that the sound actually comes from ghost speakers distributed in the virtual room.
  • sets of HRTF coefficients stored in a database from a large number of individuals, together with the use of the optimal HRTF set for the listener, provide listening impressions similar to those of an isolated listener listening to multiple loudspeakers distributed in the volume of a room.
  • applying an HRTF function to the output of the right and left channels makes it possible, in the case of headphone listening, to give the impression of listening without headphones.
  • the object of the present invention is to improve the perceived quality, and in particular the extent of the spatialization, including with means of reproduction of average quality, such as docking stations ("docks") for tablets or mobile phones.
  • the invention relates, in its most general sense, to a method of sound reproduction of a digital audio signal, characterized in that an oversampling step is carried out, consisting in producing, from a signal sampled at a frequency F, a signal sampled at a frequency NxF, where N is an integer greater than 1, and then applying convolution processing to a first digital file sampled at a frequency NxF corresponding to the acquisition of the sound environment of a reference sound space, a second digital file sampled at a frequency NxF corresponding to the acquisition of the sound footprint of a reference playback device, and a third digital file sampled at a frequency NxF corresponding to the acquisition of the sound footprint of an equalizer, as well as a fourth file corresponding to said oversampled audio file, the resulting digital packets then being digitally converted to a sampling frequency F/M corresponding to the working frequency of the listening equipment.
  • the processing is based on a mathematical convolution operation, and uses several prerecorded audio samples of the impulse response of the modeled space as well as an equalizer and playback equipment.
  • the method comprises an additional step of recalculating the file corresponding to said sound footprint of the reference sound space, in order to modify the balance between the spatial channels of said sound footprint.
  • the processing method according to the invention consists in producing different acoustic footprints of a sound source, with a view to performing a convolution of these different sound footprints.
  • convolution is a known technique whereby the user captures, and then reproduces, the acoustic behavior of a place or a device.
  • convolution reverbs make it possible to use the acoustics of many real places, famous concert halls or others: these acoustics, previously sampled, can be reused at will within a program.
  • the principle is then to sample the acoustics of the sets in which the scenes of the film were shot, so that these acoustics can easily be applied to elements recorded afterwards, allowing them to blend perfectly with the sounds from the direct takes.
  • the capture of impulse responses, used to obtain the impulse response of a device or of a room constituting the sound footprint, is based on "deconvolution". It uses the excitation of the system by a known signal (here called f(t)). This signal is such that, if a transform (the deconvolution function) is applied to it, the result is the Dirac function.
  • the types of signal used to capture impulse responses sound like Gaussian noise or "white noise".
  • the excitation sequences are generated by a deterministic algorithm and are periodic (periods of the order of a few seconds or tens of seconds for our application) and constitute a pseudo-random signal.
  • these sequences are created by linear feedback shift registers (LFSRs).
  • this register structure, whose order is determined by the number of registers, is such that over its period it produces all the binary values possible for its order (for an order-4 structure, there are 2^4 possible values).
  • such a sequence is known as an MLS (Maximum Length Sequence): the longest possible sequence of binary values without repeating the same value twice.
  • the initial popularity of MLS stems from the ease of the deconvolution process.
  • the MLS signal is such that its deconvolution can use a transform called the Hadamard transform, which simplifies the calculations and has the advantage of being computable with few computational resources.
  • the first way of deconvolving the measurements uses the passage into the frequency domain to perform the calculations before returning to the time domain.
  • Each of these impulse responses is captured from a reference signal at a high sampling rate, which is higher than the nominal sampling frequency of the playback equipment.
  • the room footprint (3) is acquired from white noise, producing a file of 6 Mbytes per speaker, over a long duration greater than 500 milliseconds, preferably between one and two seconds.
  • the file corresponding to the impulse response is then compressed without loss (ZIP compression for example) and encrypted.
  • the footprint of the headphones (1) is acquired in the same way with a white or pink signal of a duration of about 200 milliseconds, preferably between 100 and 500 milliseconds.
  • the imprint of the equalizer (2) is acquired in the same way with a white or pink signal with a duration of about 200 milliseconds, advantageously between 100 and 500 milliseconds for each of the equalizer settings.
  • This function can be controlled by the gyro sensor to create a dynamic displacement of the sound scene according to the user's movements

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Stereophonic System (AREA)

Description

Field of the invention

The present invention relates to the field of processing audio signals to improve perception during sound reproduction.

International patent application WO2012088336 is known, for example, which describes a method of processing an audio sound source to create four-dimensional spatialized sound.

A virtual sound source can be moved along a path in a three-dimensional space over a specified period of time to obtain four-dimensional sound localization.

The various embodiments described therein provide methods and systems for converting existing mono, 2-channel and/or multi-channel audio signals into spatialized audio signals having two or more audio channels.

The various embodiments also describe methods, systems and apparatus for producing low-frequency effects and centre-channel signals from incoming audio signals having one or more channels.

Patent application WO9914983 discloses a device for creating, with a pair of opposing speakers of a headset, the sensation of a sound source located away from the area between said speakers. The device comprises:

  • a series of audio inputs representing audio signals projected from a theoretical sound source located at a distance from the theoretical listener;
  • a first mixing matrix, connected to the audio inputs and to a series of return inputs, which produces a predetermined combination of said audio inputs constituting intermediate output signals;
  • a filter system, which filters said intermediate output signals and produces the filtered intermediate output signals and the series of return inputs, and which comprises separate filters for filtering the direct response, the fast response and an approximation of the reverberated response, and for filtering the return response so as to produce the return inputs; and
  • a second mixing matrix, which combines the filtered intermediate output signals to produce right-channel and left-channel stereophonic outputs.

European patent EP2119306 describes an apparatus for processing an audio sound source to create four-dimensional spatialized sound. A virtual sound source can be moved along a path in a three-dimensional space over a specified period of time to obtain four-dimensional sound localization.

A binaural filter for a desired spatial point is applied to the audio waveform to produce a spatialized waveform such that, when the spatialized waveform is played from a pair of speakers, the sound appears to come from the chosen point in space rather than from the speakers.

A binaural filter for a point in space is simulated by interpolation of the nearest-neighbour binaural filters selected from a plurality of predefined binaural filters.

The audio waveform can be digitally processed in overlapping data blocks using a short-time Fourier transform.

The localized sound can subsequently be processed for Doppler shift and room simulation.

The present invention relates to a method for processing an original audio signal of N.x channels, N being greater than 1 and x being greater than or equal to 0, comprising a multichannel processing step of said input audio signal by a multichannel convolution with a predefined footprint, said footprint being produced by capturing a reference sound with a set of speakers arranged in a reference space, characterized in that it comprises an additional step of selecting at least one footprint from a plurality of footprints previously produced in different sound contexts.

Patent application WO2012172264 discloses a method of processing an original audio signal of N.x channels, N being greater than 1 and x being greater than or equal to 0, comprising a multichannel processing step of said input audio signal by a multichannel convolution with a predefined footprint, said footprint being produced by capturing a reference sound with a set of speakers arranged in a reference space, characterized in that it comprises an additional step of selecting at least one footprint from a plurality of footprints previously produced in different sound contexts.

Patent application WO9725834 proposes another method and device for processing multichannel audio signals, each channel corresponding to a loudspeaker placed at a particular point in a room, so as to give, via headphones, the impression that multiple 'ghost' speakers are distributed in the room. Head Related Transfer Functions (HRTFs) are selected by taking into account the height and azimuth of each speaker relative to the listener. Each channel is HRTF-filtered so that, when these channels are combined into the left and right channels and played back over headphones, the listener has the impression that the sound actually comes from ghost speakers distributed in the virtual room. Sets of HRTF coefficients stored in a database from a large number of individuals, together with the use of the optimal HRTF set for the listener concerned, provide listening impressions similar to those of an isolated listener listening to multiple loudspeakers distributed throughout the volume of a room. Applying an HRTF function to the output of the right and left channels makes it possible, in the case of headphone listening, to give the impression of listening without headphones.

Document US 2006/045294 is considered to represent the closest prior art and describes a method of sound reproduction of a digital audio signal in which an oversampling step is carried out, consisting in producing, from a signal sampled at a frequency F, a signal sampled at a frequency NxF, where N is an integer greater than 1, and then applying convolution processing to a digital file sampled at a frequency NxF corresponding to the acquisition of the sound footprint of an equalizer, as well as a file corresponding to said oversampled audio file, the resulting digital packets then being digitally converted to a sampling frequency F/M corresponding to the working frequency of the listening equipment.

Reference is also made to document US 2012/014527, which discloses the application of a combined chained transformation or convolution comprising two or more signal transformations, the signals being able to be provided by a database containing transformations developed by the user himself, or by an extension module containing spatial operations that can be applied to the sound field.

Disadvantages of the prior art

The solutions of the prior art remain limited by the intrinsic qualities of the reproduction means (headphones or loudspeakers) and by how well those means match the processing applied to the audio signal.

Moreover, some prior-art processing requires significant computing power, hardly compatible with the capabilities of tablets, phones or portable players.

Solution provided by the invention

The object of the present invention is to improve the perceived quality, and in particular the extent of the spatialization, including with reproduction means of average quality such as docking stations ("docks") for tablets or mobile phones.

To this end, the invention relates, in its most general sense, to a method of sound reproduction of a digital audio signal, characterized in that an oversampling step is carried out, consisting in producing, from a signal sampled at a frequency F, a signal sampled at a frequency NxF, where N is an integer greater than 1, and then applying convolution processing to a first digital file sampled at a frequency NxF corresponding to the acquisition of the sound environment of a reference sound space, a second digital file sampled at a frequency NxF corresponding to the acquisition of the sound footprint of a reference playback device, and a third digital file sampled at a frequency NxF corresponding to the acquisition of the sound footprint of an equalizer, as well as a fourth file corresponding to said oversampled audio file, the resulting digital packets then being digitally converted to a sampling frequency F/M corresponding to the working frequency of the listening equipment.

The processing is based on a mathematical convolution operation and uses several prerecorded audio samples of the impulse response of the modelled space, as well as of an equalizer and of a playback device.
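
A minimal sketch of this processing chain in Python with numpy/scipy is given below; the function name, the parameter values (F, N, M) and the file variables are illustrative assumptions, not values taken from the patent:

```python
import numpy as np
from scipy.signal import resample_poly, fftconvolve

def reproduce(audio, ir_room, ir_device, ir_equalizer, F=44100, N=4, M=2):
    """audio is sampled at F; the three impulse responses are assumed to have
    been captured at the oversampled rate N*F."""
    # 1. Oversampling: F -> N*F
    audio_up = resample_poly(audio, up=N, down=1)

    # 2. Convolution of the oversampled audio with the three footprints
    #    (reference sound space, reference playback device, equalizer).
    out = fftconvolve(audio_up, ir_room, mode="full")
    out = fftconvolve(out, ir_device, mode="full")
    out = fftconvolve(out, ir_equalizer, mode="full")

    # 3. Conversion of the resulting packets to the working frequency F/M of
    #    the listening equipment: going from N*F to F/M divides the rate by N*M.
    return resample_poly(out, up=1, down=N * M)
```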

According to one variant, the method comprises an additional step of recalculating the file corresponding to said sound footprint of the reference sound space, in order to modify the balance between the spatial channels of said sound footprint.

Detailed description of nonlimiting exemplary embodiments

The invention will be better understood on reading the following description, with reference to the appended drawing corresponding to nonlimiting exemplary embodiments, in which:

  • figure 1 shows a schematic view of the signal processing according to the invention.

The processing method according to the invention consists in producing different acoustic footprints of a sound source, with a view to performing a convolution of these different sound footprints.

Convolution technology is a known technique whereby the user captures, and then reproduces, the acoustic behaviour of a place or a device. For example, convolution reverbs make it possible to use the acoustics of many real places, famous concert halls or others: these acoustics, previously sampled, can be reused at will within a program.

In the case of sound for picture, the first idea for exploiting this possibility was to capture the acoustics of film sets in order to obtain direct acoustic matching between the direct sounds and the sounds added in post-production (post-synchronization, sound effects).

The principle is then to sample the acoustics of the sets in which the scenes of the film were shot, so that these acoustics can easily be applied to elements recorded afterwards, allowing them to blend perfectly with the sounds from the direct takes.

The capture of impulse responses, used to obtain the impulse response of a device or of a room constituting the sound footprint, is based on "deconvolution". It uses the excitation of the system by a known signal (here called f(t)). This signal is such that, if a transform (the deconvolution function) is applied to it, the result is the Dirac function.

The deconvolution function G is chosen such that, for the excitation signal f(t) and any function h(t):

G(f(t)) = δ(t)

G(f(t) * h(t)) = G(h(t) * f(t)) = G(f(t)) * h(t)

With this deconvolution function, an impulse response signal of a system is produced from the response of that system to an excitation signal other than the Dirac pulse.

The types of signal used to capture impulse responses sound, to the ear, like Gaussian noise or "white noise". The excitation sequences are generated by a deterministic algorithm, are periodic (with periods of the order of a few seconds or tens of seconds for our application) and constitute a pseudo-random signal.

These sequences are created by linear feedback shift registers (LFSRs). This register structure, whose order is determined by the number of registers, is such that over its period it produces all the binary values possible for its order (for an order-4 structure, there are 2^4 possible values). These sequences are known to those skilled in the art as "MLS, Maximum Length Sequence": the longest possible sequence of binary values without repeating the same value twice.
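
As an illustration of the LFSR mechanism described above, here is a minimal Python sketch; the tap positions are standard maximal-length polynomials and the order-16 usage is an arbitrary example (scipy.signal.max_len_seq offers equivalent functionality):

```python
import numpy as np

def mls(order=4, taps=(4, 3)):
    """One period (2**order - 1 samples) of a maximum length sequence,
    returned as +/-1 values, generated by a Fibonacci LFSR."""
    state = [1] * order                      # any non-zero seed works
    out = []
    for _ in range(2 ** order - 1):
        out.append(state[-1])                # output the last register
        feedback = 0
        for t in taps:                       # XOR of the tapped registers
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]      # shift the register chain
    return 2 * np.array(out) - 1             # map {0, 1} to {-1, +1}

# Example: a roughly 65535-sample pseudo-random excitation burst.
excitation = mls(order=16, taps=(16, 15, 13, 4))
```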

The initial popularity of MLS stems from the ease of the deconvolution process.

Indeed, the MLS signal is such that its deconvolution can use a transform called the Hadamard transform, which simplifies the calculations and has the advantage of being computable with few computational resources.

Another excitation-signal solution is based on the so-called "logarithmic sweep" or "exponential sweep" technique, corresponding, as its name indicates, to a swept sine whose frequency is related to time by an exponential law. This implies that the sweep is faster at high frequencies than at low frequencies, and consequently its spectrum is that of pink noise (less energy is released at high frequencies since less time is spent there).

There are two ways to deconvolve the measurements thus made. The first passes into the frequency domain to perform the calculations before returning to the time domain. The second consists in convolving, non-periodically, the recorded signal with the time-reversed excitation signal:

h(t) = r(t) * s(T - t)

with T the duration of the sweep.
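
A short Python sketch of this second deconvolution route follows; the sweep parameters (f1, f2, T, fs) are illustrative choices rather than values from the patent, and practical exponential-sweep measurement usually adds an amplitude correction to the inverse filter, which is omitted here:

```python
import numpy as np
from scipy.signal import fftconvolve

def exp_sweep(f1=20.0, f2=20000.0, T=2.0, fs=96000):
    """Exponential (logarithmic) sine sweep from f1 to f2 Hz over T seconds."""
    t = np.arange(int(T * fs)) / fs
    L = T / np.log(f2 / f1)
    return np.sin(2 * np.pi * f1 * L * (np.exp(t / L) - 1.0))

def impulse_response(recorded, sweep):
    """h(t) = r(t) * s(T - t): convolve with the time-reversed sweep."""
    return fftconvolve(recorded, sweep[::-1], mode="full")
```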

Proceeding in this way has two advantages:

  • the non-linear distortions of the system are completely rejected and do not disturb the measurement of the linear impulse response of the system;
  • the method tolerates slight desynchronization: the sweep can be played from one device and recorded with another without these two machines being synchronized by a clock.

In the present invention, three sound footprints or impulse responses are captured, corresponding:

  • to a sound footprint of a listening means, for example a headset;
  • to a sound footprint of an equalizer;
  • to a sound footprint of a reference sound space.

Each of these impulse responses is captured from a reference signal at a high sampling rate, higher than the nominal sampling frequency of the playback equipment.

For example, the room footprint (3) is acquired from white noise, producing a file of 6 Mbytes per speaker, over a long duration greater than 500 milliseconds, preferably between one and two seconds. The file corresponding to the impulse response is then losslessly compressed (ZIP compression, for example) and encrypted.
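
A minimal sketch of this compress-then-encrypt step, assuming zlib for the lossless compression and a symmetric Fernet key from the cryptography package (both are illustrative choices; the text only requires lossless compression, ZIP being one example, followed by encryption):

```python
import zlib
from cryptography.fernet import Fernet

def pack_impulse_response(raw_ir: bytes, key: bytes) -> bytes:
    """Losslessly compress an impulse-response file, then encrypt it."""
    compressed = zlib.compress(raw_ir, 9)   # lossless, deflate as in ZIP
    return Fernet(key).encrypt(compressed)

key = Fernet.generate_key()                 # stored with the application
```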

The footprint of the headphones (1) (or of a set of speakers) is acquired in the same way, with a white or pink signal lasting about 200 milliseconds, advantageously between 100 and 500 milliseconds.

The footprint of the equalizer (2) is acquired in the same way, with a white or pink signal lasting about 200 milliseconds, advantageously between 100 and 500 milliseconds, for each of the equalizer settings.

These three impulse response files (1 to 3), together with the digital file of the audio signal (4), undergo convolution processing (5) based on fast Fourier transform (FFT) processing.
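
One possible way to realize this FFT-based convolution, shown as a Python sketch: the three impulse responses and the audio are multiplied together in the frequency domain, which is equivalent to chaining the three convolutions (the zero-padding strategy and the single composite pass are implementation assumptions, not requirements stated in the text):

```python
import numpy as np

def fft_convolve_all(audio, ir_phones, ir_eq, ir_room):
    """Convolve the audio (4) with the three footprints (1-3) in one FFT pass."""
    n = len(audio) + len(ir_phones) + len(ir_eq) + len(ir_room) - 3
    n_fft = int(2 ** np.ceil(np.log2(n)))          # zero-padded FFT size
    spectrum = (np.fft.rfft(audio, n_fft)
                * np.fft.rfft(ir_phones, n_fft)
                * np.fft.rfft(ir_eq, n_fft)
                * np.fft.rfft(ir_room, n_fft))
    return np.fft.irfft(spectrum, n_fft)[:n]       # trim the zero padding
```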

To reduce computation time, a step (6) is carried out to dynamically recalculate the left and right footprints according to the particularities of the playback equipment and, where appropriate, the sensory particularities of the listener. There is, for example, an adjustment means for modifying the virtual spatial position. A modification of this setting triggers the calculation of a new pair of sound footprints from the footprints initially provided, by morphing:

  • a central virtual speaker and two footprints, one for the right speaker and one for the left speaker, are taken into account;
  • the left/right footprints are recalculated in real time to move the sound stage.

This function can be controlled by the gyroscope sensor to create a dynamic displacement of the sound stage according to the user's movements.

It makes it possible to centre the voice in real time with respect to the head.
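
A minimal sketch of such a morphing step in Python; the linear blending law and the use of a normalized gyroscope yaw value as the control parameter are assumptions made for illustration, since the text does not specify the interpolation formula:

```python
import numpy as np

def morph_footprints(ir_left, ir_center, ir_right, shift):
    """Recompute the left/right footprints for a sound-stage shift.

    shift in [-1, 1]: 0 keeps the scene centred, positive values move it to
    the right (e.g. driven by a normalized gyroscope yaw reading). All three
    impulse responses are assumed to have the same length.
    """
    s = float(np.clip(shift, -1.0, 1.0))
    if s >= 0:   # scene moves right: blend the left footprint towards centre
        return (1 - s) * ir_left + s * ir_center, ir_right
    else:        # scene moves left: blend the right footprint towards centre
        return ir_left, (1 + s) * ir_right + (-s) * ir_center
```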

Claims (2)

  1. A method of sound reproduction of a digital audio signal characterized in that a step of oversampling is executed, which consists in generating, from a signal sampled at a frequency F, a signal sampled at a frequency NxF, where N corresponds to an integer greater than 1, and then in applying a convolution processing to a first digital file sampled at a frequency NxF corresponding to the acquisition of the background noise of a reference sound space, a second digital file sampled at a frequency NxF corresponding to the acquisition of the noise footprint of a reference reproduction device, and a third digital file sampled at a frequency NxF corresponding to the acquisition of the noise footprint of an equalizer as well as a fourth file corresponding to said oversampled audio file, with the resulting digital packets being then submitted to a digital conversion processing at a sampling frequency F/M corresponding to the working frequency of the audio-monitoring device.
  2. A method of sound reproduction of a digital audio signal according to claim 1, characterized in that it comprises an additional step of recalculating the file corresponding to said noise footprint of the reference sound space, in order to change the balance between the spatial channels of said noise footprint.
EP14721466.2A 2013-04-17 2014-04-09 Method for acoustical reproduction of a numerical audio signal Not-in-force EP2987339B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR1353473A FR3004883B1 (en) 2013-04-17 2013-04-17 METHOD FOR AUDIO RECOVERY OF AUDIO DIGITAL SIGNAL
PCT/FR2014/050846 WO2014170580A1 (en) 2013-04-17 2014-04-09 Method for playing back the sound of a digital audio signal

Publications (2)

Publication Number Publication Date
EP2987339A1 EP2987339A1 (en) 2016-02-24
EP2987339B1 true EP2987339B1 (en) 2017-07-12

Family

ID=48782399

Family Applications (1)

Application Number Title Priority Date Filing Date
EP14721466.2A Not-in-force EP2987339B1 (en) 2013-04-17 2014-04-09 Method for acoustical reproduction of a numerical audio signal

Country Status (7)

Country Link
US (1) US9609454B2 (en)
EP (1) EP2987339B1 (en)
JP (1) JP6438004B2 (en)
CN (1) CN105308989B (en)
CA (1) CA2909580A1 (en)
FR (1) FR3004883B1 (en)
WO (1) WO2014170580A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2018120366A (en) 2015-12-14 2020-01-16 Рэд.Ком, Ллс MODULAR DIGITAL CAMERA AND CELL PHONE

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0593228B1 (en) * 1992-10-13 2000-01-05 Matsushita Electric Industrial Co., Ltd. Sound environment simulator and a method of analyzing a sound space
JPH08191225A (en) * 1995-01-09 1996-07-23 Matsushita Electric Ind Co Ltd Sound field reproducing device
WO1997025834A2 (en) 1996-01-04 1997-07-17 Virtual Listening Systems, Inc. Method and device for processing a multi-channel signal for use with a headphone
JP4627880B2 (en) 1997-09-16 2011-02-09 ドルビー ラボラトリーズ ライセンシング コーポレイション Using filter effects in stereo headphone devices to enhance the spatial spread of sound sources around the listener
JP2001224100A (en) * 2000-02-14 2001-08-17 Pioneer Electronic Corp Automatic sound field correction system and sound field correction method
JP2005217837A (en) * 2004-01-30 2005-08-11 Sony Corp Sampling rate conversion apparatus and method thereof, and audio apparatus
GB0419346D0 (en) * 2004-09-01 2004-09-29 Smyth Stephen M F Method and apparatus for improved headphone virtualisation
WO2008106680A2 (en) 2007-03-01 2008-09-04 Jerry Mahabub Audio spatialization and environment simulation
GB2476747B (en) * 2009-02-04 2011-12-21 Richard Furse Sound system
TWI517028B (en) 2010-12-22 2016-01-11 傑奧笛爾公司 Audio spatialization and environment simulation
FR2976759B1 (en) 2011-06-16 2013-08-09 Jean Luc Haurais METHOD OF PROCESSING AUDIO SIGNAL FOR IMPROVED RESTITUTION

Also Published As

Publication number Publication date
JP2016519526A (en) 2016-06-30
WO2014170580A1 (en) 2014-10-23
EP2987339A1 (en) 2016-02-24
JP6438004B2 (en) 2018-12-12
FR3004883A1 (en) 2014-10-24
US20160080882A1 (en) 2016-03-17
CA2909580A1 (en) 2014-10-23
US9609454B2 (en) 2017-03-28
CN105308989B (en) 2017-06-20
FR3004883B1 (en) 2015-04-03
CN105308989A (en) 2016-02-03

Similar Documents

Publication Publication Date Title
EP1836876B1 (en) Method and device for individualizing hrtfs by modeling
EP2000002B1 (en) Method and device for efficient binaural sound spatialization in the transformed domain
FR2852779A1 (en) Stereophonic sound signal processing method, involves broadcasting sound signals obtained by combining two processed right sound signals and two processed left sound signals with right and left sound signals, respectively
EP2042001B1 (en) Binaural spatialization of compression-encoded sound data
WO2007110520A1 (en) Method for binaural synthesis taking into account a theater effect
WO2004049299A1 (en) Method for processing audio data and sound acquisition device therefor
EP1586220B1 (en) Method and device for controlling a reproduction unit using a multi-channel signal
EP2009891B1 (en) Transmission of an audio signal in an immersive audio conference system
EP3400599B1 (en) Improved ambisonic encoder for a sound source having a plurality of reflections
FR3065137A1 (en) SOUND SPATIALIZATION METHOD
JP2005198251A (en) Three-dimensional audio signal processing system using sphere, and method therefor
EP2987339B1 (en) Method for acoustical reproduction of a numerical audio signal
WO2012172264A1 (en) Method for processing an audio signal for improved restitution
EP3384688B1 (en) Successive decompositions of audio filters
FR3069693B1 (en) METHOD AND SYSTEM FOR PROCESSING AUDIO SIGNAL INCLUDING ENCODING IN AMBASSIC FORMAT
EP2815589B1 (en) Transaural synthesis method for sound spatialization
WO2023156578A1 (en) Method for processing a digital sound signal for vinyl disc emulation
EP3449643B1 (en) Method and system of broadcasting a 360° audio signal

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20151007

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAX Request for extension of the european patent (deleted)
GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

INTG Intention to grant announced

Effective date: 20170102

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 909362

Country of ref document: AT

Kind code of ref document: T

Effective date: 20170715

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: FRENCH

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602014011727

Country of ref document: DE

REG Reference to a national code

Ref country code: CH

Ref legal event code: NV

Representative=s name: ABREMA AGENCE BREVET ET MARQUES, GANGUILLET, CH

REG Reference to a national code

Ref country code: NL

Ref legal event code: FP

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 909362

Country of ref document: AT

Kind code of ref document: T

Effective date: 20170712

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170712

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171012

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170712

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170712

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170712

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170712

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170712

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171013

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170712

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170712

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171012

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20171112

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170712

REG Reference to a national code

Ref country code: DE

Ref legal event code: R082

Ref document number: 602014011727

Country of ref document: DE

Representative's name: KOENIG SZYNKA TILMANN VON RENESSE PATENTANWAEL, DE

Ref country code: DE

Ref legal event code: R081

Ref document number: 602014011727

Country of ref document: DE

Owner name: A3D TECHNOLOGIES LLC, LOS ANGELES, US

Free format text: FORMER OWNERS: HAURAIS, JEAN-LUC, PARIS, FR; ROSSET, FRANCK, BRUXELLES, BE

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 5

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602014011727

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170712

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170712

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170712

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

REG Reference to a national code

Ref country code: FR

Ref legal event code: TP

Owner name: A3D TECHNOLOGIES LLC, US

Effective date: 20180412

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170712

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170712

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170712

26N No opposition filed

Effective date: 20180413

REG Reference to a national code

Ref country code: GB

Ref legal event code: 732E

Free format text: REGISTERED BETWEEN 20180709 AND 20180711

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170712

REG Reference to a national code

Ref country code: NL

Ref legal event code: PD

Owner name: A3D TECHNOLOGIES LLC; US

Free format text: DETAILS ASSIGNMENT: CHANGE OF OWNER(S), ASSIGNMENT; FORMER OWNER NAME: ROSSET, FRANCK

Effective date: 20180703

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170712

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170712

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20180430

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180409

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180430

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180409

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20190325

Year of fee payment: 6

Ref country code: CH

Payment date: 20190326

Year of fee payment: 6

Ref country code: IT

Payment date: 20190321

Year of fee payment: 6

Ref country code: FR

Payment date: 20190325

Year of fee payment: 6

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL

Payment date: 20190327

Year of fee payment: 6

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20190220

Year of fee payment: 6

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170712

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170712

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170712

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20140409

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170712

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170712

REG Reference to a national code

Ref country code: CH

Ref legal event code: PFUS

Owner name: A3D TECHNOLOGIES LLC, US

Free format text: FORMER OWNER: A3D TECHNOLOGIES LLC, US

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 602014011727

Country of ref document: DE

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

REG Reference to a national code

Ref country code: NL

Ref legal event code: MM

Effective date: 20200501

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20201103

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200430

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200430

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200430

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20200409

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200501

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200409

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200409