EP0790753B1 - System für Raumklangeffekt und Verfahren dafür (System for spatial sound effect and method therefor) - Google Patents
- Publication number
- EP0790753B1 EP0790753B1 EP97400248A EP97400248A EP0790753B1 EP 0790753 B1 EP0790753 B1 EP 0790753B1 EP 97400248 A EP97400248 A EP 97400248A EP 97400248 A EP97400248 A EP 97400248A EP 0790753 B1 EP0790753 B1 EP 0790753B1
- Authority
- EP
- European Patent Office
- Prior art keywords
- sound
- microphones
- transfer functions
- head
- spatial
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Lifetime
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S3/00—Systems employing more than two channels, e.g. quadraphonic
- H04S3/002—Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
- H04S3/004—For headphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/01—Multi-channel, i.e. more than two input channels, sound reproduction with two speakers wherein the multi-channel information is substantially preserved
Definitions
- The present invention relates to a method for personalizing (customizing) a sound spatialization system.
- Radiocommunications may have stereophonic, or even monophonic, content, while on-board communications and alarms cannot be localized relative to the pilot (or co-pilot).
- The subject of the present invention is an audio communication system which makes it easy to discriminate the location of a given sound source, in particular when several sound sources exist near the user.
- The personalization method according to the invention is a method for customizing a sound spatialization system by estimating the head transfer functions of the user. It is characterized in that these functions are measured at a finite number of points in the surrounding space, in two series of measurements. The first consists of a spatial sampling: a sound source is placed at different points of a sphere, at whose center is a pair of microphones whose mutual distance is of the order of the width of the head of the subject whose head transfer functions are to be collected. The second series of measurements is taken with the subject placed so that his ears are located where the microphones were, the subject being fitted with individualized ear plugs in which miniature microphones are placed. The head transfer functions for each of the user's two ears are then computed, by interpolation of the values thus measured, at the point in space where the sound source is located, and a spatialized signal is created from the monophonic signal to be processed by convolving it with each of the two transfer functions thus estimated.
- The personalization system is a system for customizing a spatialization system for sound sources each producing monophonic channels. It comprises, for each monophonic channel to be spatialized, a binaural processor with two convolution-filter channels, linearly combined in each channel, this processor (or these processors) being connected to an orienting device which computes the spatial location of the sound sources, itself linked to at least one localization device. It is characterized in that it includes a head-transfer-function measurement tool installed in an anechoic chamber, comprising a semi-circular rail mounted on a motorized pivot, on which moves a loudspeaker connected to a sound source, a pair of microphones being placed at the center of the sphere described by the rail, the distance separating the microphones being of the order of the width of the head of the system's user.
- The invention is described below with reference to an aircraft audio system, in particular for fighter aircraft, but it is understood that it is not limited to such an application, and that it can be implemented in other types of vehicles (land or maritime) as well as in fixed installations.
- The user of this system is, in this case, the pilot of a combat aircraft, but of course there can be multiple simultaneous users, in particular in the case of a civil transport aircraft; specific devices are then provided for each user, in corresponding number.
- The spatialization module 1 shown in FIG. 1 has the role of rendering audible signals (tones, speech, alarms, ...) through a stereo headset so that they are perceived by the listener as if they came from a particular point in space; this point can be the actual position of the sound source or an arbitrary position. Thus, for example, the pilot of a fighter jet hears the voice of his co-pilot as if it actually came from behind him, or an audible missile-attack alert is spatially positioned at the arrival point of the threat.
- The position of the sound source changes as a function of the pilot's head movements and of the airplane's movements: for example, an alarm generated at the azimuth "3 o'clock" must move to "12 o'clock" if the pilot turns his head 90 degrees to the right.
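The head-movement compensation described above is a simple change of reference frame in azimuth. A minimal sketch (the function name and angle convention are illustrative, not from the patent):

```python
def head_relative_azimuth(source_az_deg: float, head_az_deg: float) -> float:
    """Azimuth of a source as heard by the listener, given the source azimuth
    in the carrier frame and the head azimuth in that same frame
    (degrees, clockwise, 0 = straight ahead of the carrier)."""
    return (source_az_deg - head_az_deg) % 360.0

# An alarm at "3 o'clock" (90 deg) with the head turned 90 deg to the right
# must be rendered at "12 o'clock" (0 deg):
print(head_relative_azimuth(90.0, 90.0))  # 0.0
```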
- Module 1 is, for example, connected to a digital bus 2 from which it receives information provided by: a head-position detector 3, an inertial unit 4 and/or a localization device such as a goniometer, radar, ..., a countermeasures device 5 (detecting threats such as missiles) and an alarm-management device 6 (reporting in particular failures of the airplane's instruments or equipment).
- Module 1 includes an interpolator 7, the input of which is connected to bus 2, to which various sound sources are connected (microphones, alarms, ...). In general, these sources are sampled at relatively low frequencies (6, 12 or 24 kHz, for example).
- The interpolator 7 raises these frequencies to a common multiple, for example 48 kHz in the present case, the frequency required by the processors located downstream.
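Since 6, 12 and 24 kHz all divide 48 kHz, each source only needs an integer-factor sample-rate increase. A toy sketch using linear interpolation (a real interpolator would use a polyphase low-pass filter; this only illustrates the rate conversion):

```python
def upsample_linear(samples, factor):
    """Raise the sample rate by an integer factor (e.g. 4 to go from
    12 kHz to 48 kHz) by linear interpolation between input samples."""
    out = []
    for a, b in zip(samples, samples[1:]):
        for k in range(factor):
            out.append(a + (b - a) * k / factor)
    out.append(samples[-1])
    return out

# 48000 // 12000 == 4, so a 12 kHz source is interpolated by a factor of 4:
print(upsample_linear([0.0, 1.0], 4))  # [0.0, 0.25, 0.5, 0.75, 1.0]
```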
- This interpolator 7 is connected to n binaural processors, referenced 8 as a whole, n being the maximum number of channels to be spatialized simultaneously.
- The outputs of the processors 8 are connected to an adder 9, whose output constitutes the output of module 1.
- Module 1 also includes, in the connection between at least one output of the interpolator 7 and the input of the corresponding processor of set 8, an adder 10 whose other input is connected to the output of a complementary sound-illustration device 11.
- This device 11 produces an audible signal covering in particular the high frequencies (for example from 5 to 16 kHz) of the audio spectrum. It thus completes the useful bandwidth of the transmission channel to which its output signal is added.
- This transmission channel can advantageously be a radio channel, but it is understood that any other channel can be completed in this way, and that multiple channels can be completed in the same system by providing a corresponding number of adders such as 10. Indeed, radiocommunications use reduced bandwidths (3 to 4 kHz in general). Such a bandwidth is insufficient for correct spatialization of the sound signal. Tests have shown that high frequencies (above about 14 kHz), located beyond the limit of the speech spectrum, allow better localization of the sound source. Device 11 is then a bandwidth-expansion device.
- The additional sound signal can, for example, be a background noise characteristic of a radio link.
- Device 11 can also be, for example, a device simulating the acoustic behavior of a room, a building, etc., or a device simulating a Doppler effect, or even a device producing different sound symbols, each corresponding to a given source or alarm.
- The processors 8 each generate a stereophonic-type signal from the monophonic signal coming from the interpolator 7, to which, where appropriate, the signal from device 11 is added, taking account of the data provided by the pilot's head-position detector 3.
- Module 1 also includes a device 12 for managing the sources to be spatialized, followed by an orienter 13 with n inputs (n being defined above) controlling the n different processors of set 8.
- Device 13 is a computer which calculates, from the data supplied by the pilot's head-position detector, the orientation of the airplane relative to the terrestrial reference frame (provided by the aircraft's inertial unit) and the location of the source, the spatial coordinates of the point from which the sounds emitted by this source must appear to come.
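The orienter's computation is a composition of rotations: the source position, known in the local terrestrial frame, must be expressed in the listener's head frame by composing the carrier heading and the head azimuth. A 2-D (yaw-only) sketch under illustrative conventions; the real orienter also handles pitch, roll and elevation:

```python
import math

def source_direction_in_head_frame(src_ne, aircraft_yaw_deg, head_yaw_deg):
    """Rotate a source position given in the local (north, east) frame into
    the listener's head frame, composing carrier heading and head azimuth.
    Returns (ahead, right) coordinates. Names/conventions are illustrative."""
    yaw = math.radians(aircraft_yaw_deg + head_yaw_deg)
    n, e = src_ne
    # Rotate the world vector by -yaw to express it in the head frame:
    x = n * math.cos(yaw) + e * math.sin(yaw)   # ahead of the listener
    y = -n * math.sin(yaw) + e * math.cos(yaw)  # to the listener's right
    return x, y

# Source due east, aircraft heading north, head turned 90 deg to the right:
# the source ends up straight ahead of the listener.
x, y = source_direction_in_head_frame((0.0, 1.0), 0.0, 90.0)
print(round(x, 6), round(y, 6))  # 1.0 0.0
```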
- n2 is advantageously equal to four at most.
- Device 12 for managing the n sources to be spatialized is a computer which receives, via bus 2, information concerning the characteristics of the sources to be spatialized (elevation, bearing and distance relative to the pilot), personalization criteria chosen by the user, and priority information (threats, alarms, important radio communications, ...).
- Device 12 also receives information from device 4 concerning the evolution of the localization of certain sources (or of all sources, if applicable). From this information, device 12 selects the source (or at most the n2 sources) to spatialize.
- A reader 15 for a memory card 16 is used by device 1 in order to personalize the management of sources by device 12. The reader 15 is connected to bus 2.
- The card 16 then contains the characteristics of the filtering carried out by the pinnae of each user's ears. In the preferred embodiment, this is a set of pairs of digital filters (i.e. coefficients representing their impulse responses) corresponding to the "left ear" and "right ear" acoustic filtering, measured for various points of the space surrounding the user.
- The database thus constituted is loaded, via bus 2, into the memory associated with the various processors 8.
- Processors 8 each essentially comprise two convolution-filtering channels (say "left ear" and "right ear"). More precisely, the role of each processor 8 is, on the one hand, to calculate by interpolation the head transfer functions (right and left) at the point where the source is to be placed and, on the other hand, to create the two-channel spatialized signal from the original monophonic signal.
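The two operations of a binaural processor can be sketched as follows: blend the impulse responses of the surrounding measurement points with weights summing to 1, then convolve the mono signal with the resulting filter, once per ear. The weight scheme and toy filter values are illustrative assumptions, not the patent's data:

```python
def interpolate_hrir(grid_hrirs, weights):
    """Spatially interpolate an impulse response from the measurement points
    surrounding the source position. grid_hrirs: equal-length impulse
    responses; weights: barycentric weights summing to 1."""
    length = len(grid_hrirs[0])
    return [sum(w * h[i] for w, h in zip(weights, grid_hrirs))
            for i in range(length)]

def convolve(signal, hrir):
    """Plain FIR convolution of the monophonic signal with one ear's filter."""
    out = [0.0] * (len(signal) + len(hrir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(hrir):
            out[i + j] += s * h
    return out

# Done once per ear; the two outputs form the spatialized stereo pair.
left = convolve([1.0, 0.5], interpolate_hrir(
    [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]],
    [0.25, 0.25, 0.25, 0.25]))
print(left)  # [0.5, 0.75, 0.25]
```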
- The different pieces of equipment determining the orientation of the sound source and the orientation and location of the user's head provide their respective data every ΔT = 20 or 40 ms; that is, a new pair of transfer functions is available every ΔT.
- The signal to be spatialized is in fact convolved with a pair of filters obtained by "temporal" interpolation between the spatially interpolated convolution filters at the instants T and T + ΔT. It only remains to convert the digital signals thus obtained to analog before their restitution in the user's headphones.
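The "temporal" interpolation is a crossfade, coefficient by coefficient, between the filter pair valid at T and the one valid at T + ΔT. A minimal sketch (linear blending is an assumption; the patent does not specify the interpolation law):

```python
def temporal_interpolate(h_prev, h_next, t, delta_t):
    """Blend the spatially interpolated filters available at instants T and
    T + delta_t (delta_t = 20 or 40 ms), giving a filter valid t seconds
    after T (0 <= t <= delta_t)."""
    a = t / delta_t
    return [(1 - a) * p + a * n for p, n in zip(h_prev, h_next)]

# Halfway between two filter updates:
print(temporal_interpolate([1.0, 0.0], [0.0, 1.0], 0.010, 0.020))  # [0.5, 0.5]
```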
- On the diagram of figure 3, which relates to one channel to be spatialized, the different attitude (position) sensors implemented have been represented. These are: a head-attitude sensor 17, an attitude sensor 18 for the sound source, and an attitude sensor 19 for the mobile carrier (an airplane, for example).
- The information from these sensors is provided to the orienter 13, which determines from it the spatial position of the source relative to the user's head (in line of sight and in distance).
- The orienter 13 is connected to a database 20 (included in card 16), which it commands to load into processors 8 the "left" and "right" transfer functions of the four points closest to the position of the source (see Figure 2), or possibly of the measurement point itself (if the position of the source coincides with that of one of the measurement points of grid G).
- The transfer functions are subjected to a spatial interpolation at 21, then a temporal interpolation at 22, and the resulting values are convolved at 23 with the signal 24 to be spatialized.
- Functions 21 and 22 are performed by the same interpolator (interpolator 7 in Figure 1), and the convolutions 23 are performed by the binaural processor 8 corresponding to the spatialized channel.
- A digital-to-analog conversion is carried out at 25, followed by sound reproduction (amplification and sending to stereophonic headphones) at 26.
- Operations 20 to 23 and 25, 26 are performed separately for the left channel and for the right channel.
- The "personalized" convolution filters forming the previously mentioned database are established from measurements using a method described below with reference to FIG. 4.
- An automated mechanical tool 27 is installed in an anechoic chamber; it consists of a semi-circular rail 28 mounted on a motorized pivot 29 fixed to the floor of this chamber.
- Rail 28 is arranged vertically, so that its two ends lie on the same vertical line.
- On this rail moves a support 30 on which a broadband loudspeaker 31 is mounted.
- This device makes it possible to place the loudspeaker at any point on the sphere defined by the rail when it rotates 360 degrees around a vertical axis passing through pivot 29.
- The positioning accuracy of the loudspeaker is one degree in elevation and bearing, for example.
- Loudspeaker 31 is successively placed at X points of the sphere; that is to say, the space is "discretized": this is a spatial sampling.
- A pseudo-random code is generated and played back by loudspeaker 31.
- The sound signal emitted is picked up by a pair of reference microphones placed at the center 32 of the sphere (the distance separating the microphones is about the width of the head of the subject whose transfer functions are to be collected), in order to measure the resulting sound pressure as a function of frequency.
- For the second series of measurements, the method is the same, but this time the subject is placed so that his ears are located at the location of the microphones (the subject checks the position of his head by video feedback).
- The subject is fitted with individual sealing earplugs in which miniature microphones are placed.
- The complete sealing of the ear canal has the following advantages: the ear is acoustically protected, and the stapedial reflex (nonexistent in this case) does not modify the acoustic impedance of the whole.
- The transfer-function database can consist either of pairs of frequency responses (convolution by multiplication in the frequency domain) or of pairs of impulse responses (classical temporal convolution), the latter being the inverse Fourier transforms of the former.
- Acoustic sources emitting pseudo-random binary signals are becoming standard in impulse-response measurement, especially for the characterization of an acoustic room by the correlation method.
- The impulse response is obtained over the duration (2^N - 1)/fe, where N is the order of the sequence and fe the sampling frequency. It is up to the experimenter to choose a pair of values (sequence order, fe) sufficient to capture all the useful decay of the response.
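The correlation method referred to here can be sketched end to end: a maximal-length sequence (MLS) of period 2^N - 1 excites the system, and cross-correlating the measured output with the excitation recovers the impulse response. The LFSR taps, the order N = 5, and the toy response h are illustrative choices, not values from the patent:

```python
def mls(order, taps):
    """Maximal-length (pseudo-random binary) sequence of period 2**order - 1,
    generated by a Fibonacci LFSR; taps=(5, 2) corresponds to a primitive
    polynomial, giving the full period of 31 for order 5. Bits map to +/-1."""
    state = [1] * order
    seq = []
    for _ in range(2 ** order - 1):
        seq.append(1.0 if state[-1] else -1.0)
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return seq

def circular_xcorr(y, s):
    """Circular cross-correlation of the measured signal with the excitation."""
    L = len(s)
    return [sum(y[n] * s[(n - k) % L] for n in range(L)) for k in range(L)]

def measure_ir(h, order=5, taps=(5, 2)):
    """Recover an impulse response h (len(h) <= 2**order - 1) by the
    correlation method. The response spans (2**order - 1)/fe seconds.
    Illustrative sketch of the technique, not the patent's implementation."""
    s = mls(order, taps)
    L = len(s)
    # Simulated measurement: the room/head acts as an FIR filter on the MLS.
    y = [sum(h[j] * s[(n - j) % L] for j in range(len(h))) for n in range(L)]
    c = circular_xcorr(y, s)
    # For a +/-1 MLS: c[k] = (L + 1) * h[k] - sum(h), and sum(c) = sum(h).
    total = sum(c)
    return [(ck + total) / (L + 1) for ck in c]

est = measure_ir([1.0, 0.5, 0.25])
print([round(v, 6) for v in est[:4]])  # [1.0, 0.5, 0.25, 0.0]
```

With N = 5 and fe = 48 kHz the measurable response lasts only 31/48000 s; a real room or head measurement uses a much larger order so that the whole useful decay fits inside the period.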
- The sound spatialization device described above makes it possible to increase the intelligibility of the sound sources it processes and to decrease the operator's reaction time to alarm and alert signals or other sound indicators, whose sources appear to be located at different points in space and are therefore easier to discriminate from one another and easier to rank in order of importance or urgency.
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Stereophonic System (AREA)
Claims (5)
- Method for the individual customization of a sound spatialization system by estimating the head transfer functions of the user, characterized in that these functions are measured at a finite number of points of the surrounding space, in two series of measurements, the first consisting of carrying out a spatial sampling by placing the sound source at different points of a sphere at whose center a pair of microphones is arranged, whose mutual distance is of the order of the width of the head of the subject whose head transfer functions are to be recorded, the second series of measurements being carried out with the subject placed so that his ears are located at the location of the microphones, the subject being fitted with individually adapted ear plugs in which miniature microphones are placed; then, by interpolation of the values thus measured, calculating the head transfer functions for each of the user's two ears at the point in space where the sound source is located, and creating a spatialized signal from the monophonic signal to be processed by convolving it with each of the two transfer functions thus estimated.
- Method according to claim 1, characterized in that the interpolation comprises a spatial interpolation phase and a temporal interpolation phase.
- Method according to claim 1 or 2, characterized in that the sound source emits a binary pseudo-random signal.
- Method according to one of claims 1 to 3, characterized in that the head transfer functions are estimated at approximately 100 points.
- System for the individual customization of a sound spatialization system whose sources each produce monophonic channels, comprising, for each monophonic channel to be spatialized, a binaural processor (8) with two convolution-filter channels, linearly combined in each channel, this processor (or these processors) being connected to an orienting device (13) for computing the spatial localization of the sound sources, itself connected to at least one localization device (3, 4, 12), characterized in that it comprises a head-transfer-function measurement tool installed in an anechoic chamber and comprising a semi-circular rail (28) mounted on a motorized pivot, on which moves a loudspeaker (31) connected to a sound source, a pair of microphones being placed at the center (32) of the sphere described by the rail, the distance separating the microphones being of the order of the width of the head of the user of the system.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR9601740A FR2744871B1 (fr) | 1996-02-13 | 1996-02-13 | Systeme de spatialisation sonore, et procede de personnalisation pour sa mise en oeuvre |
FR9601740 | 1996-02-13 |
Publications (2)
Publication Number | Publication Date |
---|---|
EP0790753A1 EP0790753A1 (de) | 1997-08-20 |
EP0790753B1 true EP0790753B1 (de) | 2004-01-28 |
Family
ID=9489132
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP97400248A Expired - Lifetime EP0790753B1 (de) | 1996-02-13 | 1997-02-05 | System für Raumklangeffekt und Verfahren dafür |
Country Status (6)
Country | Link |
---|---|
US (1) | US5987142A (de) |
EP (1) | EP0790753B1 (de) |
JP (1) | JPH1042399A (de) |
CA (1) | CA2197166C (de) |
DE (1) | DE69727328T2 (de) |
FR (1) | FR2744871B1 (de) |
Families Citing this family (47)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2744277B1 (fr) * | 1996-01-26 | 1998-03-06 | Sextant Avionique | Procede de reconnaissance vocale en ambiance bruitee, et dispositif de mise en oeuvre |
FR2765715B1 (fr) | 1997-07-04 | 1999-09-17 | Sextant Avionique | Procede de recherche d'un modele de bruit dans des signaux sonores bruites |
AUPP272898A0 (en) * | 1998-03-31 | 1998-04-23 | Lake Dsp Pty Limited | Time processed head related transfer functions in a headphone spatialization system |
FR2786107B1 (fr) | 1998-11-25 | 2001-02-16 | Sextant Avionique | Masque inhalateur d'oxygene avec dispositif de prise de son |
WO2001055833A1 (en) * | 2000-01-28 | 2001-08-02 | Lake Technology Limited | Spatialized audio system for use in a geographical environment |
JP4304845B2 (ja) * | 2000-08-03 | 2009-07-29 | ソニー株式会社 | 音声信号処理方法及び音声信号処理装置 |
WO2002052895A1 (de) * | 2000-12-22 | 2002-07-04 | Harman Audio Electronic Systems Gmbh | Anordnung zur auralisation eines lautsprechers in einem abhörraum bei beliebigen eingangssignalen |
US20030227476A1 (en) * | 2001-01-29 | 2003-12-11 | Lawrence Wilcock | Distinguishing real-world sounds from audio user interface sounds |
GB2372923B (en) * | 2001-01-29 | 2005-05-25 | Hewlett Packard Co | Audio user interface with selective audio field expansion |
GB2374507B (en) * | 2001-01-29 | 2004-12-29 | Hewlett Packard Co | Audio user interface with audio cursor |
GB2374506B (en) * | 2001-01-29 | 2004-11-17 | Hewlett Packard Co | Audio user interface with cylindrical audio field organisation |
GB2374502B (en) * | 2001-01-29 | 2004-12-29 | Hewlett Packard Co | Distinguishing real-world sounds from audio user interface sounds |
GB0127776D0 (en) * | 2001-11-20 | 2002-01-09 | Hewlett Packard Co | Audio user interface with multiple audio sub-fields |
US7346172B1 (en) * | 2001-03-28 | 2008-03-18 | The United States Of America As Represented By The United States National Aeronautics And Space Administration | Auditory alert systems with enhanced detectability |
US7079658B2 (en) * | 2001-06-14 | 2006-07-18 | Ati Technologies, Inc. | System and method for localization of sounds in three-dimensional space |
SE0202159D0 (sv) * | 2001-07-10 | 2002-07-09 | Coding Technologies Sweden Ab | Efficientand scalable parametric stereo coding for low bitrate applications |
US6956955B1 (en) * | 2001-08-06 | 2005-10-18 | The United States Of America As Represented By The Secretary Of The Air Force | Speech-based auditory distance display |
FR2842064B1 (fr) * | 2002-07-02 | 2004-12-03 | Thales Sa | Systeme de spatialisation de sources sonores a performances ameliorees |
CN1714598B (zh) * | 2002-11-20 | 2010-06-09 | 皇家飞利浦电子股份有限公司 | 基于音频的数据表示设备和方法 |
GB0419346D0 (en) | 2004-09-01 | 2004-09-29 | Smyth Stephen M F | Method and apparatus for improved headphone virtualisation |
US7756281B2 (en) * | 2006-05-20 | 2010-07-13 | Personics Holdings Inc. | Method of modifying audio content |
JP4780119B2 (ja) * | 2008-02-15 | 2011-09-28 | ソニー株式会社 | 頭部伝達関数測定方法、頭部伝達関数畳み込み方法および頭部伝達関数畳み込み装置 |
JP2009206691A (ja) * | 2008-02-27 | 2009-09-10 | Sony Corp | 頭部伝達関数畳み込み方法および頭部伝達関数畳み込み装置 |
JP2011516830A (ja) * | 2008-03-20 | 2011-05-26 | フラウンホーファー−ゲゼルシャフト・ツール・フェルデルング・デル・アンゲヴァンテン・フォルシュング・アインゲトラーゲネル・フェライン | 聴覚的な表示のための装置及び方法 |
JP5540581B2 (ja) | 2009-06-23 | 2014-07-02 | ソニー株式会社 | 音声信号処理装置および音声信号処理方法 |
JP5163685B2 (ja) * | 2010-04-08 | 2013-03-13 | ソニー株式会社 | 頭部伝達関数測定方法、頭部伝達関数畳み込み方法および頭部伝達関数畳み込み装置 |
JP5024418B2 (ja) * | 2010-04-26 | 2012-09-12 | ソニー株式会社 | 頭部伝達関数畳み込み方法および頭部伝達関数畳み込み装置 |
JP5533248B2 (ja) | 2010-05-20 | 2014-06-25 | ソニー株式会社 | 音声信号処理装置および音声信号処理方法 |
JP2012004668A (ja) | 2010-06-14 | 2012-01-05 | Sony Corp | 頭部伝達関数生成装置、頭部伝達関数生成方法及び音声信号処理装置 |
US9031256B2 (en) | 2010-10-25 | 2015-05-12 | Qualcomm Incorporated | Systems, methods, apparatus, and computer-readable media for orientation-sensitive recording control |
US8855341B2 (en) | 2010-10-25 | 2014-10-07 | Qualcomm Incorporated | Systems, methods, apparatus, and computer-readable media for head tracking based on recorded sound signals |
US9552840B2 (en) | 2010-10-25 | 2017-01-24 | Qualcomm Incorporated | Three-dimensional sound capturing and reproducing with multi-microphones |
FR2977335A1 (fr) * | 2011-06-29 | 2013-01-04 | France Telecom | Procede et dispositif de restitution de contenus audios |
JP6065370B2 (ja) | 2012-02-03 | 2017-01-25 | ソニー株式会社 | 情報処理装置、情報処理方法、及びプログラム |
US8929573B2 (en) * | 2012-09-14 | 2015-01-06 | Bose Corporation | Powered headset accessory devices |
EP2917760B1 (de) * | 2012-11-09 | 2018-03-28 | Nederlandse Organisatie voor toegepast- natuurwetenschappelijk onderzoek TNO | Fahrzeugabstandssteuerung |
WO2014171791A1 (ko) | 2013-04-19 | 2014-10-23 | 한국전자통신연구원 | 다채널 오디오 신호 처리 장치 및 방법 |
KR102150955B1 (ko) | 2013-04-19 | 2020-09-02 | 한국전자통신연구원 | 다채널 오디오 신호 처리 장치 및 방법 |
US9319819B2 (en) | 2013-07-25 | 2016-04-19 | Etri | Binaural rendering method and apparatus for decoding multi channel audio |
FR3002205A1 (fr) * | 2013-08-14 | 2014-08-22 | Airbus Operations Sas | Systeme indicateur d'attitude d'un aeronef par spatialisation sonore tridimensionnelle |
US10382880B2 (en) | 2014-01-03 | 2019-08-13 | Dolby Laboratories Licensing Corporation | Methods and systems for designing and applying numerically optimized binaural room impulse responses |
CN105120419B (zh) * | 2015-08-27 | 2017-04-12 | 武汉大学 | 一种多声道***效果增强方法及*** |
WO2017135063A1 (ja) * | 2016-02-04 | 2017-08-10 | ソニー株式会社 | 音声処理装置、および音声処理方法、並びにプログラム |
US9832587B1 (en) | 2016-09-08 | 2017-11-28 | Qualcomm Incorporated | Assisted near-distance communication using binaural cues |
KR102283964B1 (ko) * | 2019-12-17 | 2021-07-30 | 주식회사 라온에이엔씨 | 인터콤시스템 통신명료도 향상을 위한 다채널다객체 음원 처리 장치 |
EP4085660A4 (de) | 2019-12-30 | 2024-05-22 | Comhear Inc. | Verfahren zum bereitstellen eines räumlichen schallfeldes |
FR3110762B1 (fr) | 2020-05-20 | 2022-06-24 | Thales Sa | Dispositif de personnalisation d'un signal audio généré automatiquement par au moins un équipement matériel avionique d'un aéronef |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4700389A (en) * | 1985-02-15 | 1987-10-13 | Pioneer Electronic Corporation | Stereo sound field enlarging circuit |
FR2633125A1 (fr) * | 1988-06-17 | 1989-12-22 | Sgs Thomson Microelectronics | Appareil acoustique avec carte de filtrage vocal |
US4959015A (en) * | 1988-12-19 | 1990-09-25 | Honeywell, Inc. | System and simulator for in-flight threat and countermeasures training |
FR2652164A1 (fr) * | 1989-09-15 | 1991-03-22 | Thomson Csf | Procede de formation de voies pour sonar, notamment pour sonar remorque. |
WO1991011080A1 (fr) * | 1990-01-19 | 1991-07-25 | Sony Corporation | Appareil de reproduction de signaux acoustiques |
CA2049295C (en) * | 1990-01-19 | 1998-06-23 | Kiyofumi Inanaga | Acoustic signal reproducing apparatus |
EP1304797A3 (de) * | 1992-07-07 | 2007-11-28 | Dolby Laboratories Licensing Corporation | Digitales Filter mit hoher genauigkeit und effizienz |
FR2700055B1 (fr) * | 1992-12-30 | 1995-01-27 | Sextant Avionique | Procédé de débruitage vectoriel de la parole et dispositif de mise en Óoeuvre. |
US5371799A (en) * | 1993-06-01 | 1994-12-06 | Qsound Labs, Inc. | Stereo headphone sound source localization system |
US5438623A (en) * | 1993-10-04 | 1995-08-01 | The United States Of America As Represented By The Administrator Of National Aeronautics And Space Administration | Multi-channel spatialization system for audio signals |
US5659619A (en) * | 1994-05-11 | 1997-08-19 | Aureal Semiconductor, Inc. | Three-dimensional virtual audio display employing reduced complexity imaging filters |
-
1996
- 1996-02-13 FR FR9601740A patent/FR2744871B1/fr not_active Expired - Fee Related
-
1997
- 1997-02-05 DE DE69727328T patent/DE69727328T2/de not_active Expired - Fee Related
- 1997-02-05 EP EP97400248A patent/EP0790753B1/de not_active Expired - Lifetime
- 1997-02-10 CA CA002197166A patent/CA2197166C/fr not_active Expired - Fee Related
- 1997-02-11 US US08/797,212 patent/US5987142A/en not_active Expired - Fee Related
- 1997-02-13 JP JP9029372A patent/JPH1042399A/ja active Pending
Also Published As
Publication number | Publication date |
---|---|
FR2744871A1 (fr) | 1997-08-14 |
EP0790753A1 (de) | 1997-08-20 |
CA2197166C (fr) | 2005-08-16 |
US5987142A (en) | 1999-11-16 |
DE69727328T2 (de) | 2004-10-21 |
FR2744871B1 (fr) | 1998-03-06 |
DE69727328D1 (de) | 2004-03-04 |
CA2197166A1 (fr) | 1997-08-14 |
JPH1042399A (ja) | 1998-02-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP0790753B1 (de) | System für Raumklangeffekt und Verfahren dafür | |
EP2898707B1 (de) | Optimierte kalibrierung eines klangwiedergabesystems mit mehreren lautsprechern | |
EP3320692B1 (de) | Räumliche audioverarbeitungsvorrichtung | |
US10334357B2 (en) | Machine learning based sound field analysis | |
US20180376273A1 (en) | System and method for determining audio context in augmented-reality applications | |
US20180249279A1 (en) | Apparatus and method for generating a filtered audio signal realizing elevation rendering | |
US9237398B1 (en) | Motion tracked binaural sound conversion of legacy recordings | |
EP0813688B1 (de) | Persönliches ortungsgerät | |
JP2007158731A (ja) | 収音・再生方法および装置 | |
CN104756526A (zh) | 信号处理装置、信号处理方法、测量方法及测量装置 | |
EP1658755B1 (de) | Tonquelle-raumklangssystem | |
EP1586220B1 (de) | Verfahren und einrichtung zur steuerung einer wiedergabeeinheitdurch verwendung eines mehrkanalsignals | |
EP1502475B1 (de) | Verfahren und system zum repräsentieren eines schallfeldes | |
EP1258168B1 (de) | Verfahren und anordnung zum signalenvergleich zur wandlersteuerung und wandlersteuerungssystem | |
Martin | A computational model of spatial hearing | |
JP5867799B2 (ja) | 収音再生装置、プログラム及び収音再生方法 | |
FR3065137A1 (fr) | Procede de spatialisation sonore | |
CN111679324A (zh) | 地震数据零相位化处理方法、装置、设备和存储介质 | |
FR3112017A1 (fr) | Equipement électronique comprenant un simulateur de distorsion | |
US20240163630A1 (en) | Systems and methods for a personalized audio system | |
US20240137720A1 (en) | Generating restored spatial audio signals for occluded microphones | |
FR3120449A1 (fr) | Procédé de détermination d’une direction de propagation d’une source sonore par création de signaux sinusoïdaux à partir des signaux sonores reçus par des microphones. | |
Iida et al. | Acoustic VR System | |
FR2600163A1 (fr) | Procede et dispositif de detection et de localisation de fuites dans une conduite parcourue par un fluide | |
EP3484185A1 (de) | Modellierung einer menge von akustischen übertragungsfunktionen einer person, 3d-soundkarte und 3d-sound-reproduktionssystem |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): DE GB NL |
|
17P | Request for examination filed |
Effective date: 19980115 |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: THOMSON-CSF SEXTANT |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: THALES AVIONICS S.A. |
|
17Q | First examination report despatched |
Effective date: 20021203 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): DE GB NL |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D Free format text: NOT ENGLISH |
|
REF | Corresponds to: |
Ref document number: 69727328 Country of ref document: DE Date of ref document: 20040304 Kind code of ref document: P |
|
GBT | Gb: translation of ep patent filed (gb section 77(6)(a)/1977) | ||
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed |
Effective date: 20041029 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: NL Payment date: 20090203 Year of fee payment: 13 Ref country code: DE Payment date: 20090129 Year of fee payment: 13 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20090204 Year of fee payment: 13 |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: V1 Effective date: 20100901 |
|
GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 20100205 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20100901 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20100901 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20100205 |