EP3149969B1 - Détermination et utilisation de fonctions de transfert acoustiquement optimisées - Google Patents

Détermination et utilisation de fonctions de transfert acoustiquement optimisées (Determination and use of acoustically optimized transfer functions)

Info

Publication number
EP3149969B1
EP3149969B1, application EP15724972.3A
Authority
EP
European Patent Office
Prior art keywords
room
transfer functions
listening
optimized
listening room
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP15724972.3A
Other languages
German (de)
English (en)
Other versions
EP3149969A1 (fr)
Inventor
Karlheinz Brandenburg
Stephan Werner
Christoph SLADECZEK
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Technische Universitaet Ilmenau
Original Assignee
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Technische Universitaet Ilmenau
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV and Technische Universitaet Ilmenau
Publication of EP3149969A1
Application granted
Publication of EP3149969B1
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 3/00 Circuits for transducers, loudspeakers or microphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S 7/30 Control circuits for electronic adaptation of the sound field
    • H04S 7/305 Electronic adaptation of stereophonic audio signals to reverberation of the listening space
    • H04S 7/306 For headphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S 7/30 Control circuits for electronic adaptation of the sound field
    • H04S 7/301 Automatic calibration of stereophonic sound system, e.g. with test microphone
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S 7/30 Control circuits for electronic adaptation of the sound field
    • H04S 7/302 Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S 7/303 Tracking of listener position or orientation
    • H04S 7/304 For headphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 2430/00 Signal processing covered by H04R, not provided for in its groups
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S 2400/11 Positioning of individual sound objects, e.g. moving airplane, within a sound field
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S 2420/01 Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]

Definitions

  • Exemplary embodiments of the present invention relate to a device for determining listening-room-optimized transfer functions for a listening room, to a corresponding method, and to a device for the spatial reproduction of an audio signal using such methods.
  • Here, the reproduction is carried out by means of a binaural near-field sound transducer, e.g. using a stereo headphone or stereo in-ear earphones.
  • Further embodiments relate to a system comprising the two devices, and to a computer program for carrying out the mentioned methods.
  • The perceptual quality of the presentation of a spatial auditory scene depends crucially on the acoustic and artistic design of the presented content, on the playback system, and on the room acoustics of the playback or listening room.
  • A major goal in the development of audio playback systems is the generation of auditory events that are perceived as plausible by the listener. This plays a special role, for example, in the reproduction of audiovisual content.
  • Different perceptual quality features such as localizability, distance perception, perception of spaciousness and timbral aspects of the sound image must meet the listener's expectations. Ideally, the perception of the reproduced scene therefore corresponds to the real situation in the room.
  • In loudspeaker-based audio playback systems, two- or multi-channel audio material is played back in the listening room.
  • This audio material can come from a channel-based mix that already has ready-made speaker signals.
  • The loudspeaker signals can also be generated by an object-based sound reproduction method.
  • In that case, the loudspeaker reproduction signals are generated during playback.
  • In both cases phantom sound sources are created, which can be perceived by the listener in different directions and at different distances.
  • the room acoustics themselves have a decisive influence on the euphony of the reproduced auditory scene.
  • However, loudspeaker systems are not practical in all listening situations, and it is not possible to install loudspeakers everywhere. Examples of such situations include listening to music on mobile devices, use in frequently changing rooms, a lack of user acceptance, or the acoustic disturbance of others.
  • Instead of loudspeakers, near-field sound transducers are then used, e.g. in-ears or headphones, which are "worn" directly on or in the immediate vicinity of the ear.
  • Classic stereo reproduction via such transducers, which are each equipped with one acoustic driver per side or ear, creates the perception for the listener that the imaged phantom sound sources are located inside the head on the axis connecting the two ears; this is the so-called "in-head localization".
  • A plausible external perception (externalization) of the phantom sound sources therefore does not arise.
  • The phantom sound sources generated in this way typically carry neither the direction information nor the distance information, decodable by the listener, that would be present in the listening room when the same acoustic scene is reproduced via a loudspeaker system (e.g. 2.0 or 5.1).
  • Binaural synthesis uses so-called head-related transfer functions (HRTFs), also referred to as outer-ear transfer functions, for the left and the right ear.
  • For each ear, they comprise a plurality of outer-ear transfer functions associated with the respective directional vectors of virtual sound sources; the audio signals are filtered correspondingly during reproduction, so that an auditory scene is spatially emulated.
  • Binaural synthesis makes use of the fact that interaural features are largely responsible for establishing the directional perception of a sound source, and these interaural features are reflected in the outer-ear transfer functions.
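  • As an editorial illustration (not part of the patent text) of this filtering step, a mono source can be rendered binaurally by convolving it with the left/right head-related impulse responses (the time-domain counterparts of the HRTFs) of the desired direction; the HRIR lookup and the function name below are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def binaural_render(mono, hrir_set, azimuth_deg, elevation_deg=0):
    """Filter a mono signal with the left/right HRIRs of one direction."""
    hrir_left, hrir_right = hrir_set[(azimuth_deg, elevation_deg)]  # assumed lookup
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)   # (samples, 2) headphone signal
```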
  • US 2013/0272527 A1 describes an audio system with a receiver for receiving an audio signal and a binaural circuit for generating a binaural signal with the aid of which a virtual sound source can be positioned in space.
  • An adaptation of the binaural transfer function in dependence on acoustic environment parameters is possible, so that the sound appears very natural.
  • US 2008/0273708 A1 shows how early reflections can be simulated in sound signals using HRTF processing.
  • The object of the present invention is to provide improved spatial reproduction by means of near-field sound transducers, in particular with respect to the agreement between the synthesized acoustics and the listener's expectations.
  • Embodiments of the present invention provide a (portable) device for determining, for a listening room, listening-room-optimized transfer functions based on an analysis of the room acoustics.
  • The listening-room-optimized transfer functions are used for acoustically optimized post-processing of audio signals in spatial reproduction: based on the outer-ear transfer functions (HRTFs), a room to be synthesized is emulated, and based on the listening-room-optimized transfer functions, the listening room can be emulated.
  • The present invention further provides another (portable) device for the spatial reproduction of an audio signal by means of a binaural near-field sound transducer, in which the spatial reproduction is emulated using known outer-ear transfer functions and with the help of the listening-room-optimized transfer functions, so that during the reproduction of audio content via the near-field sound transducer the characteristic of the listening room is impressed on the output sound.
  • the present invention thus provides the prerequisites for considering cognitive effects in the reproduction of multichannel stereo.
  • Listening-room-optimized transfer functions are determined for the respective listening room in which, for example, an auditory scene is to be reproduced by means of headphones (generally by means of a binaural near-field sound transducer).
  • The determination of the listening-room-optimized transfer functions corresponds in principle to the derivation of a room-acoustic filter on the basis of the determined or measured room acoustics, with the objective of reproducing the acoustic properties of the real room synthetically.
  • In accordance with a second aspect of the invention, the auditory scene can then be reproduced both with the aid of the HRTFs and with the aid of the listening-room-optimized transfer functions as a room-acoustic simulation.
  • The spatiality is generated by means of the HRTFs, while the adaptation of the spatiality to the current listening-room situation is achieved by means of the listening-room-optimized transfer functions.
  • In other words, the listening-room-optimized transfer functions perform an adaptation or post-processing of the HRTFs or of the signals processed by the HRTFs.
  • A first variant for determining the listening-room-optimized transfer functions is the metrological determination with the aid of a test sound source and a microphone, so that the room acoustics can be analyzed over a measurement path in the listening room in order to obtain an acoustic model of the room.
  • In a second variant, naturally occurring sounds, such as a voice, can also be used as test signals.
  • The second variant offers the particular advantage that virtually every electronic terminal with a microphone, such as a mobile phone or a smartphone, on which the functionality described above is implemented, is sufficient to determine the room acoustics.
  • the analysis of the listening room or the determination of the acoustic spatial model can be based on geometric models.
  • the background to this is that the room acoustics or acoustic perception changes accordingly, depending on whether the listening position is closer to the wall or in which direction the listener is looking.
  • Therefore, a plurality of direction-dependent and/or position-dependent transfer functions can be stored within the listening-room-optimized transfer functions, which are then selected, for example, depending on the position of the listener in the listening room or on the listener's viewing direction.
  • For this purpose, the device for spatial reproduction may also comprise a position-determining device, such as a GPS receiver.
  • For directional perception, acoustic characteristics such as left/right transit-time differences and (frequency-dependent) left/right level differences are decisive.
  • Among the transit-time differences, a distinction can be made in particular between phase delay at low frequencies and group delay at high frequencies.
  • Such transit-time and level differences can be simulated via any stereo transducer by means of signal processing.
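  • As a hedged editorial sketch of how these interaural cues can be imposed by signal processing, the example below applies a broadband interaural time and level difference to a mono signal; a real implementation would make the level difference frequency dependent, and the default values are illustrative only.

```python
import numpy as np

def apply_itd_ild(mono, fs, itd_s=0.0006, ild_db=6.0, source_on_left=True):
    """Impose an interaural time and level difference on a mono signal."""
    delay = int(round(itd_s * fs))           # ITD realised as an integer sample delay
    gain = 10.0 ** (-ild_db / 20.0)          # ILD realised as broadband attenuation
    near = np.concatenate([mono, np.zeros(delay)])        # ear facing the source
    far = np.concatenate([np.zeros(delay), mono * gain])  # shadowed ear
    channels = [near, far] if source_on_left else [far, near]
    return np.stack(channels, axis=-1)
```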
  • the determination of the direction of incidence in the medial plane is based in particular on the fact that the auricle and / or the auditory canal entrance performs a direction-selective filtering of the acoustic signal. This filtering is frequency selective, so that an audio signal can be filtered in advance with such a frequency filter to simulate a particular direction of arrival or to emulate a spatiality.
  • the determination of the distance of a sound source from the listener is based on different mechanisms.
  • The main mechanisms are loudness, frequency-selective filtering along the traveled sound path, sound reflections, and the initial time delay gap. Many of these factors are person-specific; person-specific variables are, for example, the distance between the ears and the shape of the auricle, which affect in particular lateral and medial localization.
  • Manipulating an audio signal with regard to the mechanisms mentioned is the basis of spatial sound emulation, with the manipulation parameters (per spatial direction and distance) being stored in the HRTFs.
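  • As an editorial illustration of the distance mechanisms listed above, the sketch below combines a 1/r level drop, a crude distance-dependent low-pass and a propagation delay; the constants are illustrative assumptions, and the initial time delay gap and reflections would come from the room model discussed next.

```python
import numpy as np
from scipy.signal import butter, lfilter

def apply_distance_cues(signal, fs, distance_m):
    """Attenuate, low-pass and delay a signal to suggest a source distance."""
    gain = 1.0 / max(distance_m, 1.0)                          # ~1/r level drop beyond 1 m
    cutoff_hz = float(np.clip(16000.0 / max(distance_m, 1.0), 500.0, 0.45 * fs))
    b, a = butter(2, cutoff_hz / (fs / 2.0), btype="low")      # crude distance/air low-pass
    filtered = lfilter(b, a, signal) * gain
    delay = int(round(distance_m / 343.0 * fs))                # propagation delay in samples
    return np.concatenate([np.zeros(delay), filtered])
```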
  • The background to this is that the above factors for localization are altered indoors, in that the sound emitted by a sound source reaches the listener not only directly but also in reflected form (e.g. via walls), which has consequences for the acoustic perception. In rooms there is therefore direct sound and reflected (later-arriving) sound, and the listener can distinguish these components, for example, on the basis of transit time for certain frequency groups and/or the position of the secondary sound sources in the room. These (reverberation) parameters also depend on the room size and nature (e.g. damping, shape), so that a listener can estimate the size and character of the room.
  • the room acoustics can also be binaurally emulated.
  • The room-related transfer function (RRTF) extends the HRTF to the binaural room impulse response (BRIR), which simulates certain acoustic room conditions for the listener in the case of headphone reproduction.
  • In addition, cognitive effects also play a major role for the listener.
  • Studies on such cognitive effects have shown that parameters such as the degree of agreement between the listening room and the room to be synthesized are highly relevant for the emergence of a plausible auditory illusion.
  • In the case of a low agreement between the listening room and the room to be reproduced, the expert speaks of low externalization of the auditory event.
  • the binaural synthesis is now to be extended so that the binaural simulation of an auditory scene can be adapted to the context of use.
  • The simulation is adapted to the listening conditions, such as the current room acoustics (damping) and the geometry of the listening room.
  • In particular, the perception of distance, spatiality and direction can be varied in such a way that they appear plausible with respect to the current listening room.
  • Variation parameters are, for example, the HRTF or RRTF features, such as time differences, level differences, frequency-selective filtering or the initial time delay gap.
  • The adaptation takes place, for example, in such a way that a room size with a certain reverberation or reflection behavior is emulated, or distances between the listener and the sound source are limited to a maximum value.
  • Another factor influencing the surround sound behavior is the position of the user in the listening room, since it is crucial in terms of reverberation and reflection whether the user is centrally located in the room or in the vicinity of a wall.
  • This behavior can also be emulated by adjusting the HRTF or RRTF parameters.
  • the following section explains how or by which means the adaptation of the HRTF or RRTF parameters is carried out in order to improve the plausibility of the acoustic simulation on-site.
  • the concept of auralization of room acoustics in the basic structure comprises two components, which are represented on the one hand by two independent devices and on the other by two corresponding methods.
  • With reference to Fig. 1a and 1b, the first component, namely the acquisition of the listening-room-optimized transfer functions TF, will be explained first.
  • With reference to Fig. 2a and 2b, the use of the listening-room-optimized transfer functions TF is then explained.
  • Fig. 1a shows a device 10 for determining transfer functions TF optimized for a listening room 12.
  • The device 10 includes an interface for acquiring listening-room-related data, e.g., as illustrated here, a microphone interface (see reference numeral 14).
  • Since the listening-room-optimized transfer functions TF, on the basis of which the listening-room characteristic is to be impressed on the audio material by means of binaural synthesis, are typically designed such that already existing HRTFs are adapted, the device 10 can determine the transfer functions TF taking into account the HRTFs to be used.
  • the device 10 optionally includes a further interface for reading in or forwarding HRTFs.
  • The prevailing room-acoustic conditions of the listening room can be detected by measurement.
  • In this case, the room acoustics of the listening room 12 are measured by means of an acoustic measuring method with the aid of the device 10.
  • For this purpose, a test signal transmitted via an optional loudspeaker (not shown) is used.
  • The reproduction of the test signal or the control of the loudspeaker can in this case take place via the device 10, if the device 10 comprises a loudspeaker interface (not shown) or the loudspeaker itself for this purpose.
  • The measurement signal radiated into the room 12 via the loudspeaker is recorded by means of the microphone 14, so that the room acoustics can be determined from the signal change over the measurement path (between loudspeaker and microphone), and at least one listening-room-optimized transfer function TF, e.g. for one spatial direction, or a plurality of listening-room-optimized transfer functions TF can be derived. From the transfer function measured for one direction, relevant room-acoustic parameters are derived for the listening room; these are used to generate the listening-room-optimized transfer functions TF for the other required directions.
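  • As a minimal editorial sketch (not part of the patent text) of this measurement variant: if the played test signal and the microphone recording are both available, the transfer function over the loudspeaker-microphone path can be estimated by regularized spectral division; the function name and the regularization constant below are assumptions.

```python
import numpy as np

def estimate_room_ir(played, recorded, reg=1e-6):
    """Estimate the loudspeaker-to-microphone impulse response by regularized
    spectral division of the recording by the known test signal."""
    n = len(played) + len(recorded)                 # FFT length covering the full convolution
    P = np.fft.rfft(played, n)
    R = np.fft.rfft(recorded, n)
    H = R * np.conj(P) / (np.abs(P) ** 2 + reg)     # H(f) ~ R(f) / P(f), regularized
    return np.fft.irfft(H, n)                       # time-domain room impulse response
```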
  • the discrete first reflections can be adapted to other spatial directions and distance of the virtual sound source positions to be imaged.
  • the information relevant for directional perception is available in the HRTFs.
  • Alternatively, the room acoustics can be estimated by using acoustic signals that are already propagating through the listening room 12 anyway. Examples of such signals are the ambient sounds that are present in any case, as well as a voice signal of a user.
  • The algorithms used for this purpose are derived from algorithms for removing reverberation from a speech signal (dereverberation). The background to this is that such dereverberation algorithms typically carry out an estimation of the room transfer function contained in the recorded signal. Usually, these algorithms are used to determine a filter which, when applied to the recorded signal, yields the most probable unreverberated signal. When applied to the analysis of the room acoustics, the filter function itself is not determined; only the estimation method is used in order to recognize the properties of the listening room. In this procedure, again, the microphone 14 coupled to the device 10 is used.
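  • As an editorial illustration of one room-acoustic parameter that such an estimation can deliver, the sketch below derives a reverberation-time estimate from an already estimated or measured impulse response via Schroeder backward integration; it does not implement the blind estimation from running speech itself, and the fit range is an assumption.

```python
import numpy as np

def rt60_from_ir(ir, fs):
    """Reverberation-time estimate via Schroeder backward integration,
    fitted on the -5 dB to -25 dB decay range and extrapolated to 60 dB."""
    energy = np.cumsum(ir[::-1] ** 2)[::-1]              # Schroeder energy decay curve
    decay_db = 10.0 * np.log10(energy / energy[0] + 1e-12)
    t = np.arange(len(ir)) / fs
    mask = (decay_db <= -5.0) & (decay_db >= -25.0)
    slope, _ = np.polyfit(t[mask], decay_db[mask], 1)    # dB per second (negative)
    return -60.0 / slope
```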
  • As a further alternative, the room acoustics can be simulated based on geometric room data. This approach exploits the fact that geometric data (e.g. edge dimensions, free path lengths) of a room 12 make it possible to estimate the room acoustics.
  • Here, the room acoustics of the room 12 can either be simulated directly or approximated on the basis of a room-acoustic filter database comprising comparable acoustic models.
  • For the direct simulation, methods such as acoustic ray tracing or the mirror (image) source method in conjunction with a diffuse sound model can be mentioned; both methods are based on geometric models of the listening room.
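  • A minimal editorial sketch of the mirror (image) source idea mentioned above, restricted to a rectangular room and first-order reflections; room shape, absorption value and the 1/r amplitude law are illustrative assumptions, not taken from the patent. Each returned (delay, amplitude) pair could then become a tap of a direction-dependent room filter.

```python
import numpy as np

def first_order_image_sources(source, listener, room_dims, absorption=0.3):
    """Direct sound plus the six first-order mirror (image) sources of a
    rectangular ("shoebox") room, returned as (delay_s, amplitude) pairs."""
    source, listener, dims = (np.asarray(x, dtype=float) for x in (source, listener, room_dims))
    c = 343.0                                    # speed of sound in m/s
    reflection = np.sqrt(1.0 - absorption)       # pressure reflection factor of a wall
    d0 = np.linalg.norm(source - listener)
    events = [(d0 / c, 1.0 / max(d0, 1e-3))]     # direct path, 1/r amplitude
    for axis in range(3):
        for wall in (0.0, dims[axis]):
            image = source.copy()
            image[axis] = 2.0 * wall - source[axis]   # mirror the source at the wall plane
            d = np.linalg.norm(image - listener)
            events.append((d / c, reflection / max(d, 1e-3)))
    return sorted(events)
```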
  • In this case, the above-explained interface of the device 10 for acquiring listening-room-related data does not necessarily have to be a microphone interface, but can more generally be a data interface which serves for reading in geometry data. Furthermore, it is also possible that further data on the room acoustics are additionally read in by means of the interface, including, for example, information about a loudspeaker setup existing in the listening room.
  • The geometric data can be retrieved from a geometry database, such as Google Maps indoor data.
  • Such databases typically include geometric models, e.g. vector models of room geometries, from which primarily the distances, but also reflection characteristics, can be determined.
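  • As an editorial illustration of how such edge dimensions can be turned into a rough acoustic parameter, the sketch below applies Sabine's formula to a shoebox approximation of the room; the mean absorption coefficient is a placeholder that would in practice follow from the room's nature (damping, materials).

```python
def sabine_rt60(length_m, width_m, height_m, mean_absorption=0.25):
    """Sabine reverberation-time estimate from room geometry alone."""
    volume = length_m * width_m * height_m
    surface = 2.0 * (length_m * width_m + length_m * height_m + width_m * height_m)
    return 0.161 * volume / (mean_absorption * surface)   # T = 0.161 * V / A
```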
  • an image database can also be used as input, in which case the geometric parameters are subsequently determined by means of image recognition in an intermediate step.
  • the hearing room-optimized transmission functions TF are derived in a subsequent step for at least one, preferably for a plurality of rooms.
  • The derivation of the listening-room-optimized transfer functions TF, which are comparable in terms of their parameters with the RRTFs, corresponds in principle to the determination of a filter function (per spatial direction) by means of which the acoustic behavior of the room, e.g. the sound propagation in a particular spatial direction, can be reproduced.
  • The listening-room-specific transfer functions TF therefore typically comprise, per room, a plurality of transfer functions by means of which the outer-ear transfer functions (assigned to individual solid angles) can be adapted accordingly (comparable to the procedure of processing with the room impulse response).
  • The number of listening-room-optimized transfer functions TF therefore typically depends on the number of outer-ear transfer functions, which occur as a group and in a multiplicity, namely for left/right and for the relevant directions.
  • The exact number of outer-ear transfer functions in the HRTF model depends on the desired spatial resolution and may vary considerably, since there are also HRTF models in which a large number of directional vectors are determined by means of interpolation. In this context it becomes clear why it makes sense for the device for determining the listening-room-optimized transfer functions TF to take the HRTF model into account.
  • The determined listening-room-optimized transfer functions TF are stored, for example, in a room-acoustic filter database.
  • A multitude of listening-room-optimized transfer function groups can also be determined and stored per listening room, which takes into account that the acoustic behavior in the listening room differs depending on the position of the listener.
  • For the different positions, a separate listening-room-optimized transfer characteristic can be determined, the determination of which can be based on one and the same acoustic model of the listening room 12.
  • The analysis of the listening room is advantageously carried out only once.
  • Likewise, different listening-room-optimized transfer function groups can also be determined per viewing direction of the user.
  • the device 10 described above can be designed differently.
  • the device 10 is designed as a mobile device, in which case the sensor 14, such as the microphone or the camera, can be integrated accordingly.
  • the analysis unit 10 can be implemented, for example, as hardware or software-based.
  • embodiments of the device 10 include an internal or cloud-connected CPU or other logic configured to perform the determination of the hearing-space optimized transfer functions TF and / or the listening room analysis.
  • With reference to Fig. 1b, the method, or in particular the basic steps of the method on which the algorithm for the software-implemented determination of listening-room-optimized transfer functions TF is based, is explained.
  • Fig. 1b shows a flowchart 100 of the method for determining the transfer functions TF optimized for a listening room.
  • the method 100 comprises the central step 110 of determining the hearing room optimized transfer functions TF.
  • step 110 is based on the analysis of room acoustics 120 (see step 120 "Analyzing Room Acoustics") and optionally also on existing HRTF functions.
  • A further optional step, namely the storage of the transfer functions TF, can follow; this step is provided with the reference numeral 130.
  • Fig. 2a shows a device 20 for spatial reproduction with the aid of a binaural near-field sound transducer 22.
  • The functionality of the device 20 is explained inter alia with the aid of the flowchart of Fig. 2b, which illustrates the method 200 of the reproduction.
  • The device 20 is adapted to reproduce the audio signal 24, such as a multi-channel stereo audio signal (or an object-based audio signal or an audio signal based on a wave field synthesis (WFS) algorithm), and simultaneously to emulate the surround sound (see step 210).
  • For this purpose, the reproduction device 20 processes the audio signal with the aid of the HRTFs and with the aid of the listening-room-optimized transfer functions TF.
  • The device 20 may comprise an HRTF/TF memory for this, or is connected, for example, to a database on which the HRTFs as well as the listening-room-optimized transfer functions TF determined in accordance with the above methods are stored.
  • Before the signal processing of the audio signal, the HRTFs are combined with the TFs (see step 220), or the HRTFs are adjusted based on the TFs.
  • The result of this combining is a transfer function BRIR' comparable to the BRIR (binaural room impulse response), which is then used to process the audio signal 24 in order to emulate the surround sound (see step 210). This processing corresponds in principle to applying a BRIR'-based filter to the audio signal.
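  • A hedged editorial sketch of steps 220 and 210 as described above: the combination of HRTF and TF is shown here as a cascade (convolution of the corresponding impulse responses), which is one plausible realization; the data layout and function names are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def make_brir_like(hrir_pair, room_filter_pair):
    """Cascade a direction HRIR with the listening-room-optimized filter of the
    same direction into one BRIR'-like filter per ear (cf. step 220)."""
    return [fftconvolve(h, r) for h, r in zip(hrir_pair, room_filter_pair)]

def render_with_brir_like(mono, brir_like_pair):
    """Apply the combined left/right filters to the audio signal (cf. step 210)."""
    return np.stack([fftconvolve(mono, b) for b in brir_like_pair], axis=-1)
```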
  • As a result, the synthesized room (at least approximately) matches the user's expectations, which increases the plausibility of the scene.
  • The device 20 may also include a position determination unit, such as a GPS receiver, by means of which the current position of the listener is detected. Starting from the determined position, the listening room can be identified and the listening-room-optimized transfer functions TF associated with that listening room can be loaded (and, if necessary, updated when the room changes).
  • This position determination unit can also be extended by an orientation determination unit, so that the viewing direction of the listener can also be determined and the TFs are loaded depending on the particular viewing direction, in order to account for the direction-dependent listening-room acoustics.
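  • As an editorial illustration of the room-, position- and orientation-dependent loading of stored TF groups described above, a minimal lookup could look like the sketch below; the database layout and the quantization of position and viewing direction are assumptions.

```python
def select_tf_set(tf_database, room_id, position=None, orientation_deg=None):
    """Pick the stored filter group for the detected room, listener position
    and viewing direction, falling back to a per-room default entry."""
    room_entry = tf_database[room_id]                                   # groups stored per room
    pos_key = tuple(round(p, 1) for p in position) if position else "default"
    ori_key = (int(orientation_deg) // 45 * 45) % 360 if orientation_deg is not None else "default"
    return room_entry.get((pos_key, ori_key), room_entry[("default", "default")])
```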
  • Fig. 12 shows a schematic representation of the signal flow of a listening-room-adapted room-acoustic simulation for use with binaural synthesis, in a system 10 + 20 comprising the device for determining the TFs and the device for reproducing the audio signals using the TFs.
  • Such a system 10 + 20 can be implemented, for example, as a mobile terminal (eg as a smartphone), on which the file to be played back is also stored.
  • The system 10 + 20 is in principle a combination of the device 10 of Fig. 1a and the device 20 of Fig. 2a, with the individual components subdivided differently for a function-oriented explanation.
  • The system 10 + 20 comprises a functional unit 20a for the auralization of the listening room and a functional unit 20b for the binaural synthesis. Furthermore, the system 10 + 20 includes a function block 10a for modeling the room acoustics and a function block 10b for modeling the transmission behavior. The modeling of the room acoustics in turn is based on a detection of the listening room, which is performed with the function block 10c for detecting the room acoustics. Further, in the illustrated embodiment, the system 10 + 20 includes two memories, one for storing scene position data 30a and one for storing HRTF data 30b.
  • the functionality of the system 10 + 20 is explained on the basis of the information flow in the reproduction, assuming that the listening room is known to the system 10 + 20 or already determined by a position determining method (see above).
  • The audio data 24 are supplied in a first step to the signal processing unit 20a, which applies the previously modeled room transfer function TF to the signal 24 and thereby reverberates it.
  • The modeling of the room transfer function TF takes place in the signal processing block 10a, and this modeling is superimposed with the modeled transmission behavior (see function block 10b), as explained below.
  • This second (optional) function block 10b models a virtual speaker setup in the respective listening room.
  • In this way, an acoustic behavior can be emulated for the user as if the audio file were being played back on a particular loudspeaker setup (2.0, 5.1, 9.2).
  • Here, the loudspeaker positions are fixed relative to the listening room, and the respective loudspeakers are also assigned a certain transmission behavior, e.g. defined by frequency response, directional characteristic or different level behavior.
  • It is also possible to fix special sound source types, e.g. a mirror sound source, at a fixed position in the room.
  • The loudspeaker setup is modeled based on the scene position data 30a, which include information about the position, distance or type of the virtual loudspeakers. These scene position data may correspond to a real existing loudspeaker setup or may be based on a purely virtual loudspeaker setup, and are typically customizable by the user.
  • The reverberated signals are fed to the binaural synthesis 20b, which impresses the direction of the virtual loudspeakers onto the audio material associated with each loudspeaker through a set of direction-dependent HRTF filters (see HRTF data 30b).
  • The binaural synthesis can optionally evaluate the head rotation of the listener. The result is a headphone signal, which can be adjusted to a particular headphone by appropriate equalization, and the acoustic signal behaves as if it were being reproduced with a specific loudspeaker setup in the respective listening room.
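  • A minimal editorial sketch of this last stage, under the assumption that each virtual loudspeaker direction already has a combined (room-adapted) filter pair: each channel of the mix is filtered with the pair of its loudspeaker direction and the ear signals are summed; head rotation would simply shift the looked-up directions.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_virtual_speaker_setup(channel_signals, speaker_angles, brir_like_db):
    """Binauralize a channel-based mix over virtual loudspeakers: filter each
    channel with the filter pair of its loudspeaker direction and sum the ears."""
    ears = None
    for sig, angle in zip(channel_signals, speaker_angles):
        left_ir, right_ir = brir_like_db[angle]          # assumed per-direction lookup
        rendered = np.stack([fftconvolve(sig, left_ir),
                             fftconvolve(sig, right_ir)], axis=-1)
        if ears is None:
            ears = rendered
        else:
            n = min(len(ears), len(rendered))            # truncate to a common length
            ears = ears[:n] + rendered[:n]
    return ears
```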
  • the system 10 + 20 may be implemented, for example, as a mobile terminal or components of a home theater system.
  • Applications include the reproduction of music and entertainment content, such as music, or sound for film or games, over the binaural near-field sound transducer.
  • the device 20 from Fig. 2a may also be configured to emulate a particular speaker setup or playback of an audio signal for a particular speaker setup based on scene location data.
  • Correspondingly, the device 10 may be adapted to determine the scene position data of a loudspeaker setup in the listening room 12 (e.g. via an acoustic measurement), so that this loudspeaker setup can be emulated with the device 20.
  • Although some aspects have been described in the context of a device, it will be understood that these aspects also constitute a description of the corresponding method, so that a block or a component of a device is also to be understood as a corresponding method step or as a feature of a method step. Similarly, aspects described in connection with or as a method step also represent a description of a corresponding block or detail or feature of a corresponding device.
  • Some or all of the method steps may be performed by a hardware apparatus (or using a hardware apparatus), such as a microprocessor, a programmable computer or an electronic circuit. In some embodiments, some or more of the most important method steps may be performed by such an apparatus.
  • A signal coded according to the invention, such as an audio signal or a video signal or a transport stream signal, may be stored on a digital storage medium or may be transmitted on a transmission medium such as a wireless transmission medium or a wired transmission medium, e.g. the Internet.
  • the encoded audio signal of the present invention may be stored on a digital storage medium, or may be transmitted on a transmission medium such as a wireless transmission medium or a wired transmission medium such as the Internet.
  • embodiments of the invention may be implemented in hardware or in software.
  • The implementation may be performed using a digital storage medium, such as a floppy disk, a DVD, a Blu-ray Disc, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, a hard disk or another magnetic or optical memory, on which electronically readable control signals are stored that can cooperate, or do cooperate, with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium can be computer-readable.
  • some embodiments according to the invention include a data carrier having electronically readable control signals capable of interacting with a programmable computer system such that one of the methods described herein is performed.
  • embodiments of the present invention may be implemented as a computer program product having a program code, wherein the program code is operable to perform one of the methods when the computer program product runs on a computer.
  • the program code can also be stored, for example, on a machine-readable carrier.
  • Further embodiments of the invention include the computer program for performing any of the methods described herein, wherein the computer program is stored on a machine-readable carrier.
  • an embodiment of the method according to the invention is thus a computer program which has a program code for performing one of the methods described herein when the computer program runs on a computer.
  • a further embodiment of the inventive method is thus a data carrier (or a digital storage medium or a computer-readable medium) on which the computer program is recorded for carrying out one of the methods described herein.
  • a further embodiment of the method according to the invention is thus a data stream or a sequence of signals, which represent the computer program for performing one of the methods described herein.
  • the data stream or the sequence of signals may be configured, for example, to be transferred via a data communication connection, for example via the Internet.
  • Another embodiment includes a processing device, such as a computer or a programmable logic device, that is configured or adapted to perform one of the methods described herein.
  • Another embodiment includes a computer on which the computer program is installed to perform one of the methods described herein.
  • Another embodiment according to the invention comprises a device or system adapted to transmit a computer program for performing at least one of the methods described herein to a receiver.
  • the transmission can be done for example electronically or optically.
  • the receiver may be, for example, a computer, a mobile device, a storage device or a similar device.
  • the device or system may include a file server for transmitting the computer program to the recipient.
  • In some embodiments, a programmable logic device (for example a field-programmable gate array, FPGA) may be used to perform some or all of the functionalities of the methods described herein.
  • In some embodiments, a field-programmable gate array may cooperate with a microprocessor to perform one of the methods described herein.
  • Generally, the methods are preferably performed by any hardware apparatus. This may be universal hardware such as a computer processor (CPU), or hardware specific to the method, such as an ASIC.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Stereophonic System (AREA)
  • Soundproofing, Sound Blocking, And Sound Damping (AREA)

Claims (15)

  1. Device (10) for determining acoustically optimized transfer functions (TF) for a listening room (12), which are derived for the listening room (12) and which serve for the acoustically optimized post-processing of audio signals (24) during spatial reproduction, wherein the spatial reproduction of the audio signals (24) is emulated by means of a binaural near-field sound transducer (22) with the aid of known outer-ear transfer functions (HRTF) and with the aid of the acoustically optimized transfer functions (TF),
    wherein a room to be synthesized can be emulated on the basis of the outer-ear transfer functions (HRTF) and wherein the listening room (12) can be emulated on the basis of the acoustically optimized transfer functions (TF),
    wherein the device (10) is configured to analyze an acoustics of the listening room (12) and to determine, starting from the analysis of the room acoustics, the acoustically optimized transfer functions (TF) for the listening room (12) in which the spatial reproduction by means of the binaural near-field sound transducer (22) is to take place,
    wherein the device (10) comprises a memory in which a plurality of groups of acoustically optimized transfer functions (TF) for a plurality of listening rooms (12) can be stored,
    characterized in that
    the acoustically optimized transfer functions (TF) comprise, per room, a plurality of transfer functions associated with individual solid angles, wherein each solid angle represents a direction of sound propagation in space.
  2. Device (10) according to claim 1, wherein the device (10) comprises a microphone (14) of a portable device for the acoustic measurement, and/or wherein the analysis of the room acoustics of the listening room (12) takes place by means of an acoustic measurement in the listening room (12) with the aid of ambient noise and/or with the aid of a test signal.
  3. Device (10) according to claim 1, wherein the room-acoustic analysis of the listening room (12) is based on the calculation of a geometric model of the listening room (12) and/or on the modeling of the geometric model on the basis of a camera-based model of the listening room (12).
  4. Device (10) according to claim 2 or 3, wherein the acoustically optimized transfer functions (TF) are selected such that a room acoustics of the listening room (12) can be emulated on the basis thereof.
  5. Device (10) according to one of claims 1 to 4, wherein the device (10) is configured to determine the acoustically optimized transfer functions (TF) taking into account a virtual loudspeaker setup according to which a number of virtual loudspeakers are positioned in the listening room (12).
  6. Device (10) according to one of claims 1 to 5, wherein the known outer-ear transfer functions (HRTF) comprise a plurality of individual transfer functions (TF) for the left ear and the right ear associated with direction-related vectors for a plurality of virtual sound sources.
  7. Device (10) according to one of claims 1 to 6, wherein the emulation of the spatial reproduction is based on interaural characteristics, balance characteristics and distance characteristics,
    wherein the interaural characteristics comprise a relationship between a direction of incidence in the median plane and an individual or non-individual filtering of the outer ear, wherein the balance characteristics comprise a relationship between a lateral direction of incidence and a loudness difference and/or a relationship between the lateral direction of incidence and a time difference, and wherein the distance characteristics comprise a relationship between a virtual distance and a frequency-dependent filtering and/or a relationship between the virtual distance and an initial time delay gap and/or a relationship between the virtual distance and a reflection behavior.
  8. Device (10) according to one of claims 1 to 7, wherein the binaural near-field sound transducer (22) is a headphone configured to output, as audio signal (24), a multi-channel stereo signal, an object-based audio signal (24) and/or an audio signal (24) based on a wave field synthesis algorithm.
  9. Method (100) for determining acoustically optimized transfer functions (TF) for a listening room (12), which are derived for the listening room (12) and which can serve for the acoustically optimized post-processing of audio signals (24) during spatial reproduction, wherein the spatial reproduction of the audio signals (24) is emulated by means of a binaural near-field sound transducer (22) with the aid of known outer-ear transfer functions (HRTF) and with the aid of the acoustically optimized transfer functions (TF), wherein a room to be synthesized can be emulated on the basis of the outer-ear transfer functions (HRTF) and wherein the listening room (12) can be emulated on the basis of the acoustically optimized transfer functions (TF),
    comprising the steps of:
    analyzing (120) a prevailing room acoustics of the listening room (12); and
    determining (110) the acoustically optimized transfer functions (TF) for the listening room (12) in which the spatial reproduction by means of the binaural near-field sound transducer (22) is to take place, on the basis of the analysis of the room acoustics;
    storing in a memory a plurality of groups of acoustically optimized transfer functions (TF) for a plurality of listening rooms (12),
    characterized in that
    the acoustically optimized transfer functions (TF) comprise, per room, a plurality of transfer functions associated with individual solid angles, wherein each solid angle represents a direction of sound propagation in space.
  10. Device (20) for the spatial reproduction of an audio signal (24) by means of a binaural near-field sound transducer (22), wherein the spatial reproduction is emulated with the aid of known outer-ear transfer functions (HRTF) and with the aid of acoustically optimized transfer functions (TF) for a listening room (12),
    wherein a room to be synthesized can be emulated on the basis of the outer-ear transfer functions (HRTF) and wherein the listening room (12) can be emulated on the basis of the acoustically optimized transfer functions (TF),
    wherein the acoustically optimized transfer functions (TF) are determined beforehand for the respective listening room (12); wherein the device (20) comprises a first memory in which a first plurality of groups of transfer functions (TF) for different listening rooms (12) is stored, and a position determination unit,
    wherein the position determination unit is configured to determine the position and to determine the listening room (12) on the basis of the determined position; and
    wherein the device (20) is configured to select, for the emulation of the spatial reproduction, the corresponding transfer functions (TF) for the respective listening room (12) from the groups of transfer functions,
    characterized in that
    the acoustically optimized transfer functions (TF) for the listening room comprise, per room, a plurality of transfer functions associated with individual solid angles, wherein each solid angle represents a direction of sound propagation in space.
  11. Device (20) according to claim 10, wherein the device (20) comprises a second memory in which a second plurality of groups of transfer functions (TF) for different orientations is stored, and an orientation determination unit,
    wherein the orientation determination unit is configured to determine an orientation in the listening room (12), and
    wherein the device (20) is configured to select, for the emulation of the spatial reproduction, the corresponding transfer functions (TF) for the respective orientation from the groups of transfer functions; and/or
    wherein the device (20) comprises a third memory in which a third plurality of groups of transfer functions (TF) for different positions in the listening room (12) is stored, and a further position determination unit,
    wherein the further position determination unit is configured to determine a position in the listening room (12), and
    wherein the device (20) is configured to select, for the emulation of the spatial reproduction, the corresponding transfer functions (TF) for the respective position in the listening room (12) from the groups of transfer functions; and/or
    wherein the position determination unit is configured to determine the positions anew during the reproduction, and wherein the device (20) is configured to update the acoustically optimized transfer functions (TF) on the basis of the updated position.
  12. Method (200) for the spatial reproduction of an audio signal (24) by means of a binaural near-field sound transducer (22), comprising the steps of:
    post-processing (210) the audio signal (24) with the aid of known outer-ear transfer functions (HRTF) and with the aid of acoustically optimized transfer functions (TF) for a listening room (12), which are determined beforehand for the listening room (12) in which the reproduction by means of the binaural near-field sound transducer (22) is to take place, wherein a room to be synthesized can be emulated on the basis of the outer-ear transfer functions (HRTF), and wherein the listening room (12) can be emulated on the basis of the acoustically optimized transfer functions (TF) of the listening room;
    storing a first plurality of groups of transfer functions (TF) for different listening rooms (12) in a first memory;
    determining a position; and
    determining the listening room (12) with the aid of the position,
    wherein the device (20) is configured to select, for the emulation of the spatial reproduction, the corresponding transfer functions (TF) for the respective listening room (12) from the groups of transfer functions,
    characterized in that
    the acoustically optimized transfer functions (TF) comprise, per room, a plurality of transfer functions associated with individual solid angles, wherein each solid angle represents a direction of sound propagation in space.
  13. Method (200) according to claim 12, wherein, before the reproduction, a combination (220) of the outer-ear transfer functions (HRTF) and the acoustically optimized transfer functions (TF) into a room-based spatial impulse response (BRIR') takes place.
  14. System (10 + 20) comprising:
    a device (10) according to one of claims 1 to 8; and
    a device (20) according to one of claims 10 to 11.
  15. Computer program with a program code which causes the method (100; 200) according to claim 9 or 12 to be carried out when the program is executed on a computer, a central processing unit (CPU) or a mobile terminal.
EP15724972.3A 2014-05-28 2015-05-15 Détermination et utilisation de fonctions de transfert acoustiquement optimisées Active EP3149969B1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102014210215.4A DE102014210215A1 (de) 2014-05-28 2014-05-28 Ermittlung und Nutzung hörraumoptimierter Übertragungsfunktionen
PCT/EP2015/060792 WO2015180973A1 (fr) 2014-05-28 2015-05-15 Détermination et utilisation de fonctions de transfert acoustiquement optimisées

Publications (2)

Publication Number Publication Date
EP3149969A1 EP3149969A1 (fr) 2017-04-05
EP3149969B1 true EP3149969B1 (fr) 2019-09-18

Family

ID=53268781

Family Applications (1)

Application Number Title Priority Date Filing Date
EP15724972.3A Active EP3149969B1 (fr) 2014-05-28 2015-05-15 Détermination et utilisation de fonctions de transfert acoustiquement optimisées

Country Status (7)

Country Link
US (1) US10003906B2 (fr)
EP (1) EP3149969B1 (fr)
JP (1) JP6446068B2 (fr)
KR (1) KR102008771B1 (fr)
CN (1) CN106576203B (fr)
DE (1) DE102014210215A1 (fr)
WO (1) WO2015180973A1 (fr)

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2546504B (en) 2016-01-19 2020-03-25 Facebook Inc Audio system and method
US9591427B1 (en) * 2016-02-20 2017-03-07 Philip Scott Lyren Capturing audio impulse responses of a person with a smartphone
US10187740B2 (en) * 2016-09-23 2019-01-22 Apple Inc. Producing headphone driver signals in a digital audio signal processing binaural rendering environment
US11197119B1 (en) 2017-05-31 2021-12-07 Apple Inc. Acoustically effective room volume
CN109286889A (zh) * 2017-07-21 2019-01-29 华为技术有限公司 一种音频处理方法及装置、终端设备
EP3454578B1 (fr) * 2017-09-06 2020-11-04 Sennheiser Communications A/S Système de communication pour communication de signaux audio entre une pluralité de dispositifs de communication dans un environnement sonore virtuel
US10390171B2 (en) * 2018-01-07 2019-08-20 Creative Technology Ltd Method for generating customized spatial audio with head tracking
US10764703B2 (en) 2018-03-28 2020-09-01 Sony Corporation Acoustic metamaterial device, method and computer program
EP3547305B1 (fr) * 2018-03-28 2023-06-14 Fundació Eurecat Technique de réverbération pour audio 3d
US11617050B2 (en) 2018-04-04 2023-03-28 Bose Corporation Systems and methods for sound source virtualization
US10966046B2 (en) * 2018-12-07 2021-03-30 Creative Technology Ltd Spatial repositioning of multiple audio streams
US11418903B2 (en) 2018-12-07 2022-08-16 Creative Technology Ltd Spatial repositioning of multiple audio streams
US11113092B2 (en) 2019-02-08 2021-09-07 Sony Corporation Global HRTF repository
KR20210138006A (ko) * 2019-03-19 2021-11-18 소니그룹주식회사 음향 처리 장치, 음향 처리 방법, 및 음향 처리 프로그램
US11451907B2 (en) 2019-05-29 2022-09-20 Sony Corporation Techniques combining plural head-related transfer function (HRTF) spheres to place audio objects
US11347832B2 (en) 2019-06-13 2022-05-31 Sony Corporation Head related transfer function (HRTF) as biometric authentication
US11146908B2 (en) 2019-10-24 2021-10-12 Sony Corporation Generating personalized end user head-related transfer function (HRTF) from generic HRTF
US11330371B2 (en) 2019-11-07 2022-05-10 Sony Group Corporation Audio control based on room correction and head related transfer function
US11070930B2 (en) * 2019-11-12 2021-07-20 Sony Corporation Generating personalized end user room-related transfer function (RRTF)
US20230007430A1 (en) * 2019-11-29 2023-01-05 Sony Group Corporation Signal processing device, signal processing method, and program
CN111031467A (zh) * 2019-12-27 2020-04-17 中航华东光电(上海)有限公司 一种hrir前后方位增强方法
CN111372167B (zh) * 2020-02-24 2021-10-26 Oppo广东移动通信有限公司 音效优化方法及装置、电子设备、存储介质
JP7463796B2 (ja) * 2020-03-25 2024-04-09 ヤマハ株式会社 デバイスシステム、音質制御方法および音質制御プログラム
US11356795B2 (en) 2020-06-17 2022-06-07 Bose Corporation Spatialized audio relative to a peripheral device
EP3945729A1 (fr) * 2020-07-31 2022-02-02 FRAUNHOFER-GESELLSCHAFT zur Förderung der angewandten Forschung e.V. Système et procédé d'égalisation de casque d'écoute et d'adaptation spatiale pour la représentation binaurale en réalité augmentée
US11982738B2 (en) 2020-09-16 2024-05-14 Bose Corporation Methods and systems for determining position and orientation of a device using acoustic beacons
US11523243B2 (en) * 2020-09-25 2022-12-06 Apple Inc. Systems, methods, and graphical user interfaces for using spatialized audio during communication sessions
US11696084B2 (en) 2020-10-30 2023-07-04 Bose Corporation Systems and methods for providing augmented audio
US11700497B2 (en) 2020-10-30 2023-07-11 Bose Corporation Systems and methods for providing augmented audio
CN112584277B (zh) * 2020-12-08 2022-04-22 北京声加科技有限公司 一种室内音频均衡的方法
WO2022231338A1 (fr) * 2021-04-28 2022-11-03 Samsung Electronics Co., Ltd. Dispositif récepteur portable pour étalonnage de configurations de dispositif à l'aide d'une géométrie de pièce, et système le comprenant
KR102652559B1 (ko) * 2021-11-24 2024-04-01 주식회사 디지소닉 음향실 및 이를 이용한 brir 획득 방법

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10249416B4 (de) * 2002-10-23 2009-07-30 Siemens Audiologische Technik Gmbh Verfahren zum Einstellen und zum Betrieb eines Hörhilfegerätes sowie Hörhilfegerät
JP2005223713A (ja) * 2004-02-06 2005-08-18 Sony Corp 音響再生装置、音響再生方法
GB0419346D0 (en) * 2004-09-01 2004-09-29 Smyth Stephen M F Method and apparatus for improved headphone virtualisation
US20080273708A1 (en) * 2007-05-03 2008-11-06 Telefonaktiebolaget L M Ericsson (Publ) Early Reflection Method for Enhanced Externalization
CN102440003B (zh) * 2008-10-20 2016-01-27 吉诺迪奥公司 音频空间化和环境仿真
TR201815799T4 (tr) * 2011-01-05 2018-11-21 Anheuser Busch Inbev Sa Bir audio sistemi ve onun operasyonunun yöntemi.
WO2012168765A1 (fr) * 2011-06-09 2012-12-13 Sony Ericsson Mobile Communications Ab Réduction du volume des données des fonctions de transfert relatives à la tête
EP2766901B1 (fr) * 2011-10-17 2016-09-21 Nuance Communications, Inc. Amélioration de signal de paroles à l'aide d'informations visuelles

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Also Published As

Publication number Publication date
CN106576203B (zh) 2020-02-07
KR20170013931A (ko) 2017-02-07
KR102008771B1 (ko) 2019-08-09
DE102014210215A1 (de) 2015-12-03
US20170078820A1 (en) 2017-03-16
US10003906B2 (en) 2018-06-19
JP6446068B2 (ja) 2018-12-26
WO2015180973A1 (fr) 2015-12-03
EP3149969A1 (fr) 2017-04-05
CN106576203A (zh) 2017-04-19
JP2017522771A (ja) 2017-08-10

Similar Documents

Publication Publication Date Title
EP3149969B1 (fr) Détermination et utilisation de fonctions de transfert acoustiquement optimisées
DE60304358T2 (de) Verfahren zur verarbeitung von audiodateien und erfassungsvorrichtung zur anwendung davon
CN105684467B (zh) 使用元数据处理的耳机的双耳呈现
CN106797525A (zh) 用于生成和回放音频信号的方法和设备
CN111294724B (zh) 多个音频流的空间重新定位
US11317233B2 (en) Acoustic program, acoustic device, and acoustic system
KR20200047414A (ko) System and method for modifying room characteristics for spatial audio rendering via a headset
CN112005559B (zh) Method for improving the localization of surround sound
EP3225039B1 (fr) System and method for producing head-externalized three-dimensional (3D) audio via headphones
DE102019107302A1 (de) Method for generating and reproducing a binaural recording
EP3044972A2 (fr) Device and method for decorrelating loudspeaker signals
DE102021103210A1 (de) Surround sound reproduction based on room acoustics
DE112021003592T5 (de) Information processing device, output control method, and program
US20200059750A1 (en) Sound spatialization method
US20230104111A1 (en) Determining a virtual listening environment
DE102011003450A1 (de) Generation of user-adapted signal processing parameters
Brandenburg et al. Auditory illusion through headphones: History, challenges and new solutions
DE112021001695T5 (de) Sound processing device, sound processing method, and sound processing program
US20190394583A1 (en) Method of audio reproduction in a hearing device and hearing device
EP2503799B1 (fr) Method and system for computing HRTFs by virtual local sound field synthesis
Aspöck et al. Dynamic real-time auralization for experiments on the perception of hearing impaired subjects
CN118301536A (zh) Virtual surround processing method and apparatus for audio, electronic device, and storage medium
Urbanietz Advances in binaural technology for dynamic virtual environments
CN117793609A (zh) Sound field rendering method and apparatus
Fernandes Spatial Effects: Binaural Simulation of Sound Source Motion

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20161123

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20180201

REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1236308

Country of ref document: HK

RIN1 Information on inventor provided before grant (corrected)

Inventor name: BRANDENBURG, KARLHEINZ

Inventor name: WERNER, STEPHAN

Inventor name: SLADECZEK, CHRISTOPH

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20190401

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 502015010407

Country of ref document: DE

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1182731

Country of ref document: AT

Kind code of ref document: T

Effective date: 20191015

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: GERMAN

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20190918

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190918

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191218

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190918

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190918

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191218

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190918

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190918

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190918

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191219

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190918

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190918

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190918

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200120

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190918

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190918

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190918

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190918

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190918

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190918

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200224

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190918

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 502015010407

Country of ref document: DE

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG2D Information on lapse in contracting state deleted

Ref country code: IS

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190918

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200119

26N No opposition filed

Effective date: 20200619

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190918

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200531

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190918

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200531

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20200531

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200515

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200515

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200531

REG Reference to a national code

Ref country code: AT

Ref legal event code: MM01

Ref document number: 1182731

Country of ref document: AT

Kind code of ref document: T

Effective date: 20200515

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200515

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190918

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190918

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190918

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190918

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230524

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230517

Year of fee payment: 9

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20240522

Year of fee payment: 10

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20240517

Year of fee payment: 10