EP3228096A1 - Audioanschluss - Google Patents

Audioanschluss (Audio terminal)

Info

Publication number
EP3228096A1
EP3228096A1 (application EP14777648.8A)
Authority
EP
European Patent Office
Prior art keywords
audio
channel
terminal
speaker
audio data
Prior art date
Legal status
Granted
Application number
EP14777648.8A
Other languages
English (en)
French (fr)
Other versions
EP3228096B1 (de)
Inventor
Detlef Wiese
Lars IMMISCH
Hauke Krüger
Current Assignee
Binauric Se
Original Assignee
Binauric Se
Priority date
Filing date
Publication date
Application filed by Binauric Se
Publication of EP3228096A1
Application granted
Publication of EP3228096B1
Legal status: Active
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S3/00 Systems employing more than two channels, e.g. quadraphonic
    • H04S3/002 Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2420/00 Details of connection covered by H04R, not provided for in its groups
    • H04R2420/07 Applications of wireless loudspeakers or wireless microphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00 Stereophonic arrangements
    • H04R5/04 Circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers, for loudspeaker detection, or for adaptation of settings to personal preferences or hearing impairments
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/15 Aspects of sound capture and related signal processing for recording or reproduction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/01 Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S3/00 Systems employing more than two channels, e.g. quadraphonic
    • H04S3/002 Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
    • H04S3/004 For headphones

Definitions

  • the present invention generally relates to the field of audio data processing. More particularly, the present invention relates to an audio terminal.
  • Everybody uses a telephone - either using a wired telephone connected to the well-known PSTN (Public Switched Telephone Network) via cable or a modern mobile phone, such as a smartphone, which is connected to the world via wireless connections based on, e.g., UMTS (Universal Mobile Telecommunications System).
  • PSTN Public Switched Telephone Network
  • UMTS Universal Mobile Telecommunications System
  • speech signals cover a frequency bandwidth between 50 Hz and 7 kHz (so-called "wideband speech") and even more, for instance, a frequency bandwidth between 50 Hz and 14 kHz (so-called "super-wideband speech") (see 3GPP TS 26.290, "Audio codec processing functions; Extended Adaptive Multi-Rate - Wideband (AMR-WB+) codec; Transcoding functions", 3GPP Technical Specification Group Services and System Aspects, 2005) or an even higher frequency bandwidth (e.g., "full band speech").
  • AMR-WB+ Extended Adaptive Multi-Rate - Wideband
  • Audio-3D - also denoted as binaural communication - is expected by the present inventors to be the next emerging technology in communication.
  • the benefit of Audio-3D in comparison to conventional (HD-)Voice communication lies in the use of a binaural instead of a monaural audio signal. Audio contents will be captured and played back by novel binaural terminals involving two microphones and two speakers, yielding an acoustical reproduction that better resembles what the remote communication partner really hears.
  • binaural telephony is "listening to the audio ambience with the ears of the remote speaker", wherein the pure content of the recorded speech is extended by the capturing of the acoustical ambience.
  • the virtual representation of room acoustics in binaural signals is, preferably, based on differences in the time of arrival of the signals reaching the left and the right ear as well as attenuation and filtering effects caused by the human head, the body and the ears, allowing the localization of sources also in the vertical direction.
  • Audio-3D is expected to represent the first radical change of the form of audio communication that has been known for more than 100 years, which society has named the telephone or phoning. It particularly targets a new mobile type of communication which may be called "audio portation".
  • everybody being equipped with a future binaural terminal equipment as well as a smartphone app to handle the communication will be able to effectively capture the acoustical environment, i.e., the acoustical events of real life, preferably, as they are perceived with the two ears of the user, and provide them as captured, like a listening picture, to another user, anywhere in the world.
  • the present invention has been made in view of the above situation and considerations and embodiments of the present invention aim at providing technology that may be used in various Audio-3D usage scenarios.
  • the term "binaural" or "binaurally" is not used in as strict a sense as in some publications, where only audio signals captured with an artificial head (also called "Kunstkopf") are considered truly binaural. Rather, the term is used here for any audio signals that, compared to a conventional stereo signal, more closely resemble the acoustical ambience as it would be perceived by a real human. Such audio signals may be captured, for instance, by the audio terminals described in more detail in sections 3 to 9 below.
  • an audio terminal comprising: at least a first and a second microphone for capturing multi-channel audio data comprising at least a first and a second audio channel,
  • a communication unit for voice and/or data communication and/or a recording unit for recording the captured multi-channel audio data, and, optionally,
  • At least a first speaker for playing back audio data comprising at least a first audio channel
  • the first and the second microphone are provided in a first device and the communication unit is provided in a second device which is separate from the first device, wherein the first and the second device are adapted to be connected with each other via a local wireless transmission link, wherein the first device is adapted to stream the multichannel audio data to the second device via the local wireless transmission link and the second device is adapted to receive and process and/or store the multi-channel audio data streamed from the first device.
  • the local wireless transmission link is a transmission link complying with the Bluetooth standard.
  • the multi-channel audio data are streamed using the Bluetooth Serial Port Profile (SPP) or the iPod Accessory Protocol (iAP).
  • SPP Bluetooth Serial Port Profile
  • iAP iPod Accessory Protocol
  • the first device is adapted to stream samples from the first audio channel and synchronous samples from the second audio channel in a same data packet via the local wireless transmission link.
  • An audio terminal may comprise: at least a first and a second microphone for capturing multi-channel audio data comprising at least a first and a second audio channel, and/or
  • the audio terminal is adapted to generate or utilize metadata provided with the multi-channel audio data, wherein the metadata indicates that the multi-channel audio data is binaurally captured.
  • the audio terminal is adapted to generate or utilize metadata provided with the multi-channel audio data, wherein the metadata indicates one or more of: a setup of the first and the second microphone, a microphone use case, a microphone attenuation level, a beamforming processing profile, a signal processing profile, and an audio encoding format.
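The metadata items listed above can be pictured as a simple structured record accompanying the captured stream. The following sketch is purely illustrative: the field names and values are assumptions, not a format defined in this document.

```python
def make_capture_metadata():
    """Build an illustrative metadata record for binaurally captured audio.

    All field names and values are hypothetical examples of the kinds of
    information the text above says the metadata may indicate.
    """
    return {
        "binaural": True,                          # data was binaurally captured
        "microphone_setup": "opposite-sides-12.5cm",
        "microphone_use_case": "ambient-recording",
        "microphone_attenuation_db": 0,
        "beamforming_profile": "none",
        "signal_processing_profile": "nr+eq+agc",  # noise reduction, EQ, AGC
        "audio_encoding": "SBC",                   # encoding format of the stream
    }

meta = make_capture_metadata()
```

A receiving terminal could inspect such a record to decide, for instance, whether crosstalk cancellation or stereo widening is appropriate for playback.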
  • the first and the second microphone are provided in a first device and the communication unit is provided in a second device which is separate from the first device, wherein the audio terminal allows over-the-air flash updates and device control of the first device from the second device.
  • the audio terminal further comprises:
  • At least a second speaker for playing back audio data comprising at least the first or a second audio channel
  • the first and the second speaker are provided in different devices, wherein the devices are connectable for together providing stereo playback or double mono playback, and/or wherein the audio terminal is adapted to generate or utilize metadata provided with the multi-channel audio data, wherein the metadata indicates a position of the first and the second speaker relative to each other.
  • the audio terminal is further adapted for performing a crosstalk cancellation between the first and the second speaker in order to achieve a binaural sound reproduction.
  • the audio terminal is adapted to provide instructions to a user how to place the first and the second speaker in relation to each other. It is also preferred that the audio terminal is adapted to detect a position of the first and the second speaker relative to each other and adapt coefficients of a pre-processing filter for pre-processing the audio channels to be reproduced by the first and the second speaker to create a binaural sound reproduction.
  • the audio terminal further comprises an image capturing unit for capturing a still or moving picture, wherein the audio terminal is adapted to provide an information associating the captured still or moving picture with the captured multi-channel audio data.
  • the audio terminal further comprises a text inputting unit for inputting text, wherein the audio terminal is adapted to provide an information associating the inputted text with the captured multi-channel audio data. It is preferred that the audio terminal is adapted to stream, preferably, by means of the communication unit, the multi-channel audio data via a transmission link, preferably, a dial-in or IP transmission link, supporting at least a first and a second audio channel, such that a remote user is able to listen to the multi-channel audio data.
  • the first and the second microphone and the first speaker are provided in a headset or an in-ear phone.
  • an audio system for providing a communication between at least two remote locations, comprising a first and a second audio terminal according to claim 16, wherein each of the first and the second audio terminal further comprises at least a second speaker for playing back audio data comprising at least the first or a second audio channel, wherein the first and the second audio terminal are adapted, preferably, by means of the communication unit, to be connected with each other via a transmission link, preferably, a dial-in or IP transmission link, supporting at least a first and a second audio channel, wherein the first audio terminal is adapted to stream the multi-channel audio data to the second audio terminal via the dial-in or IP transmission link and the second audio terminal is adapted to receive the multi-channel audio data streamed from the first audio terminal and play it back by means of the first and the second speaker, and vice versa.
  • the audio system further comprises one or more headsets, each comprising at least a first and a second speaker, wherein the second audio terminal and the one or more headsets are adapted to be connected with each other via a wireless or wired transmission link supporting at least a first and a second audio channel, wherein the second audio terminal is adapted to stream the multi-channel audio data streamed from the first audio terminal to the one or more headsets via the wireless or wired transmission link.
  • the audio system is adapted for providing a communication between at least three remote locations and further comprises a third audio terminal according to claim 16 and a conference bridge being connectable with the first, the second and the third audio terminal via a transmission link, preferably, a dial-in or IP transmission link, supporting at least a first and a second audio channel, respectively, wherein the conference bridge is adapted to mix the multi-channel audio data streamed from one or more of the first, the second and the third audio terminal to generate a multi-channel audio mix comprising at least a first and a second audio channel and to stream the multi-channel audio mix to the third audio terminal.
  • the conference bridge is adapted to monaurally mix the multi-channel audio data streamed from the first and the second audio terminal to the multi-channel audio data streamed from the third audio terminal to generate the multi-channel audio mix.
  • the conference bridge is further adapted to spatially position the monaurally mixed multi-channel audio data streamed from the first and the second audio terminal when generating the multi-channel audio mix.
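The spatial positioning step described above can be pictured as placing each monaurally mixed participant at a distinct direction in the stereo image. The sketch below uses crude interaural time and level differences (ITD/ILD) for a single source; a real bridge would more likely use HRTF filtering, and all parameter values here are illustrative assumptions.

```python
import math

def spatialize_mono(mono, azimuth_deg, fs=8000, max_itd_s=0.0007, ild_db=6.0):
    """Place a mono signal in the stereo image with simple ITD/ILD cues.

    A crude sketch of what the bridge's spatial positioning step might do.
    fs, max_itd_s and ild_db are illustrative assumptions.
    """
    frac = math.sin(math.radians(azimuth_deg))      # -1 (left) .. +1 (right)
    delay = int(round(abs(frac) * max_itd_s * fs))  # interaural delay in samples
    gain = 10.0 ** (-abs(frac) * ild_db / 20.0)     # level drop at the far ear
    delayed = [0.0] * delay + list(mono)            # far-ear channel, delayed
    padded = list(mono) + [0.0] * delay             # near-ear channel, padded
    if frac >= 0:   # source to the right: left ear is the far ear
        left, right = [s * gain for s in delayed], padded
    else:           # source to the left: right ear is the far ear
        left, right = padded, [s * gain for s in delayed]
    return left, right

left, right = spatialize_mono([1.0, 0.5, 0.25], azimuth_deg=90)
```

Mixing several sources positioned at different azimuths then yields a multi-channel audio mix in which participants can be told apart by direction.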
  • the audio system is adapted for providing a communication between at least three remote locations and further comprises a telephone comprising a microphone and a speaker and a conference bridge being connectable with the first and the second audio terminal via a transmission link, preferably, a dial-in or IP transmission link, supporting at least a first and a second audio channel, respectively, and the telephone, wherein the conference bridge is adapted to mix the multi-channel audio data streamed from the first and the second audio terminal into a single-channel audio mix comprising a single audio channel and to stream the single-channel audio mix to the telephone.
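The single-channel mix for a conventional telephone endpoint can be sketched as a plain average of the two binaural channels. This is a minimal illustration only; a real bridge would additionally band-limit and resample the result to the narrowband telephone format.

```python
def downmix_to_mono(left, right):
    """Average the two binaural channels into one telephone channel.

    A minimal sketch of the bridge's single-channel audio mix; averaging
    avoids clipping that a plain sum of the two channels could cause.
    """
    return [(l + r) / 2.0 for l, r in zip(left, right)]

mono = downmix_to_mono([0.2, 0.4], [0.0, 0.4])
```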
  • a preferred embodiment of the audio terminal can also be any combination of the dependent claims or above embodiments with the respective independent claim.
  • Fig. 1 shows schematically and exemplarily a basic configuration of an audio terminal that may be used for Audio-3D
  • Fig. 2 shows schematically and exemplarily a possible usage scenario for Audio-3D, here "Audio Portation",
  • FIG. 3 shows schematically and exemplarily a possible usage scenario for Audio-3D, here "Sharing Audio Snapshots",
  • Fig. 4 shows schematically and exemplarily a possible usage scenario for Audio-3D, here "Attending a Conference from Remote",
  • FIG. 5 shows schematically and exemplarily a possible usage scenario for Audio-3D, here "Multiple User Binaural Teleconference” ,
  • Fig. 6 shows schematically and exemplarily a possible usage scenario for Audio-3D, here "Binaural Conference with Multiple Endpoints",
  • Fig. 7 shows schematically and exemplarily a possible usage scenario for Audio-3D, here "Binaural Conference with Conventional Telephone Endpoints",
  • Fig. 8 shows an example of an artificial head equipped with a prototype headset for Audio-3D
  • Fig. 9 shows schematically and exemplarily a signal processing chain in an Audio-3D terminal device, here a headset,
  • Fig. 10 shows schematically and exemplarily a signal processing chain in another Audio-3D terminal device, here a speakerbox,
  • Fig. 11 shows schematically and exemplarily a typical functionality of an Audio-3D conference bridge, based on an exemplary setup composed of three participants,
  • Fig. 12 shows schematically and exemplarily a conversion of monaural, narrowband signals to Audio-3D signals in the Audio-3D conference bridge shown in Fig. 11,
  • Fig. 13 shows schematically and exemplarily a conversion of Audio-3D signals to monaural, narrowband signals in the Audio-3D conference bridge shown in Fig. 11.
  • a basic configuration of an audio terminal 100 that may be used for Audio-3D is schematically and exemplarily shown in Fig. 1.
  • the audio terminal 100 comprises a first device 10 and a second device 20 which is separate from the first device 10.
  • In the first device 10, there are provided a first and a second microphone 11, 12 for capturing multi-channel audio data comprising a first and a second audio channel.
  • In the second device 20, there is provided a communication unit 21 for, here, voice and data communication.
  • the first and the second device 10, 20 are adapted to be connected with each other via a local wireless transmission link 30.
  • the first device 10 is adapted to stream the multi-channel audio data, i.e., the data comprising the first and the second audio channel, to the second device 20 via the local wireless transmission link 30 and the second device 20 is adapted to receive and process and/or store the multi-channel audio data streamed from the first device 10.
  • the first device 10 is an external speaker/microphone apparatus as described in detail in the unpublished International patent application PCT/EP2013/067534, filed on 23 August 2013, the contents of which are herewith incorporated in their entirety.
  • it comprises a housing 17 that is formed in the shape of a (regular) icosahedron, i.e., a polyhedron with 20 triangular faces.
  • Such an external speaker/microphone apparatus, in this specification also designated as a "speakerbox", is marketed by the company Binauric SE under the name "BoomBoom".
  • the first and the second microphone 11, 12 are arranged at opposite sides of the housing 17, at a distance of, for example, about 12.5 cm.
  • the multi-channel audio data captured by the two microphones 11, 12 can more closely resemble the acoustical ambience as it would be perceived by a real human (compared to a conventional stereo signal).
  • the audio terminal 100, here, in particular, the first device 10, further comprises a first and a second speaker 15, 16 for playing back multi-channel audio data comprising at least a first and a second audio channel.
  • the audio terminal 100 is adapted to stream the multi-channel audio data from the second device 20 to the first device 10 via a local wireless transmission link, for instance, a transmission link complying with the Bluetooth standard, preferably, the current Bluetooth Core Specification 4.1.
  • the second device 20, here, is a smartphone, such as an Apple iPhone or a Samsung Galaxy.
  • the data communication unit 21 supports voice and data communication via one or more mobile communication standards, such as GSM (Global System for Mobile Communication), UMTS (Universal Mobile Telecommunications System) or LTE (Long-Term Evolution). Additionally, it may support one or more further network technologies, such as WLAN (Wireless LAN).
  • GSM Global System for Mobile Communication
  • UMTS Universal Mobile Telecommunications System
  • LTE Long-Term Evolution
  • WLAN Wireless LAN
  • the third and the fourth microphone 13, 14 of each of the two speakerboxes may be used to locate the position of the speakerboxes for allowing True Wireless Stereo in combination with stereo crosstalk cancellation (see below for details).
  • Further options for using the third and the fourth microphone 13, 14 are to capture the acoustical ambience for reducing background noise with a noise cancelling algorithm (near speaker to far speaker), to measure the ambience volume level for automatically adjusting the playback level (loudness of music, voice prompts and the far speaker) to a convenient listening level, for example, a lower volume late at night in a bedroom or a louder playback in a noisy environment, and/or to detect the direction of sound sources (for example, a beamformer could focus on near speakers and attenuate unwanted sources more efficiently).
  • HFP Hands-Free Profile
  • the multi-channel audio data are streamed according to the present invention using the Bluetooth Serial Port Profile (SPP) or the iPod Accessory Protocol (iAP).
  • SPP defines how to set up virtual serial ports and connect two Bluetooth enabled devices. It is based on 3GPP TS 07.10, "Terminal Equipment to Mobile Station (TE-MS) multiplexer protocol", 3GPP Technical Specification Group Terminals, 1997 and the RFCOMM protocol. It basically emulates a serial cable to provide a simple substitute for existing RS-232, including the control signals known from that technology.
  • SPP is supported, for example, by Android based smartphones, such as a Samsung Galaxy.
  • one preferred solution is to transmit synchronized audio data from each of the first and the second channel together in the same packet, ensuring that the synchronization between the audio data is not lost during transmission.
  • samples from the first and the second audio channel may preferably be packed into one packet for each segment; hence, there is no chance of deviation.
  • the audio data of the first and the second audio channel are generated by the first and the second microphone 11, 12 on the basis of the same clock or a common clock reference in order to ensure a substantially zero sample rate deviation.
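The per-packet pairing of synchronous samples described above can be sketched as follows. The framing (a sequence number, a sample count, then interleaved big-endian 16-bit samples) is a hypothetical example; the document only requires that samples of both channels travel in the same packet so their alignment cannot drift.

```python
import struct

def pack_stereo_packet(seq, left, right):
    """Pack synchronous 16-bit samples of both channels into one packet.

    Hypothetical framing: u16 sequence number, u16 sample count per
    channel, then interleaved left/right samples, all big-endian.
    """
    assert len(left) == len(right)
    interleaved = [s for pair in zip(left, right) for s in pair]
    return struct.pack(">HH%dh" % len(interleaved), seq, len(left), *interleaved)

def unpack_stereo_packet(packet):
    """Recover the sequence number and the two sample lists."""
    seq, n = struct.unpack_from(">HH", packet)
    samples = struct.unpack_from(">%dh" % (2 * n), packet, 4)
    return seq, list(samples[0::2]), list(samples[1::2])

pkt = pack_stereo_packet(7, [100, -100], [200, -200])
seq, l, r = unpack_stereo_packet(pkt)
```

Because each packet carries matching segments of both channels, a lost packet drops the same time span from both channels and inter-channel synchronization is preserved.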
  • the first device 10 is an external speaker/microphone apparatus, which comprises a housing 17 that is formed in the shape of a (regular) icosahedron.
  • the first device 10 may also be something else.
  • the housing may be formed in substantially a U-shape for being worn by a user on the shoulders around the neck, in this specification also designated as a "shoulderspeaker" (not shown in the figures).
  • at least a first and a second microphone for capturing multi-channel audio data comprising a first and a second audio channel may be provided at the sides of the "legs" of the U-shape, at a distance of, for example, about 20 cm.
  • the audio terminal 100 may comprise, in some scenarios, at least one additional second device (shown in a smaller size at the top of the figure), or, more generally, at least one further speaker for playing back audio data comprising at least a first audio channel, provided in a device that is separate from the first device 10.
  • the audio terminal 100 preferably allows over-the-air flash updates and device control of the first device 10 from the second device 20 (including updates for voice prompts used to notify status information and the like to a user) over a reliable Bluetooth protocol.
  • For an Android based smartphone, such as a Samsung Galaxy, a custom RFCOMM Bluetooth service will preferably be used.
  • For an iOS based device, such as the Apple iPhone, the External Accessory Framework is preferably utilized. It is foreseen that the first device 10 supports at most two simultaneous control connections, be it to an Android based device or an iOS based device. If both are already connected, further control connections will preferably be rejected.
  • the speakerbox, here, comprises a virtual machine (VM) application executing at least part of the operations, as well as one or more flash memories.
  • VM virtual machine
  • each message consists of a tag (16 bit, unsigned), followed by a length (16 bit, unsigned) and then the optional payload.
  • the length is always the size of the entire payload in bytes, including the TL header. All integer values are preferably big-endian.
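The tag/length/payload framing described above can be sketched directly. Per the description, the tag and length are 16-bit unsigned big-endian values and the length counts the entire message including the 4-byte tag/length (TL) header; the helper names below are illustrative.

```python
import struct

def pack_message(tag, payload=b""):
    """Serialize one control message: u16 tag, u16 length, payload.

    The length field counts the whole message including the 4-byte
    TL header, as stated above. All integers are big-endian.
    """
    return struct.pack(">HH", tag, 4 + len(payload)) + payload

def parse_message(buf):
    """Split one message off the front of a byte buffer."""
    tag, length = struct.unpack_from(">HH", buf)
    payload = buf[4:length]
    return tag, payload, buf[length:]  # remainder holds any following messages

msg = pack_message(129, b"\x03")       # e.g. a count-style response carrying 3
tag, payload, rest = parse_message(msg)
```

Because the length covers the header, a receiver can walk a stream of concatenated messages by repeatedly slicing off `length` bytes.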
  • the OTA control operations preferably start at "Hash-Request” and work on 8 Kbyte sectors.
  • the protocol is inspired by rsync: before transmitting flash updates, applications should compute the number of changed sectors by retrieving the hashes of all sectors, and then only transmit sectors that need updating. Flash updates go to a secondary flash memory which, only once confirmed to be correct, is used to update the primary flash.
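The rsync-style sector comparison above can be sketched on the application side: hash every 8 Kbyte sector of the new image, compare against the hashes retrieved from the device, and transmit only the differing sectors. The choice of SHA-1 is an assumption for illustration; the document does not name a hash algorithm.

```python
import hashlib

SECTOR_SIZE = 8 * 1024  # the OTA operations work on 8 Kbyte sectors

def sector_hashes(image, algo="sha1"):
    """Hash each 8 KB sector of a flash image (algorithm is an assumption)."""
    return [hashlib.new(algo, image[off:off + SECTOR_SIZE]).digest()
            for off in range(0, len(image), SECTOR_SIZE)]

def changed_sectors(new_image, device_hashes):
    """Indices of sectors whose hash differs and therefore need transmitting."""
    return [i for i, h in enumerate(sector_hashes(new_image))
            if i >= len(device_hashes) or h != device_hashes[i]]

old = bytes(SECTOR_SIZE) * 2                       # two all-zero sectors on device
new = bytes(SECTOR_SIZE) + b"\x01" * SECTOR_SIZE   # only the second sector changed
todo = changed_sectors(new, sector_hashes(old))
```

Only the sectors in `todo` would then be written to the secondary flash, which matches the stated goal of minimizing over-the-air transfer.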
  • COUNT_PAIRED_DEVICE_RESPONSE (129): Returns the number of devices in the paired device list.
  • Table 8 illustrates the response to the STEREO_PAIR_REQUEST, which has no parameters.
  • Table 17 illustrates the response to the EXIT_OTA_MODE_REQUEST.
  • Table 17 EXIT OTA MODE RESPONSE
  • the EXIT_OTA_COMPLETE_REQUEST will shut down the Bluetooth transport link, and kick the PIC to carry out the CSR8670 internal flash update operation. This message will only be acted upon if it follows from an EXIT_OTA_MODE_RESPONSE with SUCCESS "matching hash”.
  • Table 18 illustrates the HASH_REQUEST. It requests the hash values for a number of sectors. The requester should not request more sectors than can fit in a single response packet.
  • Table 20 illustrates the READ_REQUEST. It requests a read of the data from flash. Each sector will be read in small chunks so as not to exceed the maximum response packet size of 128.
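The chunked reading described above can be sketched as planning the offsets and sizes of the read responses for one sector. The 120-byte payload per chunk is an assumption that leaves room for a small response header within the 128-byte packet limit stated above.

```python
def read_chunks(sector_offset, sector_size=8192, max_payload=120):
    """Plan chunked reads of one 8 KB sector so no response exceeds 128 bytes.

    max_payload=120 is an assumed payload budget leaving 8 bytes of
    header headroom inside the 128-byte maximum response packet size.
    Returns a list of (absolute_offset, length) pairs.
    """
    chunks, off = [], 0
    while off < sector_size:
        n = min(max_payload, sector_size - off)
        chunks.append((sector_offset + off, n))
        off += n
    return chunks

plan = read_chunks(0)   # chunk layout for the first sector
```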
  • Table 25 illustrates the response to the WRITE_REQUEST.
  • the following Table 32 illustrates the BINAURAL_RECORD_AUDIO_RESPONSE.
  • This is an unsolicited packet that will be sent repeatedly from the speakerbox with new audio content (preferably, SBC encoded audio data from the binaural microphones), following a BINAURAL_RECORD_START_REQUEST. To stop the automatic sending of these packets, a BINAURAL_RECORD_STOP_REQUEST must be sent.
  • A possible scenario "Sharing Audio Snapshots" is shown schematically and exemplarily in Fig. 3.
  • a user is at a specific location and enjoys his stay there.
  • he/she makes a binaural recording using an Audio-3D headset which is connected to a smartphone, denoted as the "Audio-3D-Snapshot".
  • Once the snapshot is complete, the user also takes a photo of the location.
  • the binaural recording is tagged with the photo, the exact position, which is available in the smartphone, the date and time and possibly a specific comment to identify this moment in time later on. All this information is uploaded to a virtual place, such as a social media network, at which people can share Audio-3D-Snapshots.
  • the remote person hears not only the speech content which the speakers on the local side emit, but also additional information which is inherent to the binaural signal transmitted via the Audio-3D communication link.
  • This additional information may allow the remote speaker to better identify the location of the speakers within the conference room. This, in particular, may enable the remote speaker to link specific speech segments to different speakers and may significantly increase the intelligibility even in case that all speakers talk at the same time.
  • A possible scenario "Binaural Conference with Conventional Telephone Endpoints" is shown schematically and exemplarily in Fig. 7. This scenario is very similar to the scenario "Binaural Conference with Multiple Endpoints", explained in section 7.5 above. In this case, however, two participants at remote location O are connected to the binaural conference situation via a conventional telephone link using a telephone 505.
  • As already explained above, it is crucial for 3D audio perception that the binaural cues, i.e., the inherent characteristics defining the relation between the left and the right audio channel, are substantially preserved and transmitted in the complex signal processing chain of an end-to-end binaural communication. For this reason, Audio-3D requires new algorithm designs of partial functionalities such as acoustical echo compensation, noise reduction, signal compression and adaptive jitter control. Also, specific new classes of algorithms must be introduced, such as stereo crosstalk cancellation, which aims at achieving binaural audio playback in scenarios in which users do not use headphones. During the last years, parts of the required algorithms were developed and investigated in the context of binaural signal processing for hearing aids (see T.
  • the ITD cues influence the perception of the spatial location of acoustical events at low frequencies due to the time differences between the arrival of an acoustical wavefront at the left and the right human ear. Often, these cues are also denoted as phase differences between the two channels of the binaural signal.
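The ITD cues described above can be quantified with a standard spherical-head approximation. The use of Woodworth's formula and the head radius below are assumptions for illustration; the document itself gives no formula.

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head approximation of the interaural time difference.

    ITD ~= (r / c) * (sin(theta) + theta), with theta the source azimuth in
    radians, r an assumed head radius of 8.75 cm and c the speed of sound.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (math.sin(theta) + theta)

itd = itd_seconds(90)   # source directly to one side of the listener
```

For a source directly to one side this yields roughly 0.65 ms, which is the order of magnitude of the low-frequency phase differences the text refers to.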
  • Audio-3D should preferably be based on packet based transmission schemes, which requires technical solutions to deal with packet losses and delays.

8.2 Audio-3D terminal devices (headsets)
  • binaural signals should preferably be of a higher quality, since the binaural masking threshold level is known to be lower than the masking threshold for monaural signals (see B.C.J. Moore, "An Introduction to the Psychology of Hearing", Academic Press, 4th Edition, 1997).
  • a binaural signal transmitted from one location to the other should preferentially be of a higher quality compared to the signal transmitted in conventional monaural telephony. This implies that high-quality acoustical signal processing approaches should be realized as well as audio compression schemes (audio codec) which allow higher bit rate and therefore quality modes.
  • Audio-3D, in this example, is packet based and principally an interactive duplex application. Therefore, the end-to-end delay should preferably be as low as possible to avoid negative impacts on conversations, and the transmission should be able to deal with different network conditions. Therefore, jitter compensation methods, frame loss concealment strategies and audio codecs which adapt the quality and the delay with respect to a given instantaneous network characteristic are deemed crucial elements of Audio-3D applications.
  • the signal captured by each of the microphones is preferably processed by a noise reduction (NR), an equalizer (EQ) and an automatic gain control (AGC).
  • NR noise reduction
  • EQ equalizer
  • AGC automatic gain control
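As a rough illustration of the capture chain above, each microphone channel can pass through NR, EQ and AGC in sequence. The stage implementations below are deliberately naive stand-ins (a threshold gate, a pre-emphasis filter, an RMS normalization), not the algorithms of the patent:

```python
import numpy as np

def noise_reduction(x, threshold=0.01):
    # Crude noise gate: attenuate samples below a noise threshold.
    gain = np.where(np.abs(x) < threshold, 0.1, 1.0)
    return x * gain

def equalizer(x, pre_emphasis=0.97):
    # First-order pre-emphasis as a stand-in for a real EQ curve.
    return np.append(x[0], x[1:] - pre_emphasis * x[:-1])

def automatic_gain_control(x, target_rms=0.1):
    # Scale the block so its RMS level matches a target level.
    rms = np.sqrt(np.mean(x ** 2)) + 1e-12
    return x * (target_rms / rms)

def process_channel(x):
    # NR -> EQ -> AGC, applied independently per channel as in the text.
    return automatic_gain_control(equalizer(noise_reduction(x)))

fs = 16000
t = np.arange(0, 0.1, 1 / fs)
mic = 0.3 * np.sin(2 * np.pi * 440 * t)   # one captured channel
out = process_channel(mic)
```

In a binaural terminal the same chain would run on the left and right channels, taking care (e.g., via linked AGC gains) not to destroy the interaural level cues.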
• This source codec is preferably specifically suited for binaural signals and transforms the two channels of the audio signal into a stream of packets of a moderate data rate which fulfills the high quality constraints as defined in section 8.3 above.
  • the packets are finally transmitted to the connected communication partner via an IP link.
  • the output signal from the jitter buffer is fed, here, into an optional noise reduction (NR) and an automatic gain control (AGC) unit.
  • NR noise reduction
  • AGC automatic gain control
• these units are not strictly necessary, since this functionality has been realized on the side of the connected communication partner. Nevertheless, they are often useful if the connected terminal does not provide the desired audio quality due to low bit rate source encoders or insufficient signal processing on its side.
  • a functional unit for a stereo widening (STW) as well as a functional unit for a stereo crosstalk cancellation (XTC) are added.
• the stereo crosstalk canceller unit mainly compensates for the loss of binaural cues caused by the emission of the two channels via closely spaced speakers and the resulting cross-channel interference (audio signals emitted by the right loudspeaker reaching the left ear and audio signals emitted by the left loudspeaker reaching the right ear).
  • the purpose of the stereo crosstalk canceller unit is to employ signal processing to emit signals which cancel out the undesired cross-channel interference signals reaching the ears.
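At a single frequency, the crosstalk cancellation described above can be viewed as inverting the 2x2 matrix of acoustic paths from the two speakers to the two ears. The transfer values below are made up for illustration; a practical canceller must also handle regularization and causality issues this sketch ignores:

```python
import numpy as np

# Hypothetical acoustic paths at one frequency bin:
# rows = ears (left, right), columns = speakers (left, right).
H = np.array([[1.0 + 0.0j, 0.6 + 0.2j],
              [0.6 - 0.2j, 1.0 + 0.0j]])

# Pre-filtering the binaural signal with the inverse of H makes the
# undesired cross-channel terms cancel at the ears.
C = np.linalg.inv(H)

binaural = np.array([0.5 + 0.0j, -0.3 + 0.0j])  # desired ear signals
speaker_feed = C @ binaural                      # what the speakers emit
at_ears = H @ speaker_feed                       # what the ears receive
```

Because `at_ears` equals the desired binaural signal, the interference paths have been cancelled at this frequency.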
  • a full two-channel acoustical echo canceller is preferably used, rather than two single channel acoustical echo cancellers.
• The typical functionality to be realized in the conference bridge is shown schematically and exemplarily in Fig. 11, based on an exemplary setup composed of three participants, of which one is connected via a conventional telephone (PSTN; public switched telephone network) connection, whereas the other two participants are connected via a packet-based Audio-3D link.
• PSTN public switched telephone network
  • Participant 3 receives the data from participant 1 and participant 2.
  • each participant receives the audio data from all participants but himself.
  • Variants are possible to control the outgoing audio streams, e.g.,
  • the output audio streams contain only signals from active sources.
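The mixing rule above, in which every participant receives the audio of everyone but himself, is the classic "mix-minus" bridge. A plain-Python sketch with one mono sample frame per participant (all names illustrative):

```python
def mix_minus(streams):
    """Return one output per participant: the sum of all streams minus
    the participant's own contribution ('mix-minus')."""
    total = [sum(samples) for samples in zip(*streams)]
    return [[t - s for t, s in zip(total, stream)] for stream in streams]

# Three participants, one mono sample frame each (for simplicity):
streams = [[1.0], [2.0], [4.0]]
outs = mix_minus(streams)
# outs[2] is what participant 3 hears: participants 1 and 2 only.
```

The variant mentioned in the text, where only active sources are mixed, would simply zero or skip the streams a voice-activity detector marks as silent.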
  • the monaural signal must be converted into a signal which is compliant to a conventional telephone.
  • the audio bandwidth must be limited and the signal must be converted from binaural to mono, as shown schematically and exemplarily in Fig. 13.
• an intelligent down-mix is preferably realized, such that undesired comb effects and spectral colorations are avoided. Since the intelligibility is usually significantly lower for monaural signals compared to binaural signals, additional signal processing / speech enhancements may preferably be implemented, such as a noise reduction and a dereverberation, which may help the listener to better follow the conference.
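One way to approach the "intelligent down-mix" described above is to time-align the two channels before averaging them, so that the interaural delay cannot cause comb-filter cancellation. The alignment-by-cross-correlation approach below is an illustrative choice, not taken from the patent:

```python
import numpy as np

def aligned_downmix(left, right, max_lag=64):
    """Sketch of a binaural-to-mono down-mix: estimate the inter-channel
    lag, time-align the channels, then average them, so the interaural
    delay does not produce comb-filter colorations."""
    corr = np.correlate(right, left, mode="full")
    lags = np.arange(-(len(left) - 1), len(right))
    window = np.abs(lags) <= max_lag          # restrict to plausible ITDs
    lag = lags[window][np.argmax(corr[window])]
    aligned_right = np.roll(right, -lag)
    return 0.5 * (left + aligned_right)

rng = np.random.default_rng(0)
left = rng.standard_normal(4800)
delay = 24                                    # 0.5 ms at 48 kHz
right = np.concatenate([np.zeros(delay), left[:-delay]])
mono = aligned_downmix(left, right)
```

A naive `0.5 * (left + right)` would instead superimpose the delayed copies and notch out frequencies whose half-period matches the delay.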
• signal levels are preferably adapted such that the transmitted signal appears neither too loud nor too low in volume.
  • this increases the perceived communication quality since, e.g., a source encoder works better for signals with a higher level than for lower levels and the intelligibility is higher for higher level signals.
• the signals are recorded with devices which mimic the influence of real ears (for example, an artificial head in general has "average ears" which shall approximate the impact of the ears of a large number of persons) or by using headset devices with a microphone in close proximity to the ear canal (see section 8.4.1).
  • the ears of the person who listens to the recorded signals and the ears which have been the basis for the binaural recording are not identical.
  • an equalizer can be used in the sending direction in Figs. 9 and 10 to compensate for possible deviations of the microphone characteristics related to the left and the right channel of the binaural recordings.
• Audio-3D a goal of Audio-3D is the transmission of speech contents as well as a transparent reproduction of the ambience in which acoustical contents have been recorded. In this sense, a noise reduction which removes acoustical background noise may not seem useful at first glance.
• At least stationary undesired noises should preferably be removed to increase the conversational intelligibility.
• the two connected loudspeakers may preferably instruct the user how to place both speaker devices in relation to each other. This solution guides the user in correcting the speaker and the listener positions until they are optimal for binaural sound reproduction.
  • the audio terminal 100 shown in Fig. 1 generates metadata provided with the multi-channel audio data, wherein the metadata indicates that the multi-channel audio data is binaurally captured.
  • the metadata further indicates one or more of: a type of the first device, a microphone use case, a microphone attenuation level, a beamforming processing profile, a signal processing profile and an audio encoding format.
  • a suitable metadata format could be defined as follows:
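The format itself is not reproduced in this excerpt. Purely as an illustration of how the fields listed above might be serialized, a hypothetical JSON record could look as follows; every key name and value here is invented, not taken from the patent:

```python
import json

# Hypothetical metadata record accompanying the multi-channel audio data;
# all field names and values below are illustrative.
metadata = {
    "capture": "binaural",                 # multi-channel data is binaurally captured
    "device_type": "headset",              # type of the first device
    "microphone_use_case": "near_ear",
    "microphone_attenuation_db": -6,
    "beamforming_profile": "off",
    "signal_processing_profile": "nr_eq_agc",
    "audio_encoding": "opus",
}

packet = json.dumps(metadata)              # serialized alongside the audio
decoded = json.loads(packet)               # receiver recovers the fields
```

A receiving terminal could use such a record, for instance, to decide whether binaural playback processing (stereo widening, crosstalk cancellation) is applicable.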
  • an audio terminal which comprises only one of (a) at least a first and a second microphone and (b) at least one of a first and a second speaker, the first one being preferably usable for recording multi-channel audio data comprising at least a first and a second audio channel and the second one being preferably usable for playing back multi-channel audio data comprising at least a first and a second audio channel.
• the audio terminal 100 is adapted to provide, preferably by means of the communication unit 21, the multi-channel audio data such that a remote user is able to listen to the multi-channel audio data.
  • the audio terminal 100 may be adapted to communicate the multi-channel audio data to a remote audio terminal via a data communication, e.g., a suitable Voice-over-IP communication.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Telephone Function (AREA)
EP14777648.8A 2014-10-01 2014-10-01 Audioanschluss Active EP3228096B1 (de)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2014/071083 WO2016050298A1 (en) 2014-10-01 2014-10-01 Audio terminal

Publications (2)

Publication Number Publication Date
EP3228096A1 true EP3228096A1 (de) 2017-10-11
EP3228096B1 EP3228096B1 (de) 2021-06-23

Family

ID=51655751

Family Applications (1)

Application Number Title Priority Date Filing Date
EP14777648.8A Active EP3228096B1 (de) 2014-10-01 2014-10-01 Audioanschluss

Country Status (2)

Country Link
EP (1) EP3228096B1 (de)
WO (1) WO2016050298A1 (de)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023050089A1 (zh) * 2021-09-28 2023-04-06 深圳市大疆创新科技有限公司 Audio collection method, system, and computer-readable storage medium
EP4300994A4 (de) * 2021-04-30 2024-06-19 Samsung Electronics Co., Ltd. Method and electronic device for recording audio data from multiple devices

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106126185A (zh) * 2016-08-18 2016-11-16 北京塞宾科技有限公司 Bluetooth-based holographic sound field recording and communication device and system
CN111108760B (zh) * 2017-09-29 2021-11-26 苹果公司 File format for spatial audio
WO2019157069A1 (en) * 2018-02-09 2019-08-15 Google Llc Concurrent reception of multiple user speech input for translation
CN110351690B (zh) * 2018-04-04 2022-04-15 炬芯科技股份有限公司 Intelligent voice system and voice processing method therefor
CN111385775A (zh) * 2018-12-28 2020-07-07 盛微先进科技股份有限公司 Wireless transmission system and method therefor
TWI700953B (zh) * 2018-12-28 2020-08-01 盛微先進科技股份有限公司 Wireless transmission system and method therefor
KR102565882B1 (ko) * 2019-02-12 2023-08-10 삼성전자주식회사 Sound output device including a plurality of microphones and method for processing sound signals using the plurality of microphones
CN110444232B (zh) * 2019-07-31 2021-06-01 国金黄金股份有限公司 Recording control method and apparatus for a loudspeaker, storage medium, and processor

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8379871B2 (en) * 2010-05-12 2013-02-19 Sound Id Personalized hearing profile generation with real-time feedback
US8855341B2 (en) * 2010-10-25 2014-10-07 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for head tracking based on recorded sound signals
US8767996B1 (en) * 2014-01-06 2014-07-01 Alpine Electronics of Silicon Valley, Inc. Methods and devices for reproducing audio signals with a haptic apparatus on acoustic headphones

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
None *
See also references of WO2016050298A1 *


Also Published As

Publication number Publication date
EP3228096B1 (de) 2021-06-23
WO2016050298A1 (en) 2016-04-07

Similar Documents

Publication Publication Date Title
EP3228096B1 (de) Audioanschluss (Audio terminal)
US11037544B2 (en) Sound output device, sound output method, and sound output system
US8073125B2 (en) Spatial audio conferencing
US20080004866A1 (en) Artificial Bandwidth Expansion Method For A Multichannel Signal
AU2008362920B2 (en) Method of rendering binaural stereo in a hearing aid system and a hearing aid system
US20140050326A1 (en) Multi-Channel Recording
US9749474B2 (en) Matching reverberation in teleconferencing environments
US20220369034A1 (en) Method and system for switching wireless audio connections during a call
US20140226842A1 (en) Spatial audio processing apparatus
US20070109977A1 (en) Method and apparatus for improving listener differentiation of talkers during a conference call
US20170223474A1 (en) Digital audio processing systems and methods
WO2014052431A1 (en) Method for improving perceptual continuity in a spatial teleconferencing system
WO2015130508A2 (en) Perceptually continuous mixing in a teleconference
US20230075802A1 (en) Capturing and synchronizing data from multiple sensors
US20220345845A1 (en) Method, Systems and Apparatus for Hybrid Near/Far Virtualization for Enhanced Consumer Surround Sound
TWM626327U (zh) System for distributing audio signals among a plurality of communication devices respectively corresponding to a plurality of users
BRPI0715573A2 Process and device for the acquisition, transmission and reproduction of sound events for communication applications
JP2022514325A (ja) Source separation in hearing devices and related methods
US20220368554A1 (en) Method and system for processing remote active speech during a call
US12010496B2 (en) Method and system for performing audio ducking for headsets
Rothbucher et al. Backwards compatible 3d audio conference server using hrtf synthesis and sip
US20240249711A1 (en) Audio cancellation
WO2017211448A1 (en) Method for generating a two-channel signal from a single-channel signal of a sound source
Chen et al. Highly realistic audio spatialization for multiparty conferencing using headphones
Corey et al. Immersive Enhancement and Removal of Loudspeaker Sound Using Wireless Assistive Listening Systems and Binaural Hearing Devices

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20170818

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20190415

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

RIC1 Information provided on ipc code assigned before grant

Ipc: H04S 3/00 20060101AFI20201221BHEP

Ipc: H04R 5/04 20060101ALN20201221BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

RIC1 Information provided on ipc code assigned before grant

Ipc: H04R 5/04 20060101ALN20210113BHEP

Ipc: H04S 3/00 20060101AFI20210113BHEP

INTG Intention to grant announced

Effective date: 20210128

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602014078295

Country of ref document: DE

Ref country code: AT

Ref legal event code: REF

Ref document number: 1405344

Country of ref document: AT

Kind code of ref document: T

Effective date: 20210715

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG9D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210623

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210623

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210923

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210623

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1405344

Country of ref document: AT

Kind code of ref document: T

Effective date: 20210623

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210923

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210623

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210623

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210623

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210924

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20210623

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210623

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210623

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210623

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210623

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210623

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210623

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211025

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210623

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210623

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210623

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602014078295

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210623

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

26N No opposition filed

Effective date: 20220324

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210623

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20211031

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210623

REG Reference to a national code

Ref country code: DE

Ref legal event code: R082

Ref document number: 602014078295

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20211001

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210623

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20211031

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20211031

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20211031

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20211001

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20141001

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210623

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210623

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20240318

Year of fee payment: 10

Ref country code: GB

Payment date: 20240325

Year of fee payment: 10

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20240325

Year of fee payment: 10