EP4282162A1 - Microphone, method for recording an acoustic signal, device for reproducing an acoustic signal, or method for reproducing an acoustic signal - Google Patents

Microphone, method for recording an acoustic signal, device for reproducing an acoustic signal, or method for reproducing an acoustic signal

Info

Publication number
EP4282162A1
Authority
EP
European Patent Office
Prior art keywords
signal
membrane
microphone
loudspeaker
designed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP22702897.4A
Other languages
German (de)
English (en)
Inventor
Klaus Kaetel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kaetel Systems GmbH
Original Assignee
Kaetel Systems GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kaetel Systems GmbH filed Critical Kaetel Systems GmbH
Publication of EP4282162A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R7/00Diaphragms for electromechanical transducers; Cones
    • H04R7/02Diaphragms for electromechanical transducers; Cones characterised by the construction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/08Mouthpieces; Microphones; Attachments therefor
    • H04R1/083Special constructions of mouthpieces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/08Mouthpieces; Microphones; Attachments therefor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/20Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R1/40Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
    • H04R1/406Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers microphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2201/00Details of transducers, loudspeakers or microphones covered by H04R1/00 but not provided for in any of its subgroups
    • H04R2201/40Details of arrangements for obtaining desired directional characteristic by combining a number of identical transducers covered by H04R1/40 but not provided for in any of its subgroups
    • H04R2201/405Non-uniform arrays of transducers or a plurality of uniform arrays with different transducer spacing

Definitions

  • Microphone, method of recording an acoustic signal, device for reproducing an acoustic signal, or method of reproducing an acoustic signal
  • the present invention relates to the field of electroacoustics and in particular to concepts for recording and reproducing acoustic signals.
  • acoustic scenes are recorded using a set of microphones. Each microphone outputs a microphone signal.
  • For example, for an orchestral audio scene, 25 microphones may be used.
  • a sound engineer performs a mixing of the 25 microphone output signals into, for example, a standard format such as a stereo format, a 5.1, a 7.1, a 7.2, or other appropriate format.
  • in a stereo format, for example, two stereo channels are created by the sound engineer or an automatic mixing process.
  • in a 5.1 format, mixing results in five channels and one subwoofer channel.
  • a mix is made into seven channels and two subwoofer channels in a 7.2 format, for example.
  • a mixed result is applied to electrodynamic loudspeakers.
  • in a stereo playback format, there are two loudspeakers, with the first loudspeaker receiving the first stereo channel and the second loudspeaker receiving the second stereo channel.
  • in a 7.2 playback format for example, there are seven loudspeakers in predetermined positions and two subwoofers that can be placed relatively arbitrarily. The seven channels are routed to their respective speakers, and the two subwoofer channels are routed to their respective subwoofers.
  • the European patent EP 2692154 B1 describes a set for capturing and playing back an audio scene, in which not only the translation is recorded and played back, but also the rotation and the vibration. A sound scene is therefore reproduced not by a single detection signal or a single mixed signal, but by two detection signals or two mixed signals, which are recorded simultaneously on the one hand and reproduced simultaneously on the other. This achieves that different emission characteristics of the audio scene are recorded compared to a standard recording and are reproduced in a playback environment.
  • a set of microphones is placed between the acoustic scene and an (imaginary) auditorium in order to capture the "conventional" or translational signal, which is characterized by a high directivity or a high quality factor (Q).
  • a second set of microphones is placed above or to the side of the acoustic scene to record a low-Q or low-directivity signal intended to represent the rotation of the sound waves as opposed to translation.
  • corresponding loudspeakers are placed in the typical standard positions, each having an omnidirectional array to reproduce the rotational signal and a directional array to reproduce the "conventional" translational sound signal.
  • European patent EP 2692144 B1 discloses a loudspeaker for reproducing, on the one hand, the translational audio signal and, on the other hand, the rotary audio signal.
  • the loudspeaker thus has an omnidirectionally emitting arrangement on the one hand and a directionally emitting arrangement on the other hand.
  • European patent EP 2692151 B1 discloses an electret microphone which can be used to record the omnidirectional or the directional signal.
  • European patent EP 3061262 B1 discloses an earphone and a method for manufacturing an earphone that generates both a translatory sound field and a rotary sound field.
  • the European patent application EP 3061266, intended for grant, discloses a headphone and a method for producing a headphone which is designed to generate the "conventional" translational sound signal using a first transducer, and to generate the rotary sound field using a second transducer arranged perpendicularly to the first transducer.
  • the recording and playback of the rotational sound field in addition to the translational sound field leads to a significantly improved and thus high-quality audio signal perception, which almost gives the impression of a live concert, although the audio signal is played back through loudspeakers or headphones or earphones.
  • the object of the present invention is to create an improved concept for recording the entire sound on the one hand and for reproducing this entire recorded sound on the other.
  • This object is achieved by a microphone for recording an acoustic signal according to patent claim 1, a playback device for an acoustic signal according to patent claim 15, a mobile device according to patent claim 29, a method for recording an acoustic signal according to patent claim 30, a method for reproducing an acoustic signal according to patent claim 31, or a computer program according to patent claim 32.
  • a microphone comprises a first part-microphone with a first membrane pair with membranes arranged opposite one another, and a second part-microphone with a second membrane pair which also has membranes located opposite one another.
  • the first pair of membranes is aligned such that the membranes of the first pair of membranes can be deflected along a first spatial axis.
  • the second pair of membranes is arranged such that the membranes of the second pair of membranes can be deflected along a second spatial axis that is different from the first spatial axis.
  • a third partial microphone with a third pair of membranes is preferably provided, the membranes of the third pair of membranes being deflectable along a third spatial axis which differs from the first and second spatial axes, the spatial axes preferably being orthogonal or essentially orthogonal to one another.
  • each membrane pair of the microphone derives its own differential output signal by combining the membrane output signals of the two membranes arranged opposite one another, using a change in the phase relationship, and preferably a phase reversal of one of the two membrane output signals.
  • a separate differential signal is thus generated for each spatial axis, which represents a corresponding directional component of the rotation signal or, in general, a differential signal in each spatial axis.
  • Such a microphone with two or three sub-microphones can preferably also be used to generate not only the novel difference signals, but also classic component signals, as are known, for example, in the field of ambisonics technology.
  • the membrane output signals of the two membranes lying opposite one another can be added together in order to obtain a corresponding Ambisonics component.
  • the microphone also detects an omnidirectional component, which is obtained either by its own omnidirectional microphone or by adding the three directional components.
  • a microphone thus generates not only the three novel differential signals in the x-direction, y-direction and z-direction, but also the four components B (or W), X, Y and Z of a known first-order Ambisonics signal or B-format signal, as sketched below. According to the invention, the result is that the acoustic quality is improved once again when such signals are reproduced.
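  • Purely as an illustration (not part of the original disclosure): a minimal NumPy sketch, assuming the six membrane signals are available as equally sampled digital arrays, of how the three difference signals and the classic B-format components could be formed by phase-inverted versus in-phase addition; the names m11..m16 simply follow the reference numerals of the figures.

```python
import numpy as np

def combine_membrane_signals(m11, m12, m13, m14, m15, m16):
    """Form the per-axis difference signals and the classic first-order
    Ambisonics (B-format) components from six membrane signals.

    m11/m12: membranes deflected along x, m13/m14: along y, m15/m16: along z.
    """
    # Difference signals: one membrane signal of each pair is phase-inverted
    # (180 degrees) before the addition, i.e. the pair is effectively subtracted.
    diff_x = m11 + (-1.0) * m12
    diff_y = m13 + (-1.0) * m14
    diff_z = m15 + (-1.0) * m16

    # Common-mode components: the same pairs added with the original phase
    # relationship yield the directional components X, Y, Z.
    X = m11 + m12
    Y = m13 + m14
    Z = m15 + m16

    # Omnidirectional component W (also called B or P): sum of all six
    # membrane signals without any phase change, i.e. the sum of X, Y and Z.
    W = X + Y + Z

    return {"W": W, "X": X, "Y": Y, "Z": Z,
            "diff_x": diff_x, "diff_y": diff_y, "diff_z": diff_z}
```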
  • on the playback side, it is preferred to play back at least two and preferably all three difference signals or differential-mode signals in addition to the conventional or common-mode signal, specifically by means of a loudspeaker system that has one or more loudspeakers reproducing the conventional CM or common-mode signal, and that further comprises a second, or a second and a third, loudspeaker device to reproduce the difference signals.
  • three differential signals are provided, and the second loudspeaker device for reproducing the three differential signals comprises a total of at least six transducers, which are arranged in three different spatial directions, so that the differential signals recorded in different spatial directions are reproduced on the reproduction side in the same direction in which they were originally recorded.
  • a rendering of a microphone signal is performed in a playback environment in which loudspeakers are placed at certain known locations.
  • a conventional translatory microphone signal is used, which can consist of an omnidirectional component and parametric side information, or which is available as a full B-format signal.
  • Vector-based amplitude panning (VBAP) is preferably used to render the microphone signal to the individual loudspeakers, for which appropriate weighting factors, derived from the direction information contained in the side information or from the B-format signal, are used.
  • these weighting factors are preferably not only used to render the conventional translatory audio signal, i.e. to "distribute" it to the individual loudspeakers. Instead, they are also used to weight or "distribute" the new difference signals in the different room axes to the different loudspeakers, as sketched below. This means that a complete rendition can be generated from a complete microphone signal produced at a recording position, which consists of a conventional omnidirectional component and three directional components and/or (parametric) metadata containing direction information, and which additionally has the new two or three differential signals of the two or three spatial axes.
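  • A minimal sketch of this weight reuse, assuming a simple pairwise 2-D VBAP over loudspeaker azimuths (the actual renderer, loudspeaker layout and gain normalisation are not specified in the text):

```python
import numpy as np

def vbap_2d_weights(source_azimuth_deg, speaker_azimuths_deg):
    """Pairwise 2-D VBAP: find the loudspeaker pair enclosing the source
    direction and return one gain per loudspeaker (zero elsewhere)."""
    az = np.deg2rad(source_azimuth_deg)
    p = np.array([np.cos(az), np.sin(az)])            # source direction vector
    spk = np.deg2rad(np.asarray(speaker_azimuths_deg, dtype=float))
    L = np.stack([np.cos(spk), np.sin(spk)], axis=1)  # loudspeaker direction vectors

    gains = np.zeros(len(spk))
    order = np.argsort(spk)                           # adjacent pairs around the circle
    for k in range(len(order)):
        i, j = order[k], order[(k + 1) % len(order)]
        base = np.column_stack([L[i], L[j]])          # 2x2 basis of this pair
        try:
            g = np.linalg.solve(base, p)              # p = g_i * L_i + g_j * L_j
        except np.linalg.LinAlgError:
            continue
        if np.all(g >= -1e-9):                        # source lies between this pair
            g = np.clip(g, 0.0, None)
            norm = np.linalg.norm(g)
            gains[i], gains[j] = g / (norm if norm > 0 else 1.0)
            break
    return gains

# Example: gains for a source at 15 degrees in a hypothetical 5-speaker layout.
gains = vbap_2d_weights(15.0, [30, 0, -30, 110, -110])
# The same gains are reused unchanged for the difference signals of each axis:
#   feed_cm[s]     = gains[s] * common_mode_signal
#   feed_diff_x[s] = gains[s] * diff_x   (and likewise for diff_y, diff_z)
```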
  • a loudspeaker at one of the loudspeaker positions comprises a conventional translational element, which is supplied with the rendered translational audio signal for this loudspeaker position, and additionally, for each of the difference signals, a difference signal transducer arranged according to the spatial direction of the difference signal, which can be designed, for example, as a double membrane without a housing whose emission direction lies in the corresponding spatial axis or spatial direction.
  • Fig. 3a shows a combiner for generating the difference signals;
  • Fig. 3b shows a single combiner for differential signal routing;
  • Fig. 3c shows a combiner according to an embodiment;
  • Fig. 5 shows a microphone holder according to an embodiment;
  • Fig. 6 shows a reproduction device according to an embodiment;
  • Fig. 8 shows a renderer for a playback device or a mobile device according to an embodiment;
  • Fig. 9a shows a converter arrangement with converters for each of the three differential signals;
  • Fig. 9b shows a converter arrangement with a converter for the conventional common-mode or CM signal;
  • Fig. 10 shows a renderer for a playback device or a mobile device according to a further embodiment;
  • Fig. 11 shows a renderer for a playback device or a mobile device according to a further exemplary embodiment with a loudspeaker implementation.
  • FIG. 1 shows a first partial microphone 1 with a membrane pair, which has a first membrane 11 and a second membrane 12, which are arranged opposite one another.
  • FIG. 1 shows a second partial microphone 2 with a second pair of membranes, which has a third membrane 13 and a fourth membrane 14, which are arranged opposite one another.
  • the first pair of membranes is arranged such that the first membrane 11 and the second membrane 12 can be deflected along a first spatial axis, such as the x-axis.
  • the second pair of membranes is arranged such that the third membrane 13 and the fourth membrane 14 along a second spatial axis, such as the y-axis of FIG. 1 can be deflected.
  • the second spatial axis differs from the first spatial axis, so the two spatial axes are not parallel.
  • the two spatial axes x, y are preferably orthogonal to one another or have an angle of between 60 and 120°.
  • Fig. 2 also shows a third partial microphone 3 with a third membrane pair, which has a fifth membrane 15 and a sixth membrane 16, which are arranged opposite one another, the third membrane pair being arranged such that the fifth membrane 15 and the sixth membrane 16 can be deflected along a third spatial axis, such as the z-axis.
  • the third spatial axis differs from the first spatial axis and the second spatial axis, with all three spatial axes preferably being orthogonal to one another. Different angles between the third spatial axis and the first or the second spatial axis, such as in a range between 60 and 120°, are preferred.
  • Fig. 2 also shows a very schematic sensitivity characteristic for each membrane 11 to 16, which additionally has either the letter F or the letter R. F stands for front and R stands for rear.
  • the different sensitivity characteristics of the individual membranes, each of which typically has a counter-electrode, are thus also arranged against one another.
  • Fig. 1 shows exit lines for each membrane.
  • the first partial microphone 1 is designed such that a first membrane signal is delivered in response to a deflection of the first membrane 11, and that a second membrane signal is delivered in response to a deflection of the second membrane, which has a specific phase relationship to the first membrane signal that results from the arrangement of the membranes, the wiring, or the recorded sound field.
  • the second sub-microphone 2 which has the two membranes 13, 14, also has output lines in order to supply a third membrane signal from the third membrane 13 and a fourth membrane signal from the fourth membrane 14.
  • the third partial microphone is also designed to deliver a fifth membrane signal in response to a deflection of the fifth membrane 15 and a sixth membrane signal in response to a deflection of the sixth membrane 16 in the third spatial axis, for example in the z-direction.
  • the first sub-microphone, the second sub-microphone and, if present, the third sub-microphone are designed to combine the corresponding membrane signals of the membranes of the membrane pair.
  • This is illustrated in Fig. 3a by a schematic combiner, shown at 30, as one block for all two or three partial microphones.
  • a corresponding individual combiner, as shown for example in Fig. 3b at 31, can be present for each individual sub-microphone, so that the membrane signals of one sub-microphone are always combined with each other, but membrane signals from different sub-microphones are not combined, at least for the generation of a first differential output signal 21 for the first partial microphone, a second differential output signal 22 for the second partial microphone and a third differential output signal 23 for the third partial microphone.
  • the combiner 30 is also designed not only to form the difference signals 21, 22, 23, but also to form common-mode (CM) signals 24.
  • CM signals 24 can, for example, be individual component signals X, Y, Z, as known from Ambisonics technology, or an omnidirectional signal that is obtained, for example, when the membrane signals of all individual membranes are added without a phase shift of individual membrane signals.
  • the combiner 30 is designed to combine the first membrane signal 11 and the second membrane signal 12 with a changed first phase relationship.
  • the first differential output signal Diffx 21 is therefore assigned to the first spatial axis, ie for example the x-axis.
  • the second sub-microphone is designed to combine the third membrane signal 13 and the fourth membrane signal 14 with a changed second phase relation in order to provide a second differential output signal Diffy, which is shown at 22 in Fig. 3a and is associated with the second spatial axis y.
  • the third sub-microphone is designed to combine the fifth membrane signal 15 and the sixth membrane signal 16 with a phase relation that is changed compared to the third phase relation, in order to provide a third differential output signal, which is shown at 23 in Fig. 3a and is assigned to the spatial axis z.
  • a phase change element 40 is shown schematically in Fig. 3c, which preferably has a phase value of 180°, although the phase angle of the phase element can be in the range between 90° and 270°.
  • the preferred range is 170° to 190°, or 180° in the most preferred embodiment.
  • the phase change device 41 is provided in order to change the second phase relationship for the second partial microphone, so that an addition, as shown schematically in FIG. 3c, takes place with a changed second phase relationship.
  • a phase change element 42 is also provided for the third partial microphone, which changes the third phase relationship between the membrane signals 15, 16 and adds the signals with the changed third phase relationship to obtain the third differential output signal Diffz 23 from FIG. 3c.
  • the combiner is also designed to form conventional common-mode signals.
  • the fifth membrane signal 15 and the sixth membrane signal 16 are added together with the original third phase relation, ie without the effect of a phase element 42, for example.
  • a corresponding procedure is followed in order to obtain a conventional y-direction component of a directional microphone by adding the membrane signals of the second membrane pair, 13, 14, with the original phase relationship, i.e. without the effect of a phase element 41.
  • an X component of a directional microphone is also obtained when the two directional characteristics, i.e. for the front membrane 11 and the rear membrane 12, are added, again without the effect of a phase element 40.
  • a total omnidirectional signal can be obtained if all six membrane signals are summed in their original first, second and third phase relations. This omnidirectional signal is referred to, for example, as a W signal or P signal, as known from Ambisonics technology or for a signal in B format, which has an omnidirectional component, a directional component in the X direction, a directional component in the Y direction and a directional component in the Z direction.
  • in addition to these signals, or as an alternative to these signals, the microphone according to the invention supplies differential signals for the individual directions, i.e. signals that result when a difference is formed between the front and rear directional characteristics, in order to capture the sound field that, to a certain extent, prevails laterally with respect to the oppositely arranged diaphragms, e.g. above and below the two membranes 11, 12 of Fig. 1.
  • the change between the first phase relation on the left in Fig. 3c and the changed phase relation on the right in Fig. 3c before the corresponding addition can be achieved by an actually provided phase shifter, a delay line, or a phase reversal.
  • the membrane signals are transmitted as symmetrical signals between a plus line 11a and a minus line 11b.
  • such a schematic representation of the membrane signal 11 is shown in Fig. 3b, the "line" 11 in Fig. 3c corresponding to the positive individual line 11a, the negative individual line 11b and a ground (GND) 11c.
  • the same applies to the second membrane signal 12, which in turn consists of a positive line 12a, a negative line 12b and a common ground 12c.
  • the actual membrane signal is transmitted as the difference between the positive and the negative line, as is known for balanced line transmission.
  • the combiner 30 is designed as shown for a single combiner 31 in FIG. 3b.
  • the individual combiner 31 would then be provided for each of the three sub-microphones 1, 2, 3.
  • the individual combiner 31 has two inputs 32, 34 for the positive potential and two inputs 33, 35 for the negative potential and one (or two) ground inputs 38 for the ground potential GND.
  • in the exemplary embodiment shown in Fig. 3b, the polarity of the positive and negative lines is reversed for the membrane signal 12, as shown on the left.
  • the positive line 12a is connected to the negative input 35 and the negative line 12b is connected to the positive input 34.
  • the outputs of the individual combiner 31 are the difference signal Diffx 21 and the output ground GND 39, as modelled in the sketch below.
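  • Purely as a conceptual model (an assumption, not circuitry from the disclosure): the balanced-line wiring of the single combiner can be mimicked in a few lines, where swapping the plus and minus lines of the second membrane at the combiner inputs turns the plain addition into the subtraction that yields Diffx.

```python
def balanced_value(plus, minus):
    """A balanced (symmetrical) signal is carried as the difference
    between its plus line and its minus line."""
    return plus - minus

def single_combiner(m1_plus, m1_minus, m2_plus, m2_minus):
    """Model of the single combiner 31: the second membrane's lines are
    connected with swapped polarity, so adding the two balanced inputs
    yields the difference signal Diffx = m1 - m2."""
    s1 = balanced_value(m1_plus, m1_minus)            # first membrane, normal polarity
    s2_inverted = balanced_value(m2_minus, m2_plus)   # plus/minus swapped: -(m2_plus - m2_minus)
    return s1 + s2_inverted                           # = m1 - m2
```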
  • Fig. 4 shows a preferred embodiment of the microphone, in which the three sub-microphones are all held by a microphone holder 50, each sub-microphone having an elongated housing, with the membrane pair being arranged in the corresponding tip of the sub-microphone, preferably protected from the outside world by a permeable grid.
  • the two membranes of the first partial microphone 1 are arranged in the yz plane, so that a deflection in the x direction is achieved.
  • the two membranes of the second partial microphone 2 are arranged in the x-z plane in order to achieve a deflection in the y direction, ie in the second spatial axis.
  • the two membranes of the third partial microphone 3 are arranged in the xy plane in order to be deflected by sound in the z direction.
  • the individual partial microphones also have an output line, which either leads the individual membrane signals to the outside or which already leads the differential output signal 21, 22 or 23 to the outside (not shown in Fig. 4).
  • the individual lines can also lead the conventional common-mode components outwards in the individual directions, as shown at 24b, 24c for x and y, where the signal Z, which will be explained further with reference to FIG. 7, is not shown in FIG. 4, but can already be generated by the third partial microphone 3, preferably within the elongated housing.
  • each membrane has a counter-electrode, so that a total of six individual membranes and six corresponding counter-electrodes are present in the microphone according to the invention shown in Fig. 4.
  • These counter-electrodes together form a separate condenser microphone for each membrane, whereby, depending on the implementation, a condenser or electret film can also be applied to the corresponding counter-electrode in order to have six individual condenser or electret microphones in the arrangement shown in Fig. 4.
  • the "tips" of the three sub-microphones 1, 2, 3 are directed towards a common area or axis in order to position the three pairs of membranes as close as possible to one another, so as to be able to capture a rotational vibration, represented by its three individual components, which indicate the direction of rotation.
  • a schematic (partial) microphone holder shown in FIG. 5 is preferably provided, which is shown at 50 in FIG. 4 and which is shown schematically in FIG. 5 in plan view.
  • the microphone holder is triangular or can also be kite-shaped or in another shape. However, it comprises two sides which are at an angle of 90° to one another in order to align the sub-microphone 1 and the sub-microphone 2 at an angle of 90° to one another.
  • a first holder 51 is provided which is provided on the first side of the two sides arranged at right angles to one another, and a second holder 52 which is provided on the other side of the two sides which are arranged at right angles to one another.
  • a third holder 53 is provided, which is formed on the bisecting line of the 90° angle between the two sides on which the first holder 51 and the second holder 52 are provided, and which protrudes from the plane of the drawing in order to bring the third partial microphone, with its sensitive microphone tip, as close as possible to the two microphone tips of the first and second partial microphones.
  • the holders 51, 52 and 53 are preferably designed as clips in order to be able to mount the individual sub-microphones without tools.
  • other holding means can also be provided to hold the elongate sub-microphones in the appropriate angled arrangement so that the diaphragm pairs are aligned as has been explained with reference to Fig. 1 or Fig. 2.
  • the microphones can also be arranged at an angle between 70° and 110°, or the third holder 53 or the third partial microphone can be arranged at an angle between 30° and 60° with respect to the first holder and the second holder.
  • the microphone holder 50 is also attached to a tripod 54, indicated schematically.
  • the microphone can also be suspended from a ceiling with a cable construction instead of the stand 54 in order to have the lower area free, for example when a stage is to be recorded.
  • FIG. 7 shows an overview of all signals that can be supplied by the microphone, as has been shown with reference to FIG. 4, for example, or FIG. 2 or 3b.
  • the microphone can deliver the components of B format, also known as FOA (First Order Ambisonics) format.
  • These are an omnidirectional signal 24a and the directional components 24b, 24c, 24d, as shown at the output 24 in Fig. 3c.
  • These signals are usually used to excite the conventional translational vibrations via an appropriately positioned sound transducer.
  • the microphone according to the invention supplies the differential signals in the three spatial directions Diffx 21, Diffy 22 and Diffz 23.
  • an omnidirectional differential signal 21a could also be generated that can be obtained by adding the three directed difference signals.
  • the present invention thus provides a new type of B format for the rotational vibrations or the differential sound field.
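  • As a hypothetical illustration of such an extended format, the complete signal set could be grouped in a container like the following (the class name and field names are assumptions, not terminology from the disclosure):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ExtendedBFormat:
    """Hypothetical container for the full microphone output: the classic
    first-order Ambisonics (B-format) components plus the three new
    difference signals."""
    W: np.ndarray       # omnidirectional common-mode signal (24a)
    X: np.ndarray       # directional common-mode components (24b, 24c, 24d)
    Y: np.ndarray
    Z: np.ndarray
    diff_x: np.ndarray  # difference signals per spatial axis (21, 22, 23)
    diff_y: np.ndarray
    diff_z: np.ndarray

    @property
    def diff_omni(self) -> np.ndarray:
        # Optional omnidirectional difference signal (21a): the sum of the
        # three directed difference signals.
        return self.diff_x + self.diff_y + self.diff_z
```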
  • the playback device comprises an interface 110 for receiving the first electrical signal 24 corresponding to a common mode acoustic signal, a separate second electrical signal corresponding to a differential acoustic signal and a separate third electrical signal corresponding to a differential acoustic signal.
  • the playback device includes a first loudspeaker device 131a, 132a, 133a, 134a, 135a for playing back the first electrical signal, the first loudspeaker device being designed to generate translational vibrations in response to the first electrical signal.
  • the playback device also includes a second loudspeaker device 131b, 132b, 133b, 134b, 135b for playing back the second and the third electrical signal, the second loudspeaker device being different from the first loudspeaker device.
  • the second loudspeaker device is designed to generate rotational vibrations in response to the second signal, that is to say to a first differential signal, and to the third electrical signal, that is to say in response to the second differential signal.
  • the second loudspeaker device is designed to generate sound with a second directional characteristic that differs from a first directional characteristic that is assigned to the first loudspeaker device.
  • the playback device also includes a renderer 120, which operates separately for the common-mode signals, i.e. for the first electrical signal 24, and for the differential-mode (DM) signals, and which in one embodiment obtains information 121 about loudspeaker positions in a playback space and information 122 about a position of the microphone, for example the microphone shown in Fig. 4.
  • the microphone does not necessarily have to be a real microphone, but can be a virtual microphone that processes synthetic or pre-recorded signals and converts them into a specific microphone format, this microphone format being related to the state of the sound field at the recording position where the virtual microphone is arranged.
  • Several virtual microphone signals can also be used and processed in the renderer 120 to describe a sound field.
  • the renderer 120 operates separately for the common mode signals and the differential signals.
  • the renderer supplies, as the signals 60, 70, 80, 90, 100, loudspeaker signals for a left surround playback position LS arranged at the rear left, a left playback position L, a center playback position C, a right playback position R and a right surround playback position RS arranged at the rear right.
  • the renderer 120 also supplies difference signals to the corresponding loudspeakers, which are represented by 61 , 71 , 81 , 91 , 101 .
  • the renderer supplies, for each individual loudspeaker, which for example consists of both the first loudspeaker device 131a and the second loudspeaker device 131b, not just a single difference signal but three difference signals, namely for the spatial directions x, y, z.
  • alternatively, only two or just a single difference signal can be rendered, so that only two or just a single difference signal is supplied to the corresponding loudspeaker, and in particular to the corresponding loudspeaker device for the difference signals 131b, 132b, 133b, 134b, 135b.
  • the invention can also be used to render headphone signals from many different microphone signals at many different positions.
  • the renderer (120) is thus designed to render the microphone signal using a virtual position (122) of the real or virtual microphone and using information (121) about the various loudspeaker positions, in order to generate a loudspeaker signal (60, 70, 80, 90, 100) for each loudspeaker of a first plurality of loudspeakers, or to render multiple microphone signals using virtual positions of the real or virtual microphones and using different head-related transfer functions (HRTFs), derived from the positions and a respective side of a headphone, in order to generate a headphone signal for each of the two headphone sides. The renderer is furthermore designed to render the first differential output signal (21) and the second differential output signal (22) using the position of the real or virtual microphone and using the different loudspeaker positions, in order to generate a loudspeaker signal (61, 71, 81, 91, 101) for each loudspeaker of a plurality of second loudspeakers, or to render the respective first and second differential output signals using the virtual positions of the real or virtual microphones and using the different head-related transfer functions.
  • Loudspeakers such as those known from EP 2692144 B1 have corresponding inputs for the corresponding acoustic converters.
  • the converter for the translation signal, i.e. for the first electrical signal, which represents a common-mode signal, is shown in Fig. 9b at 131a to 135a.
  • This converter or the corresponding loudspeaker device receives a corresponding signal, namely the signal 60, 70, 80, 90, 100, which can optionally be amplified, as is also shown in FIG. 9b.
  • in the loudspeaker presented in the prior art, the second loudspeaker device for the differential signal receives only a single signal.
  • here, in contrast, each loudspeaker receives two or even three individual signals which can be output to corresponding converters, as illustrated in Fig. 9a.
  • the second loudspeaker device has two converters 170a for the x-direction, ie for the Diffx difference signal.
  • Two converters 170b are provided for the y-difference signal Diffy, which are arranged opposite one another in the schematic cube shown in FIG. 9a.
  • the second loudspeaker device has two converters 170c in order to reproduce the z component of the rotational vibration.
  • in the "full equipment" shown in Fig. 9a, the second loudspeaker device thus has at least six individual, typically housing-less diaphragms, with each pair of opposing diaphragms being fed the respective x, y, z differential signal.
  • the corresponding electrical signals received by the interface 110 can also be output directly via loudspeakers, i.e. without using a renderer 120.
  • a corresponding microphone could be placed at any desired "speaker position" in a studio environment.
  • no renderer 120 is necessary.
  • the signals fed into the interface 110 would be fed into the loudspeakers directly or, if necessary, after amplification, as shown in Figs.
  • the first loudspeaker device, which is implemented in each of the five loudspeakers 131, 132, 133, 134, 135, has a first transducer for acoustically reproducing the common-mode electrical signal, the first transducer being designed to emit in a first direction.
  • the second loudspeaker device includes a second transducer for acoustically reproducing the first differential signal, the second transducer being designed to emit in a second direction that differs from the first direction.
  • the second loudspeaker device also has a third transducer for acoustically converting the second difference signal, the third transducer being designed to emit in a third direction, which is different from the first and the second direction or is different from the second direction and is substantially equal to the first direction.
  • This implementation also includes the case where the rotational vibration has a component in the direction in which the conventional translational vibration takes place.
  • the interface receives three differential electrical signals 21, 22, 23, which are referred to as the second electrical signal, the third electrical signal and the fourth electrical signal.
  • the interface can also receive only two electrical signals as differential signals, so that the rotational vibration can be reproduced correctly, at least in a two-dimensional direction.
  • the first loudspeaker device for the common-mode signal, i.e. for the conventional audio signal, is equipped with a crossover 162, a tweeter 161 and a woofer or mid-range driver 163, as shown at 131a in Fig. 11.
  • the first loudspeaker device can also have several different converters, which, however, are all driven by one and the same common-mode signal 24, for example, or one and the same common-mode signal 60, 70, 80, 90, 100 from Fig. 6 (apart from a frequency division via the crossover 162) are fed.
  • the individual differential converters 170a, 170b, 170c, shown at 131b in Fig. 11 or in Fig. 9a, are each fed with different signals which have not been generated by frequency decomposition or the like, but which have preferably been recorded separately and rendered separately, either directly or through independent separate rendering. There is thus preferably no mixing between the difference signals on the way from recording to playback, but only rendering, i.e., for example, an application of appropriate panning weights. In addition, there is no mixing, in the reproduction or in the renderer 120, of the common-mode signal on the one hand and one or more differential signals on the other hand.
  • the corresponding signals are routed separately to the corresponding transducers, and the acoustic output signals are only superimposed in the sound field generated by one or more of the loudspeakers 131, 132, 133, 134, 135.
  • the common mode renderer receives either only the omnidirectional electrical signal 24a or the complete FOA or B format signal with the X component 24b, the Y component 24c and the Z component 24d.
  • the difference signal renderer only receives the difference signals in the x-direction 21, in the y-direction 22 and in the z-direction 23.
  • the difference signal renderer is supplied with the rendering settings 121 which the common-mode renderer determines from the B-format signals for a particular playback setup.
  • the rendering of the difference signals is therefore efficiently possible, because it takes place with the same rendering settings 121 and in particular with corresponding panning weights 121a, as explained with reference to Fig. 10. There is therefore no need to determine separate rendering weights for the difference signals. Instead, the differential signals 21, 22, 23 are "treated" in the same way as the omnidirectional signal 24a, i.e. the common-mode signal in Fig. 8.
  • differently from the rendering for the common-mode signal, it is further preferred, in order to reduce the effort, that the difference signal renderer only generates a left difference rendered signal, a center difference rendered signal and a right difference rendered signal, and that the difference rendered signals for left rear (LS) and right rear (RS) are then derived from the rendered signal for left and from the rendered signal for right, respectively.
  • a possible form of generation consists, in the embodiment shown in Fig. 8, in a simple copy of the signal and a gain adjustment for left rear and right rear, whereby this gain adjustment can be an attenuation or an amplification depending on the implementation; attenuation is preferably used to concentrate the impression of the rotating sound field on the front L, C, R channels, as sketched below.
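  • A minimal sketch of this copy-and-gain derivation (the gain value 0.5 is an arbitrary assumption; the text only states that an attenuation is preferred):

```python
def derive_rear_difference_feeds(diff_left, diff_right, rear_gain=0.5):
    """Derive the left-rear (LS) and right-rear (RS) difference feeds by
    copying the rendered left/right difference signals and applying a gain
    adjustment (an attenuation here, rear_gain < 1, to keep the rotational
    sound-field impression concentrated on the front L, C, R channels)."""
    diff_ls = rear_gain * diff_left    # copy of the left difference feed, attenuated
    diff_rs = rear_gain * diff_right   # copy of the right difference feed, attenuated
    return diff_ls, diff_rs
```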
  • the panning weights are determined from the common-mode signals or from metadata associated with the common-mode signals.
  • the position of a sound source in the common-mode signal is determined, specifically with respect to a microphone position. Then, using the position of one or more loudspeakers in a reproduction room and using the (virtual) position of the microphone in the reproduction room, the sound source in the common-mode signal is "placed" somewhere in the reproduction room, preferably using vector-based amplitude panning.
  • for this, the signal assigned to the sound source is provided with a weighting factor per loudspeaker in order to obtain a corresponding loudspeaker signal.
  • a sound source to be placed between left and center is, for example, mapped such that a panning factor for the omnidirectional signal is 0.5 for the left loudspeaker and is also 0.5 for the center loudspeaker. If both loudspeaker signals are then reproduced, the sound source appears as a kind of "phantom source" between left and center, as in the small example below. The same procedure is used for other sound sources in the signals.
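  • The worked example in code form (the 440 Hz test tone and the 48 kHz sampling rate are arbitrary assumptions; only the two adjacent loudspeakers receive non-zero gains):

```python
import numpy as np

# A source to be placed midway between the left and the center loudspeaker
# gets a panning factor of 0.5 on each of the two adjacent loudspeakers.
fs = 48000
source_signal = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # 1 s test tone

gain_left, gain_center = 0.5, 0.5
feed_left = gain_left * source_signal
feed_center = gain_center * source_signal
# Reproduced together, the two feeds create a "phantom source" between the
# left and the center loudspeaker; all other loudspeakers receive zero.
```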
  • the common-mode signal can be separated into individual sound sources using any source separation algorithm.
  • a preferred embodiment consists in subjecting the signal to a time-frequency transformation, with a plurality of sub-bands being generated for a sequence of consecutive frames, and with it then being determined, per time-frequency bin of the sequence of frames, from which direction the sound in the microphone signal is coming.
  • This direction determination can be achieved by simply reading out metadata that has already been provided, which specify a DOA direction with an azimuth angle and an elevation angle for each time/frequency bin.
  • in addition to the DOA information, diffuseness information can also be supplied per time-frequency bin, as is known from audio signal processing under the name DirAC (Directional Audio Coding).
  • the panning weights are determined, depending on the corresponding direction information (indicated by "direction" in the figure), per loudspeaker for the signal P, which is denoted by 24a.
  • the signal 24a can be the omnidirectional signal or a virtual microphone signal that has been derived for the corresponding loudspeaker.
  • This signal is then weighted in the weighter 153 with the appropriate panning weight from block 157, in response to the corresponding DOA (direction of arrival) information.
  • a diffuse signal is also generated using the upper branch, which has a decorrelator 154.
  • the proportion of the diffuse signal is set by the two weights 151, 152 depending on the diffuseness information.
  • Both branches, the "diffuse branch" and the "direct branch", are added in an adder 155.
  • This processing is carried out individually for each sub-band, and in a further adder 156 all correspondingly processed sub-bands are added up in order to obtain a loudspeaker signal for the first loudspeaker device, shown as an example in Fig. 11 at 60 for the left rear channel, which, as already stated, can have a tweeter 161 and a woofer or mid-range driver 163.
  • each differential signal is treated in the same way as the omnidirectional signal 24a, i.e. weighted with a weighter 158, which operates under the control of the panning weights, and an adder 159 then adds up the correspondingly weighted other subbands of the same differential signal, in order to then generate, for example, the differential signal for the x-direction, i.e. 61a, for the left rear loudspeaker.
  • a corresponding procedure is followed in order to generate the differential signals 61b, 61c for the Y converters and the Z converters.
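  • A sketch of this per-subband processing, assuming a DirAC-style square-root energy split between the direct and the diffuse branch (the exact split and the decorrelator are not specified in the text); the reference numerals in the comments refer to the blocks described above.

```python
import numpy as np

def render_speaker_channel(P_subbands, diff_subbands, panning_weights,
                           diffuseness, decorrelate):
    """Per-subband rendering for one loudspeaker (sketch).

    P_subbands      : subband signals of the common-mode signal P (24a)
    diff_subbands   : subband signals of one difference signal, e.g. Diffx
    panning_weights : one panning weight per subband, derived from the DOA
    diffuseness     : diffuseness value per subband, between 0 and 1
    decorrelate     : decorrelator function (implementation not specified)
    """
    cm_out = 0.0
    diff_out = 0.0
    for P, D, w, psi in zip(P_subbands, diff_subbands,
                            panning_weights, diffuseness):
        direct = np.sqrt(1.0 - psi) * w * P       # direct branch (weighter 153)
        diffuse = np.sqrt(psi) * decorrelate(P)   # diffuse branch (decorrelator 154,
                                                  # weights 151/152)
        cm_out = cm_out + direct + diffuse        # adders 155 and 156

        # The difference signal is only weighted with the same panning weight
        # (weighter 158) and summed over the subbands (adder 159); no diffuse
        # branch is added for it.
        diff_out = diff_out + w * D
    return cm_out, diff_out
```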
  • the renderer 120 can be implemented, together with the interface 110, for example in mobile phone software or generally in a mobile device, with the signals for the individual loudspeakers 131, 132, 133, 134, 135 being supplied to the corresponding loudspeakers, for example, via wireless transmission.
  • the mobile device is shown at 200 and comprises, for example, a processor, a memory, various wireless interfaces, a rechargeable battery, etc.
  • a central unit can be provided, which has an interface independently of a mobile phone in order to receive the signals 21, 22, 23, 24 from whatever source, and which is then designed to supply the corresponding renderer output signals 60 to 101 via lines to the corresponding loudspeakers.
  • the interface itself and a corresponding renderer for the corresponding loudspeaker can also be implemented in the loudspeaker 131, 132, 133, 134, 135 itself, in which case each loudspeaker would have a voltage supply and a corresponding input for the signals, i.e. the interface 110.
  • Some or all of the method steps may be performed by a hardware apparatus (or using a hardware apparatus), such as a microprocessor, a programmable computer, or an electronic circuit. In some embodiments, some or all of the essential method steps can be performed by such an apparatus.
  • embodiments of the invention may be implemented in hardware or in software. The implementation can be performed using a digital storage medium such as a floppy disk, DVD, Blu-ray Disc, CD, ROM, PROM, EPROM, EEPROM or FLASH memory, a hard disk or another magnetic or optical memory, on which electronically readable control signals are stored, which can interact (or do interact) with a programmable computer system in such a way that the respective method is carried out. Therefore, the digital storage medium can be computer-readable.
  • some embodiments according to the invention comprise a data carrier having electronically readable control signals capable of interacting with a programmable computer system in such a way that one of the methods described herein is carried out.
  • embodiments of the present invention can be implemented as a computer program product with a program code, wherein the program code is effective to perform one of the methods when the computer program product runs on a computer.
  • the program code can also be stored on a machine-readable carrier, for example.
  • Other exemplary embodiments include the computer program for performing one of the methods described herein, the computer program being stored on a machine-readable carrier.
  • an exemplary embodiment of the method according to the invention is therefore a computer program that has a program code for performing one of the methods described herein when the computer program runs on a computer.
  • a further exemplary embodiment of the method according to the invention is therefore a data carrier (or a digital storage medium or a computer-readable medium) on which the computer program for carrying out one of the methods described herein is recorded.
  • a further exemplary embodiment of the method according to the invention is therefore a data stream or a sequence of signals which represents the computer program for carrying out one of the methods described herein.
  • the data stream or sequence of signals may be configured to be transferred over a data communication link, such as the Internet.
  • Another embodiment includes a processing device, such as a computer or programmable logic device, configured or adapted to perform any of the methods described herein.
  • Another embodiment includes a computer on which the computer program for performing one of the methods described herein is installed.
  • a further exemplary embodiment according to the invention comprises a device or a system which is designed to transmit a computer program for carrying out at least one of the methods described herein to a recipient.
  • the transmission can take place electronically or optically, for example.
  • the recipient may be a computer, mobile device, storage device, or similar device.
  • the device or the system can, for example, comprise a file server for transmission of the computer program to the recipient.
  • a programmable logic device (e.g., a field programmable gate array, FPGA) may cooperate with a microprocessor to perform any of the methods described herein.
  • in some embodiments, the methods are performed by any hardware device. This can be universally usable hardware, such as a computer processor (CPU), or hardware that is specific to the method, such as an ASIC.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • Multimedia (AREA)
  • Stereophonic Arrangements (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The invention relates to a microphone having the following features: a first sub-microphone (1) with a first membrane pair comprising a first membrane (11) and a second membrane (12) arranged opposite one another; a second sub-microphone (2) with a second membrane pair comprising a third membrane (13) and a fourth membrane (14) arranged opposite one another, the first membrane pair being arranged such that the first membrane (11) and the second membrane (12) can be deflected along a first spatial axis, the second membrane pair being arranged such that the third membrane (13) and the fourth membrane (14) can be deflected along a second spatial axis, and the second spatial axis being different from the first spatial axis.
EP22702897.4A 2021-01-21 2022-01-20 Microphone, procédé d'enregistrement de signal acoustique, dispositif de lecture de signal acoustique, ou procédé de lecture de signal acoustique Pending EP4282162A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102021200555.1A DE102021200555B4 (de) 2021-01-21 2021-01-21 Mikrophon und Verfahren zum Aufzeichnen eines akustischen Signals
PCT/EP2022/051252 WO2022157252A1 (fr) 2021-01-21 2022-01-20 Microphone, procédé d'enregistrement de signal acoustique, dispositif de lecture de signal acoustique, ou procédé de lecture de signal acoustique

Publications (1)

Publication Number Publication Date
EP4282162A1 true EP4282162A1 (fr) 2023-11-29

Family

ID=80222120

Family Applications (1)

Application Number Title Priority Date Filing Date
EP22702897.4A Pending EP4282162A1 (fr) 2021-01-21 2022-01-20 Microphone, procédé d'enregistrement de signal acoustique, dispositif de lecture de signal acoustique, ou procédé de lecture de signal acoustique

Country Status (5)

Country Link
US (1) US20230362545A1 (fr)
EP (1) EP4282162A1 (fr)
CN (1) CN117242782A (fr)
DE (1) DE102021200555B4 (fr)
WO (1) WO2022157252A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023166109A1 (fr) 2022-03-03 2023-09-07 Kaetel Systems Gmbh Dispositif et procédé de réenregistrement d'un échantillon audio existant

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FI20055261A0 (fi) * 2005-05-27 2005-05-27 Midas Studios Avoin Yhtioe Akustisten muuttajien kokoonpano, järjestelmä ja menetelmä akustisten signaalien vastaanottamista tai toistamista varten
ES2712724T3 (es) 2011-03-30 2019-05-14 Kaetel Systems Gmbh Caja de altavoces
US20130142358A1 (en) 2011-12-06 2013-06-06 Knowles Electronics, Llc Variable Directivity MEMS Microphone
DE102013221752A1 (de) 2013-10-25 2015-04-30 Kaetel Systems Gmbh Ohrhörer und verfahren zum herstellen eines ohrhörers
DE102013221754A1 (de) 2013-10-25 2015-04-30 Kaetel Systems Gmbh Kopfhörer und verfahren zum herstellen eines kopfhörers
KR102008745B1 (ko) * 2014-12-18 2019-08-09 후아웨이 테크놀러지 컴퍼니 리미티드 이동 디바이스들을 위한 서라운드 사운드 레코딩
WO2019183112A1 (fr) 2018-03-20 2019-09-26 3Dio, Llc Dispositif d'enregistrement binaural à amélioration directionnelle
GB2575492A (en) * 2018-07-12 2020-01-15 Centricam Tech Limited An ambisonic microphone apparatus

Also Published As

Publication number Publication date
TW202236863A (zh) 2022-09-16
WO2022157252A1 (fr) 2022-07-28
DE102021200555A1 (de) 2022-07-21
DE102021200555B4 (de) 2023-04-20
US20230362545A1 (en) 2023-11-09
CN117242782A (zh) 2023-12-15

Similar Documents

Publication Publication Date Title
DE69831458T2 (de) Mittelpunkt-stereowiedergabesystem für musikinstrumente
DE102013221754A1 (de) Kopfhörer und verfahren zum herstellen eines kopfhörers
DE102012017296B4 (de) Erzeugung von Mehrkanalton aus Stereo-Audiosignalen
DE102007008738A1 (de) Verfahren zur Verbesserung der räumlichen Wahrnehmung und entsprechende Hörvorrichtung
DE102013221752A1 (de) Ohrhörer und verfahren zum herstellen eines ohrhörers
DE102021203640B4 (de) Lautsprechersystem mit einer Vorrichtung und Verfahren zum Erzeugen eines ersten Ansteuersignals und eines zweiten Ansteuersignals unter Verwendung einer Linearisierung und/oder einer Bandbreiten-Erweiterung
EP4282163A2 (fr) Générateur acoustique pouvant se porter sur la tête, processeur de signaux et procédé pour faire fonctionner un générateur acoustique ou un processeur de signaux
DE102021200555B4 (de) Mikrophon und Verfahren zum Aufzeichnen eines akustischen Signals
DE102007051308B4 (de) Verfahren zum Verarbeiten eines Mehrkanalaudiosignals für ein binaurales Hörgerätesystem und entsprechendes Hörgerätesystem
DE102008059036A1 (de) Multimodale Raumklanglautsprecherbox
DE102021203632A1 (de) Lautsprecher, Signalprozessor, Verfahren zum Herstellen des Lautsprechers oder Verfahren zum Betreiben des Signalprozessors unter Verwendung einer Dual-Mode-Signalerzeugung mit zwei Schallerzeugern
EP3314915A1 (fr) Procédé de reproduction sonore dans des environnements réfléchissants, en particulier dans des salles d'écoute
DE102021205545A1 (de) Vorrichtung und Verfahren zum Erzeugen eines Ansteuersignals für einen Schallerzeuger oder zum Erzeugen eines erweiterten Mehrkanalaudiosignals unter Verwendung einer Ähnlichkeitsanalyse
DE102021200553B4 (de) Vorrichtung und Verfahren zum Ansteuern eines Schallerzeugers mit synthetischer Erzeugung des Differenzsignals
WO2023166109A1 (fr) Dispositif et procédé de réenregistrement d'un échantillon audio existant
DE4237710A1 (en) Improving head related sound characteristics for TV audio signal playback - using controlled audio signal processing for conversion into stereo audio signals
DE112019006599T5 (de) Tonausgabevorrichtung und Tonausgabeverfahren
DE19628261A1 (de) Verfahren und Vorrichtung zum elektronischen Einbetten von Richtungseinsätzen bei Zweikanal-Ton
WO2023052555A2 (fr) Système de haut-parleurs, circuit de commande pour un système de haut-parleurs comprenant un haut-parleur d'aigus et deux haut-parleurs moyens ou de graves et procédés correspondants
DE102021203639A1 (de) Lautsprechersystem, Verfahren zum Herstellen des Lautsprechersystems, Beschallungsanlage für einen Vorführbereich und Vorführbereich
AT389610B (de) Stereophonische aufnahmeeinrichtung zur verbesserung des raeumlichen hoerens
WO2023001673A2 (fr) Dispositif et procédé destinés à alimenter un espace en son
EP2373055B1 (fr) Dispositif d'écouteurs destiné à la retransmission de signaux audio spatiaux binauraux et systèmes en étant équipés
DE102018216604A1 (de) System zur Übertragung von Schall in den und aus dem Kopf eines Hörers unter Verwendung eines virtuellen akustischen Systems
DE10154932A1 (de) Verfahren zur Audiocodierung

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20230821

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)