WO2013051085A1 - Audio signal processing device, audio signal processing method and audio signal processing program - Google Patents

Audio signal processing device, audio signal processing method and audio signal processing program Download PDF

Info

Publication number
WO2013051085A1
Authority
WO
WIPO (PCT)
Prior art keywords
phase difference
audio signal
signal processing
speakers
speaker
Prior art date
Application number
PCT/JP2011/072773
Other languages
French (fr)
Japanese (ja)
Inventor
久司 大和田
一郎 菅井
知己 長谷川
輝夫 馬場
Original Assignee
Pioneer Corporation (パイオニア株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pioneer Corporation (パイオニア株式会社)
Priority to PCT/JP2011/072773
Publication of WO2013051085A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H04S7/302 Electronic adaptation of stereophonic sound system to listener position or orientation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S1/00 Two-channel systems

Definitions

  • the present invention relates to an audio signal processing apparatus that performs processing for localizing a sound image.
  • Patent Document 1 describes a technique for controlling the localization position of a sound source by changing the level difference between an L channel (left channel) and an R channel (right channel). This technology is used for auto pan (panning), which is one of the effects of DJ performance.
  • Patent Document 2 describes a technique for localizing the reproduced sound of an audio signal outside the speakers by delaying the phase characteristics of the audio signals of the left and right channels progressively as the frequency increases, without changing the frequency characteristics.
  • An object of the present invention is to provide an audio signal processing device, an audio signal processing method, and an audio signal processing program capable of appropriately localizing a low-frequency component sound image at a desired position during speaker reproduction.
  • In one aspect, the audio signal processing device that processes the audio signals supplied to two speakers includes phase difference control means for performing control to give a relative phase difference between the audio signals supplied to each of the two speakers so that an interaural phase difference within a predetermined range occurs at the listening position in a low frequency range of a predetermined value or less.
  • In the audio signal processing method executed by the audio signal processing device that processes the audio signals supplied to two speakers, control is performed to give a relative phase difference between the audio signals supplied to each of the two speakers so that an interaural phase difference within a predetermined range occurs at the listening position in a low frequency range of a predetermined value or less.
  • The audio signal processing program, executed by an audio signal processing apparatus that has a computer and processes audio signals supplied to two speakers, causes the computer to function as phase difference control means for performing control to give a relative phase difference between the audio signals supplied to each of the two speakers so that an interaural phase difference within a predetermined range occurs at the listening position in a low frequency range whose frequency is a predetermined value or less.
  • In another aspect, an audio signal processing device that performs processing on audio signals supplied to two speakers includes phase difference control means for performing control to give a relative phase difference between the audio signals supplied to each of the two speakers so that an interaural phase difference within a predetermined range occurs at the listening position in a low frequency range of a predetermined value or less.
  • the above audio signal processing apparatus performs processing on the audio signals supplied to the two speakers in order to control the localization position of the sound image.
  • the audio signal processing apparatus is preferably used as a DJ device.
  • The phase difference control means performs control to give a relative phase difference between the audio signals supplied to the two speakers so that an interaural phase difference within a predetermined range (a phase difference between the sounds the listener hears at the two ears) occurs at the listening position in a low frequency range below a predetermined value. A desired interaural phase difference can thereby be appropriately realized in the low frequency range. Therefore, the sound image of a low-frequency component can be appropriately localized at a desired position during reproduction by the speakers.
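As a concrete illustration of this control, the sketch below imparts a relative phase difference only to the low-frequency band of a stereo signal by rotating spectral bins. The even split of the phase between the two channels, the 200 Hz cutoff, and the function name are assumptions for illustration; the document allows the phase to be added to either or both channels.

```python
import numpy as np

def add_interchannel_phase(left, right, fs, phase_deg, cutoff_hz=200.0):
    """Give the two channels a relative phase difference of phase_deg
    in the band at or below cutoff_hz (split evenly: +phase/2 on the
    left channel, -phase/2 on the right -- an illustrative choice)."""
    def shift(x, deg):
        spec = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
        band = (freqs > 0) & (freqs <= cutoff_hz)   # leave DC untouched
        spec[band] *= np.exp(1j * np.deg2rad(deg))  # rotate low-band bins
        return np.fft.irfft(spec, n=len(x))
    return shift(left, +phase_deg / 2.0), shift(right, -phase_deg / 2.0)
```

For a 50 Hz sine and `phase_deg=180`, the left channel comes out 90° ahead of the input and the right channel 90° behind, giving the 180° inter-channel difference while the amplitude spectrum is unchanged.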
  • In one mode, the phase difference control means sets the phase difference based on the speaker opening angle formed by the speakers and the listening position and on the distance between the speakers and the listening position. The phase difference can thereby be appropriately controlled according to the speaker opening angle and the distance between the speakers and the listening position, and a desired interaural phase difference can be effectively realized.
  • the phase difference control means can increase the phase difference when the speaker opening angle is small compared to when the speaker opening angle is large. This is because when the speaker opening angle is small, the interaural phase difference required for appropriately localizing the sound image tends to be larger than when the speaker opening angle is large.
  • The phase difference control means can increase the phase difference when the distance is long compared to when the distance is short. This is because when the distance between the speakers and the listening position is long, the interaural phase difference necessary for appropriately localizing the sound image tends to be larger than when the distance is short.
  • the phase difference control means sets the phase difference based on the frequency of the audio signal.
  • the phase difference can be appropriately controlled according to the frequency of the audio signal, and a desired interaural phase difference can be effectively realized.
  • the phase difference control means can increase the phase difference when the frequency is low compared to when the frequency is high. This is because when the frequency of the audio signal is low, the inter-channel phase difference required to obtain the same interaural phase difference tends to be larger than when the frequency is high.
  • The phase difference control means can perform control to give the phase difference so that an interaural phase difference within the range of 25 [°] to 65 [°] is generated.
  • The low frequency range is below the lower limit of the frequency at which the localization position of a sound image can be controlled by changing the level difference between the audio signals supplied to the two speakers.
  • In other words, the “low range” used for the control is defined based on the lower limit of the frequency at which the interaural phase difference necessary for appropriately localizing the sound image is obtained by “pan” control; specifically, the frequency band below that lower limit can be used as the “low range”.
  • The listening position is located on the perpendicular bisector of the line segment connecting the two speakers.
  • the two speakers are arranged either in front of the listening position, right next to the listening position, or behind the listening position.
  • In an audio signal processing method executed by an audio signal processing apparatus that processes audio signals supplied to two speakers, control is performed to give a relative phase difference between the audio signals supplied to each of the two speakers so that an interaural phase difference within a predetermined range occurs at the listening position in a low frequency range of a predetermined value or less.
  • An audio signal processing program executed by an audio signal processing apparatus that has a computer and processes audio signals supplied to two speakers causes the computer to function as phase difference control means for performing control to give a relative phase difference between the audio signals supplied to each of the two speakers so that an interaural phase difference within a predetermined range occurs at the listening position in a low frequency range whose frequency is a predetermined value or less.
  • the above-described audio signal processing method and audio signal processing program can also appropriately localize a low-frequency component sound image at a desired position during reproduction by a speaker.
  • FIG. 1 is a diagram schematically illustrating an acoustic system 10 according to the first embodiment.
  • the acoustic system 10 is suitably used as a DJ device.
  • the acoustic system 10 mainly includes phase control units 1L and 1R and speakers 2L and 2R.
  • The phase control units 1L and 1R receive the same audio signal and perform control to add a phase to the input audio signal. Specifically, the phase control units 1L and 1R perform control to give a phase to the input audio signal so that a relative phase difference occurs between the audio signals output to the speakers 2L and 2R.
  • A desired phase difference may be realized by only one of the phase control units 1L and 1R adding a phase to the audio signal, or by both of the phase control units 1L and 1R adding phases to the audio signals.
  • the phase control units 1L and 1R are realized by an effector, a DSP, an amplifier, and the like.
  • the speakers 2L and 2R output the audio signals after being controlled by the phase controllers 1L and 1R, respectively.
  • The phase control units 1L and 1R control the phase difference between the audio signals of the speakers 2L and 2R, that is, the phase difference between the channels corresponding to the speakers 2L and 2R (hereinafter referred to as the “inter-channel phase difference”), thereby controlling the localization position of the sound image during reproduction by the speakers 2L and 2R.
  • the phase control units 1L and 1R correspond to the “audio signal processing device” in the present invention and function as “phase difference control means”.
  • FIG. 2 shows a specific example of an acoustic space to which the acoustic system 10 is applied.
  • The speakers 2L and 2R are arranged in front of the listening position (that is, the listener's position), with the speaker 2L at the front left of the listening position and the speaker 2R at the front right of the listening position.
  • The listening position is generally located on the perpendicular bisector of the line segment connecting the speakers 2L and 2R.
  • The phase control units 1L and 1R control the inter-channel phase difference so that, during reproduction by the speakers 2L and 2R, the sound image is localized at an arbitrary position between the speakers 2L and 2R. Specifically, the phase control units 1L and 1R localize the sound image near the center between the speakers 2L and 2R as indicated by the broken-line area A1 (hereinafter referred to as “center localization”), near the speaker 2L as indicated by the broken-line area A2 (hereinafter referred to as “left localization”), or near the speaker 2R as indicated by the broken-line area A3 (hereinafter referred to as “right localization”).
  • FIG. 3 is a diagram for explaining a control method according to a comparative example.
  • the control method according to the comparative example is realized by an acoustic system 10x.
  • the acoustic system 10x includes multipliers 4L and 4R instead of the phase controllers 1L and 1R.
  • the multipliers 4L and 4R control the level of the audio signal by multiplying the input audio signal by a predetermined coefficient (a value between 0 and 1). Specifically, the multipliers 4L and 4R control the level difference between the audio signals output to the speakers 2L and 2R, respectively.
  • such multipliers 4L and 4R control the level difference between the audio signals of the speakers 2L and 2R, thereby controlling the localization position of the sound image during reproduction by the speakers 2L and 2R.
  • FIG. 3B shows a specific example of coefficients used by the multipliers 4L and 4R.
  • In FIG. 3B, the horizontal axis indicates the localization position and the vertical axis indicates the coefficients used by the multipliers 4L and 4R. The localization position shown in the center corresponds to “center localization”, the one on the left side to “left localization”, and the one on the right side to “right localization”.
  • For left localization, the coefficient of the multiplier 4L is set to a value larger than the coefficient of the multiplier 4R in order to make the level of the left channel larger than the level of the right channel. Conversely, for right localization, the coefficient of the multiplier 4R is set to a value larger than the coefficient of the multiplier 4L in order to make the level of the right channel larger than the level of the left channel.
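The comparative example's level-difference control can be sketched as a pair of multiplier coefficients derived from the target localization position. The equal-power cosine/sine law used here is an assumed curve; the source only specifies coefficients between 0 and 1, with the louder channel receiving the larger value.

```python
import numpy as np

def pan_gains(position):
    """Level-difference 'pan' of the comparative example.
    position in [-1, 1]: -1 = left localization, 0 = center, +1 = right.
    Returns (left coefficient, right coefficient), both in [0, 1].
    The equal-power law is an assumption for illustration."""
    angle = (position + 1.0) * np.pi / 4.0   # maps [-1, 1] to [0, pi/2]
    return float(np.cos(angle)), float(np.sin(angle))
```

At the center the two coefficients are equal (about 0.707 each), and the summed power of the two channels stays constant across the pan range, which is why this curve is a common choice for level panning.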
  • FIG. 4 shows an example of a simulation result when the control according to the comparative example is performed.
  • The interval between the speakers 2L and 2R is set to “4.2 [m]”, and the distance between the center position of the speakers 2L and 2R and the listening position is set to “2.1 [m]”. The simulation results are shown for the case where the distance between the listener's ears is “0.25 [m]”.
  • Other simulation conditions are as follows. Note that a low frequency is used as the frequency of the input signal.
  • Input signal: sine wave
  • Input signal frequency: 50, 63, 80, 100, 130 [Hz]
  • Sound field: free sound field / point sound source
  • FIG. 4B shows an example of the interaural level difference obtained by the control according to the comparative example; the horizontal axis indicates the localization position and the vertical axis indicates the interaural level difference [dB].
  • FIG. 4C shows an example of the interaural phase difference obtained by the control according to the comparative example; the horizontal axis indicates the localization position and the vertical axis indicates the interaural phase difference [°].
  • The results obtained for each of a plurality of frequencies are shown superimposed. Note that the “localization position” shown on the horizontal axis of FIGS. 4B and 4C is not the position where the sound image is actually localized, but the target position at which the control attempts to localize the sound image. (This definition also applies to the later graphs whose horizontal axes show the “localization position”.)
  • In the first embodiment, control is performed so that the sound image of a low-frequency component is appropriately localized at a desired position during reproduction by the speakers 2L and 2R; specifically, control that gives a relative phase difference between the channels is performed.
  • That is, in the first embodiment, when realizing left localization or right localization in a low frequency range where it is difficult to localize a sound image properly by controlling the level difference, the phase control units 1L and 1R control the inter-channel phase difference so that a desired interaural phase difference occurs.
  • the phase control units 1L and 1R control the inter-channel phase difference so that the interaural phase difference within a range from 25 [°] to 65 [°] is generated.
  • It is known that the interaural time difference (generally synonymous with the interaural phase difference) and the interaural level difference (the amplitude or level difference of the interaural transfer function) serve as cues for direction perception in the horizontal plane.
  • Findings indicate that the relationship between the interaural time difference and the localization position is linear until the interaural time difference reaches “630 [μsec]”, and that the localization position no longer changes once the interaural time difference exceeds “1 [msec]”.
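Since the interaural time difference and the interaural phase difference are interchangeable for a narrow-band signal, the conversion is simply φ = 360·f·Δt. A minimal helper (names are illustrative) shows why the time-difference cue is weak at low frequencies:

```python
def interaural_phase_deg(freq_hz, itd_sec):
    """Interaural phase difference [deg] equivalent to an interaural
    time difference itd_sec [s] at frequency freq_hz [Hz]."""
    return 360.0 * freq_hz * itd_sec

# At 50 Hz, the 630-microsecond linear-range limit corresponds to only
# about 11.3 degrees of interaural phase difference:
ipd_50 = interaural_phase_deg(50.0, 630e-6)   # = 11.34
```

This is well below the 25-65° range the experiments below identify, which motivates generating the needed interaural phase difference by inter-channel phase control rather than by time-of-arrival alone.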
  • The frequency ranges in which the interaural level difference and the interaural time difference affect localization are as shown in FIG. 5A. From FIG. 5A, in order to obtain a desired localization in a low frequency range (for example, 200 [Hz] or less), an interaural level difference of “10 [dB]” or more, or a sufficient interaural time difference (in other words, interaural phase difference), is considered necessary. Therefore, in this embodiment, in order to obtain a desired localization in the low frequency range, control is performed based on the interaural time difference rather than the interaural level difference, that is, based on the interaural phase difference. Specifically, the inter-channel phase difference is controlled so that an interaural phase difference at which the sound image is properly localized in the low frequency range is obtained.
  • FIG. 5B shows the result of one example of a localization experiment using the interaural phase difference (hereinafter referred to as “Experiment 1”).
  • In Experiment 1, the inter-channel phase difference and level difference were controlled to form a state in which the sound image was localized to the left or right, and the interaural phase difference in that state was measured.
  • The experimental conditions in Experiment 1 are as follows.
    • Location: anechoic chamber
    • Signal: narrow-band noise burst
    • Signal center frequency: 50, 63, 80, 100, 125, 160, 200 [Hz]
    • Signal level: 90 [dB]
    • Subjects: 4 persons
  • FIG. 5B shows the frequency [Hz] on the horizontal axis and the interaural phase difference [°] on the vertical axis; that is, it shows the interaural phase difference at each frequency in the state where the sound image is localized to the left or right. From this, it is understood that localization of the sound image is obtained when an interaural phase difference of about 25 to 65 [°] occurs.
  • FIG. 5C shows the result of another example (hereinafter referred to as “Experiment 2”) of a localization experiment based on an interaural phase difference.
  • In Experiment 2, signals to which various interaural phase differences were added were auditioned through headphones, and for each interaural phase difference the subjects answered whether or not localization was obtained.
  • The experimental conditions in Experiment 2 are as follows.
    • Signal: narrow-band noise burst
    • Signal center frequency: 50, 63, 80, 100, 125, 160, 200 [Hz]
    • Subjects: 3 persons
  • FIG. 5C shows the interaural phase difference [°] on the horizontal axis and the probability [%] that localization is obtained on the vertical axis. The result shown in FIG. 5C is obtained by accumulating the response results at the various center frequencies. From FIG. 5(c), it can be seen that the probability that localization is obtained is 75 [%] or more for interaural phase differences of about 25 to 65 [°]. Therefore, it is considered that localization of a sound image is obtained when an interaural phase difference of about 25 to 65 [°] occurs.
  • In view of the results of Experiment 1 and Experiment 2, in the first embodiment, control is performed so that an interaural phase difference of about 25 to 65 [°] occurs in the low frequency range.
  • Specifically, when realizing left localization or right localization in the low frequency range, the phase control units 1L and 1R control the inter-channel phase difference so that an interaural phase difference of about 25 to 65 [°] in absolute value occurs. For example, by obtaining through experiments or simulations the interaural phase differences generated by various inter-channel phase differences, the inter-channel phase difference that produces an interaural phase difference of about 25 to 65 [°] in absolute value can be found and stored.
  • In that case, when realizing left localization or right localization, the phase control units 1L and 1R can perform control to add a phase corresponding to the stored inter-channel phase difference to the audio signal supplied to both or one of the speakers 2L and 2R.
  • FIG. 6 shows simulation conditions used in the control according to the first embodiment.
  • It is assumed that the distance between the speakers 2L and 2R is “4.2 [m]”, that the distance between the center position of the speakers 2L and 2R and the listening position is “2.1 [m]”, and that the distance between the listener's ears is “0.25 [m]”.
  • Other simulation conditions are as follows. These simulation conditions are the same as the simulation conditions used in the control according to the above-described comparative example.
  • Input signal: sine wave
  • Input signal frequency: 50, 63, 80, 100, 130 [Hz]
  • Sound field: free sound field / point sound source
  • FIG. 6B shows a specific example of the inter-channel phase difference given at each localization position in the control according to the first embodiment. In FIG. 6B, the horizontal axis indicates the localization position and the vertical axis indicates the inter-channel phase difference [°].
  • The inter-channel phase difference on the vertical axis takes a positive value when the phase of the left channel leads that of the right channel, and a negative value when the phase of the right channel leads that of the left channel.
  • The inter-channel phase difference is set so that it decreases monotonically as the localization position moves from left to right.
  • For left localization, the inter-channel phase difference is set to about “150 [°]”; that is, the phase of the left channel is set to lead the phase of the right channel by about “150 [°]”.
  • For center localization, the inter-channel phase difference is set to “0 [°]”.
  • For right localization, the inter-channel phase difference is set to about “−150 [°]”; that is, the phase of the right channel is set to lead the phase of the left channel by about “150 [°]”.
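The mapping just described (about +150° at left localization, 0° at center, about −150° at right, decreasing monotonically) can be sketched as a linear interpolation. The linearity between the stated endpoints is an assumption; the figure only guarantees monotonicity through those three anchor values, and the function name is illustrative.

```python
import numpy as np

def interchannel_phase_deg(position):
    """Inter-channel phase difference [deg] for a localization position
    in [-1, 1] (-1 = left, 0 = center, +1 = right). Positive values mean
    the left channel's phase leads the right channel's."""
    return float(np.interp(position, [-1.0, 1.0], [150.0, -150.0]))
```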
  • FIG. 7A shows an example of a simulation result when the control according to the above-described comparative example is performed (that is, when “pan” is performed).
  • The graph in FIG. 7A is the same as that shown in FIG. 4C.
  • FIG. 7B shows a simulation result example when the control according to the first embodiment is performed.
  • the horizontal axis indicates the localization position
  • the vertical axis indicates the interaural phase difference [°].
  • The results obtained for each of a plurality of frequencies are shown superimposed. The interaural phase difference is measured, for example, by installing two microphones at the positions corresponding to the listener's two ears. The control according to the first embodiment and the control according to the comparative example are performed under the same simulation conditions.
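A free-field, point-source simulation of this kind might be sketched as follows: two speakers and two "ear" microphones are placed per the stated conditions (4.2 m speaker spacing, 2.1 m to the listening position, 0.25 m between the ears), the left channel is given the inter-channel phase difference, and the interaural phase difference is read off from the complex pressures at the two ear positions. The function name, the speed-of-sound value, and the 1/r point-source model are assumptions for illustration.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # [m/s], assumed value

def simulate_ipd(freq_hz, interchannel_phase_deg,
                 spk_spacing=4.2, spk_dist=2.1, ear_gap=0.25):
    """Free-field point-source estimate of the interaural phase
    difference [deg] when the left channel leads the right channel by
    interchannel_phase_deg. Listener at the origin, speakers spk_dist
    ahead, symmetric about the perpendicular bisector."""
    spk_l = np.array([-spk_spacing / 2.0, spk_dist])
    spk_r = np.array([spk_spacing / 2.0, spk_dist])
    k = 2.0 * np.pi * freq_hz / SPEED_OF_SOUND   # wavenumber
    phi = np.deg2rad(interchannel_phase_deg)
    phases = []
    for ear_x in (-ear_gap / 2.0, ear_gap / 2.0):
        ear = np.array([ear_x, 0.0])
        dl = np.linalg.norm(spk_l - ear)
        dr = np.linalg.norm(spk_r - ear)
        # complex pressure: 1/r spreading and propagation phase, with
        # the extra channel phase applied to the left speaker only
        p = np.exp(1j * (phi - k * dl)) / dl + np.exp(-1j * k * dr) / dr
        phases.append(np.angle(p))
    ipd = np.rad2deg(phases[0] - phases[1])      # left ear minus right ear
    return (ipd + 180.0) % 360.0 - 180.0         # wrap to [-180, 180)
```

By symmetry, a zero inter-channel phase difference yields a zero interaural phase difference, and flipping the sign of the inter-channel phase difference flips the sign of the resulting interaural phase difference.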
  • In this way, a desired interaural phase difference can be appropriately realized in the low frequency range. Therefore, according to the first embodiment, it is possible to appropriately localize the sound image of a low-frequency component at a desired position during reproduction by the speakers 2L and 2R.
  • In the second embodiment, the phase control units 1L and 1R increase the inter-channel phase difference when the speaker opening angle is small compared to when the speaker opening angle is large, and increase the inter-channel phase difference when the speaker distance is long compared to when the speaker distance is short.
  • FIG. 8 is a diagram for explaining the definition of the speaker opening angle and the speaker distance.
  • The listening position is located on the perpendicular bisector 72 of the line segment 71 connecting the speakers 2L and 2R.
  • The speaker opening angle θ is defined as the angle formed by the line segment 73L connecting the speaker 2L and the listening position and the perpendicular bisector 72, and equally as the angle formed by the line segment 73R connecting the speaker 2R and the listening position and the perpendicular bisector 72. These two angles are equal because the listening position is located on the perpendicular bisector 72 of the line segment 71.
  • The speaker distance L is defined as the length of the line segment 73L connecting the speaker 2L and the listening position, and equally as the length of the line segment 73R connecting the speaker 2R and the listening position. These two lengths are likewise equal because the listening position is located on the perpendicular bisector 72 of the line segment 71.
  • The positions of the speakers 2L and 2R are uniquely determined by the listening position, the speaker opening angle θ, and the speaker distance L. In the following, the positions of the speakers 2L and 2R are defined based on the speaker opening angle θ and the speaker distance L, and words such as “speaker arrangement” and “speaker position” are used for them.
  • FIG. 9 shows an example of a suitable speaker arrangement. Specifically, the speaker arrangement in which an interaural phase difference of about 25 to 65 [°] is obtained by controlling the inter-channel phase difference is illustrated.
  • FIGS. 9(a) and 9(b) express the speaker arrangement by showing the speaker opening angle θ [°] and the speaker distance L [m] in polar coordinates.
  • The speaker opening angle θ is represented by the azimuth from a vertical line passing through the origin (the listening position), and the speaker distance L is represented by the distance from the origin.
  • Only the speaker opening angle θ and the speaker distance L for the speaker 2L are shown; that is, only the arrangement of the speaker 2L is illustrated.
  • FIG. 9A shows an example of speaker arrangements in which an interaural phase difference of about 25 to 65 [°] is obtained by controlling the inter-channel phase difference when the frequency is set to “50 [Hz]”, and FIG. 9B shows such an example when the frequency is set to “130 [Hz]”.
  • FIGS. 9(a) and 9(b) show that an interaural phase difference of about 25 to 65 [°] can be obtained if the speaker 2L is arranged within the frame drawn with a bold line. Note that the results shown in FIGS. 9A and 9B are obtained, for example, by simulation as described above.
  • When the speaker opening angle θ is large, the speaker distance L at which a desired interaural phase difference can be obtained is longer than when the speaker opening angle θ is small; conversely, when the speaker opening angle θ is small, that distance is shorter than when the speaker opening angle θ is large. That is, when the speaker opening angle θ is large, a desired interaural phase difference can be obtained even if the speaker distance L is made fairly long (for example, up to about 27 [m]), whereas when the speaker opening angle θ is small, the speaker distance L must be kept fairly short to obtain the desired interaural phase difference (for example, about 7 [m] at most).
  • FIG. 9 shows only an arrangement example of the speaker 2L, it goes without saying that the speaker 2R is the same as this.
  • FIG. 10 illustrates the relationship between the speaker arrangement and the inter-channel phase difference. Specifically, for each speaker position, the inter-channel phase difference that yields an interaural phase difference of about 25 to 65 [°] is illustrated.
  • FIGS. 10(a) and 10(b) express the speaker arrangement by showing the speaker opening angle θ [°] and the speaker distance L [m] in polar coordinates (the definitions are the same as in FIG. 9).
  • The relationship between the speaker arrangement and the inter-channel phase difference is illustrated only for the speaker 2L.
  • FIG. 10A illustrates an inter-channel phase difference [°] necessary to obtain an interaural phase difference of “25 [°]” at each speaker position.
  • FIG. 10B illustrates an inter-channel phase difference [°] necessary to obtain an interaural phase difference of “65 [°]” at each speaker position.
  • the results shown in FIGS. 10A and 10B are obtained when the frequency is set to “100 [Hz]”. Such a result is obtained, for example, by a simulation as described above.
  • From FIGS. 10A and 10B, it can be seen that the inter-channel phase difference necessary to obtain the desired interaural phase difference (25 [°] or 65 [°]) depends on the speaker opening angle θ and the speaker distance L. Specifically, when the speaker opening angle θ is large, the inter-channel phase difference necessary for obtaining a desired interaural phase difference is smaller than when the speaker opening angle θ is small. In other words, when the speaker opening angle θ is small, the necessary inter-channel phase difference is larger than when the speaker opening angle θ is large (that is, closer to “180 [°]”).
  • Similarly, when the speaker distance L is short, the inter-channel phase difference required to obtain the desired interaural phase difference is smaller than when the speaker distance L is long; when the speaker distance L is long, the necessary inter-channel phase difference is larger than when the speaker distance L is short (that is, closer to “180 [°]”).
  • FIG. 10 shows only the relationship between the speaker arrangement and the phase difference between channels for the speaker 2L, but it goes without saying that the speaker 2R is the same as this.
  • The phase control units 1L and 1R control the inter-channel phase difference based on the relationship between the speaker arrangement and the inter-channel phase difference shown in FIG. 10. That is, the phase control units 1L and 1R perform control to add a phase to the audio signal supplied to both or one of the speakers 2L and 2R so that a desired interaural phase difference is obtained at the currently set speaker opening angle θ and speaker distance L.
  • For example, an arithmetic expression giving the inter-channel phase difference that yields a desired interaural phase difference at each speaker position is created by experiment or simulation, and the phase control units 1L and 1R can perform control by obtaining from this expression the inter-channel phase difference corresponding to the current speaker position.
  • Alternatively, the inter-channel phase differences at which a desired interaural phase difference is obtained are determined for each speaker position by experiment or simulation and stored as table data, and the phase control units 1L and 1R can perform control by reading from the table data the inter-channel phase difference corresponding to the current speaker position. The current speaker position can be acquired, for example, by input from the user.
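The table-data variant might be sketched as a nearest-neighbour search over stored (opening angle, distance) entries. The table values below are placeholders, not data from this document, and the simple squared-difference metric (which mixes degrees and metres) is only an illustrative choice; they merely follow the stated tendencies (smaller opening angle or longer distance needs a larger inter-channel phase difference).

```python
def lookup_phase(table, opening_angle_deg, distance_m):
    """Nearest-neighbour lookup of a stored inter-channel phase
    difference [deg]. Keys are (opening angle [deg], distance [m])."""
    key = min(table, key=lambda k: (k[0] - opening_angle_deg) ** 2
                                   + (k[1] - distance_m) ** 2)
    return table[key]

# Placeholder entries only -- real values would come from the
# experiments or simulations described above.
EXAMPLE_TABLE = {
    (15.0, 2.0): 160.0,  # small opening angle -> larger phase difference
    (45.0, 2.0): 120.0,
    (45.0, 4.0): 140.0,  # longer distance -> larger phase difference
}
```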
  • the inter-channel phase difference can be appropriately controlled according to the speaker opening angle and the speaker distance, and the desired interaural phase difference can be effectively realized. Therefore, according to the second embodiment, the sound image of the low frequency component can be localized at a desired position more reliably during reproduction by the speakers 2L and 2R.
  • the speaker opening angle ⁇ and the speaker distance L are not limited to be defined as shown in FIG.
  • the speaker opening angle can be defined as an angle formed by a line segment 73L connecting the speaker 2L and the listening position and a line segment 73R connecting the speaker 2R and the listening position.
  • the speaker opening angle in this example is twice the above-described speaker opening angle ⁇ .
  • the speaker distance can be defined as the distance on the perpendicular bisector 72 from the speakers 2L and 2R to the listening position.
  • the speaker distance in this example is “L ⁇ cos ⁇ ” when the speaker opening angle ⁇ and the speaker distance L are used.
  • the inter-channel phase difference may be controlled according to the frequency of the audio signal.
  • the inter-channel phase difference can be increased when the frequency is low compared to when the frequency is high (in other words, when the frequency is high, the inter-channel phase difference is reduced compared to when it is low). This is because the inter-channel phase difference required to obtain the same interaural phase difference changes according to the frequency.
  • the inter-channel phase difference necessary for obtaining the same interaural phase difference tends to be larger when the frequency is low than when the frequency is high. Therefore, in another example, when the frequency is low, the inter-channel phase difference is increased compared to when the frequency is high.
  • the present invention can be used for DJ equipment, for example.
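The geometry definitions and the table-based control described in the points above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the table values, the function names, and the use of linear interpolation are assumptions made up for the example; a real table would be populated by the experiments or simulations the text describes.

```python
import bisect
import math

# Hypothetical table: speaker opening angle theta [deg] -> inter-channel
# phase difference [deg] that yields the desired interaural phase
# difference. The numbers are placeholders, not values from the patent.
ANGLE_TABLE = [(15.0, 180.0), (30.0, 120.0), (45.0, 80.0), (60.0, 55.0)]

def interchannel_phase(theta_deg):
    """Look up the inter-channel phase difference for the current speaker
    opening angle, linearly interpolating between table entries and
    clamping outside the table range."""
    angles = [a for a, _ in ANGLE_TABLE]
    if theta_deg <= angles[0]:
        return ANGLE_TABLE[0][1]
    if theta_deg >= angles[-1]:
        return ANGLE_TABLE[-1][1]
    i = bisect.bisect_left(angles, theta_deg)
    (a0, p0), (a1, p1) = ANGLE_TABLE[i - 1], ANGLE_TABLE[i]
    t = (theta_deg - a0) / (a1 - a0)
    return p0 + t * (p1 - p0)

def full_opening_angle(theta_deg):
    """Alternative definition from the text: the angle between the line
    segments 73L and 73R is twice the opening angle theta."""
    return 2.0 * theta_deg

def distance_on_bisector(theta_deg, L):
    """Alternative definition from the text: the distance along the
    perpendicular bisector 72 is L * cos(theta)."""
    return L * math.cos(math.radians(theta_deg))
```

For instance, `interchannel_phase(37.5)` returns 100.0 with the placeholder table, and `distance_on_bisector(30.0, 2.0)` returns about 1.73.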

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)

Abstract

This audio signal processing device processes the audio signals supplied to two speakers in order to control the localization position of a sound image. Phase difference control means performs control so as to impart a relative phase difference between the audio signals supplied to the two speakers, so that an interaural phase difference within a predetermined range is produced at the listening position in a low frequency range at or below a predetermined value. As a result, the desired interaural phase difference can be suitably achieved in the low range. Accordingly, during playback through the speakers, the sound image of the low-frequency component can be suitably localized at a desired position.

Description

Audio signal processing device, audio signal processing method, and audio signal processing program

The present invention relates to an audio signal processing device that performs processing for localizing a sound image.

This type of technology is described in, for example, Patent Documents 1 and 2. Patent Document 1 describes a technique for controlling the localization position of a sound source by changing the level difference between the L channel (left channel) and the R channel (right channel). This technique is used for auto pan (panning), one of the effects in DJ performance. Patent Document 2, on the other hand, describes a technique for localizing the reproduced sound of an audio signal outside the speakers by progressively delaying the phase characteristics of the left- and right-channel audio signals as the frequency increases, without changing their frequency characteristics.

Patent Document 1: Japanese Patent No. 2976573
Patent Document 2: JP 2000-228799 A
However, with the technique described in Patent Document 1, which controls the localization position of a sound image by changing the level difference between the left and right channels (hereinafter referred to as "pan" where appropriate), the localization tended not to change at low frequencies even when the localization position was controlled to swing left and right. One possible cause is that, with pan, a sufficient phase difference between the listener's two ears (hereinafter, the "interaural phase difference") was not obtained in the low range by changing the level difference between the left and right channels. Even with the technique described in Patent Document 2, it was difficult to appropriately control the localization position of a sound image in the low range.
The above is one example of the problems to be solved by the present invention. An object of the present invention is to provide an audio signal processing device, an audio signal processing method, and an audio signal processing program capable of appropriately localizing the sound image of a low-frequency component at a desired position during speaker reproduction.
According to the invention of claim 1, an audio signal processing device that processes audio signals supplied to two speakers comprises phase difference control means for performing control to impart a relative phase difference between the audio signals supplied to the two speakers, so that an interaural phase difference within a predetermined range is produced at the listening position in a low frequency range at or below a predetermined value.

According to the invention of claim 11, an audio signal processing method executed by an audio signal processing device that processes audio signals supplied to two speakers comprises a phase difference control step of performing control to impart a relative phase difference between the audio signals supplied to the two speakers, so that an interaural phase difference within a predetermined range is produced at the listening position in a low frequency range at or below a predetermined value.

According to the invention of claim 12, an audio signal processing program executed by an audio signal processing device that has a computer and processes audio signals supplied to two speakers causes the computer to function as phase difference control means for performing control to impart a relative phase difference between the audio signals supplied to the two speakers, so that an interaural phase difference within a predetermined range is produced at the listening position in a low frequency range at or below a predetermined value.
FIG. 1 schematically shows an acoustic system according to a first embodiment.
FIG. 2 shows a specific example of an acoustic space to which the acoustic system is applied.
FIG. 3 is a diagram for explaining a control method according to a comparative example.
FIG. 4 shows an example of simulation results when the control according to the comparative example is performed.
FIG. 5 is a diagram for supplementary explanation of the control method according to the first embodiment.
FIG. 6 shows the simulation conditions used for the control according to the first embodiment.
FIG. 7 shows an example of simulation results when the control according to the first embodiment is performed.
FIG. 8 is a diagram for explaining the definitions of the speaker opening angle and the speaker distance.
FIG. 9 shows an example of a speaker arrangement in which a desired interaural phase difference is obtained.
FIG. 10 shows an example of the relationship between the speaker arrangement and the inter-channel phase difference.
In one aspect of the present invention, an audio signal processing device that processes audio signals supplied to two speakers comprises phase difference control means for performing control to impart a relative phase difference between the audio signals supplied to the two speakers, so that an interaural phase difference within a predetermined range is produced at the listening position in a low frequency range at or below a predetermined value.

The above audio signal processing device processes the audio signals supplied to the two speakers in order to control the localization position of a sound image. For example, the audio signal processing device is suitably used as DJ equipment. The phase difference control means performs control to impart a relative phase difference between the audio signals supplied to the two speakers so that, in a low frequency range at or below a predetermined value, an interaural phase difference within a predetermined range (a phase difference between the sounds the listener hears at each ear) is produced at the listening position. In this way, the desired interaural phase difference can be appropriately realized in the low range. Consequently, the sound image of the low-frequency component can be appropriately localized at the desired position during reproduction by the speakers.

In one mode of the above audio signal processing device, the phase difference control means sets the phase difference based on the speaker opening angle formed by the speakers and the listening position, and on the distance between the speakers and the listening position. In this way, the phase difference can be appropriately controlled according to the speaker opening angle and the distance between the speakers and the listening position, and the desired interaural phase difference can be effectively realized.

Preferably, the phase difference control means can make the phase difference larger when the speaker opening angle is small than when it is large. This is because, when the speaker opening angle is small, the interaural phase difference required to localize the sound image appropriately tends to be larger than when the speaker opening angle is large.

Also preferably, the phase difference control means can make the phase difference larger when the distance is long than when it is short. This is because, when the distance between the speakers and the listening position is long, the interaural phase difference required to localize the sound image appropriately tends to be larger than when the distance is short.

In another mode of the above audio signal processing device, the phase difference control means sets the phase difference based on the frequency of the audio signal. In this way, the phase difference can be appropriately controlled according to the frequency of the audio signal, and the desired interaural phase difference can be effectively realized.

Preferably, the phase difference control means can make the phase difference larger when the frequency is low than when it is high. This is because, when the frequency of the audio signal is low, the inter-channel phase difference required to obtain the same interaural phase difference tends to be larger than when the frequency is high.

In the above audio signal processing device, preferably, the phase difference control means can perform control to impart the phase difference so that an interaural phase difference within the range from 25° to 65° is produced.

Also, in the above audio signal processing device, preferably, the low range is defined based on the lower limit of the frequencies at which the localization position of the sound image can be controlled by changing the level difference between the audio signals supplied to the two speakers. That is, the "low range" used for the control is defined based on the lower limit of the frequencies at which control by "pan" can obtain the interaural phase difference needed to localize the sound image appropriately. Specifically, the frequency band below that lower limit can be used as the "low range".
In a preferred example, the listening position lies on the perpendicular bisector of the line segment connecting the two speakers.

In another preferred example, the two speakers are arranged in front of the listening position, directly beside it, or behind it.

In another aspect of the present invention, an audio signal processing method executed by an audio signal processing device that processes audio signals supplied to two speakers comprises a phase difference control step of performing control to impart a relative phase difference between the audio signals supplied to the two speakers, so that an interaural phase difference within a predetermined range is produced at the listening position in a low frequency range at or below a predetermined value.

In still another aspect of the present invention, an audio signal processing program executed by an audio signal processing device that has a computer and processes audio signals supplied to two speakers causes the computer to function as phase difference control means for performing control to impart a relative phase difference between the audio signals supplied to the two speakers, so that an interaural phase difference within a predetermined range is produced at the listening position in a low frequency range at or below a predetermined value.

With the above audio signal processing method and audio signal processing program as well, the sound image of the low-frequency component can be appropriately localized at the desired position during reproduction by the speakers.

Hereinafter, preferred embodiments of the present invention will be described with reference to the drawings.
[First embodiment]
First, the first embodiment of the present invention will be described.
(Device configuration)
FIG. 1 schematically shows an acoustic system 10 according to the first embodiment. The acoustic system 10 is suitably used as DJ equipment.
As shown in FIG. 1, the acoustic system 10 mainly comprises phase control units 1L and 1R and speakers 2L and 2R. The phase control units 1L and 1R receive the same audio signal and perform control to impart a phase to the input audio signal. Specifically, the phase control units 1L and 1R impart a phase to the input audio signal so that a relative phase difference arises between the audio signals output to the speakers 2L and 2R. In this case, the desired phase difference may be realized by only one of the phase control units 1L and 1R imparting a phase to its audio signal, or by both of them doing so. The phase control units 1L and 1R are realized by, for example, an effector, a DSP, or an amplifier. The speakers 2L and 2R output the audio signals after control by the phase control units 1L and 1R, respectively.

In the acoustic system 10, the phase control units 1L and 1R control the phase difference between the audio signals for the speakers 2L and 2R, that is, the phase difference between the channels corresponding to the speakers 2L and 2R (hereinafter, the "inter-channel phase difference"), thereby controlling the localization position of the sound image during reproduction by the speakers 2L and 2R. The phase control units 1L and 1R correspond to the "audio signal processing device" of the present invention and function as the "phase difference control means".

FIG. 2 shows a specific example of an acoustic space to which the acoustic system 10 is applied. As shown in FIG. 2, the speakers 2L and 2R are arranged in front of the listening position (that is, the listener's position): the speaker 2L at the front left and the speaker 2R at the front right. In this case, the listening position lies approximately on the perpendicular bisector of the line segment connecting the speakers 2L and 2R.

As indicated by the broken-line areas A1 to A3 in FIG. 2, in the acoustic system 10 the phase control units 1L and 1R control the inter-channel phase difference so that, during reproduction by the speakers 2L and 2R, the sound image is localized at an arbitrary position between them. Specifically, the phase control units 1L and 1R localize the sound image near the center of the speakers 2L and 2R (hereinafter, "center localization"; broken-line area A1), near the speaker 2L ("left localization"; broken-line area A2), or near the speaker 2R ("right localization"; broken-line area A3).
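As a sketch of what the phase control units do, the following generates the two channel signals for a single sine component with a relative inter-channel phase difference applied entirely to one channel (as the text notes, it could equally be split between both channels). The sample rate and function name are assumptions made for this illustration, not values from the patent.

```python
import math

FS = 8000  # sample rate [Hz], chosen arbitrarily for this sketch

def channel_signals(freq_hz, interchannel_deg, n_samples):
    """Return (left, right) sample lists for a sine of freq_hz, with the
    right channel lagging the left by interchannel_deg degrees."""
    dphi = math.radians(interchannel_deg)
    left = [math.sin(2 * math.pi * freq_hz * n / FS)
            for n in range(n_samples)]
    right = [math.sin(2 * math.pi * freq_hz * n / FS - dphi)
             for n in range(n_samples)]
    return left, right
```

At 100 Hz and FS = 8000, a 90° inter-channel difference corresponds to a 20-sample lag, so `right[n + 20]` equals `left[n]`.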
(Problems of the comparative example)
Here, control based on "pan" as described in Patent Document 1 above is taken as a comparative example, and its problems are described.
FIG. 3 is a diagram for explaining the control method according to the comparative example. As shown in FIG. 3(a), the control method according to the comparative example is realized by an acoustic system 10x. The acoustic system 10x has multipliers 4L and 4R instead of the phase control units 1L and 1R. The multipliers 4L and 4R control the level of the audio signal by multiplying the input audio signal by a predetermined coefficient (a value from 0 to 1). Specifically, the multipliers 4L and 4R control the level difference between the audio signals output to the speakers 2L and 2R. In the acoustic system 10x, the multipliers 4L and 4R control the level difference between the audio signals of the speakers 2L and 2R, thereby controlling the localization position of the sound image during reproduction by the speakers 2L and 2R.

FIG. 3(b) shows a specific example of the coefficients used by the multipliers 4L and 4R. In FIG. 3(b), the horizontal axis indicates the localization position and the vertical axis indicates the coefficients used by the multipliers 4L and 4R. On the horizontal axis, the center corresponds to "center localization", the leftmost position to "left localization", and the rightmost position to "right localization" (this definition also applies to the later graphs whose horizontal axis shows the "localization position").

As shown in FIG. 3(b), in the control according to the comparative example, when the sound image is to be localized to the left, the coefficient of the multiplier 4L is set larger than that of the multiplier 4R so that the level of the left channel exceeds that of the right channel. Conversely, when the sound image is to be localized to the right, the coefficient of the multiplier 4R is set larger than that of the multiplier 4L so that the level of the right channel exceeds that of the left channel.
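One plausible reading of the coefficient curves in FIG. 3(b) is a simple linear pan law in which the near-side coefficient stays at 1 while the far side falls off; the exact curves used in the patent are not reproduced here, so the law below is an assumption for illustration only.

```python
def pan_coefficients(pos):
    """Coefficients for the multipliers 4L and 4R.
    pos in [-1.0, 1.0]: -1 = left localization, 0 = center localization,
    +1 = right localization."""
    left = min(1.0, 1.0 - pos)    # larger coefficient on the louder side
    right = min(1.0, 1.0 + pos)
    return left, right
```

For example, `pan_coefficients(0.0)` gives (1.0, 1.0), while `pan_coefficients(-1.0)` gives (1.0, 0.0), i.e. only the left speaker plays.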
FIG. 4 shows an example of simulation results when the control according to the comparative example is performed. Here, as shown in FIG. 4(a), the spacing between the speakers 2L and 2R is set to 4.2 [m], the distance between the midpoint of the speakers 2L and 2R and the listening position is set to 2.1 [m], and the listener's interaural distance is 0.25 [m]. The other simulation conditions are as follows. Note that low frequencies are used for the input signal.
・Input signal: sine wave
・Input signal frequencies: 50, 63, 80, 100, 130 [Hz]
・Free sound field
・Point sound source
Under these simulation conditions, the pan control according to the comparative example is performed, sound is produced from the speakers 2L and 2R, and the level difference between the sounds the listener hears at each ear (hereinafter, the "interaural level difference") and the phase difference between those sounds (the interaural phase difference) are measured. For example, the interaural level difference and the interaural phase difference can be measured by placing two microphones at positions corresponding to the listener's two ears.
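The two-microphone measurement described above can be sketched as a single-bin DFT at the test frequency: the amplitude ratio gives the interaural level difference and the phase of the complex ratio gives the interaural phase difference. This is a generic estimator, not the patent's measurement code; the sample rate is an assumption, and the sketch assumes the analysis window holds an integer number of periods of the test tone.

```python
import cmath
import math

FS = 8000  # sample rate [Hz], assumed for this sketch

def bin_phasor(signal, freq_hz):
    """Single-bin DFT of the signal at freq_hz."""
    w = -2j * math.pi * freq_hz / FS
    return sum(x * cmath.exp(w * k) for k, x in enumerate(signal))

def interaural_differences(left_mic, right_mic, freq_hz):
    """Return (level difference [dB], phase difference [deg]) of the left
    microphone signal relative to the right one at the test frequency."""
    pl = bin_phasor(left_mic, freq_hz)
    pr = bin_phasor(right_mic, freq_hz)
    level_db = 20.0 * math.log10(abs(pl) / abs(pr))
    phase_deg = math.degrees(cmath.phase(pl / pr))
    return level_db, phase_deg
```

For a 100 Hz tone in which the right-microphone signal has half the amplitude and lags by 40°, the estimator returns about (6.02 dB, 40°).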
FIG. 4(b) shows an example of the interaural level difference obtained by the control according to the comparative example; the horizontal axis indicates the localization position and the vertical axis the interaural level difference [dB]. FIG. 4(c) shows an example of the interaural phase difference obtained by the same control; the horizontal axis indicates the localization position and the vertical axis the interaural phase difference [°]. In FIGS. 4(b) and 4(c), the results obtained at the several frequencies are overlaid. Note that the "localization position" on the horizontal axes of FIGS. 4(b) and 4(c) is not the position where the sound image is actually localized, but the position at which the control is trying to localize it (the target position); this definition also applies to the later graphs whose horizontal axis shows the "localization position".
FIGS. 4(b) and 4(c) show that, in the comparative example, the interaural level difference and the interaural phase difference do not change appropriately with the localization position at low frequencies. That is, in the comparative example, a sufficient interaural level difference and interaural phase difference are not obtained even though the level difference is being controlled to localize the sound image to the left or right. Consequently, with the control according to the comparative example, the localization hardly changes in the low range even when the localization position is swung left and right.
From the simulation results of the comparative example above, it can be said that a sufficient interaural phase difference is not obtained in the frequency band at or below 130 [Hz]; that is, with pan, it is difficult to appropriately control the localization position of the sound image in that band. In the following, therefore, the control according to this embodiment is illustrated using the frequency band at or below 130 [Hz] as the "low range". The "low range" used for the control need only be defined based on the lower limit of the frequencies at which the localization position of the sound image can be controlled by changing the level difference between the speakers 2L and 2R, and is not limited to being defined by 130 [Hz]; 130 [Hz] is one example of a frequency below that lower limit.
(Control method according to the first embodiment)
Next, the control method according to the first embodiment will be described specifically. In the first embodiment, in order to solve the problems of the comparative example above, control is performed so that the sound image of the low-frequency component is appropriately localized at the desired position during reproduction by the speakers 2L and 2R. Specifically, in the low range, control is performed to impart a relative phase difference between the audio signals supplied to the two speakers 2L and 2R so that an interaural phase difference within a predetermined range is produced at the listening position. That is, in the first embodiment, when realizing left or right localization in the low range, where it is difficult to localize the sound image appropriately by controlling the level difference, the phase control units 1L and 1R control the inter-channel phase difference so that the desired interaural phase difference is produced. In this case, the phase control units 1L and 1R preferably control the inter-channel phase difference so that an interaural phase difference within the range from 25 [°] to 65 [°] is produced.
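As a sketch of the first embodiment's control policy, the following assigns an inter-channel phase difference only to components in the low range, leaving higher components to ordinary level-difference pan. The cutoff value, the data layout, and the function name are assumptions for illustration; in the patent, the imparted phase is the one that makes the resulting interaural phase difference land in the 25° to 65° range.

```python
LOW_RANGE_LIMIT_HZ = 130.0  # example cutoff from the comparative-example discussion

def phase_plan(components, interchannel_deg):
    """components: list of (freq_hz, amplitude) pairs.
    Returns (freq_hz, amplitude, phase_deg) triples, where phase_deg is
    the inter-channel phase difference to impart: applied in the low
    range, zero above it (where pan handles localization)."""
    return [(f, a, interchannel_deg if f <= LOW_RANGE_LIMIT_HZ else 0.0)
            for f, a in components]
```

For example, with components at 80 Hz and 400 Hz, only the 80 Hz component receives the inter-channel phase difference.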
Here, with reference to FIG. 5, a supplementary explanation of the control method according to the first embodiment is given. First, human direction perception is described with reference to FIG. 5(a). In general, the interaural time difference (roughly synonymous with the interaural phase difference) and the interaural level difference (the amplitude or level difference of the interaural transfer function) are known to be the cues for direction perception in the horizontal plane. Regarding the interaural time difference, experiments with impulses have shown that the relationship between the time difference and the localization position is linear up to an interaural time difference of 630 [μsec], and that the localization position no longer changes at interaural time differences of 1 [msec] or more. Experiments with pure tones have shown that sound image localization is obtained up to 800 [Hz], for which 630 [μsec] corresponds to a half period, that the sense of localization decreases rapidly above 800 [Hz], and that no sense of localization at all is obtained above about 1.5 to 1.6 [kHz]. Regarding the interaural level difference, it has been found that the sound image is localized completely to one side at an interaural level difference of about 10 [dB] to 15 [dB], and that the interaural level difference at which localization is obtained depends on frequency, reaching its minimum at 2 [kHz].
Summarizing these findings, the frequency ranges in which the interaural level difference and the interaural time difference affect localization are as shown in FIG. 5(a). From FIG. 5(a), in order to obtain a desired localization in the low frequency range (for example, 200 [Hz] or less), an interaural level difference of 10 [dB] or more, or a sufficient interaural time difference (in other words, interaural phase difference), is considered necessary. Therefore, in this embodiment, in order to obtain a desired localization in the low frequency range, control is performed based on the interaural time difference of these two cues, that is, based on the interaural phase difference. Specifically, the inter-channel phase difference is controlled so that an interaural phase difference at which the sound image is properly localized in the low frequency range is obtained.
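The relationship between interaural time difference and interaural phase difference used in this reasoning can be checked with a one-line conversion. The formula 360·f·ITD is the standard one; the function name is illustrative.

```python
def itd_to_phase_deg(itd_sec, freq_hz):
    """Express an interaural time difference (seconds) as an interaural
    phase difference (degrees) at a given frequency."""
    return 360.0 * freq_hz * itd_sec

# The text notes that 630 [usec] corresponds to roughly a half cycle
# (180 degrees) at 800 [Hz]:
half_cycle = itd_to_phase_deg(630e-6, 800.0)  # about 181.4 degrees
```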
Next, with reference to FIG. 5(b) and FIG. 5(c), the results of localization experiments on the interaural phase difference will be described. The purpose of these experiments was to find the interaural phase difference at which a sound image is properly localized in the low frequency range.
FIG. 5(b) shows the result of one localization experiment on the interaural phase difference (hereinafter referred to as "Experiment 1"). In Experiment 1, the phase difference and level difference between channels were controlled to form a state in which the sound image is localized to the left or right, and the interaural phase difference in that state was measured. The experimental conditions of Experiment 1 were as follows.
・Location: anechoic chamber
・Signal: narrow-band noise burst signal
・Signal center frequency: 50, 63, 80, 100, 125, 160, 200 [Hz]
・Signal level: 90 [dB]
・Subjects: 4 persons
In FIG. 5(b), the horizontal axis shows the frequency [Hz] and the vertical axis shows the interaural phase difference [°]. That is, FIG. 5(b) shows, for each frequency, the interaural phase difference in the state where the sound image was localized to the left or right. It can be seen that localization of the sound image is obtained when an interaural phase difference of about 25 to 65 [°] occurs.
FIG. 5(c) shows the result of another localization experiment on the interaural phase difference (hereinafter referred to as "Experiment 2"). In Experiment 2, signals given various interaural phase differences were presented through headphones, and for each interaural phase difference the subjects answered whether or not localization was obtained. The experimental conditions of Experiment 2 were as follows.
・Signal: narrow-band noise burst signal
・Signal center frequency: 50, 63, 80, 100, 125, 160, 200 [Hz]
・Subjects: 3 persons
In FIG. 5(c), the horizontal axis shows the interaural phase difference [°] and the vertical axis shows the probability [%] with which subjects answered that localization was obtained. The result shown in FIG. 5(c) accumulates the answers at the various frequencies (center frequencies). From FIG. 5(c), it can be seen that for interaural phase differences of about 25 to 65 [°], the probability of answering that localization was obtained is 75 [%] or more. It is therefore considered that localization of a sound image is obtained when an interaural phase difference of about 25 to 65 [°] occurs.
In this embodiment, based on the results of Experiments 1 and 2, control is performed so that an interaural phase difference of about 25 to 65 [°] occurs in the low frequency range. Specifically, when realizing left or right localization in the low frequency range, the phase control units 1L and 1R control the inter-channel phase difference so that an interaural phase difference of about 25 to 65 [°] in absolute value occurs. The inter-channel phase difference that produces such an interaural phase difference can be found, for example, by determining, through experiments or simulations, the interaural phase differences produced by various inter-channel phase differences. If the inter-channel phase difference obtained in this way is stored in advance, the phase control units 1L and 1R can, when realizing left or right localization, perform control that applies a phase to the audio signal supplied to one or both of the speakers 2L and 2R so that the stored inter-channel phase difference is set.
(Results of the First Embodiment)
Next, simulation results obtained when the control according to the first embodiment is performed will be described.
FIG. 6 shows the simulation conditions used for the control according to the first embodiment. As shown in FIG. 6(a), the spacing between the speakers 2L and 2R is 4.2 [m], the distance between the midpoint of the speakers 2L and 2R and the listening position is 2.1 [m], and the distance between the listener's ears is 0.25 [m]. The other simulation conditions are as follows; they are the same as those used for the control according to the comparative example described above.
・Input signal: sine wave
・Input signal frequency: 50, 63, 80, 100, 130 [Hz]
・Free field
・Point sound sources
FIG. 6(b) shows a specific example of the inter-channel phase difference applied for each localization position in the control according to the first embodiment. In FIG. 6(b), the horizontal axis shows the localization position and the vertical axis shows the inter-channel phase difference [°]. On the vertical axis, the inter-channel phase difference is positive when the left channel leads the right channel in phase, and negative when the right channel leads the left channel.
As shown in FIG. 6(b), the inter-channel phase difference is set so that it decreases monotonically as the localization position moves from left to right. Specifically, to realize left localization (that is, to localize the sound image furthest to the left), the inter-channel phase difference is set to about 150 [°]; that is, the phase of the left channel is set to lead that of the right channel by about 150 [°]. To realize center localization (that is, to localize the sound image at the midpoint between the speakers 2L and 2R), the inter-channel phase difference is set to 0 [°]. To realize right localization (that is, to localize the sound image furthest to the right), the inter-channel phase difference is set to about -150 [°]; that is, the phase of the right channel is set to lead that of the left channel by about 150 [°].
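A minimal sketch of such a monotonically decreasing mapping follows. The linear shape is an assumption for illustration: the text specifies only the endpoints (about ±150 [°]) and the monotonic decrease.

```python
def pan_to_interchannel_phase(pan):
    """Map a localization position pan in [-1.0 (left), +1.0 (right)] to an
    inter-channel phase difference in degrees (positive: left channel leads).

    Linear interpolation between +150 and -150 degrees; the exact curve in
    FIG. 6(b) need not be linear, only monotonically decreasing."""
    if not -1.0 <= pan <= 1.0:
        raise ValueError("pan must lie in [-1, 1]")
    return -150.0 * pan
```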
Next, with reference to FIG. 7, simulation results obtained when the control according to the first embodiment is performed under the above simulation conditions will be described.
Here, for comparison, FIG. 7(a) shows an example of simulation results when the control according to the above-described comparative example (that is, panning) is performed; the graph in FIG. 7(a) is the same as that shown in FIG. 4(c). FIG. 7(b) shows an example of simulation results when the control according to the first embodiment is performed. In FIG. 7(a) and FIG. 7(b), the horizontal axis shows the localization position and the vertical axis shows the interaural phase difference [°], and the results obtained for each of the several frequencies are superimposed. The interaural phase difference is measured, for example, by placing two microphones at positions corresponding to the listener's two ears. The control according to the first embodiment and the control according to the comparative example were performed under the same simulation conditions.
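The kind of free-field, point-source evaluation described here can be sketched as follows. This is an illustrative reconstruction, not the original simulation code: it assumes 1/r spherical spreading, uses the FIG. 6(a) geometry as defaults, and takes a speed of sound of 343 [m/s].

```python
import cmath
import math

def interaural_phase_deg(freq_hz, icpd_deg, spacing=4.2, dist=2.1,
                         ear_sep=0.25, c=343.0):
    """Phase difference (degrees) between the two ear positions when two
    point-source speakers reproduce the same sine with a relative
    inter-channel phase difference icpd_deg (positive: left channel leads)."""
    k = 2 * math.pi * freq_hz / c  # wavenumber
    # (x, y, channel phase in radians); listener's ears sit on the x axis.
    speakers = [(-spacing / 2, dist, math.radians(icpd_deg)),  # left
                (spacing / 2, dist, 0.0)]                      # right
    ears = [(-ear_sep / 2, 0.0), (ear_sep / 2, 0.0)]           # left, right
    pressures = []
    for ex, ey in ears:
        p = 0j
        for sx, sy, ph in speakers:
            r = math.hypot(sx - ex, sy - ey)
            p += cmath.exp(1j * (ph - k * r)) / r  # 1/r spreading + delay
        pressures.append(p)
    # Phase at the left ear minus phase at the right ear.
    return math.degrees(cmath.phase(pressures[0] / pressures[1]))
```

With a zero inter-channel phase difference the geometry is symmetric and the interaural phase difference vanishes, while opposite inter-channel phase differences give mirror-image results.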
Comparing the result of the first embodiment with that of the comparative example, the broken-line regions B1 and C1 show that, with the control according to the first embodiment, the interaural phase difference generated when attempting left localization is considerably larger than with the control according to the comparative example. Similarly, the broken-line regions B2 and C2 show that, with the control according to the first embodiment, the interaural phase difference (in absolute value) generated when attempting right localization is considerably larger than with the control according to the comparative example. On the other hand, as shown by the broken-line regions B3 and C3, the interaural phase difference when attempting center localization is approximately 0 [°] in both the first embodiment and the comparative example.
From the results described above, according to the first embodiment, a desired interaural phase difference can be appropriately realized in the low frequency range. Therefore, according to the first embodiment, the sound image of the low-frequency components can be appropriately localized at a desired position during reproduction by the speakers 2L and 2R.
[Second Embodiment]
Next, a second embodiment of the present invention will be described. The second embodiment differs from the first embodiment in that the inter-channel phase difference to be applied is set based on the speaker opening angle formed by the speakers 2L, 2R and the listening position, and on the distance between the speakers 2L, 2R and the listening position (hereinafter referred to as the "speaker distance" as appropriate). More specifically, in the second embodiment, the phase control units 1L and 1R make the inter-channel phase difference larger when the speaker opening angle is small than when it is large, and larger when the speaker distance is long than when it is short.
FIG. 8 illustrates the definitions of the speaker opening angle and the speaker distance. As described above, the listening position lies on the perpendicular bisector 72 of the line segment 71 connecting the speakers 2L and 2R. In this case, the speaker opening angle φ is defined as the angle formed between the line segment 73L connecting the speaker 2L and the listening position and the perpendicular bisector 72, and equally as the angle formed between the line segment 73R connecting the speaker 2R and the listening position and the perpendicular bisector 72; these two angles are equal because the listening position lies on the perpendicular bisector 72. The speaker distance L is defined as the length of the line segment 73L connecting the speaker 2L and the listening position, and equally as the length of the line segment 73R connecting the speaker 2R and the listening position; these two lengths are likewise equal for the same reason.
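Under these definitions, the opening angle φ and the distance L follow directly from the speaker spacing and the listening distance; a small sketch (the function name is ours):

```python
import math

def speaker_geometry(spacing, dist):
    """Speaker opening angle phi (degrees) and speaker distance L (meters)
    as defined in FIG. 8, for two speakers `spacing` apart and a listening
    position `dist` in front of their midpoint (on the perpendicular
    bisector)."""
    phi = math.degrees(math.atan2(spacing / 2, dist))
    L = math.hypot(spacing / 2, dist)
    return phi, L
```

For the FIG. 6(a) geometry (spacing 4.2 [m], distance 2.1 [m]) this gives φ = 45 [°] and L = 2.1·√2 ≈ 2.97 [m].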
Although the positions of the speakers 2L and 2R are uniquely determined by the listening position, the speaker opening angle φ, and the speaker distance L, in the following the terms "speaker arrangement" and "speaker position" are used for the positions of the speakers 2L and 2R defined by the speaker opening angle φ and the speaker distance L.
FIG. 9 shows examples of suitable speaker arrangements. Specifically, it illustrates speaker arrangements for which an interaural phase difference of about 25 to 65 [°] was obtained by controlling the inter-channel phase difference.
FIG. 9(a) and FIG. 9(b) express the speaker arrangement by showing the speaker opening angle φ [°] and the speaker distance L [m] in polar coordinates. Specifically, the speaker opening angle φ is represented by the azimuth from a vertical line passing through the origin (the listening position), and the speaker distance L is represented by the distance from the origin. Only the speaker opening angle φ and the speaker distance L for the speaker 2L are shown here; that is, only arrangement examples of the speaker 2L are shown.
FIG. 9(a) shows examples of speaker arrangements for which an interaural phase difference of about 25 to 65 [°] was obtained by controlling the inter-channel phase difference with the frequency set to 50 [Hz], and FIG. 9(b) shows such examples with the frequency set to 130 [Hz]. Specifically, FIG. 9(a) and FIG. 9(b) show that an interaural phase difference of about 25 to 65 [°] is obtained if the speaker 2L is placed within the region bounded by the bold line. Results such as those shown in FIG. 9(a) and FIG. 9(b) are obtained, for example, by simulations such as those described above.
From FIG. 9(a) and FIG. 9(b), it can be seen that when the speaker opening angle φ is large, the speaker distance L at which the desired interaural phase difference is obtained is longer than when the speaker opening angle φ is small; in other words, when the speaker opening angle φ is small, that speaker distance L is shorter. That is, when the speaker opening angle φ is large, the desired interaural phase difference is obtained even if the speaker distance L is made fairly long (for example, up to about 27 [m]), whereas when the speaker opening angle φ is small, the speaker distance L must be made fairly short to obtain the desired interaural phase difference (for example, about 7 [m] at minimum).
Although FIG. 9 shows arrangement examples only for the speaker 2L, it goes without saying that the same applies to the speaker 2R.
FIG. 10 illustrates the relationship between the speaker arrangement and the inter-channel phase difference. Specifically, for each speaker position, it illustrates the inter-channel phase difference at which an interaural phase difference of about 25 to 65 [°] is obtained.
FIG. 10(a) and FIG. 10(b) express the speaker arrangement by showing the speaker opening angle φ [°] and the speaker distance L [m] in polar coordinates (the detailed definitions are the same as in FIG. 9). Here too, the relationship between the speaker arrangement and the inter-channel phase difference is shown only for the speaker 2L.
FIG. 10(a) illustrates, for each speaker position, the inter-channel phase difference [°] required to obtain an interaural phase difference of 25 [°], and FIG. 10(b) illustrates the inter-channel phase difference [°] required to obtain an interaural phase difference of 65 [°]. The results shown in FIG. 10(a) and FIG. 10(b) were obtained with the frequency set to 100 [Hz]. Such results are obtained, for example, by simulations such as those described above.
From FIG. 10(a) and FIG. 10(b), it can be seen that the inter-channel phase difference required to obtain the desired interaural phase difference (25 [°] or 65 [°]) depends on the speaker opening angle φ and the speaker distance L. Specifically, when the speaker opening angle φ is large, the required inter-channel phase difference is smaller than when the speaker opening angle φ is small; in other words, when the speaker opening angle φ is small, the required inter-channel phase difference is larger (that is, it approaches 180 [°]). Likewise, when the speaker distance L is short, the required inter-channel phase difference is smaller than when the speaker distance L is long; in other words, when the speaker distance L is long, the required inter-channel phase difference is larger (that is, it approaches 180 [°]).
Although FIG. 10 shows the relationship between the speaker arrangement and the inter-channel phase difference only for the speaker 2L, it goes without saying that the same applies to the speaker 2R.
As described above, in the second embodiment, the phase control units 1L and 1R control the inter-channel phase difference based on the relationship between the speaker arrangement and the inter-channel phase difference as shown in FIG. 10. That is, the phase control units 1L and 1R perform control that applies a phase to the audio signal supplied to one or both of the speakers 2L and 2R so that the inter-channel phase difference is set to one at which the desired interaural phase difference is obtained for the currently set speaker opening angle φ and speaker distance L.
In one example, an arithmetic expression for obtaining the inter-channel phase difference that yields the desired interaural phase difference for each speaker position is prepared through experiments or simulations, and the phase control units 1L and 1R perform the control by calculating, from that expression, the inter-channel phase difference corresponding to the current speaker position. In another example, the inter-channel phase differences that yielded the desired interaural phase difference for each speaker position are stored as table data, and the phase control units 1L and 1R perform the control by reading from that table data the inter-channel phase difference corresponding to the current speaker position. The current speaker position can be obtained, for example, from user input.
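The table-data variant can be sketched as below. The table entries are hypothetical placeholders, not measured values; only their trend (larger phase difference for smaller opening angles and longer distances) follows the text, and nearest-neighbour lookup is an assumed simplification.

```python
import math

# Hypothetical table: (opening angle [deg], speaker distance [m]) ->
# inter-channel phase difference [deg] that yields the desired interaural
# phase difference. Real entries would come from the experiments or
# simulations described in the text.
ICPD_TABLE = {
    (15.0, 2.0): 170.0,
    (30.0, 2.0): 150.0,
    (45.0, 2.0): 120.0,
    (30.0, 4.0): 165.0,
}

def lookup_icpd(phi_deg, dist_m):
    """Read the stored inter-channel phase difference for the speaker
    position closest to the current one (nearest-neighbour lookup)."""
    key = min(ICPD_TABLE,
              key=lambda k: math.hypot(k[0] - phi_deg, k[1] - dist_m))
    return ICPD_TABLE[key]
```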
According to the second embodiment described above, the inter-channel phase difference can be appropriately controlled according to the speaker opening angle and the speaker distance, and the desired interaural phase difference can be realized effectively. Therefore, according to the second embodiment, the sound image of the low-frequency components can be localized at a desired position more reliably during reproduction by the speakers 2L and 2R.
The speaker opening angle φ and the speaker distance L are not limited to the definitions shown in FIG. 8. In another example, the speaker opening angle can be defined as the angle formed between the line segment 73L connecting the speaker 2L and the listening position and the line segment 73R connecting the speaker 2R and the listening position; the speaker opening angle in this example is twice the speaker opening angle φ defined above. In yet another example, the speaker distance can be defined as the distance from the speakers 2L, 2R to the listening position measured along the perpendicular bisector 72; using the speaker opening angle φ and the speaker distance L defined above, the speaker distance in this example is L × cos φ.
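The conversion between the two sets of definitions can be written down directly (a trivial sketch; the function name is ours):

```python
import math

def alternative_definitions(phi_deg, L):
    """Convert the FIG. 8 definitions (phi, L) to the alternative ones
    mentioned in the text: the full angle between the two speaker
    directions (2*phi) and the distance measured along the perpendicular
    bisector (L*cos(phi))."""
    full_angle = 2.0 * phi_deg
    bisector_dist = L * math.cos(math.radians(phi_deg))
    return full_angle, bisector_dist
```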
[Modification]
The above described an example in which the inter-channel phase difference is controlled according to the speaker opening angle and the speaker distance (see the second embodiment). Instead of, or in addition to, this, the inter-channel phase difference may be controlled according to the frequency of the audio signal. Specifically, in another example, the inter-channel phase difference is made larger when the frequency is low than when the frequency is high (in other words, smaller when the frequency is high than when the frequency is low). This is because the inter-channel phase difference required to obtain the same interaural phase difference varies with frequency, and is desirably larger at low frequencies than at high frequencies.
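One hedged way to realize this modification is an inverse-proportional scaling, which corresponds to holding the equivalent time difference constant. The text only requires that the phase difference increase as the frequency decreases, so this exact rule, the reference frequency, and the function name are assumptions.

```python
def frequency_scaled_icpd(base_icpd_deg, freq_hz, ref_freq_hz=100.0):
    """Scale an inter-channel phase difference so that lower frequencies
    receive a larger value (equivalently, a constant time delay).

    Illustrative assumption only; any monotonically decreasing rule in
    frequency would satisfy the modification described in the text."""
    return base_icpd_deg * ref_freq_hz / freq_hz
```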
Further, although the above showed examples in which the speakers 2L and 2R are arranged in front of the listening position (see FIG. 2 and elsewhere), the speakers 2L and 2R may instead be arranged behind the listening position or directly beside it. Even in those cases, the inter-channel phase difference can be controlled by the same method as described above.
As described above, the embodiments are not limited to those described above, and can be modified as appropriate without departing from the gist or spirit of the invention that can be read from the claims and the specification as a whole.
The present invention can be used, for example, in DJ equipment.
1L, 1R Phase control unit
2L, 2R Speaker
10 Acoustic system

Claims (12)

1. An audio signal processing device that performs processing on audio signals supplied to two speakers, the device comprising:
phase difference control means for performing control that applies a relative phase difference between the audio signals supplied to the two speakers so that an interaural phase difference within a predetermined range occurs at a listening position in a low frequency range at or below a predetermined frequency.
2. The audio signal processing device according to claim 1, wherein the phase difference control means sets the phase difference based on a speaker opening angle formed by the speakers and the listening position, and on a distance between the speakers and the listening position.
3. The audio signal processing device according to claim 2, wherein the phase difference control means makes the phase difference larger when the speaker opening angle is small than when the speaker opening angle is large.
4. The audio signal processing device according to claim 2 or 3, wherein the phase difference control means makes the phase difference larger when the distance is long than when the distance is short.
5. The audio signal processing device according to any one of claims 1 to 4, wherein the phase difference control means sets the phase difference based on a frequency of the audio signal.
6. The audio signal processing device according to claim 5, wherein the phase difference control means makes the phase difference larger when the frequency is low than when the frequency is high.
7. The audio signal processing device according to any one of claims 1 to 6, wherein the phase difference control means performs control that applies the phase difference so that the interaural phase difference falls within a range from 25° to 65°.
8. The audio signal processing device according to any one of claims 1 to 7, wherein the low frequency range is defined based on a lower limit of the frequencies at which a localization position of a sound image can be controlled by changing a level difference between the audio signals supplied to the two speakers.
9. The audio signal processing device according to any one of claims 1 to 8, wherein the listening position lies on a perpendicular bisector of a line segment connecting the two speakers.
10. The audio signal processing device according to any one of claims 1 to 9, wherein the two speakers are arranged in front of the listening position, directly beside the listening position, or behind the listening position.
  11.  An audio signal processing method executed by an audio signal processing device that processes audio signals supplied to two speakers, the method comprising a phase difference control step of performing control to apply a relative phase difference between the audio signals supplied to the two speakers such that, in a low frequency range at or below a predetermined frequency, an interaural phase difference within a predetermined range occurs at a listening position.
  12.  An audio signal processing program executed by an audio signal processing device that has a computer and processes audio signals supplied to two speakers, the program causing the computer to function as phase difference control means for performing control to apply a relative phase difference between the audio signals supplied to the two speakers such that, in a low frequency range at or below a predetermined frequency, an interaural phase difference within a predetermined range occurs at a listening position.
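The control recited in the claims applies a relative phase difference between the two speaker feeds only in the low band, with a larger offset at lower frequencies (claim 6), so that an interaural phase difference in a target range (claim 7 recites 25° to 65°) arises at the listening position. A minimal FFT-based sketch of this idea follows; it is not the implementation disclosed in the patent — the function name, the 1000 Hz band edge, the 45° target, and the linear frequency taper are all illustrative assumptions:

```python
import numpy as np

def apply_low_band_phase_difference(left, right, fs, f_low=1000.0, ipd_deg=45.0):
    """Rotate the low-band phase of the right channel relative to the left.

    Below ``f_low`` the right-channel spectrum is shifted by a
    frequency-dependent phase offset, largest at the lowest frequencies
    and tapering to zero at the band edge, so only the low band carries
    a relative phase difference between the two speaker feeds.
    """
    n = len(left)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)   # bin frequencies in Hz
    spec_r = np.fft.rfft(right)

    # Larger offset at lower frequencies, zero at and above the band edge.
    low = (freqs > 0) & (freqs <= f_low)
    phase = np.zeros_like(freqs)
    phase[low] = np.deg2rad(ipd_deg) * (1.0 - freqs[low] / f_low)

    spec_r *= np.exp(-1j * phase)            # right channel lags the left
    return left, np.fft.irfft(spec_r, n=n)
```

With the defaults, a 100 Hz tone in the right channel ends up lagging the left by 45° × (1 − 100/1000) = 40.5°, while content above 1000 Hz is untouched. A real device would additionally have to map this inter-channel phase difference to the interaural phase difference at the listening position, which depends on the speaker-to-listener distance and geometry (claims 2 to 4).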
PCT/JP2011/072773 2011-10-03 2011-10-03 Audio signal processing device, audio signal processing method and audio signal processing program WO2013051085A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2011/072773 WO2013051085A1 (en) 2011-10-03 2011-10-03 Audio signal processing device, audio signal processing method and audio signal processing program

Publications (1)

Publication Number Publication Date
WO2013051085A1 true WO2013051085A1 (en) 2013-04-11

Family

ID=48043278

Country Status (1)

Country Link
WO (1) WO2013051085A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104837106A (en) * 2015-05-25 2015-08-12 上海音乐学院 Audio signal processing method and device for spatialization sound

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS58194500A (en) * 1982-04-30 1983-11-12 Nippon Hoso Kyokai <Nhk> Zooming device of audio signal
JPS6157797U (en) * 1984-09-20 1986-04-18
JP2002354597A (en) * 2001-03-22 2002-12-06 New Japan Radio Co Ltd Pseudo stereo circuit and pseudo stereo device
JP2003061198A (en) * 2001-08-10 2003-02-28 Pioneer Electronic Corp Audio reproducing device
JP2011097561A (en) * 2009-11-02 2011-05-12 Harman Becker Automotive Systems Gmbh Audio system phase equalization


Similar Documents

Publication Publication Date Title
KR101827032B1 (en) Stereo image widening system
US10375503B2 (en) Apparatus and method for driving an array of loudspeakers with drive signals
US10356528B2 (en) Enhancing the reproduction of multiple audio channels
AU2015413301B2 (en) Apparatus and method for sound stage enhancement
US9609418B2 (en) Signal processing circuit
JP4924119B2 (en) Array speaker device
US20050089181A1 (en) Multi-channel audio surround sound from front located loudspeakers
CN104641659A (en) Speaker device and audio signal processing method
US20110268299A1 (en) Sound field control apparatus and sound field control method
EP2856775A1 (en) Stereo widening over arbitrarily-configured loudspeakers
EP3089476A1 (en) Sound system
US20190037334A1 (en) Methods and systems for providing virtual surround sound on headphones
JP2007228526A (en) Sound image localization apparatus
KR20120067294A (en) Speaker array for virtual surround rendering
JP6380060B2 (en) Speaker device
US20170272889A1 (en) Sound reproduction system
US20080175396A1 (en) Apparatus and method of out-of-head localization of sound image output from headpones
EP2446647A1 (en) A dsp-based device for auditory segregation of multiple sound inputs
WO2013057906A1 (en) Audio signal reproducing apparatus and audio signal reproducing method
WO2013051085A1 (en) Audio signal processing device, audio signal processing method and audio signal processing program
US8929557B2 (en) Sound image control device and sound image control method
JP5418256B2 (en) Audio processing device
US20090052676A1 (en) Phase decorrelation for audio processing
CN107534813B (en) Apparatus for reproducing multi-channel audio signal and method of generating multi-channel audio signal
US11974106B2 (en) Array augmentation for audio playback devices

Legal Events

121 — Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 11873572; Country of ref document: EP; Kind code of ref document: A1)
NENP — Non-entry into the national phase (Ref country code: DE)
122 — Ep: pct application non-entry in european phase (Ref document number: 11873572; Country of ref document: EP; Kind code of ref document: A1)
NENP — Non-entry into the national phase (Ref country code: JP)