CA2189126C - Three-dimensional virtual audio display employing reduced complexity imaging filters - Google Patents

Three-dimensional virtual audio display employing reduced complexity imaging filters

Info

Publication number
CA2189126C
CA2189126C CA2189126A CA002189126A
Authority
CA
Canada
Prior art keywords
function
transfer function
display method
audio display
frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CA002189126A
Other languages
French (fr)
Other versions
CA2189126A1 (en)
Inventor
Jonathan S. Abel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aureal Semiconductor Inc
Original Assignee
Aureal Semiconductor Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US08/303,705 external-priority patent/US5659619A/en
Application filed by Aureal Semiconductor Inc filed Critical Aureal Semiconductor Inc
Publication of CA2189126A1 publication Critical patent/CA2189126A1/en
Application granted granted Critical
Publication of CA2189126C publication Critical patent/CA2189126C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S1/00Two-channel systems
    • H04S1/002Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S1/00Two-channel systems
    • H04S1/002Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
    • H04S1/005For headphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S2420/00Techniques used stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/01Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)
  • Color Television Image Signal Generators (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
  • Holography (AREA)

Abstract

Compressed head-related transfer function (HRTF) (130) parameters are prederived or derived in real time for use in filtering an audio signal for a virtual audio display. From a frequency domain viewpoint, frequency components of known transfer functions are smoothed (125) over bandwidths which are a function of the width of the ear's critical bands. In the first implementation, an HRTF is smoothed (125) by convolving the HRTF (120) with a frequency dependent weighting function in the frequency domain. In the second way, the HRTF's frequency axis is warped or mapped into a non-linear frequency domain.

Description


DESCRIPTION
THREE-DIMENSIONAL VIRTUAL AUDIO DISPLAY
EMPLOYING REDUCED COMPLEXITY IMAGING FILTERS
Technical Field
This invention relates generally to three-dimensional or "virtual" audio. More particularly, this invention relates to a method and apparatus for reducing the complexity of imaging filters employed in virtual audio displays. In accordance with the teachings of the invention, such reduction in complexity may be achieved without substantially affecting the psychoacoustic localization characteristics of the resulting three-dimensional audio presentation.
Background Art
Sounds arriving at a listener's ears exhibit propagation effects which depend on the relative positions of the sound source and listener. Listening environment effects may also be present. These effects, including differences in signal intensity and time of arrival, impart to the listener a sense of the sound source location. If included, environmental effects, such as early and late sound reflections, may also impart to the listener a sense of an acoustical environment. By processing a sound so as to simulate the appropriate propagation effects, a listener will perceive the sound to originate from a specified point in three-dimensional space--that is, a "virtual" position. See, for example, "Headphone simulation of free-field listening" by Wightman and Kistler, J. Acoust. Soc. Am., Vol. 85, No. 2, 1989.
Current three-dimensional or virtual audio displays are implemented by time-domain filtering an audio input signal with selected head-related transfer functions (HRTFs).
Each HRTF is designed to reproduce the propagation effects and acoustic cues responsible for psychoacoustic localization at a particular position or region in three-dimensional space or a direction in three-dimensional space. See, for example, "Localization in Virtual Acoustic Displays" by Elizabeth M. Wenzel, Presence, Vol. 1, No. 1, Summer 1992. For simplicity, the present document will refer only to a single HRTF operating on a single audio channel. In practice, pairs of HRTFs are employed in order to provide the proper signals to the ears of the listener.
At the present time, most HRTFs are indexed by spatial direction only, the range component being taken into account separately. Some HRTFs define spatial position by including both range and direction and are indexed by position. Although particular examples herein may refer to HRTFs defining direction, the present invention applies to HRTFs representing either direction or position.
HRTFs are typically derived by measurement or by modifying experimentally derived HRTFs. In practical virtual audio display implementations, a table of HRTF parameter sets is stored, each HRTF parameter set being associated with a particular point or region in three-dimensional space. In order to reduce the table storage requirements, HRTF parameters for only a few spatial positions are stored. HRTF parameters for other spatial positions are generated by interpolating among appropriate sets of HRTF positions which are stored in the table.
As noted above, the acoustic environment may also be taken into account. In practice, this may be accomplished by modifying the HRTF or by subjecting the audio signal to additional filtering simulating the desired acoustic environment. For simplicity in presentation, the embodiments disclosed refer to the HRTFs; however, the invention applies more generally to all transfer functions for use in virtual audio displays, including HRTFs, transfer functions representing acoustic environmental effects and transfer functions representing both head-related transforms and acoustic environmental effects.
A typical prior art arrangement is shown in Figure 1. A three-dimensional spatial location or position signal 10 is applied to an HRTF parameter table and interpolation function 11, resulting in a set of interpolated HRTF parameters 12 responsive to the three-dimensional position identified by signal 10. An input audio signal 14 is applied to an imaging filter 15 whose transfer function is determined by the applied interpolated HRTF parameters. The filter 15 provides a "spatialized" audio output suitable for application to one channel of a headphone 17.
Although the various Figures show headphones for reproduction, appropriate HRTFs may create psychoacoustically localized audio with other types of audio transducers, including loudspeakers. The invention is not limited to use with any particular type of audio transducer.

When the imaging filter is implemented as a finite-impulse-response (FIR) filter, the HRTF parameters define the FIR filter taps which comprise the impulse response associated with the HRTF. As discussed below, the invention is not limited to use with FIR filters.
The main drawback to the prior art approach shown in Figure 1 is the computational cost of relatively long or complex HRTFs. The prior art employs several techniques to reduce the length or complexity of HRTFs. An HRTF, as shown in Figure 2a, comprises a time delay D component and an impulse response g(t) component. Thus, imaging filters may be implemented as a time delay function z^-D and an impulse response function g(t), as shown in Figure 2b. By first removing the time delay, thereby time aligning the HRTFs, the computational complexity of the impulse response function of the imaging filter is reduced.
Figure 3a shows a prior art arrangement in which pairs of unprocessed or "raw" HRTF parameters 100 are applied to a time-alignment processor 101, providing at its outputs time-aligned HRTFs 102 and time-delay values 103 for later use (not shown).
Processor 101 cross-correlates pairs of raw HRTFs to determine their time difference of arrival; these time differences are the delay values 103. Because the time delay values 103 and the filter terms are retained for later use, there is no psychoacoustic localization loss--the perceptual impact is preserved. Each time-aligned HRTF 102 is then processed by a minimum-phase converter 104 to remove residual time delay and to further shorten the time-aligned HRTFs.
Figure 3b shows two left-right pairs (R1/L1 and R2/L2) of exemplary raw HRTFs resulting from raw HRTF parameters 100. Figure 3c shows corresponding time-aligned HRTFs 102. Figure 3d shows the corresponding output minimum-phase HRTFs 105. The impulse response lengths of the time-aligned HRTFs 102 are shortened with respect to the raw HRTFs 100 and the minimum-phase HRTFs 105 are shortened with respect to the time-aligned HRTFs 102. Thus, by extracting the delay so as to time align the HRTFs and by applying minimum phase conversion, the filter complexity (its length, in the case of an FIR filter) is reduced.
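The following fragment is a minimal sketch of this prior-art preprocessing (blocks 101 and 104), assuming NumPy/SciPy; the function names and peak-picking details are illustrative assumptions, not taken from the patent.

```python
# Prior-art preprocessing sketch: cross-correlate a raw left/right pair to
# estimate the interaural delay (block 101), strip it, then convert each
# aligned response to minimum phase (block 104).  Illustrative only.
import numpy as np
from scipy.signal import minimum_phase

def time_align(h_left, h_right):
    """Return (aligned_left, aligned_right, delay_in_samples)."""
    xcorr = np.correlate(h_left, h_right, mode="full")
    lag = int(np.argmax(np.abs(xcorr))) - (len(h_right) - 1)
    if lag > 0:                      # left response is delayed by `lag` samples
        h_left = h_left[lag:]
    elif lag < 0:                    # right response is delayed
        h_right = h_right[-lag:]
    return h_left, h_right, lag      # lag is kept as delay value 103

def to_minimum_phase(h):
    """Minimum-phase conversion; the homomorphic method also shortens h."""
    return minimum_phase(h, method="homomorphic")
```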
Despite the use of the techniques of Figures 2b and 3a, at an audio sampling rate of 48 kHz, minimum phase responses as long as 256 points for an FIR filter are commonly used, requiring processors executing on the order of 25 MIPS per audio source rendered.

When computational resources are limited, two additional approaches are used in the prior art, either singly or in combination, to further reduce the length or complexity of HRTFs. One technique is to reduce the sampling rate by down sampling the HRTF as shown in Figure 4a. Since many localization cues, particularly those important to elevation, involve high-frequency components, reducing the sampling rate may unacceptably degrade the performance of the audio display.
Another technique, shown in Figure 4b, is to apply a windowing function to the HRTF by multiplying the HRTF by a windowing function in the time domain or by convolving the HRTF with a corresponding weighting function in the frequency domain.
This process is most easily understood by considering the multiplication of the HRTF by a window in the time domain--the window width is selected to be narrower than the HRTF, resulting in a shortened HRTF. Such windowing results in a frequency-domain smoothing with a fixed weighting function. This known windowing technique degrades psychoacoustic localization characteristics, particularly with respect to spatial positions or directions having complex or long impulse responses. Thus, there is a need for a way to reduce the complexity or length of HRTFs while preserving the perceptual impact and psychoacoustic localization characteristics of the original HRTFs.
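For comparison with the invention, a minimal sketch of this prior-art fixed windowing (Figure 4b) is shown below, assuming NumPy; the Hann window choice and tap count are illustrative assumptions.

```python
# Prior-art fixed windowing sketch: multiply the impulse response by a window
# narrower than the HRTF, which is equivalent to smoothing its spectrum with a
# frequency-invariant weighting function.
import numpy as np

def window_shorten(h, n_taps=64):
    w = np.hanning(2 * n_taps)[n_taps:]   # decaying half of a Hann window
    return h[:n_taps] * w
```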
Disclosure of Invention
In accordance with the present invention, a three-dimensional virtual audio display generates a set of transfer function parameters in response to a spatial location signal and filters an audio signal in response to the set of head-related transfer function parameters. The set of head-related transfer function parameters are smoothed versions of parameters for known head-related transfer functions.
The smoothing according to the present invention is best explained by considering its action in the frequency domain: the frequency components of known transfer functions are smoothed over bandwidths which are a non-constant function of frequency. The parameters of the resulting transfer functions, referred to herein as "compressed" transfer functions, are used to filter the audio signal for the virtual audio display. The compressed head-related transfer function parameters may be prederived or may be derived in real time. Preferably, the smoothing bandwidth is a function of the width of the ear's critical bands (i.e., a function of "critical bandwidth"). The function may be such that the smoothing bandwidth is proportional to critical bandwidth. As is well known, the ear's critical bands increase in width with increasing frequency; thus the smoothing bandwidth also increases with frequency.
The wider the smoothing bandwidth relative to the critical bandwidth, the less complex the resulting HRTF. In the case of an HRTF implemented as an FIR filter, the length of the filter (the number of filter taps) is inversely related to the smoothing bandwidth expressed as a multiple of critical bandwidth.
By applying the teachings of the present invention which take critical bandwidth into account, for the same reduction in complexity or length, the resulting less complex or shortened HRTFs have less degradation of perceptual impact and psychoacoustic localization than HRTFs made less complex or shortened by prior art windowing techniques such as described above.
An example HRTF ("raw HRTF") and shortened versions produced by a prior art windowing method ("prior art HRTF") and by the method according to the present invention ("compressed HRTF") are shown in Figures 5a (time domain) and 5b (frequency domain). The raw HRTF is an example of a known HRTF that has not been processed to reduce its complexity or length. In Figure 5a, the HRTF time-domain impulse response amplitudes are plotted along a time axis of 0 to 3 milliseconds. In Figure 5b the frequency-domain transfer function power of each HRTF is plotted along a log frequency axis extending from 1 kHz to 20 kHz. In the time domain, Figure 5a, the prior art HRTF exhibits some shortening, but the compressed HRTF exhibits even more shortening. In the frequency domain, Figure 5b, the effect of uniform smoothing bandwidth on the prior art HRTF is apparent, whereas the compressed HRTF shows the effect of an increasing smoothing bandwidth as frequency increases. Because of the log frequency scale of Figure 5b, the compressed HRTF displays a constant smoothing with respect to the raw HRTF. Despite their differences in time-domain length and frequency-domain frequency response, the raw HRTF, the prior art HRTF, and the compressed HRTF provide comparable psychoacoustic performance.
When the amount of prior art windowing and compression according to the present invention are chosen so as to provide substantially similar psychoacoustic performance with respect to raw HRTFs, preliminary double-blind listening tests indicate a preference for compressed HRTFs over prior art windowed HRTFs. Somewhat surprisingly, compressed HRTFs were also preferred over raw HRTFs. This is believed to be because the HRTF fine structure eliminated by the smoothing process is uncorrelated from HRTF position to HRTF position and may be perceived as a form of noise.
The present invention may be implemented in at least two ways. In a first way, an HRTF is smoothed by convolving the HRTF with a frequency dependent weighting function in the frequency domain. This weighting function differs from the frequency domain dual of the prior art time-domain windowing function in that the weighting function varies as a function of frequency instead of being invariant. Alternatively, a time-domain dual of the frequency dependent weighting function may be applied to the HRTF impulse response in the time domain. In a second way, the HRTF's frequency axis is warped or mapped into a non-linear frequency domain and the frequency-warped HRTF is either multiplied by a conventional window function in the time domain (after transformation to the time domain) or convolved with the non-varying frequency response of the conventional window function in the frequency domain. Inverse frequency warping is subsequently applied to the windowed signal.
The present invention may be implemented using any type of imaging filter, including, but not limited to, analog filters, hybrid analog/digital filters, and digital filters. Such filters may be implemented in hardware, software or hybrid hardware/software embodiments, including, for example, digital signal processors. When implemented digitally or partially digitally, FIR, IIR (infinite-impulse-response) and hybrid FIR/IIR filters may be employed. The present invention may also be implemented by a principal component filter architecture. Other aspects of the virtual audio display may be implemented using any combination of analog, digital, hybrid analog/digital, hardware, software, and hybrid hardware/software techniques, including, for example, digital signal processing.
In the case of an FIR filter implementation, the HRTF parameters are the filter taps defining the FIR filter. In the case of an IIR filter, the HRTF parameters are the poles and zeroes or other coefficients defining the IIR filter. In the case of a principal component filter, the HRTF parameters are the position-dependent weights.
In another aspect of the invention, each HRTF in a group of HRTFs is split into a fixed head-related transfer function common to all head-related transfer functions in the group and a variable head-related transfer function associated with respective head-related transfer functions, the combination of the fixed and each variable head-related transfer function being substantially equivalent to the respective original known head-related transfer function. The smoothing techniques according to the present invention may be applied to either the fixed HRTF, the variable HRTF, to both, or to neither of them.
Brief Description of Drawings
Figure 1 is a functional block diagram of a prior art virtual audio display arrangement.
Figure 2a is an example of the impulse response of a head-related transfer function (HRTF).
Figure 2b is a functional block diagram illustrating the manner in which an imaging filter may represent the time-delay and impulse response portions of an HRTF.
Figure 3a is a functional block diagram of one prior art technique for reducing the complexity or length of an HRTF.
Figure 3b is a set of example left and right "raw" HRTF pairs.
Figure 3c is the set of HRTF pairs as in Figure 3b which are now time aligned to reduce their length.
Figure 3d is the set of HRTF pairs as in Figure 3c which are now minimum phase converted to further reduce their length.
Figure 4a is a functional block diagram showing a prior art technique for shortening an HRTF impulse response by reducing the sampling rate.
Figure 4b is a functional block diagram showing a prior art technique for shortening an HRTF impulse response by multiplying it by a window in the time domain.
Figure 5a is a set of three waveforms in the time domain, illustrating an example of a "raw" HRTF, the HRTF shortened by prior art techniques and the HRTF compressed according to the teachings of the present invention.
Figure 5b is a frequency domain representation of the set of HRTF waveforms of Figure 5a.
Figure 6a is a functional block diagram showing an embodiment for deriving compressed HRTFs according to the present invention.
Figure 6b shows the frequency response of an exemplary input HRTF.
Figure 6c shows the impulse response of the exemplary input HRTF.
Figure 6d shows the frequency response of the compressed output HRTF.

Figure 6e shows the impulse response of the compressed output HRTF.
Figure 7a shows an alternative embodiment for deriving compressed HRTFs according to the present invention.
Figure 7b shows the impulse response of an exemplary input HRTF. Figure 7c shows the frequency response of the exemplary input HRTF.
Figure 7d shows the frequency response of the input HRTF after frequency warping.
Figure 7e shows the frequency response of the compressed output HRTF.
Figure 7f shows the frequency response of the compressed output HRTF after inverse frequency warping.
Figure 7g shows the impulse response of the compressed output HRTF after inverse frequency warping.
Figure 8 shows three of a family of windows useful in explaining the operation of the embodiments of Figures 6a and 7a.
Figure 9 is a functional block diagram in which the imaging filter is embodied as a principal component filter.
Figure 10 is a functional block diagram showing another aspect of the present invention.
Modes for Carrying Out the Invention
Figure 6a shows an embodiment for deriving compressed HRTFs according to the present invention. According to this embodiment, an input HRTF is smoothed by convolving the frequency response of the input HRTF with a frequency dependent weighting function in the frequency domain. Alternatively, a time-domain dual of the frequency dependent weighting function may be applied to the HRTF impulse response in the time domain.
Figure 7a shows an alternative embodiment for deriving compressed HRTFs according to the present invention. According to this embodiment, the frequency axis of the input HRTF is warped or mapped into a non-linear frequency domain and the frequency-warped HRTF is convolved with the frequency response of a non-varying weighting function in the frequency domain (a weighting function which is the dual of a conventional time-domain windowing function). Inverse frequency warping is then applied to the smoothed signal. Alternatively, the frequency-warped HRTF may be transformed into the time domain and multiplied by a conventional window function.
Referring to Figure 6a, an optional nonlinear scaling function 51 is applied to an input HRTF 50. A smoothing function 54 is then applied to the HRTF 52. If nonlinear scaling is applied to the input HRTF, an inverse scaling function 56 is then applied to the smoothed HRTF 54. A compressed HRTF 57 is provided at the output. As explained further below, the nonlinear scaling 51 and inverse scaling 56 can control whether the smoothing mean function is with respect to signal amplitude or power and whether it is an arithmetic averaging, a geometric averaging or another mean function.
The smoothing processor 54 convolves the HRTF with a frequency-dependent weighting function. The smoothing processor may be implemented as a running weighted arithmetic mean,

S(f) = \frac{1}{2 b_f + 1} \sum_{n=-b_f}^{b_f} W_f(n) H(f - n),    (Equation 1)

where at least the smoothing bandwidth b_f and, optionally, the window shape W_f are a function of frequency. The width of the weighting function increases with frequency; preferably, the weighting function length is a multiple of critical bandwidth: the shorter the required HRTF impulse response length, the greater the multiple.
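A minimal sketch of this running weighted mean is given below, assuming NumPy; the rectangular weighting (permitted for simplicity, see the discussion of Figure 8) and the caller-supplied bandwidth function are illustrative assumptions.

```python
# Equation 1 sketch: for each bin f, average the spectrum over a window of
# half-width b_f bins, where b_f (and optionally the window shape) varies
# with frequency.  Rectangular weights are used here for simplicity.
import numpy as np

def smooth_spectrum(H, half_width):
    """H: real-valued spectral values per bin; half_width(bin) -> b_f in bins."""
    N = len(H)
    S = np.empty(N)
    for f in range(N):
        b = max(1, int(half_width(f)))
        lo, hi = max(0, f - b), min(N, f + b + 1)
        W = np.ones(hi - lo)                # W_f; a Gaussian could be used instead
        S[f] = np.sum(W * H[lo:hi]) / np.sum(W)
    return S
```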
HRTFs typically lack low-frequency content (below about 300 Hz) and high-frequency content (above about 16 kHz). In order to provide the shortest possible (and, hence, least complex) HRTFs, it is desirable to extend HRTF frequency response to or even beyond the normal lower and upper extremes of human hearing. However, if this is done, the width of the weighting function in the extended low-frequency and high-frequency audio-band regions should be wider relative to the ear's critical bands than the multiple of critical bandwidth used through the main, unextended portion of the audio band in which HRTFs typically have content.
Below about 500 Hz, HRTFs are approximately flat spectrally because audio wavelengths are large compared to head size. Thus, a smoothing bandwidth wider than the above-mentioned multiple of critical bandwidth preferably is used. At high frequencies, above about 16 kHz, a smoothing bandwidth wider than the above-mentioned multiple of critical bandwidth preferably is also used because human hearing is poor at such high frequencies and most localization cues are present below such high frequencies. Thus, the weighting bandwidth at the low-frequency and high-frequency extremes of the audio band preferably may be widened beyond the bandwidths predicted by the equations set forth herein. For example, in one practical embodiment
of the invention, a constant smoothing bandwidth of about 250 Hz is used for frequencies below 1 kHz, and a third-octave bandwidth is used above 1 kHz. One-third octave bandwidth approximates critical bandwidth; at 1 kHz the one-third octave bandwidth is about 250 Hz. Thus, below 1 kHz the smoothing bandwidth is wider than the critical bandwidth. In some cases, power noted at low frequencies (say, in the range 300 to 500 Hz) is extrapolated to DC to fill in data not accurately determined using conventional HRTF measurement techniques.
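A sketch of this bandwidth schedule, usable as the half_width argument of the smoothing sketch above, follows; the sampling rate and FFT size are illustrative assumptions.

```python
# Bandwidth schedule sketch: about 250 Hz of smoothing below 1 kHz and a
# one-third octave (roughly a critical band) above 1 kHz, returned as a
# half-width b_f in rfft bins.
def smoothing_half_width_bins(f_bin, fs=48_000.0, n_fft=512):
    f_hz = f_bin * fs / n_fft
    bw_hz = 250.0 if f_hz < 1000.0 else f_hz * (2 ** (1 / 6) - 2 ** (-1 / 6))
    return 0.5 * bw_hz * n_fft / fs
```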
Although a weighting function having the same multiple of critical bandwidth may be used in processing all of the HRTFs in a group, weighting functions having different critical bandwidth multiples may be applied to respective HRTFs so that not all HRTFs are compressed to the same extent--this may be necessary in order to assure that the resulting compressed HRTFs are generally of the same complexity or length (certain ones of the raw HRTFs will be of greater complexity or length depending on the spatial location which they represent and may therefore require greater or lesser compression). Alternatively, HRTFs representing certain directions or spatial positions may be compressed less than others in order to maintain the perception of better overall spatial localization, while still obtaining some overall lessening in computational complexity.
The amount of HRTF compression may be varied as a function of the relative psychoacoustic importance of the HRTF. For example, early reflections, which are rendered using separate HRTFs because they arrive from different directions, are not as important to spatialize as accurately as is the direct sound path. Thus, early reflections could be rendered using "over shortened" HRTFs without perceptual impact.
Another way to view the smoothing 54 of Figure 6a is that for each frequency f,

S_\theta(f) = \sum_{n=0}^{N} W_{f,\theta}(n) H_\theta(n),    (Equation 2)

where

\sum_{n=0}^{N} W_{f,\theta}(n) = 1,    (Equation 3)

W_{f,\theta}(n) \geq 0, for all n,    (Equation 4)

H_\theta(n) is the input HRTF 52 at position \theta, S_\theta(f) is the smoothed HRTF 54, n is frequency, and N is one half the Nyquist frequency. Thus, there is a family of weighting functions W_{f,\theta}(n), each defined on an interval 0 to N, which have a width which is a function of their center frequency f and, optionally, also a function of the HRTF position \theta. The summation of each weighting function is 1 (Equation 3). Figure 8 shows three members of a family of Gaussian-shaped weighting functions with their amplitude response plotted against frequency. Only three of the family of weighting functions are shown for simplicity. The center window is centered at frequency n_0 and has a bandwidth b_{f=n_0}. The weighting functions need not have a Gaussian shape. Other shaped weighting functions, including rectangular, for simplicity, may be employed.
Also, the weighting functions need not be symmetrical about their center frequency.
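The fragment below is a minimal sketch of Equations 2-4 with the Gaussian-shaped family of Figure 8, assuming NumPy; the mapping from half-width to the Gaussian sigma is an illustrative assumption.

```python
# Equations 2-4 sketch: build, for each output bin f, a non-negative weighting
# function over bins 0..N that sums to one (Equations 3 and 4), then form the
# weighted sum of the input HRTF bins (Equation 2).
import numpy as np

def gaussian_weights(center_bin, half_width_bins, n_bins):
    n = np.arange(n_bins)
    sigma = max(half_width_bins, 1e-6) / 2.0        # illustrative b_f -> sigma mapping
    W = np.exp(-0.5 * ((n - center_bin) / sigma) ** 2)
    return W / W.sum()                              # unit sum (Equation 3)

def smooth_eq2(H, half_width):
    N = len(H)
    return np.array([gaussian_weights(f, half_width(f), N) @ H for f in range(N)])
```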
Taking into account the nonlinear scaling function 51 and the inverse scaling function 56, Figure 6a may be more generally characterized as

S_\theta(f) = G^{-1} \left\{ \sum_{n=0}^{N} W_{f,\theta}(n)\, G[H_\theta(n)] \right\},    (Equation 5)

where G is the scaling 51 and G^{-1} is the inverse scaling.
While the smoothing 54 thus far described provides an arithmetic mean function, depending on the statistics of the input HRTF transfer function, a trimmed mean or median might be favored over the arithmetic mean.
Because the human ear appears to be sensitive to the total filter power in a critical band, it is preferred to implement the nonlinear scaling 51 of Figure 6a as a magnitude squared operation and the output inverse scaler 56 as a square root. It may be desirable to apply certain pre-processing or post-processing such as minimum phase conversion.
Alternatively, or in addition to the magnitude squared scaling and square root inverse scaling, the arithmetic mean of the smoothing 54 becomes a geometric mean when the nonlinear scaling 51 provides a logarithm function and the inverse scaling 56 an exponentiation function. Such a mean is useful in preserving spectral nulls thought to be important for elevation perception.
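A minimal sketch of these scaling pairs, wrapped around the smooth_spectrum sketch given with Equation 1 above, follows; the mode names and the small floor applied before the logarithm are illustrative assumptions.

```python
# Equation 5 sketch: wrap the smoother in a scaling pair (G, G^-1).
# Magnitude-squared / square-root yields a power-domain mean; log / exp yields
# a geometric mean that better preserves spectral nulls.
# Reuses smooth_spectrum from the Equation 1 sketch above.
import numpy as np

def smooth_scaled(H_mag, half_width, mode="power"):
    if mode == "power":                     # G = |.|^2, G^-1 = sqrt
        return np.sqrt(smooth_spectrum(H_mag ** 2, half_width))
    if mode == "geometric":                 # G = log, G^-1 = exp
        return np.exp(smooth_spectrum(np.log(np.maximum(H_mag, 1e-12)), half_width))
    return smooth_spectrum(H_mag, half_width)   # plain arithmetic mean
```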
Figures 6b and 6c show an exemplary input HRTF frequency spectrum and input impulse response, respectively, in the frequency domain and the time domain. Figures 6d and 6e show the compressed output HRTF 57 in the respective domains. The degree to which the HRTF spectrum is smoothed and its impulse response is shortened will depend on the multiple of critical bandwidth chosen for the smoothing 54. The compressed HRTF characteristics will also depend on the window shape and other factors discussed above.
Refer now to Figure 7a. In this embodiment the frequency axis of the input HRTF is altered by a frequency warping function 121 so that a constant-bandwidth smoothing 125 acting on the warped frequency spectrum implements the equivalent of smoothing 54 of Figure 6a. The smoothed HRTF is processed by an inverse warping 129 to provide the output compressed HRTF. In the same manner as in Figure 6a, nonlinear scaling 51 and inverse scaling 56 optionally may be applied to the input and output HRTFs.
The frequency warping function 121 in conjunction with constant bandwidth smoothing serves the purpose of the frequency-varying smoothing bandwidth of the Figure 6a embodiment. For example, a warping function mapping frequency to Bark may be used to implement critical-band smoothing. Smoothing 125 may be implemented as a time-domain window function multiplication or as a frequency-domain weighting function convolution similar to the embodiment of Figure 6a except that the weighting function width is constant with frequency. As with respect to Figure 6a, it may be desirable to apply certain pre-processing or post-processing such as minimum phase conversion.
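A minimal sketch of this warp-smooth-unwarp route is given below, assuming NumPy and the Traunmueller approximation to the Bark scale; the warped-grid density and smoother width are illustrative assumptions.

```python
# Figure 7a sketch: resample the magnitude spectrum onto a Bark-spaced axis
# (block 121), apply a constant-bandwidth smoother there (block 125), then
# map the result back to the linear frequency axis (block 129).
import numpy as np

def hz_to_bark(f_hz):
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53    # Traunmueller approximation

def bark_smooth(H_mag, fs, width_bark=1.0, n_warp=512):
    f_hz = np.linspace(0.0, fs / 2.0, len(H_mag))
    z = hz_to_bark(f_hz)
    z_grid = np.linspace(z[0], z[-1], n_warp)
    H_warp = np.interp(z_grid, z, H_mag)               # frequency warping
    half = max(1, round(0.5 * width_bark / (z_grid[1] - z_grid[0])))
    kernel = np.ones(2 * half + 1) / (2 * half + 1)
    S_warp = np.convolve(H_warp, kernel, mode="same")  # fixed-width smoothing
    return np.interp(z, z_grid, S_warp)                # inverse frequency warping
```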
The order in which the frequency warping function 121 and the scaling function 51 are applied may be reversed. Although these functions are not linear, they do commute because the frequency warping 121 affects the frequency domain while the scaling 51 affects only the value of the frequency bins. Consequently, the inverse scaling function 56 and the inverse warping function 129 may also be reversed.
As a further alternative, the output HRTF may be taken after block 125, in which case inverse scaling and inverse warping may be provided in the apparatus or functions which receive the compressed HRTF parameters. Figures 7b and 7c show an exemplary input HRTF impulse response and frequency spectrum, respectively. Figure 7d shows the frequency spectrum of the HRTF mapped into Bark. Figure 7e shows the spectrum of the HRTF after smoothing 125. After undergoing inverse frequency warping, the resulting compressed HRTF has a spectrum as shown in Figure 7f and an impulse response as shown in Figure 7g. It will be noted that the resulting HRTF characteristics are the same as those of the embodiment of Figure 6a.
The imaging filter may also be embodied as a principal component filter in the manner of Figure 9. A position signal 30 is applied to a weight table and interpolation function 31 which is functionally similar to block 11 of Figure 1. The parameters provided by block 31, the interpolated weights, the directional matrix and the principal component filters are functionally equivalent to HRTF parameters controlling an imaging filter. The imaging filter 15' of this embodiment filters the input signal 33 in a set of parallel fixed filters 34, principal component filters, PC0 through PCN, whose outputs are mixed via a position-dependent weighting to form an approximation to the desired imaging filter. The accuracy of the approximation increases with the number of principal component filters used. More computational resources, in the form of additional principal component filters, are needed to achieve a given degree of approximation to a set of raw HRTFs than to versions compressed in accordance with this embodiment of the present invention.
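The fragment below is a minimal sketch of this parallel structure, assuming NumPy/SciPy; the principal component basis and the position-dependent weights would be obtained elsewhere (for example, from a PCA of the compressed HRTF set) and are simply passed in here.

```python
# Figure 9 sketch: run the input through a small bank of fixed principal
# component filters and mix their outputs with position-dependent weights.
import numpy as np
from scipy.signal import lfilter

def pc_render(x, pc_filters, weights):
    """x: mono input; pc_filters: list of FIR tap arrays; weights: one gain per filter."""
    outputs = [lfilter(taps, [1.0], x) for taps in pc_filters]
    return np.sum([w * y for w, y in zip(weights, outputs)], axis=0)
```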
Another aspect of the invention is shown in the embodiment of Figure 10. A three-dimensional spatial location or position signal 70 is applied to an equalized HRTF parameter table and interpolation function 71, resulting in a set of interpolated equalized HRTF parameters 72 responsive to the three-dimensional position identified by signal 70. An input audio signal 73 is applied to an equalizing filter 74 and an imaging filter 75 whose transfer function is determined by the applied interpolated equalized HRTF parameters. Alternatively, the equalizing filter 74 may be located after the imaging filter 75. The filter 75 provides a spatialized audio output suitable for application to one channel of a headphone 77.
The sets of equalized head-related transfer function parameters in the table 71 are prederived by splitting a group of known head-related transfer functions into a fixed head-related transfer function common to all head-related transfer functions in the group and a variable, position-dependent head-related transfer function associated with each of the known head-related transfer functions, the combination of the fixed and each variable head-related transfer function being substantially equal to the respective original known head-related transfer function. The equalizing filter 74 thus represents the fixed head-related transfer function common to all head-related transfer functions in the table. In this manner the HRTFs and imaging filter are reduced in complexity.
The equalization filter characteristics are chosen to minimize the complexity of the imaging filters. This minimizes the size of the equalized HRTF table, reduces the computational resources for HRTF interpolation and image filtering and reduces memory resources for tabulated HRTFs. In the case of FIR imaging filters, it is desired to minimize filter length.
Various optimization criteria may be used to find the desired equalization filter. The equalization filter may approximate the average HRTF, as this choice makes the position-dependent portion spectrally flat (and short in time) on average. The equalization filter may represent the diffuse field sound component of the group of known transfer functions. When the equalization filter is formed as a weighted average of HRTFs, the weighting should give more importance to longer or more complex HRTFs.
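A minimal sketch of such a split is shown below, assuming NumPy; the weighted power average used as the fixed part and the small floor in the division are illustrative assumptions standing in for whichever optimization criterion is chosen.

```python
# Figure 10 sketch: derive a fixed equalization as a (possibly weighted)
# average of the HRTF magnitudes, and keep per-position residuals whose
# product with the fixed part approximately recovers each original HRTF.
import numpy as np

def split_equalization(H_mags, weights=None):
    """H_mags: array of shape (n_positions, n_bins) of HRTF magnitudes."""
    w = np.ones(len(H_mags)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    fixed = np.sqrt(np.tensordot(w, H_mags ** 2, axes=1))   # weighted power mean
    variable = H_mags / np.maximum(fixed, 1e-12)            # position-dependent parts
    return fixed, variable                                  # fixed * variable ~= original
```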
Different fixed equalizations may be provided for left and right channels (either before or after the position variable HRTFs) or a single equalization may be applied to the monaural source signal (either as a single filter before the monaural signal is split into left and right components, or as two filters applied to each of the left and right components). As might be expected from human symmetry, the optimal left-ear and right-ear equalization filters are often nearly identical. Thus, the audio source signal may be filtered using a single equalization filter, with its output passed to both position-dependent HRTF filters.
Further benefits may be achieved by smoothing either the equalized HRTF parameters, the parameters of the fixed equalizing filter, or both the equalized HRTF parameters and equalizing filter parameters in accordance with the teachings of the present invention.
Also, using different filter structures for the equalization filter and the imaging filter may result in computational savings: for example, one may be implemented as an IIR filter and the other as an FIR filter. Because it is a fixed filter typically with a fairly smooth response, the equalizing filter may best be implemented as a low-order IIR filter. Also, it could readily be implemented as an analog filter.
Any filtering technique appropriate for use in HRTF filters, including principal component methods, may be used to implement the variable, position-dependent portion of the equalized HRTF parameters. For example, Figure 10 may be modified to employ as imaging filter 75 a principal component imaging filter 15' of the type described in connection with the embodiment of Figure 9.

Claims (41)

1. A three-dimensional virtual audio display method comprising generating a set of transfer function parameters in response to a spatial location or direction signal, and filtering an audio signal in response to said set of transfer function parameters, wherein said set of transfer function parameters is selected from or interpolated among parameters derived by smoothing frequency components of a known transfer function over a bandwidth which is a non-constant function of frequency, and noting the parameters of the transfer function of the resulting compressed transfer function.
2. An audio display method according to claim 1 wherein the bandwidth is a function of critical bandwidth.
3. An audio display method according to claim 2 wherein the smoothing comprises, for each frequency component in at least part of the audio band of the display, applying a mean function to the frequency components within the bandwidth containing the frequency component.
4. An audio display method according to claim 3 wherein the mean function is a function of the amplitude of the frequency components.
5. An audio display method according to claim 3 wherein the mean function is a function of the power of the frequency components.
6. An audio display method according to claim 4 or claim 5 wherein said mean function determines the median.
7. An audio display method according to claim 4 or claim 5 wherein said mean function determines the weighted arithmetic mean.
8. An audio display method according to claim 4 or claim 5 wherein said mean function determines the weighted geometric mean.
9. An audio display method according to claim 4 or claim 5 wherein said mean function determines a trimmed mean.
10. An audio display method according to claim 2 wherein said weighting function has a rectangular shape.
11. An audio display method according to claim 1 wherein the bandwidth is proportional to critical bandwidth.
12. An audio display method according to claim 11 wherein said transfer function parameters are extended at low and high frequencies and wherein said bandwidth is wider than a bandwidth proportional to critical bandwidth in said low- and high-frequency regions.
13. An audio display method according to claim 1 wherein the smoothing comprises convolving the transfer function with a frequency dependent weighting function, the width of which is a function of critical bandwidth.
14. An audio display method according to claim 13 wherein the weighting function has a bandwidth which is a multiple (one or greater) of critical bandwidth.
15. An audio display method according to claim 14 wherein said transfer function parameters are extended at low and high frequencies and wherein said bandwidth is wider than a bandwidth proportional to critical bandwidth in said low- and high-frequency regions.
16. An audio display method according to claim 13 wherein said weighting function has a shape having a higher-order continuity than a rectangularly-shaped window.
17. An audio display method according to claim 1 wherein smoothing frequency components comprises smoothing said frequency components in the frequency domain.
18. An audio display method according to claim 17 wherein said smoothing comprises convolving said known transfer function H(f) with the frequency response of a weighting function W_f(n) in the frequency domain according to the relationship S(f) = \frac{1}{2 b_f + 1} \sum_{n=-b_f}^{b_f} W_f(n) H(f - n), where at least the smoothing bandwidth b_f and, optionally, the weighting function shape W_f are a function of frequency.
19. An audio display method according to claim 1 wherein smoothing frequency components comprises applying a frequency warping function to said known transfer function, transforming the frequency-warped transfer function to the time domain, and time-domain windowing the impulse response of the frequency-warped transfer function.
20. An audio display method according to claim 1 wherein smoothing frequency components comprises applying a frequency warping function to said known transfer function and frequency-domain convolving the frequency-warped transfer function with the frequency response of a constant weighting function.
21. An audio display method according to claim 19 or claim 20 wherein said frequency warping function maps the transfer function to Bark.
22. An audio display method according to claim 19 or claim 20 further comprising applying a non-linear scaling to said known transfer function prior to said multiplication or said convolving and applying an inverse scaling to the windowed or convolved transfer function.
23. An audio display method according to claim 1 wherein said filtering is principal-component filtering.
24. An audio display method according to claim 1 wherein said transfer function parameters are equalized transfer function parameters and said filtering includes fixed equalization filtering and filtering in response to said equalized transfer function parameters.
25. An audio display method according to claim 1 wherein said set of transfer functions are derived by smoothing frequency components of known transfer functions over different bandwidths as a function of the spatial location or directions associated with the transfer function.
26. An audio display method according to claim 1 wherein said set of transfer functions are derived by smoothing frequency components of known transfer functions over different bandwidths as a function of the complexity of the transfer function.
27. An audio display method according to claim 1 wherein said set of transfer functions are derived by smoothing frequency components of known transfer functions over different bandwidths as a function of the spatial location or direction associated with the transfer function and as a function of the complexity of the transfer function.
28. An audio display method according to claim 26 or 27 wherein the bandwidth increases with increasing transfer function complexity.
29. An audio display method according to claim 1 or claim 28 wherein the bandwidth is selected such that the most complex resulting compressed transfer function does not exceed a predetermined complexity.
30. An audio display method according to claim 1 wherein said set of transfer functions are derived by smoothing frequency components of known transfer functions over different bandwidths as a function of the relative psychoacoustic importance of the transfer function.
31. An audio display method according to claim 1 wherein said set of transfer functions are derived by smoothing frequency components of known transfer functions over different bandwidths as a function of the spatial location or direction associated with the transfer function and as a function of the relative psychoacoustic importance of the transfer function.
32. A three-dimensional virtual audio display method comprising generating a set of equalized transfer function parameters in response to a spatial location or direction signal, and filtering an audio signal with fixed equalization filtering and in response to said set of equalized transfer function parameters, wherein said fixed equalization filtering are derived by and said set of equalized transfer function parameters are selected from or interpolated among parameters derived by splitting a group of known transfer functions into a fixed transfer function common to all transfer functions in the group and a variable transfer function associated with each of the known transfer functions, the combination of the fixed and each variable transfer function being substantially equal to the respective original known transfer function, noting the parameters of said fixed transfer function for characterizing said fixed equalization filtering, and noting the parameters of each of the transfer functions of the resulting variable transfer function for use as said equalized transfer function parameters.
33. An audio display method according to claim 28 wherein the derivation of said fixed equalization filtering and said set of equalized transfer function parameters further includes smoothing frequency components of each of the variable transfer functions over a bandwidth which is a non-constant function of frequency.
34. An audio display method according to claim 28 wherein the derivation of said fixed equalization filtering and said set of equalized transfer function parameters further includes smoothing frequency components of the fixed transfer function over a bandwidth which is a non-constant function of frequency.
35. An audio display method according to claim 28 wherein said group of known transfer functions is split into a fixed transfer function and a plurality of variable transfer functions by selecting a fixed transfer function resulting in the least complex variable transfer functions.
36. An audio display method according to claim 28 wherein said group of known transfer functions is split into a fixed transfer function and a plurality of variable transfer functions by selecting a fixed transfer function representing the diffuse field sound component of the group of known transfer functions.
37. An audio display method according to claim 28 wherein said group of known transfer functions are transfer functions representing a particular direction or range of directions in space.
38. An audio display method according to claim 28 comprising the additional step of smoothing frequency components of the fixed transfer function over a bandwidth which is a non-constant function of frequency and wherein the step of noting the parameters of said fixed transfer function for characterizing said fixed equalization filtering notes the parameters of the resulting compressed fixed transfer function.
39. An audio display method according to claim 28 wherein sets of equalized transfer function parameters generated in response to a spatial location or direction signal are generated by principal-component filtering.
40. Three-dimensional virtual audio display apparatus comprising means for generating a set of transfer function parameters in response to a spatial location or direction signal, said parameters selected from or interpolated among parameters obtained by smoothing frequency components of a known transfer function over a bandwidth which is a non-constant function of frequency, and noting the parameters of the transfer function of the resulting compressed transfer function, and means for filtering an audio signal in response to said set of transfer function parameters.
41. A three-dimensional virtual audio display method comprising means for generating a set of equalized transfer function parameters in response to a spatial location or direction signal, said parameters selected from or interpolated among parameters obtained by splitting a group of known transfer functions into a fixed transfer function common to all transfer functions in the group and a variable transfer function associated with each of the known transfer functions, the combination of the fixed and each variable transfer function being substantially equal to the respective original known transfer function, noting the parameters of said fixed transfer function for characterizing said fixed equalization filtering, and noting the parameters of each of the transfer functions of the resulting variable transfer function for use as said equalized transfer function parameters, and means for filtering an audio signal with fixed equalization filtering and in response to said set of equalized transfer function parameters.
CA002189126A 1994-05-11 1995-05-03 Three-dimensional virtual audio display employing reduced complexity imaging filters Expired - Fee Related CA2189126C (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US24186794A 1994-05-11 1994-05-11
US08/241,867 1994-05-11
US08/303,705 US5659619A (en) 1994-05-11 1994-09-09 Three-dimensional virtual audio display employing reduced complexity imaging filters
US08/303,705 1994-09-09
PCT/US1995/004839 WO1995031881A1 (en) 1994-05-11 1995-05-03 Three-dimensional virtual audio display employing reduced complexity imaging filters

Publications (2)

Publication Number Publication Date
CA2189126A1 CA2189126A1 (en) 1995-11-23
CA2189126C true CA2189126C (en) 2001-05-01

Family

ID=26934650

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002189126A Expired - Fee Related CA2189126C (en) 1994-05-11 1995-05-03 Three-dimensional virtual audio display employing reduced complexity imaging filters

Country Status (5)

Country Link
EP (1) EP0760197B1 (en)
JP (1) JPH11503882A (en)
AU (1) AU703379B2 (en)
CA (1) CA2189126C (en)
WO (1) WO1995031881A1 (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997025834A2 (en) * 1996-01-04 1997-07-17 Virtual Listening Systems, Inc. Method and device for processing a multi-channel signal for use with a headphone
US6009179A (en) * 1997-01-24 1999-12-28 Sony Corporation Method and apparatus for electronically embedding directional cues in two channels of sound
JPH1188994A (en) * 1997-09-04 1999-03-30 Matsushita Electric Ind Co Ltd Sound image presence device and sound image control method
US6307941B1 (en) 1997-07-15 2001-10-23 Desper Products, Inc. System and method for localization of virtual sound
US6067361A (en) * 1997-07-16 2000-05-23 Sony Corporation Method and apparatus for two channels of sound having directional cues
US6125115A (en) * 1998-02-12 2000-09-26 Qsound Labs, Inc. Teleconferencing method and apparatus with three-dimensional sound positioning
US6741706B1 (en) * 1998-03-25 2004-05-25 Lake Technology Limited Audio signal processing method and apparatus
AUPP271598A0 (en) * 1998-03-31 1998-04-23 Lake Dsp Pty Limited Headtracked processing for headtracked playback of audio signals
WO2000019415A2 (en) * 1998-09-25 2000-04-06 Creative Technology Ltd. Method and apparatus for three-dimensional audio display
FI108504B (en) 1999-04-30 2002-01-31 Nokia Corp Management of telecommunication system talk groups
GB2351213B (en) * 1999-05-29 2003-08-27 Central Research Lab Ltd A method of modifying one or more original head related transfer functions
JP4867121B2 (en) * 2001-09-28 2012-02-01 ソニー株式会社 Audio signal processing method and audio reproduction system
EP1905002B1 (en) 2005-05-26 2013-05-22 LG Electronics Inc. Method and apparatus for decoding audio signal
JP4988716B2 (en) 2005-05-26 2012-08-01 エルジー エレクトロニクス インコーポレイティド Audio signal decoding method and apparatus
KR101333031B1 (en) 2005-09-13 2013-11-26 코닌클리케 필립스 일렉트로닉스 엔.브이. Method of and device for generating and processing parameters representing HRTFs
EP1974348B1 (en) 2006-01-19 2013-07-24 LG Electronics, Inc. Method and apparatus for processing a media signal
KR20080093024A (en) 2006-02-07 2008-10-17 엘지전자 주식회사 Apparatus and method for encoding/decoding signal
JP2007221445A (en) * 2006-02-16 2007-08-30 Sharp Corp Surround-sound system
WO2007111560A2 (en) * 2006-03-28 2007-10-04 Telefonaktiebolaget Lm Ericsson (Publ) Filter adaptive frequency resolution
FR2899424A1 (en) * 2006-03-28 2007-10-05 France Telecom Audio channel multi-channel/binaural e.g. transaural, three-dimensional spatialization method for e.g. ear phone, involves breaking down filter into delay and amplitude values for samples, and extracting filter`s spectral module on samples
MY151651A (en) * 2006-07-04 2014-06-30 Dolby Int Ab Filter compressor and method for manufacturing compressed subband filter impulse responses
US9622006B2 (en) 2012-03-23 2017-04-11 Dolby Laboratories Licensing Corporation Method and system for head-related transfer function generation by linear mixing of head-related transfer functions
US9263055B2 (en) 2013-04-10 2016-02-16 Google Inc. Systems and methods for three-dimensional audio CAPTCHA

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5105462A (en) * 1989-08-28 1992-04-14 Qsound Ltd. Sound imaging method and apparatus
US5440639A (en) * 1992-10-14 1995-08-08 Yamaha Corporation Sound localization control apparatus
US5404406A (en) * 1992-11-30 1995-04-04 Victor Company Of Japan, Ltd. Method for controlling localization of sound image
US5438623A (en) * 1993-10-04 1995-08-01 The United States Of America As Represented By The Administrator Of National Aeronautics And Space Administration Multi-channel spatialization system for audio signals

Also Published As

Publication number Publication date
EP0760197A4 (en) 2004-08-11
EP0760197B1 (en) 2009-01-28
CA2189126A1 (en) 1995-11-23
AU703379B2 (en) 1999-03-25
AU2460395A (en) 1995-12-05
EP0760197A1 (en) 1997-03-05
JPH11503882A (en) 1999-03-30
WO1995031881A1 (en) 1995-11-23

Similar Documents

Publication Publication Date Title
CA2189126C (en) Three-dimensional virtual audio display employing reduced complexity imaging filters
US5659619A (en) Three-dimensional virtual audio display employing reduced complexity imaging filters
US6072877A (en) Three-dimensional virtual audio display employing reduced complexity imaging filters
US8515104B2 (en) Binaural filters for monophonic compatibility and loudspeaker compatibility
US7564978B2 (en) Advanced processing based on a complex-exponential-modulated filterbank and adaptive time signalling methods
CN106105269B (en) Acoustic signal processing method and equipment
US6307941B1 (en) System and method for localization of virtual sound
KR101215872B1 (en) Parametric coding of spatial audio with cues based on transmitted channels
EP0762804B1 (en) Three-dimensional acoustic processor which uses linear predictive coefficients
US11611828B2 (en) Systems and methods for improving audio virtualization
US20110026718A1 (en) Virtualizer with cross-talk cancellation and reverb
CN107005778A (en) The audio signal processing apparatus and method rendered for ears
EP2939443B1 (en) System and method for variable decorrelation of audio signals
US9848274B2 (en) Sound spatialization with room effect
US8059824B2 (en) Joint sound synthesis and spatialization
AU732016B2 (en) Three-dimensional virtual audio display employing reduced complexity imaging filters
Tamulionis et al. Listener movement prediction based realistic real-time binaural rendering
JPH0775439B2 (en) 3D sound field playback device

Legal Events

Date Code Title Description
EEER Examination request
MKLA Lapsed