CN107786936A - Processing method and terminal for a voice signal - Google Patents
- Publication number
- CN107786936A CN107786936A CN201610724585.0A CN201610724585A CN107786936A CN 107786936 A CN107786936 A CN 107786936A CN 201610724585 A CN201610724585 A CN 201610724585A CN 107786936 A CN107786936 A CN 107786936A
- Authority
- CN
- China
- Prior art keywords
- acoustic pressure
- hrtf
- represent
- sound source
- calculated
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S5/00—Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation
- H04S5/02—Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation of the pseudo four-channel type, e.g. in which rear channel signals are derived from two-channel stereo signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16B—BIOINFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR GENETIC OR PROTEIN-RELATED DATA PROCESSING IN COMPUTATIONAL MOLECULAR BIOLOGY
- G16B99/00—Subject matter not provided for in other groups of this subclass
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2420/00—Techniques used stereophonic systems covered by H04S but not provided for in its groups
- H04S2420/01—Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Biotechnology (AREA)
- Evolutionary Biology (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Theoretical Computer Science (AREA)
- Stereophonic System (AREA)
Abstract
The invention provides a processing method and terminal for a voice signal. The processing method includes: obtaining a three-dimensional head model of a terminal user; calculating a first head-related transfer function (HRTF) for the sound wave emitted by a sound source and transmitted to the left ear of the three-dimensional head model, and a second HRTF for the sound wave transmitted to the right ear of the three-dimensional head model; and processing the voice signal emitted by the sound source according to the first HRTF and the second HRTF. The invention solves the prior-art problem that virtual auditory technology suffers from virtual auditory space synthesis distortion caused by non-personalized HRTFs, and improves the user's audio experience.
Description
Technical field
The present invention relates to audio signal processing technology, and in particular to a processing method and terminal for a voice signal.
Background technology
Virtual reality technology is becoming increasingly mature, but its core currently focuses on the visual side. However, when perceiving the objective world, humans obtain about 60% of their information through vision, about 30% through hearing, and the remaining 10% through senses such as touch and smell. In virtual reality technology, virtual auditory technology is therefore equally indispensable: the auditory cues from which people judge azimuth and distance are just as important for a truly realistic virtual reality experience.
Virtual audition uses audio signal processing techniques and certain acoustic effects specific to the human ear, applying acoustics-related algorithms to reconstruct a sound source at any position in three-dimensional space and thereby reproduce the direction of the sound. In addition, virtual audition renders the virtual environment composed of the sound-source signal, the listener, and the scene, and obtains through numerical simulation the virtual sound at a specific position in a specific scene, replacing real auditory perception. For today's head-mounted VR devices, adding a three-dimensional sense of hearing brings a far stronger sense of immersion.
Virtual auditory technology simulates, for a given sound source and the physical and geometric conditions of the environment, the sound waves radiated by the source and their propagation, thereby obtaining the temporal and spatial information of the sound. Finally, head-related transfer function (HRTF) signal processing simulates the comprehensive effect of the human ear on the sound wave, converting the temporal and spatial information of the sound into binaural signals that are played back to the ears through headphones. The HRTF is the foundation of virtual auditory technology; an HRTF can be reconstructed from the amplitude characteristics of the left and right ears and the interaural time difference. HRTFs are mostly obtained by laboratory measurement; at present, some laboratories at home and abroad have built HRTF databases through testing, and what they measure are the HRTFs of a specific population for part of the directions.
Although HRTFs can be obtained by measurement, the measurement procedure is cumbersome, costly, and time-consuming, and the measured database still represents only a specific population. In real life, everyone's physiology differs: head shape and outer-ear size vary from person to person, so each person has a unique HRTF, and a measured HRTF database cannot represent everyone's anatomical features. If experimentally obtained HRTF data are used for virtual auditory rendering, listeners are likely to perceive differences during playback and may even be unable to distinguish the direction of the sound. Moreover, a measured HRTF database contains data only for spatially discrete directions and cannot represent the position information of all directions, so the virtual audition will also deviate.

In summary, existing virtual auditory technology suffers from virtual auditory space synthesis distortion caused by non-personalized HRTFs, which degrades the user's audio experience.
Summary of the invention
In order to solve the prior-art problem of virtual auditory space synthesis distortion caused by non-personalized HRTFs, the invention provides a processing method and terminal for a voice signal.
To solve the above technical problem, in a first aspect, the invention provides a processing method for a voice signal, including:

obtaining a three-dimensional head model of a terminal user;

calculating a first head-related transfer function (HRTF) for the sound wave emitted by a sound source and transmitted to the left ear of the three-dimensional head model, and a second HRTF for the sound wave transmitted to the right ear of the three-dimensional head model;

processing the voice signal emitted by the sound source according to the first HRTF and the second HRTF.
Optionally, the step of obtaining the three-dimensional head model of the terminal user includes: obtaining the three-dimensional head model of the terminal user by scanning with the terminal.
Optionally, the step of calculating the first HRTF for the sound wave transmitted to the left ear of the three-dimensional head model and the second HRTF for the sound wave transmitted to the right ear includes: calculating a first acoustic pressure of the sound wave transmitted to the left ear, a second acoustic pressure of the sound wave transmitted to the right ear, and a third acoustic pressure produced by the sound wave at the original position of the three-dimensional head model after the model is removed; calculating the first HRTF from the first acoustic pressure and the third acoustic pressure; and calculating the second HRTF from the second acoustic pressure and the third acoustic pressure.
Optionally, the step of calculating the first acoustic pressure of the sound wave transmitted to the left ear includes: when the sound wave is transmitted directly to the left ear, calculating the first acoustic pressure of the sound wave transmitted to the left ear; when the sound wave reaches the left ear through transmission and reflection, calculating, in the order in which the reflections occur, the acoustic pressure after each reflection of the sound wave, wherein the first acoustic pressure of the sound wave transmitted to the left ear is calculated when the successively calculated acoustic pressures all exceed a preset intensity threshold.
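The threshold rule above — follow each reflected path bounce by bounce, and count it toward the ear pressure only while every successively calculated pressure stays above the preset intensity threshold — can be sketched as follows. This is a simplified illustration, not the patent's implementation; the pressure values and threshold are hypothetical.

```python
def pressure_at_ear(direct_pressure, reflected_paths, threshold):
    """Sum the direct contribution and every reflected contribution whose
    pressures, tracked bounce by bounce, all stay above `threshold`.

    `reflected_paths` is a list of per-path lists: the acoustic pressure
    after each successive reflection, in the order the bounces occur.
    """
    total = direct_pressure
    for bounces in reflected_paths:
        # Follow the bounces in order; discard the path as soon as one
        # pressure falls below the preset intensity threshold.
        if all(p > threshold for p in bounces):
            total += bounces[-1]  # pressure arriving at the ear
    return total

# Hypothetical numbers: one path survives, one is discarded.
p = pressure_at_ear(
    direct_pressure=0.8,
    reflected_paths=[[0.5, 0.3], [0.4, 0.05]],
    threshold=0.1,
)
```

The second path is dropped because its second bounce falls to 0.05, below the threshold, so only the direct sound and the first reflected path contribute.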
Optionally, the step of calculating the first HRTF from the first acoustic pressure and the third acoustic pressure includes: calculating the first HRTF according to the formula H_L = P_L(r_L, θ_L, φ_L, f, a) / P_0(r_0, f), wherein H_L represents the first HRTF, P_L(r_L, θ_L, φ_L, f, a) represents the first acoustic pressure, P_0(r_0, f) represents the third acoustic pressure, r_L represents the distance between the sound source and the left ear, θ_L represents the horizontal angle of the sound source relative to the left ear, φ_L represents the elevation angle of the sound source relative to the left ear, f represents the sound-wave frequency, a represents the physiological parameters of the three-dimensional head model, and r_0 represents the distance between the sound source and the center of the three-dimensional head model.
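Read as a frequency-domain ratio, the formula divides the pressure at the ear by the free-field pressure at the head-center position, frequency bin by frequency bin. A minimal sketch under that reading (the spectra below are hypothetical values, not data from the patent):

```python
import numpy as np

def hrtf(ear_pressure, free_field_pressure):
    """H(f) = P_ear(f) / P_0(f): the ratio of the pressure at the ear to
    the pressure at the head-center position with the head removed,
    evaluated at each frequency bin."""
    return np.asarray(ear_pressure) / np.asarray(free_field_pressure)

# Hypothetical complex spectra at two frequency bins.
p_left = np.array([0.9 + 0.1j, 0.5 - 0.2j])
p_0 = np.array([1.0 + 0.0j, 1.0 + 0.0j])
h_left = hrtf(p_left, p_0)
```

With a unit free-field spectrum, the resulting HRTF equals the ear spectrum itself; in general the division removes the source and distance dependence that is common to both measurements.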
Optionally, the step of calculating the second acoustic pressure of the sound wave transmitted to the right ear includes: when the sound wave is transmitted directly to the right ear, calculating the second acoustic pressure of the sound wave transmitted to the right ear; when the sound wave reaches the right ear through transmission and reflection, calculating, in the order in which the reflections occur, the acoustic pressure after each reflection of the sound wave, wherein the second acoustic pressure of the sound wave transmitted to the right ear is calculated when the successively calculated acoustic pressures all exceed a preset intensity threshold.
Optionally, the step of calculating the second HRTF from the second acoustic pressure and the third acoustic pressure includes: calculating the second HRTF according to the formula H_R = P_R(r_R, θ_R, φ_R, f, a) / P_0(r_0, f), wherein H_R represents the second HRTF, P_R(r_R, θ_R, φ_R, f, a) represents the second acoustic pressure, P_0(r_0, f) represents the third acoustic pressure, r_R represents the distance between the sound source and the right ear, θ_R represents the horizontal angle of the sound source relative to the right ear, φ_R represents the elevation angle of the sound source relative to the right ear, f represents the sound-wave frequency, a represents the physiological parameters of the three-dimensional head model, and r_0 represents the distance between the sound source and the center of the three-dimensional head model.
In a second aspect, the invention also provides a terminal, including:

an acquisition module, configured to obtain a three-dimensional head model of a terminal user;

a computing module, configured to calculate a first head-related transfer function (HRTF) for the sound wave emitted by a sound source and transmitted to the left ear of the three-dimensional head model, and a second HRTF for the sound wave transmitted to the right ear of the three-dimensional head model;

a processing module, configured to process the voice signal emitted by the sound source according to the first HRTF and the second HRTF.
Optionally, the acquisition module is configured to obtain the three-dimensional head model of the terminal user by scanning with the terminal.
Optionally, the computing module includes: a first computing unit, configured to calculate the first acoustic pressure of the sound wave transmitted to the left ear, the second acoustic pressure of the sound wave transmitted to the right ear, and the third acoustic pressure produced by the sound wave at the original position of the three-dimensional head model after the model is removed; a second computing unit, configured to calculate the first HRTF from the first acoustic pressure and the third acoustic pressure; and a third computing unit, configured to calculate the second HRTF from the second acoustic pressure and the third acoustic pressure.
Optionally, the first computing unit is configured to: when the sound wave is transmitted directly to the left ear, calculate the first acoustic pressure of the sound wave transmitted to the left ear; when the sound wave reaches the left ear through transmission and reflection, calculate, in the order in which the reflections occur, the acoustic pressure after each reflection of the sound wave, wherein the first acoustic pressure of the sound wave transmitted to the left ear is calculated when the successively calculated acoustic pressures all exceed a preset intensity threshold.
Optionally, the second computing unit is configured to calculate the first HRTF according to the formula H_L = P_L(r_L, θ_L, φ_L, f, a) / P_0(r_0, f), wherein H_L represents the first HRTF, P_L(r_L, θ_L, φ_L, f, a) represents the first acoustic pressure, P_0(r_0, f) represents the third acoustic pressure, r_L represents the distance between the sound source and the left ear, θ_L represents the horizontal angle of the sound source relative to the left ear, φ_L represents the elevation angle of the sound source relative to the left ear, f represents the sound-wave frequency, a represents the physiological parameters of the three-dimensional head model, and r_0 represents the distance between the sound source and the center of the three-dimensional head model.
Optionally, the first computing unit is configured to: when the sound wave is transmitted directly to the right ear, calculate the second acoustic pressure of the sound wave transmitted to the right ear; when the sound wave reaches the right ear through transmission and reflection, calculate, in the order in which the reflections occur, the acoustic pressure after each reflection of the sound wave, wherein the second acoustic pressure of the sound wave transmitted to the right ear is calculated when the successively calculated acoustic pressures all exceed a preset intensity threshold.
Optionally, the third computing unit is configured to calculate the second HRTF according to the formula H_R = P_R(r_R, θ_R, φ_R, f, a) / P_0(r_0, f), wherein H_R represents the second HRTF, P_R(r_R, θ_R, φ_R, f, a) represents the second acoustic pressure, P_0(r_0, f) represents the third acoustic pressure, r_R represents the distance between the sound source and the right ear, θ_R represents the horizontal angle of the sound source relative to the right ear, φ_R represents the elevation angle of the sound source relative to the right ear, f represents the sound-wave frequency, a represents the physiological parameters of the three-dimensional head model, and r_0 represents the distance between the sound source and the center of the three-dimensional head model.
The beneficial effects of the invention are as follows:
The invention first obtains a three-dimensional head model of the terminal user, then calculates the first HRTF for the sound wave emitted by a sound source and transmitted to the left ear of the three-dimensional head model and the second HRTF for the sound wave transmitted to the right ear, and finally processes the signal emitted by the sound source according to the calculated first and second HRTFs. The terminal can thus obtain the terminal user's personalized first and second HRTFs by calculation and use them to process the voice signal, so that the terminal user obtains a more realistic audio experience. This solves the prior-art problem of virtual auditory space synthesis distortion caused by non-personalized HRTFs and improves the user's audio experience.
Brief description of the drawings
Fig. 1 is a flow chart of the steps of the voice-signal processing method in the first embodiment of the present invention;
Fig. 2 is a flow chart of the steps of the voice-signal processing method in the second embodiment of the present invention;
Fig. 3 is a schematic diagram of a scene in which a sound wave propagates indoors;
Fig. 4 is a structural block diagram of the terminal in the third embodiment of the present invention.
Detailed description of the embodiments
Exemplary embodiments of the disclosure are described more fully below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the present invention will be understood more thoroughly and so that the scope of the disclosure can be fully conveyed to those skilled in the art.
First embodiment:
As shown in Fig. 1, the processing method for a voice signal in the first embodiment of the present invention includes the following steps.
Step 101: obtain a three-dimensional head model of the terminal user.

In this step, the three-dimensional head model of the terminal user may be obtained by scanning with the terminal. The three-dimensional head model may also include one third of the terminal user's torso. After the terminal obtains the user's three-dimensional head model by scanning, the model can be further processed to obtain data such as its physiological parameters.
Step 102: calculate a first head-related transfer function (HRTF) for the sound wave emitted by a sound source and transmitted to the left ear of the three-dimensional head model, and a second HRTF for the sound wave transmitted to the right ear of the three-dimensional head model.

In this step, after the three-dimensional head model of the terminal user has been obtained, the first HRTF for the sound wave transmitted to the left ear and the second HRTF for the sound wave transmitted to the right ear can be obtained by calculation. In this way, an HRTF unique to each terminal user can be obtained by calculation.
Step 103: process the voice signal emitted by the sound source according to the first HRTF and the second HRTF.

In this step, the voice signal emitted by the sound source can be processed according to the calculated first and second HRTFs. Because the voice signal is processed with each terminal user's own personalized HRTF, virtual reality applications no longer need to process the voice signal received by the terminal user with the HRTFs of a laboratory-measured HRTF database, which improves the terminal user's auditory perception.
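The patent does not spell out how the signal is processed with the two HRTFs; one standard realization is binaural rendering, in which each HRTF's time-domain counterpart (a head-related impulse response, HRIR) is convolved with the mono source signal to form the left and right channels. A sketch under that assumption, with illustrative two-tap impulse responses:

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono source signal with left/right head-related
    impulse responses to produce a two-channel binaural signal."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# A unit impulse as the source; the right-ear HRIR is delayed by one
# sample and attenuated, mimicking an interaural time/level difference.
mono = np.array([1.0, 0.0, 0.0])
out = render_binaural(mono, np.array([1.0, 0.0]), np.array([0.0, 0.6]))
```

Feeding an impulse through makes the effect visible: the left channel reproduces the source immediately while the right channel is delayed and quieter, which is exactly the cue pair (ITD/ILD) the HRTF encodes.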
Thus, in this embodiment the terminal first obtains the terminal user's three-dimensional head model, then calculates the first HRTF for the sound wave emitted by the sound source and transmitted to the left ear of the model and the second HRTF for the sound wave transmitted to the right ear, and finally processes the signal emitted by the sound source according to the calculated first and second HRTFs. The terminal thereby obtains the terminal user's personalized first and second HRTFs by calculation and uses them to process the voice signal, giving the terminal user a more realistic audio experience. This solves the prior-art problem of virtual auditory space synthesis distortion caused by non-personalized HRTFs and improves the user's audio experience.
Second embodiment:
As shown in Fig. 2, the processing method for a voice signal in the second embodiment of the present invention includes the following steps.
Step 201: obtain a three-dimensional head model of the terminal user.

In this step, the three-dimensional head model of the terminal user may be obtained by scanning with the terminal. The three-dimensional head model may also include one third of the terminal user's torso. After the terminal obtains the user's three-dimensional head model by scanning, the model can be further processed to obtain data such as its physiological parameters.
Step 202: calculate the first acoustic pressure of the sound wave transmitted to the left ear, the second acoustic pressure of the sound wave transmitted to the right ear, and the third acoustic pressure produced by the sound wave at the original position of the three-dimensional head model after the model is removed.

In this step, when calculating the first acoustic pressure of the sound wave transmitted to the left ear of the three-dimensional head model: if the sound wave is transmitted directly to the left ear, the first acoustic pressure can be calculated directly; if the sound wave reaches the left ear through transmission and reflection, the acoustic pressure after each reflection is calculated in the order in which the reflections occur, and the first acoustic pressure at the left ear is calculated when the successively calculated acoustic pressures all exceed a preset intensity threshold. Specifically, when calculating the first acoustic pressure, the first sound intensity transmitted to the left ear of the three-dimensional head model can be calculated first.

Likewise, when calculating the second acoustic pressure of the sound wave transmitted to the right ear of the three-dimensional head model: if the sound wave is transmitted directly to the right ear, the second acoustic pressure can be calculated directly; if the sound wave reaches the right ear through transmission and reflection, the acoustic pressure after each reflection is calculated in the order in which the reflections occur, and the second acoustic pressure at the right ear is calculated when the successively calculated acoustic pressures all exceed the preset intensity threshold. Specifically, when calculating the second acoustic pressure, the second sound intensity transmitted to the right ear of the three-dimensional head model can be calculated first.
This is described in detail below.

During propagation, a sound wave travels either directly or by reflection; outdoor propagation is mostly direct, while indoor propagation mostly involves transmission and reflection. Fig. 3 shows a scene in which a sound wave propagates indoors. In Fig. 3, A is the three-dimensional head model, B is the sound source, C is an indoor wall, the solid line is the direct transmission path of the sound wave, and the dotted line is a reflected transmission path.
When the sound wave emitted by sound source B is transmitted directly to the three-dimensional head model A, the sound intensity produced by sound source B at the three-dimensional head model can be calculated directly from the parameters r and e according to the sound-intensity formula, where r represents the distance between sound source B and the three-dimensional head model and e represents the energy emitted by sound source B per second; the fraction of e transmitted in each direction depends on the parameters θ and φ, where θ represents the horizontal angle of sound source B relative to the receiver and φ represents the elevation angle of sound source B relative to the receiver. Furthermore, since the acoustic pressure received by the three-dimensional head model also depends on the sound-wave frequency f and the physiological parameters of the model, the first acoustic pressure produced by the sound source at the left ear of the three-dimensional head model can be calculated from the parameters r_L, θ_L, φ_L, f, and a, and the second acoustic pressure at the right ear from r_R, θ_R, φ_R, f, and a, where r_L represents the distance between the sound source and the left ear, θ_L the horizontal angle of the sound source relative to the left ear, φ_L the elevation angle of the sound source relative to the left ear, f the sound-wave frequency, a the physiological parameters of the three-dimensional head model, r_R the distance between the sound source and the right ear, θ_R the horizontal angle of the sound source relative to the right ear, and φ_R the elevation angle of the sound source relative to the right ear.
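For the direct path, the intensity at distance r follows from the energy e emitted per second. Assuming an isotropic source for simplicity (an assumption made here; the patent only states that the directional split of e depends on θ and φ), spherical spreading gives I = e / (4πr²):

```python
import math

def direct_intensity(e, r):
    """Intensity (W/m^2) at distance r from an isotropic point source
    radiating e joules per second: spherical spreading, e / (4*pi*r^2)."""
    return e / (4.0 * math.pi * r ** 2)

# Doubling the distance quarters the intensity (inverse-square law).
i1 = direct_intensity(e=1.0, r=1.0)
i2 = direct_intensity(e=1.0, r=2.0)
```

A directional source would replace the uniform 1/(4π) factor with a distribution over θ and φ, which is where the patent's angle-dependent energy split enters.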
In addition, when the sound wave emitted by sound source B reaches the three-dimensional head model A through transmission and reflection, the wave is attenuated during reflection according to the number of reflections and the path length. Therefore, during transmission and reflection it is also necessary to obtain the impact points of the sound wave on the indoor wall C, derive the reflection directions from those impact points, and then, in the order in which the reflections occur, successively calculate the sound intensity after each reflection and convert the sound intensity into acoustic pressure. If during reflection the acoustic pressure falls below the preset intensity threshold, i.e. the sound wave will no longer reach the three-dimensional head model, its transmission need not be processed further; but if the successively calculated acoustic pressures all exceed the preset intensity threshold, the acoustic pressure received by the three-dimensional head model must also be calculated. Specifically, when calculating the acoustic pressure of the reflected sound wave at the three-dimensional head model A, the reflection coefficient and the scattering coefficient must be taken into account in addition to the parameters r, θ, φ, f, and a; of course, the reflection and scattering coefficients differ with the reflecting medium.

In this way, the first acoustic pressure of the sound wave transmitted directly or by reflection to the left ear of the three-dimensional head model, the second acoustic pressure transmitted to the right ear, and the third acoustic pressure produced by the sound wave at the original position of the three-dimensional head model can all be calculated.
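The reflected-path bookkeeping described above — spreading loss over the total path length plus a medium-dependent reflection coefficient at each wall impact — can be sketched as follows (the coefficient and geometry values are illustrative, and scattering is omitted for brevity):

```python
import math

def reflected_intensity(e, path_lengths, reflection_coeffs):
    """Intensity arriving after a chain of specular reflections.

    `path_lengths`: lengths of each leg of the path (source -> wall ->
    ... -> head); `reflection_coeffs`: energy reflection coefficient of
    each wall impact, applied in bounce order.
    """
    total_length = sum(path_lengths)
    intensity = e / (4.0 * math.pi * total_length ** 2)  # spreading loss
    for rho in reflection_coeffs:  # absorption loss at each wall impact
        intensity *= rho
    return intensity

# One bounce off a wall that reflects 70% of the incident energy:
# a 2 m leg to the wall, then a 3 m leg to the head.
i = reflected_intensity(e=1.0, path_lengths=[2.0, 3.0],
                        reflection_coeffs=[0.7])
```

Each additional bounce multiplies in another coefficient below 1, which is why successively calculated contributions eventually drop below the preset threshold and can be discarded.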
Step 203: calculate the first HRTF from the first acoustic pressure and the third acoustic pressure.

In this step, once the first acoustic pressure of the sound wave transmitted to the left ear of the three-dimensional head model and the third acoustic pressure produced at the original position of the model have been obtained, the first HRTF can be calculated according to the formula H_L = P_L(r_L, θ_L, φ_L, f, a) / P_0(r_0, f), wherein H_L represents the first HRTF, P_L(r_L, θ_L, φ_L, f, a) represents the first acoustic pressure, P_0(r_0, f) represents the third acoustic pressure, r_L represents the distance between the sound source and the left ear, θ_L represents the horizontal angle of the sound source relative to the left ear, φ_L represents the elevation angle of the sound source relative to the left ear, f represents the sound-wave frequency, a represents the physiological parameters of the three-dimensional head model, and r_0 represents the distance between the sound source and the center of the three-dimensional head model.
Step 204: calculate the second HRTF from the second acoustic pressure and the third acoustic pressure.

In this step, once the second acoustic pressure of the sound wave transmitted to the right ear of the three-dimensional head model and the third acoustic pressure produced at the original position of the model have been obtained, the second HRTF can be calculated according to the formula H_R = P_R(r_R, θ_R, φ_R, f, a) / P_0(r_0, f), wherein H_R represents the second HRTF, P_R(r_R, θ_R, φ_R, f, a) represents the second acoustic pressure, P_0(r_0, f) represents the third acoustic pressure, r_R represents the distance between the sound source and the right ear, θ_R represents the horizontal angle of the sound source relative to the right ear, φ_R represents the elevation angle of the sound source relative to the right ear, f represents the sound-wave frequency, a represents the physiological parameters of the three-dimensional head model, and r_0 represents the distance between the sound source and the center of the three-dimensional head model.
Step 205: process the voice signal emitted by the sound source according to the first HRTF and the second HRTF.

In this step, the voice signal emitted by the sound source can be processed according to the calculated first and second HRTFs. Because the voice signal is processed with each terminal user's own personalized HRTF, virtual reality applications no longer need to process the voice signal received by the terminal user with the HRTFs of a laboratory-measured database, which improves the terminal user's auditory perception.
Thus, this embodiment of the invention calculates the first acoustic pressure of the sound wave emitted by the sound source and transmitted to the left ear of the three-dimensional head model, the second acoustic pressure transmitted to the right ear, and the third acoustic pressure produced by the sound wave at the original position of the three-dimensional head model, and from these calculates the first HRTF for the left ear and the second HRTF for the right ear; finally, the voice signal emitted by the sound source is processed with the calculated first and second HRTFs. This simplifies the HRTF calculation process: the terminal user obtains personalized HRTF parameters and the voice signal is processed with them, without reference to an experimentally measured HRTF database. This solves the prior-art problem of virtual auditory space synthesis distortion caused by non-personalized HRTFs and improves the user's audio experience.
Third embodiment:
As shown in Figure 4, which is a structural block diagram of the terminal in the third embodiment of the present invention, the terminal includes:
an acquisition module 401, configured to obtain a three-dimensional head model of a terminal user;
a computing module 402, configured to calculate a first head related transfer function HRTF of a sound wave emitted by a sound source transmitted to the left ear of the three-dimensional head model, and a 2nd HRTF of the sound wave transmitted to the right ear of the three-dimensional head model; and
a processing module 403, configured to process, according to the first HRTF and the 2nd HRTF, the sound signal emitted by the sound source.
Optionally, the acquisition module 401 is configured to obtain the three-dimensional head model of the terminal user by scanning with the terminal.
Optionally, the computing module 402 includes: a first computing unit, configured to calculate the first acoustic pressure of the sound wave transmitted to the left ear, the second acoustic pressure of the sound wave transmitted to the right ear, and, after the three-dimensional head model is removed, the third acoustic pressure produced by the sound wave at the original position of the three-dimensional head model; a second computing unit, configured to calculate the first HRTF according to the first acoustic pressure and the third acoustic pressure; and a third computing unit, configured to calculate the 2nd HRTF according to the second acoustic pressure and the third acoustic pressure.
Optionally, the first computing unit is configured to: when the sound wave is transmitted directly to the left ear, calculate the first acoustic pressure of the sound wave transmitted to the left ear; and when the sound wave reaches the left ear through transmission and reflection, calculate the acoustic pressure after each reflection of the sound wave in the order in which the reflections occur, wherein the first acoustic pressure of the sound wave transmitted to the left ear is calculated from the successively calculated acoustic pressures that remain above a preset strength threshold.
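The stopping rule described above — accumulate the reflected contributions in order, counting only those still above the preset strength threshold — can be sketched as follows; the constant per-bounce attenuation factor is a hypothetical stand-in for the actual reflection computation on the head model:

```python
def accumulate_reflections(direct_pressure, bounce_attenuation, threshold):
    """Sum the direct contribution and successive reflected contributions
    in the order the reflections occur, stopping once a contribution
    falls below the preset strength threshold."""
    total = 0.0
    p = direct_pressure
    while p > threshold:
        total += p                 # this contribution is still strong enough
        p *= bounce_attenuation    # each further bounce is weaker
    return total

# 1.0 and 0.5 exceed the 0.3 threshold; the next bounce (0.25) does not,
# so accumulation stops there.
total = accumulate_reflections(1.0, 0.5, 0.3)
```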
Optionally, the second computing unit is configured to calculate the first HRTF according to the formula HL = PL(rL, θL, φL, f, a) / PO(rO, f), wherein HL represents the first HRTF, PL(rL, θL, φL, f, a) represents the first acoustic pressure, PO(rO, f) represents the third acoustic pressure, rL represents the distance between the sound source and the left ear, θL represents the horizontal angle of the sound source relative to the left ear, φL represents the elevation angle of the sound source relative to the left ear, f represents the sound wave frequency, a represents the physiological parameters of the three-dimensional head model, and rO represents the distance between the sound source and the center position of the three-dimensional head model.
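The per-ear geometry parameters rL, θL and the elevation angle can be derived from Cartesian positions of the source and the ear. A small sketch under assumed axis conventions (x forward, y to the left, z up — the conventions and function name are not specified by the patent):

```python
import math

def ear_relative_coords(source, ear):
    """Distance, horizontal angle and elevation angle of a sound source
    relative to an ear, both given as (x, y, z) positions in metres."""
    dx, dy, dz = (s - e for s, e in zip(source, ear))
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    theta = math.atan2(dy, dx)              # horizontal angle in the x-y plane
    phi = math.asin(dz / r) if r else 0.0   # elevation angle above the plane
    return r, theta, phi

# A source 1 m straight ahead of the ear: zero azimuth and zero elevation.
r, theta, phi = ear_relative_coords((1.0, 0.0, 0.0), (0.0, 0.0, 0.0))
```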
Optionally, the first computing unit is configured to: when the sound wave is transmitted directly to the right ear, calculate the second acoustic pressure of the sound wave transmitted to the right ear; and when the sound wave reaches the right ear through transmission and reflection, calculate the acoustic pressure after each reflection of the sound wave in the order in which the reflections occur, wherein the second acoustic pressure of the sound wave transmitted to the right ear is calculated from the successively calculated acoustic pressures that remain above a preset strength threshold.
Optionally, the third computing unit is configured to calculate the 2nd HRTF according to the formula HR = PR(rR, θR, φR, f, a) / PO(rO, f), wherein HR represents the 2nd HRTF, PR(rR, θR, φR, f, a) represents the second acoustic pressure, PO(rO, f) represents the third acoustic pressure, rR represents the distance between the sound source and the right ear, θR represents the horizontal angle of the sound source relative to the right ear, φR represents the elevation angle of the sound source relative to the right ear, f represents the sound wave frequency, a represents the physiological parameters of the three-dimensional head model, and rO represents the distance between the sound source and the center position of the three-dimensional head model.
The above are preferred embodiments of the present invention. It should be pointed out that a person of ordinary skill in the art may also make certain improvements and modifications without departing from the principle of the present invention, and such improvements and modifications also fall within the protection scope of the present invention.
Claims (14)
- 1. A processing method of a sound signal, characterized by comprising: obtaining a three-dimensional head model of a terminal user; calculating a first head related transfer function HRTF of a sound wave emitted by a sound source transmitted to the left ear of the three-dimensional head model and a 2nd HRTF of the sound wave transmitted to the right ear of the three-dimensional head model; and processing, according to the first HRTF and the 2nd HRTF, the sound signal emitted by the sound source.
- 2. The processing method according to claim 1, characterized in that the step of obtaining the three-dimensional head model of the terminal user comprises: obtaining the three-dimensional head model of the terminal user by scanning with the terminal.
- 3. The processing method according to claim 1, characterized in that the step of calculating the first head related transfer function HRTF of the sound wave emitted by the sound source transmitted to the left ear of the three-dimensional head model and the 2nd HRTF of the sound wave transmitted to the right ear of the three-dimensional head model comprises: calculating the first acoustic pressure of the sound wave transmitted to the left ear, the second acoustic pressure of the sound wave transmitted to the right ear, and, after the three-dimensional head model is removed, the third acoustic pressure produced by the sound wave at the original position of the three-dimensional head model; calculating the first HRTF according to the first acoustic pressure and the third acoustic pressure; and calculating the 2nd HRTF according to the second acoustic pressure and the third acoustic pressure.
- 4. The processing method according to claim 3, characterized in that the step of calculating the first acoustic pressure of the sound wave transmitted to the left ear comprises: when the sound wave is transmitted directly to the left ear, calculating the first acoustic pressure of the sound wave transmitted to the left ear; and when the sound wave reaches the left ear through transmission and reflection, calculating the acoustic pressure after each reflection of the sound wave in the order in which the reflections occur, wherein the first acoustic pressure of the sound wave transmitted to the left ear is calculated from the successively calculated acoustic pressures that remain above a preset strength threshold.
- 5. The processing method according to claim 4, characterized in that the step of calculating the first HRTF according to the first acoustic pressure and the third acoustic pressure comprises: calculating the first HRTF according to the formula HL = PL(rL, θL, φL, f, a) / PO(rO, f), wherein HL represents the first HRTF, PL(rL, θL, φL, f, a) represents the first acoustic pressure, PO(rO, f) represents the third acoustic pressure, rL represents the distance between the sound source and the left ear, θL represents the horizontal angle of the sound source relative to the left ear, φL represents the elevation angle of the sound source relative to the left ear, f represents the sound wave frequency, a represents the physiological parameters of the three-dimensional head model, and rO represents the distance between the sound source and the center position of the three-dimensional head model.
- 6. The processing method according to claim 3, characterized in that the step of calculating the second acoustic pressure of the sound wave transmitted to the right ear comprises: when the sound wave is transmitted directly to the right ear, calculating the second acoustic pressure of the sound wave transmitted to the right ear; and when the sound wave reaches the right ear through transmission and reflection, calculating the acoustic pressure after each reflection of the sound wave in the order in which the reflections occur, wherein the second acoustic pressure of the sound wave transmitted to the right ear is calculated from the successively calculated acoustic pressures that remain above a preset strength threshold.
- 7. The processing method according to claim 6, characterized in that the step of calculating the 2nd HRTF according to the second acoustic pressure and the third acoustic pressure comprises: calculating the 2nd HRTF according to the formula HR = PR(rR, θR, φR, f, a) / PO(rO, f), wherein HR represents the 2nd HRTF, PR(rR, θR, φR, f, a) represents the second acoustic pressure, PO(rO, f) represents the third acoustic pressure, rR represents the distance between the sound source and the right ear, θR represents the horizontal angle of the sound source relative to the right ear, φR represents the elevation angle of the sound source relative to the right ear, f represents the sound wave frequency, a represents the physiological parameters of the three-dimensional head model, and rO represents the distance between the sound source and the center position of the three-dimensional head model.
- 8. A terminal, characterized by comprising: an acquisition module, configured to obtain a three-dimensional head model of a terminal user; a computing module, configured to calculate a first head related transfer function HRTF of a sound wave emitted by a sound source transmitted to the left ear of the three-dimensional head model and a 2nd HRTF of the sound wave transmitted to the right ear of the three-dimensional head model; and a processing module, configured to process, according to the first HRTF and the 2nd HRTF, the sound signal emitted by the sound source.
- 9. The terminal according to claim 8, characterized in that the acquisition module is configured to obtain the three-dimensional head model of the terminal user by scanning with the terminal.
- 10. The terminal according to claim 8, characterized in that the computing module comprises: a first computing unit, configured to calculate the first acoustic pressure of the sound wave transmitted to the left ear, the second acoustic pressure of the sound wave transmitted to the right ear, and, after the three-dimensional head model is removed, the third acoustic pressure produced by the sound wave at the original position of the three-dimensional head model; a second computing unit, configured to calculate the first HRTF according to the first acoustic pressure and the third acoustic pressure; and a third computing unit, configured to calculate the 2nd HRTF according to the second acoustic pressure and the third acoustic pressure.
- 11. The terminal according to claim 10, characterized in that the first computing unit is configured to: when the sound wave is transmitted directly to the left ear, calculate the first acoustic pressure of the sound wave transmitted to the left ear; and when the sound wave reaches the left ear through transmission and reflection, calculate the acoustic pressure after each reflection of the sound wave in the order in which the reflections occur, wherein the first acoustic pressure of the sound wave transmitted to the left ear is calculated from the successively calculated acoustic pressures that remain above a preset strength threshold.
- 12. The terminal according to claim 11, characterized in that the second computing unit is configured to calculate the first HRTF according to the formula HL = PL(rL, θL, φL, f, a) / PO(rO, f), wherein HL represents the first HRTF, PL(rL, θL, φL, f, a) represents the first acoustic pressure, PO(rO, f) represents the third acoustic pressure, rL represents the distance between the sound source and the left ear, θL represents the horizontal angle of the sound source relative to the left ear, φL represents the elevation angle of the sound source relative to the left ear, f represents the sound wave frequency, a represents the physiological parameters of the three-dimensional head model, and rO represents the distance between the sound source and the center position of the three-dimensional head model.
- 13. The terminal according to claim 10, characterized in that the first computing unit is configured to: when the sound wave is transmitted directly to the right ear, calculate the second acoustic pressure of the sound wave transmitted to the right ear; and when the sound wave reaches the right ear through transmission and reflection, calculate the acoustic pressure after each reflection of the sound wave in the order in which the reflections occur, wherein the second acoustic pressure of the sound wave transmitted to the right ear is calculated from the successively calculated acoustic pressures that remain above a preset strength threshold.
- 14. The terminal according to claim 13, characterized in that the third computing unit is configured to calculate the 2nd HRTF according to the formula HR = PR(rR, θR, φR, f, a) / PO(rO, f), wherein HR represents the 2nd HRTF, PR(rR, θR, φR, f, a) represents the second acoustic pressure, PO(rO, f) represents the third acoustic pressure, rR represents the distance between the sound source and the right ear, θR represents the horizontal angle of the sound source relative to the right ear, φR represents the elevation angle of the sound source relative to the right ear, f represents the sound wave frequency, a represents the physiological parameters of the three-dimensional head model, and rO represents the distance between the sound source and the center position of the three-dimensional head model.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610724585.0A CN107786936A (en) | 2016-08-25 | 2016-08-25 | The processing method and terminal of a kind of voice signal |
PCT/CN2017/082940 WO2018036194A1 (en) | 2016-08-25 | 2017-05-03 | Sound signal processing method, terminal, and computer storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610724585.0A CN107786936A (en) | 2016-08-25 | 2016-08-25 | The processing method and terminal of a kind of voice signal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107786936A true CN107786936A (en) | 2018-03-09 |
Family
ID=61245440
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610724585.0A Withdrawn CN107786936A (en) | 2016-08-25 | 2016-08-25 | The processing method and terminal of a kind of voice signal |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107786936A (en) |
WO (1) | WO2018036194A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110856095A (en) * | 2018-08-20 | 2020-02-28 | 华为技术有限公司 | Audio processing method and device |
CN110856094A (en) * | 2018-08-20 | 2020-02-28 | 华为技术有限公司 | Audio processing method and device |
CN111886882A (en) * | 2018-03-19 | 2020-11-03 | OeAW奥地利科学院 | Method for determining a listener specific head related transfer function |
CN112470497A (en) * | 2018-07-25 | 2021-03-09 | 杜比实验室特许公司 | Personalized HRTFS via optical capture |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1778143A (en) * | 2003-09-08 | 2006-05-24 | 松下电器产业株式会社 | Audio image control device design tool and audio image control device |
CN102413414A (en) * | 2010-10-13 | 2012-04-11 | 微软公司 | System and method for high-precision 3-dimensional audio for augmented reality |
CN105611481A (en) * | 2015-12-30 | 2016-05-25 | 北京时代拓灵科技有限公司 | Man-machine interaction method and system based on space voices |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105163242B (en) * | 2015-09-01 | 2018-09-04 | 深圳东方酷音信息技术有限公司 | A kind of multi-angle 3D sound back method and device |
- 2016-08-25: CN application CN201610724585.0A filed (CN107786936A), status: not active, withdrawn
- 2017-05-03: PCT application PCT/CN2017/082940 filed (WO2018036194A1), status: active, application filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1778143A (en) * | 2003-09-08 | 2006-05-24 | 松下电器产业株式会社 | Audio image control device design tool and audio image control device |
CN102413414A (en) * | 2010-10-13 | 2012-04-11 | 微软公司 | System and method for high-precision 3-dimensional audio for augmented reality |
CN105611481A (en) * | 2015-12-30 | 2016-05-25 | 北京时代拓灵科技有限公司 | Man-machine interaction method and system based on space voices |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111886882A (en) * | 2018-03-19 | 2020-11-03 | OeAW奥地利科学院 | Method for determining a listener specific head related transfer function |
CN112470497A (en) * | 2018-07-25 | 2021-03-09 | 杜比实验室特许公司 | Personalized HRTFS via optical capture |
US11778403B2 (en) | 2018-07-25 | 2023-10-03 | Dolby Laboratories Licensing Corporation | Personalized HRTFs via optical capture |
CN110856095A (en) * | 2018-08-20 | 2020-02-28 | 华为技术有限公司 | Audio processing method and device |
CN110856094A (en) * | 2018-08-20 | 2020-02-28 | 华为技术有限公司 | Audio processing method and device |
CN110856095B (en) * | 2018-08-20 | 2021-11-19 | 华为技术有限公司 | Audio processing method and device |
US11451921B2 (en) | 2018-08-20 | 2022-09-20 | Huawei Technologies Co., Ltd. | Audio processing method and apparatus |
US11611841B2 (en) | 2018-08-20 | 2023-03-21 | Huawei Technologies Co., Ltd. | Audio processing method and apparatus |
US11863964B2 (en) | 2018-08-20 | 2024-01-02 | Huawei Technologies Co., Ltd. | Audio processing method and apparatus |
US11910180B2 (en) | 2018-08-20 | 2024-02-20 | Huawei Technologies Co., Ltd. | Audio processing method and apparatus |
Also Published As
Publication number | Publication date |
---|---|
WO2018036194A1 (en) | 2018-03-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10939225B2 (en) | Calibrating listening devices | |
Cuevas-Rodríguez et al. | 3D Tune-In Toolkit: An open-source library for real-time binaural spatialisation | |
CN108616789B (en) | Personalized virtual audio playback method based on double-ear real-time measurement | |
CN102665156B (en) | Virtual 3D replaying method based on earphone | |
Katz | Boundary element method calculation of individual head-related transfer function. II. Impedance effects and comparisons to real measurements | |
CN111108555B (en) | Apparatus and methods for generating enhanced or modified sound field descriptions using depth-extended DirAC techniques or other techniques | |
Algazi et al. | Approximating the head-related transfer function using simple geometric models of the head and torso | |
Watanabe et al. | Dataset of head-related transfer functions measured with a circular loudspeaker array | |
CN107786936A (en) | The processing method and terminal of a kind of voice signal | |
KR20180108766A (en) | Rendering an augmented reality headphone environment | |
Meshram et al. | P-HRTF: Efficient personalized HRTF computation for high-fidelity spatial sound | |
CN105432097A (en) | Filtering with binaural room impulse responses with content analysis and weighting | |
CN105120418B (en) | Double-sound-channel 3D audio generation device and method | |
Fels et al. | Head-related transfer functions of children | |
EP1938655A1 (en) | Spatial audio simulation | |
Geronazzo et al. | A head-related transfer function model for real-time customized 3-D sound rendering | |
Mokhtari et al. | Computer simulation of KEMAR's head-related transfer functions: Verification with measurements and acoustic effects of modifying head shape and pinna concavity | |
Harder et al. | A framework for geometry acquisition, 3-D printing, simulation, and measurement of head-related transfer functions with a focus on hearing-assistive devices | |
Sunder | Binaural audio engineering | |
Liu et al. | An improved anthropometry-based customization method of individual head-related transfer functions | |
CN108540925B (en) | A kind of fast matching method of personalization head related transfer function | |
CN102802111B (en) | A kind of method and system for exporting surround sound | |
JP4226142B2 (en) | Sound playback device | |
Treeby et al. | The effect of hair on auditory localization cues | |
Oldfield | The analysis and improvement of focused source reproduction with wave field synthesis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | Application publication date: 20180309 ||