CN109413543B - Source signal extraction method, system and storage medium - Google Patents


Info

Publication number
CN109413543B
CN109413543B (application CN201710698651.6A)
Authority
CN
China
Prior art keywords
signal
input signals
input
interference signal
synchronizing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710698651.6A
Other languages
Chinese (zh)
Other versions
CN109413543A (en)
Inventor
张健钢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Incus Co ltd
Original Assignee
Incus Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Incus Co ltd filed Critical Incus Co ltd
Priority to CN201710698651.6A priority Critical patent/CN109413543B/en
Priority to PCT/CN2017/117813 priority patent/WO2019033671A1/en
Priority to EP17921701.3A priority patent/EP3672275A4/en
Publication of CN109413543A publication Critical patent/CN109413543A/en
Application granted granted Critical
Publication of CN109413543B publication Critical patent/CN109413543B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/20 Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32 Arrangements for obtaining desired directional characteristic only
    • H04R1/40 Arrangements for obtaining desired directional characteristic only by combining a number of identical transducers
    • H04R1/406 Arrangements for obtaining desired directional characteristic only by combining a number of identical microphones
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H04R3/005 Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/40 Arrangements for obtaining a desired directivity characteristic
    • H04R25/407 Circuits for combining signals of a plurality of transducers
    • H04R2225/00 Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/43 Signal processing in hearing aids to enhance the speech intelligibility
    • H04R2430/00 Signal processing covered by H04R, not provided for in its groups
    • H04R2430/20 Processing of the output signals of the acoustic transducers of an array for obtaining a desired directivity characteristic
    • H04R2430/23 Direction finding using a sum-delay beam-former
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0272 Voice signal separating

Landscapes

  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The present disclosure provides a method, system, and storage medium for continuously extracting a target interference signal from mixed signals. The method comprises the following steps: collecting two or more input signals, each of which contains the target interference signal; improving the independence of the input signals; calculating a coefficient matrix that improves the independence of the input signals; synchronizing each pair or group of input signals; separating the synchronized input signals into the target interference signal and a useful signal; and intelligently selecting the output signal.

Description

Source signal extraction method, system and storage medium
Technical Field
The present disclosure relates to the field of signal processing technologies, and in particular, to a signal processing technology for extracting an interference signal from a mixed signal.
Background
In the current signal processing and big data fields, measurements of an observed signal are often corrupted by unwanted signals, so improving the signal-to-noise ratio of the measured signal is a major challenge. The same problem arises in sound recording (e.g. studio recording, hearing aids, 360° audio equipment), biomedical applications (e.g. brain wave recording, brain imaging) and remote sensing (e.g. radar signals, echolocation). The most common way to eliminate such interference is a filter in analog or digital form. However, the desired signal and the interfering signal often share a frequency band, and a filter cannot easily separate them.
Current separation techniques operate hearing devices mainly by selectively adjusting signal proportions and focus on computing the coefficient matrix more efficiently, or combine a directional microphone with an omnidirectional microphone to enhance speech intelligibility. The traditional Independent Component Analysis (ICA) method, however, does not achieve an ideal effect: it removes interference signals unsatisfactorily, and the asynchrony between input channels degrades the accuracy of the ICA.
Therefore, there is an urgent need for a technique for effectively separating a desired signal from an interference signal.
BRIEF SUMMARY OF THE PRESENT DISCLOSURE
To address these problems in the prior art, the present method innovatively applies time-domain synchronization and related processing to the signals, simplifying the steps, solving the technical problem of incomplete signal separation, and removing interference signals with very high precision.
An aspect of the present disclosure is to provide a method for removing a target interference signal from mixed signals, the method including:
receiving a set of input signals, each input signal in the set of input signals containing both a desired signal and an interfering signal;
improving the independence of the input signals;
calculating a coefficient matrix for improving the independence of the input signals;
synchronizing the input signals;
separating the synchronized input signal into a channel containing a target interference signal and a channel not containing the target interference signal;
and intelligently selecting a proper frequency channel without the target interference signal as a signal output.
Another aspect of the present disclosure is to provide a system for removing a target interference signal from mixed signals, the system comprising:
a set of input devices for inputting two or more signals;
a processor; and
a memory storing computer readable instructions that, when executed by the processor, cause the processor to:
improving the independence of the input signals;
calculating a coefficient matrix obtained by improving the independence of the input signals in an input channel;
synchronizing the input signals;
separating the synchronized input signal into a channel containing a target interference signal and a channel not containing the target interference signal;
and intelligently selecting a proper frequency channel without the target interference signal as a signal output.
In another aspect, the present disclosure also provides a non-transitory computer storage medium storing computer readable instructions which, when executed by a processor, implement a method for removing a target interference signal from mixed signals, the method comprising:
receiving a set of input signals, each input signal in the set of input signals containing both a desired signal and an interfering signal;
improving the independence of the input signals;
calculating a coefficient matrix for improving the independence of the input signals;
synchronizing the input signals;
separating the synchronized input signal into a channel containing a target interference signal and a channel not containing the target interference signal;
and intelligently selecting a proper frequency channel without the target interference signal as a signal output.
The method and device can eliminate or weaken the effects of asynchrony and improve source-extraction performance; by continuously removing the interference signal even while the useful signal and the interference signal are in motion, they improve the perceptibility of the target signal.
Drawings
Embodiments of the present disclosure will now be described by way of example, and not by way of limitation, with reference to the accompanying drawings. The drawings are exemplary and are not drawn to scale. The same or similar elements in different drawings are denoted by the same reference numerals.
FIG. 1 is a flowchart of a method of removing a target interference signal from mixed signals according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of method one of synchronizing input signals according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of method two of synchronizing input signals according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of method three of synchronizing input signals according to an embodiment of the present disclosure;
FIG. 5 is a flowchart of method four of synchronizing input signals according to an embodiment of the present disclosure;
FIG. 6 is a block diagram illustrating a computer system for implementing embodiments of the present disclosure;
FIG. 7 is a schematic diagram of the positions of different sound sources relative to different sensors;
FIG. 8 shows the signal delay between two spaced sensors.
Detailed Description
Specific embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
FIG. 1 is a flowchart of a method 1000 of removing a target interference signal from input signals according to an embodiment of the present disclosure.
In step 100, n signal receiving devices are used to receive signals from m signal sources; the mixed signal received by each device is referred to as that device's input signal. In each input signal (a mixture of the signals emitted by the m sources), the signals emitted by one or more signal sources are designated useful signals, and the remaining signals are designated interference signals. A signal receiving device may be a sensor or a cloud platform. It may also be an input data interface connected to a storage unit in which signal data is stored in advance, with the interface receiving the signal data from the storage unit. Furthermore, each input signal may contain a plurality of interference signals that differ from one another; these interference signals may also be identical, and the disclosure is not limited in this respect. For example, an electronic listening device typically comprises at least two microphones, each of which receives a mixed signal consisting of a sound-emitting source (the desired signal) and ambient background sound (the interference signal). Because the microphones are usually placed at different locations, the desired signal and the interference signal are received by two or more microphones spaced apart from each other, so the ambient background sound received by different microphones differs in time domain and/or amplitude. The same holds in studio recording and/or 360° audio recording scenarios where the sound is measured with two or more microphones. As another example, a brain-computer interface device generally includes at least two electrodes, each of which receives a mixed signal comprising a brain wave source signal and an interference signal; since the electrodes are usually placed at different positions, the ambient noise received by different electrodes likewise differs in time domain and/or amplitude. Similarly, in an underwater echo detection scenario, the echo receiving device typically includes at least two sensors, each of which receives a mixture of the acoustic source and ambient noise, and the ambient noise received by different sensors again differs in time domain and/or amplitude. Assume there are two different sensors Mi, Mj and a number of different signal sources S1, S2, …, Sn; each source propagates to sensors Mi and Mj with a different amplitude a and a different time delay, so the received signals are:
Mi = a1i·S1(t - t1i) + a2i·S2(t - t2i) + … + ani·Sn(t - tni)
Mj = a1j·S1(t - t1j) + a2j·S2(t - t2j) + … + anj·Sn(t - tnj)
Similarly, the signals received by other sensors can be analogized by the same formula.
For simplicity of presentation, FIG. 7 illustrates the positions of two sensors and two signal sources in two dimensions. Note that the two-dimensional in-plane representation is only for ease of explanation; all positions extend to one, three or higher dimensions. Taking acoustic signals as an example, assume that S1 and S2 are two sound sources and that M1 and M2 are microphones. Let the sound propagation velocity be v and the sampling rate of the sensors be Fs. The travel time (in samples) from a source to a sensor can then be expressed as:
tij = Fs · dis{Si, Mj} / v (1)
In one example, v is 34,029 cm/s and Fs is 44.1 kHz.
Ideally, the energy of the sound decreases inversely with the distance, so the sound signal received by sensor Mj can be represented as:
Mj = Σi Si(t - tij) / dis{Si, Mj} (2)
With specific reference to FIG. 7, the formula is written out below; note that, for simplicity, all constant terms have been reduced to 1.
M1real = S1(t - t11) / dis{S1, M1} + S2(t - t21) / dis{S2, M1}
M2real = S1(t - t12) / dis{S1, M2} + S2(t - t22) / dis{S2, M2} (3)
In practice, S1, S2 and the coefficient matrix on the right-hand side of the formula are unknown; M1real and M2real on the left-hand side are the mixed signals received by the M1 and M2 microphones. Next, in step 200, the coefficient matrix is decomposed to restore a portion of the mixed signal to a useful signal.
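To make the mixing model of equations (1)-(3) concrete, the following minimal Python sketch (not part of the patent; the sample rate, distances and source waveforms are illustrative assumptions) builds two microphone mixtures from two sources:

import numpy as np

# Illustrative geometry and signal parameters (assumed values)
Fs = 44100          # sampling rate, Hz
v = 34029.0         # speed of sound, cm/s
dist = np.array([[30.0, 40.0],    # dist[i][j] = dis{Si, Mj} in cm
                 [50.0, 35.0]])

t = np.arange(Fs)                               # one second of samples
s1 = np.sin(2 * np.pi * 440 * t / Fs)           # source S1: 440 Hz tone
s2 = np.sign(np.sin(2 * np.pi * 97 * t / Fs))   # source S2: 97 Hz square wave

def delayed(s, n):
    # Shift s right by n samples, zero-padding the front
    out = np.zeros_like(s)
    out[n:] = s[:len(s) - n]
    return out

# Equation (1): integer sample delay tij = Fs * dis{Si, Mj} / v
delay = np.rint(Fs * dist / v).astype(int)

# Equation (3): amplitude falls off as 1/distance, constants set to 1
m1 = delayed(s1, delay[0, 0]) / dist[0, 0] + delayed(s2, delay[1, 0]) / dist[1, 0]
m2 = delayed(s1, delay[0, 1]) / dist[0, 1] + delayed(s2, delay[1, 1]) / dist[1, 1]

Here m1 and m2 play the roles of M1real and M2real; the task of steps 200-500 is to recover the sources from them.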
In step 200, the independence of the mixed signals is improved by decomposing the coefficient matrix; preferably, their independence is maximized. The premise of this embodiment is that each signal source is independent. By the central limit theorem (the probability distribution of a sum of several independent variables is closer to normal than the distribution of each individual variable), the probability distribution of the mixed signal is closer to normal than that of each signal source. Therefore, in this embodiment the coefficient matrix is decomposed by statistically driving the probability distribution of the recovered signals as far from the normal distribution as possible, which improves the independence of the signal sources. Specifically, the coefficient matrix parameters are treated as dependent variables, an objective function is set to measure how close the variables are to a normal distribution, and the optimal parameters at which the objective function converges are computed, yielding the decomposition parameter matrix.
For example, step 200 may select the following function as the objective for measuring whether a variable is close to a normal distribution:
kurt(y) = E{y⁴} - 3(E{y²})² (4)
E{·} denotes the expectation and y is the mixed signal. An objective value of 0 means that the probability distribution of y is normal. Of course, kurtosis may be replaced by other measures of deviation from a normal distribution, and the present disclosure is not limited in this respect.
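As a quick numeric check of equation (4), the following sketch (assumed test data, not an example from the patent) shows that the measure is near zero for Gaussian samples and clearly positive for a super-Gaussian (Laplacian) sample:

import numpy as np

def kurt(y):
    # Excess kurtosis per equation (4): E{y^4} - 3 * (E{y^2})^2
    return np.mean(y ** 4) - 3 * np.mean(y ** 2) ** 2

rng = np.random.default_rng(0)
print(kurt(rng.standard_normal(100000)))   # close to 0 for a normal distribution
print(kurt(rng.laplace(size=100000)))      # clearly positive for a Laplacian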
For this measure, the objective function can be rewritten as:
J(y) ∝ [E{G(y)} - E{G(v)}]² (5)
Here G is a non-quadratic function and v is a Gaussian variable with zero mean and unit variance. The coefficient matrix parameters are used as dependent variables, formula (5) is used as the objective function, and the optimal parameters at which the objective function converges, namely the decomposition parameter matrix, are found by Newton's iteration. The calculation is briefly listed below:
1. Choose an initial (e.g. random) weight vector w.
2. Let w+ = E{x g(wᵀx)} - E{g′(wᵀx)} w
3. Let w = w+ / ||w+||
4. If not converged, go back to 2.
where g is the derivative of G.
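The four steps above are the classic one-unit FastICA fixed-point iteration. The sketch below assumes the input x is already whitened, with shape (channels, samples), and uses the illustrative choice G(u) = u^4/4, so that g(u) = u^3 and g'(u) = 3u^2; none of these choices are mandated by the patent:

import numpy as np

def fastica_one_unit(x, max_iter=200, tol=1e-6, seed=0):
    # One-unit FastICA fixed-point iteration on whitened data x (n_ch, n_samp)
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(x.shape[0])
    w /= np.linalg.norm(w)                        # step 1: random unit vector
    for _ in range(max_iter):
        wx = w @ x                                # projections w^T x
        g, g_prime = wx ** 3, 3 * wx ** 2         # g(u) = u^3, g'(u) = 3u^2
        w_new = (x * g).mean(axis=1) - g_prime.mean() * w   # step 2
        w_new /= np.linalg.norm(w_new)            # step 3
        if abs(abs(w_new @ w) - 1) < tol:         # step 4: converged?
            return w_new
        w = w_new
    return w

Projecting the whitened mixtures onto the returned w extracts one maximally non-Gaussian component; repeating with deflation recovers the remaining rows of the decomposition matrix.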
In step 300, the input signals are synchronized in the time domain. This step can be implemented in four different ways, described in detail below with reference to FIGS. 2, 3, 4 and 5.
As shown in FIG. 2, step 3101 intercepts two or more discrete segments of the interference signal, the duration of each segment being controlled to n milliseconds. If the signal is an audio signal, n needs to be greater than 0.98 ms and less than 20.03 ms. With the duration n controlled within this interval, accuracy is ensured while no echo is audible to a human listener, so the real-time processing effect and the user's listening experience are best.
Preferably, step 3101 continuously intercepts each discrete segment of the mixed signal in real time, so that the method of this embodiment can process the signal in real time.
Then, for the mixed signal in each discrete segment interval, pattern recognition judges whether the segment contains the target interference signal, and the target interference signal is extracted. For example, in the acoustic case with two sound sources, a male and a female voice, if the target interference signal is the male voice, pattern recognition determines whether each n-millisecond discrete segment of the mixed signal is a male voice; if so, the segment is extracted for the next step. If the interference signal is instead the female voice, the segments judged to be female voice are extracted. As another example, the two sound sources may be human and non-human. Those skilled in the art will appreciate that other reasonable criteria are possible.
The interference signal detection of step 3101 can be implemented by detecting, within n milliseconds, an interference signal transitioning from low level to high level (i.e. the interference signal begins with a step response) or from high level to low level. For example, if a man's voice is set as the interference signal, the man need not speak a whole word: detecting n milliseconds in which his voice is present suffices to identify it as the interference signal.
In step 3102, a discrete-time convolution of the two detected interference signal segments is computed to obtain their time delay. Assuming that the two mixed signals are x and y respectively, the correlation between the two signals is calculated as:
r = Σi (x(i) - mx)(y(i - d) - my) / √( Σi (x(i) - mx)² · Σi (y(i - d) - my)² ) (6)
where mx is the mean of x, my is the mean of y, and d is the time delay; the numerator of this formula is the discrete-time convolution.
By evaluating different values of d, i.e. different time delays, the correlation becomes a function of d:
r(d) = Σi (x(i) - mx)(y(i - d) - my) / √( Σi (x(i) - mx)² · Σi (y(i - d) - my)² ) (7)
Based on this, the time delay is taken as the value of d at which r(d) attains its maximum.
In step 3103, the input signals are synchronized based on the acquired time delay d. For example, if the time delay between the first interference signal detected in the first input signal f1(t) and the second interference signal detected in the second input signal f2(t) is denoted δ, the first input signal f1(t) is delayed by δ, i.e. corrected to f1(t - δ), thereby synchronizing it with the second input signal f2(t). In another embodiment, if the time delay between the interference signal detected in the first input signal f1(t) and the interference signal detected in the second input signal f2(t) is -δ, the first input signal f1(t) is corrected to f1(t + δ). Since the interference signal segments are continuously monitored in real time in this embodiment, the method can continuously update the iterated time delay when the signal sources and sensors move independently or relatively, dynamically tracking changes in the interference signal.
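A minimal sketch of steps 3102 and 3103 under equations (6) and (7) follows (the function names and the candidate-delay range are illustrative assumptions): slide one detected interference segment against the other, keep the d that maximizes r(d), and shift the input signal accordingly:

import numpy as np

def estimate_delay(x, y, max_lag):
    # Return the integer lag d in [-max_lag, max_lag] maximizing r(d) of eq. (7)
    x = x - x.mean()
    y = y - y.mean()
    best_d, best_r = 0, -np.inf
    for d in range(-max_lag, max_lag + 1):
        ys = np.roll(y, d)                        # y(i - d), circular for simplicity
        r = (x * ys).sum() / np.sqrt((x ** 2).sum() * (ys ** 2).sum())
        if r > best_r:
            best_d, best_r = d, r
    return best_d

def synchronize(f1, delta):
    # Correct f1(t) to f1(t - delta); a negative delta yields f1(t + |delta|)
    out = np.zeros_like(f1)
    if delta >= 0:
        out[delta:] = f1[:len(f1) - delta]
    else:
        out[:len(f1) + delta] = f1[-delta:]
    return out

Because the segments are re-detected continuously, estimate_delay can be re-run on each new pair of segments, which is what lets the method track moving sources.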
Referring to FIG. 3, in step 3201, because the sensors are positioned at different locations, the interference signals are received by two or more sensors spaced apart from each other. In this embodiment, the position of each interference signal relative to the sensors, i.e. the relative delay of each interference signal, is calculated in advance; one of the interference signals is then selected based on these relative delays. The interference signal may also be selected by the user in real time.
Preferably, assume that the distance from the signal source to sensor 1 is d1, the distance from the signal source to sensor 2 is d2, the signal sampling rate is Fs, and the signal propagation speed is v. The relative delay dir is calculated as:
dir = Fs · (d1 - d2) / v (8)
Assuming that the distance between the sensors is d, the maximum delay Max(dir) is calculated as:
Max(dir) = Fs · d / v (9)
if the result is not an integer, the integer is obtained by rounding. Then all directions are: -max (dir), …, -1,0,1, …, max (dir).
With particular reference to the distances in FIG. 8, assume a sampling rate Fs of 48 kHz, a distance d between the two transducers of 2.47 cm (the signals in this example are acoustic, so the transducers are microphones), and a speed v of sound propagation in air of 340 m/s; the maximum delay is then 3. The space can accordingly be divided into 7 regions, with delays of -3, -2, -1, 0, 1, 2 and 3 respectively. In the example of FIG. 8, if the interference signal is expected to come from the region with delay -3, the delay is fixed to -3.
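The region bookkeeping of equations (8) and (9), using the numbers from FIG. 8, can be sketched as follows (variable names are illustrative):

Fs = 48000       # sampling rate, Hz
d = 0.0247       # microphone spacing, m
v = 340.0        # speed of sound in air, m/s

max_dir = round(Fs * d / v)                     # equation (9), rounded to an integer
regions = list(range(-max_dir, max_dir + 1))    # all resolvable relative delays
print(max_dir, regions)                         # 3 [-3, -2, -1, 0, 1, 2, 3]

Fixing the delay to one of these values (e.g. -3) selects the corresponding spatial region as the interference region.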
Referring to FIG. 3, in step 3202, the time delay is extracted according to the interference signal region selected by the user in real time or preset in advance.
Referring to FIG. 3, in step 3203, synchronization is performed according to the time delay extracted in step 3202, as in step 3103.
Referring to FIG. 4, this embodiment selects the interference signals over all relative delays. In step 3301, all time delays are analyzed based on the signal type (e.g., sound), the sensor distances, and the signal propagation speed.
Referring to FIG. 4, in step 3302, all possible time delays τ1, τ2, …, τn are extracted.
Referring to FIG. 4, in step 3303, the synchronization process of step 3103 is repeated for each different time delay.
Referring to FIG. 5, in step 3401, a useful signal direction is selected in real time by the user or preset.
Referring to FIG. 5, in step 3402, the time delays for these directions are calculated.
Referring to FIG. 5, building on the method of FIG. 4 for obtaining all signal directions, in step 3403 the time delays of the useful signals are excluded from all possible directions, and the synchronization process of step 3103 is repeated for each of the remaining time delays.
Referring again to FIG. 1, in step 400, the synchronized input signals are separated into a frequency channel containing the target interference signal and a frequency channel not containing the target interference signal. Preferably, step 400 is performed by multiplying the synchronized signal matrix by the coefficient matrix determined in step 200.
For example, referring to the example of step 100, assume that the mixed signal consists of:
[equation (10): the numeric mixed-signal composition, shown as an image in the original]
After the coefficient matrix obtained in step 200 is multiplied by the synchronized signal matrix:
[equation (11): the product of the coefficient matrix and the synchronized signal matrix, shown as an image in the original]
From this product, two channels are generated, one of which is approximately
0.04·S1 + 0.96·S2 (12)
In other words, this channel consists of 96% of S2 and 4% of S1; if the target interference signal is S1, this channel is selected and output. Thus, in this example, the separation effect after synchronization reaches 96%.
How to select between these two channels is expanded in step 500.
Similarly, if the target interference signal is S2, the mixed signal synchronized with respect to S2 is multiplied by the coefficient matrix, and the appropriate channel is selected for output.
Referring to FIG. 1, in step 500, of the two channels obtained in step 400, the channel with the relatively low signal energy may be selected as the output channel according to the relative signal energies. The energy of a signal may be calculated as its root mean square value. This selection is applied to the frequency channel with the target interference signal and the frequency channel without the target interference signal obtained in step 400.
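A minimal sketch of steps 400 and 500 together (the coefficient matrix and the synchronized inputs below are stand-in values, not the patent's worked example): unmix the synchronized signals with the coefficient matrix, then keep the channel with the lower root-mean-square energy:

import numpy as np

rng = np.random.default_rng(1)
W = np.array([[0.7, -0.7],                 # stand-in coefficient matrix from step 200
              [0.6,  0.8]])
m_sync = rng.standard_normal((2, 1000))    # stand-in for the synchronized inputs

y = W @ m_sync                             # step 400: two candidate channels

def rms(s):
    # Signal energy as the root mean square value
    return np.sqrt(np.mean(s ** 2))

out = y[0] if rms(y[0]) < rms(y[1]) else y[1]   # step 500: keep the low-energy channel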
Further, in the embodiments of FIGS. 4 and 5, an output channel is generated for each time delay. In the embodiment of FIG. 4, the optimal channel is selected as the signal output based on feature detection (e.g., the channel with the smallest target-interference component); in the embodiment of FIG. 5, the optimal channel may be selected according to signal energy (e.g., the channel in which the target interference signal has the lowest energy).
Preferably, after the interference signal is separated in step 500, the separated useful signal and interference signal may be further processed, for example by frequency-domain enhancement. In hearing aid applications, for instance, the separated useful audio signal may undergo personalized frequency-domain enhancement.
In one embodiment, the present disclosure provides an apparatus comprising a processor and a human interaction interface. The apparatus may also include, but is not limited to, a memory, a controller, an input-output module, and an information receiving module. The processor is configured to perform the above steps 100, 200 and 3201-3203. The user selects in real time, through the human interaction interface, which area he wishes to treat as the interference signal area. The human interaction interface includes, but is not limited to, a voice receiving module, a sensor, a video receiving module, a touch screen, a keyboard, buttons, knobs, a projection interface, or a virtual 3D interface. The user may select in real time through the interface by voice command or by different gestures or actions, choosing among differently identified areas; when the interface is a touch screen, the user can tap a given area. The disclosure thus provides a user-controllable machine for removing interference signals, whose delay can be adjusted in real time.
Steps 100-400 above may occur in an order different from that depicted in the figures. For example, the order of step 100 and the second embodiment of step 300 (i.e., steps 3201-3203) may be interchanged. As another example, in practical applications any two of steps 100-400 may be executed in parallel or in reverse order, depending on the functions involved.
Preferably, step 200 is performed before step 300, i.e., the coefficient matrix is calculated before the input signals are synchronized in the time domain. The advantage is that the coefficient matrix need not be recalculated for each different time delay, saving a large amount of computation. In particular, in the embodiments of FIGS. 4 and 5, the coefficient matrix needs to be calculated only once. Meanwhile, numerous experiments conducted for the present disclosure show that the coefficient matrix calculated from the synchronized mixed signal is comparable to that calculated from the original mixed signal; the method therefore saves substantial computation without losing coefficient-matrix precision.
Preferably, in step 100, after the input signals are received by the signal receiving devices, whether to remove the input signal received by one or more of the devices is decided according to a judgment condition.
In one embodiment, the input signal is an acoustic signal and the signal receiving devices are acoustic signal receiving devices (e.g. microphones). When the judgment condition Fs · X / V < L/3 holds (where L is the length of the intercepted discrete signal, X is the distance between any two acoustic receiving devices, V is the signal propagation speed, and Fs is the sampling rate), the acoustic signal received by one of the two devices is removed. This reduces the amount of data to be computed without affecting the accuracy of pattern recognition, improving computational efficiency and reducing power consumption.
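A small sketch of this judgment condition (the helper name and the example numbers are assumptions for illustration):

def can_drop_receiver(Fs, X, V, L):
    # True if Fs * X / V < L / 3: the inter-receiver delay is small relative
    # to the intercepted segment, so one acoustic receiver is redundant
    return Fs * X / V < L / 3

# Example: 44.1 kHz, receivers 2 cm apart, sound at 340 m/s, 512-sample segments
print(can_drop_receiver(44100, 0.02, 340.0, 512))   # 2.59 < 170.7, so True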
The signals include audio signals, image signals, electromagnetic signals, brain wave signals, electrical signals, radio wave signals, and other forms of signals receivable by a sensor; the present disclosure is not limited in this respect.
The method and device can greatly improve the perceptibility of the target signal and reduce the computational cost. In addition, because the input signals are synchronized in the time domain, the method of the present disclosure minimizes frequency distortion.
Fig. 6 is a schematic structural diagram of a computer system 3000 suitable for implementing the above embodiments of the present disclosure.
As shown in FIG. 6, the computer system 3000 includes a central processing unit (CPU) 3001, which can perform various suitable operations and processes in accordance with program instructions stored in an electrically programmable read-only memory (EPROM) 3002 or a random access memory (RAM) 3003. The RAM 3003 may also store the programs and data necessary to operate the system 3000. The CPU 3001, the EPROM 3002 and the RAM 3003 are interconnected by a bus 3004. An input/output (I/O) interface 3005 is also connected to the bus 3004, as is a direct memory access interface that speeds up data exchange.
The input/output (I/O) interface 3005 is also connected to the following elements: a removable data storage 3007 including USB memory, solid-state disks, etc.; a wireless data transmission line 3008 including local area network (LAN), Bluetooth and near field communication (NFC) devices; and a signal converter 3009 connected to the data input path 3010 and the data output path 3011. According to another embodiment of the present disclosure, the processes in the above flowcharts may be implemented by an embedded computer system similar to the computer system 3000 but without a keyboard, mouse and hard disk. The wireless data transmission line 3008 or the removable data storage 3007 facilitates program update and upgrade.
The processor may be a cloud processor, and the memory may be a cloud memory.
Further, according to still another embodiment of the present disclosure, the processes in the above flowcharts may be implemented as a computer software program. For example, embodiments of the disclosure provide a computer program product comprising a computer program stored on a tangible machine-readable medium, the program comprising program code for performing the method illustrated in the flowcharts. In this embodiment, the computer program may be downloaded and installed over the wireless data transmission line 3008 and/or installed from the removable data storage 3007.
The flowchart and block diagrams in the figures illustrate the architecture, functionality and operation of systems, methods and computer program products according to various embodiments of the present disclosure. Each block in the flowchart and block diagrams represents a module, segment, or unit of code comprising one or more executable instructions for implementing the specified logical function(s). It should be noted that in some preferred embodiments the functions noted in the blocks may occur out of the order noted in the figures; for example, the operations shown in two connected blocks may in fact be executed in parallel or in reverse order, depending on the functions involved. Each block of the flowchart illustrations and/or block diagrams, and combinations of such blocks, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
The units or modules involved in the embodiments of the present disclosure may be implemented in software or hardware and may be installed in a processor. Their names do not limit the units or modules themselves.
In another aspect, the present disclosure also provides a computer-readable storage medium. The computer-readable storage medium may be installed in the instrument device of the above embodiments, or it may be a standalone computer-readable storage medium not fitted in the instrument device. The computer-readable storage medium stores one or more programs that are executed by one or more processors to implement the method of the present disclosure for separating a target signal from a noise signal.
The foregoing describes the present disclosure in further detail in connection with specific preferred embodiments, but the specific embodiments of the present disclosure are not limited to these descriptions. Those skilled in the art may make several simple deductions or substitutions without departing from the concept of the disclosure, and these should be considered to fall within its protection scope.

Claims (12)

1. A method for removing a target interference signal from mixed signals, the method comprising:
receiving a set of input signals, each input signal in the set of input signals containing both a desired signal and an interfering signal;
improving the independence of the input signals by maximizing the non-Gaussianity of the input signals through independent component analysis;
calculating a coefficient matrix for improving the independence of the input signals;
synchronizing the input signals in the time domain;
separating the synchronized input signals into a frequency channel containing a target interference signal and a frequency channel without the target interference signal by multiplying the synchronized signal matrix by the coefficient matrix; and
intelligently selecting a frequency channel without the target interference signal as a signal output.
2. The method of claim 1, wherein the operation of synchronizing the input signals comprises:
detecting an interference signal segment in each input signal;
performing discrete time convolution operation on every two detected interference signal segments to obtain relative time delay;
synchronizing the input signals based on the acquired time delays;
selecting a preferred signal direction marked as an interference signal;
calculating a relative time delay of the interference signal from the preferred direction;
synchronizing the input signals based on a preset time delay;
selecting all possible signal directions marked as interfering signals;
predicting a series of time delays, denoted τ1, τ2, …, τn;
synchronizing the input signals based on a series of time delays;
selecting a signal entry direction that is marked as a useful signal;
determining time delays of the interfering signals from the remaining directions;
synchronizing the interfering signal based on the determined time delay.
3. The method of claim 1, wherein the synchronization of the input signals is continuously adjustable to accommodate a motion state of the signal source.
4. The method of claim 1, wherein the input signals are taken from spaced apart locations.
5. The method of claim 2, wherein the detecting of the interference signal segments in each input signal comprises: the interference signal segments in each input signal are detected by pattern recognition.
6. The method of claim 1, wherein the input signal is a signal received by a sensor.
7. A system for removing a target interference signal from mixed signals, comprising:
a set of input devices for inputting a set of input signals;
a processor; and
a memory storing computer readable instructions that, when executed by the processor, cause the processor to:
maximizing the non-Gaussianity of the input signals through independent component analysis, thereby improving the independence of the input signals;
calculating a coefficient matrix obtained by improving the independence of the input signals in an input channel;
synchronizing the input signals in the time domain;
separating the synchronized input signal into a frequency channel containing a target interference signal and a frequency channel without the target interference signal through multiplication operation of the synchronized signal matrix and the coefficient matrix;
and intelligently selecting a frequency channel without the target interference signal as a signal output.
8. The system of claim 7, wherein the synchronizing the input signal comprises:
detecting an interference signal segment in each input signal;
performing discrete time convolution operation on every two detected interference signal segments to obtain relative time delay;
synchronizing the input signals based on the acquired time delays;
selecting a preferred signal direction marked as an interference signal;
calculating a relative time delay of the interference signal from the preferred direction;
synchronizing the input signals based on a preset time delay;
selecting all possible signal directions marked as interfering signals;
predicting a series of time delays, denoted τ1, τ2, …, τn;
synchronizing the input signals based on a series of time delays;
selecting a signal entry direction that is marked as a useful signal;
determining time delays of the interfering signals from the remaining directions;
synchronizing the interfering signal based on the determined time delay.
9. The system of claim 7, wherein the input signals are taken from spaced apart locations.
10. The system of claim 8, wherein the detecting of the interference signal segments in each input signal includes: the interference signal segments in each input signal are detected by pattern recognition.
11. The system of claim 7, wherein the input signal is a signal received by a sensor.
12. A non-transitory computer readable storage medium having stored thereon instructions that, when executed by a processor, perform a method for separating a target interference signal from mixed signals, the method comprising:
receiving a set of input signals, each of the input signals containing a target interference signal;
maximizing the non-Gaussianity of the input signals through independent component analysis, thereby improving the independence of the input signals;
calculating a coefficient matrix for improving the independence of the input signals;
synchronizing the input signals in the time domain;
separating the synchronized input signal into a frequency channel containing a target interference signal and a frequency channel without the target interference signal through multiplication operation of the synchronized signal matrix and the coefficient matrix;
and intelligently selecting a frequency channel without the target interference signal as a signal output.
CN201710698651.6A 2017-08-15 2017-08-15 Source signal extraction method, system and storage medium Active CN109413543B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201710698651.6A CN109413543B (en) 2017-08-15 2017-08-15 Source signal extraction method, system and storage medium
PCT/CN2017/117813 WO2019033671A1 (en) 2017-08-15 2017-12-21 Method and system for extracting source signal, and storage medium
EP17921701.3A EP3672275A4 (en) 2017-08-15 2017-12-21 Method and system for extracting source signal, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710698651.6A CN109413543B (en) 2017-08-15 2017-08-15 Source signal extraction method, system and storage medium

Publications (2)

Publication Number Publication Date
CN109413543A CN109413543A (en) 2019-03-01
CN109413543B 2021-01-19

Family

ID=65362112

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710698651.6A Active CN109413543B (en) 2017-08-15 2017-08-15 Source signal extraction method, system and storage medium

Country Status (3)

Country Link
EP (1) EP3672275A4 (en)
CN (1) CN109413543B (en)
WO (1) WO2019033671A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2430975A1 (en) * 2010-09-17 2012-03-21 Stichting IMEC Nederland Principal component analysis or independent component analysis applied to ambulatory electrocardiogram signals
CN103083012A (en) * 2012-12-24 2013-05-08 太原理工大学 Atrial fibrillation signal extraction method based on blind source separation
CN103426434A (en) * 2012-05-04 2013-12-04 索尼电脑娱乐公司 Source separation by independent component analysis in conjunction with source direction information
CN103426435A (en) * 2012-05-04 2013-12-04 索尼电脑娱乐公司 Source separation by independent component analysis with moving constraint
CN104053107A (en) * 2014-06-06 2014-09-17 重庆大学 Hearing aid device and method for separating and positioning sound sources in noise environments
CN105640500A (en) * 2015-12-21 2016-06-08 安徽大学 Scanning signal feature extraction method based on independent component analysis and recognition method
WO2017084397A1 (en) * 2015-11-19 2017-05-26 The Hong Kong University Of Science And Technology Method, system and storage medium for signal separation
CN107025446A (en) * 2017-04-12 2017-08-08 北京信息科技大学 A kind of vibration signal combines noise-reduction method

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100372277C (en) * 2006-02-20 2008-02-27 东南大学 Space time separation soft inputting and outputting detecting method based on spatial domain prewhitening mergence
CN100495388C (en) * 2006-10-10 2009-06-03 深圳市理邦精密仪器有限公司 Signal processing method using space coordinates convert for realizing signal separation
US9100734B2 (en) * 2010-10-22 2015-08-04 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for far-field multi-source tracking and separation
CN102571296B (en) * 2010-12-07 2014-09-03 华为技术有限公司 Precoding method and device
JP2012234150A (en) * 2011-04-18 2012-11-29 Sony Corp Sound signal processing device, sound signal processing method and program
JP2014045793A (en) * 2012-08-29 2014-03-17 Sony Corp Signal processing system, signal processing apparatus, and program
CN102868433B (en) * 2012-09-10 2015-04-08 西安电子科技大学 Signal transmission method based on antenna selection in multiple-input multiple-output Y channel
CN103197183B (en) * 2013-01-11 2015-08-19 北京航空航天大学 A kind of method revising Independent component analysis uncertainty in electromagnetic interference (EMI) separation
CN104091356A (en) * 2014-07-04 2014-10-08 南京邮电大学 X-ray medical image objective reconstruction based on independent component analysis
CN105996993A (en) * 2016-04-29 2016-10-12 南京理工大学 System and method for intelligent video monitoring of vital signs
CN106356075B (en) * 2016-09-29 2019-09-17 合肥美的智能科技有限公司 Blind sound separation method, structure and speech control system and electric appliance assembly

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2430975A1 (en) * 2010-09-17 2012-03-21 Stichting IMEC Nederland Principal component analysis or independent component analysis applied to ambulatory electrocardiogram signals
CN103426434A (en) * 2012-05-04 2013-12-04 索尼电脑娱乐公司 Source separation by independent component analysis in conjunction with source direction information
CN103426435A (en) * 2012-05-04 2013-12-04 索尼电脑娱乐公司 Source separation by independent component analysis with moving constraint
CN103083012A (en) * 2012-12-24 2013-05-08 太原理工大学 Atrial fibrillation signal extraction method based on blind source separation
CN104053107A (en) * 2014-06-06 2014-09-17 重庆大学 Hearing aid device and method for separating and positioning sound sources in noise environments
WO2017084397A1 (en) * 2015-11-19 2017-05-26 The Hong Kong University Of Science And Technology Method, system and storage medium for signal separation
CN105640500A (en) * 2015-12-21 2016-06-08 安徽大学 Scanning signal feature extraction method based on independent component analysis and recognition method
CN107025446A (en) * 2017-04-12 2017-08-08 北京信息科技大学 A kind of vibration signal combines noise-reduction method

Also Published As

Publication number Publication date
WO2019033671A1 (en) 2019-02-21
EP3672275A4 (en) 2023-08-23
CN109413543A (en) 2019-03-01
EP3672275A1 (en) 2020-06-24

Similar Documents

Publication Publication Date Title
Mandel et al. An EM algorithm for localizing multiple sound sources in reverberant environments
US9668066B1 (en) Blind source separation systems
EP3655949B1 (en) Acoustic source separation systems
EP3189521B1 (en) Method and apparatus for enhancing sound sources
KR101349268B1 (en) Method and apparatus for mesuring sound source distance using microphone array
EP3133833B1 (en) Sound field reproduction apparatus, method and program
GB2548325A (en) Acoustic source separation systems
WO2016100460A1 (en) Systems and methods for source localization and separation
KR102191736B1 (en) Method and apparatus for speech enhancement with artificial neural network
JP2008236077A (en) Target sound extracting apparatus, target sound extracting program
JP2009288215A (en) Acoustic processing device and method therefor
CN111863015A (en) Audio processing method and device, electronic equipment and readable storage medium
EP2437517B1 (en) Sound scene manipulation
CN108353228B (en) Signal separation method, system and storage medium
Hosseini et al. Time difference of arrival estimation of sound source using cross correlation and modified maximum likelihood weighting function
CN109413543B (en) Source signal extraction method, system and storage medium
CN110441730B (en) Microphone array sound source orientation system based on analog signal processing architecture
Cobos et al. Two-microphone separation of speech mixtures based on interclass variance maximization
Jafari et al. Underdetermined blind source separation with fuzzy clustering for arbitrarily arranged sensors
Zohny et al. Modelling interaural level and phase cues with Student's t-distribution for robust clustering in MESSL
Gul et al. Preserving the beamforming effect for spatial cue-based pseudo-binaural dereverberation of a single source
JP2010217268A (en) Low delay signal processor generating signal for both ears enabling perception of direction of sound source
Masnadi-Shirazi et al. Separation and tracking of multiple speakers in a reverberant environment using a multiple model particle filter glimpsing method
Gburrek et al. On source-microphone distance estimation using convolutional recurrent neural networks
JP2006072163A (en) Disturbing sound suppressing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant