CN108282424B - Fourth-order tensor joint diagonalization algorithm for four-dataset joint blind source separation - Google Patents
Fourth-order tensor joint diagonalization algorithm for four-dataset joint blind source separation
- Publication number
- CN108282424B (application CN201810079041.2A / CN201810079041A)
- Authority
- CN
- China
- Prior art keywords
- matrix
- tensor
- algorithm
- joint
- signal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L25/00—Baseband systems
- H04L25/02—Details ; arrangements for supplying electrical power along data transmission lines
- H04L25/0202—Channel estimation
- H04L25/024—Channel estimation channel estimation algorithms
- H04L25/0242—Channel estimation channel estimation algorithms using matrix methods
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0272—Voice signal separating
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L25/00—Baseband systems
- H04L25/02—Details ; arrangements for supplying electrical power along data transmission lines
- H04L25/03—Shaping networks in transmitter or receiver, e.g. adaptive shaping networks
- H04L25/03006—Arrangements for removing intersymbol interference
- H04L25/03012—Arrangements for removing intersymbol interference operating in the time domain
- H04L25/03019—Arrangements for removing intersymbol interference operating in the time domain adaptive, i.e. capable of adjustment during data reception
- H04L25/03082—Theoretical aspects of adaptive time domain methods
- H04L25/03089—Theory of blind algorithms, recursive or not
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L25/00—Baseband systems
- H04L25/02—Details ; arrangements for supplying electrical power along data transmission lines
- H04L25/03—Shaping networks in transmitter or receiver, e.g. adaptive shaping networks
- H04L25/03006—Arrangements for removing intersymbol interference
- H04L2025/03592—Adaptation methods
- H04L2025/03598—Algorithms
- H04L2025/03611—Iterative algorithms
- H04L2025/03636—Algorithms using least mean square [LMS]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L25/00—Baseband systems
- H04L25/02—Details ; arrangements for supplying electrical power along data transmission lines
- H04L25/03—Shaping networks in transmitter or receiver, e.g. adaptive shaping networks
- H04L25/03006—Arrangements for removing intersymbol interference
- H04L2025/03592—Adaptation methods
- H04L2025/03598—Algorithms
- H04L2025/03611—Iterative algorithms
- H04L2025/03649—Algorithms using recursive least square [RLS]
Abstract
The invention discloses a fourth-order tensor joint diagonalization algorithm for four-data-set joint blind source separation, comprising the following steps: S1, observed signals: the observed signals of the four data sets are each pre-whitened; S2, target tensors: after pre-whitening, a group of cross fourth-order cumulant tensors is constructed; S3, the factor matrices are initialized; S4, cost-function convergence calculation: if the algorithm has converged after one sweep, the calculation ends; if it has not yet converged, the factor matrices obtained from this update are used as initial values for the next sweep, the Jacobi rotation matrices are traversed and updated, and the factor matrices are updated until convergence. Being based on orthogonal rotation transformations, the fourth-order tensor joint diagonalization algorithm for four-data-set joint blind source separation obtains the optimal solution in the least-squares sense; the J-BSS method targets signals from no more than four data sets.
Description
Technical Field
The invention relates to the field of signal processing of biomedicine, communication, voice, array and the like, in particular to a fourth-order tensor joint diagonalization algorithm for joint blind source separation of four data sets.
Background
Joint Blind Source Separation (J-BSS), as an emerging data-driven technology for fusion processing of multiple data sets, has gained much attention in signal processing fields such as biomedicine, communications, speech, and array processing [1]–[14].
Unlike conventional Blind Source Separation (BSS) [14]–[24], J-BSS generally assumes the following multiple-data-set signal model:
x^(m)(t) = A^(m) s^(m)(t), m = 1, 2, …, M, (1A)
where x^(m)(t) ∈ C^N and s^(m)(t) ∈ C^R denote the observed signal and the source signal of the m-th data set, respectively, and A^(m) ∈ C^(N×R) denotes the mixing matrix of the m-th data set.
The parameters M, N, and R denote the number of data sets, the number of observed-signal channels, and the number of source signals, respectively.
J-BSS aims to jointly identify the mixing matrices A^(m) and the source signals s^(m)(t) of the multiple data sets, knowing only the multiple-data-set observed signals x^(m)(t).
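The multi-set model (1A) can be simulated in a few lines; this is a minimal sketch, and the sizes chosen below as well as the way inter-set correlation is induced (a shared latent source plus small perturbations) are illustrative assumptions, not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, R, T = 4, 5, 3, 1000  # data sets, channels, sources, samples

# Shared latent sources: component r is correlated across data sets,
# while components within one set are independent (cf. the later
# assumptions (A1)/(A2)); the 0.1 perturbation level is an assumption.
base = rng.standard_normal((R, T))
S = [base + 0.1 * rng.standard_normal((R, T)) for _ in range(M)]
A = [rng.standard_normal((N, R)) for _ in range(M)]  # mixing matrices

# Observed signals of model (1A): x^(m)(t) = A^(m) s^(m)(t)
X = [A[m] @ S[m] for m in range(M)]
print(X[0].shape)  # (N, T)
```

Each `X[m]` plays the role of one data set's observed signal x^(m)(t), sampled at T time points.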
In the above model, multiple dataset signals are often multiple sets of data obtained through different observation modes for the same target or physical phenomenon.
Here, "different observation modalities" are different devices or experimental protocols, such as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) [2]; or different subjects under the same equipment and protocol, such as multi-subject fMRI [3], [7]; or observations of the same device, protocol, and subject in different transform domains (such as the frequency, time, spatial, or statistical domain) [3], [10].
When processing multiple-data-set signals, J-BSS exploits both the correlation and the diversity among the different sets; the fused information can describe the characteristics of the observed object more comprehensively, in a multi-level, multi-faceted, multi-modal sense, so J-BSS has performance advantages over BSS in identification capability, separation accuracy, and related aspects.
Research on J-BSS dates back to the 1930s, when Canonical Correlation Analysis (CCA) was proposed in [1]. For a long period thereafter, CCA was widely used in the fusion preprocessing of two-data-set signals.
In recent years, CCA has been extended to the case of more than two data sets, namely multi-set CCA (M-CCA) [6], and has been applied to multi-subject fMRI data fusion. Subsequently, the coupled matrix factorization model of M-CCA was further evolved into a matrix Generalized Joint Diagonalization (GJD) model [8].
In addition, Independent Vector Analysis (IVA), developed from classical Independent Component Analysis (ICA), is also one of the major methods of J-BSS [26]–[28].
In the last two years, joint independent subspace analysis (J-ISA), oriented toward the separation of multi-set, multi-dimensional source signals, has gradually attracted interest [29].
In the above studies, GJD is an important class of J-BSS methods. It converts multiple-data-set signals into a group of mutually coupled matrix slices with a joint-diagonalization structure by computing cross-cumulants, and realizes J-BSS by algebraically fitting this structure.
Currently, much of the research work on GJD is based on second-order cross-covariances of the signals. The only method based on fourth-order cross-cumulants [8] considers merely the lower-order joint-diagonalization structure of the fourth-order cross-cumulants at the matrix level, and fails to fully exploit their higher-order joint-diagonalization structure. In addition, that method does not use the time-domain characteristics of the signals (such as non-stationarity), which leaves the algorithm's accuracy insufficient.
Disclosure of Invention
In view of the technical problems identified above, a fourth-order tensor joint diagonalization algorithm for four-data-set joint blind source separation is provided, to remedy the defect that the prior art does not exploit the joint higher-order statistical properties of the signals and therefore achieves insufficient accuracy. The technical means adopted by the invention are as follows:
a fourth order tensor joint diagonalization algorithm for joint blind source separation of four data sets, comprising the steps of:
s1, observed signal:
the four dataset observed signals are pre-whitened separately:
Let the whitening matrix of the m-th data set be W^(m) ∈ C^(R×N), m = 1, …, 4;
then, after pre-whitening, the observed signal of the m-th data set is:
y^(m)(t) = W^(m) x^(m)(t) = U^(m) s^(m)(t) ∈ C^R, (1)
where the unitary matrix U^(m) ≜ W^(m) A^(m) ∈ C^(R×R) is the mixing matrix of the m-th data-set signal after pre-whitening;
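The pre-whitening step can be sketched as follows for one data set; this illustration uses an eigendecomposition of the sample covariance (the patent obtains W^(m) via a singular value decomposition of the second-order covariance matrix, which is equivalent for a Hermitian covariance), and all variable names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, R, T = 5, 3, 20000
A = rng.standard_normal((N, R))
s = rng.standard_normal((R, T))
x = A @ s  # observed signal of one data set, model (1A)

# Whitening matrix from the EVD of the sample second-order covariance:
# keep the R dominant eigen-pairs and scale them to unit variance.
C = (x @ x.T) / T
w, V = np.linalg.eigh(C)
idx = np.argsort(w)[::-1][:R]              # R dominant components
W = np.diag(w[idx] ** -0.5) @ V[:, idx].T  # W in R^{R x N}

y = W @ x                                  # whitened signal y = W x
Cy = (y @ y.T) / T                         # covariance of y is identity
print(np.allclose(Cy, np.eye(R), atol=1e-6))
```

After this step, the effective mixing matrix U = W A of the whitened signal is (approximately) orthogonal, which is what makes the later orthogonal Jacobi rotations applicable.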
s2, target tensor:
after pre-whitening, a set of mutual fourth order cumulant tensors is constructed as follows:
T_k = cum(y^(1)(t_k), y^(2)(t_k), y^(3)(t_k), y^(4)(t_k)), k = 1, …, K, (2)
where cum(·) computes the cross fourth-order cumulant; for zero-mean signals it follows the standard expansion
cum(a, b, c, d) = E(a b c d) − E(a b)E(c d) − E(a c)E(b d) − E(a d)E(b c), (3)
where E(·) computes the mathematical expectation and the superscript * denotes the conjugate (applied to the second and third arguments, consistently with (4)). By definition, each tensor T_k ∈ C^(R×R×R×R) is the cross fourth-order cumulant of the four sets of observed signals at time t_k, k = 1, …, K;
the formula is as follows:
T_k = D_k ×1 U^(1) ×2 U^(2)* ×3 U^(3)* ×4 U^(4), (4)
where ×n denotes the n-mode product of a tensor and a matrix, n = 1, …, 4; the n-mode product of a fourth-order tensor T ∈ C^(R×R×R×R) and a matrix V ∈ C^(R×R) is defined as the contraction of the n-th mode of T with the rows of V:
(T ×n V)_{r1,…,i,…,r4} = Σ_{rn=1}^{R} v_{i,rn} t_{r1,…,rn,…,r4}. (5)
In (4), D_k = cum(s^(1)(t_k), s^(2)(t_k), s^(3)(t_k), s^(4)(t_k)) ∈ C^(R×R×R×R) is the cross fourth-order cumulant tensor of the four sets of source signals at time t_k;
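The n-mode product and the multilinear model (4) can be sketched directly with `numpy.tensordot`; the helper name `nmode_product` and the use of real orthogonal factors in place of complex unitary ones are assumptions for the illustration:

```python
import numpy as np

def nmode_product(T, V, n):
    """n-mode product of a 4th-order tensor T with a matrix V (assumed
    helper): contracts mode n of T with the second axis of V."""
    Tn = np.tensordot(V, T, axes=(1, n))  # contracted mode is now axis 0
    return np.moveaxis(Tn, 0, n)          # move it back to position n

rng = np.random.default_rng(2)
R = 3
# Hyper-diagonal core D_k, as in the later equation (6)
D = np.zeros((R, R, R, R))
for r in range(R):
    D[r, r, r, r] = rng.standard_normal()

# Random orthogonal factors standing in for the unitary U^(m)
U = [np.linalg.qr(rng.standard_normal((R, R)))[0] for _ in range(4)]

# Tensor model (4): T_k = D_k x1 U1 x2 U2* x3 U3* x4 U4
Tk = D
for n in range(4):
    V = U[n] if n in (0, 3) else U[n].conj()
    Tk = nmode_product(Tk, V, n)

# Applying the inverse (conjugate-transpose) factors recovers the core,
# which is exactly the structure joint diagonalization looks for.
Dhat = Tk
for n in range(4):
    V = U[n] if n in (0, 3) else U[n].conj()
    Dhat = nmode_product(Dhat, V.conj().T, n)
print(np.allclose(Dhat, D))
```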
s3, initializing a factor matrix;
s4, cost function convergence calculation:
first, the optimization criteria are:
Define the objective function λ as the mean square error between the data tensors T_k and the fitting tensors D_k ×1 U^(1) ×2 U^(2)* ×3 U^(3)* ×4 U^(4):
λ = Σ_{k=1}^{K} ‖T_k − D_k ×1 U^(1) ×2 U^(2)* ×3 U^(3)* ×4 U^(4)‖_F², (7)
where ‖·‖_F denotes the Frobenius norm; the optimal estimates of the mixing matrices in the least-squares sense are computed by minimizing λ;
Note that U^(m) is unitary, m = 1, 2, 3, 4; by the norm-preserving property of unitary transformations, equation (7) can be rewritten as:
λ = Σ_{k=1}^{K} ‖off(T_k ×1 U^(1)H ×2 U^(2)T ×3 U^(3)T ×4 U^(4)H)‖_F² = c − Σ_{k=1}^{K} ‖diag(T_k ×1 U^(1)H ×2 U^(2)T ×3 U^(3)T ×4 U^(4)H)‖_F², (8)
where off(·) sets the hyper-diagonal of its input tensor to 0, diag(·) sets the off-hyper-diagonal elements of its input tensor to 0, and c = Σ_{k=1}^{K} ‖T_k‖_F² is a constant. Therefore, minimizing λ is equivalent to maximizing the diagonal term, so the invention estimates the mixing matrices U^(m), m = 1, 2, 3, 4, by the following criterion:
max Σ_{k=1}^{K} ‖diag(T_k ×1 U^(1)H ×2 U^(2)T ×3 U^(3)T ×4 U^(4)H)‖_F². (9)
Equations (7)–(9) show that by maximizing the sum of squared hyper-diagonal norms of T_k ×1 U^(1)H ×2 U^(2)T ×3 U^(3)T ×4 U^(4)H, i.e., by jointly diagonalizing T_1, …, T_K, the optimal estimates of the mixing matrices U^(1), U^(2), U^(3), U^(4) in the least-squares sense are obtained;
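The identity behind (8) — that under a norm-preserving transform the Frobenius energy splits exactly into a hyper-diagonal part and an off-hyper-diagonal part — can be checked numerically; the helper name `diag_offdiag_sq` is an assumption:

```python
import numpy as np

def diag_offdiag_sq(T):
    """Split ||T||_F^2 of a 4th-order tensor into the squared norm of its
    hyper-diagonal and of everything else (assumed helper)."""
    R = T.shape[0]
    idx = np.arange(R)
    d = T[idx, idx, idx, idx]              # hyper-diagonal entries
    diag_sq = float(np.sum(np.abs(d) ** 2))
    off_sq = float(np.sum(np.abs(T) ** 2)) - diag_sq
    return diag_sq, off_sq

rng = np.random.default_rng(3)
R = 3
T1 = rng.standard_normal((R, R, R, R))
diag_sq, off_sq = diag_offdiag_sq(T1)

# ||off||^2 + ||diag||^2 equals the constant ||T_k||_F^2, so minimizing
# the off-diagonal energy is the same as maximizing the diagonal energy.
print(np.isclose(diag_sq + off_sq, np.linalg.norm(T1) ** 2))
```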
second, jacobian iteration:
The Jacobi iteration writes each unitary matrix to be solved as a product of a series of Jacobi rotation matrices, and achieves the maximization of the objective function by optimizing each Jacobi rotation matrix in turn;
specifically, denote the update values of the data tensor T_k and the mixing matrix U^(m) from the previous iteration by T_k′ and U^(m)′, and their update values at the current iteration by T_k″ and U^(m)″;
by definition, the Jacobi rotation matrix G^(m)(i, j, θ) differs from the identity matrix only in the elements at the four positions (i, i), (i, j), (j, i), (j, j); its remaining diagonal elements are 1 and all other elements are 0;
let the coordinate i run from 1 to R and j run from i to R; for a fixed pair of coordinate values (i, j), it follows from (9) and (10) that the optimal G^(m)(i, j, θ) is obtained by solving the following optimization problem, m = 1, 2, 3, 4: (12)
after G^(m)(i, j, θ) is obtained, T_k and U^(m) are updated according to equation (10); all the iterations involved in traversing all values of the coordinates (i, j) are called one sweep;
If the algorithm has converged after one sweep, the calculation ends; if it has not yet converged, the factor matrices obtained from this update are used as initial values for the next sweep, the Jacobi rotation matrices are traversed and updated, and the factor matrices are updated until convergence.
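The sparsity pattern of such a rotation matrix can be illustrated as below; this is a sketch under assumptions — the helper name `givens` and the real-angle parameterization (c = cos θ, s = sin θ) are illustrative, and the patent's G^(m)(i, j, θ) may additionally carry a complex phase:

```python
import numpy as np

def givens(R, i, j, c, s):
    """Jacobi (Givens) rotation matrix (assumed helper): identity except
    at the four positions (i,i), (i,j), (j,i), (j,j), with |c|^2+|s|^2=1."""
    G = np.eye(R, dtype=complex)
    G[i, i], G[i, j] = c, s
    G[j, i], G[j, j] = -np.conj(s), np.conj(c)
    return G

theta = 0.3
G = givens(4, 1, 3, np.cos(theta), np.sin(theta))
print(np.allclose(G.conj().T @ G, np.eye(4)))  # the rotation is unitary
```

Because each such matrix depends on only a couple of scalar parameters, optimizing one rotation at a time is what makes the closed-form per-step solution described later tractable.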
Preferably, before step S1, the following assumptions are made:
(A1) intra-set independence: for 1 ≤ r ≠ u ≤ R, the source components s_r^(m)(t) and s_u^(m)(t) of the same data set are statistically independent;
(A2) inter-set correlation: for 1 ≤ r ≤ R and m ≠ n, the source components s_r^(m)(t) and s_r^(n)(t) of different data sets are statistically correlated;
(A3) the source signals are non-Gaussian, non-stationary signals;
(A4) the multiple-data-set model under consideration is determined or overdetermined, namely N ≥ R;
(A5) the number of data sets is M = 4.
Preferably, in practice, under assumptions (A1) and (A4), the whitening matrix W^(m) ∈ C^(R×N) can be obtained from the singular value decomposition of the second-order covariance matrix of the observed signal x^(m)(t).
Preferably, it follows from assumptions (A1), (A2), and (A3) that each tensor D_k is a hyper-diagonal tensor, i.e.:
(D_k)_{r1, r2, r3, r4} = 0 whenever the indices r1, r2, r3, r4 are not all equal; (6)
define U′^(m) ≜ U^(m) D^(m) P, m = 1, 2, 3, 4, where D^(1), D^(2), D^(3), D^(4) ∈ C^(R×R) are diagonal matrices satisfying D^(1) D^(2)* D^(3)* D^(4) = I, with I the identity matrix, and P ∈ C^(R×R) is a permutation matrix;
it is then easy to see that if U^(m) in equation (4) is replaced by U′^(m), the equality still holds;
therefore, up to the amplitude/phase ambiguity and the ordering ambiguity, U′^(m) is equivalent to the mixing matrix U^(m);
by solving equation (4), the mixing matrix U^(m) can be estimated.
Preferably, the traversal updates the Jacobi rotation matrices, with each iteration step involving the closed-form optimal solution of the corresponding rotation matrix G^(m)(i, j, θ);
closed-form optimal solution of the Jacobi rotation matrix:
by the properties of the Jacobi rotation matrix, the unitary transformation (10) changes only the values of the tensor T_k′ in the following eight sub-tensors: T_k′(i, :, :, :), T_k′(j, :, :, :), T_k′(:, i, :, :), T_k′(:, j, :, :), T_k′(:, :, i, :), T_k′(:, :, j, :), T_k′(:, :, :, i), T_k′(:, :, :, j);
here we use MATLAB notation for sub-tensors, e.g., T_k′(i, :, :, :) denotes the sub-tensor obtained by fixing the first index of T_k′ to i while leaving the remaining indices free;
it can be seen that maximizing the sum of squared norms of the hyper-diagonal elements of the tensor is equivalent to maximizing the sum of squared norms of the hyper-diagonal elements of the 2×2×2×2 sub-tensor formed by the intersection of these eight sub-tensors;
thus, the optimization criterion (12) can be rewritten as: (13)
to further simplify the optimization process, we use an alternating update approach, i.e., we replace (13) with four steps, each of which alternately updates one of G^(1), G^(2), G^(3), G^(4);
First, G^(1) is considered. By definition (14),
according to equation (15), there is (16),
where M_{i,j} = [m_{1,i,j}, …, m_{K,i,j}, q_{1,i,j}, …, q_{K,i,j}] ∈ C^{3×2K};
the above derivation shows that, when equation (16) is maximized, the parameter vector of G^(1) corresponds to the principal eigenvector of the matrix constructed from M_{i,j}; from this estimate, G^(1)(i, j, θ) is then obtained via (11);
next, G^(2) is updated and solved by a procedure similar to that for G^(1); in the same way, G^(3) and G^(4) are obtained by calculation.
The preferred iteration stopping condition is determined as follows:
compute the relative change between two adjacent sweeps of the sum of hyper-diagonal norms of the obtained tensors;
when this value is smaller than a threshold ε, the algorithm is considered converged and the iteration terminates;
alternatively, judge whether the Jacobi rotation matrices obtained in the current sweep are close to the identity matrix; specifically, define ξ^(m) as the distance of the m-th accumulated rotation from the identity; when ξ^(1), ξ^(2), ξ^(3), and ξ^(4) are simultaneously smaller than the threshold ε, the rotation matrices already approximate the identity matrix, the target tensors can no longer be updated, the algorithm can be considered converged, and the iteration terminates.
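The two stopping rules just described can be sketched together; this is a minimal illustration, and the function name `converged`, the combination of the two rules with `or`, and the use of the Frobenius distance ‖G − I‖ for ξ^(m) are assumptions:

```python
import numpy as np

def converged(prev_diag_norm, curr_diag_norm, G_list, eps=1e-8):
    """Stopping rules sketched from the text (assumed helper):
    (a) relative change of the summed diagonal norms between two sweeps,
    (b) every accumulated rotation of the sweep close to the identity."""
    rel = abs(curr_diag_norm - prev_diag_norm) / max(abs(prev_diag_norm), 1e-300)
    xi = [np.linalg.norm(G - np.eye(G.shape[0])) for G in G_list]
    return rel < eps or all(x < eps for x in xi)

I4 = np.eye(4)
print(converged(1.0, 1.0 + 1e-12, [I4, I4, I4, I4]))  # both rules satisfied
```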
Compared with the prior art, the fourth-order tensor joint diagonalization algorithm for four-data-set joint blind source separation is a J-BSS method for signals from no more than four data sets. It computes the cross fourth-order cumulant tensors of the multiple-data-set signals over different time periods and performs tensor joint diagonalization on them to realize J-BSS of the multiple-data-set signals.
Furthermore, we propose a tensor joint diagonalization algorithm based on successive Givens rotations. Compared with the tensor diagonalization algorithm of prior art [30], which is suitable only for single-data-set BSS, the proposed algorithm handles tensors whose loading factor matrices differ, and is thus suitable for J-BSS. Compared with the non-orthogonal tensor diagonalization algorithm of prior art [8], the proposed algorithm is based on orthogonal rotation transformations and can therefore obtain the optimal solution in the least-squares sense.
The fourth-order tensor joint diagonalization algorithm for four-data-set joint blind source separation exploits the time-domain characteristics of the signals, computing the cross fourth-order cumulant tensors of the multiple-data-set signals over different time periods, which improves the accuracy of the algorithm.
In this algorithm, a tensor is constructed from four factor matrices, and all four factor matrices are alternately updated during the algorithm, which makes the algorithm accurate; in contrast, prior-art algorithms alternately update only the first three factor matrices and solve for the fourth automatically after the first three are obtained.
The optimization criterion of the invention directly optimizes all four factor matrices, whereas the prior-art algorithm optimizes only the first three. Because the optimization criteria differ and the tensors to be optimized also differ, the subsequent optimization processes differ; in the prior art, the first three factor matrices are solved and the fourth is then recovered from the target tensor, so the precision of that factor matrix is low. The invention directly optimizes the four factor matrices and improves the precision of the fourth factor matrix.
When computing the tensors, the signal is divided into K segments and K tensors are computed in total; in contrast, the prior-art algorithm does not segment the signal and computes tensors only across the dimensions of the factor matrix, so the proposed construction uses more signal information and thereby improves the accuracy of the algorithm.
Drawings
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
FIG. 1 is a flow chart of the algorithm of the present invention.
Fig. 2-1 is a graph of the A-PI values of the JTD and GOJD algorithms as a function of the number of sweeps under noiseless conditions in Example 1 of the present invention (K = 3, N = 3).
Fig. 2-2 is a graph of the A-PI values of the JTD and GOJD algorithms as a function of the number of sweeps under noiseless conditions in Example 1 of the present invention (K = 3, N = 50).
Fig. 2-3 is a graph of the A-PI values of the JTD and GOJD algorithms as a function of the number of sweeps under noiseless conditions in Example 1 of the present invention (K = 50, N = 3).
Fig. 2-4 is a graph of the A-PI values of the JTD and GOJD algorithms as a function of the number of sweeps under noiseless conditions in Example 1 of the present invention (K = 50, N = 50).
Fig. 3-1 is a graph of the variation of A-PI with SNR in Example 2 of the present invention (snapshot number 100000).
Fig. 3-2 is a graph of the variation of A-PI with the number of snapshots in Example 2 of the present invention (SNR = 5 dB).
FIG. 4 is a graph of the eight channels of ECG data in Example 3 of the present invention.
Fig. 5 is a graph showing the result of separation of an actual FECG signal in embodiment 3 of the present invention.
Detailed Description
As shown in fig. 1, a fourth-order tensor joint diagonalization algorithm for joint blind source separation of four data sets is first assumed as follows:
(A1) intra-set independence: for 1 ≤ r ≠ u ≤ R, the source components s_r^(m)(t) and s_u^(m)(t) of the same data set are statistically independent;
(A2) inter-set correlation: for 1 ≤ r ≤ R and m ≠ n, the source components s_r^(m)(t) and s_r^(n)(t) of different data sets are statistically correlated;
(A3) the source signal is a non-Gaussian and non-stationary signal;
(A4) the multiple-data-set model under consideration is determined or overdetermined, namely N ≥ R;
(A5) the number of data sets M is 4.
Among the above assumptions, (A1) and (A2) are conditions commonly assumed in existing joint blind source separation methods; conditions (A3) and (A4) indicate that the method of the present invention is suitable for joint blind source separation of non-Gaussian, non-stationary signals under (over)determined conditions; condition (A5) restricts the number of data sets the method can process to no more than four. The present invention illustrates the proposed joint blind source separation algorithm with four data sets as an example; the case of fewer than four data sets follows by a similar derivation.
The method specifically comprises the following steps:
s1, observed signal:
the four dataset observed signals are pre-whitened separately:
Let the whitening matrix of the m-th data set be W^(m) ∈ C^(R×N), m = 1, …, 4;
then, after pre-whitening, the observed signal of the m-th data set is:
y^(m)(t) = W^(m) x^(m)(t) = U^(m) s^(m)(t) ∈ C^R, (1)
where the unitary matrix U^(m) ≜ W^(m) A^(m) ∈ C^(R×R) is the mixing matrix of the m-th data-set signal after pre-whitening; in practice, under assumptions (A1) and (A4), the whitening matrix W^(m) ∈ C^(R×N) can be obtained from the singular value decomposition of the second-order covariance matrix of the observed signal x^(m)(t) [8].
S2, target tensor:
after pre-whitening, a set of mutual fourth order cumulant tensors is constructed as follows:
T_k = cum(y^(1)(t_k), y^(2)(t_k), y^(3)(t_k), y^(4)(t_k)), k = 1, …, K, (2)
where cum(·) computes the cross fourth-order cumulant; for zero-mean signals it follows the standard expansion
cum(a, b, c, d) = E(a b c d) − E(a b)E(c d) − E(a c)E(b d) − E(a d)E(b c), (3)
where E(·) computes the mathematical expectation and the superscript * denotes the conjugate (applied to the second and third arguments, consistently with (4)). By definition, each tensor T_k ∈ C^(R×R×R×R) is the cross fourth-order cumulant of the four sets of observed signals at time t_k, k = 1, …, K;
the formula is as follows:
T_k = D_k ×1 U^(1) ×2 U^(2)* ×3 U^(3)* ×4 U^(4), (4)
where ×n denotes the n-mode product of a tensor and a matrix, n = 1, …, 4; the n-mode product of a fourth-order tensor T ∈ C^(R×R×R×R) and a matrix V ∈ C^(R×R) is defined as the contraction of the n-th mode of T with the rows of V:
(T ×n V)_{r1,…,i,…,r4} = Σ_{rn=1}^{R} v_{i,rn} t_{r1,…,rn,…,r4}. (5)
In (4), D_k = cum(s^(1)(t_k), s^(2)(t_k), s^(3)(t_k), s^(4)(t_k)) ∈ C^(R×R×R×R) is the cross fourth-order cumulant tensor of the four sets of source signals at time t_k;
As can be seen from assumptions (A1), (A2), and (A3), each tensor D_k is a hyper-diagonal tensor, i.e.:
(D_k)_{r1, r2, r3, r4} = 0 whenever the indices r1, r2, r3, r4 are not all equal; (6)
define U′^(m) ≜ U^(m) D^(m) P, m = 1, 2, 3, 4, where D^(1), D^(2), D^(3), D^(4) ∈ C^(R×R) are diagonal matrices satisfying D^(1) D^(2)* D^(3)* D^(4) = I, with I the identity matrix, and P ∈ C^(R×R) is a permutation matrix;
it is easy to see that if U^(m) in equation (4) is replaced by U′^(m), the equality still holds;
therefore, up to the amplitude/phase ambiguity (characterized by the diagonal matrices D^(m)) and the ordering ambiguity (characterized by the permutation matrix P), U′^(m) is equivalent to the mixing matrix U^(m). By solving equation (4), the mixing matrix U^(m) can be estimated.
The purpose of the joint blind source separation is to identify the mixing matrix and further estimate the source signal without considering the two trivial ambiguities.
Unlike conventional BSS, the permutation matrix P relating U′^(m) to the true mixing matrix U^(m) is identical for all index values of m, which means that the mixing matrices and the source-signal components estimated via J-BSS are naturally aligned in order; this is another advantage of J-BSS over BSS.
According to the invention, the tensor is constructed from four factor matrices, and all four factor matrices are alternately updated during the algorithm; in contrast, prior-art algorithms alternately update only the first three factor matrices and solve for the fourth automatically after the first three are obtained.
When computing the tensors, the signal is divided into K segments and K tensors are computed in total; in contrast, the prior-art algorithm does not segment the signal and computes tensors only across the dimensions of the factor matrix, so the proposed construction uses more signal information and thereby improves the accuracy of the algorithm.
S3, initializing a factor matrix;
s4, cost function convergence calculation:
first, the optimization criteria are:
Define the objective function λ as the mean square error between the data tensors T_k and the fitting tensors D_k ×1 U^(1) ×2 U^(2)* ×3 U^(3)* ×4 U^(4):
λ = Σ_{k=1}^{K} ‖T_k − D_k ×1 U^(1) ×2 U^(2)* ×3 U^(3)* ×4 U^(4)‖_F², (7)
where ‖·‖_F denotes the Frobenius norm; the optimal estimates of the mixing matrices in the least-squares sense are computed by minimizing λ.
Note that U^(m) is unitary, m = 1, 2, 3, 4; by the norm-preserving property of unitary transformations, equation (7) can be rewritten as:
λ = Σ_{k=1}^{K} ‖off(T_k ×1 U^(1)H ×2 U^(2)T ×3 U^(3)T ×4 U^(4)H)‖_F² = c − Σ_{k=1}^{K} ‖diag(T_k ×1 U^(1)H ×2 U^(2)T ×3 U^(3)T ×4 U^(4)H)‖_F², (8)
where off(·) sets the hyper-diagonal of its input tensor to 0, diag(·) sets the off-hyper-diagonal elements of its input tensor to 0, and c = Σ_{k=1}^{K} ‖T_k‖_F² is a constant. Therefore, minimizing λ is equivalent to maximizing the diagonal term, so the invention estimates the mixing matrices U^(m), m = 1, 2, 3, 4, by the following criterion:
max Σ_{k=1}^{K} ‖diag(T_k ×1 U^(1)H ×2 U^(2)T ×3 U^(3)T ×4 U^(4)H)‖_F². (9)
Equations (7)–(9) show that by maximizing the sum of squared hyper-diagonal norms of T_k ×1 U^(1)H ×2 U^(2)T ×3 U^(3)T ×4 U^(4)H, i.e., by jointly diagonalizing T_1, …, T_K, the optimal estimates of the mixing matrices U^(1), U^(2), U^(3), U^(4) in the least-squares sense are obtained; next, we propose a tensor joint diagonalization algorithm based on the Jacobi iteration.
The optimization criterion of the invention directly optimizes all four factor matrices, whereas the prior-art algorithm optimizes only the first three. Because the optimization criteria differ and the tensors to be optimized also differ, the subsequent optimization processes differ; in the prior art, the first three factor matrices are solved and the fourth is then recovered from the target tensor, so the precision of that factor matrix is low. The invention directly optimizes the four factor matrices and improves the precision of the fourth factor matrix.
Second, jacobian iteration:
jacobian iteration writes a unitary matrix to be solved into a product of a series of Jacobian rotation matrixes, and finally maximization of a target function is achieved by respectively optimizing each Jacobian rotation matrix;
in particular, the data tensor T is recordedkAnd a mixing matrix U(m)The update values of the previous iteration are respectivelyAndthe update values of which at the current iteration are respectivelyAndeach iteration through the Jacobian torque matrix(also known as Givens rotation matrix) for TkAnd U(m)Updating is carried out, namely:
wherein'i' denotes imaginary unit. By definition,the positions except the elements of the four positions (i, i), (i, j), (j, i), (j, j) are not 0, and the elements on the diagonal are 1, and the other positions are 0;
let the value of coordinate i be taken from 1 to R, the value of j be taken from i to R, and for a certain pair of fixed coordinate values (i, j) can be known from (9) and (10),is obtained by solving the following optimization problem, m ═ 1,2,3, 4:
After the rotation matrices are obtained, T_k and U^(m) are updated according to equation (10). The set of iterations covering all values of the coordinates (i, j) is called a sweep;
If the algorithm has converged after one sweep, the calculation ends; otherwise the currently updated factor matrices are used as initial values for the next sweep, in which the Jacobi rotation matrices are again traversed and updated and the factor matrices updated, until convergence.
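The sweep structure described above can be sketched as follows. Everything here is an illustrative skeleton: the names, the tensor-update conjugation pattern (taken to match the criterion in (9)), and the `solve_rotation` callback standing in for the closed-form solution are our assumptions, not the patent's notation:

```python
import numpy as np

def jtd_sweeps(T_list, R, solve_rotation, max_sweeps=50, eps=1e-8):
    """Jacobi-type joint diagonalization skeleton.
    T_list : list of K complex tensors of shape (R, R, R, R)
    solve_rotation : callback returning the four R x R Givens matrices
                     for a pivot pair (i, j) -- the closed-form solution
                     is problem-specific and supplied by the caller."""
    U = [np.eye(R, dtype=complex) for _ in range(4)]
    def diag_norm(Ts):
        # sum of squared magnitudes of hyper-diagonal entries over all tensors
        return sum(sum(abs(T[r, r, r, r])**2 for r in range(R)) for T in Ts)
    prev = diag_norm(T_list)
    for sweep in range(max_sweeps):
        for i in range(R):
            for j in range(i + 1, R):
                G = solve_rotation(T_list, i, j)   # G[m], m = 0..3
                for m in range(4):                 # update factor matrices
                    U[m] = U[m] @ G[m]
                for k, T in enumerate(T_list):     # update every target tensor
                    for n, M in enumerate([G[0].conj().T, G[1].T,
                                           G[2].T, G[3].conj().T]):
                        T = np.moveaxis(np.tensordot(
                            M, np.moveaxis(T, n, 0), axes=([1], [0])), 0, n)
                    T_list[k] = T
        cur = diag_norm(T_list)
        if abs(cur - prev) <= eps * max(prev, 1.0):  # relative-change test
            break
        prev = cur
    return U, T_list
```

With identity rotations the skeleton leaves the tensors unchanged and stops after the first sweep, which is a useful sanity check on the update pattern.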
The determination of the iteration stop condition is as follows:
calculating the relative variation value of the diagonal element norm sum of the tensors obtained by two adjacent scans:
when the value is smaller than the threshold value epsilon, the algorithm is considered to be converged, and iteration is terminated;
Alternatively, judge whether the Jacobi rotation matrices obtained in the current sweep are close to the identity matrix. Specifically, define ξ^(m) as the Frobenius-norm distance between the m-th rotation matrix and the identity; when ξ^(1), ξ^(2), ξ^(3) and ξ^(4) are all smaller than the threshold ε, the rotation matrices are effectively identity matrices, the target tensors can no longer be updated, the algorithm is considered converged, and the iteration terminates.
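The two stopping rules can be sketched as a single check (names and threshold handling are ours; ξ^(m) is taken to be the Frobenius distance of the m-th rotation from the identity, as described above):

```python
import numpy as np

def converged(diag_prev, diag_cur, G_list, eps=1e-6):
    """Two stopping tests:
    (a) relative change of the hyper-diagonal norm sum between sweeps;
    (b) all four Jacobi rotations of the sweep close to the identity,
        measured by xi_m = ||G_m - I||_F."""
    rel = abs(diag_cur - diag_prev) / max(abs(diag_prev), np.finfo(float).tiny)
    xis = [np.linalg.norm(G - np.eye(G.shape[0])) for G in G_list]
    return rel < eps or all(xi < eps for xi in xis)
```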
The above description shows that the Jacobi iteration decomposes the optimization problem (9) into a series of sub-problems (12). Because each Jacobi rotation matrix has only two unknown parameters, its optimal solution can be expected to take a simple closed form. Next we derive the closed-form optimal solution involved in each iteration step.
The closed-form optimal solution of the Jacobi rotation matrix is derived as follows:
By the properties of the Jacobi rotation matrix, the unitary transformation changes only the entries of the tensor lying in the following eight sub-tensors:
Here we use MATLAB notation to denote sub-tensors; for example, fixing the first index of the tensor to i while leaving the remaining indices free yields one such sub-tensor;
It follows that maximizing the norm-square sum of the hyper-diagonal elements of the tensor is equivalent to maximizing the norm-square sum of the hyper-diagonal elements of the 2×2×2×2 sub-tensor formed by the intersection of these eight sub-tensors;
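The restriction to a 2×2×2×2 sub-tensor is easy to express with NumPy fancy indexing (function names are ours):

```python
import numpy as np

def subtensor_2x2(T, i, j):
    """Extract the 2x2x2x2 sub-tensor of T formed by restricting every
    mode to the index pair (i, j) -- the only entries a Jacobi rotation
    on (i, j) can move into or out of the hyper-diagonal."""
    idx = np.ix_([i, j], [i, j], [i, j], [i, j])
    return T[idx]

def hyperdiag_norm2(T):
    """Sum of squared magnitudes of the hyper-diagonal elements."""
    R = T.shape[0]
    return float(sum(abs(T[r, r, r, r])**2 for r in range(R)))
```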
thus, the optimization criterion (12) can be rewritten as:
To further simplify the optimization process, we use an alternating update approach, i.e. we replace (13) with four steps, each updating one rotation matrix in turn, where:
first, by definition:
wherein
According to equation (15), there is:
where M_{i,j} = [m_{1,i,j}, ..., m_{K,i,j}, q_{1,i,j}, ..., q_{K,i,j}] ∈ C^{3×2K};
The above derivation shows that when equation (16) is maximized, the solution corresponds to the principal eigenvector of a matrix built from M_{i,j}; once this eigenvector is estimated, the rotation parameters follow, and the first rotation matrix is obtained from (11).
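The principal-eigenvector step can be sketched as follows. This is hedged: the exact construction of M_{i,j} from the m and q vectors is not reproduced in this text, so the sketch takes a generic complex matrix M and extracts the dominant eigenvector of the Hermitian matrix M M^H:

```python
import numpy as np

def principal_eigvec(M):
    """Principal eigenvector of the Hermitian matrix M @ M^H,
    i.e. the unit vector v maximizing v^H (M M^H) v."""
    H = M @ M.conj().T
    w, V = np.linalg.eigh(H)   # eigh returns eigenvalues in ascending order
    return V[:, -1]            # eigenvector of the largest eigenvalue
```

For a rank-one M = u w^H the result coincides with u up to a unit phase, which is the ambiguity the subsequent rotation construction must absorb.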
Next, the second rotation matrix is updated and solved by a procedure similar to the first; the third and fourth rotation matrices are then obtained by the same calculation.
The derivation above assumes a complex-valued J-BSS problem. When the J-BSS problem is real-valued, it is likewise converted into a joint diagonalization problem of real fourth-order tensors and solved with real Jacobi rotations; the calculation steps parallel the complex case.
Note that in this case the real-valued Jacobi rotation matrix is defined as:
example 1 randomly generating a hyper-diagonal tensor DkAnd unitary matrix U(1),U(2),U(3),U(4)And constructing tensor directly according to the formula (4)
The Performance of the algorithm was evaluated by a Performance Index (Performance Index: PI) defined as follows:
where the superscript denotes the Moore-Penrose pseudo-inverse. The PI of each separation result is calculated and then averaged; the Average PI (A-PI) serves as the evaluation index of algorithm performance.
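A commonly used form of such a performance index can be sketched as follows. The patent's exact formula is not reproduced in this text, so this is the standard Amari-style variant built from the global matrix pinv(U_est) @ U_true, which is ideally a scaled permutation:

```python
import numpy as np

def performance_index(U_est, U_true):
    """Amari-style performance index: 0 when pinv(U_est) @ U_true is a
    scaled permutation (perfect separation up to the inherent
    scaling/ordering ambiguities), positive otherwise."""
    P = np.abs(np.linalg.pinv(U_est) @ U_true)
    R = P.shape[0]
    rows = (P / P.max(axis=1, keepdims=True)).sum(axis=1) - 1
    cols = (P / P.max(axis=0, keepdims=True)).sum(axis=0) - 1
    return float((rows.sum() + cols.sum()) / (2 * R * (R - 1)))
```

PI is 0 exactly when the estimate equals the true mixing matrix up to scaling and permutation, and grows with the amount of residual cross-talk.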
The convergence of the algorithm can be observed by computing the PI value at the end of each sweep. Figures 2-1 to 2-4 plot the A-PI values of the JTD algorithm versus sweep number in 10 independent experiments, for different values of K and N.
For reference, we plot under the same conditions the A-PI curve of the generalized orthogonal joint diagonalization algorithm (GOJD, [8]), whose data matrices are obtained from T_k by fixing the third index. The proposed JTD algorithm exhibits a monotonically decreasing, linearly converging pattern. Compared with the GOJD algorithm, the JTD algorithm converges within 5 sweeps on average and shows better convergence performance.
Figures 2-1 to 2-4 show the A-PI values of the JTD and GOJD algorithms versus sweep number in the noiseless case, under four conditions: (a) Fig. 2-1, both the tensor size N and the number K are small; (b) Fig. 2-2, N is large and K is small; (c) Fig. 2-3, N is small and K is large; (d) Fig. 2-4, both N and K are large.
Example 2: in this experiment, four sets of observed signals x^(m)(t), m = 1, 2, 3, 4, are generated according to equation (21):
where the mixing matrices A^(m) ∈ C^{N×R} are randomly generated by program, and the noise terms n^(m)(t) ∈ C^{N×T} are zero-mean, unit-variance white Gaussian noise.
The source signal is constructed by:
sr(t)=Qrs′r(t), (22)
where s′_r(t) ∈ C^4 is a piecewise amplitude-modulated random-phase signal whose phases and amplitude-modulation coefficients are randomly generated by program. The matrix Q_r ∈ R^{4×4}, also randomly generated, introduces correlation between sets. The constructed source signals are non-stationary and non-Gaussian (assumption A3), and the source signals of different data sets satisfy the assumed conditions of inter-group correlation and intra-group independence (assumptions A1, A2).
In equation (21), σ_s and σ_n represent the signal strength and noise strength, respectively. The signal-to-noise ratio is defined as SNR ≜ 20 lg(σ_s/σ_n).
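The SNR definition translates directly into a noise-scaling helper (a small sketch; the function name is ours):

```python
import numpy as np

def scale_noise_for_snr(signal, noise, snr_db):
    """Scale a unit-variance noise term so that
    SNR = 20*lg(sigma_s / sigma_n) equals snr_db, per the definition above."""
    sigma_s = np.sqrt(np.mean(np.abs(signal)**2))
    sigma_n = sigma_s / 10**(snr_db / 20)
    return sigma_n * noise
```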
We separate the above multi-set signals using the proposed JTD-based J-BSS algorithm and compare it with the following BSS and J-BSS algorithms:
(a) Joint Approximate Diagonalization of Eigenmatrices (JADE) [16];
(b) Simultaneous Third-Order Tensor Diagonalization (STOTD) [30];
(c) Multiset Canonical Correlation Analysis (MCCA) [6];
(d) Generalized Orthogonal Joint Diagonalization (GOJD) [8].
Among these, JADE and STOTD are BSS algorithms based on fourth-order cumulants, while MCCA and GOJD are J-BSS algorithms. For GOJD we tested variants based on fourth-order cumulants and on second-order covariances, denoted GOJD4 and GOJD2, respectively.
The fourth-order cumulant defined by (2) is estimated as the sample fourth-order cumulant, with the calculation formula:
where the k-th segment of the pre-whitened observed signal in the time dimension is used, m = 1, ..., 4. The segment length is L, and the overlap ratio α ∈ [0, 1] between adjacent segments is defined as the ratio of the number of overlapping points of adjacent segments to L.
In this experiment, we fix the number of observation channels N = 10 and the number of source signals R = 5 for each data set; each point of the A-PI curve is averaged over 200 independent Monte Carlo runs. First, the number of time samples is fixed at T = 100000, with L = T/10 and α = 0.3.
The SNR is varied from 0 dB to 10 dB. Next, the SNR is fixed at 5 dB and the number of time samples is varied from 10000 to 100000. The A-PI curves of the compared algorithms for both cases are shown in Figs. 3-1 and 3-2. The proposed JTD algorithm performs significantly better than all compared algorithms except GOJD2.
Compared with GOJD2, the JTD algorithm achieves higher separation accuracy at SNRs of 2-8 dB, as seen in Fig. 3-1; Fig. 3-2 shows that JTD is more accurate at larger snapshot numbers (80000-100000), where it holds the advantage over GOJD2.
This is mainly because, for short snapshots, the second-order covariances used by GOJD2 suffer smaller finite-sample errors than the fourth-order cumulants used by JTD.
A comparison of the performance of JTD, GOJD, MCCA, JADE and STOTD when separating computer-synthesized signals is shown in Figs. 3-1 and 3-2.
Example 3: this experiment extracts a fetal ECG (FECG) signal from actually collected observation data, in which the maternal ECG (MECG), the fetal ECG, and other interference signals are mixed.
The eight ECG channels used were taken from the DaISy database at KU Leuven [31]; the signal waveforms are shown in Fig. 4. The sampling frequency is 250 Hz, the number of samples is 2500, and the acquisition time is 10 s.
Dividing the eight observed signals into four data sets according to the following formula, wherein each data set comprises five observed signals:
The observed signals of the four data sets are pre-whitened and segmented, with segment length 500 and overlap ratio 0.5, giving K = 10. The cross fourth-order cumulants of each segment are computed to construct the K target tensors. After joint diagonalization with the JTD algorithm yields the demixing matrices, the source-signal components of each data set are recovered via formula (1); the estimated source signals of the m-th data set are shown in Fig. 5, where the MECG and FECG signals are seen to be successfully separated.
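The pre-whitening step mentioned above can be sketched via an eigen-decomposition of the sample covariance. This is a standard construction; the patent's specific whitening matrix W^(m) is not spelled out in this text:

```python
import numpy as np

def prewhiten(X, R):
    """Pre-whiten an N x T observation matrix down to R channels.
    W is built from the eigen-decomposition of the sample covariance so
    that the whitened data have (approximately) identity covariance."""
    X = X - X.mean(axis=1, keepdims=True)
    C = X @ X.conj().T / X.shape[1]             # sample covariance
    w, V = np.linalg.eigh(C)                    # ascending eigenvalues
    w, V = w[::-1][:R], V[:, ::-1][:, :R]       # keep R dominant components
    W = np.diag(1.0 / np.sqrt(w)) @ V.conj().T  # R x N whitening matrix
    return W @ X, W
```

After this step the effective mixing matrix U^(m) = W^(m)A^(m) is (approximately) unitary, which is what the orthogonal joint diagonalization stage assumes.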
As seen in the figure, the 1 st and 2 nd signals in each data set are MECG, and the 3 rd signal in the first three data sets is FECG. The result shows that the J-BSS algorithm based on JTD can successfully separate the actually acquired FECG signals.
The above description covers only preferred embodiments of the present invention; the scope of the invention is not limited thereto. Any equivalent substitution or modification made by a person skilled in the art according to the technical solutions and inventive concept of the present invention shall fall within the scope of the present invention.
Reference to the literature
[1]Hotelling H.Relations between two sets of variates[J].Biometrika,1936,28(3/4):321-377.
[2]Sui J,Adali T,Yu Q,et al.A review of multivariate methods for multimodal fusion of brain imaging data[J].Journal of Neuroscience Methods,2012,204(1):68-81.
[3]Kim T,Attias H T,Lee S Y,et al.Blind source separation exploiting higher-order frequency dependencies[J].IEEE Trans on Audio,Speech and Language Processing,2007,15(1): 70-79.
[4]Lee J H,Lee T W,Jolesz F A,et al.Independent vector analysis(IVA):multivariate approach for fMRI group study[J].Neuroimage,2008,40(1):86-109.
[5]Anderson M,Adali T,Li X L.Joint blind source separation with multivariate Gaussian model:Algorithms and performance analysis[J].IEEE Trans on Signal Processing,2012, 60(4):1672-1683.
[6]Li Y O,Adali T,Wang W,et al.Joint blind source separation by multiset canonical correlation analysis[J].IEEE Trans on Signal Processing,2009,57(10):3918-3929.
[7]Correa N M,Adali T,Li Y O,et al.Canonical correlation analysis for data fusion and group inferences[J].IEEE Signal Processing Magazine,2010,27(4):39-50.
[8]Li X L,Adali T,Anderson M.Joint blind source separation by generalized joint diagonalization of cumulant matrices[J].Signal Processing,2011,91(10):2314-2322.
[9]Congedo M,Phlypo R,Chatel-Goldman J.Orthogonal and non-orthogonal joint blind source separation in the least-squares sense[A].Signal Processing Conference(EUSIPCO)[C]. Bucharest,Romania,2012:1885-1889.
[10]Gong X F,Wang X L,Lin Q H.Generalized Non-Orthogonal Joint Diagonalization With LU Decomposition and Successive Rotations[J].IEEE Trans on Signal Processing,2015, 63(5):1322-1334.
[11]Lahat D,Jutten C.Joint independent subspace analysis:a quasi-Newton algorithm[A]. International Conference on Latent Variable Analysis and Signal Separation[C].Liberec, Czech Republic,2015:111-118.
[12]Lahat D,Adali T,Jutten C.Multimodal data fusion:an overview of methods,challenges,and prospects[J].Proceedings of the IEEE,2015,103(9):1449-1477.
[13]Adali T,Levin-Schwartz Y,Calhoun V D.Multimodal data fusion using source separation: Two effective models based on ICA and IVA and their properties[J].Proceedings of the IEEE, 2015,103(9):1478-1493.
[14]Hérault J,Jutten C,Ans B.Détection de grandeurs primitives dans un message composite par une architecture de calcul neuromimétique en apprentissage non supervisé[A]. Proceedings of the 10th Workshop Traitement du signal et ses applications(GRETSI)[C]. Nice,France,1985:1017–1022.
[15]Comon P.Independent component analysis,a new concept?[J].Signal Processing,1994, 36(3):287-314.
[16]Cardoso J F,Souloumiac A.Blind beamforming for non-Gaussian signals[A].IEE Proceedings F(Radar and Signal Processing)[C].1993,140(6):362-370.
[17]Oja E,Hyvarinen A.A fast fixed-point algorithm for independent component analysis[J]. Neural Computation,1997,9(7):1483-1492.
[18]Belouchrani A,Abed-Meraim K,Cardoso J F,et al.A blind source separation technique using second-order statistics[J].IEEE Trans on Signal Processing,1997,45(2):434-444.
[19]Novey M,Adali T.Complex ICA by negentropy maximization[J].IEEE Trans on Neural Networks,2008,19(4):596-609.
[20]Sawada H,Araki S,Makino S.Underdetermined convolutive blind source separation via frequency bin-wise clustering and permutation alignment[J].IEEE Trans on Audio,Speech and Language Processing,2011,19(3):516-527.
[21]Yu Y,Petropulu A P.PARAFAC-based blind estimation of possibly underdetermined convolutive MIMO systems[J].IEEE Trans on Signal Processing,2008,56(1):111-124.
[22]Lin Q H,Liu J,Zheng Y R,et al.Semiblind spatial ICA of fMRI using spatial constraints[J].Human Brain Mapping,2010,31(7):1076-1088.
[23]Cichocki A,Zdunek R,Phan A H,et al.Nonnegative matrix and tensor factorizations: applications to exploratory multi-way data analysis and blind source separation[M].John Wiley&Sons,2009.
[24]Handbook of Blind Source Separation:Independent component analysis and applications[M].Academic Press,2010.
[25]Li Y O,Wang W,Adali T,et al.CCA for joint blind source separation of multiple datasets with application to group fMRI analysis[A].IEEE International Conference on Acoustics, Speech and Signal Processing[C].2008:1837-1840.
[26]Kim T,Eltoft T,Lee T W.Independent vector analysis:An extension of ICA to multivariate components[A].International Conference on Independent Component Analysis and Signal Separation[C].Springer Berlin Heidelberg,2006:165-172.
[27]Anderson M,Li X L,Adali T.Nonorthogonal independent vector analysis using multivariate Gaussian model[A].International Conference on Latent Variable Analysis and Signal Separation[C].Springer Berlin Heidelberg,2010:354-361.
[28]Anderson M,Adali T,Li X L.Joint blind source separation with multivariate Gaussian model:Algorithms and performance analysis[J].IEEE Trans on Signal Processing,2012, 60(4):1672-1683.
[29]Lahat D,Jutten C.Joint independent subspace analysis:a quasi-Newton algorithm[C]. International Conference on Latent Variable Analysis and Signal Separation[C].Liberec, Czech Republic,2015:111-118.
[30]De Lathauwer L,De Moor B,Vandewalle J.Independent component analysis and (simultaneous)third-order tensor diagonalization[J].IEEE Trans on Signal Processing,2001, 49(10):2262-2271.
[31]Moor D.Database for the identification of systems(DaISy)[Z].http://www.esat.kuleuven.ac.be/sista/daisy,2010.
Claims (6)
1. A fourth-order tensor joint diagonalization algorithm for joint blind source separation of four data sets, comprising the steps of:
s1, observed signal:
the four dataset observed signals are pre-whitened separately:
Then after the pre-whitening process, the mth data set observed signal is:
wherein the unitary matrix U^(m) ≜ W^(m)A^(m) ∈ C^{R×R} is the mixing matrix of the m-th data set signal after pre-whitening, x^(m)(t) is the observed signal of the m-th data set, s^(m)(t) is the source signal of the m-th data set, and A^(m) is the mixing matrix of the m-th data set;
s2, target tensor:
after pre-whitening, a set of mutual fourth order cumulant tensors is constructed as follows:
where "cum (·)" is used to calculate the mutual fourth-order cumulant, which is defined as:
where "E (-) is used to calculate the mathematical expectation, and superscript". sup. "represents the conjugate, by definition, of each tensorRepresenting four sets of observed signals at time tkK is 1, …, K;
the formula is as follows:
where ×_n denotes the n-mode product of a tensor and a matrix, n = 1, ..., 4; the n-mode product of a fourth-order tensor and a matrix is defined as:
in formula (4), the tensor is the cross fourth-order cumulant tensor of the source signals of the four data sets at time t_k;
s3, initializing a factor matrix;
s4, cost function convergence calculation:
first, the optimization criteria are:
where ||·||_F denotes the Frobenius norm; minimizing λ yields the optimal estimate of the mixing matrices in the least-squares sense;
noting that U^(m) is unitary for m = 1, 2, 3, 4, and using the unitary invariance of the Frobenius norm, equation (7) is rewritten as:
where off(·) sets the hyper-diagonal of its input tensor to 0, diag(·) sets the non-hyper-diagonal entries of its input tensor to 0, and the remaining term is a constant;
therefore, minimizing λ is equivalent to maximizing the hyper-diagonal term; accordingly, the mixing matrices U^(m), m = 1, 2, 3, 4, are estimated by the following criterion:
equations (7)-(9) show that by maximizing the hyper-diagonal norm-square sum, i.e. by jointly diagonalizing the target tensors, an optimal estimate of the mixing matrices U^(1), U^(2), U^(3), U^(4) is obtained in the least-squares sense;
second, Jacobi iteration:
the Jacobi iteration writes each unitary matrix to be solved as a product of a series of Jacobi rotation matrices, and achieves maximization of the objective function by optimizing each Jacobi rotation matrix in turn;
specifically, denote the values of the data tensors and the mixing matrices U^(m) at the previous iteration and their updated values at the current iteration;
by definition, the Jacobi rotation matrix equals the identity except at the four positions (i, i), (i, j), (j, i), (j, j); its other diagonal elements are 1 and its remaining off-diagonal elements are 0;
let the coordinate i range from 1 to R and j from i to R; for a fixed coordinate pair (i, j), it follows from (9) and (10) that the Jacobi rotation matrices are obtained by solving the following optimization problem for m = 1, 2, 3, 4:
after the rotation matrices are obtained, the tensors and U^(m) are updated according to equation (10); the set of iterations covering all values of the coordinates (i, j) is called a sweep;
if the algorithm has converged after one sweep, the calculation ends; otherwise the currently updated factor matrices are used as initial values for the next sweep, in which the Jacobi rotation matrices are again traversed and updated and the factor matrices updated, until convergence.
2. The joint diagonalization algorithm of the fourth order tensor for joint blind source separation of four datasets as claimed in claim 1, wherein:
before step S1, the following assumptions are made:
(A1) intra-group independence: for 1 ≤ r ≠ u ≤ R, the r-th and u-th source components within each data set are statistically independent;
(A3) the source signal is a non-Gaussian and non-stationary signal;
(A4) the multi-set model under consideration is overdetermined, i.e. N ≥ R;
(A5) the number of data sets M is 4.
3. The joint diagonalization algorithm of the fourth order tensor for joint blind source separation of four datasets as claimed in claim 2, wherein:
4. The joint diagonalization algorithm of the fourth order tensor for joint blind source separation of four datasets as claimed in claim 3, wherein:
from assumptions (A1), (A2) and (A3), each tensor is a hyper-diagonal tensor, i.e.:
define U′^(m) accordingly, where D^(1), D^(2), D^(3), D^(4) are diagonal matrices satisfying D^(1)D^(2)*D^(3)*D^(4) = I, I being the identity matrix, and the remaining factor is a permutation matrix;
if U^(m) in formula (4) is replaced by U′^(m), the equality still holds;
therefore, disregarding the amplitude/phase ambiguity and the ordering ambiguity, U′^(m) is equivalent to the mixing matrix U^(m);
the mixing matrix U^(m) can thus be estimated by solving equation (4).
5. The joint diagonalization algorithm of the fourth order tensor for joint blind source separation of four datasets as claimed in claim 1, wherein:
the closed-form optimal solution of the Jacobi rotation matrix, involved in each iteration step, is derived as follows:
by the properties of the Jacobi rotation matrix, the unitary transformation changes only the entries of the tensor lying in the following eight sub-tensors:
expressed in MATLAB notation, a sub-tensor is obtained by fixing one index of the tensor, e.g. fixing the first index to i while leaving the remaining indices free;
maximizing the norm-square sum of the hyper-diagonal elements of the tensor is equivalent to maximizing the norm-square sum of the hyper-diagonal elements of the 2×2×2×2 sub-tensor formed by the intersection of the eight sub-tensors;
thus, the optimization criterion (12) can be rewritten as:
to further simplify the optimization process, we use an alternating update approach, i.e. we replace (13) with four steps, each updating one rotation matrix in turn;
first, by definition:
where
according to equation (15), there is:
the above derivation shows that when equation (16) is maximized, the solution corresponds to the principal eigenvector of a matrix built from M_{i,j}; once this eigenvector is estimated, the rotation parameters follow, and the rotation matrix is obtained from (11).
6. the joint diagonalization algorithm of the fourth order tensor for joint blind source separation of four datasets according to claim 1 or 5, characterized by:
the determination of the iteration stop condition is as follows:
calculating the relative variation value of the diagonal element norm sum of the tensors obtained by two adjacent scans:
when the value is smaller than the threshold value epsilon, the algorithm is considered to be converged, and iteration is terminated;
alternatively, judge whether the Jacobi rotation matrices obtained in the current sweep are close to the identity matrix; specifically, define ξ^(m) as the Frobenius-norm distance between the m-th rotation matrix and the identity; when ξ^(1), ξ^(2), ξ^(3) and ξ^(4) are all smaller than the threshold ε, the rotation matrices are effectively identity matrices, the target tensors can no longer be updated, the algorithm is considered converged, and the iteration ends.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810079041.2A CN108282424B (en) | 2018-01-26 | 2018-01-26 | Fourth-order tensor joint diagonalization algorithm for four-dataset joint blind source separation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108282424A CN108282424A (en) | 2018-07-13 |
CN108282424B true CN108282424B (en) | 2021-02-12 |
Family
ID=62805422
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810079041.2A Active CN108282424B (en) | 2018-01-26 | 2018-01-26 | Fourth-order tensor joint diagonalization algorithm for four-dataset joint blind source separation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108282424B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111366905B (en) * | 2020-04-12 | 2023-09-01 | 南京理工大学 | Space micro-motion group target multichannel blind source separation method |
CN113114597B (en) * | 2021-03-25 | 2022-04-29 | 电子科技大学 | Four-input signal separation method and system based on JADE algorithm |
CN116776108A (en) * | 2023-06-14 | 2023-09-19 | 中国人民解放军空军预警学院 | Underdetermined combined blind source separation method and system based on third-order cumulant and tensor decomposition |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103780522A (en) * | 2014-01-08 | 2014-05-07 | 西安电子科技大学 | Non-orthogonal joint diagonalization instantaneous blind source separation method based on double iteration |
CN105282067A (en) * | 2015-09-16 | 2016-01-27 | 长安大学 | Complex field blind source separation method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102903368B (en) * | 2011-07-29 | 2017-04-12 | 杜比实验室特许公司 | Method and equipment for separating convoluted blind sources |
Non-Patent Citations (1)
Title |
---|
Jacobi-type Joint Diagonalization for Complex Symmetric Matrices;WANG Ke;《新型工业化》;20131231;第3卷(第3期);全文 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||