CN114200392A - High subsonic speed flight target acoustic signal estimation method based on acoustic vector uniform linear array - Google Patents

High subsonic speed flight target acoustic signal estimation method based on acoustic vector uniform linear array

Info

Publication number
CN114200392A
CN114200392A (application CN202111409850.3A)
Authority
CN
China
Prior art keywords
neural network
matrix
acoustic
signal
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111409850.3A
Other languages
Chinese (zh)
Inventor
陈昭男
阎肖鹏
孙贵新
邵翔宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unite 91550 Of Pla
Original Assignee
Unite 91550 Of Pla
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unite 91550 Of Pla filed Critical Unite 91550 Of Pla
Priority to CN202111409850.3A priority Critical patent/CN114200392A/en
Publication of CN114200392A publication Critical patent/CN114200392A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S3/00Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S3/80Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using ultrasonic, sonic or infrasonic waves
    • G01S3/802Systems for determining direction or deviation from predetermined direction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/14Fourier, Walsh or analogous domain transformations, e.g. Laplace, Hilbert, Karhunen-Loeve, transforms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Algebra (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computing Systems (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The invention relates to the field of acoustic measurement, and in particular to a method for estimating the acoustic signal of a high subsonic flight target based on a uniform linear acoustic vector array. A time-domain wideband signal expression is constructed for a spatially uniform linear cross array of acoustic vector sensors, and a fast time-domain wideband DOA estimation method based on PCA-BP is adopted, which effectively reduces the computational complexity of the system and improves the real-time performance of the estimation of high subsonic flight targets. By constructing the wideband receiving model in the time domain, the method effectively extends the range of application of DOA estimation methods based on artificial neural networks and provides a technical basis for effectively realizing acoustic detection of high subsonic flight targets.

Description

High subsonic speed flight target acoustic signal estimation method based on acoustic vector uniform linear array
Technical Field
The invention relates to the field of acoustic measurement, in particular to a high subsonic speed flight target acoustic signal estimation method based on an acoustic vector uniform linear array.
Background
At present, over-the-horizon detection of sea-surface low-altitude flying targets is difficult for conventional radar detection technology because the height of the sea-surface platform limits the radar horizon. Meanwhile, the complex electromagnetic environment near the sea-surface platform raises the electromagnetic background noise, and the platform radar operates at low elevation when searching toward the sea, so severe sea clutter and multipath effects are present. These adverse factors make radar detection of short-range, low-altitude sea-surface flight targets difficult.
In recent years, detection and localization techniques based on the target's own acoustic signal have received much attention as a complement to radar detection. Acoustic detection determines the position of a target by passively receiving the acoustic waves it emits; it offers strong concealment, resistance to electromagnetic interference, and the ability to detect low-altitude targets. When applied on a sea-surface platform, where the background and environmental noise are distinctive and clutter interference is limited, it has unique advantages for the detection of low-altitude flying targets over the sea.
Taking the speed of sound as the boundary, low-altitude flight targets can be divided into two categories: supersonic targets and subsonic targets. A supersonic target disturbs the surrounding air in flight and forms Mach waves; the wave front of the Mach waves is called a shock wave, which propagates as a conical surface and has distinctive time-domain characteristics, so the theory and technology for acoustic tracking of supersonic targets are relatively mature. The acoustic signal of a high subsonic flight target, by contrast, is wideband, transient, and non-stationary, and acoustic tracking methods for such targets have received comparatively little study.
Disclosure of Invention
The invention discloses a method for estimating the acoustic signal of a high subsonic flight target based on a uniform linear acoustic vector array, aimed at the problem that the acoustic signal of a high subsonic flight target is difficult to track acoustically because it is wideband, transient, and non-stationary.
The technical scheme of the invention is as follows:
the method for estimating the acoustic signal of the high subsonic flight target based on the acoustic vector uniform linear array comprises the following specific steps:
s1: performing discrete Fourier transform (FFT) on a received signal of the acoustic vector sensor array, segmenting the frequency domain signal to obtain a plurality of sections of frequency domain signals, and calculating a covariance matrix of each section of frequency domain signal to obtain a plurality of covariance matrices;
s2: respectively inputting each covariance matrix obtained in step S1 into a principal component analysis neural network, namely a PCA neural network, and performing eigenvalue decomposition on the input matrix by the PCA neural network to respectively obtain the eigenvector matrix A(f_j), j = 0, 1, 2, …, J, of each covariance matrix;
S3: respectively constructing a focusing matrix of each section of frequency domain signal by utilizing the eigenvector of each covariance matrix, then respectively carrying out focusing processing on the corresponding frequency domain signal by utilizing each focusing matrix, calculating to obtain a total covariance matrix of all the frequency domain signals, and carrying out matrix eigenvalue decomposition on the total covariance matrix by utilizing a PCA (principal component analysis) neural network;
s4: constructing a training data set by using a matrix characteristic value obtained by PCA neural network processing, and training a BP neural network;
s5: if the training error of the BP neural network is smaller than the set error threshold, the training is considered to be finished, the BP neural network which is trained is obtained, otherwise, the next round of training is carried out until the training error of the BP neural network is smaller than the set error threshold;
s6: for signals acquired by the acoustic vector sensors, calculating the total covariance matrix of all frequency domain signals, inputting the total covariance matrix into the PCA neural network to obtain the eigenvalues and eigenvectors of the signals, forming a matrix from the eigenvectors whose corresponding eigenvalues of the total covariance matrix are greater than the threshold value, and inputting this matrix into the trained BP neural network, whose output is the estimated value of the DOA of the high subsonic flying target at that moment.
The step S1 specifically includes: the method is implemented with an acoustic vector sensor array that is uniformly and linearly distributed in space on a spatial cross array; a three-dimensional rectangular coordinate system is established along the array baselines of the spatial cross array, the reference sensor of the acoustic vector sensor array lies at the coordinate origin of the three-dimensional rectangular coordinate system, the acoustic vector sensors are uniformly distributed on the positive and negative half-axes of the x, y and z axes, and the spacing between adjacent sensors, i.e. the baseline length, is d.
For K far-field sound sources incident from directions θ_k, k = 1, 2, …, K, let the position coordinate of the m-th acoustic vector sensor be (x_m, y_m, z_m). When the K sound sources are incident on the m-th acoustic vector sensor at time t, its output x^(m)(t) is expressed as:

x^(m)(t) = Σ_{k=1}^{K} a_k q_m(θ_k) s_k(t) + n^(m)(t)

where a_k is the direction vector of the incident signal of the k-th sound source, a_k = [u_k, v_k, w_k], q_m(θ_k) = exp(-j2πf τ_km), τ_km is the time delay between the arrival of the k-th incident signal at the m-th acoustic vector sensor and at the reference sensor, τ_km = (x_m u_k + y_m v_k + z_m w_k)/c, c is the speed of sound, s_k(t) is the sound pressure signal of the incident signal of the k-th sound source, and n^(m)(t) is the measurement noise vector of the m-th acoustic vector sensor.
For the K far-field sound sources, the received signal vector x(t) of the acoustic vector sensor array is expressed as:

x(t) = Σ_{k=1}^{K} [a_k ⊗ q(θ_k)] s_k(t) + n(t)

where a_k ⊗ q(θ_k) is the steering vector of the acoustic vector array, q(θ_k) = [q_1(θ_k), …, q_M(θ_k)]^T, the symbol ⊗ denotes the Kronecker product, and M is the number of acoustic vector sensors. After discrete sampling of the signals acquired by the acoustic vector sensors, L snapshots of discrete signals are obtained; the corresponding discrete signal matrix X received by the acoustic vector sensor array has dimension M × K, and n(t) is the measurement noise vector of the acoustic vector sensor array.
An FFT is performed on the signals collected by the acoustic vector sensors and the resulting frequency domain signal is divided into J segments. For the j-th segment X(f_j), its covariance matrix R_X(f_j) is computed as:

R_X(f_j) = E[X(f_j) X^H(f_j)], j = 0, 1, 2, …, J
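A minimal numerical sketch of this segmentation and per-segment covariance estimate is given below (the array size, snapshot count, segment count, and function names are illustrative assumptions rather than values taken from the patent):

```python
import numpy as np

def segment_covariances(x, num_segments):
    """Sketch of step S1: FFT the M-channel time series, split the spectrum into
    num_segments segments, and estimate one covariance matrix per segment.

    x            : (M, L) array of M sensor channels and L time snapshots
    num_segments : number of frequency segments J
    returns      : list of per-segment covariance matrices, each (M, M)
    """
    M, L = x.shape
    Xf = np.fft.fft(x, axis=1)                # frequency-domain data, shape (M, L)
    bins = np.array_split(np.arange(L), num_segments)
    covs = []
    for idx in bins:
        Xj = Xf[:, idx]                       # X(f_j): the bins of the j-th segment
        Rj = Xj @ Xj.conj().T / len(idx)      # sample estimate of E[X(f_j) X(f_j)^H]
        covs.append(Rj)
    return covs

# toy usage: 4 sensors, 1024 snapshots, 8 frequency segments
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 1024))
covs = segment_covariances(x, 8)
print(len(covs), covs[0].shape)               # -> 8 (4, 4)
```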
In step S2, the matrix eigenvalue decomposition is implemented with a PCA neural network, specifically as follows: the PCA neural network is realized as a single-layer feed-forward neural network with unsupervised learning, whose weight vector W satisfies W W^H = I. The discrete signal matrix X is input into the PCA neural network, whose output is Y = WX. After the k-th iteration, the cost function of the PCA neural network is:

L(W) = (X - W^H(k) Y)^H (X - W^H(k) Y)

The optimal weight vector W_opt of the network is solved by repeated iterations of the steepest gradient descent algorithm; when the difference between the network weight vectors of adjacent iterations is smaller than the threshold ε, i.e. ||W(k) - W(k-1)||² ≤ ε, the PCA neural network is judged to have converged, and the resulting optimal weight vector W_opt gives the eigenvectors of the input matrix X.
The step S3 specifically includes: the covariance matrix of the j-th segment of the frequency domain signal is R_X(f_j); the focusing matrix is obtained by eigenvalue decomposition of R_X(f_j), j = 0, 1, 2, …, J. After the covariance matrix R_X(f_j) of each segment of the frequency domain signal is input into the PCA neural network for eigenvalue decomposition, the feature matrix U_S(f_j) formed by the dominant eigenvectors is obtained; for the reference frequency point f_0, the feature matrix of the covariance matrix R_X(f_0) is denoted V_S(f_0). The focusing matrix of the j-th segment of the frequency domain signal is therefore:

T(f_j) = U_S(f_j) V_S^H(f_0),

and the total covariance matrix of all the focused frequency domain signals is:

R_y = Σ_{j=0}^{J} T(f_j) R_X(f_j) T^H(f_j)
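A compact sketch of this focusing step under the same illustrative assumptions is shown below; here an ordinary numpy eigendecomposition stands in for the PCA neural network, and the number of retained dominant eigenvectors is an assumed parameter:

```python
import numpy as np

def focused_covariance(covs, ref_index=0, num_sig=2):
    """Sketch of step S3: build T(f_j) = U_S(f_j) V_S^H(f_0) from the dominant
    eigenvectors of each segment covariance and accumulate the focused total
    covariance R_y = sum_j T(f_j) R_X(f_j) T(f_j)^H."""
    def dominant_eigvecs(R, p):
        w, V = np.linalg.eigh(R)                        # eigenvalues in ascending order
        return V[:, np.argsort(w)[::-1][:p]]            # the p strongest eigenvectors
    V0 = dominant_eigvecs(covs[ref_index], num_sig)     # V_S(f_0) at the reference point
    Ry = np.zeros_like(covs[0])
    for Rj in covs:
        Uj = dominant_eigvecs(Rj, num_sig)              # U_S(f_j)
        Tj = Uj @ V0.conj().T                           # focusing matrix T(f_j), (M, M)
        Ry = Ry + Tj @ Rj @ Tj.conj().T                 # focused contribution of segment j
    return Ry

# usage with the per-segment covariances from the step-S1 sketch:
# Ry = focused_covariance(covs, ref_index=0, num_sig=2)
```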
In step S4, the eigenvectors corresponding to the eigenvalues of the total covariance matrix of all frequency domain signals that are greater than the threshold are assembled into a matrix U; from U a feature training matrix is constructed and paired with the corresponding target DOA parameter vectors, and the resulting data set is used as the training data set. The number of neurons of each layer of the BP neural network is determined, and the connection weights between the layers and the thresholds of each layer are initialized.
In step S4, the BP neural network is specifically as follows. The numbers of neurons contained in the input layer, hidden layer, and output layer of the BP neural network are I, H, and J, respectively. The excitation function of the hidden layer is the sigmoid function, and the excitation function of the output layer is the purelin function. The training data set is input into the BP neural network; the outputs of the hidden layer neurons are b = [b_1, b_2, …, b_H], where b_h^(n) denotes the output of the h-th hidden layer neuron for the n-th training sample:

b_h^(n) = f_1(α_h - γ_h)

where α_h is the input to the h-th hidden layer neuron,

α_h = Σ_{i=1}^{I} w_ih x_i^(n),

w_ih is the weight between the i-th input layer neuron and the h-th hidden layer neuron, γ_h is the threshold of the h-th hidden layer neuron, and f_1 is the sigmoid function. The outputs of the output layer neurons are ŷ = [ŷ_1, ŷ_2, …, ŷ_J], where ŷ_j^(n) denotes the output of the j-th output layer neuron, computed as:

ŷ_j^(n) = f_2(β_j - χ_j)

where β_j is the input to the j-th output layer neuron,

β_j = Σ_{h=1}^{H} v_hj b_h^(n),

v_hj is the weight between the h-th hidden layer neuron and the j-th output layer neuron, χ_j is the threshold of the j-th output layer neuron, and f_2 is the purelin function.
The step S5 is specifically as follows: compute the mean square error between the output ŷ of the output neurons of the BP neural network and the expected output y, and compute the sum of the mean square errors over all samples, i.e. the training error. The weights and thresholds of each layer are updated with the steepest gradient descent algorithm. When the training error is smaller than the set error threshold, training is considered complete; otherwise, the next round of training continues.
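The forward pass of step S4 and the steepest-descent update of step S5 can be sketched as follows; the layer sizes, learning rate, initialization, and helper names are illustrative assumptions, while the sigmoid hidden layer and purelin (identity) output layer follow the description:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bp_forward(x, w, gamma, v, chi):
    """Forward pass: alpha_h = sum_i w_ih x_i, b_h = sigmoid(alpha_h - gamma_h),
    beta_j = sum_h v_hj b_h, y_j = purelin(beta_j - chi_j) = beta_j - chi_j."""
    b = sigmoid(x @ w - gamma)          # hidden-layer outputs, shape (H,)
    y = b @ v - chi                     # output layer with purelin (identity) activation
    return b, y

def bp_train_epoch(X, Y, w, gamma, v, chi, lr=0.01):
    """One epoch of steepest gradient descent on the summed mean-square error."""
    err = 0.0
    for x, y_true in zip(X, Y):
        b, y = bp_forward(x, w, gamma, v, chi)
        e = y - y_true
        err += np.mean(e ** 2)
        delta_h = (e @ v.T) * b * (1.0 - b)      # back-propagated error (sigmoid derivative)
        v -= lr * np.outer(b, e)                 # output-layer weights (purelin derivative = 1)
        chi += lr * e                            # output-layer thresholds
        w -= lr * np.outer(x, delta_h)           # hidden-layer weights
        gamma += lr * delta_h                    # hidden-layer thresholds
    return err

# toy usage: I=6 input features, H=10 hidden neurons, J=2 DOA outputs
rng = np.random.default_rng(1)
I, H, J = 6, 10, 2
w, gamma = 0.1 * rng.standard_normal((I, H)), np.zeros(H)
v, chi = 0.1 * rng.standard_normal((H, J)), np.zeros(J)
X, Y = rng.standard_normal((50, I)), rng.standard_normal((50, J))
for _ in range(5):
    print(bp_train_epoch(X, Y, w, gamma, v, chi))    # training error per epoch
```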
The invention has the beneficial effects that:
A time-domain wideband signal expression is constructed for a spatially uniform linear cross array of acoustic vector sensors, and a fast time-domain wideband DOA estimation method based on PCA-BP is adopted, which effectively reduces the computational complexity of the system and improves the real-time performance of the estimation of high subsonic flight targets. By constructing the wideband receiving model in the time domain, the method effectively extends the range of application of DOA estimation methods based on artificial neural networks and provides a technical basis for effectively realizing acoustic detection of high subsonic flight targets.
Drawings
FIG. 1 is a schematic diagram of an acoustic vector sensor array of the present invention;
FIG. 2 is a diagram of a single-layer forward propagation neural network model for unsupervised learning according to the present invention;
FIG. 3 is a schematic diagram of the topology of the BP neural network of the present invention;
FIG. 4 is a flow chart of the implementation of the method of the present invention.
Detailed Description
The following further describes a specific embodiment of the present invention with reference to the drawings and technical solutions.
The invention discloses a method for estimating the acoustic signal of a high subsonic flight target based on a uniform linear acoustic vector array. The method is implemented with an acoustic vector sensor array that is uniformly and linearly distributed in space on a spatial cross array; a three-dimensional rectangular coordinate system is established along the array baselines of the spatial cross array, the reference sensor of the acoustic vector sensor array lies at the coordinate origin of the three-dimensional rectangular coordinate system, the acoustic vector sensors are uniformly distributed on the positive and negative half-axes of the x, y and z axes, and the spacing between adjacent sensors, i.e. the baseline length, is d.
For K far-field sound sources incident from directions θ_k, k = 1, 2, …, K, on the acoustic vector sensor array shown in fig. 1, the propagation medium is isotropic and the sensor at the coordinate origin is taken as the reference. Let the position coordinate of the m-th sensor be (x_m, y_m, z_m); the output x^(m)(t) of the m-th acoustic vector sensor when the K sound sources are incident at time t can be expressed as:

x^(m)(t) = Σ_{k=1}^{K} a_k q_m(θ_k) s_k(t) + n^(m)(t)

where a_k is the direction vector of the k-th incident signal, a_k = [u_k, v_k, w_k], q_m(θ_k) = exp(-j2πf τ_km), τ_km is the time delay between the arrival of the k-th incident signal at the m-th element and at the reference element, τ_km = (x_m u_k + y_m v_k + z_m w_k)/c, s_k(t) is the sound pressure signal of the k-th incident signal, and n^(m)(t) is the measurement noise vector of the m-th acoustic vector sensor.
From the output expression of the acoustic vector sensor, the received signal vector of the uniform linear spatial cross array of acoustic vector sensors is obtained as:

x(t) = Σ_{k=1}^{K} [a_k ⊗ q(θ_k)] s_k(t) + n(t)

where a_k ⊗ q(θ_k) is the steering vector of the acoustic vector array, q(θ_k) = [q_1(θ_k), …, q_M(θ_k)]^T, the symbol ⊗ denotes the Kronecker product, and M is the number of acoustic vector sensors. Discrete sampling of the signals acquired by the acoustic vector sensors yields L snapshots of discrete signals, and the corresponding discrete signal matrix X received by the acoustic vector sensor array has dimension M × K.
The principal component analysis method expresses high-dimensional data with a small number of mutually uncorrelated low-dimensional vectors, thereby reducing the dimension of the high-dimensional data vectors; these mutually uncorrelated low-dimensional vectors are called the principal components of the original high-dimensional data. After PCA dimensionality reduction, the number of characteristic variables of the original data vector is reduced, and the redundant information and noise present in the original characteristic variables are removed. For an N-dimensional vector X with zero mean, the projection of X onto a unit vector v is:

a = X^T v = v^T X,

the variable a has zero mean, and its variance is:

σ² = E[a²] = E[(v^T X)(X^T v)] = v^T E[X X^T] v = v^T R_x v,

where R_x denotes the covariance matrix of the vector X. For a unit vector v, the variance function is σ²(v) = v^T R_x v. The basic procedure of the PCA method is to find the v that maximizes σ²(v); this v is an eigenvector of the matrix R_x and satisfies R_x v = λv, where λ is an eigenvalue of the matrix. The unit vectors that yield the first N largest values of σ²(v) are thus the first N eigenvectors of R_x, and their corresponding eigenvalues are the N largest eigenvalues, which completes the eigenvalue decomposition of the matrix.
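A short numerical check of this property (illustrative only, not part of the patent text): the unit eigenvector of R_x associated with the largest eigenvalue attains the largest projection variance v^T R_x v.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((3, 5000))            # zero-mean 3-dimensional data, 5000 samples
Rx = X @ X.T / X.shape[1]                     # sample covariance R_x

lam, V = np.linalg.eigh(Rx)                   # R_x v = lambda v, eigenvalues ascending
v_max = V[:, -1]                              # eigenvector of the largest eigenvalue

v_rand = rng.standard_normal(3)
v_rand /= np.linalg.norm(v_rand)              # a random unit direction for comparison
print(v_max @ Rx @ v_max, lam[-1])            # projection variance equals the top eigenvalue
print(v_rand @ Rx @ v_rand)                   # never exceeds lam[-1]
```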
Assuming the number of acoustic vector sensors is M, the wideband incident signal is divided into J segments for FFT processing. For the j-th segment X(f_j) of the frequency domain signal, its covariance matrix R_X(f_j) is computed as:

R_X(f_j) = E[X(f_j) X^H(f_j)] = A(f_j) R_s(f_j) A^H(f_j) + R_N(f_j),

where A(f_j) is the array manifold (steering) matrix of the j-th segment of the frequency domain signal, R_N(f_j) is the covariance matrix of the noise in the j-th segment, and R_s(f_j) is the cross-correlation matrix of the signals in the j-th segment.
The corresponding data covariance matrix R_X(f_0) at the reference frequency point f_0 is:

R_X(f_0) = E[X(f_0) X^H(f_0)] = A(f_0) R_s(f_0) A^H(f_0) + R_N(f_0).

For DOA estimation of a wideband signal, a focusing matrix must be used to focus the signals at all frequencies onto f_0. The focusing matrix is obtained by eigenvalue decomposition of R_X(f_j), j = 0, 1, 2, …, J, and the eigenvalue decomposition is implemented with a PCA neural network. Because the data after the FFT are complex-valued while a conventional neural network model only processes real-valued data, the cross-correlation complex matrix is first converted: for the M × M cross-correlation complex matrix R_X, a 2M × 2M real matrix composed of its real and imaginary parts is constructed and input into the PCA neural network.
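The exact 2M × 2M real matrix is not reproduced in the text; a common choice, assumed here only for illustration, stacks the real and imaginary parts of R_X as in the following sketch.

```python
import numpy as np

def complex_to_real(R):
    """Assumed 2M x 2M real embedding of an M x M complex covariance matrix:
    [[Re(R), -Im(R)], [Im(R), Re(R)]], so a real-valued PCA network can process it."""
    return np.block([[R.real, -R.imag],
                     [R.imag,  R.real]])

# toy check: the eigenvalues of the real embedding are those of R, each repeated twice
R = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
print(np.round(np.linalg.eigvalsh(R), 3))                    # [1. 4.]
print(np.round(np.linalg.eigvalsh(complex_to_real(R)), 3))   # [1. 1. 4. 4.]
```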
The cross-correlation matrix X is substituted into the PCA neural network, whose weight matrix W satisfies W W^H = I. The output of the PCA neural network is Y = WX, and the input is reconstructed from the output as X̂ = W^H Y. The cost function of the PCA network is therefore constructed as:

L(W) = ||e||² = (X - W^H Y)^H (X - W^H Y)
     = X^H X - 2 X^H W^H W X + X^H W^H W W^H W X.

Differentiating the cost function gives the negative gradient of L(W):

-∇L(W) = 2 Y X^H - Y X^H W^H W - W W^H Y X^H.

Since W W^H → I, the above expression can be written as:

-∇L(W) = Y X^H - Y Y^H W.

The optimal weight vector W_opt is solved with the steepest gradient descent algorithm; during the solution the weights are updated along the negative gradient direction, with the update formula:

W(k+1) = W(k) + η [Y(k) X^H(k) - Y(k) Y^H(k) W(k)].

With the GHA algorithm, only the lower triangular part of the matrix Y(k) Y^H(k) is needed to obtain the network weights, so the iterative formula for the complex weights of the PCA neural network is:

W(k+1) = W(k) + η [Y(k) X^H(k) - LT[Y(k) Y^H(k)] W(k)],

where LT[·] denotes the lower triangular part. When the difference between the network weights of adjacent iterations is smaller than the threshold, i.e. ||W(k) - W(k-1)||² ≤ ε, the PCA neural network is judged to have converged, and the resulting optimal weights are the eigenvectors of the cross-correlation matrix X. The learning rate η is generally set to satisfy 0 < η < 1/λ_1, where λ_1 is the largest eigenvalue of the cross-correlation matrix X. The PCA neural network is realized as a single-layer feed-forward neural network with unsupervised learning; its basic structure is shown in fig. 2.
As shown in fig. 2, for M input components, the network can extract their first P principal components. After R_X(f_j) is input into the PCA neural network for eigenvalue decomposition, the feature matrix U_S(f_j) formed by the dominant eigenvectors is obtained; for the data covariance matrix R_X(f_0) at the reference frequency point f_0, the feature matrix is denoted V_S(f_0), giving the focusing matrix:

T(f_j) = U_S(f_j) V_S^H(f_0)

The covariance matrix of the focused received data is:

R_y = Σ_{j=0}^{J} T(f_j) R_X(f_j) T^H(f_j)

After the covariance matrix R_y of the received data is obtained, it is again input into the PCA neural network to obtain the eigenvectors of R_y; the complex weights of the PCA neural network form the eigenvectors of the cross-correlation matrix of the signals collected by the acoustic vector sensors.
In existing PCA-based DOA estimation methods, after the signal subspace has been obtained with the PCA method, a spatial spectrum must still be constructed with the MUSIC method and its maximum points searched for as the spectrum estimation result. For the two-dimensional DOA estimation problem, a joint search is carried out over the 360° azimuth range and the 90° pitch range, and to reach high accuracy the search step must be small enough. If the azimuth and pitch ranges are searched with a step of 0.1°, about 10^6 spatial-spectrum evaluations are needed, each with a computational cost of about O(M² + M), so the computational effort of the spatial-spectrum search is also huge. To avoid this process, the invention proposes the PCA-BP algorithm: after the signal subspace is obtained with the PCA method, a BP neural network is further used to estimate the two-dimensional DOA.
The BP neural network is a multi-layer feed-forward neural network trained with the error back-propagation algorithm; its topology is shown in fig. 3. The learning process of the BP neural network consists of two stages, forward propagation of the signal and backward propagation of the error. In the forward stage the input signal passes from the input layer through the hidden layer to the output layer, and the result of the output layer is evaluated; if the result is not the expected output value, the error back-propagation stage is carried out, in which the weights and biases from the hidden layer to the output layer and from the input layer to the hidden layer are adjusted in turn, from the output layer back through the hidden layer to the input layer.
In fig. 3, x_i^(n) denotes the i-th feature of the n-th training sample, i.e. the i-th component of the n-th snapshot data vector; w_ih is the weight between the i-th input layer neuron and the h-th hidden layer neuron; b_h^(n) is the output of the h-th hidden layer neuron for the n-th training sample; v_hj is the weight between the h-th hidden layer neuron and the j-th output layer neuron; and ŷ_j^(n) is the output value of the j-th output layer neuron for the n-th training sample.
The eigenvectors of the received-data covariance matrix R_y corresponding to the larger eigenvalues form a matrix U, from which the feature training matrix is constructed and paired with the target DOA parameter vectors; the resulting data set is used as the training data set. The number of neurons of each layer of the BP neural network is determined, and the connection weights between the layers and the thresholds of each layer are initialized. The numbers of neurons contained in the input layer, hidden layer, and output layer are I, H, and J, respectively. The excitation function of the hidden layer is the sigmoid function, and the excitation function of the output layer is the purelin function. The training data set is input into the BP neural network; the outputs of the hidden layer neurons are b = [b_1, b_2, …, b_H], where b_h^(n) denotes the output of the h-th hidden layer neuron for the n-th training sample:

b_h^(n) = f_1(α_h - γ_h)

where α_h is the input to the h-th hidden layer neuron,

α_h = Σ_{i=1}^{I} w_ih x_i^(n),

w_ih is the weight between the i-th input layer neuron and the h-th hidden layer neuron, γ_h is the threshold of the h-th hidden layer neuron, and f_1 is the sigmoid function. The outputs of the output layer neurons are ŷ = [ŷ_1, ŷ_2, …, ŷ_J], where ŷ_j^(n) denotes the output of the j-th output layer neuron, computed as:

ŷ_j^(n) = f_2(β_j - χ_j)

where β_j is the input to the j-th output layer neuron,

β_j = Σ_{h=1}^{H} v_hj b_h^(n),

v_hj is the weight between the h-th hidden layer neuron and the j-th output layer neuron, χ_j is the threshold of the j-th output layer neuron, and f_2 is the purelin function. The mean square error between the output ŷ of the output neurons of the neural network and the expected output y is computed, and the sum of the mean square errors over all samples is taken as the training error. The weights and thresholds of each layer are updated with the steepest gradient descent algorithm. When the training error is smaller than the set error, training is considered complete; otherwise, the next round of training continues. The signals acquired by the acoustic vector sensors are processed by PCA and then input into the trained BP neural network, whose output is the estimated value of the DOA.
Based on the above description, fig. 4 shows a flow chart of an implementation of the PCA-BP algorithm for two-dimensional DOA estimation.
As shown in fig. 4, the PCA-BP algorithm estimates the two-dimensional DOA of the high subsonic flight target by the following basic steps:
1) carrying out FFT on the received signals of the acoustic vector sensor array, segmenting the frequency domain signals, and calculating a covariance matrix of each segment of frequency domain signals;
2) inputting each covariance matrix into a PCA neural network respectively to obtain a characteristic vector of each covariance matrix;
3) respectively constructing a focusing matrix of each frequency band by utilizing the eigenvector of each covariance matrix, carrying out focusing processing on the frequency domain signals, then calculating to obtain a total covariance matrix of frequency domain data, and carrying out PCA processing on the total covariance matrix;
4) constructing a training data set by using a feature matrix obtained by PCA (principal component analysis) processing, and training a BP (back propagation) neural network;
5) if the training error of the BP neural network is smaller than the set error threshold, the training is considered to be finished, otherwise, the next round of training is carried out;
6) processing the signal acquired by the acoustic vector sensors after PCA processing with the trained BP neural network to obtain the two-dimensional DOA estimate of the target (a compact sketch of this pipeline follows the list).
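Putting the steps together, the overall PCA-BP flow of fig. 4 can be organized as the skeleton below; it reuses the function names from the earlier sketches, and the way the feature vector is built from the dominant eigenvectors is an illustrative assumption rather than the patent's exact formulation.

```python
import numpy as np

def pca_bp_doa_estimate(x, num_segments, num_sig, bp_params):
    """Skeleton of the fig. 4 flow, reusing the earlier sketches:
    steps 1-3 produce the focused covariance; the BP network (assumed already
    trained, its weights/thresholds passed in bp_params) performs step 6."""
    covs = segment_covariances(x, num_segments)             # step 1: R_X(f_j) per segment
    Ry = focused_covariance(covs, ref_index=0,              # steps 2-3: focused R_y
                            num_sig=num_sig)
    lam, V = np.linalg.eigh(Ry)                             # stand-in for the PCA network
    U = V[:, np.argsort(lam)[::-1][:num_sig]]               # dominant eigenvectors of R_y
    features = np.concatenate([U.real.ravel(), U.imag.ravel()])   # assumed feature layout
    w, gamma, v, chi = bp_params                            # trained BP weights and thresholds
    _, doa = bp_forward(features, w, gamma, v, chi)         # step 6: two-dimensional DOA
    return doa
```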
The above description is only an example of the present invention, and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (6)

1. A method for estimating the acoustic signal of a high subsonic flight target based on a uniform linear acoustic vector array, characterized by comprising the following specific steps:
s1: performing discrete Fourier transform (FFT) on a received signal of the acoustic vector sensor array, segmenting a frequency domain signal to obtain multiple sections of frequency domain signals, and calculating a covariance matrix of each section of frequency domain signal to obtain a plurality of covariance matrices;
s2: respectively inputting each covariance matrix obtained in step S1 into a principal component analysis neural network, namely a PCA neural network, and performing eigenvalue decomposition on the input matrix by the PCA neural network to respectively obtain the eigenvector matrix A(f_j), j = 0, 1, 2, …, J, of each covariance matrix;
S3: respectively constructing a focusing matrix of each section of frequency domain signal by utilizing the eigenvector of each covariance matrix, then respectively carrying out focusing processing on the corresponding frequency domain signal by utilizing each focusing matrix, calculating to obtain a total covariance matrix of all the frequency domain signals, and carrying out matrix eigenvalue decomposition on the total covariance matrix by utilizing a PCA (principal component analysis) neural network;
s4: constructing a training data set by using a matrix characteristic value obtained by PCA neural network processing, and training a BP neural network;
s5: if the training error of the BP neural network is smaller than the set error threshold, the training is considered to be finished, the BP neural network which is trained is obtained, otherwise, the next round of training is carried out until the training error of the BP neural network is smaller than the set error threshold;
s6: for signals acquired by the acoustic vector sensors, calculating the total covariance matrix of all frequency domain signals, inputting the total covariance matrix into the PCA neural network to obtain the eigenvalues and eigenvectors of the signals, forming a matrix from the eigenvectors whose corresponding eigenvalues of the total covariance matrix are greater than the threshold value, and inputting this matrix into the trained BP neural network, whose output is the estimated value of the DOA of the high subsonic flying target at that moment.
2. The method for estimating the acoustic signal of the high subsonic flight target based on the uniform linear array of acoustic vectors as claimed in claim 1, wherein:
the step S1 specifically includes: the method is implemented with an acoustic vector sensor array that is uniformly and linearly distributed in space on a spatial cross array; a three-dimensional rectangular coordinate system is established along the array baselines of the spatial cross array, the reference sensor of the acoustic vector sensor array lies at the coordinate origin of the three-dimensional rectangular coordinate system, the acoustic vector sensors are uniformly distributed on the positive and negative half-axes of the x, y and z axes, and the spacing between adjacent sensors, i.e. the baseline length, is d;
for K far-field sound sources incident from directions θ_k, k = 1, 2, …, K, let the position coordinate of the m-th acoustic vector sensor be (x_m, y_m, z_m); when the K sound sources are incident on the m-th acoustic vector sensor at time t, its output x^(m)(t) is expressed as:

x^(m)(t) = Σ_{k=1}^{K} a_k q_m(θ_k) s_k(t) + n^(m)(t)

where a_k is the direction vector of the incident signal of the k-th sound source, a_k = [u_k, v_k, w_k], q_m(θ_k) = exp(-j2πf τ_km), τ_km is the time delay between the arrival of the k-th incident signal at the m-th acoustic vector sensor and at the reference sensor, τ_km = (x_m u_k + y_m v_k + z_m w_k)/c, c is the speed of sound, s_k(t) is the sound pressure signal of the incident signal of the k-th sound source, and n^(m)(t) is the measurement noise vector of the m-th acoustic vector sensor;
for the K far-field sound sources, the received signal vector x(t) of the acoustic vector sensor array is expressed as:

x(t) = Σ_{k=1}^{K} [a_k ⊗ q(θ_k)] s_k(t) + n(t)

where a_k ⊗ q(θ_k) is the steering vector of the acoustic vector array, q(θ_k) = [q_1(θ_k), …, q_M(θ_k)]^T, the symbol ⊗ denotes the Kronecker product, and M is the number of acoustic vector sensors; discrete sampling of the signals acquired by the acoustic vector sensors yields L snapshots of discrete signals, the corresponding discrete signal matrix X received by the acoustic vector sensor array has dimension M × K, and n(t) is the measurement noise vector of the acoustic vector sensor array;

an FFT is performed on the signals collected by the acoustic vector sensors and the resulting frequency domain signal is divided into J segments; for the j-th segment X(f_j), its covariance matrix R_X(f_j) is computed as:

R_X(f_j) = E[X(f_j) X^H(f_j)], j = 0, 1, 2, …, J.
3. the method for estimating the acoustic signal of the high subsonic flight target based on the uniform linear array of acoustic vectors as claimed in claim 1 or 2, characterized in that:
in step S2, the matrix eigenvalue decomposition is implemented with a PCA neural network, specifically as follows: the PCA neural network is realized as a single-layer feed-forward neural network with unsupervised learning, whose weight vector W satisfies W W^H = I; the discrete signal matrix X is input into the PCA neural network, whose output is Y = WX; after the k-th iteration, the cost function of the PCA neural network is:

L(W) = (X - W^H(k) Y)^H (X - W^H(k) Y)

the optimal weight vector W_opt of the network is solved by repeated iterations of the steepest gradient descent algorithm; when the difference between the network weight vectors of adjacent iterations is smaller than the threshold ε, i.e. ||W(k) - W(k-1)||² ≤ ε, the PCA neural network is judged to have converged, and the resulting optimal weight vector W_opt gives the eigenvectors of the input matrix X.
4. The method for estimating the acoustic signal of the high subsonic flight target based on the uniform linear array of acoustic vectors as claimed in claim 1 or 2, characterized in that:
the step S3 specifically includes: the covariance matrix of the j-th segment of the frequency domain signal is R_X(f_j); the focusing matrix is obtained by eigenvalue decomposition of R_X(f_j), j = 0, 1, 2, …, J; after the covariance matrix R_X(f_j) of each segment of the frequency domain signal is input into the PCA neural network for eigenvalue decomposition, the feature matrix U_S(f_j) formed by the dominant eigenvectors is obtained; for the reference frequency point f_0, the feature matrix of the covariance matrix R_X(f_0) is denoted V_S(f_0), so that the focusing matrix of the j-th segment of the frequency domain signal is:

T(f_j) = U_S(f_j) V_S^H(f_0),

and the total covariance matrix of all the focused frequency domain signals is:

R_y = Σ_{j=0}^{J} T(f_j) R_X(f_j) T^H(f_j).
5. the method for estimating the acoustic signal of the high subsonic flight target based on the uniform linear array of acoustic vectors as claimed in claim 1 or 2, characterized in that:
in step S4, the eigenvectors corresponding to the eigenvalues of the total covariance matrix of all frequency domain signals that are greater than the threshold are assembled into a matrix U; from U a feature training matrix is constructed and paired with the corresponding target DOA parameter vectors, and the resulting data set is used as the training data set; the number of neurons of each layer of the BP neural network is determined, and the connection weights between the layers and the thresholds of each layer are initialized;
the BP neural network is specifically as follows: the numbers of neurons contained in the input layer, hidden layer, and output layer of the BP neural network are I, H, and J, respectively; the excitation function of the hidden layer is the sigmoid function, and the excitation function of the output layer is the purelin function; the training data set is input into the BP neural network, and the outputs of the hidden layer neurons are b = [b_1, b_2, …, b_H], where b_h^(n) denotes the output of the h-th hidden layer neuron for the n-th training sample:

b_h^(n) = f_1(α_h - γ_h)

where α_h is the input to the h-th hidden layer neuron,

α_h = Σ_{i=1}^{I} w_ih x_i^(n),

w_ih is the weight between the i-th input layer neuron and the h-th hidden layer neuron, γ_h is the threshold of the h-th hidden layer neuron, and f_1 is the sigmoid function; the outputs of the output layer neurons are ŷ = [ŷ_1, ŷ_2, …, ŷ_J], where ŷ_j^(n) denotes the output of the j-th output layer neuron, computed as:

ŷ_j^(n) = f_2(β_j - χ_j)

where β_j is the input to the j-th output layer neuron,

β_j = Σ_{h=1}^{H} v_hj b_h^(n),

v_hj is the weight between the h-th hidden layer neuron and the j-th output layer neuron, χ_j is the threshold of the j-th output layer neuron, and f_2 is the purelin function.
6. The method for estimating the acoustic signal of the high subsonic flight target based on the uniform linear array of acoustic vectors as claimed in claim 1 or 2, characterized in that:
the step S5 is specifically as follows: compute the mean square error between the output ŷ of the output neurons of the BP neural network and the expected output y, and compute the sum of the mean square errors over all samples, i.e. the training error; the weights and thresholds of each layer are updated with the steepest gradient descent algorithm; when the training error is smaller than the set error threshold, training is considered complete, otherwise the next round of training continues.
CN202111409850.3A 2021-11-24 2021-11-24 High subsonic speed flight target acoustic signal estimation method based on acoustic vector uniform linear array Pending CN114200392A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111409850.3A CN114200392A (en) 2021-11-24 2021-11-24 High subsonic speed flight target acoustic signal estimation method based on acoustic vector uniform linear array


Publications (1)

Publication Number Publication Date
CN114200392A true CN114200392A (en) 2022-03-18

Family

ID=80648951

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111409850.3A Pending CN114200392A (en) 2021-11-24 2021-11-24 High subsonic speed flight target acoustic signal estimation method based on acoustic vector uniform linear array

Country Status (1)

Country Link
CN (1) CN114200392A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination