CN115712867A - Multi-component radar signal modulation identification method - Google Patents


Info

Publication number
CN115712867A
CN115712867A
Authority
CN
China
Prior art keywords
mbconv
radar signal
feature map
component
fused
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211369792.0A
Other languages
Chinese (zh)
Inventor
司伟建
万晨霞
侯长波
邓志安
刘睿智
张春杰
乔玉龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CN202211369792.0A
Publication of CN115712867A
Legal status: Pending

Landscapes

  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a multi-component radar signal modulation identification method, which comprises the steps of: obtaining a radar signal to be identified; inputting the radar signal to be identified into a pre-trained deep convolutional neural network model for identification, and outputting the prediction probability of a label vector; the deep convolutional neural network model is obtained by training on a radar signal data set marked with label vectors, where each label vector represents the signal modulation types contained in a multi-component radar signal; the deep convolutional neural network model comprises a plurality of mobile inverted bottleneck convolution blocks (MBConv), a plurality of fused mobile inverted bottleneck convolution blocks (Fused-MBConv) and a convolutional attention mechanism module; and performing threshold judgment on the prediction probability of the label vector and outputting the multi-component radar signal modulation identification result. The method can identify not only single-component radar signals but also two-component and three-component radar signals, and improves signal recognition accuracy over other methods.

Description

Multi-component radar signal modulation identification method
Technical Field
The invention belongs to the technical field of deep learning and signal processing, relates to a multi-component radar signal modulation identification method, and particularly relates to a multi-component radar signal modulation identification method in a complex electromagnetic environment.
Background
Accurate identification of radar signals can help not only infer the function of the radar, but also improve the accuracy of parameter estimation. With the rapid increase of the number of different radar signals and the increasing complexity and diversity of the radar signal types, a radar detection system often intercepts pulses overlapping in time domain and frequency domain to form a multi-component radar signal. However, most proposed intra-pulse modulation identification methods do not accurately classify multi-component radar signals. Therefore, there is a strong need for an effective multi-component radar signal modulation identification method in the current radar detection system.
Traditional modulation identification methods for multi-component radar signals have been proposed, based on blind source separation, parameterized time-frequency analysis and time-frequency image preprocessing. In recent years, with the rapid development of deep learning, some researchers have proposed radar signal modulation identification methods based on deep convolutional neural networks. However, research on radar signal modulation recognition has mainly focused on single-component or two-component radar signals, recognition performance remains relatively low at low signal-to-noise ratio, and research on modulation recognition of three-component radar signals is almost nonexistent.
Disclosure of Invention
Aiming at the prior art, the technical problem to be solved by the invention is to provide a multi-component radar signal modulation identification method, in which more detailed multi-component radar signal characteristics are extracted by designing a deep convolutional neural network model comprising mobile inverted bottleneck convolution blocks (MBConv), fused mobile inverted bottleneck convolution blocks (Fused-MBConv) and a convolutional attention mechanism module, so that multi-component radar signals can be accurately identified in a complex electromagnetic environment.
In order to solve the technical problem, the invention provides a multi-component radar signal modulation identification method, which comprises the following steps:
acquiring a radar signal to be identified;
inputting the radar signal to be recognized into a pre-trained deep convolutional neural network model for recognition, and outputting the prediction probability of a label vector; the deep convolutional neural network model is obtained by training on a radar signal data set marked with label vectors, and the label vector represents the signal modulation types contained in a multi-component radar signal; the deep convolutional neural network model comprises a plurality of mobile inverted bottleneck convolution blocks (MBConv), a plurality of fused mobile inverted bottleneck convolution blocks (Fused-MBConv) and a convolutional attention mechanism module;
and performing threshold judgment according to the prediction probability of the label vector and outputting a multi-component radar signal modulation identification result.
Further, the deep convolutional neural network model comprises: a first convolution layer, a depth feature extraction network, a second convolution layer, a global average pooling layer, a fully connected layer and a Sigmoid layer; the depth feature extraction network comprises a plurality of fused mobile inverted bottleneck convolution blocks Fused-MBConv, a plurality of mobile inverted bottleneck convolution blocks MBConv and a convolutional attention mechanism module.
Further, training of the deep convolutional neural network model comprises:
acquiring a radar signal data set, and marking a radar signal data set sample by adopting a label vector;
dividing the marked data set into a training set, a verification set and a test set, and sequentially inputting the training set into the first convolution layer and the depth feature extraction network for feature extraction to obtain a depth feature map;
inputting the depth feature map sequentially into the second convolution layer, global average pooling layer, fully connected layer and Sigmoid layer, and outputting the prediction probability of the label vector;
and adjusting the weights of the depth feature extraction network using the verification set until the deviation between the actual recognition result and the target recognition result is within the threshold range, thereby finishing training and obtaining the deep convolutional neural network model.
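The threshold judgment on the Sigmoid outputs can be sketched in a few lines. This is a minimal pure-Python illustration assuming 8 output classes and a 0.5 threshold; the patent does not state the threshold value:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def threshold_decision(logits, thresh=0.5):
    """Map the 8 fully-connected-layer outputs to a multi-hot label vector.

    Each output is squashed independently with a sigmoid; every class
    whose probability exceeds the threshold is declared present, so one,
    two or three modulation types can be reported at once.
    """
    probs = [sigmoid(z) for z in logits]
    return [1 if p > thresh else 0 for p in probs]

# Example: strong evidence for classes 0 and 3, weak elsewhere.
logits = [3.2, -2.1, -4.0, 2.7, -1.5, -3.3, -2.8, -0.9]
print(threshold_decision(logits))  # -> [1, 0, 0, 1, 0, 0, 0, 0]
```

Because each class is thresholded independently rather than via a softmax, the decision naturally accommodates overlapped multi-component signals.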
Further, the depth feature extraction network comprises 2 first Fused-MBConv, 8 second Fused-MBConv, 6 first MBConv, 21 second MBConv and an attention mechanism module, wherein the expansion ratio of the first Fused-MBConv is 1 and the expansion ratio of the second Fused-MBConv is not 1.
Further, the first convolution layer has a kernel size of 3 × 3 and a stride of 2; the first Fused-MBConv has an expansion ratio of 1, a kernel size of 3 × 3 and a stride of 1; the second Fused-MBConv has an expansion ratio of 4, a kernel size of 3 × 3 and a stride of 2; the first MBConv has an expansion ratio of 4, a kernel size of 3 × 3, a stride of 2 and an SE ratio of 0.25; the 21 second MBConv have an expansion ratio of 6, a kernel size of 3 × 3, a stride of 1 and an SE ratio of 0.25; the second convolution layer has a kernel size of 1 × 1 and a stride of 1; the global average pooling layer has a kernel size of 7 × 7 and 1280 channels; the fully connected layer is 8-dimensional.
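As a rough illustration of how the quoted strides downsample the time-frequency image, the standard convolution output-size formula can be applied stage by stage. The 224 × 224 input resolution and padding of 1 are illustrative assumptions, not values stated in the patent:

```python
def conv_out(size, kernel, stride, pad):
    """Output spatial size of a convolution: floor((n + 2p - k) / s) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# Kernel size and stride per stage follow the description above;
# the input size and padding are assumed for illustration.
stages = [
    ("first conv 3x3 /2",  3, 2, 1),
    ("Fused-MBConv x2 /1", 3, 1, 1),
    ("Fused-MBConv x8 /2", 3, 2, 1),
    ("MBConv x6 /2",       3, 2, 1),
    ("MBConv x21 /1",      3, 1, 1),
    ("second conv 1x1 /1", 1, 1, 0),
]

size = 224  # assumed input resolution
for name, k, s, p in stages:
    size = conv_out(size, k, s, p)
    print(f"{name:20s} -> {size} x {size}")
```

With only the strides listed above the feature map stops at 28 × 28, so further downsampling inside the repeated blocks (not enumerated in the text) is implied before the 7 × 7 global average pooling layer.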
Further, the radar signal data set consists of time-frequency images obtained by applying the improved smoothed pseudo Wigner-Ville distribution time-frequency transform to multi-component radar signals, and the data set comprises single-component, two-component and three-component radar signals.
Further, the mobile inverted bottleneck convolution block MBConv comprises: a point-wise convolution layer, a depthwise convolution layer, a batch normalization layer, the activation function Swish, a squeeze-and-excitation module, and dropout.
Further, the fused mobile inverted bottleneck convolution block Fused-MBConv comes in two types, with expansion ratio equal to 1 and expansion ratio not equal to 1. When the expansion ratio is 1, Fused-MBConv comprises a convolution layer, a batch normalization layer, the activation function SiLU, an SE module, a dropout layer and a skip connection; when the expansion ratio is not 1, Fused-MBConv comprises a convolution layer, a batch normalization layer, the activation function SiLU, an SE module, a point-wise convolution, a dropout layer and a skip connection.
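The two Fused-MBConv variants can be summarized as a layer plan. This is a structural sketch only; the function name, tuple encoding and (in, out) channel pairs are illustrative, not from the patent:

```python
def fused_mbconv_plan(in_ch, expansion):
    """Layer plan for one Fused-MBConv block as (op, in_ch, out_ch) tuples.

    expansion == 1: a single 3x3 conv (+BN+SiLU), SE, dropout and skip.
    expansion != 1: a 3x3 expansion conv, SE, then a 1x1 point-wise
    projection back to in_ch, dropout and skip, as described above.
    """
    if expansion == 1:
        return [
            ("conv3x3+BN+SiLU", in_ch, in_ch),
            ("SE", in_ch, in_ch),
            ("dropout+skip", in_ch, in_ch),
        ]
    mid = in_ch * expansion
    return [
        ("conv3x3+BN+SiLU", in_ch, mid),
        ("SE", mid, mid),
        ("conv1x1+BN", mid, in_ch),
        ("dropout+skip", in_ch, in_ch),
    ]

plan = fused_mbconv_plan(24, 4)
# The expansion conv widens 24 -> 96 channels; the projection restores 24,
# so the skip connection can add input and output element-wise.
```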
Further, the sequentially inputting a training set into the first convolution layer and the deep feature extraction network for feature extraction includes:
inputting the first feature map into the first convolution layer to carry out convolution processing to obtain a second feature map;
inputting the second feature map into 2 first Fused-MBConv modules with the expansion ratio equal to 1 to obtain a third feature map;
inputting the third feature map into 8 second Fused-MBConv modules with the expansion ratio not equal to 1 to obtain a fourth feature map;
inputting the fourth feature map into the 6 first MBConv modules to obtain a fifth feature map;
inputting the fifth feature map into the 9 second MBConv modules to obtain a sixth feature map;
inputting the sixth feature map into the 12 second MBConv modules to obtain a seventh feature map;
inputting the seventh feature map, as initial feature map F, into the attention mechanism module: the channel attention module outputs a channel attention map Mc; multiplying Mc by F yields a refined feature map F'; F' is then input into the spatial attention module, which generates a spatial attention map Ms; and multiplying F' by Ms yields the depth feature map.
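The multiplicative refinement F' = Mc ⊗ F can be illustrated on a toy feature map. The sketch below keeps only the average-pooling branch of channel attention and omits the shared MLP and max-pooling branch of the full module; the spatial attention step applies the same kind of element-wise product per spatial position:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def channel_attention(fmap):
    """One weight per channel from global average pooling (simplified)."""
    weights = []
    for ch in fmap:
        avg = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        weights.append(sigmoid(avg))
    return weights

def apply_channel_attention(fmap, mc):
    """F' = Mc (x) F: scale every spatial position of channel c by mc[c]."""
    return [[[v * mc[c] for v in row] for row in ch]
            for c, ch in enumerate(fmap)]

# Two 2x2 channels; the second carries stronger activations.
F = [[[0.0, 0.0], [0.0, 0.0]],
     [[2.0, 2.0], [2.0, 2.0]]]
Mc = channel_attention(F)           # [0.5, sigmoid(2) ~ 0.881]
F_refined = apply_channel_attention(F, Mc)
```

The stronger channel receives a weight closer to 1, so its activations dominate the refined map, which is the "focusing" effect the attention module provides.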
Further, adjusting the weights of the depth feature extraction network comprises: adopting an optimization method with weighted gradients and dynamic learning-rate bounds built on the basis of Adam, in which dynamic bounds on the learning rate are used, the lower and upper bounds being initialized to zero and infinity, respectively, and both converging smoothly to a constant value.
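The dynamic learning-rate bounds can be sketched as a clipping schedule in the style of AdaBound: the lower bound starts at zero, the upper bound is effectively infinite at the first step, and both converge smoothly to a constant final rate. The constants final_lr and gamma are illustrative assumptions, not values from the patent:

```python
def bounded_lr(base_lr, step, final_lr=0.1, gamma=1e-3):
    """Clip a per-step learning rate to dynamic bounds (step counts from 1).

    lower -> 0 as step -> 1 and lower -> final_lr as step -> infinity;
    upper -> infinity as step -> 1 and upper -> final_lr likewise,
    so early training behaves like Adam and late training like SGD
    with a constant rate.
    """
    lower = final_lr * (1.0 - 1.0 / (gamma * step + 1.0))
    upper = final_lr * (1.0 + 1.0 / (gamma * step))
    return min(max(base_lr, lower), upper)

print(bounded_lr(0.001, 1))        # bounds are loose: rate passes through
print(bounded_lr(0.001, 10**6))    # bounds have tightened around final_lr
```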
The invention has the beneficial effects that: compared with the prior art, the constructed multi-component radar signal modulation identification method under a complex electromagnetic environment can identify not only single-component radar signals but also two-component and three-component radar signals, and improves signal identification accuracy over other methods. The invention uses MBConv, Fused-MBConv and an attention mechanism module to extract multi-component radar signal features in more detail, improves the focusing and information expression capability for multi-component radar signals, and jointly optimizes classification accuracy and parameter efficiency, so that recognition performance for multi-component radar signals at low signal-to-noise ratio is improved; the method can be applied to radar signal modulation identification in complex electromagnetic environments.
Drawings
FIG. 1 is a frame diagram of a multi-component radar signal modulation identification method in a complex electromagnetic environment according to the present invention;
FIG. 2 is a diagram of the MBConv structure designed by the present invention;
FIG. 3 is a structural diagram of Fused-MBConv designed by the present invention;
FIG. 4 is a structural diagram of the attention mechanism designed by the present invention;
FIGS. 5 (a) to 5 (h) are graphs showing the results of simulation experiments according to the present invention;
FIG. 6 is a graph comparing the results of simulation experiments of the present invention with other methods.
Detailed Description
The invention is further described with reference to the drawings and examples.
In order to achieve the above purpose, the invention adopts the following technical scheme, comprising the following steps:
step 1, manufacturing a multi-component radar signal data set, wherein the manufactured data set comprises a training set, a verification set and a test set, and the data set comprises single-component, double-component and three-component radar signals;
step 2, constructing a multi-component radar signal modulation identification framework based on a deep convolutional neural network;
step 3, designing MBConv and Fused-MBConv in order to extract more detailed multi-component radar signal characteristics;
step 4, integrating an attention mechanism into the constructed deep convolution neural network model in order to improve the focusing and information expression capability of the multi-component radar signal;
step 5, in order to optimize the proposed deep convolutional neural network model, designing an optimization algorithm with weighted gradients and dynamic learning-rate bounds on the basis of Adam;
and 6, inputting the manufactured training set and verification set into the constructed deep convolutional neural network model to train and verify the network, and finally inputting the signals to be recognized into the trained network to realize the recognition of the multi-component radar signals in the complex electromagnetic environment.
Step 1, the manufacturing of the multi-component radar signal data set comprises the following processing:
(1) The radar signals used mainly include 2FSK, 4FSK, BPSK, EQFM, FRANK, LFM, NS and SFM. A radar signal with k components can be expressed as:

x(t) = Σ_{i=1}^{k} A_i · exp{j[2π f_i t + φ_i(t) + θ_i]} + n(t),  0 ≤ t ≤ T_i

where n(t) represents additive noise; k represents the number of radar signal components; and A_i, T_i, f_i, θ_i and φ_i(t) represent the amplitude, pulse width, carrier frequency, initial phase and phase function, respectively, of the ith radar signal component.
(2) In a practical electromagnetic environment, the signals intercepted by a radar system typically include single-component, two-component and three-component radar signals. A two-component radar signal is formed by randomly overlapping two single-component radar signals, and a three-component radar signal by randomly overlapping three single-component radar signals. Therefore, k is set to 1, 2 and 3, respectively. Assuming that the received signal is a random overlap of single-component signals, a sample data set D = {(x_i, t_i) | 1 ≤ i ≤ N} contains N samples covering 8 modulation types; the ith sample is denoted x_i, and t_i = [t_i1, t_i2, …, t_i8] denotes the true label vector of x_i. If the jth radar signal type appears in x_i, then t_ij = 1; otherwise t_ij = 0. For example, a sample x with label vector t = [1, 0, 0, 1, 0, 0, 0, 1] indicates that radar signals are present at positions 1, 4 and 8.
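The label-vector construction above can be written out directly. The position order is assumed to follow the modulation list in step 1 (2FSK, 4FSK, BPSK, EQFM, FRANK, LFM, NS, SFM):

```python
# Assumed ordering: position j corresponds to the jth type in the list above.
MOD_TYPES = ["2FSK", "4FSK", "BPSK", "EQFM", "FRANK", "LFM", "NS", "SFM"]

def make_label(components):
    """Multi-hot label vector t for a (possibly overlapped) sample:
    t[j] = 1 iff modulation type j is present in the sample."""
    return [1 if m in components else 0 for m in MOD_TYPES]

# A three-component sample containing the 1st, 4th and 8th types:
label = make_label({"2FSK", "EQFM", "SFM"})
print(label)  # -> [1, 0, 0, 1, 0, 0, 0, 1], matching the example above
```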
(3) The improved smoothed pseudo Wigner-Ville distribution time-frequency transform is adopted to convert the multi-component radar signals into time-frequency images, from which a multi-component radar signal data set comprising a training set, a verification set and a test set is made.
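A minimal discrete pseudo Wigner-Ville distribution is sketched below to show the bilinear x(t+m)·x*(t−m) kernel at the heart of the transform. It uses only a rectangular lag window; the improved smoothed variant used in the patent additionally smooths in time and frequency, so this is an illustration rather than the patent's transform:

```python
import cmath

def pseudo_wvd(x, half=8):
    """Pseudo Wigner-Ville distribution with a rectangular lag window.

    W[n][k] = | sum_m x[n+m] * conj(x[n-m]) * e^{-j 2 pi k m / K} |,
    with K = 2*half.  Bin k corresponds to normalized frequency
    k / (2K), because the bilinear product doubles the frequency.
    """
    N, K = len(x), 2 * half
    tfr = []
    for n in range(N):
        row = []
        for k in range(K):
            acc = 0j
            for m in range(-half + 1, half):
                if 0 <= n + m < N and 0 <= n - m < N:
                    acc += (x[n + m] * x[n - m].conjugate()
                            * cmath.exp(-2j * cmath.pi * k * m / K))
            row.append(abs(acc))
        tfr.append(row)
    return tfr

# A tone at normalized frequency 0.125 concentrates in bin k = 0.125 * 2K = 4.
tone = [cmath.exp(2j * cmath.pi * 0.125 * n) for n in range(32)]
tfr = pseudo_wvd(tone)
```

For an overlapped multi-component signal, each component traces its own ridge in such an image, which is what lets a single time-frequency picture carry one, two or three modulation types at once.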
The multi-component radar signal modulation identification framework of the deep convolutional neural network model constructed in the step 2 comprises the following processing steps:
(1) The constructed deep convolutional neural network model comprises: a first convolution layer with kernel size 3 × 3 and stride 2; two Fused-MBConv with expansion ratio 1, kernel size 3 × 3 and stride 1; 8 Fused-MBConv with expansion ratio 4, kernel size 3 × 3 and stride 2; 6 MBConv with expansion ratio 4, kernel size 3 × 3, stride 2 and SE ratio 0.25; 9 MBConv with expansion ratio 6, kernel size 3 × 3, stride 1 and SE ratio 0.25; 12 MBConv with expansion ratio 6, kernel size 3 × 3, stride 1 and SE ratio 0.25; one attention mechanism module; a second convolution layer with kernel size 1 × 1 and stride 1; a global average pooling layer with kernel size 7 × 7 and 1280 channels; an 8-dimensional fully connected layer; and a Sigmoid layer.
The multi-component radar signal modulation identification of the constructed deep convolutional neural network model comprises the following steps:
obtaining time-frequency image samples: converting the multi-component radar signals into time-frequency images by the improved smoothed pseudo Wigner-Ville distribution time-frequency transform, preprocessing the time-frequency image samples, applying multi-label marking to them, and finally dividing them into a training set, a verification set and a test set;
inputting the training set into the deep convolutional neural network model for feature extraction to obtain a deep feature map of the multi-component radar signal;
inputting the depth feature map sequentially into the global average pooling layer, fully connected layer and Sigmoid layer, where the output is the prediction probability of each label, and outputting the final classification result after threshold judgment;
inputting the verification set into the trained deep convolutional neural network model to verify the trained network, and adjusting the weights until the deviation between the actual classification result and the target classification result is within the allowable range, at which point training is complete;
the MBConv and Fused-MBConv structures constructed in step 3 include the following processes:
(1) The constructed MBConv structure mainly comprises a point-wise convolution layer, a depthwise convolution layer, a batch normalization (BN) layer, the activation function Swish, a squeeze-and-excitation (SE) module and dropout. The constructed Fused-MBConv structure has two variants: expansion ratio equal to 1 and expansion ratio not equal to 1. When the expansion ratio is 1, Fused-MBConv comprises a convolution layer, a batch normalization layer, the activation function SiLU, an SE module, a dropout layer and a skip connection, and contains only one convolution layer, with kernel size 3 × 3. When the expansion ratio is not 1, Fused-MBConv comprises a convolution layer, a batch normalization layer, the activation function SiLU, an SE module, a point-wise convolution, a dropout layer and a skip connection; its convolution layers comprise one with kernel size 3 × 3 and one with kernel size 1 × 1. Inputting the training set into the deep convolutional neural network model for feature extraction to obtain the depth feature map comprises the following steps:
inputting the first feature map into the first convolution layer to carry out convolution processing to obtain a second feature map; wherein the first feature map is a feature map of the training set output by a convolution module located before a Fused-MBConv module;
inputting the second feature map into two Fused-MBConv modules with the expansion ratio equal to 1 to obtain a third feature map;
inputting the third feature map into the eight Fused-MBConv modules with the expansion ratios not equal to 1 to obtain a fourth feature map;
inputting the fourth feature map into the six MBConv modules to obtain a fifth feature map;
and inputting the fifth feature map into the nine MBConv modules to obtain a sixth feature map.
And inputting the sixth feature map into the 12 MBConv modules to obtain a seventh feature map.
The attention mechanism module constructed in step 4 mainly comprises a channel attention module and a spatial attention module. When the seventh feature map (initial feature map F) is input to the attention mechanism module, F is first processed by the channel attention module, which outputs a channel attention map Mc; multiplying Mc by F yields a refined feature map F'; F' is then input into the spatial attention module, which generates a spatial attention map Ms; finally, multiplying F' by Ms produces the final eighth feature map (feature map F''). The eighth feature map is then input into a convolution layer with kernel size 1 × 1 and a pooling layer.
Step 5, in order to optimize the proposed deep convolutional neural network model, a dynamic-bound optimization algorithm with weighted gradients and learning-rate bounds is designed on the basis of Adam: dynamic bounds on the learning rate are used, with the lower and upper bounds initialized to zero and infinity, respectively, and both converging smoothly to a constant value.
And 6, inputting the manufactured training set and verification set into the constructed deep convolutional neural network model to train and verify the network, extracting more detailed characteristics of the multi-component radar signals, and finally inputting the multi-component radar signal image to be recognized to realize modulation recognition of the multi-component radar signals in the complex electromagnetic environment.
The invention is further explained below with reference to the accompanying drawings.
The invention comprises the following steps:
step 1, adopting improved smooth pseudo Wigner-Ville distribution time-frequency conversion to manufacture a multi-component radar signal data set;
step 2, constructing a multi-component radar signal modulation identification framework based on a deep convolutional neural network, and extracting more detailed multi-component radar signal characteristics;
and 3, inputting the manufactured training set and verification set into the constructed deep convolutional neural network model to train and verify the network, and finally inputting the multi-component radar signal to be recognized to realize modulation recognition of the multi-component radar signal in the complex electromagnetic environment.
Step 1, adopting improved smooth pseudo Wigner-Ville distribution time-frequency conversion to convert multi-component radar signals into a time-frequency graph and manufacturing a multi-component radar signal data set, wherein the manufactured data set comprises a training set, a verification set and a test set, and the data set comprises single-component, double-component and three-component radar signals.
Step 2 comprises the following processing:
(1) In order to extract more detailed multi-component radar signal characteristics, MBConv and Fused-MBConv are designed;
(2) In order to improve the focusing and information expression capacity of the multi-component radar signals, an attention mechanism is integrated into the constructed deep convolutional neural network model;
(3) A deep convolutional neural network model for multi-component radar signal identification in a complex electromagnetic environment is constructed.
And 3, inputting the manufactured training set and verification set into the constructed deep convolutional neural network model to train and verify the network, and finally inputting the test set to realize modulation recognition of the multi-component radar signal in the complex electromagnetic environment.
The framework of the multi-component radar signal modulation identification method under a complex electromagnetic environment is shown in FIG. 1. The proposed model comprises: a first convolution layer with kernel size 3 × 3 and stride 2; two Fused-MBConv with expansion ratio 1, kernel size 3 × 3 and stride 1; 8 Fused-MBConv with expansion ratio 4, kernel size 3 × 3 and stride 2; 6 MBConv with expansion ratio 4, kernel size 3 × 3, stride 2 and SE ratio 0.25; 9 MBConv with expansion ratio 6, kernel size 3 × 3, stride 1 and SE ratio 0.25; 12 MBConv with expansion ratio 6, kernel size 3 × 3, stride 1 and SE ratio 0.25; one attention mechanism module; a second convolution layer with kernel size 1 × 1 and stride 1; a global average pooling layer with kernel size 7 × 7 and 1280 channels; an 8-dimensional fully connected layer; and a Sigmoid layer.
FIG. 2 shows the structure of the MBConv designed by the present invention. As can be seen from FIG. 2, the MBConv block consists of point-wise convolution, depthwise convolution, batch normalization, the activation function Swish, an SE module and dropout. The convolution layers effectively preserve the background information of the time-frequency image and realize nonlinear interaction across channel information; the activation function Swish adds nonlinearity to the neural network model; batch normalization and dropout address the problems of gradient vanishing and overfitting, respectively. On receiving the time-frequency feature map, MBConv first applies a convolution layer with kernel size 1 × 1, changing the channel dimension of the input feature map according to the expansion ratio. Then a depthwise convolution layer with kernel size 3 × 3 is applied, followed by an SE module to improve the expression capability of the model. The feature map is then returned to its original channel dimension using a convolution layer with kernel size 1 × 1. Finally, dropout is applied and a skip connection is performed.
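The channel-dimension flow just described can be tabulated. The SE squeeze width is computed here as a fraction of the expanded channels, which is an assumption; the patent only states SE = 0.25:

```python
def mbconv_channels(in_ch, expansion=6, se_ratio=0.25):
    """Channel bookkeeping for one MBConv block, following the sequence
    described above (structural sketch; spatial dimensions omitted)."""
    expanded = in_ch * expansion                  # 1x1 point-wise expansion
    squeezed = max(1, int(expanded * se_ratio))   # SE bottleneck width
    return {
        "expand_1x1":    (in_ch, expanded),
        "depthwise_3x3": (expanded, expanded),
        "se_squeeze":    (expanded, squeezed),
        "se_excite":     (squeezed, expanded),
        "project_1x1":   (expanded, in_ch),       # restores dims for the skip
    }

layers = mbconv_channels(32, expansion=6)
# 32 -> 192 -> 192 -> (squeeze to 48, excite back) -> 32
```

Because the projection restores the input channel count, the skip connection can add the block input to its output element-wise.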
FIG. 3 shows the structure of the Fused-MBConv designed by the present invention. As can be seen from FIG. 3, the Fused-MBConv structure has two variants: expansion ratio equal to 1 and expansion ratio not equal to 1. When the expansion ratio is equal to 1, Fused-MBConv consists of a convolution layer, a batch normalization layer, the activation function SiLU, an SE module and dropout; when the expansion ratio is not equal to 1, Fused-MBConv consists of a convolution layer, a batch normalization layer, the activation function SiLU, an SE module, a point-wise convolution and dropout.
FIG. 4 is a structural diagram of the attention mechanism designed by the invention. The attention mechanism mainly comprises a channel attention module and a spatial attention module. When the feature map F is input to the attention mechanism, F is first processed by the channel attention module, which outputs a channel attention map Mc. Then, multiplying Mc by F yields a refined feature map F'. Next, F' is input into the spatial attention module, which generates a spatial attention map Ms. Finally, F' is multiplied by Ms to produce the final feature map F''.
FIGS. 5 (a) to 5 (h) show simulation results of radar signal classification by the network model constructed by the present invention. To evaluate the classification accuracy for multi-component radar signals, FIGS. 5 (a) to 5 (h) show how the classification accuracy of single-component, two-component and three-component radar signals varies. As can be seen from FIGS. 5 (a) to 5 (h), for every modulation type the classification accuracy of single-component radar signals is higher than that of two-component and three-component radar signals, especially at lower signal-to-noise ratio. When the signal-to-noise ratio is -10 dB, the classification accuracy of EQFM for single-component, two-component and three-component radar signals reaches 99.58%, 92.14% and 67.74%, respectively, and that of SFM reaches 97.50%, 81.91% and 65.95%, respectively. This is because one or two of the components of a three-component radar signal are easily mistaken for noise, especially at low signal-to-noise ratio, and three radar signal types overlapping in the same time-frequency image are difficult to classify correctly. Furthermore, the proposed model shows much better recognition performance on single-component radar signals than on three-component radar signals, especially for BPSK, EQFM, FRANK and SFM. This work provides a good experimental basis for further improving multi-component radar signal recognition performance in modern electronic warfare systems.
FIG. 6 compares the simulation results of the present invention with other methods. To evaluate the recognition performance of the proposed method, the average recognition accuracy on two-component radar signals was compared with CNN-Softmax and CNN-DQLN (Z. Qu, C. Hou, C. Hou, and W. Wang, "Radar Signal Intra-Pulse Modulation Recognition Based on Convolutional Neural Network and Deep Q-Learning Network," IEEE Access, vol. 8, pp. 49125-49136, Mar. 2020). As can be seen from FIG. 6, when the SNR is -10 dB, the classification accuracy of the proposed HEDCNN reaches 41.25%, compared with 37.37% for CNN-DQLN and 26.49% for CNN-Softmax. At -8 dB, the recognition accuracy of HEDCNN is 76.84%, while CNN-DQLN and CNN-Softmax reach 74.74% and 57.02%, respectively. Furthermore, at 4 dB the accuracy of HEDCNN, CNN-DQLN and CNN-Softmax reaches 100%, 98.77% and 95.44%, respectively. Therefore, the proposed HEDCNN has better recognition performance than CNN-DQLN and CNN-Softmax, especially at lower signal-to-noise ratios.

Claims (10)

1. A multi-component radar signal modulation identification method is characterized by comprising the following steps:
acquiring a radar signal to be identified;
inputting the radar signal to be identified into a pre-trained deep convolutional neural network model for identification, and outputting the prediction probability of a label vector; the deep convolutional neural network model is obtained by training on a radar signal data set marked with label vectors, and the label vector represents the signal modulation types contained in a multi-component radar signal; the deep convolutional neural network model comprises a plurality of mobile inverted bottleneck convolution blocks (MBConv), a plurality of fused mobile inverted bottleneck convolution blocks (Fused-MBConv) and a convolutional attention mechanism module;
and performing threshold judgment according to the prediction probability of the label vector and outputting a multi-component radar signal modulation identification result.
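As a hedged illustration of the threshold judgment in claim 1, the sketch below turns the Sigmoid layer's label-vector probabilities into the set of detected modulation types. The 0.5 threshold and the 8-class label order are assumptions for illustration only; the patent states neither.

```python
import numpy as np

# Hypothetical 8-class label order; the patent only says the label vector
# is 8-dimensional, not which modulation maps to which index.
LABELS = ["BPSK", "LFM", "EQFM", "SFM", "Frank", "P1", "P2", "P4"]

def decide_modulations(probs, threshold=0.5):
    """Multi-label threshold judgment: every class whose predicted
    probability exceeds the threshold is reported as present."""
    probs = np.asarray(probs, dtype=float)
    return [LABELS[i] for i in range(len(LABELS)) if probs[i] > threshold]

# A two-component signal: BPSK + SFM predicted with high probability.
print(decide_modulations([0.97, 0.1, 0.2, 0.88, 0.05, 0.1, 0.0, 0.3]))
```

Because each label is thresholded independently, the same rule covers single-component, two-component and three-component signals without changing the network head.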
2. The method for multi-component radar signal modulation identification according to claim 1, wherein: the deep convolutional neural network model includes: a first convolution layer, a depth feature extraction network, a second convolution layer, a global average pooling layer, a fully connected layer and a Sigmoid layer; the depth feature extraction network comprises a plurality of fused mobile inverted bottleneck convolution blocks Fused-MBConv, a plurality of mobile inverted bottleneck convolution blocks MBConv and a convolutional attention mechanism module.
3. The method of claim 2, wherein the training of the deep convolutional neural network model comprises the following steps:
acquiring a radar signal data set, and marking the radar signal data set samples with label vectors;
dividing the marked data set into a training set, a verification set and a test set, and sequentially inputting the training set into the first convolution layer and the depth feature extraction network for feature extraction to obtain a depth feature map;
inputting the depth feature map into the second convolution layer, the global average pooling layer, the fully connected layer and the Sigmoid layer in sequence, and outputting the prediction probability of the label vector;
and adjusting the weights of the depth feature extraction network by using the verification set until the deviation between the actual recognition result and the target recognition result is within the threshold range, and finishing training to obtain the deep convolutional neural network model.
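The training in claim 3 ends with a Sigmoid layer emitting per-label probabilities, which pairs naturally with a binary cross-entropy loss over a multi-hot target vector. The choice of BCE is an assumption (the patent does not name the loss function); a minimal numpy sketch:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.asarray(z, dtype=float)))

def bce_loss(logits, targets, eps=1e-12):
    """Mean binary cross-entropy between sigmoid(logits) and the
    multi-hot label vector (1 = modulation present in the signal)."""
    p = np.clip(sigmoid(logits), eps, 1 - eps)
    t = np.asarray(targets, dtype=float)
    return float(np.mean(-(t * np.log(p) + (1 - t) * np.log(1 - p))))

# Two-component signal with labels 0 and 3 present: confident correct
# logits give a small loss, a flipped prediction a much larger one.
good = bce_loss([5, -5, -5, 5, -5, -5, -5, -5], [1, 0, 0, 1, 0, 0, 0, 0])
bad = bce_loss([-5, 5, -5, 5, -5, -5, -5, -5], [1, 0, 0, 1, 0, 0, 0, 0])
print(good < bad)
```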
4. A method of identifying a multi-component radar signal modulation as claimed in claim 3, wherein: the depth feature extraction network comprises 2 first Fused-MBConv, 8 second Fused-MBConv, 6 first MBConv, 21 second MBConv and an attention mechanism module, wherein the expansion rate of the first Fused-MBConv is 1, and the expansion rate of the second Fused-MBConv is not 1.
5. The method of claim 4, wherein: the first convolution layer has a kernel size of 3 × 3 and a step size of 2; the first Fused-MBConv has an expansion rate of 1, a kernel size of 3 × 3 and a step size of 1; the second Fused-MBConv has an expansion rate of 4, a kernel size of 3 × 3 and a step size of 2; the first MBConv has an expansion rate of 4, a kernel size of 3 × 3, a step size of 2 and an SE ratio of 0.25; the 21 second MBConv have an expansion rate of 6, a kernel size of 3 × 3, a step size of 1 and an SE ratio of 0.25; the second convolution layer has a kernel size of 1 × 1 and a step size of 1; the global average pooling layer has a size of 7 × 7 and 1280 channels; the fully connected layer is 8-dimensional.
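For readability, claim 5's hyperparameters can be summarized as a Python stage table; the dict keys and stage names below are illustrative shorthand, not from the patent:

```python
# Stage configuration per claim 5; 'n' = number of blocks, 'e' = expansion
# rate, 'k' = kernel size, 's' = step size, 'se' = squeeze-excitation ratio.
STAGES = [
    {"block": "Conv",         "n": 1,  "k": 3, "s": 2},
    {"block": "Fused-MBConv", "n": 2,  "e": 1, "k": 3, "s": 1},
    {"block": "Fused-MBConv", "n": 8,  "e": 4, "k": 3, "s": 2},
    {"block": "MBConv",       "n": 6,  "e": 4, "k": 3, "s": 2, "se": 0.25},
    {"block": "MBConv",       "n": 21, "e": 6, "k": 3, "s": 1, "se": 0.25},
    {"block": "Conv",         "n": 1,  "k": 1, "s": 1},
]

# Claim 4's block counts (2 + 8 + 6 + 21) give 37 inverted-bottleneck blocks.
total_blocks = sum(st["n"] for st in STAGES if "MBConv" in st["block"])
print(total_blocks)
```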
6. A method of identifying a multi-component radar signal modulation as claimed in claim 3, wherein: the radar signal data set consists of time-frequency diagrams obtained by transforming the multi-component radar signals with an improved smoothed pseudo Wigner-Ville distribution time-frequency transform, and the data set contains single-component, two-component and three-component radar signals.
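Claim 6 builds the data set from smoothed pseudo Wigner-Ville distribution (SPWVD) time-frequency images. As a rough illustration of the underlying transform only, without the smoothing windows or other improvements the patent relies on, a minimal discrete Wigner-Ville sketch computes W[n, k] = Σ_m x[n+m]·x*[n−m]·e^(−j2πkm/N); note its frequency axis runs at twice the signal frequency:

```python
import numpy as np

def wvd(x):
    """Minimal discrete Wigner-Ville distribution of an analytic signal.
    No time/frequency smoothing is applied, so cross-terms between signal
    components are NOT suppressed as they are in the SPWVD of the patent."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        r = np.zeros(N, dtype=complex)
        half = min(n, N - 1 - n)            # largest valid lag at time n
        for m in range(-half, half + 1):
            r[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.fft.fft(r).real           # frequency slice at time n
    return W

# A pure tone at normalized frequency 8/64 concentrates its energy at
# frequency bin 16, since the WVD frequency axis is scaled by 2.
N, f0 = 64, 8
tone = np.exp(2j * np.pi * f0 * np.arange(N) / N)
print(int(np.argmax(wvd(tone).sum(axis=0))))
```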
7. The method for multi-component radar signal modulation identification according to claim 1, wherein: the mobile inverted bottleneck convolution block MBConv comprises: a point-by-point convolution layer, a depth-wise convolution layer, a batch normalization layer, the activation function Swish, a squeeze-and-excitation module and dropout.
8. The method for multi-component radar signal modulation identification according to claim 1, wherein: the fused mobile inverted bottleneck convolution block Fused-MBConv comprises two types, with expansion ratio equal to 1 and expansion ratio not equal to 1; when the expansion ratio is 1, the Fused-MBConv comprises a convolution layer, a batch normalization layer, the activation function SiLU, an SE module, a dropout layer and a skip connection structure; when the expansion ratio is not 1, the Fused-MBConv comprises a convolution layer, a batch normalization layer, the activation function SiLU, an SE module, a point-by-point convolution, a dropout layer and a skip connection structure.
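Both block types in claims 7 and 8 contain a squeeze-and-excitation (SE) module, with ratio 0.25 per claim 5. A minimal numpy sketch of SE channel recalibration, with random matrices standing in for the learned MLP weights:

```python
import numpy as np

def squeeze_excite(feat, w1, w2):
    """SE module: global-average-pool each channel ('squeeze'), pass
    through a bottleneck MLP ('excite'), then rescale the channels."""
    s = feat.mean(axis=(1, 2))                 # squeeze: (C,)
    z = np.maximum(w1 @ s, 0.0)                # reduce to C*ratio, ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ z)))     # expand back, sigmoid gate
    return feat * gate[:, None, None]          # channel-wise rescaling

rng = np.random.default_rng(0)
C, ratio = 16, 0.25                            # SE ratio 0.25 per claim 5
feat = rng.standard_normal((C, 7, 7))
w1 = rng.standard_normal((int(C * ratio), C))  # stand-in learned weights
w2 = rng.standard_normal((C, int(C * ratio)))
out = squeeze_excite(feat, w1, w2)
print(out.shape)  # (16, 7, 7): shape preserved, channels rescaled
```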
9. The method for multi-component radar signal modulation identification according to claim 4, wherein: the sequentially inputting the training set into the first convolution layer and the depth feature extraction network for feature extraction comprises:
inputting the first feature map into the first convolution layer to carry out convolution processing to obtain a second feature map;
inputting the second feature map into 2 first Fused-MBConv modules with the expansion ratio equal to 1 to obtain a third feature map;
inputting the third feature map into 8 second Fused-MBConv modules with the expansion ratio not equal to 1 to obtain a fourth feature map;
inputting the fourth feature map into the 6 first MBConv modules to obtain a fifth feature map;
inputting the fifth feature map into the 9 second MBConv modules to obtain a sixth feature map;
inputting the sixth feature map into the 12 second MBConv modules to obtain a seventh feature map;
inputting the seventh feature map as an initial feature map F into the attention mechanism module; outputting a channel attention feature map Mc after processing by the channel attention module; multiplying Mc and F to obtain a refined feature map F'; inputting F' into the spatial attention module of the attention mechanism module to generate a spatial attention feature map Ms; and multiplying F' by Ms to obtain the depth feature map.
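The attention flow of claim 9 is F → Mc·F = F' → Ms·F'. A simplified numpy sketch of that data flow; the shared MLP and 7 × 7 convolution of the full convolutional block attention module are deliberately omitted, so this shows only the refine-then-refine structure, not learned attention:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cbam_flow(F):
    """Claim 9's flow: channel attention Mc refines F into F', then
    spatial attention Ms refines F' into the depth feature map."""
    # Channel attention: pool over the spatial dims, gate each channel.
    mc = sigmoid(F.mean(axis=(1, 2)) + F.max(axis=(1, 2)))   # (C,)
    F_ref = F * mc[:, None, None]                            # F' = Mc * F
    # Spatial attention: pool over the channel dim, gate each position.
    ms = sigmoid(F_ref.mean(axis=0) + F_ref.max(axis=0))     # (H, W)
    return F_ref * ms[None, :, :]                            # Ms * F'

F = np.random.default_rng(1).standard_normal((8, 4, 4))
out = cbam_flow(F)
print(out.shape)  # (8, 4, 4): attention preserves the feature map shape
```

Since both gates lie in (0, 1), the module can only attenuate features, never amplify them; what survives is what both attention stages agree is informative.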
10. A method of identifying a modulation of a multi-component radar signal as claimed in claim 3, characterized in that: the adjusting the weights of the depth feature extraction network comprises: adjusting the weights of the depth feature extraction network by a weighted-gradient method with dynamic learning-rate bound optimization built on Adam, wherein the lower and upper bounds of the learning rate are initialized to zero and infinity respectively, and both converge smoothly to a constant value.
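The optimizer in claim 10 matches the dynamic learning-rate bounds of AdaBound-style methods: the lower bound starts at zero, the upper at infinity, and both converge smoothly to a constant final rate. The bound schedule below is one common choice, assumed for illustration; the patent does not give the formulas:

```python
import numpy as np

def clip_lr(step_size, t, final_lr=0.1, gamma=1e-3):
    """Clip a per-parameter adaptive step size into dynamic bounds that
    start at [0, inf) at t -> 0 and converge smoothly toward final_lr."""
    lower = final_lr * (1.0 - 1.0 / (gamma * t + 1.0))
    upper = final_lr * (1.0 + 1.0 / (gamma * t))
    return float(np.clip(step_size, lower, upper))

# Early in training the bounds are loose, so Adam's adaptive step passes
# through unchanged; late in training both bounds squeeze toward final_lr,
# so behavior transitions smoothly from Adam toward SGD-like steps.
early = clip_lr(0.5, t=1)          # upper bound is ~1000x final_lr
late = clip_lr(0.5, t=1_000_000)   # bounds have closed in on final_lr
print(early, late)
```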
CN202211369792.0A 2022-11-03 2022-11-03 Multi-component radar signal modulation identification method Pending CN115712867A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211369792.0A CN115712867A (en) 2022-11-03 2022-11-03 Multi-component radar signal modulation identification method


Publications (1)

Publication Number Publication Date
CN115712867A true CN115712867A (en) 2023-02-24

Family

ID=85232075

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211369792.0A Pending CN115712867A (en) 2022-11-03 2022-11-03 Multi-component radar signal modulation identification method

Country Status (1)

Country Link
CN (1) CN115712867A (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110532932A (en) * 2019-08-26 2019-12-03 哈尔滨工程大学 A kind of multi -components radar emitter signal intra-pulse modulation mode recognition methods
CN112149061A (en) * 2020-09-25 2020-12-29 电子科技大学 Multi-class average maximization true and false target feature extraction method
CN113376586A (en) * 2021-06-03 2021-09-10 哈尔滨工程大学 Method for constructing classification model of double-component radar signals
CN114118142A (en) * 2021-11-05 2022-03-01 西安晟昕科技发展有限公司 Method for identifying radar intra-pulse modulation type
CN114742091A (en) * 2021-11-26 2022-07-12 中山大学 Method, system and medium for identifying radar individual radiation based on convolution block attention

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MINGXING TAN et al.: "EfficientNetV2: Smaller Models and Faster Training", arXiv, pages 1-11 *
HOU Chenfan: "Research on intra-pulse modulation recognition methods for multi-component radar signals", China Master's Theses Full-text Database, Information Science and Technology, pages 7-44 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116186593A (en) * 2023-03-10 2023-05-30 山东省人工智能研究院 Electrocardiosignal detection method based on separable convolution and attention mechanism
CN116186593B (en) * 2023-03-10 2023-10-03 山东省人工智能研究院 Electrocardiosignal detection method based on separable convolution and attention mechanism

Similar Documents

Publication Publication Date Title
CN111783558A (en) Satellite navigation interference signal type intelligent identification method and system
CN110532932B (en) Method for identifying multi-component radar signal intra-pulse modulation mode
CN110598530A (en) Small sample radio signal enhanced identification method based on ACGAN
CN114564982B (en) Automatic identification method for radar signal modulation type
Wei et al. PRI modulation recognition based on squeeze-and-excitation networks
CN111050315B (en) Wireless transmitter identification method based on multi-core two-way network
Wei et al. Automatic modulation recognition for radar signals via multi-branch ACSE networks
CN111582236A (en) LPI radar signal classification method based on dense convolutional neural network
CN115712867A (en) Multi-component radar signal modulation identification method
CN114943245A (en) Automatic modulation recognition method and device based on data enhancement and feature embedding
CN114021458B (en) Small sample radar radiation source signal identification method based on parallel prototype network
CN114980122A (en) Small sample radio frequency fingerprint intelligent identification system and method
CN114492744A (en) Method for generating ground-sea clutter spectrum data sample based on confrontation generation network
CN111310680B (en) Radiation source individual identification method based on deep learning
CN115327544B (en) Little-sample space target ISAR defocus compensation method based on self-supervision learning
CN113343924B (en) Modulation signal identification method based on cyclic spectrum characteristics and generation countermeasure network
CN116680608A (en) Signal modulation identification method based on complex graph convolutional neural network
CN116430317A (en) Radiation source modulation pattern and individual identification method and system
CN116243248A (en) Multi-component interference signal identification method based on multi-label classification network
Pan et al. Specific radar emitter identification using 1D-CBAM-ResNet
CN115329821A (en) Ship noise identification method based on pairing coding network and comparison learning
CN113030849A (en) Near-field source positioning method based on self-encoder and parallel network
CN112434716A (en) Underwater target data amplification method and system based on conditional adversarial neural network
CN117992760B (en) Electromagnetic environment monitoring task planning method based on cognitive map
Huynh-The et al. Racomnet: High-performance deep network for waveform recognition in coexistence radar-communication systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20230224