CN114019467A - Radar signal identification and positioning method based on MobileNet model transfer learning - Google Patents

Radar signal identification and positioning method based on MobileNet model transfer learning

Info

Publication number
CN114019467A
Authority
CN
China
Prior art keywords
convolution
model
time
radar signal
frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111241399.9A
Other languages
Chinese (zh)
Other versions
CN114019467B (en)
Inventor
司伟建
骆家冀
邓志安
张春杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CN202111241399.9A priority Critical patent/CN114019467B/en
Publication of CN114019467A publication Critical patent/CN114019467A/en
Application granted granted Critical
Publication of CN114019467B publication Critical patent/CN114019467B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/418Theoretical aspects

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention belongs to the technical field of radar signal modulation mode identification, and particularly relates to a radar signal identification and positioning method based on MobileNet model transfer learning. The method is based on MobileNet model transfer learning and gradient-weighted class activation mapping: the network model is built with depth separable convolutions, which effectively reduces the model parameters and improves the computational efficiency of the model; during training, a pre-trained model is loaded for transfer learning, which improves the convergence speed and generalization performance of the model; meanwhile, the prediction result of the network model is visualized with the gradient-weighted class activation mapping method, which improves the interpretability and transparency of the deep learning model.

Description

Radar signal identification and positioning method based on MobileNet model transfer learning
Technical Field
The invention belongs to the technical field of radar signal modulation mode identification, and particularly relates to a radar signal identification and positioning method based on MobileNet model transfer learning.
Background
The traditional radar signal identification algorithm classifies signals using manually extracted features, such as high-order cumulants, cyclostationary features, distribution distances, probability distances, spectral correlation, autocorrelation functions and time-frequency image features. These features work well for certain radar signals, but such methods are computationally complex, expensive to develop, and lack flexibility and universality. In recent years, with the continuous improvement of computer hardware, deep learning has achieved remarkable results in computer vision and other fields thanks to its excellent performance. At the same time, a large number of research results applying deep learning to radar radiation source signal identification have emerged.
Typically, convolutional neural networks need to be designed from scratch and trained on a corresponding training data set to achieve optimal performance. This approach not only requires sufficient hardware resources, but also a large amount of time and enough annotated data to train the model to its optimal state. Transfer learning adapts a model trained on one problem to a new problem through simple adjustment, so that the model can quickly and efficiently converge on a small-scale data set. A pre-trained model has good generalization capability and can extract coarse characteristics of the target object such as shape, texture and edges. Therefore, the algorithm adopts the idea of transfer learning, and realizes the identification of the radar signal modulation mode by reusing part of an already trained network.
Although current convolutional neural networks have achieved unprecedented breakthroughs in many areas, deep learning models are often viewed as black boxes because they lack proper interpretability and their internal working principles cannot be analyzed intuitively, leaving researchers and users without the necessary trust in deep learning models and intelligent systems. Therefore, a radar signal identification and positioning algorithm based on Gradient-weighted Class Activation Mapping (Grad-CAM) is provided, which can both identify radar signals and locate them in the time-frequency diagram, thereby improving the interpretability of the model and helping to understand the computation carried out by the convolutional neural network during image classification. Gradient-weighted class activation mapping can generate visual explanations for decisions of the convolutional neural network, making the deep learning model more transparent and interpretable.
Grad-CAM is a weakly supervised localization method that can locate the target visually, quickly and accurately, giving complex deep learning models good interpretability. In addition, the method only needs image-level labels for training and does not require expensive bounding-box annotations, which effectively reduces the manual labeling cost. Moreover, the method requires no modification of the deep learning model, so the classification performance of the model is unaffected, and a good balance is struck between model identification accuracy and interpretability.
Disclosure of Invention
The invention aims to provide a radar signal identification and positioning method based on MobileNet model transfer learning.
A radar signal identification and positioning method based on MobileNet model transfer learning comprises the following steps:
step 1: acquiring a radar signal data set, converting radar signals into a two-dimensional time-frequency graph through Choi-Williams time-frequency conversion, and generating a training set and a test set;
step 2: forming a depth separable convolution module from spatial (depth) convolution and channel convolution, and building a MobileNet network model based on the depth separable convolution;
step 3: loading pre-trained MobileNet network parameters, and initializing the model parameters with parameters pre-trained on ImageNet;
step 4: training the deep learning model on the radar signal data set with the cross-entropy loss and the Adam optimization algorithm;
step 5: loading test data to realize the identification of the radar signal modulation mode;
step 6: taking the derivative of the output result and back-propagating it to the output of the last convolutional layer, obtaining a gradient-weighted class activation mapping through weighted summation, and generating and highlighting the discriminative region of the prediction result;
step 7: upsampling the visualized gradient-weighted class activation mapping image, and fusing the upsampled image with the original radar signal time-frequency image to obtain the final predicted positioning result.
Further, the representation form of the radar signal is assumed in the step 1 as
y(t)=x(t)+N(t)
Converting the one-dimensional radar signals into a two-dimensional time-frequency graph through Choi-Williams time-frequency distribution, and displaying the change of the radar signal frequency along with time; the Choi-Williams time-frequency distribution has the characteristics of high resolution, unobvious cross terms and the like, and the Choi-Williams distribution is expressed as follows:
CW_x(t, w) = ∭ x(u + τ/2) x*(u − τ/2) f(θ, τ) e^{j(θu − θt − wτ)} du dτ dθ
wherein t and w respectively represent time domain components and frequency domain components of time frequency distribution; f (theta, tau) is a kernel function of time-frequency distribution; τ represents time delay; the kernel function can be regarded as a low-pass filter, which can effectively reduce the interference of cross terms, and is expressed as follows:
f(θ, τ) = exp(−θ²τ²/σ)
the time-frequency diagram of the radar signal can be regarded as a two-dimensional image, in which the time component and the frequency component of the time-frequency distribution correspond to the x-axis and the y-axis of the image respectively; the time-frequency diagram visually represents the variation of the radar signal frequency with time, so that the modulation mode characteristics of the radar signal can be effectively represented.
Further, the step 2 specifically includes:
step 2.1: the depth separable convolution consists of two layers: depth convolution and channel convolution; applying a single convolution on each input channel by using a depth convolution, then creating a linear combination of depth layer outputs by using a channel convolution, and decomposing a standard convolution operation into the depth convolution and the channel convolution can greatly reduce model parameters and calculation cost;
depth convolution with one filter per input channel can be written as
Ĝ(k, l, m) = Σ_{i,j} K̂(i, j, m) · F(k+i−1, l+j−1, m)
wherein K̂ is a depth convolution kernel of size DK × DK × M; the m-th filter of K̂ is applied to the m-th channel of the input feature map F to generate the m-th channel of the filtered output feature map Ĝ;
step 2.2: in order to keep a higher-resolution feature map, deleting the 7 × 7 convolution and all later layers in the pre-training model, extracting the last 14 × 14 convolution output as the feature output, then reducing the spatial resolution of the features output by the convolution layers to 1 through global average pooling, and finally feeding the result into a fully-connected classification layer and calculating the prediction result of the model through softmax; the MobileNet network model comprises 11 1 × 1 channel convolution layers and 11 3 × 3 depth convolution layers which are stacked alternately, each convolution layer being followed by a batch normalization layer and a rectified linear unit ReLU activation layer; downsampling is achieved by setting the stride to 2 in the 3 × 3 depth convolution layers and in the first 3 × 3 conventional convolution layer.
Further, the method for obtaining the gradient weighting class activation mapping in step 6 specifically includes:
calculating the gradient of the category score y^c corresponding to category c with respect to the activation feature maps A^k of the convolutional layer, i.e.
∂y^c/∂A^k
these back-propagated gradients are then global-average-pooled over the width and height dimensions (i, j) to obtain the neuron importance weights α_k^c:
α_k^c = (1/Z) Σ_i Σ_j ∂y^c/∂A^k_{ij}
α_k^c is the importance of category c to the k-th channel of the feature map output by the last convolutional layer; α_k^c is then used as a weight to form a weighted linear combination of the last layer's activation feature maps, and the final result is obtained after a ReLU activation function:
L^c_{Grad-CAM} = ReLU(Σ_k α_k^c A^k)
the reason for applying the ReLU activation function to the activation feature map is that the class activation map only focuses on features that have a positive impact on a particular class, for which negative pixels may belong to other classes.
The invention has the beneficial effects that:
the invention provides a radar signal modulation mode identification and positioning method based on mobileNet model transfer learning and gradient weighting activation mapping, wherein a network model is built by using depth separable convolution, so that model parameters can be effectively reduced, and the calculation efficiency of the model is improved; in the training process, the pre-training model is loaded for transfer learning training, so that the convergence speed and generalization performance of the model can be improved; meanwhile, the prediction result of the network model is visualized by adopting a gradient weighting category activation mapping method, so that the interpretability and the transparency of the deep learning model are improved.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is a time-frequency diagram of different radar signals.
FIG. 3 is a schematic diagram of the structure of a standard convolution with a depth convolution and a channel convolution.
FIG. 4 is a schematic diagram of a standard convolution module versus a depth separable convolution module.
Fig. 5 is a schematic diagram of migration learning.
FIG. 6 is a graph of MobileNet model recognition rate versus signal-to-noise ratio for different training methods.
FIG. 7 is a comparison of results of a MobileNet confusion matrix based on a transfer learning training model and a de novo training model.
FIG. 8 is a comparison graph of the visualization results of weighted class activation mapping of different radar signals based on a transfer learning training model and a de novo training model.
Fig. 9 is a table of MobileNet model structures in an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
The invention relates to the field of radar signal modulation mode identification, and in particular to transfer learning training based on the lightweight MobileNet model, in which visualization of the model prediction and positioning of the radar signal in the time-frequency diagram are realized through gradient-weighted class activation mapping (Grad-CAM).
The invention provides a radar signal modulation mode identification and positioning method based on MobileNet model transfer learning and gradient-weighted class activation mapping. The network model is built with depth separable convolutions, which effectively reduces the model parameters and improves the computational efficiency of the model; during training, a pre-trained model is loaded for transfer learning, which improves the convergence speed and generalization performance of the model; meanwhile, the prediction result of the network model is visualized with the gradient-weighted class activation mapping method, which improves the interpretability and transparency of the deep learning model, helps researchers better analyze the deep learning model, and allows the deep learning system to be better understood and trusted.
A radar signal identification and positioning method based on MobileNet model transfer learning comprises the following steps:
step 1: converting radar signals into a two-dimensional time-frequency graph through Choi-Williams time-frequency conversion to generate a training set and a test set;
step 2: forming a depth separable convolution module from spatial (depth) convolution and channel convolution, and building a MobileNet network model based on the depth separable convolution;
step 3: loading pre-trained MobileNet network parameters, and initializing the model parameters with parameters pre-trained on ImageNet;
step 4: training the deep learning model on the radar signal data set with the cross-entropy loss and the Adam optimization algorithm;
step 5: loading test data to realize the identification of the radar signal modulation mode;
step 6: taking the derivative of the output result and back-propagating it to the output of the last convolutional layer, obtaining a gradient-weighted class activation mapping through weighted summation, and generating and highlighting the discriminative region of the prediction result;
step 7: upsampling the visualized gradient-weighted class activation mapping image, and fusing the upsampled image with the original radar signal time-frequency image to obtain the final predicted positioning result.
1. In step 1, the radar signal is assumed to be represented in the form of
y(t)=x(t)+N(t) (1)
Through the Choi-Williams time-frequency distribution, the one-dimensional radar signal is converted into a two-dimensional time-frequency image, which displays the variation of the radar signal frequency with time. The Choi-Williams time-frequency distribution has the advantages of high resolution and weak cross-term interference. The Choi-Williams distribution is given by
CW_x(t, w) = ∭ x(u + τ/2) x*(u − τ/2) f(θ, τ) e^{j(θu − θt − wτ)} du dτ dθ (2)
Wherein t and w respectively denote the time and frequency components of the time-frequency distribution, f(θ, τ) is the kernel function of the time-frequency distribution, and τ denotes the time delay. The kernel function can be regarded as a low-pass filter that effectively suppresses the interference of cross terms. The kernel function is expressed as
f(θ, τ) = exp(−θ²τ²/σ) (3)
The time-frequency diagram of the radar signal can be regarded as a two-dimensional image, in which the time component and the frequency component of the time-frequency distribution correspond to the x-axis and the y-axis of the image respectively. The time-frequency diagram visually represents the variation of the radar signal frequency with time, and therefore effectively characterizes the modulation mode of the radar signal.
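As a minimal numerical sketch of this step (assuming Python with NumPy and Matplotlib, which the patent does not specify), the snippet below converts a simulated noisy LFM pulse into a Choi-Williams-style time-frequency image; the direct integer-lag form, the signal parameters and the output file name are illustrative only and are not the implementation used by the invention.

```python
import numpy as np
import matplotlib.pyplot as plt

def choi_williams(x, sigma=1.0):
    """Rough discrete Choi-Williams distribution of a complex (analytic) signal x.
    Direct time-lag form with integer lags, O(N^3); an illustrative sketch only."""
    N = len(x)
    lags = np.arange(-(N // 2) + 1, N // 2)
    u = np.arange(N)
    cwd = np.zeros((N, N))
    for t in range(N):
        acf = np.zeros(len(lags), dtype=complex)   # kernel-smoothed local autocorrelation at time t
        for li, tau in enumerate(lags):
            if tau == 0:
                acf[li] = abs(x[t]) ** 2
                continue
            valid = (u + tau >= 0) & (u + tau < N) & (u - tau >= 0) & (u - tau < N)
            uv = u[valid]
            # Gaussian time-smoothing window induced by the kernel f(theta, tau) = exp(-theta^2 tau^2 / sigma)
            win = np.sqrt(sigma / (4 * np.pi * tau ** 2)) * np.exp(-sigma * (uv - t) ** 2 / (4.0 * tau ** 2))
            acf[li] = np.sum(win * x[uv + tau] * np.conj(x[uv - tau]))
        # FFT over the lag axis -> frequency axis
        # (with integer lags the frequency axis corresponds to twice the signal frequency; left uncorrected here)
        cwd[t, :] = np.abs(np.fft.fft(acf, N))
    return cwd.T                                    # rows: frequency bins, columns: time

# Example: a noisy LFM pulse rendered as a time-frequency image for the training set
N = 128
n = np.arange(N)
f0, B = 0.05, 0.15                                  # start frequency and sweep width (normalized, illustrative)
x = np.exp(1j * 2 * np.pi * (f0 * n + B * n ** 2 / (2 * N)))     # x(t): LFM signal
x = x + 0.1 * (np.random.randn(N) + 1j * np.random.randn(N))     # y(t) = x(t) + N(t)
plt.imsave("lfm_tf_image.png", choi_williams(x), origin="lower", cmap="jet")
```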
2. Step 2 specifically comprises forming a depth separable convolution module from spatial (depth) convolution and channel convolution, and building the MobileNet network model:
step 2-1: the depth separable convolution consists of two layers: depth convolution and channel convolution. A single convolution is applied on each input channel (input depth) using depth convolution. Then, a channel convolution (simple 1 × 1 convolution) is used to create a linear combination of the depth layer outputs. The decomposition of standard convolution operations into depth convolution and channel convolution can greatly reduce model parameters and computation cost.
Depth convolution with one filter per input channel can be written as
Ĝ(k, l, m) = Σ_{i,j} K̂(i, j, m) · F(k+i−1, l+j−1, m) (4)
where K̂ is a depth convolution kernel of size DK × DK × M, and the m-th filter of K̂ is applied to the m-th channel of the input feature map F to produce the m-th channel of the filtered output feature map Ĝ.
The computation cost of the deep convolution is
DK·DK·M·DF·DF (5)
Deep convolution is very efficient compared to standard convolution, but the channels are processed independently during the convolution and are not combined to create new features. Therefore, a 1 × 1 convolution layer must be added in order to generate new combined features. The combination of a depth convolution and a 1 × 1 channel convolution is referred to as a depth separable convolution. The computation cost of the depth separable convolution is
DK·DK·M·DF·DF+M·N·DF·DF (6)
I.e., the sum of the depth convolution computation cost and the 1 × 1 channel convolution computation cost. Compared with the standard convolution cost DK·DK·M·N·DF·DF, decomposing the convolution into the two-step combination of a depth convolution and a 1 × 1 channel convolution reduces the computation by a factor of
(DK·DK·M·DF·DF + M·N·DF·DF)/(DK·DK·M·N·DF·DF) = 1/N + 1/DK² (7)
MobileNet uses a 3 x 3 deep separable convolution that is 8-9 times less computationally expensive than the standard 3 x 3 convolution. Therefore, the calculation efficiency of the model can be effectively improved, the model delay is reduced, and the model can be easily deployed in a mobile or embedded system.
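A minimal sketch of the depth separable convolution building block described above, assuming PyTorch (the patent does not name a framework); the channel counts and layer names are illustrative. The parameter comparison at the end illustrates the roughly 8-9x reduction mentioned for 3 × 3 kernels.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """3x3 depth (depthwise) convolution followed by 1x1 channel (pointwise) convolution,
    each with batch normalization and ReLU, as in the MobileNet building block."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride, padding=1,
                      groups=in_ch, bias=False),                 # one 3x3 filter per input channel
            nn.BatchNorm2d(in_ch),
            nn.ReLU(inplace=True),
        )
        self.pointwise = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),  # linear combination across channels
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Convolution-weight count vs. a standard 3x3 convolution, illustrating the reduction factor 1/N + 1/DK^2
std = nn.Conv2d(256, 256, kernel_size=3, padding=1, bias=False)
sep = DepthwiseSeparableConv(256, 256)
n_std = sum(p.numel() for p in std.parameters())
n_sep = sum(p.numel() for p in sep.parameters() if p.dim() == 4)   # the two conv kernels only
print(n_std, n_sep, n_std / n_sep)                                 # roughly an 8-9x reduction
```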
Step 2-2: In order to keep a higher-resolution feature map, the 7 × 7 convolution and all later layers in the pre-training model are deleted, and the last 14 × 14 convolution output is extracted as the feature output; the spatial resolution of the features output by the convolution layers is then reduced to 1 through global average pooling, and the result is finally fed into a fully-connected classification layer whose prediction is computed through softmax. The model contains 11 1 × 1 channel convolutional layers and 11 3 × 3 depth convolutional layers, stacked alternately, each followed by a batch normalization layer and a rectified linear unit (ReLU) activation layer. Downsampling is achieved by setting the stride to 2 in the 3 × 3 depth convolutional layers and in the first 3 × 3 conventional convolutional layer.
3. In step 3, the model parameters are initialized with parameters pre-trained on ImageNet. In transfer learning, a model is first trained from scratch in a source domain with a large amount of labeled data so that it learns parameters such as weights and biases; this is called the pre-training model. These learned parameters are then transferred to other target domains, and the model can be trained starting from the weights of the pre-trained model rather than from scratch. Transfer learning significantly improves the convergence speed of the model in the target domain;
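A sketch of this initialization step, assuming PyTorch/torchvision; torchvision ships MobileNetV2 rather than the original MobileNet, so the ImageNet-pre-trained MobileNetV2 is used here as a stand-in, and the 8-class head and the choice of frozen layers are illustrative assumptions rather than details taken from the patent.

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 8   # LFM, BPSK, Frank, Costas, P1-P4 (from the embodiment)

# Load ImageNet-pre-trained weights as the initialization (transfer learning)
backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)

# Replace the ImageNet classifier with a radar-signal classification head
backbone.classifier[-1] = nn.Linear(backbone.last_channel, NUM_CLASSES)

# Optionally freeze early stages so only later stages and the head are fine-tuned
for param in backbone.features[:7].parameters():
    param.requires_grad = False

model = backbone
```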
4. In step 4, the model is trained with the cross-entropy loss and the Adam optimization algorithm, and the learning rate is adjusted during training with a cosine learning-rate annealing schedule. Training is carried out on an RTX 3090 GPU for 120 epochs;
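A minimal training-loop sketch for this step, assuming PyTorch and an existing `train_loader` yielding (time-frequency image, label) batches; apart from the 120 epochs mentioned above, the hyper-parameter values are illustrative.

```python
import torch
import torch.nn as nn

def train(model, train_loader, epochs=120, lr=1e-3, device="cuda"):
    """Cross-entropy + Adam training with cosine learning-rate annealing."""
    model = model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
    for epoch in range(epochs):
        model.train()
        running_loss = 0.0
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
        scheduler.step()                      # anneal the learning rate once per epoch
        print(f"epoch {epoch + 1}: loss = {running_loss / len(train_loader):.4f}")
    return model
```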
5. In step 6, the gradient of the category score y^c corresponding to category c with respect to the activation feature maps A^k of the convolutional layer is calculated, i.e.
∂y^c/∂A^k
These back-propagated gradients are then global-average-pooled over the width and height dimensions (i, j) to obtain the neuron importance weights α_k^c:
α_k^c = (1/Z) Σ_i Σ_j ∂y^c/∂A^k_{ij}
The weight α_k^c obtained in this step is the importance of category c to the k-th channel of the feature map output by the last convolutional layer. α_k^c is then used as a weight to form a weighted linear combination of the last layer's activation feature maps, and the final result is obtained after a ReLU activation function:
L^c_{Grad-CAM} = ReLU(Σ_k α_k^c A^k)
The reason for applying the ReLU activation function to the weighted activation feature maps is that the class activation map only focuses on features that have a positive influence on the particular class; negative pixels are likely to belong to other classes.
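A Grad-CAM sketch corresponding to the two equations above, assuming PyTorch; forward and backward hooks capture the last convolutional layer's activations A^k and gradients ∂y^c/∂A^k, which are then global-average-pooled, linearly combined, and passed through ReLU. The function and variable names are illustrative.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx=None):
    """Gradient-weighted class activation map for one input image of shape (1, C, H, W)."""
    activations, gradients = {}, {}

    def fwd_hook(module, inp, out):
        activations["A"] = out.detach()

    def bwd_hook(module, grad_in, grad_out):
        gradients["dA"] = grad_out[0].detach()

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)

    model.eval()
    scores = model(image)                                   # class scores y^c
    if class_idx is None:
        class_idx = scores.argmax(dim=1).item()             # predicted modulation type
    model.zero_grad()
    scores[0, class_idx].backward()                         # back-propagate y^c only

    h1.remove(); h2.remove()

    weights = gradients["dA"].mean(dim=(2, 3), keepdim=True)            # alpha_k^c via global average pooling
    cam = F.relu((weights * activations["A"]).sum(dim=1, keepdim=True))  # ReLU(sum_k alpha_k^c * A^k)
    cam = cam / (cam.max() + 1e-8)                          # normalize to [0, 1] for visualization
    return cam.squeeze().cpu().numpy(), class_idx
```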
6. In step 7, the visualized gradient-weighted class activation map is upsampled with bilinear interpolation and linearly superposed on the original radar signal time-frequency image of the test sample to obtain the final predicted positioning result.
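A sketch of the bilinear upsampling and linear superposition in this step, assuming Python with PyTorch, NumPy and Matplotlib; the colormap and the blending weight are illustrative choices, not values given in the patent.

```python
import numpy as np
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
from matplotlib import cm

def overlay_cam(tf_image, cam, alpha=0.4, out_path="cam_overlay.png"):
    """Bilinearly upsample a Grad-CAM map to the time-frequency image size and blend the two."""
    h, w = tf_image.shape[:2]
    cam_t = torch.from_numpy(cam)[None, None].float()
    cam_up = F.interpolate(cam_t, size=(h, w), mode="bilinear", align_corners=False)
    cam_up = cam_up.squeeze().numpy()

    heatmap = cm.jet(cam_up)[..., :3]                       # color the activation map
    base = tf_image.astype(np.float32)
    base = (base - base.min()) / (base.max() - base.min() + 1e-8)
    if base.ndim == 2:
        base = np.stack([base] * 3, axis=-1)                # grayscale TF image -> RGB
    fused = (1 - alpha) * base + alpha * heatmap            # linear superposition
    plt.imsave(out_path, np.clip(fused, 0, 1))
    return fused
```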
compared with the prior art, the invention has the beneficial effects that:
the MobileNet model structure is a single channel, a multi-branch structure similar to a ResNet model and an inclusion model is not provided, the structure is simple, the reasoning speed is high, and the MobileNet model structure occupies a smaller memory than a branch structure;
2. the transfer learning method can improve the convergence speed of the model, reduce the requirements of training on the scale of a training set and effectively improve the generalization performance of the model;
3. the gradient weighting category activation mapping can make visual interpretation on the prediction of the model, so that the convolutional neural network model is more transparent, researchers can be helped to analyze the deep learning model better, and the researchers can be helped to understand and trust the deep learning system better.
Example 1:
Referring to fig. 1, a flow chart of the radar signal modulation mode identification and positioning algorithm based on the MobileNet transfer learning model and gradient-weighted class activation mapping is shown.
Step 1: referring to fig. 2, radar signals are converted into two-dimensional time-frequency images through the Choi-Williams time-frequency distribution, and the training set and test set are generated. The modulation modes of the radar signals include: linear frequency modulation (LFM), BPSK, Frank code, Costas code, P1 code, P2 code, P3 code and P4 code signals;
Step 2: referring to fig. 3, a depth separable convolution module is constructed from spatial convolution and channel convolution;
Step 2-1: the depth separable convolution consists of two layers: depth convolution and channel convolution. A single convolution is applied on each input channel (input depth) using the depth convolution; a channel convolution (a simple 1 × 1 convolution) is then used to create a linear combination of the depth layer outputs. Decomposing the standard convolution operation into a depth convolution and a channel convolution greatly reduces the model parameters and computation cost.
Step 2-2: referring to fig. 4, the 7 × 7 convolution and all later layers in the pre-training model are deleted and the last 14 × 14 convolution output is extracted as the feature output; the spatial resolution of the features output by the convolution layers is then reduced to 1 through global average pooling, and the result is finally fed into a fully-connected classification layer whose prediction is computed through softmax. The model contains 11 1 × 1 channel convolutional layers and 11 3 × 3 depth convolutional layers, stacked alternately, each followed by a batch normalization layer and a rectified linear unit (ReLU) activation layer. Downsampling is achieved by setting the stride to 2 in the 3 × 3 depth convolutional layers and in the first 3 × 3 conventional convolutional layer.
Step 3: referring to fig. 5, initialization is performed using model parameters pre-trained on ImageNet;
Step 4: the model is trained with the cross-entropy loss and the stochastic gradient descent algorithm, and the learning rate is adjusted during training with a cosine learning-rate annealing schedule. Training is carried out on an RTX 3090 GPU for 120 epochs;
Step 5: the gradient of the category score y^c corresponding to category c with respect to the activation feature maps A^k of the convolutional layer is calculated, i.e.
∂y^c/∂A^k
These back-propagated gradients are then global-average-pooled over the width and height dimensions (i, j) to obtain the neuron importance weights α_k^c:
α_k^c = (1/Z) Σ_i Σ_j ∂y^c/∂A^k_{ij}
The weight α_k^c obtained in this step is the importance of category c to the k-th channel of the feature map output by the last convolutional layer. α_k^c is then used as a weight to form a weighted linear combination of the last layer's activation feature maps, and the final result is obtained after a ReLU activation function:
L^c_{Grad-CAM} = ReLU(Σ_k α_k^c A^k)
The ReLU activation function is applied because the class activation map only focuses on features that have a positive influence on the particular class; negative pixels are likely to belong to other classes;
Step 6: the visualized gradient-weighted class activation map is upsampled with bilinear interpolation and linearly superposed on the original radar signal time-frequency image of the test sample to obtain the final predicted positioning result.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (4)

1. A radar signal identification and positioning method based on MobileNet model transfer learning is characterized by comprising the following steps:
step 1: acquiring a radar signal data set, converting radar signals into a two-dimensional time-frequency graph through Choi-Williams time-frequency conversion, and generating a training set and a test set;
step 2: forming a depth separable convolution module from spatial (depth) convolution and channel convolution, and building a MobileNet network model based on the depth separable convolution;
step 3: loading pre-trained MobileNet network parameters, and initializing the model parameters with parameters pre-trained on ImageNet;
step 4: training the deep learning model on the radar signal data set with the cross-entropy loss and the Adam optimization algorithm;
step 5: loading test data to realize the identification of the radar signal modulation mode;
step 6: taking the derivative of the output result and back-propagating it to the output of the last convolutional layer, obtaining a gradient-weighted class activation mapping through weighted summation, and generating and highlighting the discriminative region of the prediction result;
step 7: upsampling the visualized gradient-weighted class activation mapping image, and fusing the upsampled image with the original radar signal time-frequency image to obtain the final predicted positioning result.
2. The method for identifying and positioning radar signals based on MobileNet model transfer learning of claim 1, wherein the method comprises the following steps: in the step 1, the representation form of the radar signal is assumed to be
y(t)=x(t)+N(t)
Converting the one-dimensional radar signals into a two-dimensional time-frequency graph through Choi-Williams time-frequency distribution, and displaying the change of the radar signal frequency along with time; the Choi-Williams time-frequency distribution has the characteristics of high resolution, unobvious cross terms and the like, and the Choi-Williams distribution is expressed as follows:
CW_x(t, w) = ∭ x(u + τ/2) x*(u − τ/2) f(θ, τ) e^{j(θu − θt − wτ)} du dτ dθ
wherein t and w respectively represent time domain components and frequency domain components of time frequency distribution; f (theta, tau) is a kernel function of time-frequency distribution; τ represents time delay; the kernel function can be regarded as a low-pass filter, which can effectively reduce the interference of cross terms, and is expressed as follows:
f(θ, τ) = exp(−θ²τ²/σ)
the time-frequency diagram of the radar signal can be regarded as a two-dimensional image, and the time component and the frequency component of time-frequency distribution respectively represent the x axis and the y axis of the image; the time-frequency diagram can visually represent the change relation of the radar signal frequency along with time, so that the modulation mode characteristics of the radar signal can be effectively represented.
3. The method for identifying and positioning radar signals based on MobileNet model transfer learning of claim 1, wherein the method comprises the following steps: the step 2 specifically comprises the following steps:
step 2.1: the depth separable convolution consists of two layers: depth convolution and channel convolution; applying a single convolution on each input channel by using a depth convolution, then creating a linear combination of depth layer outputs by using a channel convolution, and decomposing a standard convolution operation into the depth convolution and the channel convolution can greatly reduce model parameters and calculation cost;
depth convolution with one filter per input channel can be written as
Ĝ(k, l, m) = Σ_{i,j} K̂(i, j, m) · F(k+i−1, l+j−1, m)
wherein K̂ is a depth convolution kernel of size DK × DK × M; the m-th filter of K̂ is applied to the m-th channel of the input feature map F to generate the m-th channel of the filtered output feature map Ĝ;
step 2.2: in order to keep a higher-resolution feature map, deleting the 7 × 7 convolution and all later layers in the pre-training model, extracting the last 14 × 14 convolution output as the feature output, then reducing the spatial resolution of the features output by the convolution layers to 1 through global average pooling, and finally feeding the result into a fully-connected classification layer and calculating the prediction result of the model through softmax; the MobileNet network model comprises 11 1 × 1 channel convolution layers and 11 3 × 3 depth convolution layers which are stacked alternately, each convolution layer being followed by a batch normalization layer and a rectified linear unit ReLU activation layer; downsampling is achieved by setting the stride to 2 in the 3 × 3 depth convolution layers and in the first 3 × 3 conventional convolution layer.
4. The method for identifying and positioning radar signals based on MobileNet model transfer learning of claim 1, wherein the method for obtaining the gradient weighting class activation mapping in step 6 specifically includes:
calculating the gradient of the category score y^c corresponding to category c with respect to the activation feature maps A^k of the convolutional layer, i.e.
∂y^c/∂A^k
these back-propagated gradients are then global-average-pooled over the width and height dimensions (i, j) to obtain the neuron importance weights α_k^c:
α_k^c = (1/Z) Σ_i Σ_j ∂y^c/∂A^k_{ij}
α_k^c is the importance of category c to the k-th channel of the feature map output by the last convolutional layer; α_k^c is then used as a weight to form a weighted linear combination of the last layer's activation feature maps, and the final result is obtained after a ReLU activation function:
L^c_{Grad-CAM} = ReLU(Σ_k α_k^c A^k)
the reason for applying the ReLU activation function to the weighted activation feature maps is that the class activation map only focuses on features that have a positive influence on the particular class; negative pixels are likely to belong to other classes.
CN202111241399.9A 2021-10-25 2021-10-25 Radar signal identification and positioning method based on MobileNet model transfer learning Active CN114019467B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111241399.9A CN114019467B (en) 2021-10-25 2021-10-25 Radar signal identification and positioning method based on MobileNet model transfer learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111241399.9A CN114019467B (en) 2021-10-25 2021-10-25 Radar signal identification and positioning method based on MobileNet model transfer learning

Publications (2)

Publication Number Publication Date
CN114019467A true CN114019467A (en) 2022-02-08
CN114019467B CN114019467B (en) 2024-07-09

Family

ID=80057807

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111241399.9A Active CN114019467B (en) 2021-10-25 2021-10-25 Radar signal identification and positioning method based on MobileNet model transfer learning

Country Status (1)

Country Link
CN (1) CN114019467B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114818777A (en) * 2022-03-18 2022-07-29 北京遥感设备研究所 Training method and device for active angle deception jamming recognition model
CN114896307A (en) * 2022-06-30 2022-08-12 北京航空航天大学杭州创新研究院 Time series data enhancement method and device and electronic equipment
CN115424084A (en) * 2022-11-07 2022-12-02 浙江省人民医院 Fundus photo classification method and device based on class weighting network
CN116401588A (en) * 2023-06-08 2023-07-07 西南交通大学 Radiation source individual analysis method and device based on deep network
CN117233723A (en) * 2023-11-14 2023-12-15 中国电子科技集团公司第二十九研究所 Radar tracking envelope extraction method based on CNN class activation diagram

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109597043A (en) * 2018-11-16 2019-04-09 江苏科技大学 Radar Signal Recognition method based on quantum particle swarm convolutional neural networks
CN113033473A (en) * 2021-04-15 2021-06-25 中国人民解放军空军航空大学 ST2DCNN + SE-based radar overlapped signal identification method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109597043A (en) * 2018-11-16 2019-04-09 江苏科技大学 Radar Signal Recognition method based on quantum particle swarm convolutional neural networks
CN113033473A (en) * 2021-04-15 2021-06-25 中国人民解放军空军航空大学 ST2DCNN + SE-based radar overlapped signal identification method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KONOPKO, K ET AL.: "Radar signal recognition based on time-frequency representations and multidimensional probability density function estimator", SIGNAL PROCESSING SYMPOSIUM, 31 December 2015 (2015-12-31) *
WANG TINGYIN; LIN MINGGUI; CHEN DA; WU YUNPING: "Emergency communication method for nuclear radiation monitoring based on BeiDou RDSS", 计算机***应用, no. 12, 15 December 2019 (2019-12-15) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114818777A (en) * 2022-03-18 2022-07-29 北京遥感设备研究所 Training method and device for active angle deception jamming recognition model
CN114896307A (en) * 2022-06-30 2022-08-12 北京航空航天大学杭州创新研究院 Time series data enhancement method and device and electronic equipment
CN114896307B (en) * 2022-06-30 2022-09-27 北京航空航天大学杭州创新研究院 Time series data enhancement method and device and electronic equipment
CN115424084A (en) * 2022-11-07 2022-12-02 浙江省人民医院 Fundus photo classification method and device based on class weighting network
CN116401588A (en) * 2023-06-08 2023-07-07 西南交通大学 Radiation source individual analysis method and device based on deep network
CN116401588B (en) * 2023-06-08 2023-08-15 西南交通大学 Radiation source individual analysis method and device based on deep network
CN117233723A (en) * 2023-11-14 2023-12-15 中国电子科技集团公司第二十九研究所 Radar tracking envelope extraction method based on CNN class activation diagram
CN117233723B (en) * 2023-11-14 2024-01-30 中国电子科技集团公司第二十九研究所 Radar tracking envelope extraction method based on CNN class activation diagram

Also Published As

Publication number Publication date
CN114019467B (en) 2024-07-09

Similar Documents

Publication Publication Date Title
CN110335290B (en) Twin candidate region generation network target tracking method based on attention mechanism
CN114019467A (en) Radar signal identification and positioning method based on MobileNet model transfer learning
CN110210551B (en) Visual target tracking method based on adaptive subject sensitivity
CN112818903B (en) Small sample remote sensing image target detection method based on meta-learning and cooperative attention
CN108537192B (en) Remote sensing image earth surface coverage classification method based on full convolution network
CN108038445B (en) SAR automatic target identification method based on multi-view deep learning framework
CN112329658B (en) Detection algorithm improvement method for YOLOV3 network
CN112183203B (en) Real-time traffic sign detection method based on multi-scale pixel feature fusion
CN113128558B (en) Target detection method based on shallow space feature fusion and adaptive channel screening
CN112347888B (en) Remote sensing image scene classification method based on bi-directional feature iterative fusion
CN111369522B (en) Light field significance target detection method based on generation of deconvolution neural network
CN113705769A (en) Neural network training method and device
CN112818969A (en) Knowledge distillation-based face pose estimation method and system
CN109064389B (en) Deep learning method for generating realistic images by hand-drawn line drawings
CN115222946A (en) Single-stage example image segmentation method and device and computer equipment
CN115131557A (en) Lightweight segmentation model construction method and system based on activated sludge image
CN112149526A (en) Lane line detection method and system based on long-distance information fusion
CN112183269B (en) Target detection method and system suitable for intelligent video monitoring
CN112132880B (en) Real-time dense depth estimation method based on sparse measurement and monocular RGB image
CN116844039A (en) Multi-attention-combined trans-scale remote sensing image cultivated land extraction method
CN108052981B (en) Image classification method based on nonsubsampled Contourlet transformation and convolutional neural network
CN113780305B (en) Significance target detection method based on interaction of two clues
CN115223033A (en) Synthetic aperture sonar image target classification method and system
CN116665033A (en) Satellite remote sensing image building extraction method
Mujtaba et al. Automatic solar panel detection from high-resolution orthoimagery using deep learning segmentation networks

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant