CN116908808A - RTN-based high-resolution one-dimensional image target recognition method - Google Patents


Info

Publication number: CN116908808A
Authority: CN (China)
Prior art keywords: layer, domain data, model, RTN, data
Legal status: Granted
Application number: CN202311178885.XA
Other languages: Chinese (zh)
Other versions: CN116908808B
Inventors: 王国帅, 刘云申, 张弘, 敖呈欢, 陈帅, 卢建
Current Assignee: Nanjing Guorui Defense System Co ltd
Original Assignee: Nanjing Guorui Defense System Co ltd
Application filed by Nanjing Guorui Defense System Co ltd
Priority: CN202311178885.XA
Publication of CN116908808A; application granted; publication of CN116908808B
Legal status: Active

Classifications

    • G01S 7/417 — Target characterisation using analysis of echo signals, involving the use of neural networks
    • G01S 7/411 — Identification of targets based on measurements of radar reflectivity
    • G06F 18/241 — Classification techniques relating to the classification model
    • G06N 3/0464 — Convolutional networks [CNN, ConvNet]
    • G06N 3/08 — Neural network learning methods
    • Y02A 90/10 — Information and communication technologies supporting adaptation to climate change


Abstract

The application discloses an RTN-based high-resolution one-dimensional image target recognition method comprising the following steps: establishing an echo signal model; generating a training set and a test set; constructing an improved RTN model based on a one-dimensional CNN model; training the improved RTN model with the training set to obtain the final improved RTN model; and inputting the test set into the final improved RTN model to obtain the final target recognition result. The application improves the generalization ability of CNN-based radar target recognition models and enhances their recognition capability against clutter and noise interference.

Description

RTN-based high-resolution one-dimensional image target recognition method
Technical Field
The application belongs to the field of signal processing, and particularly relates to an RTN-based high-resolution one-dimensional image target recognition method.
Background
With increases in radar resolution and the development of broadband radar technology, radar echoes can provide more reliable target features. The High Resolution Range Profile (HRRP) has become a major product of broadband high-resolution radar imaging and a research hot spot in recent years. The convolutional neural network (Convolutional Neural Network, CNN) is one of the most influential models in computer vision research and applications. Likewise, if time is treated as a spatial dimension, analogous to the height or width of a two-dimensional image, a CNN can be equally effective for time-series processing.
Deep learning models based on the convolutional neural network (CNN) perform excellently in the field of target recognition, but when clutter and noise corrupt the echo data, the original distribution of the data is destroyed and the recognition rate of such models drops sharply.
Disclosure of Invention
In order to solve the above problems, the present application provides an RTN-based high-resolution one-dimensional image target recognition method comprising the following steps:
step 1: establish an echo signal model; generate target echo data from the echo signal model and preprocess it to produce source domain data and target domain data; take part of the source domain data and part of the target domain data as the training set, and the remaining source domain data and target domain data as the test set; the source domain data carry label information, while the target domain data carry none;
step 2: construct an improved RTN model based on the one-dimensional CNN model;
step 3: train the improved RTN model with the training set to obtain the final improved RTN model;
step 4: input the test set into the final improved RTN model to obtain the final target recognition result.
Further, the improved RTN model in step 2 comprises an input layer, convolution layers, fully connected layers, residual layers, a classification layer and an output layer; the input layer and the convolution layers are implemented with the one-dimensional CNN model; the input layer, convolution layers and fully connected layers are connected in series, and the convolution result produced by the convolution layers is fed into the fully connected layers;
in step 3, the parameters of the improved RTN model are updated by SGD until the model converges, yielding the final improved RTN model.
Further, step 3 specifically comprises:
computing the cross-entropy loss of the source domain data at the output layer with a cross-entropy loss function; computing the MK-MMD distance between the residual-layer output of the target domain data and the output of the source domain data; computing the entropy loss of the target domain data at the classification layer with an entropy loss function; summing the results into a total loss value; and updating the parameters of the improved RTN model with SGD.
Further, the cross-entropy loss of the source domain data at the output layer is calculated as:

$L_c = \frac{1}{n_s}\sum_{i=1}^{n_s} J(\hat{y}_i^s, y_i^s)$

where $n_s$ is the total number of source domain samples $x_i^s$; $\hat{y}_i^s$ is the output-layer prediction vector for $x_i^s$; $y_i^s$ is the truth vector corresponding to $x_i^s$; and $J$ is the cross-entropy loss function.
Further, the MK-MMD distance between the residual-layer output of the target domain data and the output of the source domain data is calculated as:

$L_{MMD} = \sum_{l \in L} w_l\, d_{k,l}^2(D_s, D_t), \qquad d_{k,l}^2(D_s, D_t) = \left\| \mathbb{E}\!\left[\phi_l(x^s)\right] - \mathbb{E}\!\left[\phi_l(x^t)\right] \right\|_{\mathcal{H}_k}^2, \qquad k = \sum_{n} \beta_n k_n$

where $L = \{\mathrm{fc1}, \mathrm{fc2}, \text{output layer}\}$, and the output-layer data are taken in the unactivated state; $w_l$ is the importance weight of layer $l$; $n_l$ denotes the number of samples input to layer $l$; $k_n$ is the $n$-th Gaussian kernel of the kernel mixture; $\phi_l(x^s)$ and $\phi_l(x^t)$ are the feature vectors of the source domain data and target domain data at layer $l$; and $d_{k,l}$ is the MK-MMD distance at layer $l$.
Further, the entropy loss of the target domain data at the classification layer is calculated as:

$L_t = \frac{1}{n_t}\sum_{j=1}^{n_t} H\!\left(\hat{y}_j^t\right)$

where $n_t$ is the total number of target domain samples $x_j^t$ (the source domain data and target domain data are in one-to-one correspondence), $\hat{y}_j^t$ is the classification-layer prediction vector for $x_j^t$, and $H$ is the entropy loss function.
Further, the total loss value is calculated as:

$Loss = L_c + \lambda \sum_{l \in L} d_{k,l}^2(D_s, D_t) + \gamma L_t$

where $Loss$ is the total loss value, $\lambda$ is the penalty coefficient of the MK-MMD distance, and $\gamma$ is the penalty coefficient of the entropy loss of the target domain data at the classification layer.
Further, the target domain data includes sea clutter following the Rayleigh distribution.
Compared with the prior art, the application has the following advantages:
the method constructs an HRRP target recognition model based on a residual transfer network; joint distribution adaptation is innovatively applied to this residual-transfer-network-based target recognition model, which improves the generalization ability of the CNN-based radar target recognition model and enhances its recognition capability against clutter and noise interference; the recognition rate is 20%-30% higher than that of the traditional method.
Drawings
Fig. 1 is a schematic diagram of a CNN structure of a conventional two-dimensional CNN model.
Fig. 2 is a schematic diagram of a convolution process of a conventional two-dimensional CNN model.
Fig. 3 is a schematic diagram of a one-dimensional CNN structure of a one-dimensional CNN model.
Fig. 4 is a schematic diagram of a convolution process of a one-dimensional CNN model.
FIG. 5 is a schematic diagram of an improved RTN model according to an embodiment of the present application.
Fig. 6 is a graph of the loss curves when sea clutter with SCR = 30 dB is added, using the conventional method.
Fig. 7 is a graph of the loss curves when sea clutter with SCR = 30 dB is added, using the method of the embodiment of the present application.
Detailed Description
In order that the application may be readily understood, a more complete description of the application will be rendered by reference to the appended drawings. Embodiments of the application are illustrated in the accompanying drawings. This application may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
The application provides an RTN (Residual Transfer Network)-based high-resolution one-dimensional image target recognition method; specific embodiments are described below.
(1) Establishing the echo signal model and the convolutional neural network model

The radar carrier frequency is $f_c$. Let $N$ be the number of pulses in a coherent processing interval, and let the transmitted signal of the $n$-th pulse be:

$s(\hat{t}, t_m) = A\,\mathrm{rect}\!\left(\frac{\hat{t}}{T_p}\right) e^{\,j2\pi\left(f_c t + \frac{1}{2}\mu \hat{t}^2\right)}$   (1)

$\mathrm{rect}(u) = \begin{cases} 1, & |u| \le 1/2 \\ 0, & \text{otherwise} \end{cases}$   (2)

where $A$ is the amplitude of the transmitted signal, $\hat{t}$ is the fast time, $t_m = nT_r$ ($n = 0, 1, \ldots, N-1$) is the slow time, $t = \hat{t} + t_m$ is the full time, $T_r$ is the pulse repetition period, $T_p$ is the pulse width, $\mathrm{rect}(\cdot)$ is the rectangle function, and $j$ denotes the imaginary unit.

For a far point target, the echo signal is expressed as:

$s_r(\hat{t}, t_m) = A_r \sigma\,\mathrm{rect}\!\left(\frac{\hat{t}-\tau}{T_p}\right) e^{\,j2\pi\left(f_c (t-\tau) + \frac{1}{2}\mu(\hat{t}-\tau)^2\right)}$   (3)

where $A_r$ is the echo amplitude, $\sigma$ is the attenuation factor, $\tau = 2(R_0 - v t_m)/c$ is the echo delay, $R_0$ is the initial distance of the target, $v$ is the target speed, $c$ is the speed of light, $f_c$ is the radar carrier frequency, and $\mu = B/T_p$ is the chirp rate, where $B$ is the operating bandwidth. The range resolution $\Delta R$ of the radar depends on the bandwidth and can be expressed as:

$\Delta R = \frac{c}{2B}$   (4)
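As a numerical illustration of the signal model and Eq. (4), the following Python sketch generates a baseband LFM pulse and computes the range resolution; the bandwidth, pulse width and sampling rate are assumed values chosen for illustration, not parameters taken from the application.

```python
import numpy as np

C = 3e8        # speed of light c (m/s)
B = 500e6      # operating bandwidth B (Hz) -- assumed for illustration
TP = 10e-6     # pulse width Tp (s) -- assumed
MU = B / TP    # chirp rate mu = B / Tp, as in the text

def lfm_pulse(fs=2 * B):
    """Baseband LFM pulse rect(t/Tp) * exp(j*pi*mu*t^2), sampled at rate fs."""
    t = np.arange(-TP / 2, TP / 2, 1 / fs)
    return t, np.exp(1j * np.pi * MU * t ** 2)

def range_resolution(bandwidth):
    """Eq. (4): delta_R = c / (2B)."""
    return C / (2 * bandwidth)

t, s = lfm_pulse()
print(range_resolution(B))  # 0.3 m for B = 500 MHz
```

With B = 500 MHz the resolution is 0.3 m, far smaller than ship- or aircraft-sized targets, which is consistent with the multi-scattering-center view of HRRP described below.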
When a high-resolution radar observes a large target such as an aircraft or a ship, the target size is far larger than the range resolution. The target echo (HRRP) can then be regarded as the combination of many scattering centers and contains rich information about the target structure, so the recognition task can be completed by extracting structural features. Most existing convolutional neural network (CNN) applications process two-dimensional images, whereas HRRP is one-dimensional structural information; two candidate models are therefore considered for HRRP target recognition.
When the conventional two-dimensional CNN structure is used (Figs. 1 and 2), the original HRRP data must be reshaped into two-dimensional data before convolution and related operations. Figs. 3 and 4 show direct input of the raw one-dimensional range profile data without changing the data structure. As shown in Fig. 4, the model applies batch normalization (BN layer), nonlinear activation (activation layer) and pooling (pooling layer) to the convolution results; the fully connected layers also use BN and activation layers. The cost function (cross-entropy loss) is:

$L = \frac{1}{n}\sum_{i=1}^{n} J(\hat{y}_i, y_i)$   (5)

where $\hat{y}_i$ is the output-layer prediction vector for sample $x_i$ (after normalization), $y_i$ is the corresponding truth vector (one-hot code), and $J$ is the cross-entropy loss function. The cost function decreases continuously during training. The parameter settings of each layer are shown in Table 1.
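The cost function of Eq. (5) can be sketched as follows; the logits and one-hot labels are toy values chosen only to exercise the formula, not data from the experiments.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(y_true, y_prob, eps=1e-12):
    """Mean cross-entropy J between one-hot truth vectors and predicted probabilities."""
    return -np.mean(np.sum(y_true * np.log(y_prob + eps), axis=1))

logits = np.array([[4.0, 0.0, 0.0],
                   [0.0, 5.0, 0.0]])   # toy output-layer scores for two samples
labels = np.eye(3)[[0, 1]]             # one-hot codes
loss = cross_entropy(labels, softmax(logits))
```

The loss shrinks toward zero as the predicted probabilities approach the one-hot codes, which is the decreasing behaviour described above during training.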
Table 1. Parameter settings of each layer in the two models

Layer             | Conventional two-dimensional CNN                 | One-dimensional CNN
Input layer       | 16×16                                            | 256
Convolution 1     | kernel 3×3, 64 channels, stride 1; output 14×14  | kernel 1×5, 64 channels, stride 1; output 252
Max pooling       | output 7×7, stride 2                             | output 126, stride 2
Convolution 2     | kernel 2×2, 50 channels, stride 1; output 6×6    | kernel 1×5, 50 channels, stride 1; output 122
Max pooling       | output 3×3, stride 2                             | output 61, stride 2
Fully connected 1 | 128 (Dropout)                                    | 128 (Dropout)
Fully connected 2 | 100                                              | 100
Fully connected 3 | 50                                               | 50
Output layer      | 10 (SoftMax)                                     | 10 (SoftMax)
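The one-dimensional feature sizes in Table 1 follow from the usual "valid" (no padding) convolution and pooling arithmetic; the quick check below reproduces the 256 → 252 → 126 → 122 → 61 chain using the kernel and stride values stated in the table.

```python
def conv1d_out(n, k, stride=1):
    """Output length of a 'valid' (no padding) 1-D convolution."""
    return (n - k) // stride + 1

def pool1d_out(n, k=2, stride=2):
    """Output length of 1-D max pooling with a 2x1 window and stride 2."""
    return (n - k) // stride + 1

n = 256                  # input HRRP length
n = conv1d_out(n, 5)     # conv1, 1x5 kernel -> 252
n = pool1d_out(n)        # max pooling      -> 126
n = conv1d_out(n, 5)     # conv2, 1x5 kernel -> 122
n = pool1d_out(n)        # max pooling      -> 61
print(n)                 # 61
```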
The experimental data are simulated HRRPs of 10 classes of ships under normal sea conditions, with 10000 frames per class and 256 range cells per frame. The training set totals 60000 frames and the test set 40000 frames. The batch size during training is 150, so one pass over the whole training set takes 400 batches; the number of epochs is set to 5. No padding is used in the convolutions; the nonlinear mapping uses the ReLU activation function; BN is applied for feature normalization before activation; max pooling is used throughout, 2×2 in the two-dimensional CNN and 2×1 in the one-dimensional CNN, with stride 2. The learning rate is set to 0.001 with a decay rate of 0.99; Dropout with a random-deactivation rate of 0.2 is used at fully connected layer 1; and the momentum factor is set to 0.9. The test set is evaluated during training, and the recognition rates are shown in Table 2. The data in Table 2 show that the one-dimensional CNN converges significantly faster than the two-dimensional CNN and achieves a higher recognition rate, so the one-dimensional CNN is used as the base model in what follows.
Table 2. Test-set recognition rate (%)

Number of iterations | 0     | 1     | 2     | 3     | 4     | 5
Two-dimensional CNN  | 10.34 | 81.55 | 92.66 | 94.9  | 96.31 | 98.52
One-dimensional CNN  | 9.12  | 91.27 | 97.16 | 98.75 | 99.23 | 99.7
To explore the influence of clutter and noise on the model recognition rate, a corrupted test set is constructed by adding noise to the existing data. In the experiment, Rayleigh-distributed sea clutter (or noise) of different intensities is added to the test set to construct HRRP data for different external environments; the added intensity is set according to signal-to-clutter (or signal-to-noise) ratios of 20 dB, 25 dB and 30 dB. The saved one-dimensional CNN model is used to recognize the corrupted test set, and the recognition rates are shown in Table 3. Comparing the data in the table, noise and sea clutter greatly reduce the model's recognition rate on the test set, and the recognition performance drops rapidly as the intensity of the added noise and sea clutter grows; the existing model therefore needs to be improved to reduce the impact of these external factors.
Table 3. Recognition rate after adding sea clutter and noise of different intensities (%)

            | Original test set | 30 dB | 25 dB | 20 dB
Sea clutter | 99.43             | 66.47 | 46.83 | 33.4
Noise       | 99.2              | 78.4  | 63.3  | 40.29
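The corruption step described above — adding Rayleigh-distributed sea clutter at a chosen SCR — can be sketched as below. The scaling assumes SCR(dB) = 10·log10(signal power / clutter power), and the stand-in HRRP frame is synthetic; both are illustrative assumptions, not the application's data.

```python
import numpy as np

def add_sea_clutter(hrrp, scr_db, seed=0):
    """Corrupt an HRRP frame with Rayleigh sea clutter at a target SCR (dB).

    The Rayleigh scale sigma is chosen so that the clutter's mean power
    E[X^2] = 2*sigma^2 equals signal_power / 10**(SCR/10).
    """
    rng = np.random.default_rng(seed)
    p_sig = np.mean(hrrp ** 2)
    p_clu = p_sig / 10 ** (scr_db / 10)
    sigma = np.sqrt(p_clu / 2)
    return hrrp + rng.rayleigh(scale=sigma, size=hrrp.shape)

frame = np.abs(np.random.default_rng(1).normal(size=256))  # stand-in 256-cell HRRP frame
noisy = add_sea_clutter(frame, scr_db=30)
```

Lower SCR values (25 dB, 20 dB) scale the clutter power up, reproducing the progressively more corrupted test sets of Table 3.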
(2) Residual transfer network based on joint distribution
As the previous section shows, the model recognition rate drops sharply as the intensity of the added noise and sea clutter grows. The main reason is that the added noise and sea clutter destroy the original distribution of the data: the test set and the training set then no longer follow the same distribution, the feature spaces the model extracts from them differ, and the model is sensitive to this difference, so recognition performance degrades. A residual transfer network can be used at this point to improve the generalization ability of the model and its recognition capability against noise or sea clutter backgrounds.
Table 4. Network model parameter settings

Layer             | One-dimensional CNN model
Input layer       | 256
Convolution 1     | kernel 1×5, 64 channels, stride 1; output 252
Max pooling       | output 126, stride 2
Convolution 2     | kernel 1×5, 50 channels, stride 1; output 122
Max pooling       | output 61, stride 2
Fully connected 1 | 128 (Dropout)
Fully connected 2 | 100
Fully connected 3 | 50
Residual layer 1  | 10
Residual layer 2  | 10
Output layer      | 10 (SoftMax)
To train a recognition model with better generalization and robustness, and following the ideas of deep residual transfer networks and domain adaptation, an RTN (Residual Transfer Network) is applied to HRRP target recognition. The RTN model is modified to some extent: joint distribution adaptation and the residual network are combined in a novel way to optimize the transfer network, yielding the improved RTN model shown in Fig. 5 and Table 4. The model contains an input layer (input), convolution layers (conv1 and conv2), fully connected layers (fc1, fc2 and fc3), residual layers (fc4 and fc5), a classification layer (softmax) and an output layer (output). The lower dark-grey part of Fig. 5 is the added adaptation layer (a fully connected layer used to distinguish the paths of the source domain and target domain data), and the upper dark-grey part is the residual block. The target domain output layer in the improved RTN model is not drawn with a dotted line because the model uses pseudo labels for the target domain data. The improved RTN model still extracts data features with the one-dimensional CNN, and the source domain and the target domain share the model's parameters.
Because deep features transition from general to specific along the network, and the original RTN uses only conditional distribution adaptation, the improved RTN model proposed in this application adds an adaptation layer at the output layers of the source and target domains to achieve joint distribution adaptation. The "joint distribution" here is not the joint distribution in the traditional sense; rather, the marginal distribution and the conditional distribution are adapted simultaneously. The network is updated using the MK-MMD metric as the loss function measuring the distance between the different distributions:
$d_{k,l}^2(D_s, D_t) = \left\| \mathbb{E}\!\left[\phi_l(x^s)\right] - \mathbb{E}\!\left[\phi_l(x^t)\right] \right\|_{\mathcal{H}_k}^2$   (6)

$L_{MMD} = \sum_{l \in L} w_l\, d_{k,l}^2(D_s, D_t), \qquad k = \sum_{n} \beta_n k_n$   (7)

where $L = \{\mathrm{fc1}, \mathrm{fc2}, \text{output layer}\}$, and the output-layer data are taken in the unactivated state; $w_l$ is the importance weight of layer $l$; $n_l$ denotes the number of samples input to layer $l$; $k_n$ is the $n$-th Gaussian kernel of the kernel mixture (several kernels are used); $\phi_l(x^s)$ and $\phi_l(x^t)$ are the feature vectors of the source domain data and target domain data at layer $l$; and $d_{k,l}$ is the MK-MMD distance at layer $l$.
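A minimal numpy sketch of the multi-kernel MMD of Eqs. (6)-(7) for a single layer is given below; the Gaussian bandwidths and sample sizes are illustrative stand-ins (not the experiment's settings), and the biased V-statistic estimator is used for brevity.

```python
import numpy as np

def gaussian_kernels(x, y, gammas):
    """Mixture of Gaussian kernels: k(x, y) = sum_n exp(-gamma_n * ||x - y||^2)."""
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return sum(np.exp(-g * d2) for g in gammas)

def mk_mmd2(xs, xt, gammas=(0.01, 0.05, 0.1)):
    """Squared MK-MMD between source features xs and target features xt (biased estimate)."""
    kxx = gaussian_kernels(xs, xs, gammas).mean()
    kyy = gaussian_kernels(xt, xt, gammas).mean()
    kxy = gaussian_kernels(xs, xt, gammas).mean()
    return kxx + kyy - 2 * kxy

rng = np.random.default_rng(0)
same = mk_mmd2(rng.normal(size=(64, 10)), rng.normal(size=(64, 10)))          # same distribution
far = mk_mmd2(rng.normal(size=(64, 10)), rng.normal(loc=3.0, size=(64, 10)))  # shifted distribution
```

`same` stays near zero while `far` is large, which is exactly the property the training objective exploits: minimizing the MK-MMD of the layer outputs pulls the source and target feature distributions together.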
In transfer learning, the data are divided into source domain data ($D_s$) and target domain data ($D_t$). The source domain data usually carry complete annotation information and are the source of the knowledge to be transferred; the target domain data are those lacking annotations, to which knowledge is to be given. Transfer learning mainly uses $D_s$ and its labels to improve the classification of $D_t$ by the prediction model or function. In the model of Fig. 5, the source domain data $x^s$ do not undergo the identity mapping, so $x^s$ pass directly through fc4 and fc5, and the output at the fc5 layer is defined as $f_s(x^s)$. The target domain data pass through the residual layers, and the residual-layer output can be expressed as:

$f_t(x^t) = f(x^t) + \Delta f(x^t)$   (8)

where $f(x^t)$ is the feature of the target domain data $x^t$ obtained at fc3, and $f_t(x^t)$ is the result before the nonlinear mapping, which ensures that the final Softmax classifier is not affected. MK-MMD is used in the model to characterize the magnitude of the difference between the source domain output and the target domain residual-layer output. Through continuous training $f_t$ slowly approaches $f_s$, reducing the difference between $D_s$ and $D_t$ and thereby improving the transfer performance of the recognition model.
The model trained on $D_s$ is then used to assign pseudo labels to $D_t$, and the entropy loss of these pseudo labels is also added to the loss function. The final cost function of the recognition model is therefore:

$Loss = \frac{1}{n_s}\sum_{i=1}^{n_s} J\!\left(\hat{y}_i^s, y_i^s\right) + \lambda \sum_{l \in L} d_{k,l}^2(D_s, D_t) + \gamma\,\frac{1}{n_t}\sum_{j=1}^{n_t} H\!\left(\hat{y}_j^t\right)$   (9)

$J(\hat{y}, y) = -\sum_{c} y_c \log \hat{y}_c$   (10)

$H(\hat{y}) = -\sum_{c} \hat{y}_c \log \hat{y}_c$   (11)

where the first term on the right of the Loss equation is the classification loss (cross-entropy loss) of $D_s$ and is dominant; the second term is the mixed-kernel MMD distance, measuring the joint distribution difference between $D_s$ and $D_t$; and the third term is the entropy loss of the pseudo labels the model predicts for $D_t$. $\lambda$ and $\gamma$ are the penalty coefficients of the respective components of the loss function; $\hat{y}_i^s$ and $\hat{y}_j^t$ are the predicted vectors for the source domain data $x_i^s$ and the target domain data $x_j^t$; $H$ is the entropy loss function; and $y_i^s$ is the truth vector corresponding to $x_i^s$.

Minimizing the cost function $Loss$ through continuous training reduces the distribution difference and improves the recognition performance of the model on $D_t$.
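The assembly of Eq. (9) can be sketched as follows; `lam` and `gamma` stand for the penalty coefficients λ and γ, and the probability rows are toy pseudo-label posteriors chosen to exercise the entropy term.

```python
import numpy as np

def entropy_loss(probs, eps=1e-12):
    """Eq. (11): mean Shannon entropy of the class posteriors (pseudo labels)."""
    return -np.mean(np.sum(probs * np.log(probs + eps), axis=1))

def total_loss(l_cls, mmd_per_layer, probs_t, lam=1.0, gamma=0.3):
    """Eq. (9) sketch: Loss = L_cls + lambda * sum_l d_l^2 + gamma * entropy(target)."""
    return l_cls + lam * sum(mmd_per_layer) + gamma * entropy_loss(probs_t)

uniform = np.full((4, 10), 0.1)          # maximally uncertain pseudo labels
confident = np.eye(10)[[0, 1, 2, 3]]     # one-hot posteriors: zero entropy
```

Uncertain pseudo labels contribute entropy ln(10) ≈ 2.30 per sample, while confident ones contribute nothing, so minimizing the third term pushes the classifier toward low-entropy (confident) decisions on the target domain.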
The learning rate is set to 0.002, the learning-rate decay rate to 0.99, the dropout rate to 0.2, and the batch size to 150. In the experiments, Rayleigh-distributed sea clutter of different signal-to-clutter ratios (SCR = 15 dB, 20 dB, 25 dB, 30 dB) is added to $D_t$, after which $D_t$ is preprocessed; a model is trained for the HRRP data under each SCR background. During training $D_s$ and $D_t$ share the model's parameters; $D_t$ is added so that, through transfer learning, the model adaptively learns the common features of $D_s$ and $D_t$, improving target recognition against a clutter-containing background. The Gaussian-kernel settings of the MK-MMD in the experiment are: 10⁻⁶, 10⁻⁵, 10⁻⁴, 10⁻³, 10⁻², 10⁻¹, 1, 5, 10, 15, 20, 25, 30, 35, 100, 10³, 10⁴, 10⁵ and 10⁶.
(3) Experimental verification
In the experiment, the simulated ship HRRP data are divided into source domain data $D_s$ and target domain data $D_t$ in the ratio 4:6. A batch of data is randomly selected from each to form the training set and the test set, as shown in Table 5. HRRP data under different signal-to-clutter (signal-to-noise) ratios are constructed by adding sea clutter to $D_t$; the $D_t$ training set has no labels.
Table 5. Data set partitioning (frames)

Data set | Training set | Test set | Total number of frames
D_s      | 20000        | 40000    | 128000
D_t      | 20000        | 40000    | 192000
The training pseudocode is as follows:

Network model optimization
Input: preprocessed D_s training set (with label information) and D_t training set (without label information)
Output: model parameters (weights, bias terms); accuracy on the D_s and D_t test sets
Begin:
  1. According to the set batch size, feed D_s and D_t into the model simultaneously
  Repeat:
    2. Compute the MK-MMD distances of D_s and D_t at the fc1 and fc2 layers and the cross-entropy loss of D_s at the output layer; pass the extracted D_t features through the residual module and compute the MK-MMD distance between the residual-layer output and the D_s output (in the unactivated state);
    3. Predict D_t with the model being trained, then compute the entropy loss of D_t at the classification layer;
    4. Sum all loss values and update the model parameters with SGD (stochastic gradient descent);
  Until the model converges
End: test the accuracy on the D_s and D_t test sets (accuracy here is the target recognition rate) and save the model
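The "sum the losses, update with SGD" step of the pseudocode can be illustrated on a toy linear softmax model. Only the dominant source cross-entropy term is optimized here, and the data sizes and learning rate are arbitrary stand-ins; the full method would add the MK-MMD and entropy gradients to the same update.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # row-max subtraction for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def sgd_step(W, xs, ys, lr=0.1):
    """One gradient-descent update on the source cross-entropy term.

    For a linear softmax model the gradient is X^T (p - y) / n.
    """
    p = softmax(xs @ W)
    return W - lr * (xs.T @ (p - ys)) / len(xs)

xs = rng.normal(size=(150, 50))              # one batch of fc3-like source features
ys = np.eye(10)[rng.integers(0, 10, 150)]    # one-hot source labels
W = np.zeros((50, 10))
losses = []
for _ in range(200):                          # 'Repeat ... until convergence'
    p = softmax(xs @ W)
    losses.append(-np.mean(np.sum(ys * np.log(p + 1e-12), axis=1)))
    W = sgd_step(W, xs, ys)
```

Starting from zero weights the loss begins at ln(10) ≈ 2.30 (a uniform 10-class guess) and decreases as the updates proceed, mirroring the classification-loss curves discussed below.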
Figs. 6 and 7 (in which Loss denotes the total loss, Class Loss the classification loss, MMD Loss the MK-MMD loss, Entropy Loss the entropy loss, and Epoch the number of iterations) show the loss curves when the network model is trained with the different methods. The traditional method transfers the model directly, without distribution adaptation, during training. As Fig. 6 shows, with the traditional method the MMD loss tends to increase during training: the added sea clutter changes the original distribution of the data, and a model trained on source domain data becomes less able to extract effective features from the corrupted data. The entropy loss also tends to decrease with the traditional method, but only slightly: continued training gives the model some ability to recognize the corrupted data, but the effect is poor. Only the classification loss in Fig. 6 decreases markedly, since only the cross-entropy classification loss is active in the traditional method. As Fig. 7 shows, with the RTN method the MMD loss is small, only about 0.25, roughly 1 lower than with the traditional method, indicating that under the MK-MMD loss constraint the extracted feature distributions differ less. Comparing the entropy-loss curves in Figs. 6 and 7, the entropy loss also decreases as the iterations increase. Comparing the classification-loss curves, the classification loss drops faster with the traditional method.
Table 6 gives the recognition rates measured for the two models. Under the same clutter intensity, the RTN method (one-dimensional image target recognition with the improved RTN model) improves the recognition rate by about 10% over the traditional method. The improvement is most pronounced at SCR = 25 dB and 30 dB, where the recognition rate rises by about 15%.
TABLE 6 Recognition rates (%) of the models at different signal-to-noise ratios
The application is illustrated below with reference to an application example.
The high-resolution one-dimensional image target identification method based on RTN specifically comprises the following steps:
Step 1: establishing an echo signal model; generating target echo data according to the echo signal model, preprocessing the target echo data, and generating source domain data and target domain data; taking part of the source domain data together with part of the target domain data as the training set, and the remaining source domain data and target domain data as the test set. The source domain data carry label information, while the target domain data carry none. The target domain data include sea clutter following a Rayleigh distribution.
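A minimal numpy sketch of how such clutter-corrupted target-domain profiles could be generated (the function name and the SCR-based power scaling are illustrative assumptions; the patent does not give this procedure explicitly):

```python
import numpy as np

def add_sea_clutter(hrrp, scr_db, rng=None):
    """Corrupt a clean HRRP profile with Rayleigh-distributed sea clutter.

    hrrp   : 1-D array, clean high-resolution range profile (source domain)
    scr_db : signal-to-clutter ratio in dB controlling the clutter power
    """
    rng = np.random.default_rng(rng)
    signal_power = np.mean(hrrp ** 2)
    clutter_power = signal_power / (10.0 ** (scr_db / 10.0))
    # For X ~ Rayleigh(sigma), E[X^2] = 2*sigma^2; choose sigma so the
    # clutter's mean-square amplitude matches the desired clutter power.
    sigma = np.sqrt(clutter_power / 2.0)
    return hrrp + rng.rayleigh(scale=sigma, size=hrrp.shape)
```

Applied at several SCR values (e.g. 25 dB, 30 dB), this yields the unlabelled target-domain set described above.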
Step 2: constructing an improved RTN model based on the one-dimensional CNN model. The improved RTN model comprises an input layer, a convolution layer, fully connected layers, residual layers, a classification layer and an output layer. The input layer and the convolution layer are realized on the basis of the one-dimensional CNN model; the input layer, the convolution layer and the fully connected layers are connected in series, and the convolution result obtained by the convolution layer is input into the fully connected layers. The fully connected layers comprise fully connected layer 1, fully connected layer 2 and fully connected layer 3, denoted fc1, fc2 and fc3 respectively; the residual layers comprise residual layer 1 and residual layer 2, denoted fc4 and fc5 respectively.
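A rough PyTorch rendering of such a structure is sketched below. All layer widths, kernel sizes and the number of classes are assumptions for illustration (the patent does not specify them), and the residual layers fc4/fc5 are read, as in residual transfer networks generally, as a small learned perturbation added on top of the classifier output:

```python
import torch
import torch.nn as nn

class ImprovedRTN(nn.Module):
    """Illustrative sketch of the improved RTN of step 2 (sizes assumed)."""
    def __init__(self, n_range_cells=256, n_classes=3):
        super().__init__()
        # input layer + convolution layer, realised as a 1-D CNN
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # fully connected layers fc1, fc2, fc3
        self.fc1 = nn.Linear(16 * (n_range_cells // 2), 256)
        self.fc2 = nn.Linear(256, 128)
        self.fc3 = nn.Linear(128, n_classes)
        # residual layers fc4, fc5
        self.fc4 = nn.Linear(n_classes, n_classes)
        self.fc5 = nn.Linear(n_classes, n_classes)

    def forward(self, x):
        h = self.conv(x).flatten(1)
        h = torch.relu(self.fc1(h))
        h = torch.relu(self.fc2(h))
        f_t = self.fc3(h)                       # shared classifier output
        # residual connection: source classifier = target classifier + delta
        f_s = f_t + self.fc5(torch.relu(self.fc4(f_t)))
        return f_s, f_t
```

Features taken at fc1, fc2 and the (un-activated) output layer are what the MK-MMD term of step 3 compares between domains.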
Step 3: training the improved RTN model with the training set to obtain the final improved RTN model. In step 3, the parameters of the improved RTN model are updated by SGD until the model converges, yielding the final improved RTN model. Specifically: the cross entropy loss of the source domain data is calculated at the output layer with a cross entropy loss function; the MK-MMD distance between the residual-layer output of the target domain data after passing through the residual layer and the source domain data output is calculated; the entropy loss of the target domain data is calculated at the classification layer with an entropy loss function; the results are summed to form the total loss value, and the parameters of the improved RTN model are updated by SGD.
The calculation formula of the cross entropy loss of the source domain data at the output layer is as follows:
$L_{cls} = \frac{1}{n_s}\sum_{i=1}^{n_s} J\left(y_i^s, \hat{y}_i^s\right)$

wherein $n_s$ represents the total number of source domain data $x_i^s$, $\hat{y}_i^s$ is the result vector predicted by the output layer for source domain data $x_i^s$, $y_i^s$ is the true-value vector corresponding to source domain data $x_i^s$, and $J(\cdot)$ represents the cross entropy loss function.
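A hedged numpy rendering of this cross entropy term, vectorized over the labelled source samples (the epsilon guard is an implementation detail, not part of the formula):

```python
import numpy as np

def cross_entropy_loss(y_true, y_pred, eps=1e-12):
    """Mean cross entropy over the n_s labelled source-domain samples.

    y_true : (n_s, C) one-hot true-value vectors
    y_pred : (n_s, C) output-layer probability vectors
    eps    : guard against log(0)
    """
    return -np.mean(np.sum(y_true * np.log(y_pred + eps), axis=1))
```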
The calculation formula of MK-MMD distance between the residual layer output and the source domain data output after the target domain data passes through the residual layer is as follows:
$L_{mmd} = \sum_{l \in L} \gamma_l \, d_k^2\left(D_s^l, D_t^l\right)$

wherein $L = \{\mathrm{fc1}, \mathrm{fc2}, \text{output layer}\}$, the data of the output layer being in an inactivated state; $\gamma_l$ is the importance weight of the $l$-th layer; $n^l$ represents the number of data input to the $l$-th layer, and $k_n$ is the Gaussian kernel corresponding to the $n$-th data; $d_k^2(D_s^l, D_t^l)$ represents the MK-MMD distance of the $l$-th layer.
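The per-layer MK-MMD distance can be estimated, for example, with a sum of Gaussian kernels. The sketch below uses a biased estimator and illustrative bandwidths; neither the kernel count nor the bandwidths are specified in the text:

```python
import numpy as np

def mk_mmd(xs, xt, sigmas=(1.0, 2.0, 4.0)):
    """Biased estimate of the squared MK-MMD between source features xs and
    target features xt of one layer, each of shape (n, d). The kernel is a
    sum of Gaussian kernels, one per bandwidth in `sigmas` (illustrative)."""
    def gram(a, b):
        # pairwise squared Euclidean distances, then the multi-kernel sum
        d2 = (np.sum(a ** 2, 1)[:, None] + np.sum(b ** 2, 1)[None, :]
              - 2.0 * a @ b.T)
        return sum(np.exp(-d2 / (2.0 * s ** 2)) for s in sigmas)
    k_ss, k_tt, k_st = gram(xs, xs), gram(xt, xt), gram(xs, xt)
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()

# The layer-weighted sum of step 3 would then be, schematically:
# L_mmd = sum(gamma[l] * mk_mmd(feat_s[l], feat_t[l]) for l in feat_s)
```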
The calculation formula of the entropy loss of the target domain data in the classification layer is as follows:
$L_{ent} = \frac{1}{n_t}\sum_{j=1}^{n_t} H\left(f\left(x_j^t\right)\right)$

wherein $n_t$ represents the total number of target domain data $x_j^t$, the source domain data and the target domain data being in one-to-one correspondence, and $H$ represents the entropy loss function.
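A corresponding numpy sketch of the entropy term over the unlabelled target-domain classification outputs (the epsilon guard is again an implementation detail):

```python
import numpy as np

def entropy_loss(p, eps=1e-12):
    """Mean Shannon entropy of the classification-layer outputs p, of shape
    (n_t, C), on the unlabelled target-domain data. Minimising this term
    pushes the classifier toward confident (low-entropy) target predictions."""
    return -np.mean(np.sum(p * np.log(p + eps), axis=1))
```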
The calculation formula of the total loss value is as follows:
$L = L_{cls} + \lambda L_{mmd} + \nu L_{ent}$

wherein $L$ is the total loss value, $\lambda$ is the penalty coefficient of the $d_k^2$ distance, and $\nu$ is the penalty coefficient of the entropy loss of the target domain data at the classification layer.
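Putting the three terms together, a minimal sketch of the total loss that SGD minimises (the default penalty coefficients are placeholders, not values from the patent):

```python
def total_loss(l_cls, l_mmd, l_ent, lam=0.3, nu=0.1):
    """Weighted sum of the three loss terms of step 3.

    lam : penalty coefficient of the MK-MMD distance (illustrative default)
    nu  : penalty coefficient of the target-domain entropy loss (illustrative)
    """
    return l_cls + lam * l_mmd + nu * l_ent
```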
Step 4: inputting the test set into the final improved RTN model to obtain the final target recognition result.
In summary, the application provides an RTN-based high-resolution one-dimensional image target recognition method that constructs an HRRP target recognition model based on a residual transfer network. Joint distribution adaptation is innovatively applied to this residual-transfer-network-based target recognition model, improving the generalization capability of the CNN-based radar target recognition model, strengthening its recognition capability against clutter and noise interference, and raising the recognition rate by 20%-30% compared with the traditional method.
The foregoing description of the preferred embodiment of the application is not intended to limit the application to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the application.

Claims (8)

1. The RTN-based high-resolution one-dimensional image target recognition method is characterized by comprising the following steps:
step 1: establishing an echo signal model; generating target echo data according to the echo signal model, preprocessing the target echo data, and generating source domain data and target domain data; taking part of the source domain data together with part of the target domain data as a training set, and the remaining source domain data and target domain data as a test set; the source domain data having label information, and the target domain data having no label information;
step 2: constructing an improved RTN model based on the one-dimensional CNN model;
step 3: training the improved RTN model by using a training set to obtain a final improved RTN model;
step 4: inputting the test set into the final improved RTN model to obtain a final target recognition result.
2. The RTN-based high resolution one-dimensional image target recognition method according to claim 1, wherein the modified RTN model in step 2 includes an input layer, a convolution layer, a full connection layer, a residual layer, a classification layer, and an output layer; the input layer and the convolution layer are realized based on a one-dimensional CNN model, the input layer, the convolution layer and the full-connection layer are connected in series, and a convolution result obtained by the convolution layer is input into the full-connection layer;
in the step 3, the parameters of the improved RTN model are updated through SGD until the model converges, and the final improved RTN model is obtained.
3. The RTN-based high resolution one-dimensional image target recognition method according to claim 2, wherein the step 3 specifically includes:
calculating the cross entropy loss of the source domain data at the output layer by using a cross entropy loss function; calculating the MK-MMD distance between the residual-layer output of the target domain data after passing through the residual layer and the source domain data output; calculating the entropy loss of the target domain data at the classification layer by using an entropy loss function; summing the calculation results to form a total loss value, and updating the parameters of the improved RTN model by SGD.
4. The RTN-based high resolution one-dimensional image target recognition method according to claim 3, wherein a calculation formula of cross entropy loss of source domain data at an output layer is:
$L_{cls} = \frac{1}{n_s}\sum_{i=1}^{n_s} J\left(y_i^s, \hat{y}_i^s\right)$

wherein $n_s$ represents the total number of source domain data $x_i^s$, $\hat{y}_i^s$ is the result vector predicted by the output layer for source domain data $x_i^s$, $y_i^s$ is the true-value vector corresponding to source domain data $x_i^s$, and $J(\cdot)$ represents the cross entropy loss function.
5. The RTN-based high resolution one-dimensional image target recognition method according to claim 4, wherein a calculation formula of MK-MMD distance between a residual layer output of target domain data after passing through the residual layer and a source domain data output is:
$L_{mmd} = \sum_{l \in L} \gamma_l \, d_k^2\left(D_s^l, D_t^l\right)$

wherein $L = \{\mathrm{fc1}, \mathrm{fc2}, \text{output layer}\}$, fc1 and fc2 respectively represent fully connected layer 1 and fully connected layer 2, and the data of the output layer is in an inactivated state; $\gamma_l$ is the importance weight of the $l$-th layer; $n^l$ represents the number of data input to the $l$-th layer; $k_n$ is the Gaussian kernel corresponding to the $n$-th data; $x_s^l$ is the $l$-th-layer feature vector of the source domain data, and $x_t^l$ is the $l$-th-layer feature vector of the target domain data; $d_k^2(D_s^l, D_t^l)$ represents the MK-MMD distance of the $l$-th layer.
6. The RTN-based high resolution one-dimensional image target recognition method according to claim 5, wherein a calculation formula of entropy loss of the target domain data at the classification layer is:
$L_{ent} = \frac{1}{n_t}\sum_{j=1}^{n_t} H\left(f\left(x_j^t\right)\right)$

wherein $n_t$ represents the total number of target domain data $x_j^t$, the source domain data and the target domain data being in one-to-one correspondence, and $H$ represents the entropy loss function.
7. The RTN-based high resolution one-dimensional image target recognition method according to claim 6, wherein a calculation formula of a total loss value is:
$L = L_{cls} + \lambda L_{mmd} + \nu L_{ent}$

wherein $L$ is the total loss value, $\lambda$ is the penalty coefficient of the $d_k^2$ distance, and $\nu$ is the penalty coefficient of the entropy loss of the target domain data at the classification layer.
8. The RTN-based high resolution one-dimensional image target recognition method according to claim 7, wherein the target domain data includes sea clutter following rayleigh distribution.
CN202311178885.XA 2023-09-13 2023-09-13 RTN-based high-resolution one-dimensional image target recognition method Active CN116908808B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311178885.XA CN116908808B (en) 2023-09-13 2023-09-13 RTN-based high-resolution one-dimensional image target recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311178885.XA CN116908808B (en) 2023-09-13 2023-09-13 RTN-based high-resolution one-dimensional image target recognition method

Publications (2)

Publication Number Publication Date
CN116908808A true CN116908808A (en) 2023-10-20
CN116908808B CN116908808B (en) 2023-12-01

Family

ID=88355098

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311178885.XA Active CN116908808B (en) 2023-09-13 2023-09-13 RTN-based high-resolution one-dimensional image target recognition method

Country Status (1)

Country Link
CN (1) CN116908808B (en)

Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9625632D0 (en) * 1996-12-10 1997-03-12 Marconi Gec Ltd Doppler radar
WO1999031525A1 (en) * 1997-12-15 1999-06-24 Milkovich Systems Engineering Signal processing architecture which improves sonar and pulse doppler radar performance and tracking capability
US20050134500A1 (en) * 2003-12-22 2005-06-23 Pillai S. U. Target identification from a pool of targets using a new adaptive transmitter-receiver design
US20120127021A1 (en) * 2010-04-27 2012-05-24 Tc License Ltd. System and method for microwave ranging to a target in presence of clutter and multi-path effects
US20140333475A1 (en) * 2013-05-08 2014-11-13 Eigenor Oy Method and arrangement for removing ground clutter
CN110824450A (en) * 2019-10-15 2020-02-21 中国人民解放军国防科技大学 Radar target HRRP robust identification method in noise environment
EP3620983A1 (en) * 2018-09-05 2020-03-11 Sartorius Stedim Data Analytics AB Computer-implemented method, computer program product and system for data analysis
CN111220958A (en) * 2019-12-10 2020-06-02 西安宁远电子电工技术有限公司 Radar target Doppler image classification and identification method based on one-dimensional convolutional neural network
AU2020104006A4 (en) * 2020-12-10 2021-02-18 Naval Aviation University Radar target recognition method based on feature pyramid lightweight convolutional neural network
CN112699966A (en) * 2021-01-14 2021-04-23 中国人民解放军海军航空大学 Radar HRRP small sample target recognition pre-training and fine-tuning method based on deep migration learning
CN112882010A (en) * 2021-01-12 2021-06-01 西安电子科技大学 High-resolution range profile target identification method based on signal-to-noise ratio field knowledge network
CN112966667A (en) * 2021-04-06 2021-06-15 中国人民解放军海军航空大学 Method for identifying one-dimensional distance image noise reduction convolution neural network of sea surface target
US20210201010A1 (en) * 2019-12-31 2021-07-01 Wuhan University Pedestrian re-identification method based on spatio-temporal joint model of residual attention mechanism and device thereof
CN113095475A (en) * 2021-03-02 2021-07-09 华为技术有限公司 Neural network training method, image processing method and related equipment
CN113240081A (en) * 2021-05-06 2021-08-10 西安电子科技大学 High-resolution range profile target robust identification method aiming at radar carrier frequency transformation
IT202000004573A1 (en) * 2020-03-04 2021-09-04 Nuovo Pignone Tecnologie Srl Hybrid risk model for the optimization of maintenance and system for the execution of this method.
WO2021205743A1 (en) * 2020-04-08 2021-10-14 Mitsubishi Electric Corporation Radar detection of moving object with waveform separation residual
CN113625227A (en) * 2021-07-05 2021-11-09 西安电子科技大学 Radar high-resolution range profile target identification method based on attention transformation network
CN113887661A (en) * 2021-10-25 2022-01-04 济南大学 Image set classification method and system based on representation learning reconstruction residual analysis
EP3992661A1 (en) * 2020-10-30 2022-05-04 Infineon Technologies AG Radar-based target set generation
KR20220091713A (en) * 2020-12-24 2022-07-01 포항공과대학교 산학협력단 Radar-based detection system and method for domain adaptation
US20230093385A1 (en) * 2021-09-17 2023-03-23 Microsoft Technology Licensing, Llc Visibility-based attribute detection
US20230108140A1 (en) * 2021-10-05 2023-04-06 Infineon Technologies Ag Radar-based motion classification using one or more time series
KR20230097525A (en) * 2021-12-24 2023-07-03 성균관대학교산학협력단 Deep learning based keypoint detection system using radar and metasurface

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
DONG, YB (DONG, YINGBO) et al.: "Fine-grained ship classification based on deep residual learning for high-resolution SAR images", REMOTE SENSING LETTERS, vol. 10, no. 11, pages 1095 - 1104 *
LEE, CHUL MIN et al.: "DNN-based Residual Echo Suppression", 16TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2015), pages 1775 - 1779 *
LUKŠIČ, PRIMOŽ et al.: "Distance-residual subgraphs", DISCRETE MATHEMATICS, vol. 310, no. 12, pages 1653 - 1660, XP027006874 *
卢建, 索莲, 陈帅: "Sea-surface small target detection based on isolation forest", ELECTRONIC TECHNOLOGY & SOFTWARE ENGINEERING, no. 16, pages 60 - 64 *
杨甜甜, 郭大波, 孙佳: "Semantic segmentation of remote sensing images based on multi-residual networks", JOURNAL OF TEST AND MEASUREMENT TECHNOLOGY, vol. 35, no. 3, pages 245 - 252 *
樊帅昌, 易晓梅, 李剑 et al.: "Poisonous mushroom image recognition based on deep residual networks and transfer learning", CHINESE JOURNAL OF SENSORS AND ACTUATORS, vol. 33, no. 1, pages 74 - 83 *

Also Published As

Publication number Publication date
CN116908808B (en) 2023-12-01

Similar Documents

Publication Publication Date Title
CN109086700B (en) Radar one-dimensional range profile target identification method based on deep convolutional neural network
CN111209497B (en) DGA domain name detection method based on GAN and Char-CNN
CN109686108B (en) Vehicle target track tracking system and vehicle track tracking method
CN110365612A (en) A kind of deep learning Beam Domain channel estimation methods based on approximate Message Passing Algorithm
CN109188414A (en) A kind of gesture motion detection method based on millimetre-wave radar
CN113408743A (en) Federal model generation method and device, electronic equipment and storage medium
CN112949387B (en) Intelligent anti-interference target detection method based on transfer learning
CN112906828A (en) Image classification method based on time domain coding and impulse neural network
CN111880158A (en) Radar target detection method and system based on convolutional neural network sequence classification
CN112215292A (en) Image countermeasure sample generation device and method based on mobility
CN113988357B (en) Advanced learning-based high-rise building wind induced response prediction method and device
CN112949820A (en) Cognitive anti-interference target detection method based on generation of countermeasure network
CN116112193B (en) Lightweight vehicle-mounted network intrusion detection method based on deep learning
Wang et al. Knowledge transfer for structural damage detection through re-weighted adversarial domain adaptation
CN112468230A (en) Wireless ultraviolet light scattering channel estimation method based on deep learning
Ye et al. Recognition algorithm of emitter signals based on PCA+ CNN
CN116908808B (en) RTN-based high-resolution one-dimensional image target recognition method
KR20200038072A (en) Entropy-based neural networks partial learning method and system
CN116433909A (en) Similarity weighted multi-teacher network model-based semi-supervised image semantic segmentation method
CN115664804A (en) LDoS attack detection method based on radial basis function neural network
CN115620100A (en) Active learning-based neural network black box attack method
CN115422977A (en) Radar radiation source signal identification method based on CNN-BLS network
CN114814776A (en) PD radar target detection method based on graph attention network and transfer learning
CN114139655A (en) Distillation type competitive learning target classification system and method
CN111931412A (en) Underwater target noise LOFAR spectrogram simulation method based on generative countermeasure network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant