CN114137518A - Radar high-resolution range profile open set identification method and device - Google Patents
- Publication number: CN114137518A
- Application number: CN202111199838.4A
- Authority: CN (China)
- Prior art keywords: neural network, convolutional neural network, sample set, layer, resolution range
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/89—Radar or analogous systems specially adapted for specific applications for mapping or imaging
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention discloses a method and a device for open-set identification of radar high-resolution range profiles based on a convolutional neural network. The method comprises the following steps: acquiring radar high-resolution range profiles and establishing a training sample set and a test sample set; preprocessing the data in the training sample set and the test sample set to obtain a preprocessed training sample set and a preprocessed test sample set; constructing a convolutional neural network model; training the convolutional neural network model with the preprocessed training sample set to obtain a trained convolutional neural network; and performing open-set identification on the preprocessed test sample set with the trained convolutional neural network to obtain the convolutional neural network-based radar high-resolution range profile open-set identification result. The method can identify and classify targets of known classes in the library while rejecting targets of unknown classes outside the library, which improves target identification accuracy and, in turn, the automation and intelligence level of the radar.
Description
Technical Field
The invention belongs to the technical field of radars, and particularly relates to a high-resolution range profile open set identification method and device for a radar based on a convolutional neural network.
Background
The range resolution of a radar is proportional to the received pulse width after matched filtering, and the range cell length of the radar transmit signal satisfies ΔR = cτ/2 = c/(2B), where ΔR is the range cell length of the radar transmit signal, c is the speed of light, τ is the matched received pulse width, and B is the bandwidth of the radar transmit signal. A large transmit signal bandwidth therefore provides high range resolution (HRR). In practice, radar range resolution is high or low only relative to the observed target. Let L be the extent of the observed target along the radar line of sight. If L << ΔR, the radar echo width is approximately the same as the transmitted pulse width (the received pulse after matched processing); such an echo is generally called a "point"-target echo, and the radar is a low-resolution radar. If ΔR << L, the target echo becomes a "one-dimensional range profile" spread over range according to the structure of the target, and the radar is a high-resolution radar (here << means "much less than").
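The relation ΔR = cτ/2 = c/(2B) above can be checked numerically. A minimal sketch with an illustrative bandwidth (the patent gives no numeric values):

```python
# Range-cell length after matched filtering: dR = c*tau/2 = c/(2B).
# The bandwidth values below are illustrative only.
C_LIGHT = 3.0e8  # speed of light, m/s

def range_cell_length(bandwidth_hz: float) -> float:
    """Range-cell length dR = c / (2B) of the transmit signal."""
    return C_LIGHT / (2.0 * bandwidth_hz)

# A 500 MHz transmit bandwidth gives a 0.3 m range cell, so an
# aircraft-sized target (L >> dR) spans many cells: the HRRP regime.
print(range_cell_length(500e6))  # 0.3
```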
The high-resolution radar transmits wideband coherent signals (linear frequency modulation or stepped-frequency signals) and receives the echo data backscattered by the target from the transmitted electromagnetic waves. Echo characteristics are generally calculated with a simplified scattering-point model, i.e., the first-order Born approximation, which ignores multiple scattering. The fluctuations and peaks in a high-resolution radar echo reflect the distribution of the radar cross section (RCS) of the scatterers on the target (such as the nose, wings, tail rudder, air inlet and engine) along the radar line of sight (RLOS) at a given radar aspect angle, and reflect the relative radial geometry of the scattering points; such an echo is commonly called a High Resolution Range Profile (HRRP). An HRRP sample therefore contains important structural features of the target and is valuable for target identification and classification.
Traditional target identification methods for high-resolution range profile data mainly use a support vector machine to classify targets directly, or use a feature-extraction method based on a restricted Boltzmann machine that first projects the data into a high-dimensional space and then classifies it with a classifier. However, these methods exploit only the time-domain characteristics of the signal, and their target identification accuracy is not high.
In recent years, target identification methods for radar high-resolution range profile data have mainly addressed closed-set identification, which requires that the data classes in the test sample set be consistent with those in the training sample set. In practice, however, the radar captures not only high-resolution range profiles of in-library targets but also high-resolution range profiles of unknown-class targets outside the library. In this case, existing closed-set identification algorithms cannot reject the out-of-library unknown-class data; instead, they misjudge it as some in-library class, which greatly reduces the radar's target identification accuracy.
Some researchers have therefore begun to study open-set identification of radar high-resolution range profiles. For example, on the basis of support vector data description (SVDD), Chaijing et al. proposed a multi-kernel SVDD model to describe the multimodal distribution of HRRP data in a high-dimensional feature space more flexibly, improving the identification and rejection performance of radar HRRP. Zhankou et al. proposed a multi-classifier fusion algorithm based on a maximum correlation classifier (MCC), a support vector machine (SVM) and a relevance vector machine (RVM) to implement the rejection and identification functions of radar HRRP. However, both algorithms rely on a kernel function of a specific form to extract features, which limits the model's ability to extract sufficiently separable features, and thereby affects the accuracy of target recognition and the intelligence level of the radar.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a method and a device for identifying a radar high-resolution range profile open set based on a convolutional neural network. The technical problem to be solved by the invention is realized by the following technical scheme:
a radar high-resolution range profile open set identification method based on a convolutional neural network comprises the following steps:
acquiring a radar high-resolution range profile and establishing a training sample set and a test sample set; the training sample set comprises a plurality of target radar high-resolution range profiles of known classes, and the test sample set comprises a plurality of target radar high-resolution range profiles of known classes in a library and unknown classes out of the library;
preprocessing the data in the training sample set and the test sample set to obtain a preprocessed training sample set and a preprocessed test sample set;
constructing a convolutional neural network model;
training the convolutional neural network model by using the preprocessed training sample set to obtain a trained convolutional neural network;
and performing open-set identification on the preprocessed test sample set with the trained convolutional neural network to obtain the convolutional neural network-based radar high-resolution range profile open-set identification result.
In one embodiment of the present invention, preprocessing the training sample set and the testing sample set includes:
and sequentially carrying out gravity center alignment and normalization processing on the data in the training sample set and the test sample set to obtain a preprocessed training sample set and a preprocessed test sample set.
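The patent does not give explicit formulas for the center-of-gravity alignment and normalization, so the following is a common HRRP preprocessing sketch under those names (circular shift to the center-of-gravity cell, then energy normalization), not the patent's exact procedure:

```python
import numpy as np

def center_of_gravity_align(x: np.ndarray) -> np.ndarray:
    """Circularly shift an HRRP so that its amplitude centre of gravity
    lands on the middle range cell (removes translation sensitivity)."""
    d = x.shape[-1]
    cog = np.sum(np.arange(d) * np.abs(x)) / np.sum(np.abs(x))
    return np.roll(x, d // 2 - int(round(cog)))

def energy_normalize(x: np.ndarray) -> np.ndarray:
    """Scale an HRRP to unit energy (removes amplitude sensitivity)."""
    return x / np.linalg.norm(x)

# A toy 64-cell profile whose energy sits near cell 10 is re-centred to cell 32:
hrrp = np.zeros(64)
hrrp[8:13] = [1.0, 3.0, 5.0, 3.0, 1.0]
aligned = energy_normalize(center_of_gravity_align(hrrp))
print(int(np.argmax(aligned)))  # 32
```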
In one embodiment of the invention, constructing the convolutional neural network model comprises:
constructing a convolutional neural network model with a four-layer structure; the four-layer structure comprises a first layer of convolution layer, a second layer of convolution layer, a third layer of convolution layer and a fourth layer of full-connection layer, and each convolution layer is set to have the same convolution step length; each convolution layer comprises a plurality of convolution kernels, and the sizes of the convolution kernels are the same.
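The repeated building block of the four-layer structure — a convolution with several equal-sized kernels at a fixed stride, followed by downsampling — can be sketched for a single-channel input as below. All sizes are illustrative assumptions (the patent leaves C, w, m symbolic), and layers two and three would repeat this stage per channel:

```python
import numpy as np

rng = np.random.default_rng(0)
D, C, w, m = 256, 8, 5, 2  # range cells, kernels, kernel width, pool size (illustrative)

def conv_pool_stage(x: np.ndarray, kernels: np.ndarray, pool: int) -> np.ndarray:
    """One stage of the described pattern: valid 1-D convolution of a
    single-channel input with C equal-sized kernels (stride 1), then
    non-overlapping max pooling as the 'downsampling'."""
    y = np.stack([np.convolve(x, k[::-1], mode="valid") for k in kernels])
    d = (y.shape[-1] // pool) * pool
    return y[:, :d].reshape(len(kernels), -1, pool).max(axis=-1)

x = rng.standard_normal(D)       # one preprocessed HRRP
K = rng.standard_normal((C, w))  # C kernels of identical size, as specified
feat = conv_pool_stage(x, K, m)
print(feat.shape)                # (8, 126): (D - w + 1) // m
```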
In one embodiment of the present invention, the method further comprises constructing a loss function of the convolutional neural network, which is expressed as:
where Θ(x) is the output of the convolutional neural network, O_i (i = 1, …, N) are N prototypes randomly initialized from a Gaussian distribution, d(Θ(x), O_k) is the distance from Θ(x) to O_k, the loss also contains a hyperparameter, and r_i = d(O_i, O_c) is the distance from each prototype O_i to the center O_c.
In an embodiment of the present invention, training the convolutional neural network model using the preprocessed training sample set to obtain a trained convolutional neural network, including:
randomly dividing the preprocessed training sample set into q batches, wherein each batch is an n × D dimensional data matrix with n = floor(P/q), where floor(·) represents rounding down and P represents the number of high-resolution range profiles in the training sample set;
sequentially inputting the data of each batch into a convolutional neural network for processing to obtain an output result of the convolutional neural network;
and calculating the value of the loss function according to the output result of the convolutional neural network, and updating the parameter values of the convolutional neural network by the stochastic gradient method until the network converges, to obtain the trained convolutional neural network.
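The batching in the first of these steps can be sketched as follows; dropping any remainder samples so every batch has exactly n = floor(P/q) rows is an assumption, since the patent does not say how a non-divisible P is handled:

```python
import numpy as np

def make_batches(train: np.ndarray, q: int, rng: np.random.Generator) -> np.ndarray:
    """Randomly split a P x D training matrix into q batches of
    n = floor(P / q) samples each (any remainder is dropped)."""
    p, d = train.shape
    n = p // q                       # floor(P / q)
    idx = rng.permutation(p)[: n * q]
    return train[idx].reshape(q, n, d)

rng = np.random.default_rng(0)
train = rng.standard_normal((905, 128))  # P = 905, D = 128 (illustrative)
batches = make_batches(train, q=10, rng=rng)
print(batches.shape)                     # (10, 90, 128)
```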
In an embodiment of the present invention, sequentially inputting each batch of data into the convolutional neural network for processing, to obtain the output result of the convolutional neural network, includes:
performing convolution and downsampling processing on current input data by using the first layer of convolution layer to obtain a first characteristic diagram;
performing convolution and downsampling processing on the first feature map by using a second layer of convolution layer to obtain a second feature map;
carrying out convolution and downsampling processing on the second feature map by using a third layer of convolution layer to obtain a third feature map;
carrying out nonlinear transformation processing on the third characteristic diagram by utilizing a fourth full-connection layer to obtain a processing result of current data;
and repeating the steps until all the input data are processed to obtain the output result of the convolutional neural network.
In an embodiment of the present invention, performing open-set identification on the preprocessed test sample set with the trained convolutional neural network, to obtain the convolutional neural network-based radar high-resolution range profile open-set identification result, includes:
the probability that the sample x to be detected belongs to class k is predicted by the convolutional neural network; its expression is as follows:
when the predicted probability of the sample to be detected is smaller than a preset threshold value, the sample to be detected is judged to be of an unknown class outside the library; otherwise, it is judged to be of a known class in the library, and the specific class to which it belongs is further obtained.
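The threshold decision rule described here can be sketched directly; the threshold value of 0.5 is an illustrative assumption, as the patent only specifies a "preset threshold":

```python
import numpy as np

def open_set_decide(probs: np.ndarray, threshold: float):
    """If the predicted probability of the best in-library class falls
    below the threshold, reject the sample as an out-of-library unknown;
    otherwise return the in-library class index."""
    k = int(np.argmax(probs))
    if probs[k] < threshold:
        return "unknown", None
    return "known", k

print(open_set_decide(np.array([0.1, 0.8, 0.1]), 0.5))  # ('known', 1)
print(open_set_decide(np.array([0.4, 0.3, 0.3]), 0.5))  # ('unknown', None)
```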
Another embodiment of the present invention provides a convolutional neural network-based radar high-resolution range profile open set identification apparatus, including:
the data acquisition module is used for acquiring a radar high-resolution range profile and establishing a training sample set and a test sample set; the training sample set comprises a plurality of target radar high-resolution range profiles of known classes, and the test sample set comprises a plurality of target radar high-resolution range profiles of known classes in a library and unknown classes out of the library;
the preprocessing module is used for preprocessing the data in the training sample set and the test sample set to obtain a preprocessed training sample set and a preprocessed test sample set;
the model construction module is used for constructing a convolutional neural network model;
the training module is used for training the convolutional neural network model by utilizing the preprocessed training sample set to obtain a trained convolutional neural network;
and the target identification module is used for performing open-set identification on the preprocessed test sample set with the trained convolutional neural network to obtain the convolutional neural network-based radar high-resolution range profile open-set identification result.
The invention has the beneficial effects that:
1. The radar high-resolution range profile open-set identification method provided by the invention uses a convolutional neural network to combine the primary features of each layer into higher-level features for identification, so the identification rate is significantly improved. The method can identify and classify known-class targets in the library while rejecting unknown-class targets outside the library, improving the target identification accuracy and, in turn, the automation and intelligence level of the radar;
2. The invention adopts a multilayer convolutional neural network structure and applies energy normalization and alignment preprocessing to the data, which can mine high-level features of the high-resolution range profile data and remove its amplitude, translation and attitude sensitivity; compared with traditional direct classification methods, it is more robust.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Drawings
Fig. 1 is a schematic flow chart of a convolutional neural network-based radar high-resolution range profile open-set identification method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a radar high-resolution range profile open set identification device based on a convolutional neural network according to an embodiment of the present invention;
fig. 3 is a simulation test result provided by the embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but the embodiments of the present invention are not limited thereto.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart of a method for identifying a radar high-resolution range profile based on a convolutional neural network according to an embodiment of the present invention, including the following steps:
step 1: acquiring a radar high-resolution range profile and establishing a training sample set and a test sample set; the training sample set comprises a plurality of target radar high-resolution range profiles of known classes, and the testing sample set comprises a plurality of target radar high-resolution range profiles of known classes in the database and unknown classes out of the database.
Firstly, P radar high-resolution range profile original data of N categories are obtained to serve as a training sample set, wherein N is larger than or equal to 3, and P is larger than or equal to 900;
then, Q radar high-resolution range profile original data of N training sample classes and L radar high-resolution range profile original data of M unknown classes are obtained to serve as a test sample set, wherein Q is larger than or equal to 900, M is larger than or equal to 1, and L is larger than or equal to 300.
Step 2: preprocessing the data in the training sample set and the test sample set to obtain a preprocessed training sample set and a preprocessed test sample set.
In this embodiment, the center of gravity alignment and normalization processing are sequentially performed on the data in the training sample set and the test sample set to obtain the training sample set and the test sample set after the preprocessing.
Specifically, the original data in the training sample set or the test sample set is recorded as x_0. First, the original data x_0 is center-of-gravity aligned to obtain the aligned data x'_0; then, the aligned data x'_0 is normalized to obtain the normalized data x, whose expression is as follows:
the preprocessed training sample set and the preprocessed testing sample set are P × D (Q + L) × D dimensional matrixes respectively, wherein D represents the total number of distance units contained in original data of the radar high-resolution range profile.
Step 3: constructing a convolutional neural network model.
In this embodiment, the convolutional neural network model has a four-layer structure comprising three convolutional layers and one fully connected layer, denoted the first convolutional layer, the second convolutional layer, the third convolutional layer and the fourth fully connected layer. All convolutional layers share the same convolution stride, and each convolutional layer contains several convolution kernels of the same size.
Specifically, for the first layer convolutional layer:
The first convolutional layer is set to contain C convolution kernels, denoted K, each of size 1 × w × 1, where w is the kernel window width of the first convolutional layer, 1 < w < D, and C is a positive integer greater than 0. The convolution stride of the first convolutional layer is set to L. The kernel window size of the downsampling of the first convolutional layer is set to m × m, where 1 < m < D, D is the total number of range cells in each class of high-resolution range profile data in a training sample, and m is a positive integer greater than 0. The stride of the downsampling of the first convolutional layer is set to I, with I equal in value to m.
In the activation function of the first convolutional layer, x represents the sample data after preprocessing, ⊛ represents the convolution operation, and b represents the all-ones bias of the first convolutional layer.
For the second layer of convolutional layers:
The second convolutional layer is set to contain C' convolution kernels, denoted K', of the same size as the kernels K of the first convolutional layer. The convolution stride of the second convolutional layer is denoted L', with w ≤ L' ≤ D − w and L' equal in value to the convolution stride L of the first convolutional layer. The kernel window size of the downsampling of the second convolutional layer is set to m' × m', where 1 < m' < D and m' is a positive integer greater than 0. The stride of the downsampling of the second layer is I', with I' equal in value to m'.
In the activation function of the second convolutional layer, the first feature map is the output of the first convolutional layer, ⊛ represents the convolution operation, and b' represents the all-ones bias of the second convolutional layer.
For the third layer of convolutional layers:
The third convolutional layer is set to contain C'' convolution kernels, denoted K'', of the same size as the kernel window of the second convolutional layer. The convolution stride of the third convolutional layer is set to L'', equal in value to the convolution stride L' of the second convolutional layer. The kernel window size of the downsampling of the third convolutional layer is set to m'' × m'', where 1 < m'' < D and m'' is a positive integer greater than 0. The stride of the downsampling of the third layer is I'', with I'' equal in value to m''.
In the activation function of the third convolutional layer, the second feature map is the output of the second convolutional layer, ⊛ represents the convolution operation, and b'' represents the all-ones bias of the third convolutional layer.
For the fourth fully connected layer:
Its randomly initialized weight matrix is a B × U dimensional matrix, where floor(·) represents rounding down, D represents the total number of range cells contained in each class of high-resolution range profile data in the training sample, B ≥ D, and B is a positive integer greater than 0. In its activation function, the third feature map is the output of the third convolutional layer, and the all-ones bias of the fourth fully connected layer is U × 1 dimensional.
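The fourth layer's computation can be sketched as below. The source does not reproduce the activation function itself, so the sigmoid used here is an assumption, and all sizes (C'' = 4 maps of 8 cells, U = 5) are illustrative:

```python
import numpy as np

def fully_connected(third_feature_map: np.ndarray, weights: np.ndarray,
                    bias: np.ndarray) -> np.ndarray:
    """Flatten the third feature map to a length-B vector, apply the
    B x U weight matrix plus all-ones bias, then a sigmoid nonlinearity
    (sigmoid is an assumption; the source elides the activation)."""
    v = third_feature_map.reshape(-1)  # length B
    z = weights.T @ v + bias           # U-dimensional output
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
feat3 = rng.standard_normal((4, 8))          # C'' = 4 maps of 8 cells -> B = 32
W = rng.standard_normal((32, 5))             # B x U with U = 5 (illustrative)
out = fully_connected(feat3, W, np.ones(5))  # all-ones bias, per the text
print(out.shape)                             # (5,)
```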
After the model of the convolutional neural network is constructed, constructing a loss function of the convolutional neural network, wherein the expression of the loss function is as follows:
where Θ(x) is the output of the convolutional neural network, O_i (i = 1, …, N) are N prototypes randomly initialized from a Gaussian distribution, d(Θ(x), O_k) is the distance from Θ(x) to O_k, the loss also contains a hyperparameter, and r_i = d(O_i, O_c) is the distance from each prototype O_i to the center O_c.
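The quantities the loss is built from — distances from the network output Θ(x) to Gaussian-initialised prototypes O_i — can be sketched as follows; the exact loss formula is not reproduced in the source, so only the distance computation is shown, with Euclidean distance as an assumption:

```python
import numpy as np

def prototype_distances(theta_x: np.ndarray, prototypes: np.ndarray) -> np.ndarray:
    """Euclidean distances d(Theta(x), O_i) from the network output to
    each of the N Gaussian-initialised prototypes O_1..O_N."""
    return np.linalg.norm(prototypes - theta_x, axis=1)

rng = np.random.default_rng(0)
N, U = 3, 5
prototypes = rng.standard_normal((N, U))  # O_i ~ Gaussian initialisation
theta_x = rng.standard_normal(U)          # Theta(x), the network output
d = prototype_distances(theta_x, prototypes)
nearest = int(np.argmin(d))               # index of the closest prototype
print(d.shape)                            # (3,)
```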
Step 4: training the convolutional neural network model with the preprocessed training sample set to obtain a trained convolutional neural network, which specifically comprises the following steps:
41) Randomly dividing the preprocessed training sample set into q batches, wherein each batch is an n × D dimensional data matrix with n = floor(P/q), where floor(·) represents rounding down and P represents the number of high-resolution range profiles in the training sample set.
42) Sequentially inputting the data of each batch into the convolutional neural network for processing to obtain the output result of the convolutional neural network.
42-1) after the data is input into the convolutional neural network, carrying out convolution and down-sampling processing on the current input data by utilizing a first layer convolutional layer to obtain a first characteristic diagram.
Specifically, the input data x and C convolution kernels of the first convolutional layer are convolved respectively by using the convolution step length L of the first convolutional layer, so as to obtain C convolved results of the first convolutional layer, and the results are recorded as C feature maps y of the first convolutional layer:
performing Gaussian normalization processing on the C feature maps y of the first layer of convolution layer to obtain C feature maps of the first layer of convolution layer after the Gaussian normalization processing
Each of the Gaussian-normalized feature maps is then downsampled, giving the C downsampled feature maps of the first convolutional layer, i.e., the first feature map, expressed as:
where each output value is the maximum, within the m × m kernel window of the first-layer downsampling, of the corresponding Gaussian-normalized feature map of the first convolutional layer.
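The Gaussian normalization and max-based downsampling applied to every layer's feature maps can be sketched as below; reading "Gaussian normalization" as zero-mean/unit-variance scaling is an assumption, and the map count and length are illustrative:

```python
import numpy as np

def gaussian_normalize(maps: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Zero-mean / unit-variance normalisation of each feature map, one
    common reading of the 'Gaussian normalization' after each convolution."""
    mu = maps.mean(axis=-1, keepdims=True)
    sd = maps.std(axis=-1, keepdims=True)
    return (maps - mu) / (sd + eps)

def downsample_max(maps: np.ndarray, m: int) -> np.ndarray:
    """Maximum within non-overlapping windows of size m (stride I = m)."""
    d = (maps.shape[-1] // m) * m
    return maps[..., :d].reshape(*maps.shape[:-1], -1, m).max(axis=-1)

rng = np.random.default_rng(0)
y = rng.standard_normal((8, 100))               # C = 8 feature maps (illustrative)
z = downsample_max(gaussian_normalize(y), m=2)
print(z.shape)                                  # (8, 50)
```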
42-2) convolving and downsampling the first feature map by the second convolution layer to obtain a second feature map.
Specifically, with the convolution stride L' of the second convolutional layer, the C downsampled feature maps of the first convolutional layer (i.e., the first feature map) are convolved with the C' convolution kernels K' of the second convolutional layer, giving the C' convolution results of the second convolutional layer, recorded as the C' feature maps of the second convolutional layer.
Gaussian normalization is performed on the C' feature maps of the second convolutional layer, giving the C' Gaussian-normalized feature maps of the second convolutional layer.
Each of these feature maps is then downsampled, giving the C' downsampled feature maps of the second convolutional layer, i.e., the second feature map, expressed as:
where each output value is the maximum, within the m' × m' kernel window of the second-layer downsampling, of the corresponding Gaussian-normalized feature map of the second convolutional layer.
42-3) performing convolution and downsampling processing on the second feature map by using the third layer of convolution layer to obtain a third feature map.
Specifically, with the convolution stride L'' of the third convolutional layer, the C' downsampled feature maps of the second convolutional layer (i.e., the second feature map) are convolved with the C'' convolution kernels K'' of the third convolutional layer, giving the C'' convolution results of the third convolutional layer, recorded as the C'' feature maps of the third convolutional layer.
Gaussian normalization is performed on the C'' feature maps of the third convolutional layer, giving the C'' Gaussian-normalized feature maps of the third convolutional layer.
Each of these feature maps is then downsampled, giving the C'' downsampled feature maps of the third convolutional layer, i.e., the third feature map, expressed as:
where each output value is the maximum, within the m'' × m'' kernel window of the third-layer downsampling, of the corresponding Gaussian-normalized feature map of the third convolutional layer.
42-4) The third feature map is nonlinearly transformed by the fourth, fully connected layer to obtain the processing result for the current data. The transformation multiplies the third feature map by a randomly initialized weight matrix of the fourth (fully connected) layer and adds an all-ones bias of the fourth layer.
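The per-layer pipeline of steps 42-2) to 42-4) — convolution, Gaussian normalization, max-pooling downsampling — can be sketched as follows. This is a minimal illustration only: the kernel sizes, strides, and the use of single one-dimensional profiles are assumptions, not the patent's actual parameters.

```python
import numpy as np

def gaussian_normalize(x, eps=1e-8):
    # Zero-mean, unit-variance ("Gaussian") normalization of one feature map.
    return (x - x.mean()) / (x.std() + eps)

def conv1d(signal, kernel, stride):
    # Valid-mode strided 1-D convolution (correlation) with one kernel.
    n = (len(signal) - len(kernel)) // stride + 1
    return np.array([np.dot(signal[i*stride:i*stride+len(kernel)], kernel)
                     for i in range(n)])

def max_pool1d(x, window):
    # Non-overlapping max-pooling with kernel window `window`.
    n = len(x) // window
    return np.array([x[i*window:(i+1)*window].max() for i in range(n)])

def conv_layer(feature_maps, kernels, stride, pool_window):
    # One layer: convolve, Gaussian-normalize, then downsample each map.
    out = []
    for k in kernels:                                  # one output map per kernel
        summed = sum(conv1d(f, k, stride) for f in feature_maps)
        out.append(max_pool1d(gaussian_normalize(summed), pool_window))
    return out

rng = np.random.default_rng(0)
hrrp = [rng.standard_normal(256)]                      # one preprocessed range profile
kernels = [rng.standard_normal(8) for _ in range(4)]   # C = 4 illustrative kernels
maps = conv_layer(hrrp, kernels, stride=1, pool_window=2)
print(len(maps), maps[0].shape)
```

Stacking three such layers followed by a fully connected transform reproduces the four-layer structure of the network described above.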
42-5) The above steps are repeated until all input data have been processed, yielding the output of the convolutional neural network.
43) The value of the loss function is calculated from the output of the convolutional neural network, and the parameters of the convolutional neural network are updated by stochastic gradient descent until the network converges, yielding the trained convolutional neural network.
Specifically, the network output obtained in step 42) above is substituted into the loss-function expression to obtain the value of the loss function, and the parameters of the convolutional neural network are updated by conventional stochastic gradient descent until the network converges, yielding the trained convolutional neural network. Stochastic gradient descent is well known in the art and is not described in detail in this embodiment.
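A minimal sketch of the batch training of steps 42) and 43): mini-batch stochastic gradient descent on a softmax cross-entropy loss. The loss, learning rate, and the stand-in linear classifier are illustrative assumptions, since the patent's loss expression is not reproduced here.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_sgd(X, y, n_classes, lr=0.1, epochs=50, batch=32, seed=0):
    # Linear softmax classifier trained with mini-batch SGD on cross-entropy;
    # stands in for the CNN's final fully connected layer.
    rng = np.random.default_rng(seed)
    W = 0.01 * rng.standard_normal((X.shape[1], n_classes))
    b = np.zeros(n_classes)
    for _ in range(epochs):
        idx = rng.permutation(len(X))      # random batch division, as in step 41)
        for s in range(0, len(X), batch):
            B = idx[s:s+batch]
            p = softmax(X[B] @ W + b)
            p[np.arange(len(B)), y[B]] -= 1.0   # dL/dz for softmax cross-entropy
            W -= lr * X[B].T @ p / len(B)
            b -= lr * p.mean(axis=0)
    return W, b

# Toy separable data: class = sign of the first feature.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 8))
y = (X[:, 0] > 0).astype(int)
W, b = train_sgd(X, y, n_classes=2)
acc = ((X @ W + b).argmax(axis=1) == y).mean()
print(acc)
```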
In this embodiment, a multilayer convolutional neural network is adopted and the data are preprocessed by energy normalization and alignment, so that high-level features of the high-resolution range profile data can be mined while the amplitude, translation, and attitude sensitivity of the data is removed; the method is therefore more robust than traditional direct classification methods.
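The alignment and energy-normalization preprocessing mentioned here can be sketched as below. The specific choices (L2 energy normalization, and a circular shift that puts the amplitude center of gravity at the middle range bin) are plausible assumptions rather than the patent's exact formulas.

```python
import numpy as np

def preprocess_hrrp(x):
    # Energy (L2) normalization removes amplitude sensitivity.
    x = x / np.linalg.norm(x)
    # Center-of-gravity alignment removes translation sensitivity:
    # circularly shift so the power centroid sits at the middle bin.
    power = x ** 2
    cog = int(round(np.sum(np.arange(len(x)) * power) / power.sum()))
    return np.roll(x, len(x) // 2 - cog)

profile = np.zeros(256)
profile[40:50] = 5.0                     # a target echo away from the center
aligned = preprocess_hrrp(profile)
print(np.isclose(np.linalg.norm(aligned), 1.0), int(np.argmax(aligned)))
```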
Step 5: open-set identification is performed with the trained convolutional neural network on the preprocessed test sample set to obtain the radar high-resolution range profile open-set identification result based on the convolutional neural network.
51) The convolutional neural network predicts the probability that the sample x under test belongs to class k.
52) When the probability of the sample under test is smaller than a preset threshold, the sample is judged to be an unknown out-of-library class; otherwise it is judged to be a known in-library class, and its specific class is further obtained.
Specifically, a threshold τ is set according to the training results on the known in-library class data. When the probability of the sample under test is smaller than τ, the sample is judged to be an unknown out-of-library class; otherwise it is judged to be a known in-library class, and the specific class k is determined by the formula in step 51).
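The threshold decision of steps 51)-52) can be sketched in a few lines. The threshold value and the use of the maximum class probability as the score are illustrative assumptions.

```python
import numpy as np

def open_set_decide(probs, tau):
    # probs: per-class probabilities predicted for one sample under test.
    # Returns -1 for "unknown, out-of-library", else the class index k.
    k = int(np.argmax(probs))
    return -1 if probs[k] < tau else k

tau = 0.80                                                   # illustrative threshold
print(open_set_decide(np.array([0.05, 0.90, 0.05]), tau))    # confident sample
print(open_set_decide(np.array([0.40, 0.35, 0.25]), tau))    # diffuse sample
```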
By adopting convolutional neural network technology, the radar high-resolution range profile open-set identification method of this embodiment combines the primary features of all layers to obtain higher-level features for identification, which markedly improves the recognition rate. The method can identify and classify known in-library target classes while rejecting unknown out-of-library targets, improving target-identification accuracy and further raising the automation and intelligence level of the radar.
Example two
On the basis of the first embodiment, this embodiment further provides a radar high-resolution range profile open-set identification device based on a convolutional neural network. Referring to fig. 2, fig. 2 is a schematic structural diagram of a radar high-resolution range profile open-set identification device based on a convolutional neural network according to an embodiment of the present invention, which includes:
the data acquisition module 1 is used for acquiring a radar high-resolution range profile and establishing a training sample set and a test sample set; the training sample set comprises a plurality of target radar high-resolution range profiles of known classes, and the test sample set comprises a plurality of target radar high-resolution range profiles of known classes in a library and unknown classes out of the library;
the preprocessing module 2 is used for preprocessing the data in the training sample set and the test sample set to obtain a preprocessed training sample set and a preprocessed test sample set;
the model building module 3 is used for building a convolutional neural network model;
the training module 4 is used for training the convolutional neural network model by using the preprocessed training sample set to obtain a trained convolutional neural network;
and the target identification module 5 is used for performing open-set identification on the trained convolutional neural network by using the preprocessed test sample set to obtain a radar high-resolution range profile open-set identification result based on the convolutional neural network.
The radar high-resolution range profile open set identification device provided by this embodiment can implement the radar high-resolution range profile open set identification method provided by the first embodiment, and the detailed process is not repeated here.
Therefore, the radar high-resolution range profile open-set identification device of this embodiment likewise identifies and classifies known in-library target classes while rejecting unknown out-of-library targets, with high target-identification accuracy.
EXAMPLE III
The following is a simulation test to verify the beneficial effects of the present invention.
1. Simulation conditions
The hardware platform of the simulation experiment of this embodiment is:
a processor: Intel(R) Core(TM) i9-10980XE, with a base frequency of 3.00 GHz and 256 GB of memory.
The software platform of the simulation experiment of this embodiment is: the Ubuntu 20.04 operating system and Python 3.9.
The data used in the simulation test are measured high-resolution range profiles of 10 classes of civil aircraft: A319, A320, A330-2, A330-3, B737-8, CRJ-900, A321, A350-941, B737-7 and B747-89L. The first 6 classes are used as known in-library target classes and the last 4 classes as unknown out-of-library target classes to build the training and test sample sets. The training sample set contains 30921 samples in total, about 5000 per in-library class; the test sample set contains 18424 samples from the 6 known in-library classes and 12359 samples from the 4 unknown out-of-library classes, about 3000 per class.
Before the experiment, all raw data were preprocessed according to step 2 of the first embodiment, and the open-set identification experiment was then carried out with the convolutional neural network.
2. Simulation content and result analysis
The simulation experiment compares the method of the invention with traditional two-stage rejection identification methods and with the SoftMax threshold method.
The traditional two-stage rejection identification methods mainly use SVDD, OCSVM, Isolation Forest and similar approaches to reject out-of-library targets, and then classify the targets judged to be in-library with a method such as an SVM. The SoftMax threshold method considers that the in-library target features extracted by the convolutional neural network form several clusters in feature space, each of which can be wrapped by a hypersphere; if the features of a sample under test fall inside a hypersphere, the sample is considered to belong to that class, and if they fall inside no hypersphere, the target is judged to be out-of-library.
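The hypersphere test described above can be sketched as follows. Fitting one sphere per class from the class mean and the maximum training-feature distance is an illustrative assumption about how the spheres are constructed; the toy 2-D features are likewise illustrative.

```python
import numpy as np

def fit_hyperspheres(features, labels, n_classes):
    # One hypersphere per in-library class: center = class mean,
    # radius = largest distance of a training feature from that center.
    spheres = []
    for c in range(n_classes):
        f = features[labels == c]
        center = f.mean(axis=0)
        radius = np.linalg.norm(f - center, axis=1).max()
        spheres.append((center, radius))
    return spheres

def classify(x, spheres):
    # Inside some sphere -> nearest class; inside none -> out-of-library (-1).
    ratios = [np.linalg.norm(x - c) / r for c, r in spheres]
    best = int(np.argmin(ratios))
    return best if ratios[best] <= 1.0 else -1

rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(3, 0.1, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)
spheres = fit_hyperspheres(feats, labels, 2)
print(classify(np.array([0.0, 0.0]), spheres),    # near the class-0 cluster
      classify(np.array([10.0, 10.0]), spheres))  # far from both clusters
```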
The simulation experiment uses the area under the receiver operating characteristic (ROC) curve (AUC) to evaluate each method's ability to reject out-of-library targets; the larger the AUC, the stronger the rejection capability.
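AUC as used here can be computed without a library via the Mann-Whitney rank statistic: the probability that a randomly chosen in-library sample outscores a randomly chosen out-of-library sample. The labels and scores below are illustrative, and ties are ignored for brevity.

```python
import numpy as np

def auc(labels, scores):
    # Area under the ROC curve via the Mann-Whitney U statistic
    # (no tie handling, for brevity).
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Labels: 1 = known in-library, 0 = unknown out-of-library.
# Scores: e.g. the network's maximum class probability (illustrative values).
labels = np.array([1, 1, 1, 0, 0, 0])
scores = np.array([0.95, 0.90, 0.60, 0.55, 0.30, 0.20])
print(auc(labels, scores))
```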
Referring to fig. 3, fig. 3 is a comparison of the simulation test results provided in this embodiment. As fig. 3 shows, in this simulation experiment the method of the invention has the strongest rejection capability for out-of-library targets, followed by the SoftMax threshold method, while the three traditional methods show only moderate rejection capability.
Since the simulation experiment uses many data classes, the open-set identification capability of the different methods is evaluated comprehensively with the macro-averaged F1-Score; the larger the F1-Score, the stronger the open-set identification capability. The simulation results are shown in the following table.
Method | AUC | F1-Score
---|---|---
SVDD+SVM | 0.5552 | 0.4394
OCSVM+SVM | 0.5443 | 0.3216
Isolation Forest+SVM | 0.4825 | 0.4283
SoftMax threshold method | 0.7141 | 0.4627
The invention | 0.8329 | 0.5418
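The macro-averaged F1-Score reported in the table above averages per-class F1 with equal weight, so rare classes count as much as frequent ones. A minimal sketch, with illustrative toy labels:

```python
import numpy as np

def macro_f1(y_true, y_pred, n_classes):
    # Macro-averaged F1: per-class F1 scores averaged with equal weight.
    f1s = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return float(np.mean(f1s))

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 1, 2, 2, 2])
print(macro_f1(y_true, y_pred, 3))
```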
It can be seen that in this simulation experiment the overall open-set identification capability of the invention is the strongest, clearly superior to the other four methods.
In conclusion, the invention achieves the best results both in rejection of out-of-library targets and in overall open-set identification capability, which demonstrates its effectiveness.
The foregoing is a detailed description of the invention in connection with specific preferred embodiments, and the invention is not to be construed as limited to these details. Those skilled in the art may make simple deductions or substitutions without departing from the spirit of the invention, and all such variations fall within the protection scope of the invention.
Claims (8)
1. A radar high-resolution range profile open set identification method based on a convolutional neural network is characterized by comprising the following steps:
acquiring a radar high-resolution range profile and establishing a training sample set and a test sample set; the training sample set comprises a plurality of target radar high-resolution range profiles of known classes, and the test sample set comprises a plurality of target radar high-resolution range profiles of known classes in a library and unknown classes out of the library;
preprocessing the data in the training sample set and the test sample set to obtain a preprocessed training sample set and a preprocessed test sample set;
constructing a convolutional neural network model;
training the convolutional neural network model by using the preprocessed training sample set to obtain a trained convolutional neural network;
and performing open-set identification on the trained convolutional neural network by using the preprocessed test sample set to obtain a radar high-resolution range profile open-set identification result based on the convolutional neural network.
2. The method of claim 1, wherein preprocessing the training sample set and the testing sample set comprises:
and sequentially carrying out gravity center alignment and normalization processing on the data in the training sample set and the test sample set to obtain a preprocessed training sample set and a preprocessed test sample set.
3. The method of claim 1, wherein constructing the convolutional neural network model comprises:
constructing a convolutional neural network model with a four-layer structure; the four-layer structure comprises a first layer of convolution layer, a second layer of convolution layer, a third layer of convolution layer and a fourth layer of full-connection layer, and each convolution layer is set to have the same convolution step length; each convolution layer comprises a plurality of convolution kernels, and the sizes of the convolution kernels are the same.
4. The method of claim 3, further comprising constructing a loss function of the convolutional neural network, wherein the loss function is expressed as:
5. The method of claim 4, wherein training the convolutional neural network model with the preprocessed training sample set to obtain a trained convolutional neural network comprises:
randomly dividing the preprocessed training sample set into q batches, wherein the data of each batch is an n × D dimensional matrix; wherein q = floor(P/n), floor() represents rounding down, and P represents the number of high-resolution range profiles in the training sample set;
sequentially inputting the data of each batch into a convolutional neural network for processing to obtain an output result of the convolutional neural network;
and calculating the value of a loss function according to the output result of the convolutional neural network, and updating the parameter value of the convolutional neural network by using a random gradient method until the network converges to obtain the trained convolutional neural network.
6. The radar high-resolution range profile open set identification method of claim 5, wherein the step of sequentially inputting the data of each batch into the trained convolutional neural network for processing to obtain the output result of the convolutional neural network comprises the following steps:
performing convolution and downsampling processing on current input data by using the first layer of convolution layer to obtain a first characteristic diagram;
performing convolution and downsampling processing on the first feature map by using a second layer of convolution layer to obtain a second feature map;
carrying out convolution and downsampling processing on the second feature map by using a third layer of convolution layer to obtain a third feature map;
carrying out nonlinear transformation processing on the third characteristic diagram by utilizing a fourth full-connection layer to obtain a processing result of current data;
and repeating the steps until all the input data are processed to obtain the output result of the convolutional neural network.
7. The method for identifying the open set of the radar high-resolution range profile according to claim 1, wherein the open set identification of the trained convolutional neural network is performed by using the preprocessed test sample set, so as to obtain the open set identification result of the radar high-resolution range profile based on the convolutional neural network, and the method comprises the following steps:
the convolutional neural network predicts the probability that the sample x under test belongs to class k;
when the probability of the sample under test is smaller than a preset threshold, the sample is judged to be an unknown out-of-library class; otherwise it is judged to be a known in-library class, and its specific class is further obtained.
8. A radar high-resolution range profile open set identification device based on a convolutional neural network is characterized by comprising the following components:
the data acquisition module (1) is used for acquiring a radar high-resolution range profile and establishing a training sample set and a test sample set; the training sample set comprises a plurality of target radar high-resolution range profiles of known classes, and the test sample set comprises a plurality of target radar high-resolution range profiles of known classes in a library and unknown classes out of the library;
the preprocessing module (2) is used for preprocessing the data in the training sample set and the test sample set to obtain a preprocessed training sample set and a preprocessed test sample set;
the model building module (3) is used for building a convolutional neural network model;
the training module (4) is used for training the convolutional neural network model by utilizing the preprocessed training sample set to obtain a trained convolutional neural network;
and the target identification module (5) is used for carrying out open-set identification on the trained convolutional neural network by utilizing the preprocessed test sample set to obtain a radar high-resolution range profile open-set identification result based on the convolutional neural network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111199838.4A CN114137518B (en) | 2021-10-14 | 2021-10-14 | Radar high-resolution range profile open set identification method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111199838.4A CN114137518B (en) | 2021-10-14 | 2021-10-14 | Radar high-resolution range profile open set identification method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114137518A true CN114137518A (en) | 2022-03-04 |
CN114137518B CN114137518B (en) | 2024-07-12 |
Family
ID=80395049
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111199838.4A Active CN114137518B (en) | 2021-10-14 | 2021-10-14 | Radar high-resolution range profile open set identification method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114137518B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116089821A (en) * | 2023-02-23 | 2023-05-09 | 中国人民解放军63921部队 | Method for monitoring and identifying state of deep space probe based on convolutional neural network |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107728142A (en) * | 2017-09-18 | 2018-02-23 | 西安电子科技大学 | Radar High Range Resolution target identification method based on two-dimensional convolution network |
CN107728143A (en) * | 2017-09-18 | 2018-02-23 | 西安电子科技大学 | Radar High Range Resolution target identification method based on one-dimensional convolutional neural networks |
CN108520199A (en) * | 2018-03-04 | 2018-09-11 | 天津大学 | Based on radar image and the human action opener recognition methods for generating confrontation model |
CN109376574A (en) * | 2018-08-14 | 2019-02-22 | 西安电子科技大学 | Refuse to sentence radar HRRP target identification method based on CNN |
CN111458688A (en) * | 2020-03-13 | 2020-07-28 | 西安电子科技大学 | Radar high-resolution range profile target identification method based on three-dimensional convolution network |
AU2020104006A4 (en) * | 2020-12-10 | 2021-02-18 | Naval Aviation University | Radar target recognition method based on feature pyramid lightweight convolutional neural network |
CN112904299A (en) * | 2021-03-03 | 2021-06-04 | 西安电子科技大学 | Radar high-resolution range profile open set target identification method based on deep intra-class division |
Non-Patent Citations (2)
Title |
---|
Y. WANG et al.: "Open set radar HRRP recognition based on random forest and extreme value theory", Proc. Int. Conf. Radar (RADAR), 31 August 2018 (2018-08-31), pages 1-4, XP033466128, DOI: 10.1109/RADAR.2018.8557327 *
LI Zhengwei: "Research on open set recognition based on adversarial autoencoder networks", China Master's Theses Full-text Database, 15 January 2021 (2021-01-15) *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116089821A (en) * | 2023-02-23 | 2023-05-09 | 中国人民解放军63921部队 | Method for monitoring and identifying state of deep space probe based on convolutional neural network |
CN116089821B (en) * | 2023-02-23 | 2023-08-15 | 中国人民解放军63921部队 | Method for monitoring and identifying state of deep space probe based on convolutional neural network |
Also Published As
Publication number | Publication date |
---|---|
CN114137518B (en) | 2024-07-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Shao et al. | Convolutional neural network-based radar jamming signal classification with sufficient and limited samples | |
CN110109060B (en) | Radar radiation source signal sorting method and system based on deep learning network | |
CN107728142B (en) | Radar high-resolution range profile target identification method based on two-dimensional convolutional network | |
CN111913156B (en) | Radar radiation source individual identification method based on deep learning model and feature combination | |
CN107728143B (en) | Radar high-resolution range profile target identification method based on one-dimensional convolutional neural network | |
CN110619352A (en) | Typical infrared target classification method based on deep convolutional neural network | |
CN110109110B (en) | HRRP target identification method based on priori optimal variation self-encoder | |
CN111352086B (en) | Unknown target identification method based on deep convolutional neural network | |
CN109711314B (en) | Radar radiation source signal classification method based on feature fusion and SAE | |
CN108256436A (en) | A kind of radar HRRP target identification methods based on joint classification | |
CN110516728B (en) | Polarized SAR terrain classification method based on denoising convolutional neural network | |
CN104732244A (en) | Wavelet transform, multi-strategy PSO (particle swarm optimization) and SVM (support vector machine) integrated based remote sensing image classification method | |
CN112904299B (en) | Radar high-resolution range profile open set target identification method based on deep class segmentation | |
CN110703221A (en) | Urban low-altitude small target classification and identification system based on polarization characteristics | |
CN111368653B (en) | Low-altitude small target detection method based on R-D graph and deep neural network | |
CN114137518A (en) | Radar high-resolution range profile open set identification method and device | |
CN113239959B (en) | Radar HRRP target identification method based on decoupling characterization variation self-encoder | |
CN117665807A (en) | Face recognition method based on millimeter wave multi-person zero sample | |
CN113901878A (en) | CNN + RNN algorithm-based three-dimensional ground penetrating radar image underground pipeline identification method | |
CN113900101A (en) | Obstacle detection method and device and electronic equipment | |
Meng et al. | A target-region-based SAR ATR adversarial deception method | |
CN110969203B (en) | HRRP data redundancy removing method based on self-correlation and CAM network | |
CN115792908B (en) | Target detection method based on high-resolution multi-angle spaceborne SAR feature fusion | |
CN116311067A (en) | Target comprehensive identification method, device and equipment based on high-dimensional characteristic map | |
CN114821335B (en) | Unknown target discrimination method based on fusion of depth features and linear discrimination features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |