CN115909078A - Ship classification method based on HRRP and SAR data feature level fusion - Google Patents


Info

Publication number: CN115909078A
Application number: CN202211739219.4A
Authority: CN (China)
Prior art keywords: HRRP, feature, data, SAR, features
Legal status: Pending (an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 董文倩, 崔继洲, 曲家慧, 肖嵩, 李云松
Assignee (current and original): Xidian University
Application filed by Xidian University
Related application: CN202310321258.0A (published as CN116343041A)

Classifications

    • G — PHYSICS; G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V20/10 — Scenes; scene-specific elements; terrestrial scenes
    • G06N3/045 — Neural network architectures; combinations of networks
    • G06N3/0464 — Convolutional networks [CNN, ConvNet]
    • G06N3/084 — Learning methods; backpropagation, e.g. using gradient descent
    • G06N3/09 — Supervised learning
    • G06N3/098 — Distributed learning, e.g. federated learning
    • G06V10/40 — Extraction of image or video features
    • G06V10/764 — Recognition using machine learning; classification, e.g. of video objects
    • G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/806 — Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06V10/82 — Recognition using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Image Processing (AREA)

Abstract

A ship classification method based on HRRP and SAR data feature-level fusion comprises the following steps: inputting HRRP and SAR detection data of the same target acquired at the same time, preprocessing each, and dividing a training set and a test set; constructing an SAR image feature separation module that reduces the correlation of features among samples and increases the feature distance between samples; constructing an SAR image feature aggregation module that aggregates similar features among the separated sample features; constructing an attention-based one-dimensional range profile feature extraction module that extracts ship detail features from the HRRP; constructing an HRRP and SAR data feature fusion classification module that fuses the features of the two data sources to classify the target; performing supervised training on the assembled multi-source feature fusion classification model to obtain suitable parameters; and feeding the ship target data to be classified into the trained multi-source feature fusion classification model to obtain the classification result. The invention improves the precision and the robustness of ship classification.

Description

Ship classification method based on HRRP and SAR data feature level fusion
Technical Field
The invention belongs to the technical field of radars, and particularly relates to a ship classification method based on HRRP and SAR data feature level fusion.
Background
Both the synthetic aperture radar (SAR) image and the one-dimensional range profile (HRRP) are high-resolution radar data. Synthetic aperture radar is an active earth-observation system with all-day, all-weather operation and a wide detection range; it can obtain high-resolution, optics-like radar imagery under cloud cover and low-visibility conditions. SAR images reflect both the geometric features and the scattering features of a target, and play an important role in the identification and classification of civilian fishing boats and military ships. The one-dimensional range profile is obtained by a high-resolution radar: when the target is much larger than the radar resolution cell, the radar echoes of the target form a one-dimensional range profile. HRRP data have the advantages of small data volume, good real-time performance, ease of processing and strong anti-interference capability, and reflect the geometric structure of the target along the range direction, including the target size and the positions of scattering centers. The one-dimensional range profile is widely considered one of the most promising approaches to radar target recognition, and has recently become a research focus.
There are two main categories of SAR image ship classification techniques. The first classifies target ships with traditional methods: geometric features of the ships are extracted and then fed to machine learning classifiers such as the support vector machine (SVM) or logistic regression (LR). The second is based on deep learning. Deep learning uses a nonlinear network structure to extract features effectively, without requiring hand-designed feature extractors, and its strong feature extraction and learning ability completes the classification of ships.
Ship classification on one-dimensional range profile data likewise falls into two types: traditional methods and deep-neural-network classification algorithms. Traditional HRRP classification algorithms mainly comprise dimension-reduction methods and transformation methods. Dimension-reduction methods map the high-dimensional range profile signal to a lower-dimensional space to obtain separable features; transformation methods project the one-dimensional range profile signal into the frequency domain and extract spectrogram features for identification and classification. Deep-learning HRRP recognition networks adopt end-to-end supervised learning to automatically extract separable features from the one-dimensional range profile signal of a sample, overcoming the shortcomings of hand-crafted feature extraction in traditional methods.
SAR images generally suffer from poor imaging quality and severe speckle noise, and ship details are largely lost after filtering. One-dimensional range profile data contain more ship detail information, but their azimuth sensitivity remains an unsolved problem. Consequently, ship classification using a single SAR image or single HRRP data source hits an accuracy bottleneck and exhibits poor stability.
Disclosure of Invention
In order to overcome the above defects of the prior art, the invention aims to provide a ship classification method based on HRRP and SAR data feature-level fusion, improving the precision and robustness of ship classification and resolving the accuracy bottleneck and poor stability that result from classifying ships with a single data source.
To achieve this purpose, the invention adopts the following technical scheme:
a ship classification method based on HRRP and SAR data feature level fusion comprises the following steps:
S101: acquiring HRRP and SAR detection data of the same ship target at the same moment, respectively preprocessing them, and dividing a training set and a test set;
S102: constructing an SAR image feature separation module to reduce the correlation of features among all detection data samples and increase the sample feature distance;
S103: constructing an SAR image feature aggregation module to aggregate the similar features in the separated sample features output by S102, reducing the intra-class distance, increasing the inter-class distance and enhancing the classification performance;
S104: constructing a one-dimensional range profile feature extraction module based on an attention mechanism to extract ship detail features from the HRRP;
S105: constructing an HRRP and SAR data feature fusion classification module to fuse the features of the HRRP and SAR data and classify the target;
S106: carrying out supervised training on the built multi-source feature fusion classification model to obtain parameters suitable for the model;
S107: sending the ship target data to be classified into the trained multi-source feature fusion classification model for classification to obtain a classification result.
S105 is the last module of the multi-source feature fusion model: feature fusion and classification are performed there, i.e., classification is completed within this module. S106 is the training and optimization process of the model, and S107 uses the trained model for ship classification.
In step S101, refined Lee filtering, morphological filtering and data enhancement are applied to the SAR image; L2-norm normalization, logarithmic transformation, center-of-gravity alignment, equal-length processing, sliding-window processing and scattering-center information extraction are applied to the HRRP data.
the SAR image is firstly subjected to refined Lee filtering to remove speckle noise, the refined Lee filtering is simulated by using a neural network model, a channel attention mechanism is added based on a self-encoder framework, then morphological filtering is carried out to enhance geometric contour information, and the preprocessing step can be expressed as the following formula:
Figure BDA0004033896390000041
Lee(X)=X+f conv (Cat(f conv (X),f CBAM (f ReLU (f conv (X)))))
wherein
Figure BDA0004033896390000042
Respectively an input SAR image and a preprocessed SAR image, wherein H and W are the size of the images; lee (-) is the sophisticated Lee filtering of network simulations; MF (. Cndot.)) Filtering for morphology; f. of c o nv (·)、f ReLU (. H) convolution and activation operations, respectively; cat (·) represents the concatenation of the feature channel dimensions; f. of CBAM (. To) represents the channel attention mechanism;
The HRRP is first L2-norm normalized, then logarithmically transformed and center-of-gravity aligned, and 3200 points around the center are cropped for equal-length processing; sliding-window processing with a step of 200 and an overlap of 60 yields the HRRP sliding-window matrix. At the same time, scattering-center information is extracted: the radial length, the number of scattering centers, profile skewness, variance, sum of squared gradients, overall entropy, second moment, third moment, mean, symmetry and descaled structural features. These steps can be expressed as:

h_w = F_w(F_el(F_g(F_log(F_L2norm(h)))))

h_info = F_info(F_el(F_g(F_log(F_L2norm(h)))))

where h = [h_1, h_2, ..., h_M] is the original HRRP data and M is the total number of range cells it contains; F_L2norm, F_log, F_g, F_el and F_w denote L2-norm normalization, logarithmic transformation, center-of-gravity alignment, equal-length processing and sliding-window processing respectively; F_info denotes scattering-center information extraction; h_w is the HRRP data output by the final sliding window, and h_info is the extracted HRRP scattering-center information.
The step S102 specifically includes:
First, a ResNet50 network extracts features from a pair of data-enhanced SAR images X̂_1, X̂_2, giving features Z_1, Z_2; processing is done in batches, Z_1, Z_2 ∈ R^(B×c×h×w), where B is the batch size. A feature separation module is then constructed to perform feature separation on different samples, i.e., to project different sample features into a feature space with a high degree of separation. The feature separation module consists of convolution, activation and batch-normalization layers, and this structure can project the feature space effectively. The steps can be expressed as:

Z_i = ResNet50(X̂_i), i = 1, 2

Z_i^s = Separate(Z_i), i = 1, 2

Separate(·) = f_ReLU(f_BN(f_conv(f_ReLU(f_BN(f_conv(·))))))

where X̂_1, X̂_2 are the SAR data obtained after two different data-enhancement transforms; Z_1, Z_2 ∈ R^(B×c×h×w) are the features extracted by ResNet50, with c, h, w the channel number, height and width of the features; and Separate(·) is the feature separation module that projects different sample features into a feature space with high separation.
The structure of the SAR image feature aggregation module in step S103 is as follows:
An integration module is constructed to reduce the dimension of the separated sample features while aggregating same-class features. The feature integration module consists of two convolutions: the first, a 1 × 1 convolution layer, reduces the number of features along the channel dimension; the second, with 3 × 3 kernels, fuses information along the spatial dimension. The aggregated features are denoted Z_i^f, i = 1, 2. Class codes P are added to guide the feature aggregation, each class being encoded by multiple codes. These steps can be expressed as:

Z_i^f = Integration(Z_i^s), i = 1, 2

Integration(·) = f_ReLU(f_BN(f_conv(f_ReLU(f_BN(f_conv(·))))))

where Z_i^s are the separated sample features; Integration(·) is the feature integration module; and P ∈ R^(C×N×K) are the class codes, with C, N and K the number of channels, the number of codes per class and the number of classes respectively.
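The patent does not spell out how features are matched to the class codes P. One plausible reading — consistent with the aggregation loss Agg(a, b) = −a·b/(‖a‖₂·‖b‖₂) used later — is that each pooled sample feature is pulled toward the most cosine-similar code of its own class. A NumPy sketch under that assumption (the (K, N, C) code layout is also an assumption):

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_class_code(z, P, label):
    """z: pooled sample feature of shape (C,).
    P: class codes of shape (K, N, C) -- K classes, N codes per class.
    Returns the code of class `label` most cosine-similar to z, i.e.
    the code the aggregation loss would pull z toward."""
    sims = [cosine(z, P[label, n]) for n in range(P.shape[1])]
    return P[label, int(np.argmax(sims))]
```

Using several codes per class (N > 1) lets one class occupy multiple modes in feature space, which fits the "each class is encoded by multiple codes" description.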
The structure of the one-dimensional range profile feature extraction module in step S104 is as follows:
A VGG11 network with an attention mechanism is constructed to extract features from the HRRP data. The attention mechanism adopts a 1D convolutional Channel Attention Module (CAM) and can effectively extract HRRP features, as expressed by:

h_f = VGG11_CAM(h)

where h is the original HRRP data, VGG11_CAM(·) is the feature extraction network with the channel attention mechanism, and h_f is the obtained one-dimensional range profile feature.
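A minimal NumPy sketch of a 1D channel attention gate of the kind the text names: squeeze by global average pooling, excite through a small bottleneck, then sigmoid-gate the channels. The bottleneck ratio and the exact CAM wiring (e.g. whether max pooling is also used) are assumptions, not taken from the patent:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention_1d(x, w1, w2):
    """x: feature map of shape (B, C, L) from a 1D conv stack.
    w1: (C//r, C), w2: (C, C//r) -- assumed bottleneck weights.
    Returns x reweighted per channel by a learned gate in (0, 1)."""
    s = x.mean(axis=2)              # squeeze: global average pool -> (B, C)
    g = sigmoid(s @ w1.T @ w2.T)    # excite: C -> C//r -> C, gate in (0, 1)
    return x * g[:, :, None]        # reweight each channel
```

In the full model this gate would sit between VGG11 convolution blocks, letting the network emphasize range cells' informative channels.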
In step S105, an HRRP and SAR data feature fusion classification module is constructed, and the features of the HRRP and SAR data are fused to classify the target. The aggregated SAR features, the extracted HRRP features and the HRRP prior information are fused into a joint feature, which is used for classification, as expressed by:

z_f = f_flatten(Z_af)

ŷ = f_classifier(Cat(z_f, h_f, h_info))

f_classifier(·) = f_Linear(f_ReLU(f_Linear(f_Dropout(·))))

where Z_af denotes the SAR image features aggregated by class; z_f is the flattened feature; ŷ is the classification result; f_flatten(·) and f_classifier(·) are the flattening operation and the classification model respectively; and f_Dropout(·) is a dropout layer with a drop rate of 0.2.
In step S106, supervised training is performed on the assembled multi-source feature fusion classification model to obtain suitable model parameters:
(1) The labelled training samples are input into the network model to be trained, which outputs label predictions for them;
(2) The feature separation and aggregation loss functions are computed, and the loss between the predicted labels and the real labels is computed with a cross-entropy loss, as follows:

L = L_Agg + L_Sep + L_Cls

L_Sep = Σ_{i≠j} Sep(z_i, z_j)

L_Agg = Σ_k Agg(z_k, p_k)

L_Cls = CE(ŷ, y)

where z_i, z_j are features of different samples; z_k and p_k are a sample feature and the class code of its class; ŷ and y are the predicted label and the real label respectively; Sep(a, b) = a·b/(‖a‖₂·‖b‖₂) and Agg(a, b) = −a·b/(‖a‖₂·‖b‖₂) are the feature separation and aggregation loss functions respectively; and CE(a, b) is the cross-entropy loss function;
(3) The network parameters are trained by stochastic gradient descent until the network converges, and the optimal network parameters are saved to complete the ship classification.
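The three loss terms can be sketched directly from their definitions (NumPy; the averaging convention over pairs and samples is an assumption, since the normalization is not legible in this copy, and one code per class is assumed for brevity):

```python
import numpy as np

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def sep_loss(Z):
    """Mean cosine similarity over distinct sample pairs: minimizing it
    pushes different samples' features apart (Sep(a, b) = cos(a, b))."""
    B = len(Z)
    pairs = [(i, j) for i in range(B) for j in range(B) if i != j]
    return sum(cos(Z[i], Z[j]) for i, j in pairs) / len(pairs)

def agg_loss(Z, P, labels):
    """Mean negative cosine similarity between each feature and its
    class code (one code per class assumed): minimizing it pulls
    same-class features toward their code."""
    return sum(-cos(z, P[k]) for z, k in zip(Z, labels)) / len(Z)

def ce_loss(logits, label):
    """Cross-entropy via a numerically stable log-sum-exp."""
    logits = logits - logits.max()
    return float(np.log(np.exp(logits).sum()) - logits[label])
```

The total loss L = L_Agg + L_Sep + L_Cls is then the sum of the three, backpropagated through both branches and the fusion head.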
The invention has the beneficial effects that:
1. The invention processes the HRRP and SAR data with multiple preprocessing methods, eliminating irrelevant information, recovering useful real information and enhancing the detectability of the relevant information, thereby improving the reliability and accuracy of recognition. Prior information extracted from the HRRP data accelerates the convergence of the model.
2. In the SAR feature extraction module, the invention adopts the strategy of first separating by sample features and then aggregating by class features, which alleviates the problems of small inter-class difference and large intra-class difference in ship detection. The effectiveness of the extracted features is improved, enhancing the classification precision and robustness.
3. The method classifies by HRRP and SAR data feature-level fusion, fully combining the geometric information of the target in the SAR image with the detail information of the target in the HRRP data, improving the precision and robustness of ship classification.
Drawings
Fig. 1 is a flowchart of a ship classification method provided by an embodiment of the present invention.
Fig. 2 is a schematic diagram of a SAR preprocessing flow provided by an embodiment of the present invention.
Fig. 3 is a schematic diagram of an HRRP preprocessing flow provided by an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of a SAR feature separation and aggregation module according to an embodiment of the present invention.
Fig. 5 is a schematic structural diagram of an HRRP feature extraction module according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of the overall structure provided by the embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
An embodiment of the invention, a ship classification method based on HRRP and SAR data feature level fusion, is explained in further detail below with reference to the attached drawings.
As shown in fig. 1, the ship classification method based on HRRP and SAR data feature level fusion provided by the present invention includes the following steps:
S101: inputting HRRP and SAR detection data of the same target at the same time, respectively preprocessing them, and dividing a training set and a test set;
S102: constructing an SAR image feature separation module to reduce the correlation of features among samples and increase the sample feature distance;
S103: constructing an SAR image feature aggregation module to aggregate the similar features in the separated sample features output by S102, reducing the intra-class distance, increasing the inter-class distance and enhancing the classification performance;
S104: constructing a one-dimensional range profile feature extraction module based on an attention mechanism to extract ship detail features from the HRRP;
S105: constructing an HRRP and SAR data feature fusion classification module to fuse the features of the HRRP and SAR data and classify the target;
S106: carrying out supervised training on the built multi-source feature fusion classification model to obtain parameters suitable for the model;
S107: sending the ship target data to be classified into the trained multi-source feature fusion classification model for classification to obtain a classification result.
As shown in fig. 1, the ship classification method based on HRRP and SAR data feature level fusion provided by the present invention is implemented as follows:
(1) HRRP and SAR detection data of the same target at the same time are input. Refined Lee filtering, morphological filtering and data enhancement are applied to the SAR image; L2-norm normalization, logarithmic transformation, center-of-gravity alignment, equal-length processing, sliding-window processing and scattering-center information extraction are applied to the HRRP data.
(1a) To eliminate irrelevant information from the data and enhance the detectability of relevant information, the HRRP and SAR data are preprocessed separately. The SAR image first undergoes refined Lee filtering to remove speckle noise; the refined Lee filter is simulated with a neural network model, and adding a channel attention mechanism on top of an auto-encoder framework suppresses the speckle noise in the SAR image well. Morphological filtering is then performed to enhance the geometric contour information. The preprocessing step can be expressed as:

X̂ = MF(Lee(X))

Lee(X) = X + f_conv(Cat(f_conv(X), f_CBAM(f_ReLU(f_conv(X)))))

where X, X̂ ∈ R^(H×W) are the input and the preprocessed SAR images respectively, and H and W are the image size; Lee(·) is the network-simulated refined Lee filtering; MF(·) is morphological filtering; f_conv(·) and f_ReLU(·) are convolution and activation operations respectively; Cat(·) denotes concatenation along the feature channel dimension; f_CBAM(·) denotes the channel attention mechanism.
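The patent writes MF(·) for morphological filtering without specifying the operation. A minimal NumPy sketch of one common choice, grey-scale opening (erosion then dilation), which removes bright speckle smaller than the structuring element while keeping the ship contour:

```python
import numpy as np

def erode(img, k=3):
    """Grey-scale erosion: k x k minimum filter (edge-padded)."""
    p = k // 2
    x = np.pad(img, p, mode='edge')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = x[i:i + k, j:j + k].min()
    return out

def dilate(img, k=3):
    """Grey-scale dilation: k x k maximum filter (edge-padded)."""
    p = k // 2
    x = np.pad(img, p, mode='edge')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = x[i:i + k, j:j + k].max()
    return out

def morphological_open(img, k=3):
    """MF(.) sketch: opening = dilation of the erosion; suppresses
    bright clutter smaller than the k x k structuring element."""
    return dilate(erode(img, k), k)
```

An isolated bright pixel (speckle-like) is removed entirely, while a region at least as large as the structuring element survives.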
(1b) The HRRP is first L2-norm normalized to weaken amplitude sensitivity; logarithmic transformation then reduces the disparity between strong and weak scattering centers; center-of-gravity alignment weakens translation sensitivity; 3200 points around the center are cropped for equal-length processing; and sliding-window processing with a step of 200 and an overlap of 60 yields the HRRP sliding-window matrix. At the same time, scattering-center information is extracted: the target's radial length, number of scattering centers, profile skewness, variance, sum of squared gradients, overall entropy, second moment, third moment, mean, symmetry and descaled structural features. These steps can be expressed as:

h_w = F_w(F_el(F_g(F_log(F_L2norm(h)))))

h_info = F_info(F_el(F_g(F_log(F_L2norm(h)))))

where h = [h_1, h_2, ..., h_M] is the original HRRP data and M is the total number of range cells it contains; F_info denotes scattering-center information extraction; h_w is the HRRP data output by the final sliding window, and h_info is the extracted HRRP scattering-center information.
(2) An SAR image feature separation module is constructed to reduce the correlation between sample features and increase the feature distance between samples.
First, a ResNet50 network extracts features from a pair of data-enhanced SAR images X̂_1, X̂_2, giving features Z_1, Z_2; processing is done in batches, Z_1, Z_2 ∈ R^(B×c×h×w), where B is the batch size. A feature separation module is constructed to perform feature separation on different samples, i.e., to project different sample features into a feature space with a high degree of separation. The feature separation module consists of convolution, activation and batch-normalization layers, and this structure can project the feature space effectively. The steps can be expressed as:

Z_i = ResNet50(X̂_i), i = 1, 2

Z_i^s = Separate(Z_i), i = 1, 2

Separate(·) = f_ReLU(f_BN(f_conv(f_ReLU(f_BN(f_conv(·))))))

where X̂_1, X̂_2 are the SAR data obtained after two different data-enhancement transforms; Z_1, Z_2 ∈ R^(B×c×h×w) are the features extracted by ResNet50, with c, h, w the channel number, height and width of the features; and Separate(·) is the feature separation module that projects different sample features into a feature space with high separation.
(3) An SAR image feature aggregation module is constructed to aggregate similar features among the separated sample features, reducing the intra-class distance, increasing the inter-class distance and enhancing the classification performance.
(3a) An integration module is constructed to reduce the dimension of the separated sample features while aggregating same-class features. The feature integration module consists of two convolutions: the first, a 1 × 1 convolution layer, reduces the number of features along the channel dimension; the second, with 3 × 3 kernels, fuses information along the spatial dimension. Through the integration module, the network can better handle the latent relationships among the previously separated features, thereby speeding up the class-feature aggregation process. The aggregated features are denoted Z_i^f, i = 1, 2.
(3b) So that the features of each class can be aggregated, class codes P are added to guide the feature aggregation, each class being encoded by multiple codes. These steps can be expressed as:

Z_i^f = Integration(Z_i^s), i = 1, 2

Integration(·) = f_ReLU(f_BN(f_conv(f_ReLU(f_BN(f_conv(·))))))

where Z_i^s are the separated sample features; Integration(·) is the feature integration module; and P ∈ R^(C×N×K) are the class codes, with C, N and K the number of channels, the number of codes per class and the number of classes respectively.
(4) And constructing a one-dimensional range profile feature extraction module based on an attention mechanism, and extracting ship detail features in the HRRP.
The VGG11 network with attention mechanism is constructed to extract the features of the HRRP data, and the attention mechanism adopts a 1D convolution Channel Attention Module (CAM) and can effectively extract the features of the HRRP data. Can be represented by the following formula:
h_f = VGG11_CAM(h)
wherein h denotes the original HRRP data; VGG11_CAM(·) is the feature extraction network with the channel attention mechanism; h_f is the resulting one-dimensional range profile feature.
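The internals of the 1-D Channel Attention Module are not spelled out in the text; a plausible PyTorch sketch, following the channel branch of CBAM with 1-D convolutions (the reduction ratio and sizes are assumptions), is:

```python
import torch
import torch.nn as nn

class ChannelAttention1D(nn.Module):
    """1-D Channel Attention Module (CAM), in the spirit of CBAM's channel
    branch: global average- and max-pooling over the length dimension, a
    shared bottleneck of 1-D convolutions, and a sigmoid gate."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels // reduction, channels, kernel_size=1),
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, L)
        avg = self.mlp(x.mean(dim=2, keepdim=True))      # (B, C, 1)
        mx = self.mlp(x.amax(dim=2, keepdim=True))       # (B, C, 1)
        return x * self.sigmoid(avg + mx)                # channel-reweighted features

h = torch.randn(4, 64, 200)          # a batch of 1-D feature maps from HRRP data
h_att = ChannelAttention1D(64)(h)
assert h_att.shape == h.shape
```

Such a module would be interleaved with the 1-D convolutional stages of the VGG11 backbone so that informative range-profile channels are emphasized before pooling.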
(5) Construct an HRRP and SAR data feature fusion classification module that fuses the features of the HRRP and SAR data to classify the target.
The aggregated SAR features, the extracted HRRP features, and the HRRP prior information are fused to obtain a joint feature, which is used for classification. The structure is shown by the following formulas:
z_f = f_flatten(Z_af)
Figure BDA0004033896390000131
f_classifier(·) = f_Linear(f_ReLU(f_Linear(f_Dropout(·))))
wherein Z_af denotes the SAR image features aggregated by class;
Figure BDA0004033896390000132
is the flattened feature;
Figure BDA0004033896390000133
is the classification result; f_flatten(·) and f_classifier(·) are the flattening operation and the classification model, respectively; f_Dropout(·) is a dropout layer with a drop rate of 0.2.
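Following the formulas above, the fusion and classification head can be sketched in PyTorch; the feature dimensions, hidden width, and class count are illustrative assumptions:

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """f_classifier(x) = Linear(ReLU(Linear(Dropout(x)))) with drop rate 0.2,
    applied to the concatenation of the flattened SAR features, the HRRP
    features, and the HRRP prior information."""
    def __init__(self, in_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.head = nn.Sequential(
            nn.Dropout(p=0.2),                 # innermost operation in the formula
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, z_f, h_f, h_info):
        joint = torch.cat([z_f, h_f, h_info], dim=1)  # feature-level fusion
        return self.head(joint)

z_f = torch.randn(4, 256)    # flattened, class-aggregated SAR features
h_f = torch.randn(4, 128)    # HRRP features from the attention VGG11
h_info = torch.randn(4, 11)  # HRRP scattering-center prior information
logits = FusionClassifier(256 + 128 + 11, 64, 3)(z_f, h_f, h_info)
assert logits.shape == (4, 3)
```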
(6) Perform supervised training on the constructed multi-source feature fusion classification model to obtain suitable model parameters.
(6a) Input the labeled training samples into the network model to be trained, and output the label predictions of the training samples;
(6b) Compute the feature separation loss and aggregation loss functions, and compute the loss between the predicted labels and the true labels using the following cross-entropy loss function:
L = L_Agg + L_Sep + L_Cls
Figure BDA0004033896390000134
Figure BDA0004033896390000141
Figure BDA0004033896390000142
wherein
Figure BDA0004033896390000143
denotes the features of different samples;
Figure BDA0004033896390000144
and p_k denote the sample features and the class codes of the corresponding classes;
Figure BDA0004033896390000145
denote the predicted and true labels, respectively; Sep(a, b) = a·b/(||a||_2·||b||_2) and Agg(a, b) = −a·b/(||a||_2·||b||_2) are the feature separation loss and aggregation loss functions, respectively; CE(a, b) is the cross-entropy loss function.
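The separation and aggregation losses defined above are simply the cosine similarity and its negative: minimizing Sep pushes different samples' features apart, while minimizing Agg pulls features toward their class codes. A minimal PyTorch sketch (the batch-mean reduction is an assumption):

```python
import torch
import torch.nn.functional as F

def sep_loss(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Sep(a, b) = a.b / (||a||_2 ||b||_2): minimizing the cosine
    similarity decorrelates features of different samples."""
    return F.cosine_similarity(a.flatten(1), b.flatten(1), dim=1).mean()

def agg_loss(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Agg(a, b) = -a.b / (||a||_2 ||b||_2): minimizing the negative
    cosine similarity pulls features toward their class codes."""
    return -F.cosine_similarity(a.flatten(1), b.flatten(1), dim=1).mean()

a = torch.tensor([[1.0, 0.0]])
b = torch.tensor([[0.0, 1.0]])
assert torch.isclose(sep_loss(a, a), torch.tensor(1.0))   # identical features
assert torch.isclose(agg_loss(a, a), torch.tensor(-1.0))
assert torch.isclose(sep_loss(a, b), torch.tensor(0.0))   # orthogonal features
```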
(6c) Train the network parameters using stochastic gradient descent until the network converges, and save the optimal network parameters to complete ship classification.
In conclusion, the invention realizes a classification model based on feature-level fusion of HRRP and SAR data, used for ship classification.
Various corresponding changes and modifications can be made by those skilled in the art based on the above technical solutions and concepts, and all such changes and modifications should be included in the protection scope of the present invention.

Claims (7)

1. A ship classification method based on feature-level fusion of HRRP and SAR data, characterized by comprising the following steps:
S101: acquiring HRRP and SAR detection data of the same ship target at the same moment, preprocessing them respectively, and dividing a training set and a test set;
S102: constructing an SAR image feature separation module to reduce the correlation of features among the detection data samples and increase the inter-sample feature distance;
S103: constructing an SAR image feature aggregation module to aggregate the same-class features among the separated sample features output by S102, reducing the intra-class distance, increasing the inter-class distance, and enhancing classification performance;
S104: constructing a one-dimensional range profile feature extraction module based on an attention mechanism to extract ship detail features from the HRRP;
S105: constructing an HRRP and SAR data feature fusion classification module to fuse the features of the HRRP and SAR data for target classification;
S106: performing supervised training on the constructed multi-source feature fusion classification model to obtain suitable model parameters;
S107: feeding the ship target data to be classified into the trained multi-source feature fusion classification model for classification to obtain a classification result.
2. The ship classification method based on feature-level fusion of HRRP and SAR data according to claim 1, wherein in step S101, refined Lee filtering, morphological filtering, and data enhancement are applied to the SAR image; and L2-norm normalization, logarithmic transformation, center-of-gravity alignment, equal-length processing, sliding-window processing, and scattering-center information extraction are applied to the HRRP data;
the SAR image is firstly subjected to refined Lee filtering to remove speckle noise, the refined Lee filtering is simulated by using a neural network model, a channel attention mechanism is added based on a self-encoder framework, then morphological filtering is carried out to enhance geometric contour information, and the preprocessing step can be expressed as the following formula:
Figure FDA0004033896380000021
Lee(X) = X + f_conv(Cat(f_conv(X), f_CBAM(f_ReLU(f_conv(X)))))
wherein
Figure FDA0004033896380000022
denote the input and preprocessed SAR images, respectively, where H and W are the image height and width; Lee(·) is the refined Lee filter simulated by the network; MF(·) is morphological filtering; f_conv(·) and f_ReLU(·) are convolution and activation operations, respectively; Cat(·) denotes concatenation along the feature channel dimension; f_CBAM(·) denotes the channel attention mechanism;
the HRRP data are first L2-norm normalized, then logarithmically transformed, center-of-gravity aligned, truncated to the central 3200 points, and processed to equal length; overlapping sliding-window processing (window length 200, step 60) is performed to obtain the HRRP sliding-window matrix; meanwhile, scattering-center information is extracted, including the radial length, the number of scattering centers, the profile skewness, the variance, the sum of squared gradients, the overall entropy, the second moment, the third moment, the mean, the symmetry, and the descaled structural features; these steps can be expressed as:
h_w = F_w(F_el(F_g(F_log(F_L2norm(h)))))
h_info = F_w(F_el(F_g(F_log(F_L2norm(h)))))
Figure FDA0004033896380000023
Figure FDA0004033896380000024
wherein h = [h_1, h_2, …, h_M] denotes the original HRRP data, M denotes the total number of range cells contained in the HRRP data, h_w is the final sliding-window-processed HRRP output, and h_info is the extracted HRRP scattering-center information.
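The HRRP preprocessing chain in this claim can be sketched in NumPy as follows. This is a minimal sketch under stated assumptions: the window length 200 and step 60 are one reading of the claim's sliding-window parameters, and the alignment simply rolls the center of gravity to the middle of the profile; the F_* operator internals are not specified by the claim.

```python
import numpy as np

def preprocess_hrrp(h: np.ndarray, win: int = 200, step: int = 60) -> np.ndarray:
    """L2-norm normalization, log transform, center-of-gravity alignment,
    and overlapping sliding-window extraction for one HRRP sample."""
    h = h / np.linalg.norm(h)                             # L2-norm normalization
    h = np.log1p(np.abs(h))                               # logarithmic transform
    cog = int(np.sum(np.arange(len(h)) * h) / np.sum(h))  # center of gravity
    h = np.roll(h, len(h) // 2 - cog)                     # align CoG to the middle
    # overlapping sliding-window matrix
    starts = range(0, len(h) - win + 1, step)
    return np.stack([h[s:s + win] for s in starts])

h = np.abs(np.random.randn(3200))   # a synthetic 3200-cell HRRP
H = preprocess_hrrp(h)
assert H.shape == (51, 200)         # (3200 - 200) / 60 + 1 = 51 windows
```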
3. The ship classification method based on feature-level fusion of HRRP and SAR data according to claim 1, wherein step S102 specifically comprises:
first, feature extraction is performed with a ResNet50 network on a pair of data-enhanced SAR images
Figure FDA0004033896380000031
to obtain the features
Figure FDA0004033896380000032
which are processed in batch units,
Figure FDA0004033896380000033
where B is the batch size; a feature separation module is constructed to perform feature separation on different samples, i.e., to project the different sample features into a feature space with a high degree of separation; the feature separation module consists of convolutional layers, activation layers, and batch normalization layers, a structure that effectively projects the feature space; its steps can be expressed by the following formulas:
Figure FDA0004033896380000034
Figure FDA0004033896380000035
Separate(·) = f_ReLU(f_BN(f_conv(f_ReLU(f_BN(f_conv(·))))))
wherein
Figure FDA0004033896380000036
denote the SAR data after different data-enhancement transformations;
Figure FDA0004033896380000037
are the features extracted by ResNet50, where c, h, and w are the number of channels, the height, and the width of the features, respectively; Separate(·) is the feature separation module, which projects the sample features into a feature space with a high degree of separation.
4. The ship classification method based on feature-level fusion of HRRP and SAR data according to claim 1, wherein the SAR image feature aggregation module in step S103 has the following structure:
an integration module is constructed to reduce the dimensionality of the separated sample features while aggregating same-class features; the feature integration module consists of a two-step convolution, in which the first 1 × 1 convolutional layer reduces the number of features along the channel dimension and the second convolutional layer with 3 × 3 kernels fuses information along the spatial dimensions; the aggregated features produced by the integration module are denoted
Figure FDA0004033896380000041
class codes P are added to guide feature aggregation, and each class is encoded by multiple class codes P; this can be represented by the following steps:
Figure FDA0004033896380000042
Integration(·) = f_ReLU(f_BN(f_conv(f_ReLU(f_BN(f_conv(·))))))
Figure FDA0004033896380000043
wherein
Figure FDA0004033896380000044
denotes the separated sample features; Integration(·) is the feature integration module;
Figure FDA0004033896380000045
denotes the class codes, where C, N, and K are the number of channels, the number of codes per class, and the number of classes, respectively.
5. The ship classification method based on feature-level fusion of HRRP and SAR data according to claim 1, wherein the structure of the one-dimensional range profile feature extraction module in step S104 is as follows:
a VGG11 network with an attention mechanism is constructed to extract the features of the HRRP data; the attention mechanism adopts a 1D-convolution Channel Attention Module (CAM), which effectively extracts the features of the HRRP data and can be expressed by the following formula:
h_f = VGG11_CAM(h)
wherein h denotes the original HRRP data; VGG11_CAM(·) is the feature extraction network with the channel attention mechanism; h_f is the resulting one-dimensional range profile feature.
6. The ship classification method based on feature-level fusion of HRRP and SAR data according to claim 1, wherein step S105 constructs an HRRP and SAR data feature fusion classification module that fuses the features of the HRRP and SAR data to classify the target;
the aggregated SAR features, the extracted HRRP features, and the HRRP prior information are fused to obtain a joint feature, which is used for classification; the structure is shown by the following formulas:
z_f = f_flatten(Z_af)
Figure FDA0004033896380000051
f_classifier(·) = f_Linear(f_ReLU(f_Linear(f_Dropout(·))))
wherein Z_af denotes the SAR image features aggregated by class;
Figure FDA0004033896380000052
is the flattened feature;
Figure FDA0004033896380000053
is the classification result; f_flatten(·) and f_classifier(·) are the flattening operation and the classification model, respectively; f_Dropout(·) is a dropout layer with a drop rate of 0.2.
7. The ship classification method based on feature-level fusion of HRRP and SAR data according to claim 1, wherein step S106 performs supervised training on the constructed multi-source feature fusion classification model to obtain suitable model parameters;
(1) inputting the labeled training samples into the network model to be trained, and outputting the label predictions of the training samples;
(2) computing the feature separation loss and aggregation loss functions, and computing the loss between the predicted labels and the true labels using the following cross-entropy loss function:
L = L_Agg + L_Sep + L_Cls
Figure FDA0004033896380000054
Figure FDA0004033896380000055
Figure FDA0004033896380000056
wherein
Figure FDA0004033896380000061
denotes the features of different samples;
Figure FDA0004033896380000064
and p_k denote the sample features and the class codes of the corresponding classes;
Figure FDA0004033896380000063
denote the predicted and true labels, respectively; Sep(a, b) = a·b/(||a||_2·||b||_2) and Agg(a, b) = −a·b/(||a||_2·||b||_2) are the feature separation loss and aggregation loss functions, respectively; CE(a, b) is the cross-entropy loss function;
(3) training the network parameters using stochastic gradient descent until the network converges, and saving the optimal network parameters to complete the ship classification.
CN202211739219.4A 2022-12-31 2022-12-31 Ship classification method based on HRRP and SAR data feature level fusion Pending CN115909078A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211739219.4A CN115909078A (en) 2022-12-31 2022-12-31 Ship classification method based on HRRP and SAR data feature level fusion
CN202310321258.0A CN116343041A (en) 2022-12-31 2023-03-29 Ship classification method based on feature level fusion of HRRP and SAR data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211739219.4A CN115909078A (en) 2022-12-31 2022-12-31 Ship classification method based on HRRP and SAR data feature level fusion

Publications (1)

Publication Number Publication Date
CN115909078A true CN115909078A (en) 2023-04-04

Family

ID=86484739

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202211739219.4A Pending CN115909078A (en) 2022-12-31 2022-12-31 Ship classification method based on HRRP and SAR data feature level fusion
CN202310321258.0A Pending CN116343041A (en) 2022-12-31 2023-03-29 Ship classification method based on feature level fusion of HRRP and SAR data

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202310321258.0A Pending CN116343041A (en) 2022-12-31 2023-03-29 Ship classification method based on feature level fusion of HRRP and SAR data

Country Status (1)

Country Link
CN (2) CN115909078A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116385319A (en) * 2023-05-29 2023-07-04 中国人民解放军国防科技大学 Radar image speckle filtering method and device based on scene cognition
CN116385319B (en) * 2023-05-29 2023-08-15 中国人民解放军国防科技大学 Radar image speckle filtering method and device based on scene cognition

Also Published As

Publication number Publication date
CN116343041A (en) 2023-06-27


Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20230404