CN114973016A - Dual-polarization radar ship classification method based on grouping bilinear convolutional neural network


Info

Publication number
CN114973016A
Authority
CN
China
Prior art keywords
layer
bilinear
convolution
series
pooling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210618884.1A
Other languages
Chinese (zh)
Inventor
He Jinglu (何敬鲁)
Chang Wenlong (常文龙)
Wang Fuping (王富平)
Liu Ying (刘颖)
Li Yinghua (李莹华)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an University of Posts and Telecommunications
Original Assignee
Xi'an University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an University of Posts and Telecommunications
Priority to CN202210618884.1A
Publication of CN114973016A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G06V20/13 - Satellite images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 - Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 - Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention relates to a dual-polarization radar ship classification method based on a grouping bilinear convolutional neural network. The method comprises acquiring a data set, expanding the data set, constructing the grouping bilinear convolutional neural network, constructing a multi-polarization channel fusion loss function, training the network, and testing it. Compared with a conventional bilinear pooling layer, the grouped design greatly reduces the computational cost and improves training efficiency. The invention also provides a new loss function that balances the importance of self-bilinear and cross-bilinear pooling and achieves accurate classification of ships.

Description

Dual-polarization radar ship classification method based on grouping bilinear convolutional neural network
Technical Field
The invention belongs to the technical field of radar target identification, and particularly relates to a synthetic aperture radar image ship target classification method.
Background
Over the past few decades, the launch of several advanced synthetic aperture radar (SAR) satellites has driven the development of the SAR remote sensing field. Classifying vessels from SAR images is an advanced and challenging task whose essence is to fully mine the key features that distinguish different categories. Early work focused on designing traditional handcrafted features derived from geometric and scattering properties. Such features, e.g., geometry and backscatter intensity density, were mostly evaluated on limited high-resolution SAR ship samples, and their performance degrades significantly when applied to medium-resolution SAR images. In addition, large-scale SAR ship data poses challenges to these features. Researchers in the SAR remote sensing field have therefore striven to apply deep learning techniques to obtain more discriminative SAR ship representations. Bentes et al. proposed a multi-input-resolution convolutional neural network for classifying marine targets in X-band TerraSAR-X satellite images. In previous studies, two resolution-specific convolutional neural network models adapted from residual networks (ResNet) and dense convolutional networks (DenseNet) were proposed to perform vessel classification in high- and medium-resolution SAR images, respectively, both achieving significant performance improvements.
Most existing classification methods focus on the intensity information of a single-channel SAR image. As pioneering work on dual polarization, Xi et al. proposed a dual fusion network with feature loss (DFSN), which adds a fusion loss, for dual-polarization SAR ship classification. Zeng et al. developed a hybrid channel feature loss (HCFL) for deep convolutional neural network features to jointly exploit the information contained in the SAR images of both polarizations. Furthermore, Zhang et al. proposed a squeeze-and-excitation Laplacian pyramid network with dual-polarization feature fusion (SE-LPN-DPFF) for SAR vessel classification.
Most previous work has focused on classifying single-polarization SAR images, neglecting the information inherent in the imaging process. The bilinear pooling method can capture second-order interactions between different feature channels, but it carries a great deal of redundant information and incurs very large parameter counts and computational cost.
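To make the cost concern concrete: full bilinear pooling of a D-channel feature map produces a D × D descriptor, whereas splitting the channels into G groups and pooling within each group yields G descriptors of size (D/G) × (D/G). A back-of-the-envelope comparison in Python (the channel and group counts below are illustrative assumptions, not values from the patent):

```python
# Illustrative comparison of bilinear descriptor sizes: full vs. grouped.
# D and G are assumed values for illustration only.
D = 512   # hypothetical number of feature channels
G = 8     # hypothetical number of channel groups

full_bilinear = D * D                    # 262,144 elements
grouped_bilinear = G * (D // G) ** 2     # 8 * 64^2 = 32,768 elements

print(f"full: {full_bilinear}, grouped: {grouped_bilinear}, "
      f"reduction: {full_bilinear / grouped_bilinear:.0f}x")  # 8x smaller
```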
An urgent technical problem in the field of radar ship classification is therefore to provide a classification method that is both fast and accurate.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a dual-polarization radar ship classification method based on a grouping bilinear convolutional neural network that has low computational cost, high training efficiency, fast classification, and accurate results.
The technical scheme adopted for solving the technical problems comprises the following steps:
(1) Acquiring a data set
Two sets of radar ship images, VV polarization (vertical transmit, vertical receive signals) and VH polarization (vertical transmit, horizontal receive signals), are selected from the OpenSARShip database and divided 8:2 into a training set and a test set.
(2) Augmenting data sets
The training set is expanded 8-fold by flipping, rotation, translation, and noise addition to obtain an extended training set.
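One way to read the eightfold expansion is one original chip plus seven transformed copies; a minimal NumPy sketch under that assumption (the patent names flipping, rotation, translation, and noise addition but not their parameters, so the settings below are assumptions):

```python
import numpy as np

def augment_8x(img, rng):
    """Expand one SAR chip into 8 variants via flipping, rotation,
    translation, and additive noise (exact parameters are assumptions)."""
    return [
        img,                                    # original
        np.fliplr(img),                         # horizontal flip
        np.flipud(img),                         # vertical flip
        np.rot90(img, 1),                       # rotate 90 degrees
        np.rot90(img, 2),                       # rotate 180 degrees
        np.rot90(img, 3),                       # rotate 270 degrees
        np.roll(img, shift=4, axis=1),          # small horizontal translation
        img + rng.normal(0.0, 0.01, img.shape), # additive Gaussian noise
    ]

# usage sketch:
# rng = np.random.default_rng(0)
# extended = [v for chip in train_set for v in augment_8x(chip, rng)]
```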
(3) Constructing a grouped bilinear convolutional neural network
The grouping bilinear convolutional neural network is formed by connecting a deep dense connection layer and a grouping bilinear pooling layer in series.
(4) Construction of a multi-polarization channel fusion loss function
Determining the multi-polarization channel fusion loss function L_MPFL according to:

L_MPFL = α(L_VH-VH + L_VV-VV) + (1 - α)L_VH-VV

L_VH-VH = -(1/N) Σ_{i=1}^{N} y_i log(ŷ_i^(VH-VH))

L_VV-VV = -(1/N) Σ_{i=1}^{N} y_i log(ŷ_i^(VV-VV))

L_VH-VV = -(1/N) Σ_{i=1}^{N} y_i log(ŷ_i^(VH-VV))

where L_VH-VH is the self-bilinear pooling loss of the VH-polarized (vertical transmit, horizontal receive) radar image, L_VV-VV is the self-bilinear pooling loss of the VV-polarized (vertical transmit, vertical receive) radar image, and L_VH-VV is the cross-bilinear pooling loss of the VH- and VV-polarized radar images; each loss is the cross-entropy between the one-hot label and the corresponding softmax output. α is a hyperparameter with α ∈ (0, 1), y_i is the one-hot encoding of the true ship label, N is the total number of samples in the extended training set, and ŷ_i is the output of the softmax function for the dual/single-polarization synthetic aperture radar image.
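Read as standard cross-entropy over softmax outputs, the fused loss is straightforward to express; a minimal PyTorch sketch (the three-branch interface and the default α = 0.3, taken from Example 1, are assumptions about how the heads are wired):

```python
import torch
import torch.nn.functional as F

def mpfl_loss(logits_vh: torch.Tensor, logits_vv: torch.Tensor,
              logits_cross: torch.Tensor, labels: torch.Tensor,
              alpha: float = 0.3) -> torch.Tensor:
    """Multi-polarization channel fusion loss L_MPFL.

    logits_*: (N, num_classes) outputs of the VH self-bilinear,
    VV self-bilinear, and VH-VV cross-bilinear branches; labels are
    integer class indices. F.cross_entropy applies log-softmax
    internally, matching -(1/N) sum_i y_i log(y_hat_i) with one-hot y.
    """
    l_vh_vh = F.cross_entropy(logits_vh, labels)     # self-bilinear, VH
    l_vv_vv = F.cross_entropy(logits_vv, labels)     # self-bilinear, VV
    l_vh_vv = F.cross_entropy(logits_cross, labels)  # cross-bilinear
    return alpha * (l_vh_vh + l_vv_vv) + (1.0 - alpha) * l_vh_vv
```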
(5) Training the grouping bilinear convolutional neural network
The extended training set is input into the grouping bilinear convolutional neural network, the classification result is output, and the network is trained with the loss function L_MPFL until it converges, yielding the trained grouping bilinear convolutional neural network.
(6) Testing the grouping bilinear convolutional neural network
The ship test set is input into the trained grouping bilinear convolutional neural network to obtain the dual-polarization radar ship classification result.
In step (3) of the invention, the deep dense connection layer is composed of a first deep dense connection layer and a second deep dense connection layer which have the same structure and are connected in parallel; the first deep dense connection layer is composed of a base layer S, a dense connection layer D1, a transition dimensionality-reduction layer T1, a dense connection layer D2, a transition dimensionality-reduction layer T2, a dense connection layer D3, a transition dimensionality-reduction layer T3, a dense connection layer D4, a transition dimensionality-reduction layer T4, and a dense connection layer D5 connected in series in that order.

The grouping bilinear pooling layer is composed of a convolution layer C1, a channel grouping layer, a first self-bilinear pooling layer, a cross-bilinear pooling layer, a second self-bilinear pooling layer, a fully connected layer FC, and an output layer. The input of the convolution layer C1 is connected to the output of the dense connection layer D5, and the output of C1 is connected to the channel grouping layer; the outputs of the channel grouping layer are connected to the inputs of the first self-bilinear pooling layer, the cross-bilinear pooling layer, and the second self-bilinear pooling layer; their outputs are connected to the input of the fully connected layer FC; and the output of the fully connected layer FC is connected to the input of the output layer.
In step (3) of the present invention, the first deep dense connection layer is constructed by the following method:
the base layer S is composed of 2 convolution blocks connected in series, each convolution block is composed of a convolution layer L 1 With batch normalization layer L 2 An activation function layer L 3 Formed in series in sequence, the activation function layer L 3 The output non-linear mapping relu (x) is as follows:
ReLU(x)=max(0,x)
wherein x is batch normalization L 2 The output of the layers, the convolution kernel size is 3, and the step length is 1.
The dense connection layer D1 is formed by connecting 3 convolution blocks with growth rate 3 in series; each convolution block consists of a batch normalization layer, an activation function layer, a convolution layer, and a dropout layer connected in series in that order; the convolution layer has kernel size 3 and stride 1; n denotes the n-th convolution block, n ∈ {1, 2, 3}.

The transition dimensionality-reduction layer T1 consists of a batch normalization layer, an activation function layer, a convolution layer, and an average pooling layer connected in series in that order; the convolution layer has kernel size 3 and stride 1, and the average pooling layer has pooling kernel size 2 and stride 2.

The dense connection layer D2 is formed by connecting 3 convolution blocks with growth rate 6 in series; each convolution block has the same batch normalization-activation-convolution-dropout structure, with convolution kernel size 3 and stride 1.

The transition dimensionality-reduction layer T2 has the same structure as T1: batch normalization, activation, a convolution layer with kernel size 3 and stride 1, and an average pooling layer with pooling kernel size 2 and stride 2, connected in series.

The dense connection layer D3 is formed by connecting 3 convolution blocks with growth rate 9 in series, with the same block structure, convolution kernel size 3, and stride 1.

The transition dimensionality-reduction layer T3 has the same structure as T1.

The dense connection layer D4 is formed by connecting 3 convolution blocks with growth rate 12 in series, with the same block structure, convolution kernel size 3, and stride 1.

The transition dimensionality-reduction layer T4 has the same structure as T1.

The dense connection layer D5 is formed by connecting 3 convolution blocks with growth rate 15 in series, with the same block structure, convolution kernel size 3, and stride 1.
The second deep dense connection layer is constructed in the same manner as the first deep dense connection layer.
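For concreteness, here is a minimal PyTorch sketch of one dense connection layer and one transition dimensionality-reduction layer as described above (class names, the dropout rate, the padding, and the transition output width are assumptions; the patent fixes only the kernel sizes, strides, growth rates, and pooling sizes):

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """One dense connection layer: 3 convolution blocks (BN -> ReLU ->
    3x3 Conv, stride 1 -> Dropout) whose outputs are densely
    concatenated, DenseNet-style."""
    def __init__(self, in_channels: int, growth_rate: int,
                 num_blocks: int = 3, p_drop: float = 0.1):
        super().__init__()
        self.blocks = nn.ModuleList()
        ch = in_channels
        for _ in range(num_blocks):
            self.blocks.append(nn.Sequential(
                nn.BatchNorm2d(ch),
                nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth_rate, kernel_size=3, stride=1, padding=1),
                nn.Dropout2d(p_drop),
            ))
            ch += growth_rate  # dense concatenation grows the width
        self.out_channels = ch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            x = torch.cat([x, block(x)], dim=1)  # concatenate along channels
        return x

class TransitionDown(nn.Module):
    """Transition dimensionality-reduction layer: BN -> ReLU ->
    3x3 Conv (stride 1) -> 2x2 average pooling (stride 2)."""
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.layer = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1),
            nn.AvgPool2d(kernel_size=2, stride=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layer(x)
```

Chaining S, D1, T1, …, D5 with growth rates 3, 6, 9, 12, and 15 then yields one deep dense connection branch; the VH and VV branches are two such networks in parallel.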
The grouping bilinear pooling layer is constructed as follows:

The convolution layer C1 has kernel size 1, stride 1, and D output channels, with D ∈ [3, 165].

The channel grouping layer is constructed as follows: the feature maps of the two polarizations are each divided, in channel-index order from 1 to D, into G groups, yielding sub-feature maps F_p:

F = {F_p}

where F_p ∈ R^(H×W×D) is a separated sub-feature map; H, W, and D are the height, width, and number of channels of the sub-feature map, respectively; p ∈ {1, 2, …, G}; G is the number of categories to be classified; and D is divisible by G.
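A short sketch of the channel grouping step (here D denotes the total channel count produced by C1; torch.split preserves channel-index order, as the text requires):

```python
import torch

def group_channels(feat: torch.Tensor, G: int):
    """Split a (N, D, H, W) feature map into G sub-feature maps of
    D // G channels each, in channel-index order."""
    N, D, H, W = feat.shape
    assert D % G == 0, "D must be divisible by G"
    return list(torch.split(feat, D // G, dim=1))  # G tensors of (N, D/G, H, W)

# usage sketch: with D = 78 channels and D divisible by 3 as in Example 1
# (suggesting G = 3 ship categories), each polarization yields 3
# sub-feature maps of 26 channels.
```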
The first self-bilinear pooling layer, the cross-bilinear pooling layer, and the second self-bilinear pooling layer are constructed as follows. The outer product of sub-feature maps gives the feature descriptor B^(m,n):

B^(m,n) = Σ_k f_k^m (f_k^n)^T

where F^m is the m-th sub-feature map of the first polarization, F^n is the n-th sub-feature map of the second polarization, and f_k is the feature vector of the sub-feature map along the channel dimension at spatial position k of the input feature map. Self-bilinear pooling is performed within a single polarization and cross-bilinear pooling across the two polarizations, and the 3 bilinear vectors z^G are determined according to:

b = vec(B)

y = sign(b) ⊙ sqrt(|b|)

z = y / ||y||_2

z^G = [z_1, z_2, …, z_G]

where z^G denotes the concatenation of the sub-bilinear vectors after element normalization and L2 regularization of the vectorized feature descriptors between the sub-feature maps of the two single-polarization images.
The output of the full connection layer FC is mapped to 128 dimensions.
The output layer maps to the number of classification categories.
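Putting the pooling equations above into code, a hedged PyTorch sketch (the tensor layout, the index-wise pairing of groups across polarizations, and the small epsilon are assumptions):

```python
import torch
import torch.nn.functional as F

def bilinear_pool(fm: torch.Tensor, fn: torch.Tensor) -> torch.Tensor:
    """Bilinear pooling of two (N, C, H, W) sub-feature maps: an outer
    product accumulated over spatial positions, then signed square-root
    element normalization and L2 normalization."""
    N, C, H, W = fm.shape
    fm = fm.reshape(N, C, H * W)
    fn = fn.reshape(N, C, H * W)
    B = torch.bmm(fm, fn.transpose(1, 2))            # (N, C, C), sum over k
    b = B.reshape(N, C * C)                          # b = vec(B)
    y = torch.sign(b) * torch.sqrt(b.abs() + 1e-12)  # element normalization
    return F.normalize(y, dim=1)                     # z = y / ||y||_2

def grouped_bilinear(groups_vh, groups_vv):
    """Build the three branch descriptors z^G by concatenating per-group
    self- and cross-bilinear vectors (index-wise group pairing across
    polarizations is an assumption)."""
    z_vh = torch.cat([bilinear_pool(g, g) for g in groups_vh], dim=1)
    z_vv = torch.cat([bilinear_pool(g, g) for g in groups_vv], dim=1)
    z_x = torch.cat([bilinear_pool(a, b)
                     for a, b in zip(groups_vh, groups_vv)], dim=1)
    return z_vh, z_vv, z_x
```

Each of the three descriptors then feeds the fully connected layer FC, which maps to 128 dimensions as stated above.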
Compared with the prior art, the invention has the following advantages:
the invention provides a grouping bilinear pooling layer structure aiming at improvement of bilinear pooling, and the grouping bilinear pooling layer structure divides a feature map into a plurality of sub-feature maps, and performs bilinear pooling series connection on the sub-feature maps to form compact bilinear vectors. Compared with the traditional bilinear pooling layer, the method greatly reduces the calculated amount and improves the training efficiency; the invention provides a new loss function, balances the importance of self-bilinear pooling and cross-bilinear pooling, and realizes accurate classification of ships.
Drawings
FIG. 1 is a process flow diagram of example 1 of the present invention.
Fig. 2 is a schematic diagram of the structure of a grouped bilinear convolutional neural network.
Detailed description of the preferred embodiments
The present invention will be described in further detail below with reference to the drawings and examples, but the present invention is not limited to the embodiments described below.
Example 1
The dual-polarization radar ship classification method based on the grouping bilinear convolutional neural network in the embodiment comprises the following steps (see fig. 1):
(1) Acquiring a data set
Two sets of radar ship images, VV polarization (vertical transmit, vertical receive signals) and VH polarization (vertical transmit, horizontal receive signals), are selected from the OpenSARShip database and divided 8:2 into a training set and a test set.
(2) Augmenting data sets
The training set is expanded 8-fold by flipping, rotation, translation, and noise addition to obtain an extended training set; these are conventional data augmentation methods in the field.
(3) Constructing a grouped bilinear convolutional neural network
As shown in Fig. 2, the grouping bilinear convolutional neural network of this embodiment is formed by connecting a deep dense connection layer and a grouping bilinear pooling layer in series.
The deep dense connection layer of this embodiment is composed of a first deep dense connection layer and a second deep dense connection layer which have the same structure and are connected in parallel. The first deep dense connection layer is formed by connecting a base layer S, a dense connection layer D1, a transition dimensionality-reduction layer T1, a dense connection layer D2, a transition dimensionality-reduction layer T2, a dense connection layer D3, a transition dimensionality-reduction layer T3, a dense connection layer D4, a transition dimensionality-reduction layer T4, and a dense connection layer D5 in series in that order.
The first deep dense connection layer is constructed by the following method:
the base layer S of this embodiment is composed of 2 convolution blocks connected in series, each convolution block being composed of a convolution layer L 1 With batch normalization layer L 2 An activation function layer L 3 Formed in series in sequence, the activation function layer L 3 The output non-linear mapping relu (x) is as follows:
ReLU(x)=max(0,x)
wherein x is batch normalization L 2 The output of the layers, the convolution kernel size is 3, and the step length is 1.
The dense connection layer D1 of this embodiment is formed by connecting 3 convolution blocks with growth rate 3 in series; each convolution block consists of a batch normalization layer, an activation function layer, a convolution layer, and a dropout layer connected in series in that order; the convolution layer has kernel size 3 and stride 1; n denotes the n-th convolution block, n ∈ {1, 2, 3}.

The transition dimensionality-reduction layer T1 of this embodiment consists of a batch normalization layer, an activation function layer, a convolution layer with kernel size 3 and stride 1, and an average pooling layer with pooling kernel size 2 and stride 2, connected in series in that order.

The dense connection layers D2, D3, D4, and D5 of this embodiment are likewise each formed by connecting 3 convolution blocks in series, with growth rates 6, 9, 12, and 15 respectively; each convolution block has the same batch normalization-activation-convolution-dropout structure, with convolution kernel size 3 and stride 1.

The transition dimensionality-reduction layers T2, T3, and T4 of this embodiment have the same structure as T1.
The second deep dense connection layer is constructed in the same manner as the first deep dense connection layer.
The grouping bilinear pooling layer of this embodiment is constructed as follows:

The convolution layer C1 of this embodiment has kernel size 1, stride 1, and D output channels, with D ∈ [3, 165] and D divisible by 3; in this embodiment D = 78.

The channel grouping layer of this embodiment is constructed as follows: the feature maps of the two polarizations are each divided, in channel-index order from 1 to D, into G groups, yielding sub-feature maps F_p:

F = {F_p}

where F_p ∈ R^(H×W×D) is a separated sub-feature map; H, W, and D are the height, width, and number of channels of the sub-feature map, respectively; p ∈ {1, 2, …, G}; G is the number of categories to be classified; and D is divisible by G.
The first self-bilinear pooling layer, the cross-bilinear pooling layer, and the second self-bilinear pooling layer of this embodiment are constructed as follows. The outer product of sub-feature maps gives the feature descriptor B^(m,n):

B^(m,n) = Σ_k f_k^m (f_k^n)^T

where F^m is the m-th sub-feature map of the first polarization, F^n is the n-th sub-feature map of the second polarization, and f_k is the feature vector of the sub-feature map along the channel dimension at spatial position k of the input feature map. Self-bilinear pooling is performed within a single polarization and cross-bilinear pooling across the two polarizations, and the 3 bilinear vectors z^G are determined according to:

b = vec(B)

y = sign(b) ⊙ sqrt(|b|)

z = y / ||y||_2

z^G = [z_1, z_2, …, z_G]

where z^G denotes the concatenation of the sub-bilinear vectors after element normalization and L2 regularization of the vectorized feature descriptors between the sub-feature maps of the two single-polarization images.
This step provides a grouping bilinear pooling layer structure: the feature map is divided into several sub-feature maps, which are bilinear-pooled and concatenated into a compact bilinear vector. Compared with a conventional bilinear pooling layer, this greatly reduces the computational cost and improves training efficiency.
The output of the full connection layer FC of the present embodiment is mapped to 128 dimensions.
The output layer of this embodiment maps to the number of classification categories.
The invention further provides a new loss function that balances the importance of self-bilinear and cross-bilinear pooling, achieving accurate ship classification.
(4) Constructing a multi-polarization channel fusion loss function
Determining the multi-polarization channel fusion loss function L_MPFL according to:

L_MPFL = α(L_VH-VH + L_VV-VV) + (1 - α)L_VH-VV

L_VH-VH = -(1/N) Σ_{i=1}^{N} y_i log(ŷ_i^(VH-VH))

L_VV-VV = -(1/N) Σ_{i=1}^{N} y_i log(ŷ_i^(VV-VV))

L_VH-VV = -(1/N) Σ_{i=1}^{N} y_i log(ŷ_i^(VH-VV))

where L_VH-VH is the self-bilinear pooling loss of the VH-polarized (vertical transmit, horizontal receive) radar image, L_VV-VV is the self-bilinear pooling loss of the VV-polarized (vertical transmit, vertical receive) radar image, and L_VH-VV is the cross-bilinear pooling loss of the VH- and VV-polarized radar images; α is a hyperparameter, α ∈ (0, 1), and takes the value 0.3 in this embodiment; y_i is the one-hot encoding of the true ship label; N is the total number of samples in the extended training set; and ŷ_i is the output of the softmax function for the dual/single-polarization synthetic aperture radar image.
(5) Training the grouping bilinear convolutional neural network
The extended training set is input into the grouping bilinear convolutional neural network, the classification result is output, and the network is trained with the loss function L_MPFL until L_MPFL converges, yielding the trained grouping bilinear convolutional neural network.
(6) Testing the grouping bilinear convolutional neural network
The ship test set is input into the trained grouping bilinear convolutional neural network to obtain the dual-polarization radar ship classification result.

This completes the dual-polarization radar ship classification method based on the grouping bilinear convolutional neural network.
Example 2
The dual-polarization radar ship classification method based on the grouping bilinear convolutional neural network comprises the following steps:
(1) Acquiring a data set
This procedure is the same as in example 1.
(2) Augmenting data sets
This procedure is the same as in example 1.
(3) Constructing a grouped bilinear convolutional neural network
The grouping bilinear convolutional neural network is formed by connecting a deep dense connection layer and a grouping bilinear pooling layer in series.
The structure of the deep dense connection layer and the grouped bilinear pooling layer is the same as that of embodiment 1.
The grouping bilinear pooling layer of this embodiment consists of a convolution layer C1, a channel grouping layer, a first self-bilinear pooling layer, a cross-bilinear pooling layer, a second self-bilinear pooling layer, a fully connected layer FC, and an output layer. The input of the convolution layer C1 is connected to the output of the dense connection layer D5, and the output of C1 is connected to the channel grouping layer; the outputs of the channel grouping layer are connected to the inputs of the first self-bilinear pooling layer, the cross-bilinear pooling layer, and the second self-bilinear pooling layer; their outputs are connected to the input of the fully connected layer FC; and the output of FC is connected to the input of the output layer. The convolution layer C1 has kernel size 1, stride 1, and D output channels, D ∈ [3, 165]; in this embodiment D = 3.
The rest of this step is the same as in Example 1.
(4) Construction of a multi-polarization channel fusion loss function
Determining the multi-polarization channel fusion loss function L_MPFL according to:

L_MPFL = α(L_VH-VH + L_VV-VV) + (1 - α)L_VH-VV

L_VH-VH = -(1/N) Σ_{i=1}^{N} y_i log(ŷ_i^(VH-VH))

L_VV-VV = -(1/N) Σ_{i=1}^{N} y_i log(ŷ_i^(VV-VV))

L_VH-VV = -(1/N) Σ_{i=1}^{N} y_i log(ŷ_i^(VH-VV))

where L_VH-VH is the self-bilinear pooling loss of the VH-polarized (vertical transmit, horizontal receive) radar image, L_VV-VV is the self-bilinear pooling loss of the VV-polarized (vertical transmit, vertical receive) radar image, and L_VH-VV is the cross-bilinear pooling loss of the VH- and VV-polarized radar images; α is a hyperparameter, α ∈ (0, 1), and takes the value 0.1 in this embodiment; y_i is the one-hot encoding of the true ship label; N is the total number of samples in the extended training set; and ŷ_i is the output of the softmax function for the dual/single-polarization synthetic aperture radar image.
The other steps are the same as in example 1.
This completes the dual-polarization radar ship classification method based on the grouping bilinear convolutional neural network.
Example 3
The dual-polarization radar ship classification method based on the grouping bilinear convolutional neural network comprises the following steps:
(1) Acquiring a data set
This procedure is the same as in example 1.
(2) Augmenting data sets
This procedure is the same as in example 1.
(3) Constructing a grouped bilinear convolutional neural network
The grouping bilinear convolutional neural network is formed by connecting a deep dense connection layer and a grouping bilinear pooling layer in series.
The structure of the deep dense connection layer and the grouped bilinear pooling layer is the same as that of embodiment 1.
The grouping bilinear pooling layer of this embodiment consists of a convolution layer C1, a channel grouping layer, a first self-bilinear pooling layer, a cross-bilinear pooling layer, a second self-bilinear pooling layer, a fully connected layer FC, and an output layer. The input of the convolution layer C1 is connected to the output of the dense connection layer D5, and the output of C1 is connected to the channel grouping layer; the outputs of the channel grouping layer are connected to the inputs of the first self-bilinear pooling layer, the cross-bilinear pooling layer, and the second self-bilinear pooling layer; their outputs are connected to the input of the fully connected layer FC; and the output of FC is connected to the input of the output layer. The convolution layer C1 has kernel size 1, stride 1, and D output channels, D ∈ [3, 165]; in this embodiment D = 165.
The rest of this step is the same as in Example 1.
(4) Construction of a multi-polarization channel fusion loss function
Determining the multi-polarization channel fusion loss function L_MPFL according to:

L_MPFL = α(L_VH-VH + L_VV-VV) + (1 - α)L_VH-VV

L_VH-VH = -(1/N) Σ_{i=1}^{N} y_i log(ŷ_i^(VH-VH))

L_VV-VV = -(1/N) Σ_{i=1}^{N} y_i log(ŷ_i^(VV-VV))

L_VH-VV = -(1/N) Σ_{i=1}^{N} y_i log(ŷ_i^(VH-VV))

where L_VH-VH is the self-bilinear pooling loss of the VH-polarized (vertical transmit, horizontal receive) radar image, L_VV-VV is the self-bilinear pooling loss of the VV-polarized (vertical transmit, vertical receive) radar image, and L_VH-VV is the cross-bilinear pooling loss of the VH- and VV-polarized radar images; α is a hyperparameter, α ∈ (0, 1), and takes the value 0.9 in this embodiment; y_i is the one-hot encoding of the true ship label; N is the total number of samples in the extended training set; and ŷ_i is the output of the softmax function for the dual/single-polarization synthetic aperture radar image.
The other steps were the same as in example 1.
This completes the dual-polarization radar ship classification method based on the grouping bilinear convolutional neural network.
In order to verify the effectiveness of the invention, the inventors carried out experimental studies and a comparative simulation experiment. The experimental conditions were as follows:
1. influence of grouping bilinear convolutional neural network on classification accuracy of radar naval vessel
The grouping bilinear convolutional neural network, the deep dense connection layer, and the bilinear pooling method were compared under different loss functions and different polarization modes; their influence on radar ship classification accuracy is shown in Table 1.
TABLE 1 Comparative experimental results on the effectiveness of each component of the method of the invention
[Table 1 appears only as an image in the original document.]
As can be seen from Table 1, the deep dense connection layer extracts features well, performs excellently, and achieves better classification; the combined VV and VH polarizations provide complementary information to each other, improving classification performance. Under the same loss function, the method of the invention improves slightly over the bilinear pooling method; comparing different loss functions with the method of the invention verifies the effectiveness of the proposed multi-polarization channel fusion loss function, and the use of multi-polarization information achieves accurate classification performance.
2. Comparative simulation experiment
The inventors carried out a comparative simulation experiment using the dual-polarization radar ship classification method based on the grouping bilinear convolutional neural network of Example 1 of the invention against the feature-loss dual fusion method (Comparative Experiment 1), the hybrid channel feature loss method (Comparative Experiment 2), and the squeeze-and-excitation Laplacian pyramid method (Comparative Experiment 3); the experimental results are shown in Table 2.
TABLE 2 Experimental results of the method of the invention and the 3 comparative methods
[Table 2 appears only as an image in the original document.]
As can be seen from Table 2, compared with other existing methods, the method of the invention uses the grouping bilinear convolutional neural network to better extract the features of the data set and explore the polarization information in dual-polarization ship images, and uses the grouping bilinear pooling model to fuse the dual-polarization feature information, thereby reducing the computational cost, improving training efficiency, and obtaining a better classification effect with higher accuracy.
The foregoing description is only an example of the present invention and is not intended to limit the invention, so that it will be apparent to those skilled in the art that various changes and modifications in form and detail may be made therein without departing from the spirit and scope of the invention.

Claims (4)

1. A dual-polarization radar ship classification method based on a grouping bilinear convolutional neural network is characterized by comprising the following steps:
(1) acquiring a data set
selecting, from the OpenSARShip database, two sets of radar ship images, VV polarization (vertical transmit, vertical receive signals) and VH polarization (vertical transmit, horizontal receive signals), and dividing them 8:2 into a training set and a test set;
(2) augmenting data sets
expanding the training set 8-fold by flipping, rotation, translation, and noise addition to obtain an extended training set;
(3) constructing a grouped bilinear convolutional neural network
The grouping bilinear convolutional neural network is formed by connecting a deep dense connection layer and a grouping bilinear pooling layer in series;
(4) construction of a multi-polarization channel fusion loss function
determining the multi-polarization channel fusion loss function L_MPFL according to:

L_MPFL = α(L_VH-VH + L_VV-VV) + (1 - α)L_VH-VV

L_VH-VH = -(1/N) Σ_{i=1}^{N} y_i log(ŷ_i^(VH-VH))

L_VV-VV = -(1/N) Σ_{i=1}^{N} y_i log(ŷ_i^(VV-VV))

L_VH-VV = -(1/N) Σ_{i=1}^{N} y_i log(ŷ_i^(VH-VV))

where L_VH-VH is the self-bilinear pooling loss of the VH-polarized (vertical transmit, horizontal receive) radar image, L_VV-VV is the self-bilinear pooling loss of the VV-polarized (vertical transmit, vertical receive) radar image, and L_VH-VV is the cross-bilinear pooling loss of the VH- and VV-polarized radar images; α is a hyperparameter, α ∈ (0, 1); y_i is the one-hot encoding of the true ship label; N is the total number of samples in the extended training set; and ŷ_i is the output of the softmax function for the dual/single-polarization synthetic aperture radar image;
(5) training the grouping bilinear convolutional neural network
inputting the extended training set into the grouping bilinear convolutional neural network, outputting the classification result, and training the grouping bilinear convolutional neural network with the loss function L_MPFL until the network converges, to obtain a trained grouping bilinear convolutional neural network;
(6) testing the grouping bilinear convolutional neural network
inputting the ship test set into the trained grouping bilinear convolutional neural network to obtain the dual-polarization radar ship classification result.
2. The dual-polarization radar ship classification method based on the grouping bilinear convolutional neural network as claimed in claim 1, wherein: in step (3), the deep dense connection layer consists of a first deep dense connection layer and a second deep dense connection layer of identical structure connected in parallel, the first deep dense connection layer consisting of a base layer S, a dense connection layer D1, a transition dimensionality-reduction layer T1, a dense connection layer D2, a transition dimensionality-reduction layer T2, a dense connection layer D3, a transition dimensionality-reduction layer T3, a dense connection layer D4, a transition dimensionality-reduction layer T4, and a dense connection layer D5 connected in series in that order;

the grouping bilinear pooling layer consists of a convolution layer C1, a channel grouping layer, a first self-bilinear pooling layer, a cross-bilinear pooling layer, a second self-bilinear pooling layer, a fully connected layer FC, and an output layer; the input of the convolution layer C1 is connected to the output of the dense connection layer D5, and its output to the channel grouping layer; the outputs of the channel grouping layer are connected to the inputs of the first self-bilinear pooling layer, the cross-bilinear pooling layer, and the second self-bilinear pooling layer; their outputs are connected to the input of the fully connected layer FC; and the output of the fully connected layer FC is connected to the input of the output layer.
3. The dual-polarized radar ship classification method based on the grouped bilinear convolutional neural network as claimed in claim 2, wherein in the step (3), the first deep dense connection layer is constructed as follows:
the base layer S is composed of 2 convolution blocks connected in series, each convolution block is composed of a convolution layer L 1 With batch normalization layer L 2 An activation function layer L 3 Formed in series in sequence, the activation function layer L 3 The output non-linear mapping relu (x) is as follows:
ReLU(x)=max(0,x)
wherein x is batch normalization L 2 The output of the layer, the convolution kernel size is 3, and the step length is 1;
the dense connection layer D1 is formed by connecting 3 convolution blocks with the growth rate of 3 in series, and each convolution block is composed of a batch normalization layer
Figure FDA0003673203030000021
And activation function layer
Figure FDA0003673203030000022
Convolutional layer
Figure FDA0003673203030000023
dropout layer
Figure FDA0003673203030000024
Are sequentially connected in series to form a convolution layer
Figure FDA0003673203030000025
The convolution kernel size of (1) is 3, the step length is 1, n represents the nth convolution block, and n belongs to {1,2,3 };
the transition dimensionality reduction layer T1 is formed by a batch normalization layer
Figure FDA0003673203030000026
And activation function layer
Figure FDA0003673203030000027
Convolutional layer
Figure FDA0003673203030000028
Average pooling layer
Figure FDA0003673203030000029
Formed in series, wound layers
Figure FDA00036732030300000210
Has a convolution kernel size of 3, a step size of 1, and an average pooling layer
Figure FDA00036732030300000211
The average pooled kernel size of (a) is 2, the step size is 2;
the dense connection layer D2 is formed by connecting 3 convolution blocks with the growth rate of 6 in series, and each convolution block is composed of a batch normalization layer
Figure FDA0003673203030000031
And activation function layer
Figure FDA0003673203030000032
Convolutional layer
Figure FDA0003673203030000033
dropout layer
Figure FDA0003673203030000034
Formed in series, wound layers
Figure FDA0003673203030000035
The size of the convolution kernel of (1) is 3, and the step length is 1;
the transition dimensionality reduction layer T2 is formed by a batch normalization layer
Figure FDA0003673203030000036
And activation function layer
Figure FDA0003673203030000037
Convolutional layer
Figure FDA0003673203030000038
Average pooling layer
Figure FDA0003673203030000039
Are sequentially connected in series to form a convolution layer
Figure FDA00036732030300000310
Has a convolution kernel size of 3, a step size of 1, and an average pooling layer
Figure FDA00036732030300000311
The average pooled kernel size of (a) is 2, the step size is 2;
the dense connection layer D3 is formed by connecting 3 convolution blocks with the growth rate of 9 in series, and each convolution block is composed of a batch normalization layer
Figure FDA00036732030300000312
And activation function layer
Figure FDA00036732030300000313
Convolutional layer
Figure FDA00036732030300000314
dropout layer
Figure FDA00036732030300000315
Formed in series, wound layers
Figure FDA00036732030300000316
The size of the convolution kernel of (1) is 3, and the step length is 1;
the transition dimensionality reduction layer T3 is formed by a batch normalization layer
Figure FDA00036732030300000317
And activation function layer
Figure FDA00036732030300000318
Convolutional layer
Figure FDA00036732030300000319
Average pooling layer
Figure FDA00036732030300000320
Formed in series, wound layers
Figure FDA00036732030300000321
Has a convolution kernel size of 3, a step size of 1, and an average pooling layer
Figure FDA00036732030300000322
The average pooled kernel size of (a) is 2, the step size is 2;
the dense connection layer D4 is formed by connecting 3 convolution blocks with the growth rate of 12 in series, and each convolution block is composed of a batch normalization layer
Figure FDA00036732030300000323
And activation function layer
Figure FDA00036732030300000324
Convolutional layer
Figure FDA00036732030300000325
dropout layer
Figure FDA00036732030300000326
Formed in series, wound layers
Figure FDA00036732030300000327
The size of the convolution kernel of (1) is 3, and the step length is 1;
the transition dimensionality reduction layer T4 is a batch normalization layer
Figure FDA00036732030300000328
And activation function layer
Figure FDA00036732030300000329
Convolutional layer
Figure FDA00036732030300000330
Average pooling layer
Figure FDA00036732030300000331
Formed in series, wound layers
Figure FDA00036732030300000332
Has a convolution kernel size of 3, a step size of 1, and an average pooling layer
Figure FDA00036732030300000333
The average pooled kernel size of (a) is 2, the step size is 2;
the dense connection layer D5 is formed by connecting 3 convolution blocks with the growth rate of 15 in series, and each convolution block is a batch normalization layer
Figure FDA00036732030300000334
Layer of activation function
Figure FDA00036732030300000335
Convolutional layer
Figure FDA00036732030300000336
dropout layer
Figure FDA00036732030300000337
Formed in series, wound layers
Figure FDA00036732030300000338
The size of the convolution kernel of (1) is 3, and the step length is 1;
the second deep dense connection layer is constructed in the same manner as the first deep dense connection layer.
4. The dual-polarization radar ship classification method based on the grouping bilinear convolutional neural network as claimed in claim 2, wherein the grouping bilinear pooling layer is constructed as follows:

the convolution layer C1 has kernel size 1, stride 1, and D output channels, with D ∈ [3, 165];

the channel grouping layer is constructed as follows: the feature maps of the two polarizations are each divided, in channel-index order from 1 to D, into G groups, yielding sub-feature maps F_p:

F = {F_p}

where F_p ∈ R^(H×W×D) is a separated sub-feature map; H, W, and D are the height, width, and number of channels of the sub-feature map, respectively; p ∈ {1, 2, …, G}; G is the number of categories to be classified; and D is divisible by G;

the first self-bilinear pooling layer, the cross-bilinear pooling layer, and the second self-bilinear pooling layer are constructed as follows: the outer product of the sub-feature maps gives the feature descriptor B^(m,n):

B^(m,n) = Σ_k f_k^m (f_k^n)^T

where F^m is the m-th sub-feature map of the first polarization, F^n is the n-th sub-feature map of the second polarization, and f_k is the feature vector of the sub-feature map along the channel dimension at spatial position k of the input feature map; self-bilinear pooling is performed within a single polarization and cross-bilinear pooling across the two polarizations, and the 3 bilinear vectors z^G are determined according to:

b = vec(B)

y = sign(b) ⊙ sqrt(|b|)

z = y / ||y||_2

z^G = [z_1, z_2, …, z_G]

where z^G denotes the concatenation of the sub-bilinear vectors after element normalization and L2 regularization of the vectorized feature descriptors between the sub-feature maps of the two single-polarization images;

the output of the fully connected layer FC is mapped to 128 dimensions;

the output layer maps to the number of classification categories.
CN202210618884.1A 2022-05-31 2022-05-31 Dual-polarization radar ship classification method based on grouping bilinear convolutional neural network Pending CN114973016A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210618884.1A CN114973016A (en) 2022-05-31 2022-05-31 Dual-polarization radar ship classification method based on grouping bilinear convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210618884.1A CN114973016A (en) 2022-05-31 2022-05-31 Dual-polarization radar ship classification method based on grouping bilinear convolutional neural network

Publications (1)

Publication Number Publication Date
CN114973016A true CN114973016A (en) 2022-08-30

Family

ID=82959522

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210618884.1A Pending CN114973016A (en) 2022-05-31 2022-05-31 Dual-polarization radar ship classification method based on grouping bilinear convolutional neural network

Country Status (1)

Country Link
CN (1) CN114973016A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117237777A (en) * 2023-11-13 2023-12-15 四川观想科技股份有限公司 Ship target identification method based on multi-mode fusion
CN117237777B (en) * 2023-11-13 2024-02-27 四川观想科技股份有限公司 Ship target identification method based on multi-mode fusion

Similar Documents

Publication Publication Date Title
CN108564109B (en) Remote sensing image target detection method based on deep learning
CN109086700B (en) Radar one-dimensional range profile target identification method based on deep convolutional neural network
CN110363215B (en) Method for converting SAR image into optical image based on generating type countermeasure network
CN109636742B (en) Mode conversion method of SAR image and visible light image based on countermeasure generation network
CN113159051B (en) Remote sensing image lightweight semantic segmentation method based on edge decoupling
CN110135267A (en) A kind of subtle object detection method of large scene SAR image
CN112232156B (en) Remote sensing scene classification method based on multi-head attention generation countermeasure network
CN110245711B (en) SAR target identification method based on angle rotation generation network
CN105046276A (en) Hyperspectral image band selection method based on low-rank expression
CN106951915B (en) One-dimensional range profile multi-classifier fusion recognition method based on category confidence
CN108229551B (en) Hyperspectral remote sensing image classification method based on compact dictionary sparse representation
CN107133648B (en) One-dimensional range profile identification method based on adaptive multi-scale fusion sparse preserving projection
CN113240040A (en) Polarized SAR image classification method based on channel attention depth network
CN104809471A (en) Hyperspectral image residual error fusion classification method based on space spectrum information
CN114973016A (en) Dual-polarization radar ship classification method based on grouping bilinear convolutional neural network
CN112905828A (en) Image retriever, database and retrieval method combined with significant features
CN114937202A (en) Double-current Swin transform remote sensing scene classification method
CN115631427A (en) Multi-scene ship detection and segmentation method based on mixed attention
CN110647977B (en) Method for optimizing Tiny-YOLO network for detecting ship target on satellite
Wang et al. SAR target classification based on multiscale attention super-class network
CN113344110B (en) Fuzzy image classification method based on super-resolution reconstruction
CN104504391A (en) Hyperspectral image classification method based on sparse feature and Markov random field
CN116977747B (en) Small sample hyperspectral classification method based on multipath multi-scale feature twin network
CN117523394A (en) SAR vessel detection method based on aggregation characteristic enhancement network
CN117173556A (en) Small sample SAR target recognition method based on twin neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination