CN112464836A - AIS radiation source individual identification method based on sparse representation learning - Google Patents


Info

Publication number
CN112464836A
Authority
CN
China
Prior art keywords
dictionary
ais
feature
neural network
radiation source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011393425.5A
Other languages
Chinese (zh)
Inventor
蒯小燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Hanchen Technology Co ltd
Original Assignee
Zhuhai Hanchen Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Hanchen Technology Co ltd filed Critical Zhuhai Hanchen Technology Co ltd
Priority to CN202011393425.5A priority Critical patent/CN112464836A/en
Publication of CN112464836A publication Critical patent/CN112464836A/en
Withdrawn legal-status Critical Current

Links

Images

Classifications

    • G06F2218/08 Feature extraction (pattern recognition adapted for signal processing)
    • G06F18/2135 Feature extraction by transforming the feature space, based on approximation criteria, e.g. principal component analysis
    • G06F18/214 Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Classification techniques
    • G06F18/28 Determining representative reference patterns; generating dictionaries
    • G06N3/045 Neural networks; combinations of networks
    • G06N3/048 Neural networks; activation functions
    • G06N3/08 Neural networks; learning methods
    • G06V10/454 Integrating filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06F2218/12 Classification; matching (signal processing)


Abstract

The invention belongs to the technical field of signal identification and relates to an AIS radiation source individual identification method based on sparse representation learning. The method extracts shallow and deep class features with a neural network and adopts a sparse representation method based on these multi-level features. For multi-level feature extraction, the feature extraction network is trained with supervision to mine shallow and deep features in the signal that benefit classification; the original signal dictionary is then expanded with the shallow and deep features extracted by the network, a test sample is reduced in dimension and sparsely reconstructed on the expanded multi-level dictionary, and the classification decision is made according to the reconstruction errors. Experimental results show that the proposed method achieves a good identification effect on an actually acquired AIS data set.

Description

AIS radiation source individual identification method based on sparse representation learning
Technical Field
The invention belongs to the technical field of signal identification, and relates to an AIS radiation source individual identification method based on sparse representation learning.
Background
The Automatic Identification System (AIS) is a shipborne broadcast transponder system. In the AIS system, if the Maritime Mobile Service Identity (MMSI) that uniquely identifies a ship is tampered with, navigation safety is seriously threatened. A radio frequency (RF) fingerprint, by contrast, is an intrinsic characteristic of the physical layer of the AIS terminal's transmitting hardware and is difficult to tamper with. Individual radiation source identification based on RF fingerprints therefore provides a physical-layer method for protecting the security of the AIS communication system and can be applied to detecting illegal radiation source signals. Effectively applying advanced AIS radiation source individual identification technology to maritime transportation can strengthen the intelligent management of maritime traffic, helping to form a comprehensive maritime transportation system that guarantees safety, improves efficiency, and saves resources.
In the field of communication radiation source individual identification, the traditional approach is to extract signal features and then classify them with a model such as a Support Vector Machine (SVM). Good identification results can be achieved using statistical features such as higher-order spectra and nonlinear dynamic characteristics, or transform-domain features obtained by decomposing the radiation source signal, for example via the Hilbert-Huang transform. However, these methods usually require manual parameter setting, depend on prior knowledge, and lack universality. In recent years, Deep Learning (DL) has been widely used in fields such as medical care and transportation. Compared with traditional feature extraction, deep neural networks can automatically extract essential, strongly discriminative features from signals, so some researchers have begun to apply classical neural network models to radiation source individual identification, for example using a Convolutional Neural Network (CNN) to identify a specific emitter, or a residual network to process the Hilbert spectrogram of a signal. However, most neural-network-based radiation source identification methods still rely on traditional characteristics and therefore share similar defects with the traditional methods. Networks designed specifically for time series classification, such as InceptionTime, Encoder, and ResNet, can also be used for radiation source individual identification, since the objects processed are all one-dimensional time sequences.
Sparse representation of data is another research hotspot of recent years. Applying sparse representation theory to classification yields the Sparse Representation based Classification (SRC) algorithm. In SRC, the training samples of all classes form an over-complete dictionary, a test sample is sparsely represented by the basis vectors in the dictionary, and the classification decision is made according to the sparse representation coefficients. Representative algorithms for solving the sparse representation coefficients include Basis Pursuit (BP) and Orthogonal Matching Pursuit (OMP).
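As context for the pursuit algorithms named above, the following is a minimal numpy sketch of Orthogonal Matching Pursuit; it is illustrative only, not the patent's implementation, and the orthonormal demo dictionary is hypothetical.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select up to k atoms of D to represent y.

    D : (m, n) dictionary whose columns are atoms; y : (m,) signal.
    Returns a coefficient vector theta with at most k non-zero entries.
    """
    residual = y.copy()
    support = []
    theta = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit all selected atoms jointly by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    theta[support] = coef
    return theta

# demo: recover a 2-sparse combination over an orthonormal dictionary
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))  # orthonormal atoms
y = 2.0 * Q[:, 1] - 1.5 * Q[:, 5]
theta = omp(Q, y, 2)
```

With orthonormal atoms the greedy selection is exact, so two iterations recover both coefficients.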
Disclosure of Invention
The invention proposes, for the first time, a classification algorithm based on multi-level sparse representation learning for AIS radiation source individual identification. The technical scheme innovatively combines a neural network with a sparse representation classifier: a multi-scale convolutional neural network is designed to extract hidden features in the signal, the dictionary is expanded with the features of shallow and deep neural network layers, and AIS signals are sparsely represented and classified on the expanded dictionary.
The technical scheme of the invention is as follows: an AIS radiation source individual identification method based on sparse representation learning comprises the following steps:
S1, acquiring AIS signals to construct a training data set; the invention intercepts the rising edge, the training sequence, and the start flag as the valid data of each AIS signal.
Individual radiation source identification usually requires intercepting a segment of valid signal from which to extract the RF fingerprint. For general signals, the start and end of the valid data are usually located by detecting where the signal changes. For signals such as AIS, which follow a strict transmission specification, it is clearly more accurate and efficient to locate the valid data using the synchronization sequence in the signal. The rising edge, training sequence, and start flag of an AIS signal are required to carry identical transmitted symbols, contain no payload data, and include the segment in which the transmitter ramps from zero to rated power; they therefore exhibit the subtle, hardware-induced differences between AIS radiation sources that can be used for discrimination. The invention therefore intercepts the rising edge, training sequence, and start flag as the valid data of the AIS signal.
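The interception idea can be illustrated with a sliding normalized cross-correlation against a known synchronization pattern. This is a simplified sketch under stated assumptions: the 24-chip alternating pattern merely stands in for the AIS training sequence, and a real receiver would correlate against the modulated waveform after timing and frequency correction.

```python
import numpy as np

def locate_valid_segment(samples, sync, seg_len):
    """Locate the start of the valid data by sliding normalized cross-correlation
    of a known synchronization waveform `sync` over `samples`, then cut out
    `seg_len` samples from the best-matching position."""
    n = len(sync)
    best_i, best_c = 0, -np.inf
    for i in range(len(samples) - n + 1):
        w = samples[i:i + n]
        denom = np.linalg.norm(w) * np.linalg.norm(sync)
        c = (w @ sync) / denom if denom > 0 else 0.0
        if c > best_c:
            best_i, best_c = i, c
    return samples[best_i:best_i + seg_len], best_i

# demo: a 24-chip alternating pattern (hypothetical stand-in for the AIS preamble)
sync = np.tile([1.0, -1.0], 12)
rng = np.random.default_rng(1)
stream = 0.01 * rng.standard_normal(200)   # low-level background noise
stream[50:74] += sync                      # burst embedded at offset 50
seg, start = locate_valid_segment(stream, sync, 30)
```

The correlation peak marks the burst start, from which the rising edge, training sequence, and start flag can be cut as one contiguous segment.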
S2, constructing a neural network: the network is built from two Inception modules with channel attention mechanisms, defined as a first Inception module and a second Inception module, which are cascaded; the training data set is input to the first Inception module, whose output passes through a bottleneck layer that reduces the number of channels to 1 to obtain the shallow features; the output of the second Inception module is globally average-pooled to obtain the deep features;
the invention uses an inclusion module in a classical neural network to extract the characteristics of an input signal, and integrates a channel Attention Mechanism (Attention Mechanism) into neural network learning, thereby focusing on a more useful channel. Two inclusion modules with channel attention mechanisms are cascaded, and the number of channels is reduced to 1 by a bottleneck layer in the middle as a shallow layer characteristic. The output of the second inclusion module is globally averaged pooled in the time dimension as a deep feature. The inclusion module adopts three convolution kernels with different scales to slide on an input signal at the same time, and the convolution kernels with different scales have different receptive fields, so that local information with different resolutions can be extracted from a time sequence signal. Meanwhile, the integrated channel attention mechanism focuses the neural network on a useful channel, and effective characteristics can be obtained.
S3, training the constructed neural network by adopting a training data set to obtain a trained neural network;
s4, constructing a multi-level feature dictionary: hypothesis original signal dictionary
Figure BDA0002813559050000031
The number of classes of all samples is K, M is the dimension of the original signal, N is the number of training samples, and each class corresponds to the original sub-dictionary
Figure BDA0002813559050000032
Are all composed of NiAn ith type original sample
Figure BDA0002813559050000033
The structure of the utility model is that the material,
Figure BDA0002813559050000034
each original signal sample soAfter the trained neural network is subjected to feature extraction network, two corresponding features, namely shallow features, are obtained
Figure BDA0002813559050000035
And deep layer characteristics
Figure BDA0002813559050000036
The original signal dictionary is expanded by the two characteristics to obtain an expanded multi-level characteristic dictionary
Figure BDA0002813559050000037
Wherein the sub-dictionary corresponding to each class is expanded to
Figure BDA0002813559050000038
From the original sub-dictionary
Figure BDA0002813559050000039
Shallow feature sub-dictionary
Figure BDA00028135590500000310
Deep-layer feature sub-dictionary
Figure BDA00028135590500000311
Forming;
s5, reducing the dimension of the multilevel feature dictionary S by adopting principal component analysis to obtain a multilevel dictionary D:
Figure BDA00028135590500000312
wherein the content of the first and second substances,
Figure BDA00028135590500000313
is a vector formed by corresponding mean values of each line in a dictionary S, and (S-m.1) is solved for decentralized operationCovariance matrix Cov ═ (S-m.1) · (S-m.1)Te.M multiplied by M eigenvalues and corresponding eigenvectors, and arranging the corresponding eigenvectors of the first P eigenvalues from top to bottom in a row into a projection matrix according to the sequence of the eigenvalues from large to small
Figure BDA00028135590500000314
P is the feature dimension after projection;
s6, AIS radiation source individual identification: signal to be identified
Figure BDA00028135590500000315
Is mapped into through a projection matrix
Figure BDA00028135590500000316
Solving for sparse representation coefficients of y
Figure BDA00028135590500000317
Figure BDA00028135590500000318
The code vector of y on the multilevel dictionary matrix D is theta, and l is obtained by adopting a basis pursuit algorithm1Norm minimum solution, i.e.
Figure BDA00028135590500000319
According to
Figure BDA00028135590500000320
And (3) performing signal reconstruction and classification judgment:
Figure BDA00028135590500000321
wherein the content of the first and second substances,
Figure BDA00028135590500000322
is a corresponding encoded coefficient vector of class i, i.e. sparse representation coefficient of y
Figure BDA00028135590500000323
And (4) retaining the element corresponding to the ith class, setting all the other elements to be zero, reconstructing on each class respectively, and solving a reconstruction error, wherein the class with the minimum reconstruction error is judged as the radiation source individual to which the AIS signal to be identified belongs.
The principle of the invention is as follows: each original signal sample s^o passes through the feature extraction network to obtain two corresponding features, the shallow feature s^s and the deep feature s^d. The original signal dictionary is expanded with these two features, and sparse reconstruction is carried out on the multi-level dictionary. Because the shallow feature dictionary S^s and the deep feature dictionary S^d are obtained from the original dictionary S^o after neural network learning, they can be regarded as providing higher-level features of S^o. Therefore, compared with the original dictionary alone, introducing shallow and deep feature dictionaries that better represent class information describes the samples better. Reconstruction is performed on each class separately and the reconstruction error is computed; the class with the minimum reconstruction error is judged to be the radiation source individual to which the test AIS signal belongs.
The beneficial effects of the invention are that shallow and deep class features are extracted by the neural network and a sparse representation method based on these multi-level features is adopted. For multi-level feature extraction, the feature extraction network is trained with supervision to mine shallow and deep features in the signal that benefit classification; the original signal dictionary is expanded with the shallow and deep features extracted by the network, the test sample is reduced in dimension and sparsely reconstructed on the expanded multi-level dictionary, and classification is decided according to the reconstruction errors. Experimental results show that the method proposed herein achieves a good identification effect on actually collected AIS data sets.
Drawings
FIG. 1 is a schematic flow chart of an implementation of the recognition method of the present invention;
FIG. 2 is a schematic diagram of a feature extraction architecture;
fig. 3 is a schematic structural diagram of the Inception module, which extracts local information at different resolutions from the time sequence signal;
FIG. 4 is a graph of experimental results of different methods.
Detailed Description
The present invention is described in detail below with reference to the attached drawings so that those skilled in the art can better understand the present invention.
As shown in fig. 1, the method of the present invention mainly comprises the following steps: first, valid data is intercepted; then the feature extraction network is trained and the multi-level dictionary is constructed; finally, the AIS radiation source individual is identified.
The method comprises the following specific steps:
step 1, intercepting effective data
Individual radiation source identification usually requires intercepting a segment of valid signal from which to extract the RF fingerprint. For general signals, the start and end of the valid data are usually located by detecting where the signal changes. For signals such as AIS, which follow a strict transmission specification, it is clearly more accurate and efficient to locate the valid data using the synchronization sequence in the signal. The rising edge, training sequence, and start flag of an AIS signal are required to carry identical transmitted symbols, contain no payload data, and include the segment in which the transmitter ramps from zero to rated power; they therefore exhibit the subtle, hardware-induced differences between AIS radiation sources that can be used for discrimination. The invention therefore intercepts the rising edge, training sequence, and start flag as the valid data of the AIS signal.
Step 2, training feature extraction network
The architecture of the feature extraction network is shown in fig. 2. The method makes full use of the class information provided by the dictionary data set and trains the network with supervision, so that the network learns more effective features for subsequent dictionary expansion and classification.
In fig. 2, the network employs an Inception module to extract the features of the input signal. The acquired valid data set passes through the input layer, and three convolution kernels of different scales slide over the input signal simultaneously; as shown in the figure, the kernel sizes are set to 40, 20, and 10, respectively. Kernels of different scales have different receptive fields, so this method can extract local information at different resolutions from the time sequence signal. A parallel max-pooling operation is introduced, and the number of channels is adjusted through a bottleneck layer, which makes the model robust, prevents overfitting, and improves generalization. A channel attention mechanism is integrated into the Inception module, as shown in fig. 3, to focus the neural network on more useful channels. The Squeeze-and-Excitation network is used as the channel attention mechanism for timing signals. First, global average pooling is used as the Squeeze operation to compress the features along the time dimension, turning each one-dimensional feature channel into a real number with a global receptive field. Then the Excitation operation follows: a bottleneck structure of two fully-connected (FC) layers based on a Softmax function increases nonlinearity and reduces the parameter count, and a normalized weight w between 0 and 1 is obtained through a Sigmoid activation function. Finally, a re-weighting operation multiplies the Excitation output w onto the features channel by channel, re-calibrating the original features along the channel dimension. The output of the Inception module at this point is the shallow feature.
As shown in fig. 2, two Inception modules with channel attention mechanisms are stacked, and the output of the second Inception module is globally average-pooled along the time dimension as the deep feature output of the network.
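The multi-scale convolution and Squeeze-and-Excitation channel attention described above can be sketched in numpy as follows. The kernel sizes 40/20/10 follow fig. 2; the input, weights, and FC bottleneck sizes are hypothetical placeholders, and the ReLU in the bottleneck is the common SE-block choice, assumed here rather than taken from the patent.

```python
import numpy as np

def conv1d(x, kernels):
    """'Same'-padded 1-D convolution of signal x with each kernel, giving a (C, T) feature map."""
    rows = []
    for k in kernels:
        pad = len(k) // 2
        xp = np.pad(x, (pad, len(k) - 1 - pad))        # total padding = len(k) - 1
        rows.append(np.convolve(xp, k, mode="valid"))  # output length equals len(x)
    return np.stack(rows)

def squeeze_excite(feat, W1, W2):
    """Channel attention: squeeze (global average pool over time), excite
    (bottleneck FC + ReLU, then FC + sigmoid), and re-weight the channels."""
    z = feat.mean(axis=1)                    # squeeze: one real number per channel
    h = np.maximum(W1 @ z, 0.0)              # bottleneck FC; ReLU is an assumption here
    w = 1.0 / (1.0 + np.exp(-(W2 @ h)))      # sigmoid gate, weights in (0, 1)
    return feat * w[:, None]                 # channel-wise re-calibration

# demo with the three kernel scales of fig. 2 (random placeholder input and weights)
rng = np.random.default_rng(0)
x = rng.standard_normal(200)
kernels = [rng.standard_normal(s) for s in (40, 20, 10)]
feat = conv1d(x, kernels)                    # (3, 200) multi-scale feature map
W1 = rng.standard_normal((2, 3))             # hypothetical bottleneck sizes
W2 = rng.standard_normal((3, 2))
out = squeeze_excite(feat, W1, W2)
```

Since each gate weight lies in (0, 1), attention can only attenuate a channel, never amplify it, which is the re-calibration behavior described in the text.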
Step 3, constructing a multi-level dictionary:
Assume an original dictionary S^o = [S_1^o, S_2^o, ..., S_K^o] ∈ R^{M×N}, where K is the number of classes of all samples, M is the dimension of the original signal, and N is the number of training samples. The original sub-dictionary of each class, S_i^o ∈ R^{M×N_i}, consists of the N_i class-i original samples, with N = Σ_{i=1}^{K} N_i.
Assuming sparse reconstruction on the original dictionary only, a test AIS sample x ∈ R^M can be expressed as:
x = S^o·α (1)
where α is the sparse representation coefficient of the test sample x on the original dictionary S^o, and the classification result of x can be obtained by processing α. Each original signal sample s^o passes through the feature extraction network to obtain two corresponding features, the shallow feature s^s and the deep feature s^d. First, the original signal dictionary is expanded with these two features to obtain the expanded dictionary S = [S_1, S_2, ..., S_K], in which the sub-dictionary of each class is expanded to S_i = [S_i^o, S_i^s, S_i^d], composed of the original sub-dictionary S_i^o, the shallow feature sub-dictionary S_i^s, and the deep feature sub-dictionary S_i^d. With sparse reconstruction on the multi-level dictionary, the test sample x can be expressed as:
x = S^o·α + S^s·β + S^d·γ (2)
Because the shallow feature dictionary S^s and the deep feature dictionary S^d are obtained by passing the original dictionary S^o through the neural network, they can be regarded as providing higher-level features of S^o. Therefore, compared with formula (1), formula (2) not only uses the original dictionary to represent details but also introduces shallow and deep feature dictionaries that better represent class information, and can describe the sample better.
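The per-class expansion S_i = [S_i^o, S_i^s, S_i^d] can be sketched as follows. For illustration the feature blocks are assumed to share the signal dimension M (the patent's trained network is what actually produces them), and all shapes in the demo are hypothetical.

```python
import numpy as np

def expand_dictionary(classes_orig, feat_shallow, feat_deep):
    """Build the multi-level dictionary S = [S_1, ..., S_K] where each class
    sub-dictionary is S_i = [S_i^o | S_i^s | S_i^d].

    classes_orig[i]            : (M, N_i) original samples of class i
    feat_shallow[i]/feat_deep[i]: feature blocks of class i (assumed (M, N_i) here)
    Returns the stacked dictionary and per-column class labels for later masking.
    """
    blocks, labels = [], []
    for i, (So, Ss, Sd) in enumerate(zip(classes_orig, feat_shallow, feat_deep)):
        Si = np.hstack([So, Ss, Sd])      # original | shallow | deep sub-dictionaries
        blocks.append(Si)
        labels.extend([i] * Si.shape[1])  # every column keeps its class identity
    return np.hstack(blocks), np.array(labels)

# demo: K = 2 classes, M = 6, N_i = 2 samples per class (all random placeholders)
rng = np.random.default_rng(0)
M, K, Ni = 6, 2, 2
orig = [rng.standard_normal((M, Ni)) for _ in range(K)]
shal = [rng.standard_normal((M, Ni)) for _ in range(K)]
deep = [rng.standard_normal((M, Ni)) for _ in range(K)]
S, labels = expand_dictionary(orig, shal, deep)
```

The column labels are what later let δ_i(θ̂) keep only the coefficients of class i during reconstruction.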
However, the correlation between the column basis vectors of the multi-level feature dictionary S is large, and directly applying sparse representation performs poorly. Principal Component Analysis (PCA) is therefore used to reduce the dimension and weaken the correlation between the basis vectors, yielding the multi-level dictionary D:
D = W·(S − m·1) (3)
where m ∈ R^M is the vector of row-wise means of the dictionary S and (S − m·1) is the de-centering operation. The covariance matrix Cov = (S − m·1)·(S − m·1)^T ∈ R^{M×M} yields M eigenvalues and corresponding eigenvectors; the eigenvectors of the first P eigenvalues, arranged row by row from top to bottom in descending order of eigenvalue, form the projection matrix W ∈ R^{P×M}, where P is the feature dimension after projection. The D obtained after projection is the final multi-level dictionary, which is then used for classification based on sparse representation.
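The PCA step of formula (3) can be sketched as follows; the de-centered covariance, its eigendecomposition, and the row-wise projection matrix W follow the text, while the dictionary size in the demo is hypothetical.

```python
import numpy as np

def pca_project(S, P):
    """De-center the dictionary row-wise, eigendecompose (S - m·1)(S - m·1)^T,
    and keep the eigenvectors of the P largest eigenvalues as rows of W."""
    m = S.mean(axis=1, keepdims=True)       # row-wise means, i.e. the vector m
    Sc = S - m                              # de-centering (S - m·1)
    cov = Sc @ Sc.T                         # (M, M) unnormalized covariance
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:P]      # indices of the P largest
    W = vecs[:, order].T                    # (P, M) projection matrix, rows by eigenvalue
    return W @ Sc, W, m.ravel()

# demo: project a hypothetical (10, 30) dictionary down to P = 4 rows
rng = np.random.default_rng(0)
S = rng.standard_normal((10, 30))
D, W, m = pca_project(S, 4)
```

The rows of W are orthonormal, so the projected rows of D are uncorrelated, with variance decreasing from the first row down, which is exactly the decorrelation the text asks for.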
Step 4, AIS radiation source individual identification
A test sample x ∈ R^M is mapped by the projection matrix to y = W·(x − m) ∈ R^P. The sparse representation coefficient θ̂ of y is solved as:
θ̂ = argmin_θ ||θ||_1 subject to y = D·θ
where θ is the coding vector of the test sample y on the multi-level dictionary matrix D; its minimum l1-norm solution can be obtained with the basis pursuit algorithm. Signal reconstruction and the classification decision are then performed according to θ̂:
identity(y) = argmin_i ||y − D·δ_i(θ̂)||_2
where δ_i(θ̂) is the coding coefficient vector of class i, i.e. the sparse representation coefficient θ̂ of the test sample y with the elements corresponding to class i retained and all other elements set to zero. Reconstruction is performed on each class separately and the reconstruction error is computed; the class with the minimum reconstruction error is judged to be the radiation source individual to which the test AIS signal belongs.
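The class-wise reconstruction decision can be sketched end to end. The sparse code here is obtained with greedy OMP as a stand-in for the basis pursuit l1 solver named in the text, and the two-class orthonormal dictionary in the demo is hypothetical.

```python
import numpy as np

def src_classify(D, labels, y, k=5):
    """Classify y by class-wise reconstruction error: compute a sparse code
    theta for y on D, keep per class i only delta_i(theta), and pick the
    class minimizing ||y - D·delta_i(theta)||_2."""
    # --- sparse coding (greedy OMP standing in for basis pursuit) ---
    residual, support = y.copy(), []
    theta = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    theta[support] = coef
    # --- class-wise reconstruction and decision ---
    errors = []
    for i in np.unique(labels):
        delta_i = np.where(labels == i, theta, 0.0)   # keep only class-i coefficients
        errors.append(np.linalg.norm(y - D @ delta_i))
    return int(np.argmin(errors)), errors

# demo: 10 orthonormal atoms split into two classes; y built from class-0 atoms
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((20, 10)))
labels = np.array([0] * 5 + [1] * 5)
y = 1.0 * Q[:, 0] + 0.5 * Q[:, 3]
pred, errors = src_classify(Q, labels, y, k=2)
```

Because y lies entirely in the span of class-0 atoms, its class-0 reconstruction error is essentially zero while the class-1 error stays large, so the decision rule picks class 0.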
As shown in fig. 4, the recognition accuracy of the neural-network-based methods is higher than that of the conventional method (SIB + SVM), demonstrating the superiority of neural networks for AIS radiation source individual identification. Meanwhile, the proposed sparse representation method based on multi-level feature learning (MSRC) outperforms the other identification methods, InceptionTime and ResNet.

Claims (1)

1. An AIS radiation source individual identification method based on sparse representation learning is characterized by comprising the following steps:
s1, acquiring AIS signals to construct a training data set;
s2, constructing a neural network: the method comprises the steps that a neural network is constructed by adopting two inclusion modules with channel attention mechanisms, wherein the neural network is respectively defined as a first inclusion module and a second inclusion module, the first inclusion module and the second inclusion module are cascaded, a training data set is input into the first inclusion module, the output of the first inclusion module reduces the number of channels to 1 through a bottleneck layer, and then shallow layer characteristics are obtained; obtaining deep features after the output of the second inclusion module is subjected to global average pooling;
s3, training the constructed neural network by adopting a training data set to obtain a trained neural network;
s4, constructing a multi-level feature dictionary: hypothesis original signal dictionary
Figure FDA0002813559040000011
The number of classes of all samples is K, M is the dimension of the original signal, N is the number of training samples, and each class corresponds to the original sub-dictionary
Figure FDA0002813559040000012
Are all composed of NiAn ith type original sample
Figure FDA0002813559040000013
The structure of the utility model is that the material,
Figure FDA0002813559040000014
each original signal sample soAfter the trained neural network is subjected to feature extraction network, two corresponding features, namely shallow features, are obtained
Figure FDA0002813559040000015
And deep layer characteristics
Figure FDA0002813559040000016
The original signal dictionary is expanded by utilizing the two characteristics, and the expanded original signal dictionary is expandedMulti-level feature dictionary
Figure FDA0002813559040000017
Wherein the sub-dictionary corresponding to each class is expanded to
Figure FDA0002813559040000018
From the original sub-dictionary
Figure FDA0002813559040000019
Shallow feature sub-dictionary
Figure FDA00028135590400000110
Deep-layer feature sub-dictionary
Figure FDA00028135590400000111
Forming;
S5, reducing the dimension of the multi-level feature dictionary $S$ by principal component analysis to obtain the multi-level dictionary $D$: $D = W(S - m \cdot \mathbf{1})$, where $m \in \mathbb{R}^{M}$ is the vector composed of the mean of each row of the dictionary $S$, and $(S - m \cdot \mathbf{1})$ is the decentralization operation. The covariance matrix $\mathrm{Cov} = (S - m \cdot \mathbf{1})(S - m \cdot \mathbf{1})^{T} \in \mathbb{R}^{M \times M}$ is solved for its $M$ eigenvalues and corresponding eigenvectors, and the eigenvectors corresponding to the first $P$ eigenvalues, ordered from largest to smallest eigenvalue, are arranged row by row from top to bottom into the projection matrix $W \in \mathbb{R}^{P \times M}$, where $P$ is the feature dimension after projection;
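Step S5 can be sketched directly from the claim: center the dictionary rows, eigendecompose the (unnormalized) covariance, and stack the top-P eigenvectors as the rows of the projection matrix:

```python
import numpy as np

def pca_projection(S, P):
    """PCA of step S5: returns W (P x M), the row-mean vector m,
    and the reduced dictionary D = W (S - m * 1^T)."""
    m = S.mean(axis=1, keepdims=True)   # per-row means, shape (M, 1)
    Sc = S - m                          # decentralization
    cov = Sc @ Sc.T                     # M x M covariance (unnormalized, as in the claim)
    vals, vecs = np.linalg.eigh(cov)    # eigh returns ascending eigenvalues
    order = np.argsort(vals)[::-1][:P]  # indices of the P largest eigenvalues
    W = vecs[:, order].T                # eigenvectors as rows, shape (P, M)
    return W, m.ravel(), W @ Sc
```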
S6, AIS radiation source individual identification: the signal to be identified $s \in \mathbb{R}^{M}$ is mapped through the projection matrix into $y = W(s - m)$. The sparse representation coefficient of $y$ is solved as

$\hat{\theta} = \arg\min_{\theta} \|\theta\|_{1} \quad \text{s.t.} \quad y = D\theta$

where $\theta$ is the coding vector of $y$ on the multi-level dictionary matrix $D$; the $\ell_{1}$-norm minimum solution $\hat{\theta}$ is obtained with a basis pursuit algorithm. Signal reconstruction and classification judgment are then performed according to $\hat{\theta}$:

$\mathrm{identity}(y) = \arg\min_{i} \left\| y - D\,\delta_{i}(\hat{\theta}) \right\|_{2}$

where $\delta_{i}(\hat{\theta})$ is the coding coefficient vector corresponding to class $i$, i.e. the sparse representation coefficient $\hat{\theta}$ of $y$ with the elements corresponding to the $i$-th class retained and all other elements set to zero. Reconstruction is performed on each class separately and the reconstruction error is computed; the class with the minimum reconstruction error is judged as the radiation source individual to which the AIS signal to be identified belongs.
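The classification rule of step S6 can be sketched as follows. The claim specifies a basis pursuit solver; the sketch below instead approximates the $\ell_1$ minimization with ISTA on a lasso surrogate (so `sparse_coefficients` is a stand-in for the claimed solver, not the patent's method), and `labels` is an assumed array marking the class of each dictionary column:

```python
import numpy as np

def sparse_coefficients(D, y, lam=1e-3, n_iter=500):
    """Approximate the l1-minimal code of y on D via ISTA on a lasso surrogate."""
    L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the gradient
    theta = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ theta - y)
        t = theta - grad / L            # gradient step
        theta = np.sign(t) * np.maximum(np.abs(t) - lam / L, 0.0)  # soft threshold
    return theta

def classify(D, labels, y, **kw):
    """Class with the smallest per-class reconstruction error ||y - D delta_i(theta)||_2."""
    theta = sparse_coefficients(D, y, **kw)
    errors = {}
    for i in np.unique(labels):
        delta = np.where(labels == i, theta, 0.0)  # keep only class-i coefficients
        errors[i] = np.linalg.norm(y - D @ delta)
    return min(errors, key=errors.get)
```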
CN202011393425.5A 2020-12-02 2020-12-02 AIS radiation source individual identification method based on sparse representation learning Withdrawn CN112464836A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011393425.5A CN112464836A (en) 2020-12-02 2020-12-02 AIS radiation source individual identification method based on sparse representation learning

Publications (1)

Publication Number Publication Date
CN112464836A true CN112464836A (en) 2021-03-09

Family

ID=74805318


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778863A (en) * 2016-12-12 2017-05-31 武汉科技大学 The warehouse kinds of goods recognition methods of dictionary learning is differentiated based on Fisher
US20180137393A1 (en) * 2015-06-04 2018-05-17 Siemens Healthcare Gmbh Medical pattern classification using non-linear and nonnegative sparse representations
US20180240219A1 (en) * 2017-02-22 2018-08-23 Siemens Healthcare Gmbh Denoising medical images by learning sparse image representations with a deep unfolding approach
CN111934749A (en) * 2020-08-07 2020-11-13 上海卫星工程研究所 Satellite-borne AIS message real-time receiving and processing system with wide and narrow beam cooperation
CN112183300A (en) * 2020-09-23 2021-01-05 厦门大学 AIS radiation source identification method and system based on multi-level sparse representation


Similar Documents

Publication Publication Date Title
CN109063565B (en) Low-resolution face recognition method and device
CN109815956B (en) License plate character recognition method based on self-adaptive position segmentation
Adler et al. Probabilistic subspace clustering via sparse representations
Sampath et al. Decision tree and deep learning based probabilistic model for character recognition
CN113378971B (en) Classification model training method and system for near infrared spectrum and classification method and system
Nasrollahi et al. Printed persian subword recognition using wavelet packet descriptors
Narang et al. Devanagari ancient character recognition using HOG and DCT features
Hui et al. Research on face recognition algorithm based on improved convolution neural network
Prasad et al. Gujrati character recognition using weighted k-NN and mean χ 2 distance measure
Fadhilah et al. Non-halal ingredients detection of food packaging image using convolutional neural networks
Du et al. Low-shot palmprint recognition based on meta-siamese network
Chen et al. Subcategory-aware feature selection and SVM optimization for automatic aerial image-based oil spill inspection
CN114972904A (en) Zero sample knowledge distillation method and system based on triple loss resistance
Vishwakarma et al. Generalized DCT and DWT hybridization based robust feature extraction for face recognition
CN112183300A (en) AIS radiation source identification method and system based on multi-level sparse representation
CN110781822B (en) SAR image target recognition method based on self-adaptive multi-azimuth dictionary pair learning
CN112364809A (en) High-accuracy face recognition improved algorithm
CN116894207A (en) Intelligent radiation source identification method based on Swin transducer and transfer learning
CN112464836A (en) AIS radiation source individual identification method based on sparse representation learning
CN116055270A (en) Modulation recognition model, training method thereof and signal modulation processing method
Yamada et al. The character generation in handwriting feature extraction using variational autoencoder
Kishan et al. Handwritten character recognition using CNN
CN108052981B (en) Image classification method based on nonsubsampled Contourlet transformation and convolutional neural network
Hao et al. A study on the use of Gabor features for Chinese OCR
Bian et al. Binarization of color character strings in scene images using deep neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210309