CN108983187B - Online radar target identification method based on EWC - Google Patents

Online radar target identification method based on EWC

Info

Publication number
CN108983187B
Authority
CN
China
Prior art keywords
resolution range
range profile
batch
data
layer
Prior art date
Legal status
Active
Application number
CN201810757440.XA
Other languages
Chinese (zh)
Other versions
CN108983187A (en)
Inventor
陈渤 (Chen Bo)
刘应祺 (Liu Yingqi)
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201810757440.XA priority Critical patent/CN108983187B/en
Publication of CN108983187A publication Critical patent/CN108983187A/en
Application granted granted Critical
Publication of CN108983187B publication Critical patent/CN108983187B/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/417Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks


Abstract

The invention discloses an online radar target identification method based on EWC, belonging to the technical field of radar, and mainly comprising the following steps: determine the p-th batch of original radar high-resolution range profile training data S_p with its target class l_p and the p-th batch of original radar high-resolution range profile test data T_p with its target class Tl_p, p = 1, 2, …, P, P > 1; establish a convolutional neural network model to obtain a trained convolutional neural network; then obtain the Fisher information matrices of m data in S_p; determine the target classes Tl_1 through Tl_P of the 1st through P-th batches of original radar high-resolution range profile test data T_1 through T_P, and the predicted target classes l'_1 through l'_P of the 1st through P-th batches of original radar high-resolution range profile test data; from these, obtain the 1st through P̂-th correctly identified targets, P̂ ≤ P. The P̂ correctly identified targets obtained in this way are the online radar target identification result based on EWC.

Description

Online radar target identification method based on EWC
Technical Field
The invention belongs to the technical field of radar, and particularly relates to an online radar target identification method based on Elastic Weight Consolidation (EWC), suitable for online-learning radar target identification tasks.
Background
With the development of modern warfare technology, the demand for radar target identification grows increasingly strong. A radar high-resolution range profile (HRRP) is the amplitude waveform of the vector sum of the sub-echoes of the target's scattering points obtained with a broadband radar signal, projected onto the radar line of sight. At a given radar viewing angle, an HRRP sample reflects the distribution of the radar cross section (RCS) of the scatterers on the target (such as the nose, wings, tail rudder, air inlet, and engine) along the radar line of sight (RLOS), and thus embodies the relative geometric relationship of the scattering points. An HRRP sample therefore contains rich structural information about the target, such as target size and scattering-point structure, which is valuable for target identification and classification.
Target recognition methods based on deep learning have developed rapidly in recent years. The convolutional neural network, as one form of deep learning, has become a research hotspot in speech analysis and image recognition. Its weight-sharing structure is closer to a biological neural network, reduces the complexity of the network model, and reduces the number of weights. Its advantages are most apparent when the network input is a multi-dimensional image: the image can be fed to the network directly, avoiding the complex feature-extraction and data-reconstruction steps of traditional recognition algorithms. A convolutional network is a multi-layer perceptron specifically designed to recognize two-dimensional shapes, and this network structure is highly invariant to translation, scaling, tilting, and other forms of deformation.
At present, owing to the particular nature of radar, continuously acquired data must be identified online. As data accumulate, many algorithms forget the characteristics of past data while training on and recognizing new data, so their recognition ability on past data degrades rapidly.
Disclosure of Invention
In view of the above shortcomings of the prior art, an object of the present invention is to provide an EWC-based online radar target recognition method that extracts and recognizes target features from high-resolution range profiles (HRRP). In particular, Elastic Weight Consolidation (EWC) prevents the forgetting of past data features during training on current data, so that the radar can train on and recognize current data while retaining its recognition ability on, and the features of, past data.
The technical idea of the invention is as follows: train an end-to-end convolutional neural network model on the HRRP data set after a short-time Fourier transform, and add an EWC penalty to the training of each batch of data to improve the network model's memory of data features, so that recognition of current data is ensured while recognition of past data features is maintained.
To achieve the above technical purpose, the invention adopts the following technical scheme.
An online radar target identification method based on EWC comprises the following steps:
step 1, determining the p-th batch of original radar high-resolution range profile training data SpAnd the p-th batch of original radar high-resolution range profile test data TpAnd determining the p-th batch of original radar high-resolution range profile training data SpObject class l ofpAnd the p-th batch of original radar high-resolution range profile test data TpTarget class Tl ofp;p=1,2,…,P,P>1;
Step 2, establishing a convolution neural network model, and training data S according to the high-resolution range profile of the p-th batch of original radarpObtaining a trained convolutional neural network;
step 3, obtaining a p batch of original radar high-resolution range profile training data S according to the trained convolutional neural networkpFisher information matrix of the m data; m is more than or equal to 1;
step 4, according to the p-th batch of original radar high-resolution range profile training data SpDetermining a Fisher information matrix of the M data to determine a convolution neural network model M updated by the p' +1 datap'+1(ii) a P '+1 is 1,2,3, …, P-1, P' +1 is 2,3, …, P 'has an initial value of 1, P' +1 has an initial value of 2;
and 5, adding 1 to the value of P ', repeating the step 4 until P ' ═ P-1 and P ' +1 ═ P, and further obtaining the convolution neural network model M after the updating of the No. P data PThen, the value of p' is initialized to 1;
step 6, determining the 1 st batch of original radar high-resolution range profile test data T1Target class Tl of1Original radar high-resolution range profile test data T from batch PPTarget class Tl ofPAnd updating the convolution neural network model M according to the P batch of dataPObtaining the predicted target class l of the 1 st batch of original radar high-resolution range profile test data1' to the firstP-batch original radar high-resolution range profile test data prediction target category l'P
Step 7, if le' and TleWhen the distance image is equal to the distance image, e is 1,2, …, P, it means that the target in the e-th original radar high resolution distance image training data is recognized and is marked as the e ' th category to recognize the correct target, the initial value of e ' is 1, and the value of e ' is added with 1; if le' and TleIf the data are not equal, the target class identification error of the e-th batch of original radar high-resolution range profile test data is indicated, and the result of the first batch of original radar high-resolution range profile test data when the target class identification error is abandoned;
Letting e take 1 through P in turn yields the 1st through P̂-th correctly identified targets, P̂ ≤ P. The P̂ correctly identified targets obtained in this way are the online radar target identification result based on EWC.
Compared with the prior art, the invention has the following advantages:
First, the invention overcomes the inability of a traditional neural network to handle multiple tasks in sequence, and provides a practical and effective way to preserve the importance of earlier tasks while training the model sequentially, so that memory of and recognition ability on earlier tasks are maintained while a new task is learned.
Second, the method extracts radar high-resolution range profile features with a deep network model, which can learn features from the data automatically, in particular the high-dimensional features needed to identify large batches of radar high-resolution range profile data, and improves computational efficiency.
Drawings
The invention is described in further detail below with reference to the drawings and the detailed description.
FIG. 1 is a flow chart of an implementation of an EWC-based on-line radar target identification method of the present invention;
FIG. 2a is the measurement scene of the Yak-42 aircraft;
FIG. 2b is the measurement scene of the Cessna Citation S/II aircraft;
FIG. 2c is the measurement scene of the An-26 aircraft;
FIG. 3 is the recognition performance curve of the present invention on task A for the three aircraft types;
FIG. 4 is the recognition performance curve of the present invention on task B for the three aircraft types.
Detailed Description
Referring to FIG. 1, which is a flow chart of the EWC-based online radar target identification method of the present invention, the method comprises the following steps:
Step 1, acquire training samples and test samples, and initialize the data.
Determine a high-resolution radar and receive target echo data within its detection range; then randomly extract N data from the target echo data as the p-th batch of original radar high-resolution range profile training data S_p, and randomly extract N' data from the target echo data, excluding the N data already extracted, as the p-th batch of original radar high-resolution range profile test data T_p, p = 1, 2, …, P, where P denotes the total number of batches of original radar high-resolution range profile training data and test data.
(1a) The p-th batch of original radar high-resolution range profile training data is S_p = {s_1, s_2, …, s_n, …, s_N}, where s_n denotes the n-th range profile in S_p, s_n = [s_{n1}, s_{n2}, …, s_{ni}, …, s_{nD}]^T, [·]^T denotes the matrix transpose, s_{ni} denotes the value of the n-th range profile of S_p in the i-th range cell, n = 1, 2, …, N, where N denotes the total number of range profiles in S_p, i.e., the total number of training samples in S_p, and i = 1, 2, …, D, where D denotes the total number of range cells in each high-resolution range profile of S_p (i.e., the dimension of a single sample vector).

(1b) Calculate the center of gravity W_n of the n-th high-resolution range profile s_n in the p-th batch of original radar high-resolution range profile training data S_p:

$$W_n = \frac{\sum_{i=1}^{D} i \cdot s_{ni}}{\sum_{i=1}^{D} s_{ni}}$$

(1c) Move the center of the n-th high-resolution range profile s_n in S_p to its center of gravity W_n, and calculate the value x_{ni} of the shifted n-th high-resolution range profile in the i-th range cell:

$$x_{ni} = \mathrm{IFFT}\left[\mathrm{FFT}(s_{ni}) \cdot e^{-j 2\pi a i / D}\right]$$

where FFT denotes the fast Fourier transform, IFFT denotes the inverse fast Fourier transform, s_{ni} denotes the value of the n-th high-resolution range profile of S_p in the i-th range cell, C_n denotes the center of the n-th high-resolution range profile s_n of S_p (the middle range cell, C_n = D/2), a = C_n − W_n denotes the distance, in range cells, between the range cell of the center C_n of s_n and that of its center of gravity W_n, so that the linear phase −2πai/D equals the phase difference φ[C_n] − φ[W_n] between the phase corresponding to the center and the phase corresponding to the center of gravity, e denotes the exponential function, and j denotes the imaginary unit.

(1d) Let i take 1 through D in turn and repeat (1c), obtaining the values x_{n1} through x_{nD} of the shifted n-th high-resolution range profile in the 1st through D-th range cells, recorded as the shifted n-th high-resolution range profile x_n, x_n = [x_{n1}, x_{n2}, …, x_{ni}, …, x_{nD}]; the value of i is then reset to 1.

(1e) Let n take 1 through N in turn and repeat (1c) and (1d), obtaining the shifted 1st high-resolution range profile x_1 through the shifted N-th high-resolution range profile x_N, recorded as the shifted p-th batch of original radar high-resolution range profile training data X_p, X_p = {x_1, x_2, …, x_n, …, x_N}, x_n = [x_{n1}, x_{n2}, …, x_{ni}, …, x_{nD}].

With 1, 2, …, D as the abscissa and x_1, x_2, …, x_n, …, x_N as the ordinate, plot the shifted p-th batch of original radar high-resolution range profile training data X_p as a two-dimensional graph, recorded as the p-th sample echo waveform; according to this waveform, add target classes to the p-th batch of original radar high-resolution range profile training data S_p, recorded as the target class l_p of S_p.
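To make the alignment in (1b)-(1e) concrete, the following is a minimal NumPy sketch of the center-of-gravity alignment of one batch; the function name align_to_center, the toy data, and the choice of implementing the shift as a circular translation via the Fourier shift theorem are illustrative assumptions rather than part of the patent text. The same alignment is applied to the test batches in (1f)-(1j) below.

```python
import numpy as np

def align_to_center(s):
    """Circularly shift one HRRP so its center of gravity W lands on the
    profile center C = D/2, using the Fourier shift theorem (a linear
    phase in the frequency domain realizes a translation in range)."""
    D = s.shape[0]
    i = np.arange(1, D + 1)
    W = np.sum(i * s) / np.sum(s)          # center of gravity W_n
    C = D / 2.0                            # profile center C_n
    a = C - W                              # shift in range cells
    k = np.arange(D)
    ramp = np.exp(-1j * 2.0 * np.pi * k * a / D)
    return np.abs(np.fft.ifft(np.fft.fft(s) * ramp))

# Align every profile in one batch of training data S_p (an N x D array):
S_p = np.abs(np.random.randn(8, 256))      # toy stand-in for radar echoes
X_p = np.stack([align_to_center(s) for s in S_p])
```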
(1f) The p-th batch of original radar high-resolution range profile test data is T_p = {t_1, t_2, …, t_{n'}, …, t_{N'}}, where t_{n'} denotes the n'-th range profile in T_p, t_{n'} = [t_{n'1}, t_{n'2}, …, t_{n'i'}, …, t_{n'D'}]^T, [·]^T denotes the matrix transpose, t_{n'i'} denotes the value of the n'-th range profile of T_p in the i'-th range cell, n' = 1, 2, …, N', where N' denotes the total number of range profiles in T_p, i.e., the total number of test samples in T_p, and i' = 1, 2, …, D', where D' denotes the total number of range cells in each high-resolution range profile of T_p (i.e., the dimension of a single sample vector).

(1g) Calculate the center of gravity W_{n'} of the n'-th high-resolution range profile t_{n'} in the p-th batch of original radar high-resolution range profile test data T_p:

$$W_{n'} = \frac{\sum_{i'=1}^{D'} i' \cdot t_{n'i'}}{\sum_{i'=1}^{D'} t_{n'i'}}$$

(1h) Move the center of the n'-th high-resolution range profile t_{n'} in T_p to its center of gravity W_{n'}, and calculate the value x'_{n'i'} of the shifted n'-th high-resolution range profile in the i'-th range cell:

$$x'_{n'i'} = \mathrm{IFFT}\left[\mathrm{FFT}(t_{n'i'}) \cdot e^{-j 2\pi a i' / D'}\right]$$

where FFT denotes the fast Fourier transform, IFFT denotes the inverse fast Fourier transform, t_{n'i'} denotes the value of the n'-th high-resolution range profile of T_p in the i'-th range cell, C_{n'} denotes the center of the n'-th high-resolution range profile t_{n'} of T_p (the middle range cell, C_{n'} = D'/2), a = C_{n'} − W_{n'} denotes the distance, in range cells, between the range cell of the center C_{n'} of t_{n'} and that of its center of gravity W_{n'}, so that the linear phase −2πai'/D' equals the phase difference φ[C_{n'}] − φ[W_{n'}], e denotes the exponential function, and j denotes the imaginary unit.

(1i) Let i' take 1 through D' in turn and repeat (1h), obtaining the values x'_{n'1} through x'_{n'D'} of the shifted n'-th high-resolution range profile in the 1st through D'-th range cells, recorded as the shifted n'-th high-resolution range profile x'_{n'}, x'_{n'} = [x'_{n'1}, x'_{n'2}, …, x'_{n'i'}, …, x'_{n'D'}]; the value of i' is then reset to 1.

(1j) Let n' take 1 through N' in turn and repeat (1h) and (1i), obtaining the shifted 1st high-resolution range profile x'_1 through the shifted N'-th high-resolution range profile x'_{N'}, recorded as the shifted p-th batch of original radar high-resolution range profile test data T'_p, T'_p = {x'_1, x'_2, …, x'_{n'}, …, x'_{N'}}.

With 1, 2, …, D' as the abscissa and x'_{n'1}, x'_{n'2}, …, x'_{n'i'}, …, x'_{n'D'} as the ordinate, plot the shifted p-th batch of original radar high-resolution range profile test data T'_p as a two-dimensional graph, recorded as the p-th sample echo curve; according to this curve, add target classes to the p-th batch of original radar high-resolution range profile test data T_p, recorded as the target class Tl_p of T_p.
Step 2, establish the convolutional neural network model.

The convolutional neural network model consists of three convolutional layers and two fully connected layers, and is constructed as follows:

(2a) Construct the first convolutional layer: the first convolutional layer performs one-dimensional convolution on the shifted p-th batch of original radar high-resolution range profile training data X_p. It contains C_1 convolution kernels, denoted W^{(1)}, used to convolve the shifted training data X_p; the size of W^{(1)} is set to M_1 × 1 × C_1, where M_1 denotes the window size of each convolution kernel in the first convolutional layer, 1 ≤ M_1 ≤ D.

Set the convolution stride of the first convolutional layer to L_1, 1 ≤ L_1 ≤ D−1; to reduce downsampling, L_1 = 2 is usually set. Convolve the shifted p-th batch of original radar high-resolution range profile training data X_p with the C_1 convolution kernels of the first convolutional layer to obtain C_1 convolution results, recorded as the C_1 feature maps Y^{(1)} of the first convolutional layer:

$$Y^{(1)} = f\left(X_p * W^{(1)} + b^{(1)}\right)$$

where Y^{(1)} denotes the C_1 feature maps of the first convolutional layer, X_p denotes the shifted p-th batch of original radar high-resolution range profile training data, W^{(1)} denotes the C_1 convolution kernels of the first convolutional layer, b^{(1)} denotes the all-ones bias of the first convolutional layer, * denotes the convolution operation, and f(·) denotes the activation function, f(z_1) = max(0, z_1) with z_1 = X_p * W^{(1)} + b^{(1)}, where max(·) denotes the maximum-value operation.
(2b) Construct the second convolutional layer: the second convolutional layer contains C_2 convolution kernels, denoted W^{(2)}, used to convolve the C_1 feature maps Y^{(1)} of the first convolutional layer; the size of the C_2 convolution kernels W^{(2)} is set to M_2 × C_1 × C_2, where M_2 denotes the window size of each convolution kernel in the second convolutional layer. Set the convolution stride of the second convolutional layer to L_2; in this embodiment, L_2 = 2.

Convolve the C_1 feature maps Y^{(1)} of the first convolutional layer with the C_2 convolution kernels W^{(2)} of the second convolutional layer to obtain C_2 convolution results, recorded as the C_2 feature maps Y^{(2)} of the second convolutional layer:

$$Y^{(2)} = f\left(Y^{(1)} * W^{(2)} + b^{(2)}\right)$$

where b^{(2)} denotes the all-ones bias of the second convolutional layer, * denotes the convolution operation, and f(·) denotes the activation function, f(z_2) = max(0, z_2) with z_2 = Y^{(1)} * W^{(2)} + b^{(2)}.
(2c) Construct the third convolutional layer: the third convolutional layer convolves the C_2 feature maps Y^{(2)} of the second convolutional layer. It contains C_3 convolution kernels, denoted W^{(3)}, whose size is set to M_3 × C_2 × C_3, where M_3 denotes the window size of each convolution kernel in the third convolutional layer. Set the convolution stride of the third convolutional layer to L_3; in this embodiment, L_3 = 2.

Convolve the C_2 feature maps Y^{(2)} of the second convolutional layer with the C_3 convolution kernels W^{(3)} of the third convolutional layer to obtain C_3 convolution results, recorded as the C_3 feature maps Y^{(3)} of the third convolutional layer:

$$Y^{(3)} = f\left(Y^{(2)} * W^{(3)} + b^{(3)}\right)$$

where b^{(3)} denotes the all-ones bias of the third convolutional layer, * denotes the convolution operation, and f(·) denotes the activation function, f(z_3) = max(0, z_3) with z_3 = Y^{(2)} * W^{(3)} + b^{(3)}.
(2d) Construct the fourth layer, a fully connected layer: first, stretch each of the C_3 feature maps Y^{(3)} of the third convolutional layer into a column vector of length d (the length of each third-layer feature map), obtaining C_3 stretched column vectors, each containing d neurons, i.e., C_3 · d neurons in total after stretching. The fourth fully connected layer contains h neurons; apply a nonlinear transformation to the stretched C_3 column vectors with the weight matrix W^{(4)} of the fourth fully connected layer and the all-ones bias b^{(4)} of the fourth fully connected layer, obtaining the data result y^{(4)} after the nonlinear transformation of the fourth fully connected layer:

$$y^{(4)} = f\left(W^{(4)} \cdot \tilde{Y}^{(3)} + b^{(4)}\right)$$

where $\tilde{Y}^{(3)}$ denotes the stretched C_3 · d-dimensional column vector, W^{(4)} denotes the weight matrix connecting each of the C_3 · d stretched neurons with the h neurons of the fourth fully connected layer, b^{(4)} denotes the all-ones bias of the fourth fully connected layer, · denotes matrix multiplication, and f(·) denotes the activation function, f(z_4) = max(0, z_4) with z_4 = W^{(4)} · \tilde{Y}^{(3)} + b^{(4)}.
(2e) Construct the fifth layer, a fully connected layer: the fifth fully connected layer contains h' neurons; apply a linear transformation to the data result y^{(4)} output by the fourth fully connected layer with the weight matrix W^{(5)} of the fifth fully connected layer and the all-ones bias b^{(5)} of the fifth fully connected layer, obtaining the data result y^{(5)} after the linear transformation of the fifth fully connected layer:

$$y^{(5)} = W^{(5)} \cdot y^{(4)} + b^{(5)}$$

where W^{(5)} denotes the h × h'-dimensional matrix connecting the h neurons of the fourth fully connected layer with the h' neurons of the fifth fully connected layer, and b^{(5)} denotes the all-ones bias of the fifth fully connected layer.

Once the data result y^{(5)} after the linear transformation of the fifth fully connected layer is obtained, the construction of the convolutional neural network is complete, and it is recorded as the trained convolutional neural network.
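As a reference implementation of steps (2a)-(2e), the sketch below builds the three-convolutional-layer, two-fully-connected-layer network in PyTorch. The concrete kernel counts C1-C3, the window size 5, stride 2, hidden width h, and class count are placeholder choices; the patent leaves them as parameters. With D = 256 and three stride-2 layers, the flattened feature length is C3 · D/8, matching the stretching in (2d).

```python
import torch
import torch.nn as nn

class HRRPNet(nn.Module):
    """Three 1-D convolutional layers with ReLU and stride 2, i.e. steps
    (2a)-(2c), followed by a nonlinear fully connected layer (2d) and a
    linear fully connected output layer (2e)."""
    def __init__(self, D=256, C1=16, C2=32, C3=64, h=128, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1,  C1, kernel_size=5, stride=2, padding=2), nn.ReLU(),  # (2a)
            nn.Conv1d(C1, C2, kernel_size=5, stride=2, padding=2), nn.ReLU(),  # (2b)
            nn.Conv1d(C2, C3, kernel_size=5, stride=2, padding=2), nn.ReLU(),  # (2c)
        )
        self.fc4 = nn.Linear(C3 * (D // 8), h)   # (2d): stretch + nonlinear layer
        self.fc5 = nn.Linear(h, num_classes)     # (2e): linear output y^(5)

    def forward(self, x):                        # x: (batch, 1, D)
        z = self.features(x).flatten(1)          # stretch the C3 feature maps
        return self.fc5(torch.relu(self.fc4(z)))

model = HRRPNet()                                # trained with the EWC loss below
```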
Step 3, calculate the Fisher information matrix for EWC.

From the data result y^{(5)} after the linear transformation of the fifth fully connected layer, randomly extract m data and compute the first-order partial derivatives of the extracted m data with respect to the parameters θ of all convolutional layers and fully connected layers; sum the squares of the first-order derivatives to obtain the sums of squared first derivatives of the m data. The sum of squared first derivatives of each data is the Fisher information matrix of that data, which yields the Fisher information matrices of the m data in the p-th batch of original radar high-resolution range profile training data S_p. The Fisher information matrix F_{pj} of the j-th data in S_p is

$$F_{pj} = \left(\frac{\partial \log y^{(5)}_j}{\partial \theta}\right)^{2}$$

where y^{(5)}_j denotes the j-th data of the data result y^{(5)} after the linear transformation of the fifth fully connected layer, j = 1, …, m, and F_{pj} denotes the Fisher information matrix of the j-th data in S_p, used to update the convolutional neural network model for the next batch of data.
Step 4, acquire the (p'+1)-th batch of original radar high-resolution range profile training data S_{p'+1} and its target class l_{p'+1}; p' = 1, 2, 3, …, P−1, p'+1 = 2, 3, …, P; the initial value of p' is 1, and the initial value of p'+1 is 2.

Then calculate the EWC LOSS function LOSS_{p'+1} of the (p'+1)-th batch of original radar high-resolution range profile training data S_{p'+1}:

$$\mathrm{LOSS}_{p'+1} = L\left(S_{p'+1}\right) + \lambda\, Q_{p'+1}$$

where L(S_{p'+1}) is the classification loss of the network on S_{p'+1}, λ is a weight coefficient, usually taking a value in (0, 1), and Q_{p'+1} is the parameter variation value of the (p'+1)-th batch of original radar high-resolution range profile training data S_{p'+1}, calculated as

$$Q_{p'+1} = \sum_{k} F_{p'k}\left(\theta_k - \theta^{*}_{p'k}\right)^{2}$$

where θ_k denotes the k-th network parameter, θ*_{p'k} denotes its value after training on the p'-th batch, and F_{p'k} denotes the corresponding Fisher information.

Using the EWC LOSS function LOSS_{p'+1} of the (p'+1)-th batch of original radar high-resolution range profile training data S_{p'+1}, update and train the trained convolutional neural network by the back-propagation algorithm to obtain the convolutional neural network model M_{p'+1} updated with the (p'+1)-th batch of data.
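The corresponding loss and update step can be sketched as follows. Writing the penalty as lam * Q, with Q the Fisher-weighted squared parameter deviation, matches the loss above; the cross-entropy term on the new batch and the name theta_star for the parameters kept from the previous batch are assumptions consistent with standard EWC.

```python
import torch.nn.functional as F

def ewc_loss(model, logits, targets, fisher, theta_star, lam=0.5):
    """LOSS_{p'+1} = cross-entropy on the new batch + lam * Q_{p'+1},
    where Q penalizes moving each parameter away from its value after
    the previous batch, weighted by its Fisher information."""
    loss = F.cross_entropy(logits, targets)
    for n, p in model.named_parameters():
        loss = loss + lam * (fisher[n] * (p - theta_star[n]) ** 2).sum()
    return loss

# One back-propagation update on batch p'+1 (x_new, y_new, optimizer assumed):
# theta_star = {n: p.detach().clone() for n, p in model.named_parameters()}
# fisher = fisher_diagonal(model, x_old, y_old, m=64)
# ewc_loss(model, model(x_new), y_new, fisher, theta_star).backward()
# optimizer.step()
```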
Step 5, add 1 to the value of p' and repeat step 4 until p' = P−1 and p'+1 = P, obtaining the convolutional neural network model M_P updated with the P-th batch of data; the value of p' is then reset to 1.

Step 6, let p take 1 through P for the shifted p-th batch of original radar high-resolution range profile test data T'_p and the target class Tl_p of the p-th batch of original radar high-resolution range profile test data T_p, obtaining the shifted 1st through P-th batches of original radar high-resolution range profile test data T'_1 through T'_P and the target classes Tl_1 through Tl_P of the 1st through P-th batches of original radar high-resolution range profile test data T_1 through T_P.

Input the shifted 1st through P-th batches of original radar high-resolution range profile test data into the trained convolutional neural network and, using the convolutional neural network model M_P updated with the P-th batch of data, obtain the predicted target classes l'_1 through l'_P of the 1st through P-th batches of original radar high-resolution range profile test data.

Step 7, if l'_e equals Tl_e, e = 1, 2, …, P, the target class of the e-th batch of original radar high-resolution range profile test data is identified correctly, that is, the target in the e-th batch is considered recognized and recorded as the e'-th correctly identified target, where the initial value of e' is 1 and e' is then incremented by 1; if l'_e and Tl_e are not equal, the target class of the e-th batch of original radar high-resolution range profile test data is identified incorrectly, and the result of that batch is discarded.
Let e get 1 to P respectively, and then get the 1 st type recognition correct target to the 1 st
Figure GDA0003623644840000102
Each of the categories identifies the correct target and,
Figure GDA0003623644840000103
obtained at this time
Figure GDA0003623644840000104
And identifying the correct target by each category as an online radar target identification result based on the EWC.
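Step 7 reduces to counting the test batches whose predicted class matches the labeled class; a minimal sketch, with the list names predicted and labeled assumed, is:

```python
def count_correct(predicted, labeled):
    """Return P_hat, the number of batches e with l'_e == Tl_e;
    batches whose class is misidentified are simply discarded."""
    return sum(1 for lp, tl in zip(predicted, labeled) if lp == tl)

# predicted = [l'_1, ..., l'_P] from M_P; labeled = [Tl_1, ..., Tl_P]
# P_hat = count_correct(predicted, labeled)   # P_hat <= P
```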
The effect of the invention is further illustrated by the following simulations on measured data of three types of aircraft.

Measured radar data of original radar high-resolution range profiles were obtained; the measurement scenes are shown in FIG. 2a, FIG. 2b and FIG. 2c, where FIG. 2a is the measurement scene of the Yak-42 aircraft, FIG. 2b is the measurement scene of the Cessna Citation S/II aircraft, and FIG. 2c is the measurement scene of the An-26 aircraft.

In the simulation, the obtained original high-resolution range profiles were divided into two sets: a training set Tr and a test set Te, where the training set Tr comprises Tr_A and Tr_B. Tr_A is the training data for task A and Tr_B is the training data for task B; moreover, Tr_A and Tr_B contain high-resolution range profile data from different aspect angles.

The recognition results for task A and task B are shown in FIG. 3 and FIG. 4: FIG. 3 shows the recognition performance curve of the present invention on task A for the three aircraft types, and FIG. 4 shows the recognition performance curve on task B.

The experimental results show that when a new task arrives, the model not only has good recognition ability on the current new task, as shown in FIG. 4, but also retains good recognition ability on the previous task, as shown in FIG. 3.

In conclusion, the simulation experiments verify the correctness, effectiveness and reliability of the method.

It will be apparent to those skilled in the art that various changes and modifications may be made to the present invention without departing from its spirit and scope; thus, if such modifications and variations fall within the scope of the claims of the present invention and their equivalents, the present invention is intended to include them as well.

Claims (4)

1. An online radar target identification method based on EWC, characterized by comprising the following steps:

step 1, determining the p-th batch of original radar high-resolution range profile training data S_p and the p-th batch of original radar high-resolution range profile test data T_p, and determining the target class l_p of S_p and the target class Tl_p of T_p; p = 1, 2, …, P, P > 1;

step 2, establishing a convolutional neural network model and, from the p-th batch of original radar high-resolution range profile training data S_p, obtaining a trained convolutional neural network;

step 3, obtaining, from the trained convolutional neural network, the Fisher information matrices of m data in the p-th batch of original radar high-resolution range profile training data S_p; m ≥ 1;

step 4, determining, from the Fisher information matrices of the m data in S_p, the convolutional neural network model M_{p'+1} updated with the (p'+1)-th batch of data; p' = 1, 2, 3, …, P−1, p'+1 = 2, 3, …, P; the initial value of p' is 1, and the initial value of p'+1 is 2;

step 5, adding 1 to the value of p' and repeating step 4 until p' = P−1 and p'+1 = P, thereby obtaining the convolutional neural network model M_P updated with the P-th batch of data; the value of p' is then reset to 1;

step 6, determining the target classes Tl_1 through Tl_P of the 1st through P-th batches of original radar high-resolution range profile test data T_1 through T_P and, from the convolutional neural network model M_P updated with the P-th batch of data, obtaining the predicted target classes l'_1 through l'_P of the 1st through P-th batches of original radar high-resolution range profile test data;

step 7, if l'_e equals Tl_e, e = 1, 2, …, P, the target in the e-th batch of original radar high-resolution range profile test data is considered recognized and recorded as the e'-th correctly identified target, where the initial value of e' is 1 and e' is then incremented by 1; if l'_e and Tl_e are not equal, the target class of the e-th batch of original radar high-resolution range profile test data is identified incorrectly, and the result of that batch is discarded;

letting e take 1 through P in turn yields the 1st through P̂-th correctly identified targets, P̂ ≤ P; the P̂ correctly identified targets are the online radar target identification result based on EWC;
in step 1, the p-th batch of original radar high-resolution range profile training data S_p and the p-th batch of original radar high-resolution range profile test data T_p are determined as follows:

determining a high-resolution radar and receiving target echo data within its detection range; then randomly extracting N data from the target echo data as the p-th batch of original radar high-resolution range profile training data S_p, and randomly extracting N' data from the target echo data, excluding the N data already extracted, as the p-th batch of original radar high-resolution range profile test data T_p; p = 1, 2, …, P, where P denotes the total number of batches of original radar high-resolution range profile training data and test data;

the target class l_p of the p-th batch of original radar high-resolution range profile training data S_p is determined as follows:

(1a) the p-th batch of original radar high-resolution range profile training data is S_p = {s_1, s_2, …, s_n, …, s_N}, where s_n denotes the n-th range profile in S_p, s_n = [s_{n1}, s_{n2}, …, s_{ni}, …, s_{nD}]^T, [·]^T denotes the matrix transpose, s_{ni} denotes the value of the n-th range profile of S_p in the i-th range cell, n = 1, 2, …, N, N denotes the total number of range profiles in S_p, i.e., the total number of training samples in S_p, i = 1, 2, …, D, and D denotes the total number of range cells in each high-resolution range profile of S_p;

(1b) calculating the center of gravity W_n of the n-th high-resolution range profile s_n in S_p:

$$W_n = \frac{\sum_{i=1}^{D} i \cdot s_{ni}}{\sum_{i=1}^{D} s_{ni}}$$

(1c) moving the center of the n-th high-resolution range profile s_n in S_p to its center of gravity W_n, and calculating the value x_{ni} of the shifted n-th high-resolution range profile in the i-th range cell:

$$x_{ni} = \mathrm{IFFT}\left[\mathrm{FFT}(s_{ni}) \cdot e^{-j 2\pi a i / D}\right]$$

where FFT denotes the fast Fourier transform, IFFT denotes the inverse fast Fourier transform, s_{ni} denotes the value of the n-th high-resolution range profile of S_p in the i-th range cell, C_n denotes the center of the n-th high-resolution range profile s_n (the middle range cell, C_n = D/2), a = C_n − W_n denotes the distance, in range cells, between the range cell of the center C_n of s_n and that of its center of gravity W_n, so that the linear phase −2πai/D equals the phase difference φ[C_n] − φ[W_n], e denotes the exponential function, and j denotes the imaginary unit;

(1d) letting i take 1 through D in turn and repeating (1c) to obtain the values x_{n1} through x_{nD} of the shifted n-th high-resolution range profile in the 1st through D-th range cells, recorded as the shifted n-th high-resolution range profile x_n, x_n = [x_{n1}, x_{n2}, …, x_{ni}, …, x_{nD}]; the value of i is then reset to 1;

(1e) letting n take 1 through N in turn and repeating (1c) and (1d) to obtain the shifted 1st high-resolution range profile x_1 through the shifted N-th high-resolution range profile x_N, recorded as the shifted p-th batch of original radar high-resolution range profile training data X_p, X_p = {x_1, x_2, …, x_n, …, x_N};

with 1, 2, …, D as the abscissa and x_1, x_2, …, x_n, …, x_N as the ordinate, plotting the shifted training data X_p as a two-dimensional graph, recorded as the p-th sample echo waveform, and, according to this waveform, adding target classes to the p-th batch of original radar high-resolution range profile training data S_p, recorded as the target class l_p of S_p;
in step 2, the trained convolutional neural network is the result obtained after training the established convolutional neural network model; the established convolutional neural network model comprises three convolutional layers and two fully connected layers, and the training process is as follows:

(2a) setting the convolution stride of the first convolutional layer to L_1, 1 ≤ L_1 ≤ D−1; the first convolutional layer contains C_1 convolution kernels, denoted W^{(1)}, whose size is set to M_1 × 1 × C_1, where M_1 denotes the window size of each convolution kernel in the first convolutional layer, 1 ≤ M_1 ≤ D;

convolving the shifted p-th batch of original radar high-resolution range profile training data X_p with the C_1 convolution kernels of the first convolutional layer to obtain C_1 convolution results, recorded as the C_1 feature maps Y^{(1)} of the first convolutional layer:

$$Y^{(1)} = f\left(X_p * W^{(1)} + b^{(1)}\right)$$

where Y^{(1)} denotes the C_1 feature maps of the first convolutional layer, b^{(1)} denotes the all-ones bias of the first convolutional layer, * denotes the convolution operation, f(·) denotes the activation function, f(z_1) = max(0, z_1) with z_1 = X_p * W^{(1)} + b^{(1)}, and max(·) denotes the maximum-value operation;
(2b) the second convolutional layer contains C_2 convolution kernels, denoted W^{(2)}, used to convolve the C_1 feature maps Y^{(1)} of the first convolutional layer; the size of the C_2 convolution kernels W^{(2)} is set to M_2 × C_1 × C_2, where M_2 denotes the window size of each convolution kernel in the second convolutional layer; setting the convolution stride of the second convolutional layer to L_2;

convolving the C_1 feature maps Y^{(1)} of the first convolutional layer with the C_2 convolution kernels W^{(2)} of the second convolutional layer to obtain C_2 convolution results, recorded as the C_2 feature maps Y^{(2)} of the second convolutional layer:

$$Y^{(2)} = f\left(Y^{(1)} * W^{(2)} + b^{(2)}\right)$$

where b^{(2)} denotes the all-ones bias of the second convolutional layer, and f(z_2) = max(0, z_2) with z_2 = Y^{(1)} * W^{(2)} + b^{(2)};
(2c) the third convolutional layer contains C_3 convolution kernels, denoted W^{(3)}, whose size is set to M_3 × C_2 × C_3, where M_3 denotes the window size of each convolution kernel in the third convolutional layer; setting the convolution stride of the third convolutional layer to L_3;

convolving the C_2 feature maps Y^{(2)} of the second convolutional layer with the C_3 convolution kernels W^{(3)} of the third convolutional layer to obtain C_3 convolution results, recorded as the C_3 feature maps Y^{(3)} of the third convolutional layer:

$$Y^{(3)} = f\left(Y^{(2)} * W^{(3)} + b^{(3)}\right)$$

where b^{(3)} denotes the all-ones bias of the third convolutional layer, and f(z_3) = max(0, z_3) with z_3 = Y^{(2)} * W^{(3)} + b^{(3)};
(2d) stretching each of the C_3 feature maps Y^{(3)} of the third convolutional layer into a column vector of length d (the length of each third-layer feature map), obtaining C_3 stretched column vectors, each containing d neurons, i.e., C_3 · d neurons in total after stretching;

the fourth fully connected layer contains h neurons; applying a nonlinear transformation to the stretched C_3 column vectors with the weight matrix W^{(4)} of the fourth fully connected layer and the all-ones bias b^{(4)} of the fourth fully connected layer, obtaining the data result y^{(4)} after the nonlinear transformation of the fourth fully connected layer:

$$y^{(4)} = f\left(W^{(4)} \cdot \tilde{Y}^{(3)} + b^{(4)}\right)$$

where $\tilde{Y}^{(3)}$ denotes the stretched C_3 · d-dimensional column vector, W^{(4)} denotes the weight matrix connecting each of the C_3 · d stretched neurons with the h neurons of the fourth fully connected layer, b^{(4)} denotes the all-ones bias of the fourth fully connected layer, · denotes matrix multiplication, f(·) denotes the activation function, and f(z_4) = max(0, z_4) with z_4 = W^{(4)} · \tilde{Y}^{(3)} + b^{(4)};
(2e) the fifth fully connected layer contains h' neurons; applying a linear transformation to the data result y^{(4)} output by the fourth fully connected layer with the weight matrix W^{(5)} of the fifth fully connected layer and the all-ones bias b^{(5)} of the fifth fully connected layer, obtaining the data result y^{(5)} after the linear transformation of the fifth fully connected layer:

$$y^{(5)} = W^{(5)} \cdot y^{(4)} + b^{(5)}$$

where W^{(5)} denotes the h × h'-dimensional matrix connecting the h neurons of the fourth fully connected layer with the h' neurons of the fifth fully connected layer, and b^{(5)} denotes the all-ones bias of the fifth fully connected layer;

once the data result y^{(5)} after the linear transformation of the fifth fully connected layer is obtained, the construction of the convolutional neural network is complete, and it is recorded as the trained convolutional neural network;
in step 3, the pth batch of raw radar high resolution range profile training data SpThe Fisher information matrix of the medium m data comprises the following processes:
data result after linear transformation from fifth layer full link layer
Figure FDA00036236448300000517
Randomly extracting m data, and calculating the parameters of the extracted m data to all convolution layers and full connection
Figure FDA00036236448300000518
The first order partial derivative is calculated and summed according to the first order partial derivative result to obtain the first order derivative function square sum of m data, the first order derivative function square sum of each data is the Fisher information matrix of the corresponding data, and then the p th batch of original radar high-resolution range profile training data S is obtainedpFisher information matrix of m data, wherein the p-th batch of original radar high-resolution range profile training data SpThe Fisher information matrix of the j-th data is FpjThe formula is as follows:
Figure FDA0003623644830000061
wherein the content of the first and second substances,
Figure FDA0003623644830000062
representing the data result after the linear transformation of the fifth layer full connection layer
Figure FDA0003623644830000063
J-th data, j 1, m, FpjRepresenting the p-th batch of original radar high-resolution range profile training data S pFisher information matrix of the jth data in (g).
2. The EWC-based online radar target identification method of claim 1, wherein in step 1 the target class Tl_p of the p-th batch of original radar high-resolution range profile test data T_p is determined as follows:

(1f) the p-th batch of original radar high-resolution range profile test data is T_p = {t_1, t_2, …, t_{n'}, …, t_{N'}}, where t_{n'} denotes the n'-th range profile in T_p, t_{n'} = [t_{n'1}, t_{n'2}, …, t_{n'i'}, …, t_{n'D'}]^T, [·]^T denotes the matrix transpose, t_{n'i'} denotes the value of the n'-th range profile of T_p in the i'-th range cell, n' = 1, 2, …, N', N' denotes the total number of range profiles in T_p, i.e., the total number of test samples in T_p, i' = 1, 2, …, D', and D' denotes the total number of range cells in each high-resolution range profile of T_p;

(1g) calculating the center of gravity W_{n'} of the n'-th high-resolution range profile t_{n'} in T_p:

$$W_{n'} = \frac{\sum_{i'=1}^{D'} i' \cdot t_{n'i'}}{\sum_{i'=1}^{D'} t_{n'i'}}$$

(1h) moving the center of the n'-th high-resolution range profile t_{n'} in T_p to its center of gravity W_{n'}, and calculating the value x'_{n'i'} of the shifted n'-th high-resolution range profile in the i'-th range cell:

$$x'_{n'i'} = \mathrm{IFFT}\left[\mathrm{FFT}(t_{n'i'}) \cdot e^{-j 2\pi a i' / D'}\right]$$

where FFT denotes the fast Fourier transform, IFFT denotes the inverse fast Fourier transform, t_{n'i'} denotes the value of the n'-th high-resolution range profile of T_p in the i'-th range cell, C_{n'} denotes the center of the n'-th high-resolution range profile t_{n'} (the middle range cell, C_{n'} = D'/2), a = C_{n'} − W_{n'} denotes the distance, in range cells, between the range cell of the center C_{n'} of t_{n'} and that of its center of gravity W_{n'}, e denotes the exponential function, and j denotes the imaginary unit;

(1i) letting i' take 1 through D' in turn and repeating (1h) to obtain the values x'_{n'1} through x'_{n'D'} of the shifted n'-th high-resolution range profile in the 1st through D'-th range cells, recorded as the shifted n'-th high-resolution range profile x'_{n'}, x'_{n'} = [x'_{n'1}, x'_{n'2}, …, x'_{n'i'}, …, x'_{n'D'}]; the value of i' is then reset to 1;

(1j) letting n' take 1 through N' in turn and repeating (1h) and (1i) to obtain the shifted 1st high-resolution range profile x'_1 through the shifted N'-th high-resolution range profile x'_{N'}, recorded as the shifted p-th batch of original radar high-resolution range profile test data T'_p, T'_p = {x'_1, x'_2, …, x'_{n'}, …, x'_{N'}};

with 1, 2, …, D' as the abscissa and x'_{n'1}, x'_{n'2}, …, x'_{n'i'}, …, x'_{n'D'} as the ordinate, plotting the shifted test data T'_p as a two-dimensional graph, recorded as the p-th sample echo curve, and, according to this curve, adding target classes to the p-th batch of original radar high-resolution range profile test data T_p, recorded as the target class Tl_p of T_p.
3. The EWC-based online radar target recognition method of claim 1, wherein in step 4 the convolutional neural network model Mp′+1 updated with the (p′+1)-th batch of data is determined as follows:
First, obtain the (p′+1)-th batch of original radar high-resolution range profile training data Sp′+1 and the target class lp′+1 of the (p′+1)-th batch of original radar high-resolution range profile training data Sp′+1; p′ = 1, 2, 3, …, P−1, p′+1 = 2, 3, …, P, the initial value of p′ is 1, and the initial value of p′+1 is 2;
Then calculate the EWC loss function LOSSp′+1 of the (p′+1)-th batch of original radar high-resolution range profile training data Sp′+1 as:

$$\mathrm{LOSS}_{p'+1} = L_{p'+1} + \lambda\, Q_{p'+1}$$
where Lp′+1 is the classification (cross-entropy) loss of the convolutional neural network on the (p′+1)-th batch of original radar high-resolution range profile training data Sp′+1; λ is a weight coefficient, usually taking a value in (0, 1); Qp′+1 is the parameter variation value of the (p′+1)-th batch of original radar high-resolution range profile training data Sp′+1, calculated as:

$$Q_{p'+1} = \sum_{m} \frac{1}{2}\, F_m \left(\theta_m - \theta^{*}_{p',m}\right)^2$$

where Fm is the Fisher information of the m-th network parameter computed from the p′-th batch of data, θm is the m-th parameter of the convolutional neural network, and θ*p′,m is the value of the m-th parameter after training on the p′-th batch of data;
Finally, through the back-propagation algorithm, use the EWC loss function LOSSp′+1 of the (p′+1)-th batch of original radar high-resolution range profile training data Sp′+1 to update and train the trained convolutional neural network, obtaining the convolutional neural network model Mp′+1 updated with the (p′+1)-th batch of data.
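As a minimal illustration of this update, the PyTorch-style sketch below adds the quadratic EWC penalty λ·Qp′+1 to the cross-entropy loss of the new batch. The names net, old_params, and fisher, and the crude one-batch diagonal Fisher estimate, are assumptions made for the example, not the patent's exact procedure.

    import torch
    import torch.nn.functional as F

    def diagonal_fisher(net, x, y):
        # Crude one-batch diagonal Fisher estimate: squared gradients of the
        # log-likelihood loss (an illustrative shortcut, not the patent's estimator).
        net.zero_grad()
        F.cross_entropy(net(x), y).backward()
        return {n: p.grad.detach() ** 2 for n, p in net.named_parameters()}

    def ewc_loss(net, x, y, old_params, fisher, lam=0.4):
        # Cross-entropy on the new batch plus lam * Q, where
        # Q = sum_m 0.5 * F_m * (theta_m - theta*_{p',m})^2.
        task_loss = F.cross_entropy(net(x), y)
        q = sum(0.5 * (fisher[n] * (p - old_params[n]) ** 2).sum()
                for n, p in net.named_parameters())
        return task_loss + lam * q

After training on the p′-th batch one would snapshot old_params = {n: p.detach().clone() for n, p in net.named_parameters()} together with the Fisher estimate, then minimize ewc_loss on the (p′+1)-th batch with an ordinary optimizer.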
4. The EWC-based online radar target recognition method of claim 2 or 3, wherein the determination process of step 6 is as follows:
For the moved p-th batch of original radar high-resolution range profile test data Tp′ and the target class Tlp of the p-th batch of original radar high-resolution range profile test data Tp, let p take values from 1 to P respectively, thereby obtaining the moved 1st batch of original radar high-resolution range profile test data T1′ through the moved P-th batch of original radar high-resolution range profile test data TP′, and the target class Tl1 of the 1st batch of original radar high-resolution range profile test data T1 through the target class TlP of the P-th batch of original radar high-resolution range profile test data TP.
Input the moved 1st batch of original radar high-resolution range profile test data T1′ through the moved P-th batch of original radar high-resolution range profile test data TP′ into the convolutional neural network model MP updated with the P-th batch of data, respectively obtaining the predicted target class l1′ of the 1st batch of original radar high-resolution range profile test data through the predicted target class lP′ of the P-th batch of original radar high-resolution range profile test data.
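Operationally, this claim reduces to a forward pass of every moved test batch through the final model MP followed by an argmax over class scores. The sketch below assumes a trained PyTorch classifier model_P and Python lists test_batches and test_labels holding T1′…TP′ and Tl1…TlP; all of these names are illustrative.

    import torch

    @torch.no_grad()
    def predict_batches(model_P, test_batches):
        # Predicted target class l'_p for each moved test batch T_p', p = 1..P.
        model_P.eval()
        return [model_P(batch).argmax(dim=1) for batch in test_batches]

    def per_batch_accuracy(predictions, labels):
        # Recognition rate per batch: fraction of samples whose prediction
        # matches the stored target class Tl_p.
        return [float((pred == lab).float().mean())
                for pred, lab in zip(predictions, labels)]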
CN201810757440.XA 2018-07-11 2018-07-11 Online radar target identification method based on EWC Active CN108983187B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810757440.XA CN108983187B (en) 2018-07-11 2018-07-11 Online radar target identification method based on EWC

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810757440.XA CN108983187B (en) 2018-07-11 2018-07-11 Online radar target identification method based on EWC

Publications (2)

Publication Number Publication Date
CN108983187A CN108983187A (en) 2018-12-11
CN108983187B true CN108983187B (en) 2022-07-15

Family

ID=64536851

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810757440.XA Active CN108983187B (en) 2018-07-11 2018-07-11 Online radar target identification method based on EWC

Country Status (1)

Country Link
CN (1) CN108983187B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113126052A (en) * 2021-03-08 2021-07-16 西安电子科技大学 High-resolution range profile target identification online library building method based on stage-by-stage segmentation training
CN113171102B (en) * 2021-04-08 2022-09-02 南京信息工程大学 ECG data classification method based on continuous deep learning
CN114246563B (en) * 2021-12-17 2023-11-17 重庆大学 Heart and lung function intelligent monitoring equipment based on millimeter wave radar

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104459668A (en) * 2014-12-03 2015-03-25 西安电子科技大学 Radar target recognition method based on deep learning network
CN107563411A (en) * 2017-08-07 2018-01-09 西安电子科技大学 Online SAR target detection method based on deep learning

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080169939A1 (en) * 2007-01-11 2008-07-17 Dickens Charles E Early warning control system for vehicular crossing safety
US10095950B2 (en) * 2015-06-03 2018-10-09 Hyperverge Inc. Systems and methods for image processing
CN107728142B (en) * 2017-09-18 2021-04-27 西安电子科技大学 Radar high-resolution range profile target identification method based on two-dimensional convolutional network
CN107784320B (en) * 2017-09-27 2019-12-06 电子科技大学 Method for identifying radar one-dimensional range profile target based on convolution support vector machine

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104459668A (en) * 2014-12-03 2015-03-25 西安电子科技大学 Radar target recognition method based on deep learning network
CN107563411A (en) * 2017-08-07 2018-01-09 西安电子科技大学 Online SAR target detection method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Overcoming catastrophic forgetting in neural networks; Kirkpatrick J et al.; Proceedings of the National Academy of Sciences; 2017-03-28; Vol. 114, No. 13; pp. 3521-3526 *
Fisher-based linear discriminant regression classification algorithm; Zeng Xianhao et al.; Journal of Anyang Institute of Technology; 2015-03-20, No. 02; pp. 1-3 *

Also Published As

Publication number Publication date
CN108983187A (en) 2018-12-11

Similar Documents

Publication Publication Date Title
CN109086700B (en) Radar one-dimensional range profile target identification method based on deep convolutional neural network
CN108229404B (en) Radar echo signal target identification method based on deep learning
CN111160176B (en) Fusion feature-based ground radar target classification method for one-dimensional convolutional neural network
CN110334741B (en) Radar one-dimensional range profile identification method based on cyclic neural network
CN112001270B (en) Ground radar automatic target classification and identification method based on one-dimensional convolutional neural network
CN108983187B (en) Online radar target identification method based on EWC
CN104459668A (en) Radar target recognition method based on deep learning network
CN112965062B (en) Radar range profile target recognition method based on LSTM-DAM network
CN110082738B (en) Radar target identification method based on Gaussian mixture and tensor recurrent neural network
CN107085733A (en) Offshore infrared ship recognition methods based on CNN deep learnings
CN112052762A (en) Small sample ISAR image target identification method based on Gaussian prototype
CN109948722B (en) Method for identifying space target
CN110766084A (en) Small sample SAR target identification method based on CAE and HL-CNN
CN109239670B (en) Radar HRRP (high resolution ratio) identification method based on structure embedding and deep neural network
CN110223342B (en) Space target size estimation method based on deep neural network
CN113406623A (en) Target identification method, device and medium based on radar high-resolution range profile
CN109871907B (en) Radar target high-resolution range profile identification method based on SAE-HMM model
CN111596292A (en) Radar target identification method based on importance network and bidirectional stacking recurrent neural network
CN109063750B (en) SAR target classification method based on CNN and SVM decision fusion
CN112835008B (en) High-resolution range profile target identification method based on attitude self-adaptive convolutional network
CN111368653A (en) Low-altitude small target detection method based on R-D (R-D) graph and deep neural network
CN112946600B (en) Method for constructing radar HRRP database based on WGAN-GP
CN114004152A (en) Multi-wind-field wind speed space-time prediction method based on graph convolution and recurrent neural network
Choi et al. Information-maximizing adaptive design of experiments for wind tunnel testing
Liu et al. Incremental multitask SAR target recognition with dominant neuron preservation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant