CN112163450A - High-frequency ground wave radar ship target detection method based on the S³D learning algorithm - Google Patents

High-frequency ground wave radar ship target detection method based on the S³D learning algorithm Download PDF

Info

Publication number
CN112163450A
Authority
CN
China
Prior art keywords
target
network
training
learning algorithm
wave radar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010853777.8A
Other languages
Chinese (zh)
Inventor
Zhang Ling
Li Qingfeng
Niu Jiong
Li Ming
Ji Yonggang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ocean University of China
Original Assignee
Ocean University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ocean University of China filed Critical Ocean University of China
Priority to CN202010853777.8A
Publication of CN112163450A
Legal status: Pending

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/10 Image enhancement or restoration using non-spatial domain filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10044 Radar image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a high-frequency ground wave radar ship target detection method based on the S³D learning algorithm, belonging to the technical field of radar target detection. The method comprises the following implementation steps: locating training samples with cell-averaging constant false alarm rate detection; selecting a target window and generating training samples; enhancing the data; constructing a self-distillation learning network; realizing a semi-supervised self-distillation learning algorithm with an unsupervised loss function and a cross-entropy loss function; training the neural network; classifying candidate targets with the trained neural network and removing redundant target frames with a non-maximum suppression algorithm; thereby completing high-frequency ground wave radar ship target detection based on the S³D learning algorithm.

Description

High-frequency ground wave radar ship target detection method based on the S³D learning algorithm
Technical Field
The invention discloses a high-frequency ground wave radar ship target detection method based on the S³D learning algorithm, belonging to the technical field of radar target detection.
Background
Traditional high-frequency ground wave radar ship target detection methods generally rely on a single decision condition whose parameters are set from human experience, such as the detection threshold of constant false alarm rate detection, the wavelet scale in wavelet transformation, or the choice of the over-complete dictionary in sparse representation. In complex detection environments, a decision based on a single empirically chosen condition often yields a high false alarm rate or a low detection rate. Intelligent methods such as OES-ELM, Faster R-CNN and YOLOv2 typically require a large number of labeled samples. Because the high-frequency ground wave radar environment is complex, there are often many targets on the RD (range-Doppler) spectrogram and the energy of the target points is weak, so labeling all target points is very difficult and consumes a great deal of time and labor. Hand-crafted feature extraction operators, such as SIFT and HOG, have also been widely adopted; OES-ELM, for example, uses SIFT for feature extraction. Manually extracted features are generally not tailored to the task, their characterization capability is weak, and the classification accuracy is therefore greatly limited.
The prior art therefore mainly has the following defects: a high false alarm rate or a low detection rate, a large demand for labeled samples, and weak, untargeted features extracted by hand-crafted feature extraction operators.
Disclosure of Invention
The invention discloses a high-frequency ground wave radar ship target detection method based on the S³D learning algorithm, aiming at solving the problems of a high false alarm rate or low detection rate and a large demand for labeled samples in the prior art.
The high-frequency ground wave radar ship target detection method based on the S³D learning algorithm comprises the following implementation steps:
S1, cell-averaging constant false alarm rate detection, detecting and locating training samples;
S2, selecting a target window and generating training samples;
S3, data enhancement;
S4, constructing a self-distillation learning network;
S5, realizing the semi-supervised self-distillation learning algorithm with an unsupervised loss function and a cross-entropy loss function;
S6, training the neural network;
S7, classifying the candidate targets with the trained neural network and removing redundant target frames with a non-maximum suppression algorithm;
S8, completing high-frequency ground wave radar ship target detection.
In step S1, the cell-averaging constant false alarm rate detection proceeds as follows: a gray-scale map of each sample is obtained and the gray values are input along the frequency and distance directions, with the number of reference cells N = 40 and the guard cells set to 2; a target point is determined by the following formula:

$$x_{CUT} = \begin{cases} 1, & X_{CUT} > \alpha \cdot \frac{1}{N} \sum_{i=1}^{N} X_i \\ 0, & \text{otherwise} \end{cases}$$

where X_CUT is the value of the cell under test and X_i are the reference cell values.
The target detection rate P_d and the false alarm rate P_f are used as reference indices, calculated as follows:

$$P_d = \frac{TP}{TP + FN}, \qquad P_f = \frac{FP}{TP + FP}$$
where TP is the number of correctly detected target points, FN the number of undetected target points (so TP + FN is the total number of target points), and FP the number of falsely detected target points; the threshold factor α is set greater than 0.8 in the training phase, and α = 0.8 is taken in the testing phase. A detection sketch follows.
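As an illustration, the cell-averaging CFAR decision above can be sketched in Python/NumPy as follows. This is a minimal one-dimensional sketch; the function name, the symmetric split of the N = 40 reference cells around the cell under test, and the skipping of border cells are assumptions not specified by the patent:

```python
import numpy as np

def ca_cfar_1d(values, num_ref=40, num_guard=2, alpha=0.8):
    """Cell-averaging CFAR along one direction of the gray-scale RD map.

    values    : 1-D array of gray values (one row or column of the RD map)
    num_ref   : total number of reference cells N, split over both sides
    num_guard : guard cells on each side of the cell under test
    alpha     : threshold factor
    Returns a boolean array, True where x_CUT = 1 (declared target point).
    """
    half_ref = num_ref // 2
    n = len(values)
    detections = np.zeros(n, dtype=bool)
    for cut in range(half_ref + num_guard, n - half_ref - num_guard):
        left = values[cut - num_guard - half_ref : cut - num_guard]
        right = values[cut + num_guard + 1 : cut + num_guard + 1 + half_ref]
        noise_estimate = np.mean(np.concatenate([left, right]))
        detections[cut] = values[cut] > alpha * noise_estimate
    return detections
```

Running this along both the frequency and distance directions of the RD map and combining the hits yields the candidate locations of step S1.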
In step S2, selecting a training sample includes the following steps:
S2.1, training samples are selected using the cell-averaging constant false alarm rate detection; the training samples include target points and non-target points, and the training sample coordinate set is S1 = {(a_l, b_l); l ∈ {x_CUT = 1}}, where (a_l, b_l) are the center coordinates of the training samples;
S2.2, an 11 × 11 window is selected so that some background information remains around the target. With the coordinates in S1 as centers, windows of size 11 × 11 are cropped from the RD spectrum to form the training sample set T1; the samples in T1 are resized to the 32 × 32 network input size, and a portion of the preselected target points is manually selected and given labels, L = {(x_i, p_i); i ∈ {T1}}, while the remaining samples are the unlabeled data U = {u_j; j ∈ {T1}, j ≠ i}. A cropping-and-resizing sketch is given below.
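A minimal sketch of the cropping and resizing of S2.2 (Python with OpenCV); the function name, the clamping of windows at the RD-map border, and the use of bilinear interpolation here are assumptions, although the detailed description below does mention bilinear interpolation for this upscaling:

```python
import cv2
import numpy as np

def extract_samples(rd_map, centers, window=11, out_size=32):
    """Crop window x window patches around the CFAR hits and resize them
    to the network input size (bilinear upscaling, 11x11 -> 32x32)."""
    half = window // 2
    h, w = rd_map.shape
    samples = []
    for (a, b) in centers:  # (row, col) center coordinates from S1
        r0, r1 = max(a - half, 0), min(a + half + 1, h)
        c0, c1 = max(b - half, 0), min(b + half + 1, w)
        patch = rd_map[r0:r1, c0:c1]
        patch = cv2.resize(patch, (out_size, out_size),
                           interpolation=cv2.INTER_LINEAR)
        samples.append(patch)
    return np.stack(samples)
```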
In step S3, data enhancement proceeds as follows: a data enhancement strategy combining RandAugment and Cutout is adopted, and the transformations fall into three classes. The first class transforms pixels only, without changing the spatial structure, for example: AutoContrast, Brightness, Color, Contrast, Equalize, Identity, Sharpness, Posterize, Solarize. The second class changes the spatial structure of the image, for example: Rotate, Shear_x, Shear_y, Translate_x, Translate_y. The third class is Cutout, which lets the training network learn from both the whole image and the occluded image, capturing global and local information. A weak enhancement transform is one transformation selected randomly from the first class, while a strong enhancement transform is a randomly selected combination of multiple transformations. A sketch of this weak/strong pairing follows.
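A minimal sketch of the weak/strong enhancement pairing described above (Python with Pillow, assuming single-channel gray-scale patches); the specific magnitudes, the number of operations combined in the strong enhancement, and the Cutout patch size are assumptions:

```python
import random
from PIL import Image, ImageEnhance, ImageOps

# First class: pixel-level ops that keep the spatial structure.
PIXEL_OPS = [
    lambda im: im,                                    # Identity
    ImageOps.autocontrast,                            # AutoContrast
    ImageOps.equalize,                                # Equalize
    lambda im: ImageOps.posterize(im, bits=4),        # Posterize
    lambda im: ImageOps.solarize(im, threshold=128),  # Solarize
    lambda im: ImageEnhance.Brightness(im).enhance(1.3),
    lambda im: ImageEnhance.Contrast(im).enhance(1.3),
    lambda im: ImageEnhance.Sharpness(im).enhance(1.5),
]

# Second class: ops that change the spatial structure.
SPATIAL_OPS = [
    lambda im: im.rotate(random.uniform(-30, 30)),                         # Rotate
    lambda im: im.transform(im.size, Image.AFFINE, (1, 0.3, 0, 0, 1, 0)),  # Shear_x
    lambda im: im.transform(im.size, Image.AFFINE, (1, 0, 5, 0, 1, 0)),    # Translate_x
]

def cutout(im, size=8):
    """Third class: zero out a random square so the network also sees
    an occluded image (local information)."""
    im = im.copy()
    x = random.randint(0, im.width - size)
    y = random.randint(0, im.height - size)
    im.paste(0, (x, y, x + size, y + size))  # 0 = black for gray-scale mode 'L'
    return im

def weak_enhance(im):
    return random.choice(PIXEL_OPS)(im)       # one first-class transform

def strong_enhance(im, k=2):
    for op in random.sample(PIXEL_OPS + SPATIAL_OPS, k):
        im = op(im)                            # combination of transforms
    return cutout(im)
```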
Step S4 comprises the following: the network is divided into four sections, with the deepest network serving as the teacher network and the three shallow branches serving as three student networks; this network structure compresses the knowledge of the deep network into the shallow networks, while the shallow networks provide feedback to the deep network. A structural sketch follows.
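A minimal PyTorch sketch of such a four-section self-distillation network with three early-exit student classifiers; the channel widths, the simple convolutional blocks, and the pooling heads are assumptions, since the patent does not specify the backbone:

```python
import torch
import torch.nn as nn

class SelfDistillNet(nn.Module):
    """Backbone split into four sections; the exits after the first three
    sections are the student networks, the deepest exit is the teacher."""

    def __init__(self, num_classes=2, widths=(32, 64, 128, 256)):
        super().__init__()
        chans = [1] + list(widths)  # single-channel gray-scale RD patches
        self.sections = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(chans[i], chans[i + 1], 3, stride=2, padding=1),
                nn.BatchNorm2d(chans[i + 1]),
                nn.ReLU(inplace=True),
            )
            for i in range(4)
        ])
        # One classifier head per section: heads[0..2] are the students,
        # heads[3] is the teacher at the deepest exit.
        self.heads = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                          nn.Linear(w, num_classes))
            for w in widths
        ])

    def forward(self, x):
        logits = []
        for section, head in zip(self.sections, self.heads):
            x = section(x)
            logits.append(head(x))
        return logits  # [student1, student2, student3, teacher]
```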
The semi-supervised self-distillation learning algorithm (S³D) in step S5 comprises the following steps:
S5.1, the learning algorithm is calculated by the following formulas:

$$q_b = p\left(y \mid T_1(u_b);\, \theta\right)$$

$$\hat{q}_b = \arg\max\left(q_b\right)$$

$$L_u = \frac{1}{\mu B} \sum_{b=1}^{\mu B} \mathbb{1}\left(\max(q_b) \ge \tau\right) H\left(\hat{q}_b,\; p\left(y \mid T_2(u_b);\, \theta\right)\right)$$

where θ represents the network parameters, y the prediction of the network, μB the size of each batch of unlabeled data, q_b the prediction for each batch of unlabeled data u_b after weak enhancement T_1, and p(y | T_2(u_b); θ) the prediction for the same batch of unlabeled data u_b after strong enhancement T_2. The prediction label of a weakly enhanced sample for which the confidence of the teacher network prediction is greater than τ is assigned to the unlabeled data, and the consistency loss forces the teacher network and the student networks to make the same prediction for the strongly enhanced sample;
S5.2, the cross-entropy loss function has the form H(X) = -Σ_x y* log(y), where y represents the prediction of the teacher network or a student network for a weakly enhanced labeled sample and y* represents the true label of the labeled sample; the cross-entropy loss makes the student networks and the teacher network predict the same semantic categories. A sketch combining this loss with the consistency loss of S5.1 follows.
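A minimal PyTorch sketch of the combined S³D loss of S5.1 and S5.2; the FixMatch-style confidence thresholding and the equal weighting of the supervised and consistency terms across all heads are assumptions consistent with the description above:

```python
import torch
import torch.nn.functional as F

def s3d_loss(model, x_lab, y_lab, u_weak, u_strong, tau=0.95):
    """Semi-supervised self-distillation loss.

    x_lab, y_lab : weakly enhanced labeled batch and its true labels
    u_weak       : unlabeled batch after weak enhancement T1
    u_strong     : same unlabeled batch after strong enhancement T2
    tau          : confidence threshold for pseudo-labeling
    model returns [student1, student2, student3, teacher] logits.
    """
    # Cross-entropy term (S5.2): every head matches the true labels.
    sup = sum(F.cross_entropy(logits, y_lab) for logits in model(x_lab))

    # Teacher pseudo-labels from the weakly enhanced view (no gradient).
    with torch.no_grad():
        teacher_probs = F.softmax(model(u_weak)[-1], dim=1)
        conf, pseudo = teacher_probs.max(dim=1)
        mask = (conf >= tau).float()  # keep only confident predictions

    # Consistency term (S5.1): all heads match the pseudo-labels
    # on the strongly enhanced view.
    unsup = sum(
        (F.cross_entropy(logits, pseudo, reduction="none") * mask).mean()
        for logits in model(u_strong)
    )
    return sup + unsup
```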
Step S6 comprises the following: the network parameters are updated with the S³D algorithm under the judgment condition

$$\max(q_b) \ge \tau$$

that is, whether the prediction confidence of the deepest teacher network on the unlabeled data is greater than the threshold τ. Samples that satisfy the condition update the network through the consistency loss; all labeled samples update the network directly through the cross-entropy loss function without the judgment condition; the trained network parameters are then saved. A training-loop sketch follows.
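A minimal training-loop sketch for step S6 (PyTorch with stochastic gradient descent, which the experiments below also use); the batch pairing of the loaders, the learning rate, the momentum, the epoch count, and the checkpoint file name are assumptions:

```python
import torch

def train_s3d(model, labeled_loader, unlabeled_loader, epochs=100, lr=0.03):
    """Train the self-distillation network with the s3d_loss above.
    labeled_loader yields (x_lab, y_lab); unlabeled_loader yields
    (u_weak, u_strong) pairs of the same unlabeled batch."""
    opt = torch.optim.SGD(model.parameters(), lr=lr,
                          momentum=0.9, weight_decay=5e-4)
    for _ in range(epochs):
        for (x_lab, y_lab), (u_weak, u_strong) in zip(labeled_loader,
                                                      unlabeled_loader):
            loss = s3d_loss(model, x_lab, y_lab, u_weak, u_strong, tau=0.95)
            opt.zero_grad()
            loss.backward()
            opt.step()
    torch.save(model.state_dict(), "s3d_radar.pt")  # save trained parameters
```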
Step S7 includes the following sub-steps:
S7.1, the whole RD spectrogram is sent through the constant false alarm rate detection step to obtain the center coordinate set of preselected targets S2 = {(a_k, b_k); k ∈ {x_CUT = 1}}. The candidate targets are cropped to obtain an image set constructed in the same way as T1, which is sent into the trained neural network for classification, giving a prediction result Q for each image; the set of confidences of images predicted as target points is Q*, so that the information of each predicted target point is obtained as its center coordinates together with its confidence, (a_k, b_k, q*_k);
S7.2, the values in Q* are sorted from large to small, and the intersection-over-union (IOU) between the target frame corresponding to the maximum value in Q* and each of the other target frames is calculated by the formula:

$$I_x = C - |a_1 - a_k|, \qquad I_y = C - |b_1 - b_k|, \qquad IOU = \frac{I_x \cdot I_y}{2C^2 - I_x \cdot I_y}, \quad k \ne 1$$

where C = 11 denotes the edge length of the target frame; if I_x or I_y is equal to or less than zero, then IOU = 0. (a_1, b_1) is the center coordinate of the target frame corresponding to the maximum value in Q*, and (a_k, b_k) are the center coordinates of the target frames corresponding to the remaining values;
S7.3, whether the IOU is greater than a set threshold is judged; if so, the corresponding k-th targets are removed from Q*, and the target corresponding to the maximum value in Q* is moved into R. Q* and R are updated step by step in this way until only one element remains in Q*, and the target corresponding to that remaining element is also moved into R; R is then the coordinate set of the final prediction result. A sketch of this non-maximum suppression is given after this step.
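A minimal sketch of this non-maximum suppression with the fixed-size C × C target frames and the IOU formula of S7.2 (plain Python); the IOU threshold value of 0.1 is an assumption, as the patent only states that a threshold is set:

```python
def nms_fixed_size(points, C=11, iou_thresh=0.1):
    """points: list of (a, b, confidence) predicted target centers.
    Returns R, the list of surviving (a, b, confidence) tuples."""
    Q = sorted(points, key=lambda p: p[2], reverse=True)  # sort by confidence
    R = []
    while Q:
        best = Q.pop(0)            # highest-confidence remaining target
        R.append(best)
        kept = []
        for p in Q:
            ix = max(0.0, C - abs(best[0] - p[0]))
            iy = max(0.0, C - abs(best[1] - p[1]))
            inter = ix * iy
            iou = inter / (2 * C * C - inter) if inter > 0 else 0.0
            if iou <= iou_thresh:  # keep frames that do not overlap too much
                kept.append(p)
        Q = kept
    return R
```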
In step S8, all target frames in R are marked on the RD spectrum, completing the high-frequency ground wave radar ship target detection.
Compared with the prior art, the invention has the following beneficial effects: manual labeling is greatly reduced and the generalization performance of the network is improved; good results can be obtained with only a very small amount of labeled data; targets around clutter and targets partially inside clutter can be effectively detected; and compared with the prior art, the accuracy is greatly improved while the false alarm rate is kept at a low level.
Drawings
FIG. 1 is a flow chart of high frequency ground wave radar target detection;
FIG. 2 is a graph of detection rate change;
FIG. 3 is a graph of false alarm rate variation;
fig. 4 is a diagram of a self-distillation learning network.
Detailed Description
The invention is described in further detail below with reference to the following figures and detailed description:
The flow chart of the high-frequency ground wave radar ship target detection method based on the S³D learning algorithm is shown in FIG. 1; the method comprises the following implementation steps:
S1, cell-averaging constant false alarm rate detection, detecting and locating training samples;
S2, selecting a target window and generating training samples;
S3, data enhancement;
S4, constructing a self-distillation learning network;
S5, realizing the semi-supervised self-distillation learning algorithm with an unsupervised loss function and a cross-entropy loss function;
S6, training the neural network;
S7, classifying the candidate targets with the trained neural network and removing redundant target frames with a non-maximum suppression algorithm;
S8, completing high-frequency ground wave radar ship target detection.
In step S1, the cell-averaging constant false alarm rate detection proceeds as follows: a gray-scale map of each sample is obtained and the gray values are input along the frequency and distance directions, with the number of reference cells N = 40 and the guard cells set to 2; a target point is determined by the following formula:

$$x_{CUT} = \begin{cases} 1, & X_{CUT} > \alpha \cdot \frac{1}{N} \sum_{i=1}^{N} X_i \\ 0, & \text{otherwise} \end{cases}$$
The target detection rate P_d and the false alarm rate P_f are used as reference indices; the detection rate variation is shown in FIG. 2 and the false alarm rate variation in FIG. 3. The calculation formulas are as follows:

$$P_d = \frac{TP}{TP + FN}, \qquad P_f = \frac{FP}{TP + FP}$$
where TP is the number of correctly detected target points, FN the number of undetected target points (so TP + FN is the total number of target points), and FP the number of falsely detected target points; the threshold factor α is set greater than 0.8 in the training phase, and α = 0.8 is taken in the testing phase.
In step S2, selecting a training sample includes the following steps:
S2.1, training samples are selected using the cell-averaging constant false alarm rate detection; the training samples include target points and non-target points, and the training sample coordinate set is S1 = {(a_l, b_l); l ∈ {x_CUT = 1}}, where (a_l, b_l) are the center coordinates of the training samples;
S2.2, an 11 × 11 window is selected so that some background information remains around the target. With the coordinates in S1 as centers, windows of size 11 × 11 are cropped from the RD spectrum to form the training sample set T1; the samples in T1 are resized to the 32 × 32 network input size, and a portion of the preselected target points is manually selected and given labels, L = {(x_i, p_i); i ∈ {T1}}, while the remaining samples are the unlabeled data U = {u_j; j ∈ {T1}, j ≠ i}.
In step S3, data enhancement proceeds as follows: a data enhancement strategy combining RandAugment and Cutout is adopted, and the transformations fall into three classes. The first class transforms pixels only, without changing the spatial structure, for example: AutoContrast, Brightness, Color, Contrast, Equalize, Identity, Sharpness, Posterize, Solarize. The second class changes the spatial structure of the image, for example: Rotate, Shear_x, Shear_y, Translate_x, Translate_y. The third class is Cutout, which lets the training network learn from both the whole image and the occluded image, capturing global and local information. A weak enhancement transform is one transformation selected randomly from the first class, while a strong enhancement transform is a randomly selected combination of multiple transformations.
In step S4, the self-distillation learning network structure is shown in FIG. 4 and is built as follows: the network is divided into four sections, with the deepest network serving as the teacher network and the three shallow branches serving as three student networks; this network structure compresses the knowledge of the deep network into the shallow networks, while the shallow networks provide feedback to the deep network.
The semi-supervised self-distillation learning algorithm (S³D) in step S5 comprises the following steps:
S5.1, the semi-supervised self-distillation learning algorithm is calculated by the following formulas:

$$q_b = p\left(y \mid T_1(u_b);\, \theta\right)$$

$$\hat{q}_b = \arg\max\left(q_b\right)$$

$$L_u = \frac{1}{\mu B} \sum_{b=1}^{\mu B} \mathbb{1}\left(\max(q_b) \ge \tau\right) H\left(\hat{q}_b,\; p\left(y \mid T_2(u_b);\, \theta\right)\right)$$

where θ represents the network parameters, y the prediction of the network, μB the size of each batch of unlabeled data, q_b the prediction for each batch of unlabeled data u_b after weak enhancement T_1, and p(y | T_2(u_b); θ) the prediction for the same batch of unlabeled data u_b after strong enhancement T_2. The prediction label of a weakly enhanced sample for which the confidence of the teacher network prediction is greater than τ is assigned to the unlabeled data, and the consistency loss forces the teacher network and the student networks to make the same prediction for the strongly enhanced sample;
S5.2, the cross-entropy loss function has the form H(X) = -Σ_x y* log(y), where y represents the prediction of the teacher network or a student network for a weakly enhanced labeled sample and y* represents the true label of the labeled sample; the cross-entropy loss makes the student networks and the teacher network predict the same semantic categories.
Step S6 comprises the following: the network parameters are updated with the S³D algorithm under the judgment condition

$$\max(q_b) \ge \tau$$

that is, whether the prediction confidence of the deepest teacher network on the unlabeled data is greater than the threshold τ. Samples that satisfy the condition update the network through the consistency loss; all labeled samples update the network directly through the cross-entropy loss function without the judgment condition; the trained network parameters are then saved.
Step S7 includes the following sub-steps:
S7.1, the whole RD spectrogram is sent through the constant false alarm rate detection step to obtain the center coordinate set of preselected targets S2 = {(a_k, b_k); k ∈ {x_CUT = 1}}. The candidate targets are cropped to obtain an image set constructed in the same way as T1, which is sent into the trained neural network for classification, giving a prediction result Q for each image; the set of confidences of images predicted as target points is Q*, so that the information of each predicted target point is obtained as its center coordinates together with its confidence, (a_k, b_k, q*_k);
S7.2, the values in Q* are sorted from large to small, and the intersection-over-union (IOU) between the target frame corresponding to the maximum value in Q* and each of the other target frames is calculated by the formula:

$$I_x = C - |a_1 - a_k|, \qquad I_y = C - |b_1 - b_k|, \qquad IOU = \frac{I_x \cdot I_y}{2C^2 - I_x \cdot I_y}, \quad k \ne 1$$

where C = 11 denotes the edge length of the target frame; if I_x or I_y is equal to or less than zero, then IOU = 0. (a_1, b_1) is the center coordinate of the target frame corresponding to the maximum value in Q*, and (a_k, b_k) are the center coordinates of the target frames corresponding to the remaining values;
S7.3, whether the IOU is greater than a set threshold is judged; if so, the corresponding k-th targets are removed from Q*, and the target corresponding to the maximum value in Q* is moved into R. Q* and R are updated step by step in this way until only one element remains in Q*, and the target corresponding to that remaining element is also moved into R; R is then the coordinate set of the final prediction result.
In step S8, all target frames in R are marked on the RD spectrum, completing the high-frequency ground wave radar ship target detection.
To demonstrate effectiveness, the proposed high-frequency ground wave radar ship target detection method based on the S³D learning algorithm is compared on the public datasets SVHN and CIFAR-10 against traditional target detection algorithms and classic deep learning target detection algorithms. On a GTX 1080Ti hardware platform, PyTorch is used for the simulation experiments and the network is optimized with the stochastic gradient descent (SGD) algorithm; experiments on the public datasets and on measured ground wave radar data verify the effectiveness of the S³D method.
SVHN is a real-world image dataset used for developing machine learning, comprising 73,257 digit images for training, 26,032 digit images for testing, and 531,131 additional digit images. There are ten digit categories, labeled '1' to '10'. As with the other methods, the comparison is performed without the additional data. The experimental results are shown in Table 1, comparing Mean Teacher (which applies EMA to the model), the Π model (which applies EMA to the labels), and other models; the results show that the method presented herein performs better.
TABLE 1 test results on SVHN
Method  Label data (1k)
Pseudo-Label  7.62±0.29
Π Model  4.82±0.17
Mean Teacher  3.95±0.19
VAT+EntMin  3.86±0.11
Deep Co-training  3.29±0.03
ICT  3.53±0.07
MixMatch  2.89±0.06
S³D  2.77±0.03
CIFAR-10 is the most commonly used dataset for semi-supervised learning algorithms and consists of 60k color images of size 32 × 32. It contains ten categories with 6k images per category, split into a 50k training set and a 10k test set. The experimental results are shown in Table 2; under the same experimental environment and code base, the method proposed herein outperforms FixMatch and the related methods built on the Wide ResNet or Conv-Large backbones.
TABLE 2 test results on CIFAR-10
(table body not reproduced in the source)
For a more comprehensive comparison experiment, two datasets X1 and X2 are taken: X1 serves as the training set of the classifier and X2 as the training set of the deep learning target detection networks. Both are generated from RD spectra (size: 681 × 538 pixels) containing all kinds of clutter and background noise, with real ship target points as targets. The dataset information is shown in Table 3; the validation set consists of labeled images generated from 10 complete RD spectrograms of the same size as the training images.
TABLE 3 data set information
Data set  Number of RD spectra  Number of training samples  Number of labeled samples  Number of unlabeled samples
X1  17  1415  150  1265
X2  62  200  200  None
For the training set of the self-distillation learning network, the input data are X1; each sample has input dimensions 32 × 32, produced by enlarging an 11 × 11 crop of the RD spectrum with bilinear interpolation. The network outputs two classes, judging whether a sample is a target point or not. Because the number of negative samples is far higher than the number of positive samples, the ratio of positive to negative samples is set to 1:2 to keep the samples more balanced.
The Faster R-CNN and YOLOv2 target detection algorithms are used to detect the radar targets; the training set X2, like X1, is generated by cropping the original RD spectrum. Because radar target points are extremely small on the RD spectrogram, their main features may disappear through deep convolution if the spectrogram is fed into the neural network directly; therefore an image set is cropped from the RD spectrogram with a 70 × 70 sliding window and enlarged to images of size 224 × 224, so that the features of the enlarged target points are well preserved after deep convolution.
The data enhancement strategy of RandAugment combined with Cutout is adopted; the data enhancement methods included are shown in Table 4. RandAugment randomly draws a data enhancement mode for each batch of samples to transform the data.
TABLE 4 transformation List for RandAugment and Cutout
(table body not reproduced in the source)
The data enhancement methods in RandAugment fall roughly into three classes, and the degree to which each class influences radar target detection is discussed. The first class transforms pixels only, without changing the spatial structure, for example: AutoContrast, Brightness, Color, Contrast, Equalize, Identity, Sharpness, Posterize, Solarize. The second class changes the spatial structure of the image, for example: Rotate, Shear_x, Shear_y, Translate_x, Translate_y. The third class is Cutout, which lets the network learn from both the whole image and the occluded image, capturing global and local information. Since a strong enhancement changes the image appearance more than a weak enhancement, the weak enhancement is usually one transformation randomly selected from the first class, while the strong enhancement is a combination of multiple transformations. Ablation experiments are performed on the strong enhancement transformations, and the best network result of each experiment is selected as the final result, as shown in Table 5.
Table 5 data enhanced ablation learning
Non-spatial transformation  Cutout  Spatial transformation  Detection rate P_d (%)
(the check marks indicating each row's transform combination are images in the source and are not reproduced)
93.92
94.43
93.41
94.17
91.89
The effectiveness of the proposed radar target detection framework is evaluated by comparison against traditional methods and the deep learning target detection methods Faster R-CNN and YOLOv2. Numerous experimental tests show that the S³D method described herein requires only a small number of labeled samples, so the number of labeled samples in X1 is much smaller than the number of unlabeled samples. The target point detection rate P_d, false alarm rate P_f, target miss rate M_r and error rate E_r are used to evaluate target detection performance, where M_r and E_r are defined as: M_r = 1 - P_d and E_r = P_f + M_r.
To verify the detection performance of the proposed algorithm, 10 complete measured RD spectrum images with known ship positions are used for the target point detection experiment, with the evaluation indices defined above as criteria; the comparison results are shown in Table 6.
TABLE 6 comparison of six radar target detection algorithms
Method  P_d (detection rate)  P_f (false alarm rate)  M_r (miss rate)  E_r (error rate)
Improved constant false alarm rate  85%  13%  15%  28%
Adaptive wavelet transform  90%  8%  10%  18%
OES-ELM  92%  6%  8%  14%
Faster R-CNN  60.7%  3.33%  59.3%  62.63%
YOLOv2  41.7%  0%  58.3%  58.3%
Method proposed herein  95.69%  5%  4.31%  9.31%
The experimental results in Table 6 show that the detection rate of the proposed method reaches 95.69%, the false alarm rate is reduced to 5%, and the miss rate and error rate are 4.31% and 9.31%, respectively. Compared with the OES-ELM method, all indices are improved; compared with Faster R-CNN and YOLOv2, the method has a higher false alarm rate but a much higher detection rate.
It is to be understood that the above description is not intended to limit the present invention, and the present invention is not limited to the above examples, and those skilled in the art may make modifications, alterations, additions or substitutions within the spirit and scope of the present invention.

Claims (5)

1. A high-frequency ground wave radar ship target detection method based on the S³D learning algorithm, characterized by comprising the following implementation steps:
S1, cell-averaging constant false alarm rate detection, detecting and locating training samples;
S2, selecting a target window and generating training samples;
S3, data enhancement;
S4, constructing a self-distillation learning network;
S5, realizing the semi-supervised self-distillation learning algorithm with an unsupervised loss function and a cross-entropy loss function;
S6, training the neural network;
S7, classifying the candidate targets with the trained neural network and removing redundant target frames with a non-maximum suppression algorithm;
S8, completing high-frequency ground wave radar ship target detection.
2. The high-frequency ground wave radar ship target detection method based on the S³D learning algorithm according to claim 1, wherein in step S1 the cell-averaging constant false alarm rate detection steps are as follows: a gray-scale map of each sample is obtained and the gray values are input along the frequency and distance directions, with the number of reference cells N = 40 and the guard cells set to 2; a target point is determined by the following formula:

$$x_{CUT} = \begin{cases} 1, & X_{CUT} > \alpha \cdot \frac{1}{N} \sum_{i=1}^{N} X_i \\ 0, & \text{otherwise} \end{cases}$$
A threshold factor α is set for detecting and locating training samples, and the target detection rate P_d and the false alarm rate P_f are used as reference indices, calculated as follows:

$$P_d = \frac{TP}{TP + FN}, \qquad P_f = \frac{FP}{TP + FP}$$
where TP is the number of correctly detected target points, FN the number of undetected target points (so TP + FN is the total number of target points), and FP the number of falsely detected target points; the threshold factor α is set greater than 0.8 in the training phase, and α = 0.8 is taken in the testing phase; in step S2, selecting a target window and generating training samples comprises the following steps:
S2.1, training samples are selected using the cell-averaging constant false alarm rate detection; the training samples include target points and non-target points, and the training sample coordinate set is S1 = {(a_l, b_l); l ∈ {x_CUT = 1}}, where (a_l, b_l) are the center coordinates of the training samples;
S2.2, an 11 × 11 window is selected; with the coordinates in S1 as centers, windows of size 11 × 11 are cropped from the RD spectrum to form the training sample set T1; the samples in T1 are resized to the 32 × 32 network input size, and a portion of the preselected target points is manually selected and given labels, L = {(x_i, p_i); i ∈ {T1}}, while the remaining samples are the unlabeled data U = {u_j; j ∈ {T1}, j ≠ i}.
3. The high-frequency ground wave radar ship target detection method based on the S³D learning algorithm according to claim 1, wherein in step S3 data enhancement adopts a data enhancement strategy combining RandAugment and Cutout, with the transformations divided into the following three classes: transformations on pixels that do not change the spatial structure; transformations that change the spatial structure of the image; and Cutout, which lets the training network learn from both the whole image and the occluded image, capturing global and local information. The weak enhancement randomly selects one transformation from the first class, and the strong enhancement randomly selects a combination of multiple transformations. Step S4 comprises the following: the network is divided into four sections, with the deepest network serving as the teacher network and the three shallow branches serving as three student networks; the knowledge of the deep network is compressed into the shallow networks, and the shallow networks provide feedback to the deep network. The semi-supervised self-distillation learning algorithm in step S5 comprises the following steps:
S5.1, the semi-supervised self-distillation learning algorithm calculation formulas are as follows:

$$q_b = p\left(y \mid T_1(u_b);\, \theta\right)$$

$$L_u = \frac{1}{\mu B} \sum_{b=1}^{\mu B} \mathbb{1}\left(\max(q_b) \ge \tau\right) H\left(\arg\max(q_b),\; p\left(y \mid T_2(u_b);\, \theta\right)\right)$$

where θ represents the network parameters, y the prediction of the network, μB the size of each batch of unlabeled data, q_b the prediction for each batch of unlabeled data u_b after weak enhancement T_1, and p(y | T_2(u_b); θ) the prediction for the same batch of unlabeled data u_b after strong enhancement T_2; the prediction label of a weakly enhanced sample for which the confidence of the teacher network prediction is greater than τ is assigned to the unlabeled data;
S5.2, the cross-entropy loss function has the form H(X) = -Σ_x y* log(y), where y represents the prediction of the teacher network or a student network for a weakly enhanced labeled sample and y* represents the true label of the labeled sample.
4. The high-frequency ground wave radar ship target detection method based on the S³D learning algorithm according to claim 1, wherein step S6 comprises the following: the network parameters are updated using the semi-supervised self-distillation learning algorithm under the judgment condition

$$\max(q_b) \ge \tau$$

that is, whether the prediction confidence of the deepest teacher network on the unlabeled data is greater than the threshold τ; samples satisfying the condition update the network through the consistency loss, while the labeled samples update the network directly through the cross-entropy loss function without the judgment condition, and the trained network parameters are saved; step S7 comprises the following steps:
S7.1, the whole RD spectrogram is sent through the constant false alarm rate detection step to obtain the center coordinate set of preselected targets S2 = {(a_k, b_k); k ∈ {x_CUT = 1}}. The candidate targets are cropped to obtain an image set constructed in the same way as T1, which is sent into the trained neural network for classification, giving a prediction result Q for each image; the set of confidences of images predicted as target points is Q*, so that the information of each predicted target point is obtained as its center coordinates together with its confidence, (a_k, b_k, q*_k);
S7.2, the values in Q* are sorted from large to small, and the intersection-over-union (IOU) between the target frame corresponding to the maximum value in Q* and each of the other target frames is calculated by the formula:

$$I_x = C - |a_1 - a_k|, \qquad I_y = C - |b_1 - b_k|, \qquad IOU = \frac{I_x \cdot I_y}{2C^2 - I_x \cdot I_y}, \quad k \ne 1$$

where C = 11 denotes the edge length of the target frame; if I_x or I_y is equal to or less than zero, then IOU = 0. (a_1, b_1) is the center coordinate of the target frame corresponding to the maximum value in Q*, and (a_k, b_k) are the center coordinates of the target frames corresponding to the remaining values;
S7.3, whether the IOU is greater than a set threshold is judged; if so, the corresponding k-th targets are removed from Q*, and the target corresponding to the maximum value in Q* is moved into R. Q* and R are updated step by step in this way until only one element remains in Q*, and the target corresponding to that remaining element is also moved into R; R is then the coordinate set of the final prediction result.
5. The high-frequency ground wave radar ship target detection method based on the S³D learning algorithm according to claim 1, wherein in step S8 all target frames in R are marked on the RD spectrum, completing the high-frequency ground wave radar ship target detection.
CN202010853777.8A 2020-08-24 2020-08-24 High-frequency ground wave radar ship target detection method based on the S³D learning algorithm Pending CN112163450A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010853777.8A CN112163450A (en) 2020-08-24 2020-08-24 High-frequency ground wave radar ship target detection method based on the S³D learning algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010853777.8A CN112163450A (en) 2020-08-24 2020-08-24 High-frequency ground wave radar ship target detection method based on the S³D learning algorithm

Publications (1)

Publication Number Publication Date
CN112163450A true CN112163450A (en) 2021-01-01

Family

ID=73859730

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010853777.8A Pending CN112163450A (en) 2020-08-24 2020-08-24 Based on S3High-frequency ground wave radar ship target detection method based on D learning algorithm

Country Status (1)

Country Link
CN (1) CN112163450A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113255603A (en) * 2021-06-29 2021-08-13 中国人民解放军国防科技大学 Enhancement matrix constant false alarm rate detection method based on Riemann manifold supervision dimension reduction
CN113486899A (en) * 2021-05-26 2021-10-08 南开大学 Saliency target detection method based on complementary branch network
CN113808219A (en) * 2021-09-17 2021-12-17 西安电子科技大学 Radar-assisted camera calibration method based on deep learning
CN114399683A (en) * 2022-01-18 2022-04-26 南京甄视智能科技有限公司 End-to-end semi-supervised target detection method based on improved yolov5
WO2023284698A1 (en) * 2021-07-14 2023-01-19 浙江大学 Multi-target constant false alarm rate detection method based on deep neural network
CN117058556A (en) * 2023-07-04 2023-11-14 南京航空航天大学 Edge-guided SAR image ship detection method based on self-supervision distillation

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101271160A (en) * 2007-03-21 2008-09-24 中国科学院电子学研究所 Method and device for real-time detection SAR movement objective by choosing small unit average constant false alarm rate
CN101329400A (en) * 2008-07-30 2008-12-24 电子科技大学 Constant false alarm detection method of radar target based on goodness-of-fit test
CN105894847A (en) * 2016-06-27 2016-08-24 华南理工大学 Unsupervised learning real-time public transport dynamic scheduling system and unsupervised learning real-time public transport dynamic scheduling method in cloud platform environment
CN109117802A (en) * 2018-08-21 2019-01-01 东北大学 Ship Detection towards large scene high score remote sensing image
CN109583293A (en) * 2018-10-12 2019-04-05 复旦大学 Aircraft Targets detection and discrimination method in satellite-borne SAR image
CN109711544A (en) * 2018-12-04 2019-05-03 北京市商汤科技开发有限公司 Method, apparatus, electronic equipment and the computer storage medium of model compression
CN109886218A (en) * 2019-02-26 2019-06-14 西安电子科技大学 SAR image Ship Target Detection method based on super-pixel statistics diversity
CN110870019A (en) * 2017-10-16 2020-03-06 因美纳有限公司 Semi-supervised learning for training deep convolutional neural network sets
CN110889843A (en) * 2019-11-29 2020-03-17 西安电子科技大学 SAR image ship target detection method based on maximum stable extremal region
CN111160474A (en) * 2019-12-30 2020-05-15 合肥工业大学 Image identification method based on deep course learning
CN111160481A (en) * 2019-12-31 2020-05-15 苏州安智汽车零部件有限公司 Advanced learning-based adas target detection method and system

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101271160A (en) * 2007-03-21 2008-09-24 中国科学院电子学研究所 Method and device for real-time detection SAR movement objective by choosing small unit average constant false alarm rate
CN101329400A (en) * 2008-07-30 2008-12-24 电子科技大学 Constant false alarm detection method of radar target based on goodness-of-fit test
CN105894847A (en) * 2016-06-27 2016-08-24 华南理工大学 Unsupervised learning real-time public transport dynamic scheduling system and unsupervised learning real-time public transport dynamic scheduling method in cloud platform environment
CN110870019A (en) * 2017-10-16 2020-03-06 因美纳有限公司 Semi-supervised learning for training deep convolutional neural network sets
CN109117802A (en) * 2018-08-21 2019-01-01 东北大学 Ship Detection towards large scene high score remote sensing image
CN109583293A (en) * 2018-10-12 2019-04-05 复旦大学 Aircraft Targets detection and discrimination method in satellite-borne SAR image
CN109711544A (en) * 2018-12-04 2019-05-03 北京市商汤科技开发有限公司 Method, apparatus, electronic equipment and the computer storage medium of model compression
CN109886218A (en) * 2019-02-26 2019-06-14 西安电子科技大学 SAR image Ship Target Detection method based on super-pixel statistics diversity
CN110889843A (en) * 2019-11-29 2020-03-17 西安电子科技大学 SAR image ship target detection method based on maximum stable extremal region
CN111160474A (en) * 2019-12-30 2020-05-15 合肥工业大学 Image identification method based on deep course learning
CN111160481A (en) * 2019-12-31 2020-05-15 苏州安智汽车零部件有限公司 Advanced learning-based adas target detection method and system

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113486899A (en) * 2021-05-26 2021-10-08 南开大学 Saliency target detection method based on complementary branch network
CN113255603A (en) * 2021-06-29 2021-08-13 中国人民解放军国防科技大学 Enhancement matrix constant false alarm rate detection method based on Riemann manifold supervision dimension reduction
CN113255603B (en) * 2021-06-29 2021-09-24 中国人民解放军国防科技大学 Enhancement matrix constant false alarm rate detection method based on Riemann manifold supervision dimension reduction
WO2023284698A1 (en) * 2021-07-14 2023-01-19 浙江大学 Multi-target constant false alarm rate detection method based on deep neural network
CN113808219A (en) * 2021-09-17 2021-12-17 西安电子科技大学 Radar-assisted camera calibration method based on deep learning
CN113808219B (en) * 2021-09-17 2024-05-14 西安电子科技大学 Deep learning-based radar auxiliary camera calibration method
CN114399683A (en) * 2022-01-18 2022-04-26 南京甄视智能科技有限公司 End-to-end semi-supervised target detection method based on improved yolov5
CN117058556A (en) * 2023-07-04 2023-11-14 南京航空航天大学 Edge-guided SAR image ship detection method based on self-supervision distillation
CN117058556B (en) * 2023-07-04 2024-03-22 南京航空航天大学 Edge-guided SAR image ship detection method based on self-supervision distillation

Similar Documents

Publication Publication Date Title
CN112163450A (en) High-frequency ground wave radar ship target detection method based on the S³D learning algorithm
CN110334765B (en) Remote sensing image classification method based on attention mechanism multi-scale deep learning
CN111369572B (en) Weak supervision semantic segmentation method and device based on image restoration technology
CN111161311A (en) Visual multi-target tracking method and device based on deep learning
CN107563433B (en) Infrared small target detection method based on convolutional neural network
CN113486981B (en) RGB image classification method based on multi-scale feature attention fusion network
CN107832797B (en) Multispectral image classification method based on depth fusion residual error network
CN104268593A (en) Multiple-sparse-representation face recognition method for solving small sample size problem
CN108960330A (en) Remote sensing images semanteme generation method based on fast area convolutional neural networks
Zhang et al. A GANs-based deep learning framework for automatic subsurface object recognition from ground penetrating radar data
CN107784288A (en) A kind of iteration positioning formula method for detecting human face based on deep neural network
CN111046787A (en) Pedestrian detection method based on improved YOLO v3 model
CN109325490A (en) Terahertz image target identification method based on deep learning and RPCA
CN113761259A (en) Image processing method and device and computer equipment
CN113221987A (en) Small sample target detection method based on cross attention mechanism
CN105989336A (en) Scene identification method based on deconvolution deep network learning with weight
CN104881684A (en) Stereo image quality objective evaluate method
Naqvi et al. Feature quality-based dynamic feature selection for improving salient object detection
CN115966010A (en) Expression recognition method based on attention and multi-scale feature fusion
CN111222545B (en) Image classification method based on linear programming incremental learning
CN114972759A (en) Remote sensing image semantic segmentation method based on hierarchical contour cost function
CN112597798A (en) Method for identifying authenticity of commodity by using neural network
CN112329771A (en) Building material sample identification method based on deep learning
CN114387270A (en) Image processing method, image processing device, computer equipment and storage medium
CN112766381B (en) Attribute-guided SAR image generation method under limited sample

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210101