CN114818963A - Small sample detection algorithm based on cross-image feature fusion - Google Patents


Info

Publication number
CN114818963A
CN114818963A
Authority
CN
China
Prior art keywords
feature
support set
image
cross
module
Prior art date
Legal status
Granted
Application number
CN202210506243.7A
Other languages
Chinese (zh)
Other versions
CN114818963B (en)
Inventor
贾海涛 (Jia Haitao)
田浩琨 (Tian Haokun)
胡佳丽 (Hu Jiali)
王子彦 (Wang Ziyan)
吴俊男 (Wu Junnan)
任利 (Ren Li)
刘子骥 (Liu Ziji)
许文波 (Xu Wenbo)
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202210506243.7A
Publication of CN114818963A
Application granted
Publication of CN114818963B
Status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/211 - Selection of the most significant subset of features
    • G06F 18/2113 - Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G06F 18/24 - Classification techniques
    • G06F 18/25 - Fusion techniques
    • G06F 18/253 - Fusion techniques of extracted features
    • G06F 18/285 - Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a small sample detection algorithm based on cross-image feature fusion. The main structure of the invention is a small sample learning algorithm built on the two-stage target detection algorithm Faster R-CNN. First, a query image and support set images are input and their feature maps are extracted; the resulting feature maps are sent to a cross-image feature fusion module, which uses the feature information in the support set to strengthen the expression of target feature information in the query set. The feature maps are then sent to an improved RPN module to generate ROI feature vectors; an improved feature aggregation module screens the candidate boxes and completes the spatial alignment of the support set vectors and the ROI feature vectors. Finally, the processed ROI and support set vectors are sent to a classifier, and the target category and an accurately located bounding box are output. Multiple groups of ablation and comparison experiments on the PASCAL VOC data set achieve good detection precision and verify the effectiveness of the detection algorithm.

Description

Small sample detection algorithm based on cross-image feature fusion
Technical Field
The invention relates to the field of small sample detection in deep learning, in particular to a target detection technology under the condition of small samples.
Background
In recent years, object detection technology based on deep learning has attracted great attention and has been applied in industry and daily life, such as intelligent monitoring, automatic driving and face authentication. Compared with traditional image algorithms, current deep-learning-based target detection algorithms require a large amount of data for model training. Annotating a single instance generally takes about 10 seconds, and building a data set requires on the order of ten thousand annotated images, so the annotation process is very time-consuming. Moreover, since real data follows a long-tail distribution, some sample data exists only in low proportion, for example certain medical images, or some animals and plants in nature. With the development of small sample image classification algorithms, the emphasis of small sample learning research in the image field has gradually shifted to small sample target detection.
Inspired by research on small sample classification algorithms, most existing small sample target detection algorithms adopt a two-stage mode: first determine the approximate position of the target, then classify the target and accurately estimate its position through a classifier. On this two-stage basis, changes must be made for the small sample application scenario; the current mainstream changes can be summarized as training and testing with meta-learning methods, aggregating support set and query set features, increasing the relevance of feature vectors, and modifying the loss function to improve vector discrimination. Accordingly, the invention proposes a small sample detection algorithm based on cross-image feature fusion, in which a cross-image feature fusion module is added and the RPN module and the feature aggregation module are improved.
Disclosure of Invention
In order to solve the target detection problem under small sample conditions, the invention designs a cross-image feature fusion network for small sample target detection. Aiming at the problem that the RPN easily ignores feature information of new classes, a cross-image feature fusion mechanism is designed: the support set vectors are used to weight the query set vectors and highlight the feature information of new targets in the query image, which markedly reduces the RPN's omission of target regions. Aiming at the problem that misclassification by the binary classifier in the RPN may cause the loss of proposed regions with high IOU, the invention adopts a strategy of connecting several classifiers in parallel to reduce the probability of such losses. Aiming at the shortcomings of feature aggregation, the invention designs a ranking-based feature aggregation mode that filters out ROI feature vectors with low similarity to the support set, and sends the query set features and support set features to the subsequent classification module after spatial alignment.
The technical scheme adopted by the invention is as follows:
step 1: extracting image features of support set and query set via shared feature extraction network and recording as f s And f q Sending the group of feature maps into a cross-image feature fusion module;
step 2: calculating f s And f q Given degree of similarity of s The feature map in (1) is weighted to generate a weighted support set feature map f s ′;
And step 3: calculating f s ' and f q Attention feature map f between a And applying the attention feature map f a And query set feature graph f q Matrix multiplication is carried out to obtain a query set characteristic diagram f with support set attention information q ′;
And 4, step 4: will f is q ' Generation into improved RPN networkCandidate box, generating ROI feature vector after ROI Pooling, and matching f s The feature vectors are sent to an improved feature aggregation module to generate final feature vectors, the final feature vectors are sent to a classifier to be classified, the feature vectors are sent to a frame regression module to be accurately positioned in a prediction frame, and a final target detection result comprising a target category and a target frame coordinate is output.
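For concreteness, the following is a minimal PyTorch-style sketch of steps 1 to 4, assuming the individual modules are defined elsewhere; all names (backbone, cross_image_fusion, improved_rpn, roi_pool, aggregate, classifier, box_regressor) are illustrative placeholders, not the patent's reference implementation.

```python
# Illustrative sketch of the four-step forward pass (steps 1-4 above).
# Every module name here is a hypothetical placeholder, not from the patent.
import torch

def detect(query_img, support_imgs, net):
    # Step 1: shared feature extraction for the query and support images.
    f_q = net.backbone(query_img)                              # (1, C, H, W)
    f_s = torch.cat([net.backbone(s) for s in support_imgs])   # (K, C, H, W)
    # Steps 2-3: support set weighting + cross-image spatial attention.
    f_q_fused = net.cross_image_fusion(f_s, f_q)
    # Step 4: improved RPN -> ROI Pooling -> feature aggregation -> heads.
    proposals = net.improved_rpn(f_q_fused)
    rois = net.roi_pool(f_q_fused, proposals)
    feats = net.aggregate(rois, f_s)           # filtering + spatial alignment
    return net.classifier(feats), net.box_regressor(feats)
```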
Compared with the prior art, the invention has the beneficial effects that:
(1) compared with the traditional small sample target detection algorithm, the method has higher detection precision;
(2) the method has better detection effect on small sample detection of various types.
Drawings
FIG. 1: structure of the small sample detection algorithm based on cross-image feature fusion.
FIG. 2: schematic diagram of the support set weighting module.
FIG. 3: schematic diagram of the cross-image spatial attention feature fusion module.
FIG. 4: schematic diagram of the improved RPN network structure.
FIG. 5: schematic diagram of the improved feature aggregation module.
FIG. 6: comparison of mAP values of CIF-FSOD on the PASCAL VOC data set.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the following embodiments and accompanying drawings.
Based on a two-stage target detection network, the invention designs a small sample target detection algorithm based on cross-image feature fusion, called CIF-FSOD; its network structure is shown in FIG. 1. The whole network structure mainly comprises three parts: the cross-image feature fusion module, the improved RPN module and the improved feature aggregation module.
First, the input images pass through a shared feature extraction network to extract the image features of the support set and the query set, and this group of feature maps is sent to the cross-image feature fusion module. The cross-image feature fusion module designed in the invention mainly comprises a support set weighting module and a cross-image spatial attention fusion module; the features of the support set are used to enhance the expression of features in the query set, and the feature map enhanced by this module is input to the RPN network in the next part to generate candidate boxes.
The support set weighting module weights the feature maps in the support set, as shown in FIG. 2. Because the shape, angle and brightness of the same target may differ across support set feature maps, the contributions of their feature information to the query set features also differ. The weighting information of the support set can therefore be obtained by calculating the similarity between each support set feature map and the query set feature map: a support set feature map with higher similarity is assigned a higher weight, and one with lower similarity a lower weight.
The similarity between two feature maps is calculated with a relation network, as in formula (1):

sim(f_s, f_q) = R_φ[E(f_s), E(f_q)]    (1)

where f_s denotes a support set feature map, f_q denotes the query set feature map, E(·) denotes an embedding function, and R_φ(·) denotes the relation network. The output similarity result sim(f_s, f_q) is then sent to a softmax function for normalization to generate the weight information, as in formula (2):

W_i = exp(sim_i) / Σ_j exp(sim_j)    (2)

where W_i denotes the weight of the i-th feature map in the support set and sim_i denotes the similarity between the i-th support set feature map and the query set feature map. Finally, the original support set feature maps are weighted with the generated weights, as in formula (3):

f_s′(i) = f_s(i) · W_i    (3)

where f_s(i) denotes the i-th support set feature map and f_s′(i) its weighted version.
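A hedged PyTorch sketch of this module follows; the 1×1-convolution embedding and the pooled linear relation head are stand-ins for the unspecified E(·) and R_φ(·), not the patent's exact architecture.

```python
# Sketch of the support set weighting module (formulas (1)-(3)).
# E(.) and R_phi(.) below are illustrative stand-ins, not the patent's design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SupportWeighting(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.embed = nn.Conv2d(channels, channels, kernel_size=1)  # E(.)
        self.relation = nn.Sequential(                             # R_phi(.)
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(2 * channels, 1),
        )

    def forward(self, f_s, f_q):
        # f_s: (K, C, H, W) support feature maps; f_q: (1, C, H, W) query map.
        e_q = self.embed(f_q).expand(f_s.size(0), -1, -1, -1)
        e_s = self.embed(f_s)
        sims = self.relation(torch.cat([e_s, e_q], dim=1))  # (K, 1), formula (1)
        w = F.softmax(sims.squeeze(1), dim=0)               # formula (2)
        return f_s * w.view(-1, 1, 1, 1)                    # formula (3)
```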
As shown in FIG. 3, the cross-image spatial attention feature fusion module generates a cross-image spatial attention feature map from the support set feature map and the query set feature map; the specific process is given in formulas (4) and (5):

A(f_s, f_q) = σ((f_s)^T · f_q)    (4)

s_ji = exp((f_s)_i^T · (f_q)_j) / Σ_{i=1}^{N} exp((f_s)_i^T · (f_q)_j)    (5)

In formula (4), f_q ∈ R^{C×H×W} denotes the query set feature map and f_s ∈ R^{C×H×W} the weighted support set feature map, each flattened over its N = H×W spatial positions; A(f_s, f_q) is the generated cross-image spatial attention feature map, and σ(·) denotes the softmax function, whose elementwise computation is given in formula (5). There, (f_s)^T denotes the transpose of f_s, and s_ji denotes an entry of the generated attention feature map: the influence of the i-th position on the j-th position, i.e. the influence of the i-th spatial position in the support set feature map on the j-th spatial position of the query set feature map. Subsequently, the generated spatial attention feature map is matrix-multiplied with the query feature map to generate a query set feature map containing spatial feature information, as in formula (6):

p_j = Σ_{i=1}^{N} s_ji · (f_q)_i    (6)

where p_j denotes the feature information at the j-th position of the output cross-image feature map. Finally, the query set feature map f_p containing spatial attention information is merged with the original query set features f_q, and the cross-image feature fusion map f_output is produced. f_output carries two parts of information: the original query set feature map f_q, and the query set feature map f_p weighted by the support set spatial attention information.
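A minimal sketch of formulas (4) to (6), assuming PyTorch; channel-wise concatenation is one plausible reading of the final merging of f_q and f_p, not a detail fixed by the text.

```python
# Sketch of cross-image spatial attention fusion (formulas (4)-(6)).
import torch
import torch.nn.functional as F

def cross_image_attention(f_s, f_q):
    # f_s, f_q: (C, H, W) weighted support map and query map.
    C, H, W = f_q.shape
    s = f_s.reshape(C, H * W)                # (C, N)
    q = f_q.reshape(C, H * W)                # (C, N)
    attn = F.softmax(s.t() @ q, dim=0)       # (N, N): s_ji, formulas (4)-(5)
    f_p = (q @ attn).reshape(C, H, W)        # formula (6): attended query map
    return torch.cat([f_q, f_p], dim=0)      # f_output = [f_q ; f_p]
```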
Second, the output cross-image fusion feature map is sent to the improved RPN to predict candidate regions. The invention improves the classifier in the RPN; the improved RPN network reduces the probability of missing high-IOU candidate boxes, and its structure is shown in FIG. 4. Compared with the traditional RPN, the improved RPN adds several classifiers connected in parallel, divided into two parts: one part is frozen during fine-tuning and is mainly used to extract candidate boxes of the base classes, while the other part learns new parameters during fine-tuning, avoids missed detection of new classes and reduces the probability of missing high-IOU candidate boxes. The classifier used to detect base classes is called the base class classifier; the classifier more sensitive to new classes is composed of three parallel sub-classifiers and is called the new class classifier. The classification scores that the three parallel classifiers generate for the same anchor box should differ: some classifiers may miss candidate boxes with high IOU due to wrong classification, but the remaining classifiers can correct this error in time. The final classification result is determined jointly by the outputs of the base class classifier and the new class classifier. First consider the operating flow of the new class classifier: for each anchor box x_i, three scores are output simultaneously to judge whether the current anchor box belongs to the foreground or the background, as in formula (7):

S_i^j = C_j(x_i), j = 1, 2, 3    (7)

where C_j(·) denotes a binary classifier and S_i^j denotes the score output by the j-th classifier on the i-th anchor box. The scores output by the classifiers are sent into a sigmoid function, which maps them into the interval [0, 1], and the classifier whose score is kept is selected as in formula (8):

j* = argmin_j α(S_i^j)    (8)

where j* denotes the selected classifier and α(·) denotes the sigmoid function. The invention adopts the calculation of formula (8): when one classifier deviates too much, the result of formula (8) is the output of the classifier with the lowest score, which better matches practical situations. The score output by the new class classifier is not the final result; it is combined with the classifier mainly used for extracting base class candidate boxes by taking the maximum, as in formula (9):

score = max(S_novel, S_base)    (9)

where S_novel denotes the output score of the classifier mainly used for extracting new class candidate boxes, S_base denotes the output score of the classifier mainly used for extracting base class candidate boxes, and the maximum of the two is the final output score.
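The parallel-classifier scoring of formulas (7) to (9) might be sketched as follows in PyTorch; the 1×1-convolution objectness heads are illustrative stand-ins for the patent's classifiers.

```python
# Sketch of the parallel objectness classifiers in the improved RPN
# (formulas (7)-(9)): the novel branch keeps the lowest sigmoid score of its
# three sub-classifiers (formula (8)); formula (9) takes the branch maximum.
import torch
import torch.nn as nn

class ParallelObjectness(nn.Module):
    def __init__(self, channels, num_novel=3):
        super().__init__()
        self.base = nn.Conv2d(channels, 1, 1)   # frozen during fine-tuning
        self.novel = nn.ModuleList(
            nn.Conv2d(channels, 1, 1) for _ in range(num_novel)
        )

    def forward(self, feat):
        s_base = torch.sigmoid(self.base(feat))                  # S_base
        novel_scores = torch.stack(
            [torch.sigmoid(c(feat)) for c in self.novel]         # formula (7)
        )
        s_novel = novel_scores.min(dim=0).values                 # formula (8)
        return torch.max(s_novel, s_base)                        # formula (9)
```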
During inference, the improved RPN shares the feature extraction module like a traditional RPN. Part of the feature information obtained from the feature extraction module is sent into the classifier and the other part into the box regression module: the classifier outputs the score of the current anchor box being foreground, and the box regression module outputs the relative positioning information of the anchor box, i.e. the offset relative to the anchor box. All anchor boxes are ranked by foreground score from high to low, the top W anchor boxes are selected as candidate boxes, and then K boxes are selected as the final candidate boxes through non-maximum suppression (NMS).
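This selection step can be sketched with torchvision's NMS; the values of W, K and the IoU threshold below are hypothetical, since the text does not specify them.

```python
# Minimal sketch of candidate-box selection: rank anchors by foreground score,
# keep the top W, then apply NMS to obtain K final proposals. W, K and the
# IoU threshold are assumed values; boxes are (x1, y1, x2, y2).
import torch
from torchvision.ops import nms

def select_proposals(boxes, fg_scores, W=2000, K=300, iou_thresh=0.7):
    top = fg_scores.topk(min(W, fg_scores.numel())).indices  # top-W anchors
    keep = nms(boxes[top], fg_scores[top], iou_thresh)       # suppress overlaps
    return boxes[top][keep[:K]]                              # K final candidates
```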
Finally, the generated candidate boxes pass through ROI Pooling to generate ROI feature vectors, which are sent together with the support set feature maps to the improved feature aggregation module to generate the final feature vectors; these are sent to the classifier for classification and to the box regression module for accurate positioning of the predicted box, and the final target detection result, comprising the target category and target box coordinates, is output. The improved feature aggregation module designed by the invention is shown in FIG. 5: it ranks the ROI features by their similarity to the support set image features, filters out low-quality ROI feature vectors with a preset threshold, and spatially aligns the retained ROI features with the support set features to reduce feature misalignment between the support set and query set images.
To filter low-quality ROI candidate vectors, the invention uses a relation network to calculate the similarity between the candidate vectors and the support set vectors, as in formula (10):

sim = R_φ{Concat[E(x_roi), E(x_s)]}    (10)

where x_roi denotes an ROI feature vector output by the RPN, x_s denotes a support set vector, E(·) denotes the embedding module in the relation network, responsible for embedding the feature vectors into a specific space, Concat[·] denotes merging two feature vectors together, and R_φ{·} denotes the relation module that calculates the similarity between two feature vectors. The obtained similarities are ranked and some low-quality candidate regions are filtered out. The remaining feature vectors need to be aligned in space; the invention calculates a similarity matrix between the ROI feature vectors and the support set feature vectors, as in formula (11):

M(i, j) = x′_roi ⊙ x′_s    (11)

where x_roi ∈ R^{H×W×C} denotes an ROI feature vector output by the RPN and x_s ∈ R^{H×W×C} denotes a class prototype generated from the support set vectors. A reshape operation on x_roi produces x′_roi ∈ R^{N×C} (where N = H×W), and reshape and transpose operations on x_s produce x′_s ∈ R^{C×N}; M(i, j) denotes the entry of the generated similarity matrix at the i-th row of x′_roi and the j-th column of x′_s. As a special matrix multiplication, the ordinary summation is replaced by the cosine similarity between the two vectors, as in formula (12):

M(i, j) = (x′_roi(i) · x′_s(j)) / (‖x′_roi(i)‖ · ‖x′_s(j)‖)    (12)

After the similarity matrix is obtained, it is sent to a softmax function to calculate the normalized result, as in formula (13):

S(i, j) = exp(M(i, j)) / Σ_j exp(M(i, j))    (13)

where S(i, j) denotes the similarity weight computed by softmax. Spatial alignment is completed by assigning these weights to the support set features, as in formula (14):

f′_s(i) = Σ_j S(i, j) · x′_s(j)    (14)

As can be seen from formula (14), each vector f′_s(i) in the final support set is determined by its similarity S(i, j) to the ROI vectors of the query set: vectors with higher similarity occupy a larger proportion of f′_s(i), and vice versa. The higher the similarity, the closer the spatial features, and this design achieves the purpose of spatial alignment.
To demonstrate the effectiveness of the invention, the improved algorithm was compared with several advanced algorithms on the PASCAL VOC data set. The model is trained for 20 epochs with an initial learning rate of 10^-3; the learning rate decays exponentially, halved every 2 epochs. Adam is used as the optimizer to speed up convergence, with parameters β1 = 0.5, β2 = 0.9999, ε = 0.5. To prevent model overfitting, a random deactivation (dropout) rate of 0.1 and a weight decay of 10^-6 are set. The experimental results are shown in FIG. 6: the detection precision of CIF-FSOD is overall superior to the Meta YOLO, Meta R-CNN, RepMET and FsDetView algorithms.
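Assuming PyTorch, the stated schedule maps onto Adam plus a StepLR scheduler; `model` and the training loop below are placeholders, and the unusual ε = 0.5 is taken directly from the text.

```python
# Sketch of the reported training configuration under a PyTorch assumption.
import torch

model = torch.nn.Linear(1, 1)  # stand-in for the actual detection network

optimizer = torch.optim.Adam(
    model.parameters(), lr=1e-3,
    betas=(0.5, 0.9999), eps=0.5, weight_decay=1e-6,
)
# Halve the learning rate every 2 epochs, one way to realize the stated decay.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.5)

for epoch in range(20):
    # train_one_epoch(model, optimizer)  # training loop omitted in this sketch
    scheduler.step()
```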
While the invention has been described with reference to specific embodiments, any feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise; all of the disclosed features, or all of the method or process steps, may be combined in any combination, except combinations where mutually exclusive features or/and steps are present.

Claims (3)

1. A small sample detection algorithm based on cross-image feature fusion, characterized by comprising the following steps:
Step 1: extract the image features of the support set and the query set through a shared feature extraction network, record them as f_s and f_q, and send this group of feature maps to a cross-image feature fusion module;
Step 2: calculate the similarity of f_s and f_q, weight the feature maps in f_s accordingly, and generate the weighted support set feature map f_s′;
Step 3: calculate the attention feature map f_a between f_s′ and f_q, and matrix-multiply the attention feature map f_a with the query set feature map f_q to obtain the query set feature map f_q′ carrying support set attention information;
Step 4: send f_q′ into the improved RPN network to generate candidate boxes; generate ROI feature vectors after ROI Pooling; send them, together with the f_s feature vectors, to the improved feature aggregation module to generate final feature vectors; send the final feature vectors to a classifier for classification and to a box regression module for accurate positioning of the predicted box; and output the final target detection result, comprising the target category and target box coordinates.
2. The method according to claim 1, wherein the cross-image feature fusion module in step 1 comprises a support set weighting module and a cross-image spatial attention fusion module.
3. The method according to claim 1, wherein the improved RPN network of step 4 comprises a plurality of classifiers connected in parallel, and wherein the improved feature aggregation module filters candidate boxes and performs spatial alignment of the support set vectors and the ROI feature vectors.
CN202210506243.7A 2022-05-10 2022-05-10 Small sample detection method based on cross-image feature fusion Active CN114818963B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210506243.7A CN114818963B (en) 2022-05-10 2022-05-10 Small sample detection method based on cross-image feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210506243.7A CN114818963B (en) 2022-05-10 2022-05-10 Small sample detection method based on cross-image feature fusion

Publications (2)

Publication Number Publication Date
CN114818963A (en) 2022-07-29
CN114818963B (en) 2023-05-09

Family

ID=82513916

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210506243.7A Active CN114818963B (en) 2022-05-10 2022-05-10 Small sample detection method based on cross-image feature fusion

Country Status (1)

Country Link
CN (1) CN114818963B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017039967A1 (en) * 2015-09-01 2017-03-09 The United States Of America, As Represented By The Secretary, Department Of Health And Human Services Method and device for detecting antigen-specific antibodies in a biological fluid sample by using neodymium magnets
CN112434721A (en) * 2020-10-23 2021-03-02 特斯联科技集团有限公司 Image classification method, system, storage medium and terminal based on small sample learning
CN112818903A (en) * 2020-12-10 2021-05-18 北京航空航天大学 Small sample remote sensing image target detection method based on meta-learning and cooperative attention
CN113052185A (en) * 2021-03-12 2021-06-29 电子科技大学 Small sample target detection method based on fast R-CNN
CN113705570A (en) * 2021-08-31 2021-11-26 长沙理工大学 Few-sample target detection method based on deep learning
CN114220063A (en) * 2021-11-17 2022-03-22 浙江大华技术股份有限公司 Target detection method and device

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
ZIYING SONG et al.: "MSFYOLO: Feature fusion-based detection for small objects" *
LIU Zhihong: "SAR image target detection method based on feature-fusion convolutional neural networks" *
PENG Hao et al.: "Small sample target detection algorithm with multi-scale selection pyramid networks" *
TIAN Haokun: "Research on target classification and recognition algorithms under small sample conditions" *
JIA Haitao: "Research on moving target detection and recognition algorithms" *
DENG Jiehang et al.: "Few-shot diatom detection combining multi-scale multi-head self-attention and online hard example mining" *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116091787A (en) * 2022-10-08 2023-05-09 中南大学 Small sample target detection method based on feature filtering and feature alignment
CN116403071A (en) * 2023-03-23 2023-07-07 河海大学 Method and device for detecting few-sample concrete defects based on feature reconstruction
CN116403071B (en) * 2023-03-23 2024-03-26 河海大学 Method and device for detecting few-sample concrete defects based on feature reconstruction

Also Published As

Publication number Publication date
CN114818963B (en) 2023-05-09

Similar Documents

Publication Publication Date Title
CN113378632B (en) Pseudo-label optimization-based unsupervised domain adaptive pedestrian re-identification method
CN104866810B (en) A kind of face identification method of depth convolutional neural networks
Cheng et al. Exploiting effective facial patches for robust gender recognition
WO2018107760A1 (en) Collaborative deep network model method for pedestrian detection
CN106845510B (en) Chinese traditional visual culture symbol recognition method based on depth level feature fusion
CN106295507B (en) A kind of gender identification method based on integrated convolutional neural networks
US7362892B2 (en) Self-optimizing classifier
CN113408605B (en) Hyperspectral image semi-supervised classification method based on small sample learning
CN110321967B (en) Image classification improvement method based on convolutional neural network
CN103605972B (en) Non-restricted environment face verification method based on block depth neural network
CN109063649B (en) Pedestrian re-identification method based on twin pedestrian alignment residual error network
CN114818963A (en) Small sample detection algorithm based on cross-image feature fusion
CN106503661B (en) Face gender identification method based on fireworks deepness belief network
CN111008639B (en) License plate character recognition method based on attention mechanism
CN103065158A (en) Action identification method of independent subspace analysis (ISA) model based on relative gradient
CN109165658B (en) Strong negative sample underwater target detection method based on fast-RCNN
CN104636732A (en) Sequence deeply convinced network-based pedestrian identifying method
CN110647907A (en) Multi-label image classification algorithm using multi-layer classification and dictionary learning
CN114299542A (en) Video pedestrian re-identification method based on multi-scale feature fusion
CN112232395B (en) Semi-supervised image classification method for generating countermeasure network based on joint training
Raparthi et al. Machine Learning Based Deep Cloud Model to Enhance Robustness and Noise Interference
CN111310820A (en) Foundation meteorological cloud chart classification method based on cross validation depth CNN feature integration
Gao et al. Adaptive random down-sampling data augmentation and area attention pooling for low resolution face recognition
CN113033345B (en) V2V video face recognition method based on public feature subspace
CN115830401B (en) Small sample image classification method

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant