CN110427875B - Infrared image target detection method based on deep migration learning and extreme learning machine - Google Patents
Infrared image target detection method based on deep migration learning and extreme learning machine
- Publication number
- CN110427875B (application CN201910704660.0A)
- Authority
- CN
- China
- Prior art keywords
- model
- visible light
- learning
- training
- migration
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Photometry And Measurement Of Optical Pulse Characteristics (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to an infrared image target detection method based on deep transfer learning and an extreme learning machine, comprising the following steps: training a visible-light image target detection model on a visible-light sample set D with the Mask R-CNN two-stage multi-task detection architecture, feeding the mask into the neural network and redefining the loss function of the whole network structure; obtaining a transfer-learning data set with a sample-transfer method by expanding the distribution of the target domain, namely the infrared sample set T; taking the high-precision visible-light target detection model as the pre-training model for the generated transfer-learning data set and training it with the same framework as the visible-light detector, following a model-transfer method; and replacing the fully connected layer of the network with an extreme learning machine, thereby overcoming the overfitting that arises when the model is transferred with only a small number of samples.
Description
Technical Field
The invention belongs to the field of image detection and relates to a target detection method for infrared images.
Background
Small-sample learning is currently a research hotspot in deep learning. The real world contains a huge number of unlabeled images, yet detection tasks depend on large amounts of labeled data, which greatly increases the cost in time and money. Traditional machine learning has a serious limitation: it assumes that training data and test data obey the same distribution, an assumption that often does not hold, so large amounts of data usually have to be relabeled at great expense to satisfy the training requirement, wasting the existing data. Transfer learning, by contrast, extracts and transfers knowledge from existing data to complete a new learning task. As a branch of machine learning, transfer learning was originally intended to save the time spent manually labeling samples; in recent years, with the rapid development of deep neural networks, it has been combined with neural networks ever more closely, and its high resource utilization and low training cost have attracted extensive research from both academia and industry.
Transfer learning can be divided into sample transfer, feature transfer and model transfer according to the implementation method. When the source-domain and target-domain data are very similar, sample transfer can effectively remedy the shortage of target-domain samples. Feature transfer finds a latent feature space shared by the source and target domains by reconstructing features, so as to minimize the difference between the domains. In model transfer, when the source-domain and target-domain samples are similar in distribution, part of the model parameters or prior parameters can be shared between the learning tasks.
Distant-domain transfer has also become popular: transfer-learning target detection from faces to airplanes has been realized by passing through intermediate domains, offering a solution to the shortage of labeled data in deep learning.
Disclosure of Invention
The invention aims to provide an infrared target detection method based on deep transfer learning. First, a high-precision target detection model is trained on visible-light images with a two-stage detection framework; then, by using the sample-transfer and model-transfer modes of transfer learning in parallel, the visible-light detection model is transferred to the small-sample infrared images; finally, an extreme learning machine is added, so that the detection precision on infrared images is improved while overfitting is avoided. The technical scheme is as follows:
1. An infrared image target detection method based on deep transfer learning and an extreme learning machine, comprising the following steps:
The first step: training a visible-light image target detection model. Train on the visible-light sample set D with the Mask R-CNN two-stage multi-task detection framework, feed the mask into the neural network, and redefine the loss function of the whole network structure:
L = L_cls + L_box + L_mask (1)
where L_cls and L_box represent the classification error and the localization error respectively, and L_mask is the per-pixel mask error.
A ResNeXt-101 backbone is used, and a high-precision visible-light target detection model D_model is obtained through back-propagation with stochastic gradient descent.
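The loss of equation (1) can be sketched numerically as below. This is a minimal numpy illustration of how the three task losses combine, not the actual Mask R-CNN implementation; the helper names and toy shapes are assumptions.

```python
import numpy as np

def cls_loss(probs, label):
    # Cross-entropy classification loss for one region proposal.
    return -np.log(probs[label] + 1e-12)

def box_loss(pred, target):
    # Smooth-L1 localization loss over the 4 box coordinates.
    d = np.abs(pred - target)
    return np.sum(np.where(d < 1.0, 0.5 * d ** 2, d - 0.5))

def mask_loss(mask_logits, gt_mask):
    # Per-pixel binary cross-entropy on the sigmoid mask output.
    p = 1.0 / (1.0 + np.exp(-mask_logits))
    return -np.mean(gt_mask * np.log(p + 1e-12)
                    + (1 - gt_mask) * np.log(1 - p + 1e-12))

def total_loss(probs, label, box_pred, box_gt, mask_logits, gt_mask):
    # L = L_cls + L_box + L_mask, as in equation (1).
    return (cls_loss(probs, label)
            + box_loss(box_pred, box_gt)
            + mask_loss(mask_logits, gt_mask))
```

In the actual detector each term is averaged over the sampled region proposals; the sketch shows a single proposal for clarity.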
The second step: based on the sample-transfer method, expand the distribution of the target domain, namely the infrared sample set T. AdaBoost assigns different weights to the samples so as to filter out the samples in the source domain (the visible-light sample set D) that differ from the target domain, and re-weights the source-domain samples to form a distribution similar to the target domain. Finally the model is trained with the re-weighted instances from the source domain together with the original instances from the target domain, yielding a transfer-learning data set T_new:
T_new = T + D_part (2)
The third step: based on the model-transfer method, take the high-precision visible-light target detection model D_model as the pre-training model for the generated transfer-learning data set T_new, train with the same framework as the visible-light detector, and use an embedding loss function based on the L2 norm. Training yields a model T_model under overfitting conditions, which is not yet the optimal model.
The fourth step: when the model parameters are adjusted in the target domain, reconstruct the last multi-classifier layer and replace the fully connected layer of the network with an extreme learning machine, which overcomes the overfitting caused by transferring the model with few samples. A truncated-gradient method (dropout) assists the learning of the deep-neural-network parameters in the target domain during transfer, implicitly aiding the learning of activated neurons while limiting the learning of unactivated neurons.
The invention discloses an image detection method for deep-learning transfer based on an extreme learning machine. A convolutional neural network is pre-trained on a large number of visible-light image samples; with a sample-transfer method, the AdaBoost algorithm selects the images in the visible-light sample set whose distribution is similar to the target-domain samples; the visible-light model is then applied to the small infrared sample set as a pre-training model; finally, the pre-trained network structure is fused with an extreme learning machine (ELM), which reduces the number of samples the training process requires. Meanwhile, to avoid the overfitting that would result from retraining directly on the scarce target-domain data, a novel and simple method is adopted that limits the learnable space of the parameters in the target domain, keeping them as close as possible to the parameters learned in the source domain during adaptation. The invention is characterized in that: 1) the distribution of the small infrared sample set is expanded; 2) the transferred infrared model is robust, achieves high detection precision, and is not prone to overfitting.
Drawings
FIG. 1 is a flow chart of the overall two-stage detection structure
FIG. 2 is a schematic diagram of transfer learning
FIG. 3 shows the structure of the extreme learning machine (ELM)
FIG. 4 is a flow chart of the infrared-sample transfer-learning process
Detailed Description
A search shows that few patents address infrared small-sample target detection; this patent solves the infrared small-sample problem from the perspective of transfer learning and the extreme learning machine. Visible-light images make up the overwhelming proportion of images in the real world, and most current deep-learning tasks on image targets, such as detection, segmentation and tracking, are built on large numbers of labeled visible-light images. However, the detection and recognition precision on visible-light images drops sharply at night, in complex weather and in similar situations. Infrared images, by imaging radiated energy, largely make up for these shortcomings of visible light, but labeling infrared samples is expensive, and large labeled infrared data sets are lacking. The invention therefore improves the target detection precision on infrared images through transfer from visible light to infrared.
The patent relates to an infrared target detection method based on deep transfer learning. First, a high-precision target detection model is trained on visible-light images with a two-stage detection framework; then, by using the sample-transfer and model-transfer modes of transfer learning in parallel, the visible-light detection model is transferred to the small-sample infrared images; finally, an extreme learning machine is added, so that the detection precision on infrared images is improved while overfitting is avoided. The scheme is as follows:
first, visible light model training
This step aims to train a target detection model on a large number of labeled visible-light sample images. The two-stage detection framework Mask R-CNN (a mainstream two-stage architecture for deep-learning image detection) is adopted, and a visible-light detection model with high detection precision and strong robustness is trained in the visible-light image domain using data augmentation, a feature pyramid, model fusion and similar techniques. With the two-stage framework and the mask detection task, the target position is localized more finely: the RoIAlign method cancels the integer quantization and keeps the fractional coordinates. After the mask is added, the overall loss function of the algorithm becomes:
L = L_cls + L_box + L_mask
where L_cls and L_box represent the classification error and the localization error. For L_mask, assume there are k classes in total: the output dimension of the mask-segmentation branch is k·m², i.e. k binary m×m masks, one per class, with each value produced by a sigmoid. When the loss is computed, only the sigmoid output of the class a pixel belongs to enters the loss.
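The per-class sigmoid behaviour described above can be illustrated as follows; a toy numpy sketch (the function name and shapes are assumptions), in which only the m×m mask channel of the ground-truth class contributes to L_mask, so the classes do not compete as they would under a softmax:

```python
import numpy as np

def class_specific_mask_loss(mask_logits, gt_mask, gt_class):
    # mask_logits: (k, m, m) — one m×m mask per class, each through a sigmoid.
    # Only the ground-truth class's channel enters the loss.
    logits = mask_logits[gt_class]
    p = 1.0 / (1.0 + np.exp(-logits))
    return -np.mean(gt_mask * np.log(p + 1e-12)
                    + (1 - gt_mask) * np.log(1 - p + 1e-12))
```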
Second step, sample migration
This step uses AdaBoost (an ensemble algorithm) to assign different weights to samples, filtering out of the source domain (visible-light images) the samples that differ from the target domain (infrared images) and re-weighting the source-domain samples to form a distribution similar to the target domain. Finally the model is trained with the re-weighted samples from the source domain together with the original samples from the target domain.
Formally, there are a source domain D_s with source task T_s and a target domain D_t with target task T_t. A base learner is first selected; the current base learner is then trained according to the performance of the previous base learners, and the sample weights are adjusted iteratively. Let Y be the label space; data drawn from the same distribution as the target domain is denoted X_s, and data from the different (source) distribution is denoted X_d. The entire training data space is the training set T:
T = {x_i}, x_i ∈ X_s ∪ X_d (i = 1, 2, ..., k)
The training set T can be divided into data T_d drawn from the different (source) distribution and data T_s drawn from the same (target) distribution.

First, the weight of each sample is normalized so that the weights form a distribution.

The weak classifier is then invoked, taking the data of T_d and T_s together as training data; this is where the source-domain data acts on the model.

Next, the error rate ε_t is computed, over the data in T_s (the target-domain data) only; source-domain data does not enter this computation, and the weights of the T_s samples are renormalized when the error rate is calculated.

The weight-adjustment rates for T_s and T_d are then computed separately: the adjustment rate for T_s differs in each iteration, while the rate for T_d stays the same. Finally, a selection policy picks the source-domain samples most similar to the target domain and merges them with the target-domain samples to form the new sample set.
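The iteration above follows the TrAdaBoost idea of Dai et al. A simplified numpy sketch, assuming a plug-in weighted base learner `fit_predict` (an assumption, not part of the patent); the fixed source-weight rate follows the TrAdaBoost paper, and the patent's exact selection policy is not reproduced:

```python
import numpy as np

def tradaboost_weights(X_src, y_src, X_tgt, y_tgt, fit_predict, n_iter=10):
    # Source samples the base learner keeps misclassifying have their weights
    # shrunk (they look unlike the target domain); misclassified target
    # samples are boosted, as in ordinary AdaBoost.
    n_s, n_t = len(X_src), len(X_tgt)
    w = np.ones(n_s + n_t) / (n_s + n_t)
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([y_src, y_tgt])
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_s) / n_iter))
    for _ in range(n_iter):
        w = w / w.sum()                    # normalise to a distribution
        pred = fit_predict(X, y, w, X)     # train on source + target jointly
        err = (pred != y).astype(float)
        # The error rate is measured on the target-domain samples only.
        eps = np.sum(w[n_s:] * err[n_s:]) / np.sum(w[n_s:])
        eps = min(max(eps, 1e-12), 0.499)
        beta_t = eps / (1.0 - eps)
        w[:n_s] *= beta_src ** err[:n_s]   # demote dissimilar source samples
        w[n_s:] *= beta_t ** (-err[n_s:])  # promote hard target samples
    return w / w.sum()
```

The final weights can then be thresholded to pick the source-domain samples that are merged with the target samples into the new set.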
Third, model migration
The purpose of this step is to transfer the network architecture of the first two stages of the visible-light model, using the model and the sample set obtained in the previous steps, and to retrain on the new sample set. The trained visible-light model is transferred to the infrared sample set, and an embedding loss function is introduced in order to learn discriminative features. The embedding loss not only regularizes the representative features, but also effectively reduces the variance within each individual class while widening the variance between different classes. The embedding loss function based on the L2 norm takes the form:
where f denotes the outputs of the fully connected layer, which can be seen as the representative features extracted from pictures X_i and X_j respectively; y_ij = 1 and y_ij = 0 indicate whether or not the two pictures belong to the same individual; and the hyperparameter m is the margin parameter. The overall objective loss function is as follows:
the former part of Loss is a Loss function of cross entropy, the latter part is an L2 Loss function, and the redefined model Loss function can better specify characteristics, so that the visible light model is adapted to a new infrared sample set.
Fourthly, the extreme learning machine replaces the network full connection layer
After the deep neural network model has been trained in the source domain, the next step is to transfer the visible-light model parameters to the target domain. However, the sample distribution in the target domain may deviate considerably from the data distribution in the source domain, and the training samples in the target domain are scarce; retraining directly on these scarce data would overfit the network and destroy the parameter structure learned by the deep neural network in the source domain.
This step retains the parameter information W_s learned in the source domain and constrains it in the target domain, fine-tuning W_s there. To prevent overfitting and reduce the effect of the distribution gap between the source-domain sample pictures and the target domain, an extreme learning machine replaces the fully connected layer, and the learning space of W_s is constrained so that effective parameters are obtained by fine-tuning in the target domain. Here ||W_t − W_s|| may take the L1 or the L2 norm form.
J(W_t, b_t) = Loss + ε·||W_t − W_s||²
This limits the target-domain parameters W_t so that they stay as close as possible to the source-domain parameters W_s during adaptation. When training with scarce samples, W_t can then be fine-tuned within a reasonable range of variation.
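The constrained objective J(W_t, b_t) = Loss + ε·||W_t − W_s||² and the extra gradient it contributes to each target-domain update can be sketched as below; a minimal numpy illustration, with the function names and the value of ε as assumptions:

```python
import numpy as np

def regularized_objective(task_loss, W_t, W_s, eps=1e-2):
    # J(W_t, b_t) = Loss + eps * ||W_t - W_s||_2^2 : keeps the fine-tuned
    # target parameters W_t close to the source parameters W_s.
    return task_loss + eps * np.sum((W_t - W_s) ** 2)

def penalty_gradient(W_t, W_s, eps=1e-2):
    # Gradient of the penalty term, added to the task-loss gradient
    # during each SGD step in the target domain.
    return 2.0 * eps * (W_t - W_s)
```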
When adjusting the model in the target domain, the last multi-classifier layer needs to be reconstructed, since the number of picture classes in the source domain may differ from that in the target domain. The gradient-descent update of the model is as follows: L1: J(W_t, b_t) = Loss + ||W_t − W_s||
In the above formula, the element W_ij of the matrix W is 1 when the corresponding condition holds, and 0 otherwise.
In addition, neurons exhibit selective activation. This step adopts a truncated-gradient method (dropout) to assist the learning of the deep-neural-network parameters in the target domain during transfer, implicitly aiding the learning of activated neurons while limiting the learning of unactivated neurons. The gradient-descent update is:
Limiting the model parameters W_t in this way aids the coordinated, adaptive learning of the parameters in the target domain.
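The extreme learning machine head that replaces the fully connected layer can be sketched as below: the input weights are random and stay fixed, and the output weights are solved in closed form with the Moore-Penrose pseudo-inverse, so no iterative training runs on the scarce infrared samples. A minimal numpy sketch (the sigmoid activation, hidden size and class interface are assumptions):

```python
import numpy as np

class ExtremeLearningMachine:
    def __init__(self, n_hidden=64, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Fixed random projection followed by a sigmoid hidden layer.
        return 1.0 / (1.0 + np.exp(-(X @ self.W_in + self.b)))

    def fit(self, X, y, n_classes):
        self.W_in = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        T = np.eye(n_classes)[y]           # one-hot targets
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ T  # closed-form output weights
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)
```

In the method of this patent, `X` would be the deep features produced by the transferred backbone rather than raw inputs.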
In combination with the embodiment, let the infrared sample set be T and the visible light sample set be D, the steps of the invention are as follows:
The first step: training a visible-light image target detection model. Specifically, train on the visible-light sample set D with the Mask R-CNN two-stage multi-task detection architecture, feed the mask into the neural network, and redefine the loss function of the whole network structure; the specific flow is shown in fig. 1:
L = L_cls + L_box + L_mask (1)
where L_cls and L_box represent the classification error and the localization error respectively, and L_mask is the per-pixel mask error.
A ResNeXt-101 backbone is used, and a high-precision visible-light target detection model D_model is obtained through back-propagation with stochastic gradient descent.
The second step: based on the sample-transfer method, as shown in fig. 2, the distribution of the target domain (the infrared sample set T) is expanded, and AdaBoost assigns different weights to the samples to filter out the samples in the source domain (the visible-light sample set D) that differ from the target domain, re-weighting the source-domain samples to form a distribution similar to the target domain. Finally the model is trained with the re-weighted instances from the source domain together with the original instances from the target domain, giving the new sample set:
T_new = T + D_part (2)
The third step: based on the model-transfer method, the visible-light model D_model is taken as the pre-training model for the generated transfer-learning data set T_new and trained with the same framework as the visible-light detector, as shown in fig. 1. The embedding loss function used not only regularizes the representative features but also effectively reduces the variance within each individual class while enlarging the variance between different classes. The embedding loss function based on the L2 norm takes the form:
overall objective loss function:
This step trains a model T_model under overfitting conditions, which is not yet the optimal model.
The fourth step: when the model parameters are adjusted in the target domain, the last multi-classifier layer needs to be reconstructed, since the number of picture classes in the source domain may differ from that in the target domain. An extreme learning machine (shown in fig. 3) replaces the fully connected layer of the network, overcoming the overfitting of the small-sample model transfer; the overall flow chart is shown in fig. 4, and the gradient-descent update of the model is:
L1: J(W_t, b_t) = Loss + ||W_t − W_s|| (5)
With the extreme learning machine replacing the fully connected layer of the transferred network, the overfitting caused by the small size of the infrared sample set and its large variance relative to visible-light images is effectively avoided, and a model T_model with strong generalization, high detection precision and good robustness is obtained.
Claims (1)
1. An infrared image target detection method based on deep transfer learning and an extreme learning machine, comprising the following steps:
The first step: training a visible-light image target detection model. Train on the visible-light sample set D with the Mask R-CNN two-stage multi-task detection framework, feed the mask into the neural network, and redefine the loss function of the whole network structure:
L = L_cls + L_box + L_mask (1)
where L_cls and L_box represent the classification error and the localization error respectively, and L_mask is the per-pixel mask error;
a high-precision visible-light target detection model D_model is obtained through back-propagation with stochastic gradient descent;
The second step: based on the sample-transfer method, expand the distribution of the target domain, namely the infrared sample set T, and filter out the samples in the source domain (the visible-light sample set D) that differ from the target domain by assigning different weights to the samples with AdaBoost; re-weight the source-domain samples to form a distribution similar to the target domain; finally train the model with the re-weighted instances from the source domain and the original instances from the target domain, obtaining the transfer-learning data set T_new:
T_new = T + D_part (2)
The third step: based on the model-transfer method, take the first two stages of the network structure of the high-precision visible-light target detection model D_model as the pre-training model for the generated transfer-learning data set T_new, using an embedding loss function based on the L2 norm; training yields a model T_model under overfitting conditions, which is not yet the optimal model;
the overall objective loss function is as follows:
the first part of the loss is the cross-entropy loss function and the second part is the loss function based on the L2 norm; the redefined model loss function regularizes the features so that the visible-light model adapts to the new infrared sample set;
the embedding loss function based on the L2 norm takes the form:
f is the output of the fully connected layer, regarded as the representative features extracted from pictures X_i and X_j respectively; y_ij = 1 and y_ij = 0 represent whether or not the two pictures belong to the same individual; the hyperparameter m is the margin parameter;
The fourth step: when the model parameters are adjusted in the target domain, reconstruct the last multi-classifier layer and replace the fully connected layer of the network with an extreme learning machine, overcoming the overfitting of the small-sample model transfer; a truncated-gradient method (dropout) assists the learning of the deep-neural-network parameters in the target domain during transfer, implicitly aiding the learning of activated neurons while limiting the learning of unactivated neurons.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910704660.0A CN110427875B (en) | 2019-07-31 | 2019-07-31 | Infrared image target detection method based on deep migration learning and extreme learning machine |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110427875A CN110427875A (en) | 2019-11-08 |
CN110427875B true CN110427875B (en) | 2022-11-11 |
Family
ID=68413609
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910704660.0A Active CN110427875B (en) | 2019-07-31 | 2019-07-31 | Infrared image target detection method based on deep migration learning and extreme learning machine |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110427875B (en) |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111339836A (en) * | 2020-02-07 | 2020-06-26 | 天津大学 | SAR image ship target detection method based on transfer learning |
CN111401454A (en) * | 2020-03-19 | 2020-07-10 | 创新奇智(重庆)科技有限公司 | Few-sample target identification method based on transfer learning |
CN111582477B (en) * | 2020-05-09 | 2023-08-29 | 北京百度网讯科技有限公司 | Training method and device for neural network model |
CN111882055B (en) * | 2020-06-15 | 2022-08-05 | 电子科技大学 | Method for constructing target detection self-adaptive model based on cycleGAN and pseudo label |
CN111723823B (en) * | 2020-06-24 | 2023-07-18 | 河南科技学院 | Underwater target detection method based on third party transfer learning |
CN111767868B (en) * | 2020-06-30 | 2024-06-11 | 创新奇智(北京)科技有限公司 | Face detection method and device, electronic equipment and storage medium |
CN111915566B (en) * | 2020-07-03 | 2022-03-15 | 天津大学 | Infrared sample target detection method based on cyclic consistency countermeasure network |
CN111721536B (en) * | 2020-07-20 | 2022-05-27 | 哈尔滨理工大学 | Rolling bearing fault diagnosis method for improving model migration strategy |
CN112287839B (en) * | 2020-10-29 | 2022-12-09 | 广西科技大学 | SSD infrared image pedestrian detection method based on transfer learning |
CN112506667A (en) * | 2020-12-22 | 2021-03-16 | 北京航空航天大学杭州创新研究院 | Deep neural network training method based on multi-task optimization |
CN112949387B (en) * | 2021-01-27 | 2024-02-09 | 西安电子科技大学 | Intelligent anti-interference target detection method based on transfer learning |
CN113076895B (en) * | 2021-04-09 | 2022-08-02 | 太原理工大学 | Conveyor belt longitudinal damage vibration sensing method based on infrared computer vision |
CN113221993B (en) * | 2021-05-06 | 2023-08-01 | 西安电子科技大学 | Large-view-field small-sample target detection method based on meta-learning and cross-stage hourglass |
CN113159300B (en) * | 2021-05-15 | 2024-02-27 | 南京逸智网络空间技术创新研究院有限公司 | Image detection neural network model, training method thereof and image detection method |
CN113344868B (en) * | 2021-05-28 | 2023-08-25 | 山东大学 | Label-free cell classification screening system based on mixed transfer learning |
CN113762466B (en) * | 2021-08-02 | 2023-06-20 | 国网河南省电力公司信息通信公司 | Electric power internet of things flow classification method and device |
CN113627541B (en) * | 2021-08-13 | 2023-07-21 | 北京邮电大学 | Optical path transmission quality prediction method based on sample migration screening |
CN113537403A (en) * | 2021-08-14 | 2021-10-22 | 北京达佳互联信息技术有限公司 | Training method and device and prediction method and device of image processing model |
CN114170532A (en) * | 2021-11-23 | 2022-03-11 | 北京航天自动控制研究所 | Multi-target classification method and device based on difficult sample transfer learning |
CN114170531A (en) * | 2021-11-23 | 2022-03-11 | 北京航天自动控制研究所 | Infrared image target detection method and device based on difficult sample transfer learning |
CN114783072B (en) * | 2022-03-17 | 2022-12-30 | 哈尔滨工业大学(威海) | Image identification method based on remote domain transfer learning |
CN115037641B (en) * | 2022-06-01 | 2024-05-03 | 网络通信与安全紫金山实验室 | Network traffic detection method and device based on small sample, electronic equipment and medium |
CN116129292A (en) * | 2023-01-13 | 2023-05-16 | 华中科技大学 | Infrared vehicle target detection method and system based on few sample augmentation |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104657596A (en) * | 2015-01-27 | 2015-05-27 | 中国矿业大学 | Model-transfer-based large-sized new compressor performance prediction rapid-modeling method |
CN106096004A (en) * | 2016-06-23 | 2016-11-09 | 北京工业大学 | A kind of method setting up extensive cross-domain texts emotional orientation analysis framework |
CN107463954A (en) * | 2017-07-21 | 2017-12-12 | 华中科技大学 | A kind of template matches recognition methods for obscuring different spectrogram picture |
CN109308483A (en) * | 2018-07-11 | 2019-02-05 | 南京航空航天大学 | Double source image characteristics extraction and fusion identification method based on convolutional neural networks |
CN109583482A (en) * | 2018-11-13 | 2019-04-05 | 河海大学 | A kind of infrared human body target image identification method based on multiple features fusion Yu multicore transfer learning |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10909407B2 (en) * | 2017-05-24 | 2021-02-02 | Hrl Laboratories, Llc | Transfer learning of convolutional neural networks from visible color (RBG) to infrared (IR) domain |
Non-Patent Citations (7)
Title |
---|
"Boosting for Transfer Learning";Wenyuan Dai等;《Proceedings of the 24th International Conference on Machine Learning》;20071231;第193-200页 * |
"Land-Use Classification via Extreme Learning Classifier Based on Deep Convolutional Features";Qian Weng等;《IEEE Geoscience and Remote Sensing Letters》;20170322;第704-708页 * |
"基于深度 CNN 和极限学习机相结合的实时文档分类";闫河等;《计算机应用与软件》;20190331;第174-178页 * |
"基于深度学习的红外图像飞机目标检测方法";朱大炜;《中国优秀硕士学位论文全文数据库 工程科技Ⅱ辑》;20190215;第2019年卷(第2期);C032-46 * |
"基于迁移学习的小样本人脸识别研究";曹越;《中国优秀硕士学位论文全文数据库 信息科技辑》;20190215;第2019年卷(第2期);I138-2015 * |
"基于迁移学习的红外图像多目标检测技术";林鸿生等;《红外》;20190725;第26-34页 * |
"基于迁移学习的雷达辐射源识别研究";陆鑫伟;《中国优秀硕士学位论文全文数据库 信息科技辑》;20130315;第2013年卷(第3期);I136-1058 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||