CN110837865A - Domain adaptation method based on representation learning and transfer learning - Google Patents

Domain adaptation method based on representation learning and transfer learning Download PDF

Info

Publication number
CN110837865A
CN110837865A (application number CN201911084862.6A)
Authority
CN
China
Prior art keywords
data
domain
source domain
loss
pooling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911084862.6A
Other languages
Chinese (zh)
Inventor
李佳珍
袁晓光
王泊涵
戴志明
李墈婧
韩涛
谢德鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Computer Technology and Applications
Original Assignee
Beijing Institute of Computer Technology and Applications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Computer Technology and Applications filed Critical Beijing Institute of Computer Technology and Applications
Priority to CN201911084862.6A priority Critical patent/CN110837865A/en
Publication of CN110837865A publication Critical patent/CN110837865A/en
Pending legal-status Critical Current

Classifications

    • G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches (G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06F ELECTRIC DIGITAL DATA PROCESSING; G06F18/00 Pattern recognition; G06F18/20 Analysing; G06F18/24 Classification techniques)
    • G06N3/045 — Combinations of networks (G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS; G06N3/00 Computing arrangements based on biological models; G06N3/02 Neural networks; G06N3/04 Architecture, e.g. interconnection topology)
    • G06N3/08 — Learning methods (G06N3/02 Neural networks)
    • G06V10/40 — Extraction of image or video features (G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; G06V10/00 Arrangements for image or video recognition or understanding)


Abstract

The invention relates to a domain adaptation method based on representation learning and transfer learning, belonging to the technical field of big data and the Internet of Things. The method is trained with only a small number of real labels from the source domain, reducing the dependence on labeled data. By combining Center Loss with Softmax Loss, the source-domain data features are driven to enlarge the inter-class distance and shrink the intra-class distance, yielding separable source-domain features. By defining a loss function over the distance between source-domain and target-domain features, namely the MMD Loss function, the training process of the neural network reduces the value of the MMD Loss, thereby narrowing the distance between the source and target domains and ultimately improving the accuracy on the target domain.

Description

Domain adaptation method based on representation learning and transfer learning
Technical Field
The invention belongs to the technical field of big data and the Internet of Things, and particularly relates to a domain adaptation method based on representation learning and transfer learning.
Background
With the development of big data and the Internet of Things, researchers increasingly tackle big-data problems with deep learning models. A deep learning model automatically extracts high-level data features through an end-to-end neural network structure, replacing the manual feature-engineering step of classical machine learning algorithms, and this automatic feature extraction brings a marked improvement in recognition accuracy. The convolutional neural network (CNN) relies on translation invariance, local connectivity, and weight sharing: it extracts data features through convolutional and pooling layers and classifies the data through a Softmax layer. CNNs are increasingly used in image and text classification tasks.
Traditional CNN-based data classification relies on training with labeled data, yet most data collected in practice is unlabeled, so a great deal of time is spent labeling data before an algorithm model can be trained. Moreover, a traditional CNN architecture classifies well only data whose distribution matches that of the training set; when the test set contains data drawn from a different distribution, test accuracy drops sharply. The generalization ability of the traditional CNN architecture is therefore too low: the network achieves poor accuracy on test data distributed differently from the training data, which limits the usage scenarios of deep learning models.
Disclosure of Invention
Technical problem to be solved
The technical problem to be solved by the invention is: how to enable a trained deep learning model to improve its accuracy on the target domain (the test-set data).
(II) technical scheme
In order to solve the above technical problem, the invention provides a domain adaptation method based on representation learning and transfer learning, comprising the following steps:
step one: for the data of the source domain and the data of the target domain, extracting source-domain features and target-domain features through the convolution and pooling of a CNN model, wherein the source-domain data are the sensor data in the training set and the target-domain data are the sensor data in the test set;
step two: for the source-domain features and the real source-domain labels, obtaining separable source-domain features through a joint loss function combining Center Loss and Softmax Loss;
step three: reducing the distance between the distributions of the source-domain features processed in step two and the target-domain features through the MMD Loss.
Preferably, in step one, the convolutional layers and pooling layers of the deep learning CNN are used to extract the data features of the source domain and of the target domain, respectively.
Preferably, in step one, features are extracted from the source-domain data by convolution, each convolutional layer containing 50 convolution kernels of size 5×1, followed by a pooling operation in which the pooling layer performs 3×1 max pooling; the source-domain data features are obtained after pooling. Features are extracted from the target-domain data in the same way: each convolutional layer contains 50 convolution kernels of size 5×1, followed by 3×1 max pooling, yielding the target-domain data features.
Preferably, in step two, the Center Loss defines a center for each feature class, so that the CNN features of a given class are as close as possible to that class's center while the features of different classes stay far from it, as shown in equation (1), where L_C denotes the distance from each class's feature matrix to its class center, y_i denotes the class of sample i (i.e. i identifies the class), C_{y_i} denotes the center of class y_i, χ_i is the feature matrix of sample i, and m is the number of samples per gradient update of the training model:

    L_C = (1/2) · Σ_{i=1}^{m} ‖χ_i − C_{y_i}‖₂²    (1)

According to equation (1), the gradient of L_C used when training the neural network is χ_i − C_{y_i}, as shown in equation (2):

    ∂L_C/∂χ_i = χ_i − C_{y_i}    (2)

The joint loss function is established as shown in equation (3), where λ ∈ (0,1) adjusts the ratio of the Softmax Loss to the Center Loss, L_S denotes the Softmax Loss, and L_C denotes the Center Loss, i.e. the distance from each class's feature matrix to its class center:

    L = L_S + λ·L_C    (3)
preferably, the maximum mean difference MMD is used in the third step to measure the distribution difference of the source domain and the target domain, and the source domain feature and the target domain feature are pulled by defining the MMD Loss function.
Preferably, in step three, the distance between the source-domain data and the target-domain data, i.e. the value of the MMD, is solved as shown in equation (4):

    MMD(p, q) = sup_{‖f‖_H ≤ 1} ( E_p[f(x)] − E_q[f(y)] )    (4)

where E_p[f(x)] denotes the expectation over the source-domain data and E_q[f(y)] the expectation over the target-domain data; f denotes a function in the reproducing kernel Hilbert space H, and the sup is taken over the set of such functions; p denotes the source domain, q the target domain, x the source-domain data, and y the target-domain data; the sup expresses that the data features of the two domains are mapped into the reproducing kernel Hilbert space.
Preferably, in step three, an MMD Loss function L_MMD for reducing the distribution distance between the source-domain and target-domain features is further defined, as shown in equation (5): the MMD value obtained in equation (4) is squared and multiplied by a scaling coefficient (rendered only as an image in the original filing), here written λ_MMD:

    L_MMD = λ_MMD · MMD²(p, q)    (5)

where MMD denotes the value obtained from equation (4).
Preferably, in step three, the distribution distance between the source-domain and target-domain features is finally reduced by decreasing the value of the MMD Loss function.
Preferably, in step one, the properties the convolutional layers rely on to extract features include local connectivity and weight sharing: a local receptive field is used in the convolutional neural network to scan the whole data matrix, and all nodes covered by the local region are connected to a single node of the next layer.
(III) advantageous effects
The domain adaptation method based on representation learning and transfer learning provided by the invention is trained with only a small number of real labels from the source domain, reducing the dependence on labeled data. By combining Center Loss with Softmax Loss, the source-domain data features are driven to enlarge the inter-class distance and shrink the intra-class distance, yielding separable source-domain features. By defining a loss function over the distance between source-domain and target-domain features, namely the MMD Loss function, the training process of the neural network reduces the value of the MMD Loss, thereby narrowing the distance between the source and target domains and ultimately improving the accuracy on the target domain.
Drawings
FIG. 1 is a general flow diagram of the domain adaptation method based on representation learning and transfer learning according to the present invention;
fig. 2 is a schematic diagram of the steps of obtaining the separable source domain feature in the present invention.
Detailed Description
In order to make the objects, contents, and advantages of the present invention clearer, the following detailed description of the embodiments of the present invention will be made in conjunction with the accompanying drawings and examples.
The invention aims to provide a domain adaptation method based on representation learning and transfer learning that achieves high accuracy while training the algorithm model with only a small amount of labeled data, thereby reducing the cost of manually labeling data. Second, based on a representation learning method for data features, the features extracted by the CNN are visualized and analyzed so that they exhibit large inter-class spacing and small intra-class spacing, allowing the Softmax function to classify them better. In addition, the maximum mean discrepancy (MMD) from transfer learning is applied to draw the sample distributions of the training set and the test set closer together, improving classification accuracy on the test set and the generalization of the algorithm. Finally, a convolutional neural network is constructed with the CNN, and a domain adaptation method based on the representation learning and transfer learning techniques is proposed to improve the accuracy on the target domain.
Referring to fig. 1, the general idea of the invention is as follows. First, a basic deep learning CNN model is constructed to extract data features: the source-domain data (training-set data) pass through the CNN's convolution and pooling to extract source-domain features, and the target-domain data (test-set data) pass through the CNN's convolution and pooling to extract target-domain features. Second, from a small number of real source-domain labels and the source-domain features extracted by the CNN, separable source-domain features are obtained based on the Center Loss and Softmax Loss functions, so that the extracted source-domain features have enlarged inter-class distance and reduced intra-class distance; this data-feature representation method based on the combined Center Loss and Softmax Loss function classifies the source-domain data features effectively. Finally, the distance between the distributions of the source-domain and target-domain features is reduced through the MMD Loss function, so that the trained model achieves good classification accuracy on the source-domain data and also high accuracy on the target domain (test-set data). The method specifically comprises the following steps:
Step one: for the source-domain data and the target-domain data, extract source-domain features and target-domain features through the convolution and pooling of the CNN model
The invention aims to improve the accuracy of a trained deep learning model on the target domain (the test-set data). The source-domain data are the sensor data in the training set, and the target-domain data are the sensor data in the test set. The invention adopts Python as the development language and builds the CNN's convolution and pooling on top of the TensorFlow framework. The convolutional and pooling layers of the deep learning CNN extract the data features of the source domain and of the target domain, respectively.
The properties that convolutional layers rely on to extract features are chiefly local connectivity and weight sharing. In a convolutional neural network, a local receptive field is typically used to scan the entire data matrix, and all nodes covered by the local region are connected to a single node of the next layer. A convolutional neural network is thus characterized by partial connectivity: unlike a feedforward neural network, each output node is not connected to all input nodes.
The function of pooling is to progressively reduce the spatial size of the input features, thereby reducing the number of parameters and the computation in the network and suppressing overfitting; the invention adopts max pooling.
Features are extracted from the source-domain data by convolution, each convolutional layer containing 50 convolution kernels of size 5×1; the data then undergo a pooling operation, the pooling layer performing 3×1 max pooling, after which the source-domain data features are obtained.
Features are extracted from the target-domain data by convolution, each convolutional layer containing 50 convolution kernels of size 5×1; the data then undergo a pooling operation, the pooling layer performing 3×1 max pooling, after which the target-domain data features are obtained.
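The convolution-and-pooling step above can be sketched in plain Python. This is a minimal illustration, not the patent's TensorFlow implementation: a single 1-D valid convolution with one 5×1 kernel followed by non-overlapping 3×1 max pooling, on one channel; the input sequence and kernel values are invented for the example (the patent uses 50 learned kernels per layer).

```python
def conv1d_valid(signal, kernel):
    """1-D 'valid' convolution: slide the kernel over the signal."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def max_pool1d(feature_map, size=3, stride=3):
    """Non-overlapping 1-D max pooling, mirroring the 3x1 max pool described above."""
    return [max(feature_map[i:i + size])
            for i in range(0, len(feature_map) - size + 1, stride)]

# Toy sensor sequence and a single 5x1 kernel (illustrative values only).
x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]
w = [0.2, 0.2, 0.2, 0.2, 0.2]        # a 5x1 averaging kernel

features = conv1d_valid(x, w)         # 10 - 5 + 1 = 6 feature values
pooled = max_pool1d(features, 3, 3)   # 2 pooled values
```

In the patent this pipeline is applied identically to the source-domain and target-domain data to produce the two feature sets.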
Step two: for the source-domain features and the real source-domain labels, obtain separable source-domain features through the Center Loss and Softmax Loss functions
Fig. 2 shows two cases of CNN-extracted data features and the influence of merely separable versus discriminative data features on the classification result. The black and gray points in the lower middle diagrams of fig. 2 represent data of two classes. As the figure shows, discriminative data features exhibit a large inter-class gap and a small intra-class distance within the same class; such features can be classified well, improving accuracy. The discriminative features shown in the right-hand lower diagram of fig. 2 are extracted by the Center Loss data-feature representation method.
The idea of Center Loss is therefore to define a center for each feature class, so that the CNN features of a given class are as close as possible to that class's center while the features of different classes stay far from it. In equation (1), L_C represents the distance from each class's feature matrix to its class center, y_i denotes the class of sample i, C_{y_i} denotes the center of class y_i, χ_i is the feature matrix of sample i, and m is the number of samples per gradient update of the training model:

    L_C = (1/2) · Σ_{i=1}^{m} ‖χ_i − C_{y_i}‖₂²    (1)

According to equation (1), the gradient used when training the neural network is χ_i − C_{y_i}, as shown in equation (2):

    ∂L_C/∂χ_i = χ_i − C_{y_i}    (2)
Because the features the CNN extracts from the source-domain data are distributed in elongated strips, they exhibit an overly large intra-class distance and an overly small inter-class distance. Therefore, based on a small number of real source-domain labels and the source-domain features extracted by the CNN, the invention proposes a joint loss function combining Center Loss and Softmax Loss, so that the extracted source-domain features enlarge the inter-class distance and reduce the intra-class distance. The overall loss is the Softmax classification loss plus the Center Loss, as shown in equation (3), where λ adjusts the ratio of the Softmax Loss to the Center Loss and in general λ ∈ (0,1); L_S denotes the Softmax Loss and L_C denotes the Center Loss, i.e. the distance from each class's feature matrix to its class center:

    L = L_S + λ·L_C    (3)
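Equations (1)–(3) can be checked numerically with a small sketch. This is an illustrative computation of the batch Center Loss, its per-sample gradient, and the joint loss L = L_S + λ·L_C; the feature vectors, class centers, λ, and the Softmax Loss value are all invented for the example.

```python
def center_loss(features, labels, centers):
    """Equation (1): L_C = 1/2 * sum_i ||x_i - c_{y_i}||^2 over the batch."""
    total = 0.0
    for x, y in zip(features, labels):
        c = centers[y]
        total += sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return 0.5 * total

def center_loss_grad(x, y, centers):
    """Equation (2): dL_C/dx_i = x_i - c_{y_i}."""
    return [xi - ci for xi, ci in zip(x, centers[y])]

# Two classes with 2-D features; centers and features are toy values.
centers = {0: [0.0, 0.0], 1: [1.0, 1.0]}
features = [[0.1, -0.1], [0.9, 1.2]]
labels = [0, 1]

l_c = center_loss(features, labels, centers)   # 0.5 * (0.02 + 0.05)
softmax_loss = 0.7                             # stand-in for L_S
lam = 0.5                                      # lambda in (0, 1)
joint = softmax_loss + lam * l_c               # equation (3)
```

Minimizing L_C pulls each feature toward its class center (shrinking intra-class distance), while the Softmax term keeps the classes apart.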
Step three: reduce the distance between the distributions of the source-domain features and the target-domain features through the MMD Loss
In step one, the target-domain data are convolved and pooled by the convolutional neural network to extract target-domain features; in step two, the Center Loss representation learning method is combined with the convolution and pooling operations of the CNN model to extract discriminative source-domain data features. In step three, the maximum mean discrepancy (MMD) is used to measure the distribution difference between the source and target domains, and the source-domain and target-domain features are drawn closer by defining an MMD Loss function. Provided that the representation learning method classifies the source-domain data features well, the target domain can then also achieve higher accuracy; the invention therefore employs the maximum mean discrepancy from unsupervised transfer methods to reduce the distance between the source-domain and target-domain feature distributions. The invention studies the case where the source and target domains share the same classification categories, i.e. their classification feature spaces coincide, but the feature distributions of the two domains differ.
The data features extracted by the CNN are generally high-dimensional, and when measuring the distribution difference between the source and target domains, the features of both domains are mapped into a high-dimensional reproducing kernel Hilbert space (RKHS). The essence of the domain adaptation problem in transfer learning is to reduce the distance between the source-domain and target-domain feature distributions, and the invention adopts the MMD transfer method.
Equation (4) gives the method for solving the value of the MMD, i.e. the distance between the source-domain data and the target-domain data:

    MMD(p, q) = sup_{‖f‖_H ≤ 1} ( E_p[f(x)] − E_q[f(y)] )    (4)

where E_p[f(x)] denotes the expectation over the source-domain data and E_q[f(y)] the expectation over the target-domain data; f denotes a function in the reproducing kernel Hilbert space, and the sup is taken over the set of such functions; p denotes the source domain and q the target domain; x denotes the source-domain data and y the target-domain data; the sup expresses that the data features of the two domains are mapped into the high-dimensional reproducing kernel Hilbert space (RKHS).
The invention further defines an MMD Loss function L_MMD for reducing the distribution distance between the source-domain and target-domain features, as shown in equation (5): the MMD value obtained in equation (4) is squared and multiplied by a scaling coefficient (rendered only as an image in the original filing), here written λ_MMD:

    L_MMD = λ_MMD · MMD²(p, q)    (5)

where MMD denotes the value obtained from equation (4).
Finally, the accuracy on the target domain is improved by decreasing the value of the MMD Loss function, which reduces the distribution distance between the source-domain and target-domain features.
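The patent minimizes the joint loss of equation (3) together with the MMD Loss of equation (5), but does not spell out how the two objectives are weighted against each other during training. The combined objective below, with a made-up weight mu, is therefore an assumption for illustration only, not the patent's stated formula.

```python
def total_objective(softmax_loss, center_loss, mmd_loss, lam=0.5, mu=0.1):
    """Hypothetical combined training objective: equation (3) plus a
    weighted MMD term. lam is the lambda of equation (3); mu is an
    assumed weight, not specified in the patent."""
    return softmax_loss + lam * center_loss + mu * mmd_loss

obj = total_objective(0.7, 0.035, 8.0)   # toy loss values
```

With such an objective, gradient descent simultaneously shrinks intra-class distance on the source domain and pulls the two domain distributions together.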
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (10)

1. A domain adaptation method based on representation learning and transfer learning, characterized by comprising the following steps:
step one: for the data of the source domain and the data of the target domain, extracting source-domain features and target-domain features through the convolution and pooling of a CNN model, wherein the source-domain data are the sensor data in the training set and the target-domain data are the sensor data in the test set;
step two: for the source-domain features and the real source-domain labels, obtaining separable source-domain features through a joint loss function combining Center Loss and Softmax Loss;
step three: reducing the distance between the distributions of the source-domain features processed in step two and the target-domain features through the MMD Loss.
2. The method of claim 1, wherein in the first step, the convolutional layer and the pooling layer in the deep learning network CNN are used to extract the data features of the source domain and the data features of the target domain, respectively.
3. The method of claim 1, wherein in step one, features are extracted from the source-domain data by convolution, each convolutional layer comprising 50 convolution kernels of size 5×1, the data then undergoing a pooling operation in which the pooling layer performs 3×1 max pooling, the source-domain data features being obtained after pooling; and features are extracted from the target-domain data by convolution, each convolutional layer comprising 50 convolution kernels of size 5×1, the data then undergoing a pooling operation in which the pooling layer performs 3×1 max pooling, the target-domain data features being obtained after pooling.
4. The method of claim 3, wherein in step two the Center Loss defines a center for each feature class, so that the CNN features of a given class are as close as possible to that class's center while the features of different classes stay far from it, as shown in equation (1), where L_C denotes the distance from each class's feature matrix to its class center, y_i denotes the class of sample i (i.e. i identifies the class), C_{y_i} denotes the center of class y_i, χ_i is the feature matrix of sample i, and m is the number of samples per gradient update of the training model:

L_C = (1/2) · Σ_{i=1}^{m} ‖χ_i − C_{y_i}‖₂²    (1)

according to equation (1), the gradient used when training the neural network is χ_i − C_{y_i}, as shown in equation (2):

∂L_C/∂χ_i = χ_i − C_{y_i}    (2)

and the joint loss function is established as shown in equation (3), where λ ∈ (0,1) adjusts the ratio of the Softmax Loss to the Center Loss, L_S denotes the Softmax Loss, and L_C denotes the Center Loss, i.e. the distance from each class's feature matrix to its class center:

L = L_S + λ·L_C    (3)
5. The method of claim 4, wherein the maximum mean discrepancy (MMD) is used in step three to measure the distribution difference between the source domain and the target domain, and the source-domain and target-domain features are drawn closer by defining an MMD Loss function.
6. The method of claim 5, wherein in step three the distance between the source-domain data and the target-domain data, i.e. the value of the MMD, is solved as:

MMD(p, q) = sup_{‖f‖_H ≤ 1} ( E_p[f(x)] − E_q[f(y)] )    (4)

where E_p[f(x)] denotes the expectation over the source-domain data and E_q[f(y)] the expectation over the target-domain data, f denotes a function in the reproducing kernel Hilbert space and the sup is taken over the set of such functions, p denotes the source domain, q the target domain, x the source-domain data, and y the target-domain data, the sup expressing that the data features of the two domains are mapped into the reproducing kernel Hilbert space.
7. The method of claim 6, wherein in step three an MMD Loss function L_MMD for reducing the distribution distance between the source-domain and target-domain features is further defined, as shown in equation (5): the MMD value obtained in equation (4) is squared and multiplied by a scaling coefficient (rendered only as an image in the original filing), here written λ_MMD:

L_MMD = λ_MMD · MMD²(p, q)    (5)

where MMD denotes the value obtained from equation (4).
8. The method of claim 7, wherein in step three the distribution distance between the source-domain and target-domain features is finally reduced by decreasing the value of the MMD Loss function.
9. The method of claim 3, wherein in step one the properties the convolutional layers rely on to extract features include local connectivity and weight sharing, a local receptive field being used in the convolutional neural network to scan the whole data matrix, with all nodes covered by the local region connected to a single node of the next layer.
10. The method of claim 3, wherein in step one Python is used as the development language and the CNN's convolution and pooling are built on top of the TensorFlow framework.
CN201911084862.6A 2019-11-08 2019-11-08 Domain adaptation method based on representation learning and transfer learning Pending CN110837865A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911084862.6A CN110837865A (en) 2019-11-08 2019-11-08 Domain adaptation method based on representation learning and transfer learning


Publications (1)

Publication Number Publication Date
CN110837865A true CN110837865A (en) 2020-02-25

Family

ID=69574640

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911084862.6A Pending CN110837865A (en) 2019-11-08 2019-11-08 Domain adaptation method based on representation learning and transfer learning

Country Status (1)

Country Link
CN (1) CN110837865A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018137357A1 (en) * 2017-01-24 2018-08-02 北京大学 Target detection performance optimization method
CN109583342A (en) * 2018-11-21 2019-04-05 重庆邮电大学 Human face in-vivo detection method based on transfer learning
CN110414383A (en) * 2019-07-11 2019-11-05 华中科技大学 Convolutional neural networks based on Wasserstein distance fight transfer learning method and its application

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李佳珍: "Research on daily activity recognition methods based on sensor data and deep learning" *
梁鹏; 黎绍发; 林智勇; 郝刚: "A deep neural network anomaly detection method with shared domain features" *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111401454A (en) * 2020-03-19 2020-07-10 创新奇智(重庆)科技有限公司 Few-sample target identification method based on transfer learning
CN112395986A (en) * 2020-11-17 2021-02-23 广州像素数据技术股份有限公司 Face recognition method for quickly migrating new scene and preventing forgetting
CN112395986B (en) * 2020-11-17 2024-04-26 广州像素数据技术股份有限公司 Face recognition method capable of quickly migrating new scene and preventing forgetting
CN112686333A (en) * 2021-01-19 2021-04-20 科润智能控制股份有限公司 Switch cabinet partial discharge mode identification method based on depth subdomain adaptive migration network
CN113033088A (en) * 2021-03-23 2021-06-25 浙江工业大学 Wind driven generator system fault diagnosis method based on depth subdomain adaptive migration network
CN112733970A (en) * 2021-03-31 2021-04-30 腾讯科技(深圳)有限公司 Image classification model processing method, image classification method and device
CN112733970B (en) * 2021-03-31 2021-06-18 腾讯科技(深圳)有限公司 Image classification model processing method, image classification method and device
CN115457311A (en) * 2022-08-23 2022-12-09 宁波大学 Hyperspectral remote sensing image band selection method based on self-expression transfer learning
CN115457311B (en) * 2022-08-23 2023-08-29 宁波大学 Hyperspectral remote sensing image band selection method based on self-expression transfer learning

Similar Documents

Publication Publication Date Title
CN110837865A (en) Domain adaptation method based on representation learning and transfer learning
CN111414942B (en) Remote sensing image classification method based on active learning and convolutional neural network
CN107092870B (en) A kind of high resolution image Semantic features extraction method
CN110909820B (en) Image classification method and system based on self-supervision learning
CN111126386B (en) Sequence domain adaptation method based on countermeasure learning in scene text recognition
CN106919951B (en) Weak supervision bilinear deep learning method based on click and vision fusion
CN106682696B (en) The more example detection networks and its training method refined based on online example classification device
CN109784392B (en) Hyperspectral image semi-supervised classification method based on comprehensive confidence
CN110717526A (en) Unsupervised transfer learning method based on graph convolution network
CN109492750B (en) Zero sample image classification method based on convolutional neural network and factor space
CN112308115B (en) Multi-label image deep learning classification method and equipment
CN110414616B (en) Remote sensing image dictionary learning and classifying method utilizing spatial relationship
Liang et al. Comparison detector for cervical cell/clumps detection in the limited data scenario
CN103177265B (en) High-definition image classification method based on kernel function Yu sparse coding
CN111126361B (en) SAR target identification method based on semi-supervised learning and feature constraint
CN111475622A (en) Text classification method, device, terminal and storage medium
CN113487576B (en) Insect pest image detection method based on channel attention mechanism
Su et al. LodgeNet: Improved rice lodging recognition using semantic segmentation of UAV high-resolution remote sensing images
CN114882521A (en) Unsupervised pedestrian re-identification method and unsupervised pedestrian re-identification device based on multi-branch network
Zhu et al. Identifying carrot appearance quality by an improved dense CapNet
CN115564996A (en) Hyperspectral remote sensing image classification method based on attention union network
CN115439715A (en) Semi-supervised few-sample image classification learning method and system based on anti-label learning
CN110647897B (en) Zero sample image classification and identification method based on multi-part attention mechanism
CN114818931A (en) Fruit image classification method based on small sample element learning
Lonij et al. Open-world visual recognition using knowledge graphs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination