CN112149556B - Face attribute identification method based on deep mutual learning and knowledge transfer - Google Patents

Face attribute identification method based on deep mutual learning and knowledge transfer

Info

Publication number
CN112149556B
CN112149556B
Authority
CN
China
Prior art keywords
net
attribute
network
face
attributes
Prior art date
Legal status
Active
Application number
CN202010999031.8A
Other languages
Chinese (zh)
Other versions
CN112149556A (en)
Inventor
张立言
姚树婧
Current Assignee
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202010999031.8A priority Critical patent/CN112149556B/en
Publication of CN112149556A publication Critical patent/CN112149556A/en
Application granted granted Critical
Publication of CN112149556B publication Critical patent/CN112149556B/en
Status: Active (granted)


Classifications

    • G06V40/161 Human faces: Detection; Localisation; Normalisation
    • G06F18/214 Pattern recognition: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/23 Pattern recognition: Clustering techniques
    • G06N3/045 Neural networks: Combinations of networks
    • G06N3/08 Neural networks: Learning methods
    • G06V40/172 Human faces: Classification, e.g. identification


Abstract

The invention discloses a face attribute identification method based on deep mutual learning and knowledge transfer, which comprises the following steps: preparing a face data set; randomly flipping each face image in the data set and feeding the results as the respective inputs of two models Net1 and Net2; constructing deep convolutional neural network migration models B_Net1 and B_Net2 to extract the features shared among multiple attributes; constructing an identical attribute prediction network S_Net for the easy-to-identify tasks in the two models, wherein each of the three attribute groups contains both easy-to-identify and hard-to-identify attributes; and constructing an identical attribute prediction network H_Net for the hard-to-identify tasks in the two models, wherein in each model the hard-task sub-network H_Net is connected, in parallel with S_Net, to B_Net. The method solves the problems of low prediction accuracy and weak model generalization in the prior art.

Description

Face attribute identification method based on deep mutual learning and knowledge transfer
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a face attribute identification method implemented with deep learning.
Background
Face attribute identification is an important research topic and can be applied to face recognition tasks such as face verification and face retrieval. Face images captured outside the laboratory are affected by factors such as illumination, pose and occlusion, and at the same time are closer to the face images encountered in real life, so face attribute identification in unconstrained scenes is of great significance. Analysing the attributes of a target face has practical value in many scenarios, such as targeted advertising and market research.
A face has dozens of attributes, such as hairstyle, gender and beard. Training and running a separate recognition network for each attribute ignores the inherent relations among the attributes and yields low prediction accuracy. Under the deep-learning paradigm, current face attribute recognition methods therefore generally use multi-task learning to predict multiple attributes simultaneously in groups, so as to exploit the connections among them, with the model built as a neural network. Common grouping schemes are based on the spatial location of an attribute, on semantics, or on attribute heterogeneity, but the prediction accuracy of some attributes remains low and model generalization remains weak.
Disclosure of Invention
The invention aims to provide a face attribute identification method based on deep mutual learning and knowledge transfer, so as to solve the problems of low prediction accuracy and weak model generalization in the prior art.
In order to achieve the above purpose, the invention adopts the following technical scheme:
A face attribute identification method based on deep mutual learning and knowledge transfer comprises the following steps:
Step 1: preparing a face data set, wherein the face data set comprises face images and the multi-attribute labels corresponding to the face images;
Step 2: randomly flipping each face image in the face data set and feeding the results as the respective inputs of two models Net1 and Net2, constructing deep convolutional neural network migration models B_Net1 and B_Net2, and extracting the features shared among multiple attributes;
Step 3: constructing an identical attribute prediction network S_Net for the easy-to-identify tasks in Net1 and Net2, wherein each of the three attribute groups contains both easy-to-identify and hard-to-identify attributes;
Step 4: constructing an identical attribute prediction network H_Net for the hard-to-identify tasks in Net1 and Net2; in each model, the hard-task sub-network H_Net is built and connected to B_Net in parallel with S_Net.
In step 1, face detection, face key point localization and face alignment are performed on each face image in the face data set.
In step 1, the CelebA dataset is selected as the face data set.
In step 2, B_Net1 and B_Net2 have identical structures, and the migration model adopts Google's Inception architecture; the network structure of B_Net1 and B_Net2 is as follows:
input->conv_1->conv_2->conv_3->pooling_1->conv_4->conv_5->conv_6->pooling_2->inception_1->inception_2->inception_3->pooling_3->fc->softmax
where conv_i (i = 1, 2, …, 6) denotes the i-th convolutional layer in the network and inception_j (j = 1, 2, 3) denotes the j-th inception module.
In step 3, an easy-task sub-network S_Net is constructed in each model and connected to B_Net; the S_Net network structure is as follows:
inception_3->conv_7_1->fc_1_1|fc_1_2|fc_1_3
where conv_7_1 denotes the 7th convolutional layer of S_Net and fc_1_j (j = 1, 2, 3) denotes the fully connected layer of S_Net for the j-th attribute group; in S_Net, conv_7_1 connects to the three fully connected layers fc_1_1, fc_1_2 and fc_1_3, which predict the attributes contained in the easy-to-identify tasks of the three attribute groups.
The objective function is constructed as follows; the final objective L_S is:
L_S = α·L_S1 + α·L_S2 + (1-2α)·L_S12
where N denotes the number of images, T the number of attribute groups and S_t the number of attributes in the easy-to-identify task of the t-th group; y_ik, ŷ_ik and p_ik denote respectively the true value, the predicted value and the predicted probability of the k-th attribute in the easy-to-identify task of the i-th image; subscripts 1 and 2 refer to Net1 and Net2, so that ŷ_ik,1 and ŷ_ik,2 denote the predictions of the k-th easy-to-identify attribute of the i-th image by Net1 and Net2 respectively; L_S12 is used to reduce the distance between the two networks' prediction distributions, and α balances the loss terms.
In step 4, the H_Net network structure is as follows:
inception_3->conv_7_2->fc_2_1|fc_2_2|fc_2_3
where conv_7_2 denotes the 7th convolutional layer of H_Net and fc_2_j (j = 1, 2, 3) denotes the fully connected layer of H_Net for the j-th attribute group; the parameters of H_Net are initialized with the parameters of the trained S_Net network, thereby realizing knowledge transfer from S_Net to H_Net.
The objective function is constructed as follows; the final objective L_H is:
L_H = α·L_H1 + α·L_H2 + (1-2α)·L_H12
where H_t is the number of attributes in the hard-to-identify task of group t and H is the total number of hard-to-identify attributes; ŷ_ik and p_ik denote respectively the predicted value and the predicted probability of the k-th attribute in the hard-to-identify task of the i-th face image; subscripts 1 and 2 refer to Net1 and Net2; the guidance term uses the matrix of predicted values of the easy-to-identify attributes in group t' that correspond to the k-th hard-to-identify attribute, weighted by the Pearson correlation coefficient matrix between those easy-to-identify attributes and the hard-to-identify attributes of the same group; a balance parameter weights the loss terms, which are combined through matrix multiplication.
The beneficial effects are that: prior work on face attribute identification concentrates on exploring the intrinsic relations among attributes, while the marked differences in recognition accuracy across attributes are rarely analysed, although this phenomenon deserves attention and can be exploited. Inspired by deep mutual learning and model distillation, the invention constructs two symmetric convolutional neural network models that learn interactively, so that each model not only receives the supervision of the ground-truth labels but also draws on the learning experience of the other model. Within each model the attributes are grouped, and each group is divided into an easy-to-identify task and a hard-to-identify task. The easy-to-identify task of each group is learned first and then guides the learning of the hard-to-identify task, which raises the prediction accuracy of the hard-task attributes. The easy-to-identify tasks and the hard-to-identify tasks of the two models each learn interactively, which improves the generalization ability of the models.
Detailed Description
The present invention will be further explained below.
The invention discloses a face attribute identification method based on deep mutual learning and knowledge transfer, which comprises the following steps:
Step 1: prepare a face data set comprising face images and the multi-attribute labels corresponding to the images, then perform face detection, face key point localization and face alignment on each image. This example selects the aligned version of CelebA as the face data set; it contains more than 200,000 face images, each annotated with the ground-truth values of 40 face attributes. Following the official protocol, the CelebA dataset is divided into a training set, a validation set and a test set containing 162,770, 19,867 and 19,962 face images respectively.
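By way of illustration only, the data preparation of this step can be sketched in Python with PyTorch. The use of torchvision's built-in CelebA loader, the 224x224 input size and the directory name are assumptions of this sketch rather than part of the claimed method; the aligned-and-cropped CelebA images already satisfy the face alignment requirement above.

    import torchvision.transforms as T
    from torchvision.datasets import CelebA

    transform = T.Compose([
        T.Resize((224, 224)),  # input resolution is an assumption of this sketch
        T.ToTensor(),
    ])

    # Official splits: 162,770 / 19,867 / 19,962 images, as stated above.
    train_set = CelebA(root="data", split="train", target_type="attr",
                       transform=transform, download=True)
    valid_set = CelebA(root="data", split="valid", target_type="attr", transform=transform)
    test_set = CelebA(root="data", split="test", target_type="attr", transform=transform)
    # Each target is a 40-dimensional 0/1 vector of attribute ground truths.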
To reduce the adverse effect of the imbalance between positive and negative attribute samples in the data set, 20 attributes are selected, corresponding to attribute numbers 1, 2, 3, 4, 7, 8, 9, 10, 12, 13, 20, 21, 22, 26, 28, 32, 33, 34, 35 and 37. In this example attribute correlation is defined by the Pearson correlation coefficient: the 20 attributes are clustered into three groups according to the Pearson correlation coefficients between them, the groups are sorted in descending order of the sum of their attribute correlation coefficients, and the attributes within each group are divided equally by count into an easy-to-identify task S and a hard-to-identify task H. The resulting division is: (1) 1H, 2H, 3S, 35H, 37S; (2) 4S, 7H, 8S, 9H, 10S, 12H, 13H, 28S, 33H, 34S; (3) 20S, 22H, 26H, 32S. The ratio of positive to negative samples of these attributes in the training set lies in the range 0.12-1.06, an imbalance smaller than that of the original data set; in prior state-of-the-art methods, the recognition accuracy of these attributes ranges from 78% to 99%, a large spread, which makes this data set suitable as the face attribute data set for this example.
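The grouping described above can be sketched as follows. The criterion (Pearson correlation between attribute labels, three groups) comes from this step, but the concrete clustering algorithm is not specified, so the agglomerative clustering below is an assumption of the sketch.

    import numpy as np
    from sklearn.cluster import AgglomerativeClustering

    def group_attributes(labels: np.ndarray, n_groups: int = 3):
        # labels: (N, 20) 0/1 matrix of the 20 selected attributes on the training set
        corr = np.corrcoef(labels.T)  # 20 x 20 Pearson correlation matrix
        dist = 1.0 - np.abs(corr)     # turn correlation into a distance
        groups = AgglomerativeClustering(n_clusters=n_groups, metric="precomputed",
                                         linkage="average").fit_predict(dist)
        return corr, groups

    # Each group is then sorted and split by attribute count into an easy task S
    # and a hard task H, yielding the division listed above.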
Step 2: randomly flip each face image in the data set and feed the results as the respective inputs of two models Net1 and Net2, construct deep convolutional neural network migration models B_Net1 and B_Net2, and extract the features shared among multiple attributes. B_Net1 and B_Net2 have identical structures; the migration model adopts Google's Inception architecture, with the following network structure:
input->conv_1->conv_2->conv_3->pooling_1->conv_4->conv_5->conv_6->pooling_2->inception_1->inception_2->inception_3->pooling_3->fc->softmax
where conv_i (i = 1, 2, …, 6) denotes the i-th convolutional layer in the network and inception_j (j = 1, 2, 3) denotes the j-th inception module. This example takes the part of the network up to inception_3 as B_Net for shared feature extraction; the output of the inception_3 module is the shared attribute feature to be extracted.
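A minimal PyTorch sketch of B_Net follows. The patent fixes only the layer order listed above; the channel widths, kernel sizes and the internal branch design of the inception stand-in are assumptions of this sketch.

    import torch
    import torch.nn as nn

    class Inception(nn.Module):
        # Stand-in for one inception module: parallel 1x1 / 3x3 / 5x5 / pool
        # branches in the spirit of Google's design; branch widths are assumptions.
        def __init__(self, in_ch, out_ch):
            super().__init__()
            b = out_ch // 4
            self.b1 = nn.Conv2d(in_ch, b, 1)
            self.b3 = nn.Sequential(nn.Conv2d(in_ch, b, 1), nn.Conv2d(b, b, 3, padding=1))
            self.b5 = nn.Sequential(nn.Conv2d(in_ch, b, 1), nn.Conv2d(b, b, 5, padding=2))
            self.bp = nn.Sequential(nn.MaxPool2d(3, 1, 1), nn.Conv2d(in_ch, b, 1))

        def forward(self, x):
            return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

    def make_b_net(c=64):
        # conv_1..conv_6 with pooling_1/pooling_2, then inception_1..inception_3,
        # in the layer order listed above (the head layers pooling_3/fc/softmax
        # belong to the pretraining network and are dropped for B_Net).
        return nn.Sequential(
            nn.Conv2d(3, c, 3, padding=1), nn.ReLU(True),          # conv_1
            nn.Conv2d(c, c, 3, padding=1), nn.ReLU(True),          # conv_2
            nn.Conv2d(c, c, 3, padding=1), nn.ReLU(True),          # conv_3
            nn.MaxPool2d(2),                                       # pooling_1
            nn.Conv2d(c, 2 * c, 3, padding=1), nn.ReLU(True),      # conv_4
            nn.Conv2d(2 * c, 2 * c, 3, padding=1), nn.ReLU(True),  # conv_5
            nn.Conv2d(2 * c, 2 * c, 3, padding=1), nn.ReLU(True),  # conv_6
            nn.MaxPool2d(2),                                       # pooling_2
            Inception(2 * c, 4 * c),                               # inception_1
            Inception(4 * c, 4 * c),                               # inception_2
            Inception(4 * c, 4 * c),                               # inception_3
        )

    # The output of inception_3 is the shared multi-attribute feature map.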
Step 3: construct an identical attribute prediction network S_Net for the easy-to-identify tasks in Net1 and Net2. Each of the three attribute groups contains both easy-to-identify and hard-to-identify attributes. In each model, an easy-task sub-network S_Net is built and connected to B_Net. The network structure is as follows:
inception_3->conv_7_1->fc_1_1|fc_1_2|fc_1_3
where conv_7_1 denotes the 7th convolutional layer of S_Net and fc_1_j (j = 1, 2, 3) denotes the fully connected layer of S_Net for the j-th attribute group. In S_Net, conv_7_1 connects to the three fully connected layers fc_1_1, fc_1_2 and fc_1_3, which predict the attributes contained in the easy-to-identify tasks of the three attribute groups.
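The S_Net head can be sketched with the template below, which also serves H_Net in step 4. The pooling after conv_7_1 and the channel width are assumptions of this sketch; the branch sizes (2, 5, 2) follow the easy-attribute counts of the three groups in this example.

    import torch
    import torch.nn as nn

    class AttrHead(nn.Module):
        # conv_7 followed by three parallel fully connected branches, one per
        # attribute group, matching the S_Net/H_Net structures above.
        def __init__(self, in_ch, group_sizes, conv_ch=256):
            super().__init__()
            self.conv7 = nn.Conv2d(in_ch, conv_ch, 3, padding=1)
            self.pool = nn.AdaptiveAvgPool2d(1)  # pooling choice is an assumption
            self.fcs = nn.ModuleList([nn.Linear(conv_ch, g) for g in group_sizes])

        def forward(self, feat):
            x = self.pool(torch.relu(self.conv7(feat))).flatten(1)
            return [fc(x) for fc in self.fcs]    # one logit vector per group

    s_net = AttrHead(in_ch=256, group_sizes=(2, 5, 2))  # easy attributes per group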
The objective function is constructed as follows; the final objective L_S is:
L_S = α·L_S1 + α·L_S2 + (1-2α)·L_S12
where N denotes the number of images, T the number of attribute groups and S_t the number of attributes in the easy-to-identify task of the t-th group; y_ik, ŷ_ik and p_ik denote respectively the true value, the predicted value and the predicted probability of the k-th attribute in the easy-to-identify task of the i-th image. Subscripts 1 and 2 refer to Net1 and Net2; for example, ŷ_ik,1 and ŷ_ik,2 denote the predictions of the k-th easy-to-identify attribute of the i-th image by Net1 and Net2 respectively, and likewise for the other symbols. L_S12 is used to reduce the distance between the two networks' prediction distributions, and α balances the loss terms.
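The combination L_S = α·L_S1 + α·L_S2 + (1-2α)·L_S12 can be sketched as below. The source prints the individual loss terms only as images, so the binary cross-entropy form of the supervised terms, the squared distance used for L_S12 and the value α = 0.4 are assumptions of this sketch.

    import torch
    import torch.nn.functional as F

    def mutual_easy_loss(logits1, logits2, targets, alpha=0.4):
        # targets: (N, S) 0/1 ground truth of the easy attributes, concatenated
        # over the three groups; logits1/logits2 come from Net1 and Net2.
        p1, p2 = torch.sigmoid(logits1), torch.sigmoid(logits2)
        l_s1 = F.binary_cross_entropy(p1, targets)  # supervised term of Net1
        l_s2 = F.binary_cross_entropy(p2, targets)  # supervised term of Net2
        l_s12 = F.mse_loss(p1, p2)                  # pulls the two predictions together
        return alpha * l_s1 + alpha * l_s2 + (1 - 2 * alpha) * l_s12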
Step 4: construct an identical attribute prediction network H_Net for the hard-to-identify tasks in Net1 and Net2. In each model, a hard-task sub-network H_Net is built and connected to B_Net in parallel with S_Net. The network structure is as follows:
inception_3->conv_7_2->fc_2_1|fc_2_2|fc_2_3
where conv_7_2 denotes the 7th convolutional layer of H_Net and fc_2_j (j = 1, 2, 3) denotes the fully connected layer of H_Net for the j-th attribute group. The parameters of H_Net are initialized with the parameters of the trained S_Net network, thereby realizing knowledge transfer from S_Net to H_Net.
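With the AttrHead template above, the knowledge transfer from S_Net to H_Net can be sketched as follows. How fully connected branches with mismatched easy/hard attribute counts are initialized is not specified in the source, so copying only shape-compatible branches is an assumption of this sketch.

    def transfer_s_to_h(s_net, h_net):
        # conv_7_2 of H_Net is initialised from the trained conv_7_1 of S_Net.
        h_net.conv7.load_state_dict(s_net.conv7.state_dict())
        # Branch weights are copied where a group's easy and hard attribute
        # counts coincide; other branches keep their fresh initialisation.
        for fc_s, fc_h in zip(s_net.fcs, h_net.fcs):
            if fc_s.weight.shape == fc_h.weight.shape:
                fc_h.load_state_dict(fc_s.state_dict())

    h_net = AttrHead(in_ch=256, group_sizes=(3, 5, 2))  # hard attributes per group
    transfer_s_to_h(s_net, h_net)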
The objective function is constructed as follows; the final objective L_H is:
L_H = α·L_H1 + α·L_H2 + (1-2α)·L_H12
where H_t is the number of attributes in the hard-to-identify task of group t and H is the total number of hard-to-identify attributes; ŷ_ik and p_ik denote respectively the predicted value and the predicted probability of the k-th attribute in the hard-to-identify task of the i-th face image; subscripts 1 and 2 refer to Net1 and Net2, with meanings analogous to those in step 3. The guidance term uses the matrix of predicted values of the easy-to-identify attributes in group t' that correspond to the k-th hard-to-identify attribute, weighted by the Pearson correlation coefficient matrix between those easy-to-identify attributes and the hard-to-identify attributes of the same group; a balance parameter weights the loss terms, which are combined through matrix multiplication.
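The correlation-guided part of the hard-task objective appears in the source only as images, so the sketch below is one plausible reading rather than the patented formula: the easy-task predictions, weighted by the Pearson matrix between a group's easy and hard attributes, serve as extra soft targets for the hard head. The squared-error form and the weight beta are assumptions of this sketch.

    import torch
    import torch.nn.functional as F

    def hard_task_loss(h_logits, s_probs, pearson_eh, targets, beta=0.5):
        # h_logits:   (N, H_t) hard-attribute logits of one group
        # s_probs:    (N, S_t) detached easy-attribute probabilities, same group
        # pearson_eh: (S_t, H_t) Pearson correlations between the group's easy
        #             and hard attributes, computed on the training labels
        p_h = torch.sigmoid(h_logits)
        soft = (s_probs @ pearson_eh).clamp(0, 1)  # correlation-weighted guidance
        l_sup = F.binary_cross_entropy(p_h, targets)
        l_corr = F.mse_loss(p_h, soft)
        return l_sup + beta * l_corr

    # The per-network hard losses are then combined with the mutual term as
    # L_H = α·L_H1 + α·L_H2 + (1-2α)·L_H12, exactly as for the easy tasks.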
In this embodiment, the S_Net sub-networks of Net1 and Net2 are first trained on the CelebA training and validation sets, the H_Net sub-networks of the two networks are then trained, and the test set is used to select whichever of Net1 and Net2 achieves the higher accuracy.
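The two-phase schedule just described can be sketched as follows, reusing the pieces above. The optimiser, learning rate and the helper names train_loader, random_flip and EASY_IDX are assumptions of this sketch.

    import torch

    # Phase 1: mutual training of the easy-task branches of Net1 and Net2.
    b1, b2 = make_b_net(), make_b_net()
    s1, s2 = AttrHead(256, (2, 5, 2)), AttrHead(256, (2, 5, 2))
    opt = torch.optim.Adam([*b1.parameters(), *s1.parameters(),
                            *b2.parameters(), *s2.parameters()], lr=1e-4)
    for imgs, attrs in train_loader:                   # hypothetical CelebA loader
        x1, x2 = random_flip(imgs), random_flip(imgs)  # independent random flips
        y = attrs[:, EASY_IDX].float()                 # easy-attribute ground truth
        loss = mutual_easy_loss(torch.cat(s1(b1(x1)), dim=1),
                                torch.cat(s2(b2(x2)), dim=1), y)
        opt.zero_grad(); loss.backward(); opt.step()

    # Phase 2: transfer the trained S_Net weights into the H_Net heads, train
    # them with the hard-task objective in the same mutual fashion, and keep
    # whichever of Net1/Net2 is more accurate on the test split.
    h1, h2 = AttrHead(256, (3, 5, 2)), AttrHead(256, (3, 5, 2))
    transfer_s_to_h(s1, h1)
    transfer_s_to_h(s2, h2)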
To address the large differences in recognition accuracy across the multiple attributes of a face image, the invention provides a knowledge transfer scheme: the weights of the easy-task prediction model are transferred to the hard-task prediction model, the hard-task model is trained further, and the prediction accuracy of the related hard-to-identify attributes is improved. To improve generalization, two symmetric face attribute networks are built; each face image undergoes a different data enhancement operation such as random flipping, the resulting images are fed into the two networks, and a three-term loss is constructed over the two models' predictions and the attribute ground truth. This reduces the distance between the two models' prediction distributions, so the networks learn interactively and generalization is further improved.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that those skilled in the art may make modifications and adaptations without departing from the principles of the invention, and such modifications and adaptations shall also be regarded as falling within the scope of the invention.

Claims (3)

1. A face attribute identification method based on deep mutual learning and knowledge transfer, characterized in that the method comprises the following steps:
Step 1: preparing a face data set, wherein the face data set comprises face images and the multi-attribute labels corresponding to the face images;
Step 2: randomly flipping each face image in the face data set and feeding the results as the respective inputs of two models Net1 and Net2, constructing deep convolutional neural network migration models B_Net1 and B_Net2, and extracting the features shared among multiple attributes;
in step 2, B_Net1 and B_Net2 have identical structures, and the migration model adopts Google's Inception architecture; the network structure of B_Net1 and B_Net2 is as follows:
input->conv_1->conv_2->conv_3->pooling_1->conv_4->conv_5->conv_6->pooling_2->inception_1->inception_2->inception_3->pooling_3->fc->softmax
where conv_i (i = 1, 2, …, 6) denotes the i-th convolutional layer in the network and inception_j (j = 1, 2, 3) denotes the j-th inception module;
Step 3: constructing an identical attribute prediction network S_Net for the easy-to-identify tasks in Net1 and Net2, wherein each of the three attribute groups contains both easy-to-identify and hard-to-identify attributes;
in step 3, an easy-task sub-network S_Net is constructed in each model and connected to B_Net; the S_Net network structure is as follows:
inception_3->conv_7_1->fc_1_1|fc_1_2|fc_1_3
where conv_7_1 denotes the 7th convolutional layer of S_Net and fc_1_j (j = 1, 2, 3) denotes the fully connected layer of S_Net for the j-th attribute group; in S_Net, conv_7_1 connects to the three fully connected layers fc_1_1, fc_1_2 and fc_1_3, which predict the attributes contained in the easy-to-identify tasks of the three attribute groups;
the objective function is constructed as follows; the final objective L_S is:
L_S = α·L_S1 + α·L_S2 + (1-2α)·L_S12
where N denotes the number of images, T the number of attribute groups and S_t the number of attributes in the easy-to-identify task of the t-th group; y_ik, ŷ_ik and p_ik denote respectively the true value, the predicted value and the predicted probability of the k-th attribute in the easy-to-identify task of the i-th image; subscripts 1 and 2 refer to Net1 and Net2, so that ŷ_ik,1 and ŷ_ik,2 denote the predictions of the k-th easy-to-identify attribute of the i-th image by Net1 and Net2 respectively; L_S12 is used to reduce the distance between the two networks' prediction distributions, and α balances the loss terms;
Step 4: constructing an identical attribute prediction network H_Net for the hard-to-identify tasks in Net1 and Net2, wherein in each model a hard-task sub-network H_Net is built and connected, in parallel with S_Net, to B_Net;
in step 4, the H_Net network structure is as follows:
inception_3->conv_7_2->fc_2_1|fc_2_2|fc_2_3
where conv_7_2 denotes the 7th convolutional layer of H_Net and fc_2_j (j = 1, 2, 3) denotes the fully connected layer of H_Net for the j-th attribute group; the parameters of H_Net are initialized with the parameters of the trained S_Net network, thereby realizing knowledge transfer from S_Net to H_Net;
the objective function is constructed as follows; the final objective L_H is:
L_H = α·L_H1 + α·L_H2 + (1-2α)·L_H12
where H_t is the number of attributes in the hard-to-identify task of group t and H is the total number of hard-to-identify attributes; ŷ_ik and p_ik denote respectively the predicted value and the predicted probability of the k-th attribute in the hard-to-identify task of the i-th face image, and subscripts 1 and 2 refer to Net1 and Net2; the guidance term uses the matrix of predicted values of the easy-to-identify attributes in group t' that correspond to the k-th hard-to-identify attribute, weighted by the Pearson correlation coefficient matrix between those easy-to-identify attributes and the hard-to-identify attributes of the same group; a balance parameter weights the loss terms, which are combined through matrix multiplication.
2. The face attribute identification method based on deep mutual learning and knowledge transfer according to claim 1, characterized in that in step 1, face detection, face key point localization and face alignment are performed on each face image in the face data set.
3. The face attribute identification method based on deep mutual learning and knowledge transfer according to claim 1, characterized in that in step 1, the CelebA dataset is selected as the face data set.
CN202010999031.8A 2020-09-22 2020-09-22 Face attribute identification method based on deep mutual learning and knowledge transfer Active CN112149556B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010999031.8A CN112149556B (en) 2020-09-22 2020-09-22 Face attribute identification method based on deep mutual learning and knowledge transfer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010999031.8A CN112149556B (en) 2020-09-22 2020-09-22 Face attribute identification method based on deep mutual learning and knowledge transfer

Publications (2)

Publication Number Publication Date
CN112149556A CN112149556A (en) 2020-12-29
CN112149556B true CN112149556B (en) 2024-05-03

Family

ID=73892570

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010999031.8A Active CN112149556B (en) 2020-09-22 2020-09-22 Face attribute identification method based on deep mutual learning and knowledge transfer

Country Status (1)

Country Link
CN (1) CN112149556B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113240113B (en) * 2021-06-04 2024-05-28 北京富通东方科技有限公司 Method for enhancing network prediction robustness
CN114821658B (en) * 2022-05-11 2024-05-14 平安科技(深圳)有限公司 Face recognition method, operation control device, electronic equipment and storage medium


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106203395A (en) * 2016-07-26 2016-12-07 厦门大学 Face character recognition methods based on the study of the multitask degree of depth
CN109325443A (en) * 2018-09-19 2019-02-12 南京航空航天大学 A kind of face character recognition methods based on the study of more example multi-tag depth migrations
CN110119689A (en) * 2019-04-18 2019-08-13 五邑大学 A kind of face beauty prediction technique based on multitask transfer learning

Also Published As

Publication number Publication date
CN112149556A (en) 2020-12-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant