CN112800959B - Difficult sample mining method for data fitting estimation in face recognition - Google Patents

Difficult sample mining method for data fitting estimation in face recognition

Info

Publication number
CN112800959B
CN112800959B CN202110117852.9A
Authority
CN
China
Prior art keywords
similarity
samples
training
sample
feature extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110117852.9A
Other languages
Chinese (zh)
Other versions
CN112800959A (en)
Inventor
田联房
孙峥峥
杜启亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Zhuhai Institute of Modern Industrial Innovation of South China University of Technology
Original Assignee
South China University of Technology SCUT
Zhuhai Institute of Modern Industrial Innovation of South China University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT, Zhuhai Institute of Modern Industrial Innovation of South China University of Technology filed Critical South China University of Technology SCUT
Priority to CN202110117852.9A priority Critical patent/CN112800959B/en
Publication of CN112800959A publication Critical patent/CN112800959A/en
Application granted granted Critical
Publication of CN112800959B publication Critical patent/CN112800959B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a difficult sample mining method for data fitting estimation in face recognition, which comprises the following steps: 1) preparing a batch of face image samples and their corresponding labels, and inputting the face image samples into a feature extraction model to extract face features; 2) inputting the extracted face features into a class center weight layer and normalizing its output to obtain a similarity matrix; 3) constructing a sample weight modulator and re-assigning weights to the similarity matrix through the modulator; 4) inputting the re-weighted similarity matrix into a loss layer and calculating the loss value of the face image samples in the batch; 5) updating the parameters of the feature extraction model according to the loss value and verifying the performance of the model; training terminates if the model meets the target, otherwise steps 1) to 5) are repeated. The invention suppresses difficult samples in the early training stage and emphasizes them in the later training stage, thereby accelerating model convergence and improving the learning efficiency of the model late in training.

Description

Difficult sample mining method for data fitting estimation in face recognition
Technical Field
The invention relates to the technical field of face recognition, in particular to a difficult sample mining method for data fitting estimation in face recognition.
Background
Face recognition is a biometric identification technology that takes a face image as the identification object. The face is the most direct biological feature of the human body, and because it is difficult to forge and remains stable over time it is a common choice among biometric identification methods. Compared with recognition based on fingerprints, irises, voice, finger veins and the like, face recognition has the advantages of being non-intrusive, unobtrusive and direct. Face recognition technology is therefore widely applied in finance, security inspection, video monitoring, human-computer interaction, electronic commerce, public security systems and other fields, and has broad application prospects in 5G and Internet of Things scenarios.
Face feature extraction is a critical link in face recognition. Among the many existing approaches, one is based on deep learning: an existing face database is used to design and train a deep convolutional neural network model (hereinafter, the model) for extracting face features. However, training such a model on large-scale data requires a great deal of time and does not necessarily guarantee convergence. In addition, the learning efficiency of the model drops in the later stage of training: lacking samples that are rich in information and value, the model struggles to learn effective features, so its performance improves slowly and tends to saturate. A common remedy is difficult (hard) sample mining, i.e. mining valuable difficult samples and feeding them to the model for training. However, current difficult sample mining techniques ignore both how well the model currently fits the training data and the difficulty of the difficult samples themselves. As a result, the mined difficult samples cannot be assigned accurate weights at different training stages, and the model may even fail to converge. The problem of overly long training time also remains.
In view of the above discussion, the invention discloses a difficult sample mining method for data fitting estimation in face recognition, which has high practical application value.
Disclosure of Invention
The invention aims to overcome the defects and shortcomings of the prior art, and provides a difficult sample mining method for data fitting estimation in face recognition. The method mines difficult samples by estimating, in real time during training, how well the current model fits the data, and uses the estimated fitting degree to re-weight the difficult samples, so that the model focuses on simple samples early in training and on difficult samples later in training. This not only helps the model converge quickly, but also helps it effectively learn the complex information and features contained in difficult samples, further improving model performance.
In order to achieve the above purpose, the technical scheme provided by the invention is as follows: a difficult sample mining method for data fitting estimation in face recognition comprises the following steps:
1) Preparing a batch of face image samples and corresponding labels, and inputting the face image samples into a feature extraction model to extract face features, the feature extraction model converting each input face image into a feature vector of a pre-designed, fixed dimension; wherein a training data set and a verification data set are prepared for training the feature extraction model and verifying its performance, respectively;
2) Inputting the face features extracted in the step 1) into a class center weight layer, and normalizing the output of the class center weight layer to obtain a similarity matrix;
3) Constructing a sample weight modulator, and re-assigning weights to the similarity matrix of step 2) through the modulator;
4) Inputting the re-weighted similarity matrix of step 3) into a loss layer, and calculating the loss value of the face image samples in the batch;
5) Updating the parameters of the feature extraction model according to the loss value of step 4), verifying the performance of the feature extraction model, and terminating training if the model meets the target; if not, repeating steps 1) to 5).
In step 1), the preparation process of the face image sample comprises face detection and alignment, image enhancement, histogram equalization and normalization of image size and pixel value.
In step 2), the class center weight layer is a learnable fully-connected network layer; its input is the face features output in step 1), and its output is a normalized similarity matrix cos θ, which contains the similarity between each input sample and every class in the training data set.
In step 3), a sample weight modulator is constructed and the modulator re-weights the similarity matrix in step 2), comprising the following two steps:
31) Inputting the similarity matrix and the labels into a data fitting degree estimator to obtain the fitting degree index of the feature extraction model with respect to the training data set at the current training stage, wherein k is the current iteration number;
the data processing of the data fitting degree estimator comprises the following three steps, with preset hyper-parameters: a moving-average coefficient α, a difficult sample rate weight coefficient μ, a difficult sample rate threshold ε, the number of samples n in the batch, and the total number of classes N in the training data;
311) Calculating the average positive similarity $t^{(k)}$: first, calculating the average of the positive similarities of the samples of the batch, $r^{(k)} = \frac{1}{n}\sum_{i=1}^{n}\cos\theta_{y_i}$, wherein $\cos\theta_{y_i}$ is the positive similarity of the i-th input sample; then calculating the positive similarity after the moving average, $t^{(k)} = (1-\alpha)\,r^{(k)} + \alpha\,t^{(k-1)}$, wherein the initial value is $t^{(0)} = 0$;
312) Calculating the difficult sample rate $h^{(k)}$: firstly, sequentially judging the input samples of the batch according to the decision function and counting the number of difficult samples among them, count(hard samples); then calculating the raw difficult sample rate of the batch, $\hat{h}^{(k)} = \frac{\mathrm{count(hard\ samples)}}{n}$;
next, calculating the difficult sample rate $h^{(k)}$ after the moving average from $\hat{h}^{(k)}$, wherein the initial value is $h^{(0)} = 0$ and $I_k$ is an indicator function used in the update;
313) Calculating the data fitting degree index of the current iteration by combining the moving-averaged positive similarity $t^{(k)}$ with the moving-averaged difficult sample rate $h^{(k)}$ weighted by the coefficient μ;
32) Re-assigning a weight to each similarity element in the similarity matrix, as follows:
321) Judging from the input label whether the similarity element is a positive similarity: if yes, replacing the positive similarity element $\cos\theta_{y_i}$ by $T(\cos\theta_{y_i})$, wherein $\cos\theta_{y_i}$ is the similarity between input sample i and the class center vector corresponding to its label $y_i$, namely the positive similarity, and T(·) is a preset positive similarity weighting function; if not, going to step 322);
322) The objects processed in this step are the similarities $\cos\theta_j$ routed from step 321), wherein $\cos\theta_j$ denotes the similarity between input sample i and the class center vector of a non-label class $j \neq y_i$, namely a negative similarity; then judging with the decision function whether the negative similarity belongs to a difficult sample: if not, keeping $\cos\theta_j$ unchanged; if yes, going to step 323); wherein the decision function classifies a negative similarity as difficult when it exceeds the weighted positive similarity, namely when $\cos\theta_j > T(\cos\theta_{y_i})$;
323) For the difficult negative similarity $\cos\theta_j$ passed from step 322), first calculating its degree of difficulty from $\cos\theta_j$ and the replaced positive similarity $T(\cos\theta_{y_i})$ of step 321); then, combining the data fitting degree index generated in step 31), replacing the original $\cos\theta_j$ by $G(\cos\theta_j)$, wherein G(·) is the difficult negative similarity weighting function built from the degree of difficulty and the data fitting degree index.
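For reference, the following Python sketch shows one way steps 311)–313) could be implemented. Only the moving-average update of $t^{(k)}$ follows a formula stated above; the hard-sample test (which uses the embodiment's angular-margin weighting cos(θ + m)), the ε-gated smoothing of the difficult sample rate and the linear combination t + μ·h are illustrative assumptions, since the patent's exact expressions for these parts are not reproduced in the source text.

```python
import torch

class DataFittingDegreeEstimator:
    """Sketch of the data fitting degree estimator (steps 311-313).

    Only the update of t^(k) follows a formula stated in the text; the
    hard-sample test, the epsilon gate and the final combination are
    assumptions made for illustration.
    """

    def __init__(self, alpha=0.01, mu=0.5, eps=0.9, margin=0.5):
        self.alpha = alpha    # moving-average coefficient
        self.mu = mu          # difficult sample rate weight coefficient
        self.eps = eps        # difficult sample rate threshold
        self.margin = margin  # angular margin used by T(cos) = cos(theta + m)
        self.t = 0.0          # moving-averaged positive similarity, t^(0) = 0
        self.h = 0.0          # moving-averaged difficult sample rate, h^(0) = 0

    def update(self, cos_theta, labels):
        """cos_theta: (n, N) similarity matrix; labels: (n,) class indices."""
        n = cos_theta.size(0)
        idx = torch.arange(n, device=cos_theta.device)
        pos = cos_theta[idx, labels].clamp(-1.0, 1.0)        # cos(theta_{y_i})

        # 311) batch average of positive similarities, then the moving average
        r = pos.mean().item()
        self.t = (1 - self.alpha) * r + self.alpha * self.t  # t^(k) = (1-a) r^(k) + a t^(k-1)

        # 312) a sample counts as difficult (misclassified) if some negative
        # similarity exceeds the weighted positive similarity T(cos) = cos(theta + m)
        t_pos = torch.cos(torch.acos(pos) + self.margin)
        neg = cos_theta.clone()
        neg[idx, labels] = -2.0                              # mask out the positive column
        hard = (neg.max(dim=1).values > t_pos).float()
        h_raw = hard.mean().item()                           # raw difficult sample rate

        # assumed smoothing: update only while the raw rate stays below epsilon
        if h_raw <= self.eps:
            self.h = (1 - self.alpha) * h_raw + self.alpha * self.h

        # 313) assumed combination of the two statistics into the fitting index
        return self.t + self.mu * self.h
```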
In step 4), the similarity matrix re-weighted in step 3) is used to calculate the loss value of the samples of the batch according to a predetermined loss function.
In step 5), according to the loss value calculated in step 4) and a preset learning rate, the parameters of the feature extraction model and the class center weight layer are updated using a gradient descent algorithm; the accuracy of the feature extraction model on the verification data set is then calculated, and whether the model meets the target is judged against a preset model performance index; if it does not, steps 1) to 5) are repeated; if it does, training is terminated.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. A data fitting degree estimator is designed which estimates, during training and from both macroscopic and microscopic perspectives, how well the model fits the training set data, so that difficult samples are given different weights at different training stages. The model can therefore converge quickly early in training and focus on learning the features of difficult samples later in training, improving both training efficiency and model accuracy.
2. The weight given to a difficult sample depends not only on how well the model fits the data, but also on the difficulty of the sample itself. The weighting function provided by the invention performs a linear weighting of the difficult samples: the later the training stage and the more difficult the sample, the higher its importance and the larger its weight. Target samples can thus be accurately emphasized or suppressed, and the model can effectively learn more complex features.
3. Difficult samples are defined as misclassified samples and can be quickly identified with the decision function. This judgment consumes little computation and little time, so online difficult sample mining can be performed without affecting the training speed.
Drawings
FIG. 1 is an overall training flow chart of the method of the present invention.
Fig. 2 is a block diagram of a sample weight modulator.
Fig. 3 is a block diagram of the data fitting degree estimator.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but embodiments of the present invention are not limited thereto.
As shown in fig. 1, the difficult sample mining method for data fitting estimation in face recognition provided in this example includes the following steps:
1) A batch of face image samples is prepared: a specified number of images (for example, 64) are randomly selected from the training data set MS-Celeb-1M (which contains 5,822,653 images from 85,742 different classes), and face image samples that can be fed directly into the feature extraction model are obtained through image preprocessing and normalization. Image preprocessing may include face detection and alignment, image enhancement, histogram equalization and similar steps. Normalization includes image size normalization and pixel value normalization: the image size is normalized to the model input size of 112×112×3 (length, width and number of color channels of the input picture, respectively), and pixel value normalization converts the 0-255 pixel values of the input digital image to floating-point numbers.
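As an illustration of the normalization stage only, the snippet below resizes an already detected and aligned face crop to the 112×112×3 input size and converts its 0-255 pixel values to floating point; the specific scaling to roughly [-1, 1] is an assumed convention, not one prescribed by the description.

```python
import cv2
import numpy as np

def normalize_face(aligned_face_bgr):
    """Size and pixel-value normalization of an aligned face crop (a sketch)."""
    img = cv2.resize(aligned_face_bgr, (112, 112))   # image size normalization
    img = img.astype(np.float32)                     # 0-255 integers -> floating point
    img = (img - 127.5) / 128.0                      # assumed scaling to roughly [-1, 1]
    return img                                       # shape (112, 112, 3)
```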
The model for extracting face features can be selected as the deep convolutional neural network ResNet100, whose input is a tensor of shape 64×112×112×3 and whose output feature x is a tensor of shape 64×512, where 512 is the pre-designed feature dimension.
2) And (3) inputting the face features extracted in the step (1) into a class center weight layer, and normalizing the output of the class center weight layer to obtain a similarity matrix.
The class center weight layer is a fully-connected layer whose learnable parameter is the weight W, a matrix of shape 512×85742 (the input feature dimension and the total number of classes in the training data set, respectively). The input feature x and the weight W undergo a matrix operation and normalization to give the output similarity matrix:

$$\cos\theta = \frac{x}{\|x\|} \cdot \frac{W}{\|W\|}$$

where ‖·‖ denotes the modulus (length) of a vector, applied to each feature vector and each class center vector. The resulting cos θ has shape 64×85742; each row corresponds to one sample, and each element in that row is the similarity between the sample and the class center weight vector of the corresponding column.
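The matrix operation and normalization above can be written compactly in PyTorch as below; this is only a sketch of the similarity computation, assuming a feature tensor x of shape 64×512 and a weight matrix W of shape 512×85742 as in the embodiment.

```python
import torch
import torch.nn.functional as F

def class_center_similarity(x, W):
    """cos(theta) = (x / ||x||) . (W / ||W||), computed for every sample and class."""
    x_n = F.normalize(x, dim=1)   # (batch, 512): unit-length face features
    W_n = F.normalize(W, dim=0)   # (512, classes): unit-length class center vectors
    return x_n @ W_n              # (batch, classes) similarity matrix cos(theta)

# shapes used in the embodiment: 64 samples, 512-d features, 85742 classes
x = torch.randn(64, 512)
W = torch.randn(512, 85742)
cos_theta = class_center_similarity(x, W)   # shape (64, 85742)
```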
3) As shown in fig. 2, the process of constructing a sample weight modulator and re-weighting the similarity matrix in step 2) by the modulator includes the following two steps:
31) The similarity matrix and the labels are input into the data fitting degree estimator to obtain the fitting degree index of the feature extraction model with respect to the training data set at the current training stage, where k is the current iteration number.
As shown in fig. 3, the data processing of the data fitting degree estimator comprises the following three steps. The preset hyper-parameters are: moving-average coefficient α = 0.01, difficult sample rate weight coefficient μ = 0.5, difficult sample rate threshold ε = 0.9, number of samples in the batch n = 64, and total number of classes in the training data N = 85742.
311) Calculate the average positive similarity $t^{(k)}$. First, calculate the average of the positive similarities of the samples of the batch, $r^{(k)} = \frac{1}{n}\sum_{i=1}^{n}\cos\theta_{y_i}$, where $\cos\theta_{y_i}$ is the positive similarity of the i-th input sample; then calculate the positive similarity after the moving average, $t^{(k)} = (1-\alpha)\,r^{(k)} + \alpha\,t^{(k-1)}$, with initial value $t^{(0)} = 0$.
312) Calculate the difficult sample rate $h^{(k)}$. First, the input samples of the batch are sequentially judged according to the decision function (given in step 322)), and the number of difficult samples, count(hard samples), is counted. Then the raw difficult sample rate of the batch is calculated as $\hat{h}^{(k)} = \frac{\mathrm{count(hard\ samples)}}{n}$.
Then the difficult sample rate $h^{(k)}$ after the moving average is calculated from $\hat{h}^{(k)}$, with initial value $h^{(0)} = 0$, where $I_k$ is an indicator function used in the update.
313) Calculate the data fitting degree index of the current iteration by combining the moving-averaged positive similarity $t^{(k)}$ with the moving-averaged difficult sample rate $h^{(k)}$ weighted by the coefficient μ.
32) Re-assign a weight to each similarity element in the similarity matrix, as follows:
321) Judge from the input label whether the similarity element is a positive similarity: if yes, replace the positive similarity element $\cos\theta_{y_i}$ by $T(\cos\theta_{y_i})$, where $\cos\theta_{y_i}$ is the similarity between input sample i and the class center vector corresponding to its label $y_i$, i.e. the positive similarity, and the positive similarity weighting function is taken as $T(\cos\theta_{y_i}) = \cos(\theta_{y_i} + m)$, where m = 0.5 is an angular margin hyper-parameter. If not, go to step 322).
322) The objects processed in this step are the similarities $\cos\theta_j$ routed from step 321), where $\cos\theta_j$ denotes the similarity between input sample i and the class center vector of a non-label class $j \neq y_i$, i.e. a negative similarity. The decision function then judges whether the negative similarity belongs to a difficult sample: if not, keep $\cos\theta_j$ unchanged; if yes, go to step 323). The decision function classifies a negative similarity as difficult when it exceeds the weighted positive similarity, i.e. when $\cos\theta_j > T(\cos\theta_{y_i})$.
323) For the difficult negative similarity $\cos\theta_j$ passed from step 322), first calculate its degree of difficulty from $\cos\theta_j$ and the replaced positive similarity $T(\cos\theta_{y_i})$ of step 321). Then, combining the data fitting degree index generated in step 31), replace the original $\cos\theta_j$ by $G(\cos\theta_j)$, where G(·) is the difficult negative similarity weighting function built from the degree of difficulty and the data fitting degree index.
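A minimal sketch of the re-weighting in steps 321)–323) is given below. The positive weighting T(cos θ) = cos(θ + m) follows the angular-margin setting of this embodiment; the exact form of the difficult negative weighting G(·) is not reproduced in the source text, so a CurricularFace-style modulation, (fit_index + cos θ_j)·cos θ_j with the data fitting degree index in place of the usual parameter, is used purely as an illustrative stand-in.

```python
import torch

def modulate_similarity(cos_theta, labels, fit_index, margin=0.5):
    """Re-weight a similarity matrix as in steps 321)-323) (a sketch).

    cos_theta : (n, N) similarities of each sample to every class center
    labels    : (n,) ground-truth class indices
    fit_index : scalar data fitting degree index from the estimator
    """
    n = cos_theta.size(0)
    idx = torch.arange(n, device=cos_theta.device)
    out = cos_theta.clone()

    # 321) replace each positive similarity by T(cos) = cos(theta + m)
    pos = cos_theta[idx, labels].clamp(-1.0, 1.0)
    t_pos = torch.cos(torch.acos(pos) + margin)
    out[idx, labels] = t_pos

    # 322) a negative similarity is "difficult" when it exceeds T(cos)
    neg_mask = torch.ones_like(cos_theta, dtype=torch.bool)
    neg_mask[idx, labels] = False
    hard_mask = neg_mask & (cos_theta > t_pos.unsqueeze(1))

    # 323) assumed G(.): scale hard negatives by their own difficulty and by
    # the fitting index, so they are suppressed early and emphasised late
    hard = cos_theta[hard_mask]
    out[hard_mask] = (fit_index + hard) * hard

    return out
```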
4) The similarity matrix re-weighted in step 3) is fed to the loss layer, and the loss value of the samples of the batch is calculated with the cross-entropy loss:

$$\mathcal{L} = -\frac{1}{n}\sum_{i=1}^{n}\log\frac{e^{\,s\cos\theta_{y_i}}}{e^{\,s\cos\theta_{y_i}} + \sum_{j\neq y_i} e^{\,s\cos\theta_{j}}}$$

where the similarities are those re-weighted in step 3), s = 64 is a scaling factor, i is the sample index and j is the class index.
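The scaled cross-entropy just described (scale s = 64 applied to the re-weighted similarities) can be sketched as follows; this is the standard scaled-softmax form consistent with the surrounding text, not a verbatim reproduction of the patent's formula image.

```python
import torch
import torch.nn.functional as F

def scaled_softmax_loss(cos_theta_mod, labels, s=64.0):
    """Cross-entropy over s-scaled, re-weighted similarities (a sketch).

    cos_theta_mod : (n, N) similarity matrix after the sample weight modulator
    labels        : (n,) ground-truth class indices
    """
    return F.cross_entropy(s * cos_theta_mod, labels)   # mean loss over the batch
```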
5) The initial learning rate is set to η = 0.1, and the learning rate is decayed by a factor of 0.1 at iteration steps 100000, 160000 and 220000. According to the loss value calculated in step 4) and the current learning rate, the parameters of the feature extraction model and the class center weight layer are updated automatically with stochastic gradient descent (SGD) using an existing deep learning framework (such as TensorFlow, PyTorch or MXNet). Then the accuracy of the model on the verification data sets LFW, AgeDB and CFP-FP is calculated; the target performance indexes of the model on the three verification sets are set to 99%, 98% and 95%, respectively. Finally, whether the performance of the feature extraction model at the current training stage meets the targets is judged: if not, steps 1) to 5) are repeated; if so, training is terminated.
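A condensed sketch of the update and verification logic of step 5), reusing the helper sketches above; the backbone, data loader, estimator instance and verification check are placeholders assumed to be supplied by the caller, and W is assumed to be the class center weight parameter (e.g. an nn.Parameter of shape 512×85742).

```python
import torch

def train_until_target(backbone, W, train_loader, estimator, meets_targets):
    """Step 5) sketch: SGD updates with milestone learning-rate decay,
    stopping once the LFW / AgeDB / CFP-FP targets are met."""
    optimizer = torch.optim.SGD(list(backbone.parameters()) + [W], lr=0.1)
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[100000, 160000, 220000], gamma=0.1)

    for images, labels in train_loader:
        cos_theta = class_center_similarity(backbone(images), W)   # step 2)
        fit_index = estimator.update(cos_theta, labels)            # step 31)
        cos_mod = modulate_similarity(cos_theta, labels, fit_index)
        loss = scaled_softmax_loss(cos_mod, labels, s=64.0)        # step 4)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()   # decays the learning rate by 0.1 at the listed steps

        if meets_targets(backbone):   # 99% / 98% / 95% on the three verification sets
            break
```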
The above examples are preferred embodiments of the invention, but the embodiments of the invention are not limited to them; any other change, modification, substitution, combination or simplification made without departing from the spirit and principle of the invention is an equivalent replacement and falls within the protection scope of the invention.

Claims (5)

1. A difficult sample mining method for data fitting estimation in face recognition, characterized by comprising the following steps:
1) Preparing a batch of face image samples and corresponding labels, inputting the face image samples into a feature extraction model to extract face features, and converting the input face images into feature vectors with fixed dimensions, which are designed in advance, through the feature extraction model; wherein, a training data set and a verification data set are required to be prepared for training the feature extraction model and verifying the feature extraction model performance respectively;
2) Inputting the face features extracted in the step 1) into a class center weight layer, and normalizing the output of the class center weight layer to obtain a similarity matrix;
3) Constructing a sample weight modulator, and re-giving weight to the similarity matrix in the step 2) through the modulator;
constructing a sample weight modulator and re-weighting the similarity matrix in the step 2) by the modulator, wherein the method comprises the following two steps:
31) inputting the similarity matrix and the labels into a data fitting degree estimator to obtain the fitting degree index of the feature extraction model with respect to the training data set at the current training stage, wherein k is the current iteration number;
the data processing of the data fitting degree estimator comprises the following three steps, with preset hyper-parameters: a moving-average coefficient α, a difficult sample rate weight coefficient μ, a difficult sample rate threshold ε, the number of samples n in the batch, and the total number of classes N in the training data;
311) calculating the average positive similarity $t^{(k)}$: first, calculating the average of the positive similarities of the samples of the batch, $r^{(k)} = \frac{1}{n}\sum_{i=1}^{n}\cos\theta_{y_i}$, wherein $\cos\theta_{y_i}$ is the positive similarity of the i-th input sample; then calculating the positive similarity after the moving average, $t^{(k)} = (1-\alpha)\,r^{(k)} + \alpha\,t^{(k-1)}$, wherein the initial value is $t^{(0)} = 0$;
312) calculating the difficult sample rate $h^{(k)}$: firstly, sequentially judging the input samples of the batch according to the decision function and counting the number of difficult samples among them, count(hard samples); then calculating the raw difficult sample rate of the batch, $\hat{h}^{(k)} = \frac{\mathrm{count(hard\ samples)}}{n}$;
then calculating the difficult sample rate $h^{(k)}$ after the moving average from $\hat{h}^{(k)}$, wherein the initial value is $h^{(0)} = 0$ and $I_k$ is an indicator function used in the update;
313) calculating the data fitting degree index of the current iteration by combining the moving-averaged positive similarity $t^{(k)}$ with the moving-averaged difficult sample rate $h^{(k)}$ weighted by the coefficient μ;
32) re-assigning a weight to each similarity element in the similarity matrix, as follows:
321) judging from the input label whether the similarity element is a positive similarity: if yes, replacing the positive similarity element $\cos\theta_{y_i}$ by $T(\cos\theta_{y_i})$, wherein $\cos\theta_{y_i}$ is the similarity between input sample i and the class center vector corresponding to its label $y_i$, namely the positive similarity, and T(·) is a preset positive similarity weighting function; if not, going to step 322);
322) the objects processed in this step are the similarities $\cos\theta_j$ routed from step 321), wherein $\cos\theta_j$ denotes the similarity between input sample i and the class center vector of a non-label class $j \neq y_i$, namely a negative similarity; then judging with the decision function whether the negative similarity belongs to a difficult sample: if not, keeping $\cos\theta_j$ unchanged; if yes, going to step 323); wherein the decision function classifies a negative similarity as difficult when it exceeds the weighted positive similarity, namely when $\cos\theta_j > T(\cos\theta_{y_i})$;
323) for the difficult negative similarity $\cos\theta_j$ passed from step 322), first calculating its degree of difficulty from $\cos\theta_j$ and the replaced positive similarity $T(\cos\theta_{y_i})$ of step 321); then, combining the data fitting degree index generated in step 31), replacing the original $\cos\theta_j$ by $G(\cos\theta_j)$, wherein G(·) is the difficult negative similarity weighting function built from the degree of difficulty and the data fitting degree index;
4) Inputting the similarity matrix re-assigned with the weight in the step 3) into a loss layer, and calculating the loss value of the face image samples in the batch;
5) Updating the parameters of the feature extraction model according to the loss value in the step 4), verifying the performance of the feature extraction model, and terminating training if the feature extraction model meets the standard; if the standard is not met, repeating the steps 1) to 5).
2. A difficult sample mining method for data fit estimation in face recognition according to claim 1, wherein in step 1), the preparation process for face image samples includes face detection and alignment, image enhancement, histogram equalization, and normalization of image size, pixel values.
3. The method of claim 1, wherein in step 2), the class-center weight layer is a fully-connected network layer capable of learning, the input of the layer is the face feature output in step 1), and the output of the layer is a normalized similarity matrix cos θ, which includes the similarity between each input sample and each class in the training dataset.
4. The method according to claim 1, wherein in step 4), the similarity matrix to which the weight is newly assigned in step 3) is used to calculate the loss value of the samples of the present batch according to a predetermined loss function.
5. The difficult sample mining method for data fitting estimation in face recognition according to claim 1, wherein in step 5), the parameters of the feature extraction model and the class center weight layer are updated using a gradient descent algorithm according to the loss value calculated in step 4) and a preset learning rate; the accuracy of the feature extraction model on the verification data set is then calculated, and whether the model meets the standard is judged according to a preset model performance index; if the model does not meet the standard, steps 1) to 5) are repeated; if it does, training is terminated.
CN202110117852.9A 2021-01-28 2021-01-28 Difficult sample mining method for data fitting estimation in face recognition Active CN112800959B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110117852.9A CN112800959B (en) 2021-01-28 2021-01-28 Difficult sample mining method for data fitting estimation in face recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110117852.9A CN112800959B (en) 2021-01-28 2021-01-28 Difficult sample mining method for data fitting estimation in face recognition

Publications (2)

Publication Number Publication Date
CN112800959A CN112800959A (en) 2021-05-14
CN112800959B true CN112800959B (en) 2023-06-06

Family

ID=75812376

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110117852.9A Active CN112800959B (en) 2021-01-28 2021-01-28 Difficult sample mining method for data fitting estimation in face recognition

Country Status (1)

Country Link
CN (1) CN112800959B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113920079A (en) * 2021-09-30 2022-01-11 中国科学院深圳先进技术研究院 Difficult sample mining method, system, terminal and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109165566A (en) * 2018-08-01 2019-01-08 中国计量大学 A kind of recognition of face convolutional neural networks training method based on novel loss function
CN109214360A (en) * 2018-10-15 2019-01-15 北京亮亮视野科技有限公司 A kind of construction method of the human face recognition model based on ParaSoftMax loss function and application
CN109948478A (en) * 2019-03-06 2019-06-28 中国科学院自动化研究所 The face identification method of extensive lack of balance data neural network based, system
CN110851645A (en) * 2019-11-08 2020-02-28 吉林大学 Image retrieval method based on similarity maintenance under depth metric learning
CN110866134A (en) * 2019-11-08 2020-03-06 吉林大学 Image retrieval-oriented distribution consistency keeping metric learning method
CN111985310A (en) * 2020-07-08 2020-11-24 华南理工大学 Training method of deep convolutional neural network for face recognition


Also Published As

Publication number Publication date
CN112800959A (en) 2021-05-14

Similar Documents

Publication Publication Date Title
CN112308158B (en) Multi-source field self-adaptive model and method based on partial feature alignment
CN110992351B (en) sMRI image classification method and device based on multi-input convolution neural network
CN110929679B (en) GAN-based unsupervised self-adaptive pedestrian re-identification method
CN109101938B (en) Multi-label age estimation method based on convolutional neural network
CN110084149B (en) Face verification method based on hard sample quadruple dynamic boundary loss function
CN109344759A (en) A kind of relatives' recognition methods based on angle loss neural network
CN109255289B (en) Cross-aging face recognition method based on unified generation model
CN111461025B (en) Signal identification method for self-evolving zero-sample learning
CN111339847A (en) Face emotion recognition method based on graph convolution neural network
CN109993100A (en) The implementation method of facial expression recognition based on further feature cluster
CN110472518B (en) Fingerprint image quality judgment method based on full convolution network
CN112052772A (en) Face shielding detection algorithm
CN113283590B (en) Defending method for back door attack
CN116311483B (en) Micro-expression recognition method based on local facial area reconstruction and memory contrast learning
CN112115796A (en) Attention mechanism-based three-dimensional convolution micro-expression recognition algorithm
CN112364791A (en) Pedestrian re-identification method and system based on generation of confrontation network
CN111126155B (en) Pedestrian re-identification method for generating countermeasure network based on semantic constraint
CN113177587A (en) Generalized zero sample target classification method based on active learning and variational self-encoder
CN110929239B (en) Terminal unlocking method based on lip language instruction
CN112132257A (en) Neural network model training method based on pyramid pooling and long-term memory structure
CN112800959B (en) Difficult sample mining method for data fitting estimation in face recognition
CN111401434A (en) Image classification method based on unsupervised feature learning
CN112766134B (en) Expression recognition method for strengthening distinction between classes
CN114091510A (en) Cross-domain vehicle weight identification method based on domain self-adaptation
CN111160161A (en) Self-learning face age estimation method based on noise elimination

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant