CN112766399A - Self-adaptive neural network training method for image recognition - Google Patents

Self-adaptive neural network training method for image recognition

Info

Publication number
CN112766399A
CN112766399A (application CN202110117616.7A)
Authority
CN
China
Prior art keywords
image
neural network
loss function
adaptive
cos
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110117616.7A
Other languages
Chinese (zh)
Other versions
CN112766399B
Inventor
罗杨
刘翔
骆春波
韦仕才
王亚宁
彭涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202110117616.7A priority Critical patent/CN112766399B/en
Publication of CN112766399A publication Critical patent/CN112766399A/en
Application granted granted Critical
Publication of CN112766399B publication Critical patent/CN112766399B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an adaptive neural network training method for image recognition, comprising: acquiring an image data set and preprocessing it; constructing a convolutional neural network model and setting an adaptive loss function; inputting the preprocessed image data into the model for forward propagation to obtain image feature vectors and classification-layer weights; calculating the adaptive loss function from the feature vectors and weights and judging whether the model has converged; back-propagating through the model according to the adaptive loss function and updating the classification-layer weights; and incrementing the iteration count and updating the adaptive loss function. Compared with softmax-based classification loss training, the method requires fewer hyper-parameter settings, accelerates the convergence of the convolutional neural network model, and improves image classification and recognition accuracy.

Description

Self-adaptive neural network training method for image recognition
Technical Field
The invention relates to the technical field of image recognition, in particular to an image recognition-oriented adaptive neural network training method.
Background
The success of convolutional neural networks (CNNs) in image classification and recognition stems mainly from massive training data, the network architecture, and a well-designed loss function. A key step is therefore to design a loss function suited to the characteristics of the data so as to improve image recognition capability.
The softmax function is the loss function most commonly used for image classification, but it performs poorly on difficult samples. Research shows that when the intra-class distance of a sample exceeds the inter-class distance, the softmax loss can still separate classes but cannot effectively compact the intra-class distance, which is the main reason it performs poorly on difficult samples. The center loss function improves on softmax by providing a class center for each class and minimizing the distance of each sample to its center, so classes remain separable while each class becomes compact. The L-softmax loss likewise compacts the intra-class distance: it introduces a constraint coefficient m, and by controlling the size of m the inter-class distance is adjusted: the larger m is, the larger the inter-class distance and the more compact each class becomes. These margin-based methods, however, omit a mining strategy, do not exploit the difficulty of individual samples, and can fail to converge on small models. The focal loss does account for difficult samples during training, but it requires setting appropriate hyper-parameters, and whether the result is optimal depends on the engineer's experience. Triplet loss belongs to metric learning; compared with softmax it can conveniently train on large-scale data sets without being limited by GPU memory, shrinking distances between same-class samples and enlarging distances between different-class samples, but its excessive focus on local structure makes it hard to converge and slow to train.
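To make the contrast above concrete, here is a minimal sketch of the focal loss next to plain softmax cross-entropy. The γ value and the toy logits are illustrative assumptions, not values from the patent.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    mx = max(logits)
    exps = [math.exp(z - mx) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def softmax_ce(logits, target):
    # Standard softmax cross-entropy loss for one sample.
    return -math.log(softmax(logits)[target])

def focal_loss(logits, target, gamma=2.0):
    # Focal loss: down-weights easy samples by (1 - p_t)^gamma.
    # gamma is the hyper-parameter the passage says must be hand-tuned.
    p_t = softmax(logits)[target]
    return -((1.0 - p_t) ** gamma) * math.log(p_t)

easy = [5.0, 0.0, 0.0]   # confidently correct sample
hard = [0.1, 0.0, 0.0]   # ambiguous sample
print(softmax_ce(easy, 0), focal_loss(easy, 0))
print(softmax_ce(hard, 0), focal_loss(hard, 0))
```

On the confident sample the focal term suppresses the loss by several orders of magnitude, while the ambiguous sample keeps most of its weight; that is the hard-example emphasis the passage describes, bought at the price of the extra γ hyper-parameter.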
Image classification and recognition tasks mainly use softmax and its variants as the loss function, but these losses require large amounts of data and manual parameter tuning during training, and it is difficult for them to achieve good classification and recognition results on difficult samples.
Disclosure of Invention
Aiming at the above defects in the prior art, the invention provides an adaptive neural network training method for image recognition that accelerates the convergence of the convolutional neural network model and improves image classification and recognition accuracy.
To achieve the above purpose, the invention adopts the following technical scheme:
An adaptive neural network training method for image recognition comprises the following steps:
s1, acquiring an image data set and preprocessing the image data set;
s2, constructing a convolutional neural network model, and setting a self-adaptive loss function;
s3, inputting the preprocessed image data into a convolutional neural network model for forward propagation to obtain a feature vector and classification layer weight of the image;
s4, calculating an adaptive loss function according to the image feature vector and the classification layer weight obtained in the step S3, and judging whether the convolutional neural network model is converged; if so, ending the training; otherwise, executing step S5;
s5, performing back propagation on the convolutional neural network model according to the adaptive loss function, and updating the weight of the classification layer;
and S6, increasing the iteration number, updating the adaptive loss function, and returning to the step S3.
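The steps S1 to S6 above can be sketched as a toy training loop. Everything concrete below is an illustrative assumption: four-dimensional toy features stand in for preprocessed images, the step-S5 update nudges the target weight column toward the feature instead of true back-propagation, and t follows a simple linear schedule rather than the patent's update formula.

```python
import math, random

random.seed(0)

def l2_normalize(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def forward(sample, W):
    # S3 stand-in: cosine similarity between the normalized feature
    # vector and each normalized classification-layer weight column.
    x = l2_normalize(sample)
    return x, [sum(a * b for a, b in zip(x, l2_normalize(w))) for w in W]

def adaptive_loss(cos_sims, y, m, t):
    # S4: margin cos(theta_y + m) on the target class, and the
    # modulated logit N(t, cos theta_j) on the other classes.
    theta_y = math.acos(max(-1.0, min(1.0, cos_sims[y])))
    target = math.cos(theta_y + m)
    logits = [target if j == y else (c if target >= c else t * c)
              for j, c in enumerate(cos_sims)]
    mx = max(logits)
    exps = [math.exp(z - mx) for z in logits]
    return -math.log(exps[y] / sum(exps))

# S1: toy "image" data -- two 4-dim feature clusters standing in for classes.
toy_data = [([1, 0.1, 0, 0], 0), ([0.1, 1, 0, 0], 1)] * 10
# S2: classification-layer weights for 2 classes.
W = [[random.gauss(0, 1) for _ in range(4)] for _ in range(2)]
m, lr = 0.2, 0.5
for k in range(50):              # S6: iterate; t grows with the iteration count
    t = min(1.0, k / 50.0)
    for sample, y in toy_data:
        x, cs = forward(sample, W)            # S3
        loss = adaptive_loss(cs, y, m, t)     # S4
        # S5 stand-in: nudge the target column toward the feature (sketch only).
        W[y] = [w + lr * xi for w, xi in zip(W[y], x)]
_, cs = forward(toy_data[0][0], W)
print(cs.index(max(cs)))
```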
The beneficial effect of this scheme is as follows. A gradient modulation coefficient for the training samples is added on top of the existing softmax loss to form an adaptive loss function, achieving adaptive learning during training of the convolutional neural network model: simple training samples are emphasized early so the network converges quickly, and difficult training samples are emphasized later to improve recognition accuracy. Compared with softmax-based classification loss training, the method requires fewer hyper-parameter settings, accelerates model convergence, and improves image classification and recognition accuracy.
Further, the adaptive loss function is specifically expressed as:
$$L = -\frac{1}{n}\sum_{i=1}^{n}\log\frac{e^{\cos(\theta_{y_i}+m)}}{e^{\cos(\theta_{y_i}+m)}+\sum_{j\neq y_i}e^{N(t,\cos\theta_j)}}$$
where θ_j is the angle between the j-th column of the classification-layer weights and the feature vector of the i-th image sample (whose true class is denoted y_i), m is the constraint coefficient, N(t, cos θ_j) is the cosine similarity function, t is the modulation coefficient, n is the number of image samples, and N is the number of image classes.
Further, the cosine similarity function is specifically expressed as:
$$N(t,\cos\theta_j)=\begin{cases}\cos\theta_j, & \cos(\theta_{y_i}+m)\ge\cos\theta_j\\[2pt] t\cdot\cos\theta_j, & \cos(\theta_{y_i}+m)<\cos\theta_j\end{cases}$$
further, the step S4 of calculating the adaptive loss function according to the image feature vectors and the classification layer weights obtained in the step S3 specifically includes:
normalizing the image feature vector x and the classification-layer weight W obtained in step S3 to unit L2 norm;
calculating the angle θ_j between the j-th column classification-layer weight W_j and the feature vector of the i-th image sample x_i according to:
$$\theta_j=\arccos\!\left(W_j^{T}x_i\right)$$
determining the value of the cosine similarity function according to the angle θ_j;
and calculating an adaptive loss function according to the cosine similarity function.
Further, the updating of the classification layer weight in step S5 specifically includes:
when j = y_i (the target class), set f_j = cos(θ_j + m);
when j ≠ y_i: if cos(θ_{y_i} + m) − cos θ_j ≥ 0, x_i is a simple image sample, and f_j = cos θ_j is set;
if cos(θ_{y_i} + m) − cos θ_j < 0, x_i is a difficult image sample, and f_j = t · cos θ_j is set;
The classification-layer weight parameters are updated according to the following formula:
$$W_j \leftarrow W_j-\eta\,\frac{\partial L}{\partial W_j}$$
where η is the learning rate.
further, the step S6 is specifically:
adding 1 to the iteration number and updating the adaptive loss function by the formula
[equation image in original giving the update formula for the modulation coefficient t of the adaptive loss function]
Return is made to step S3.
Drawings
FIG. 1 is a flowchart of an adaptive neural network training method for image recognition according to the present invention.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding by those skilled in the art. It should be understood, however, that the invention is not limited to the scope of these embodiments; various changes apparent to those skilled in the art that remain within the spirit and scope of the invention as defined by the appended claims, and all matter produced using the inventive concept, are protected.
Current deep-learning-based image classification methods use softmax and its variants as the loss function. The softmax loss divides the whole feature space according to the number of classes and guarantees that classes are separable, which suits multi-class tasks such as MNIST and ImageNet, where the test classes necessarily appear among the training classes. However, softmax does not force intra-class distances to be compact or inter-class distances to be large, so recognition degrades when a test class is absent from the training set. Softmax therefore needs to be modified so that separability is preserved while the feature vectors of each class are as compact as possible and different classes are as far apart as possible.
Mainstream image classification and recognition algorithms improve softmax by adding a scaling factor to form the loss function, but such losses still require hyper-parameters such as the scaling factor to be set, apply the same hyper-parameter settings to all samples throughout training, and cannot effectively attend to difficult training samples late in training.
Therefore, an embodiment of the present invention provides an adaptive neural network training method for image recognition, as shown in fig. 1, comprising the following steps S1 to S6:
s1, acquiring an image data set and preprocessing the image data set;
in this embodiment, the present invention uses an LFW (laboratory Faces in the wild) image dataset, which is divided into a training dataset and a test dataset in a 9: 1 ratio.
The whole image is then normalized to 112 × 112 pixels, and the initial iteration count k is set to 0.
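A minimal sketch of this preprocessing step, under the assumption of grayscale images stored as nested lists: nearest-neighbour resizing to 112 × 112 and a shuffled 9:1 split stand in for the real LFW pipeline.

```python
import random

random.seed(1)

def resize_nearest(img, size=112):
    # Nearest-neighbour resize to size x size
    # (the embodiment normalizes images to 112 x 112 pixels).
    h, w = len(img), len(img[0])
    return [[img[r * h // size][c * w // size] for c in range(size)]
            for r in range(size)]

def train_test_split(dataset, ratio=0.9):
    # 9:1 split into training and test sets, as in the embodiment.
    shuffled = dataset[:]
    random.shuffle(shuffled)
    cut = int(len(shuffled) * ratio)
    return shuffled[:cut], shuffled[cut:]

# 20 fake 250 x 250 "LFW" images with integer labels (illustrative data).
dataset = [([[random.randint(0, 255) for _ in range(250)]
             for _ in range(250)], i % 5) for i in range(20)]
train, test = train_test_split(dataset)
resized = resize_nearest(train[0][0])
print(len(train), len(test), len(resized), len(resized[0]))  # 18 2 112 112
```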
S2, constructing a convolutional neural network model, and setting a self-adaptive loss function;
in this embodiment, the present invention uses ResNet32 as a backbone network to construct a convolutional neural network model, and adds a training sample gradient modulation coefficient on the basis of a softmax loss function to form an adaptive loss function, which is specifically represented as:
$$L = -\frac{1}{n}\sum_{i=1}^{n}\log\frac{e^{\cos(\theta_{y_i}+m)}}{e^{\cos(\theta_{y_i}+m)}+\sum_{j\neq y_i}e^{N(t,\cos\theta_j)}}$$
where θ_j is the angle between the j-th column of the classification-layer weights and the feature vector of the i-th image sample (whose true class is denoted y_i), m is the constraint coefficient, N(t, cos θ_j) is the cosine similarity function, t is the modulation coefficient, n is the number of image samples, and N is the number of image classes.
The cosine similarity function N(t, cos θ_j) is specifically expressed as:
$$N(t,\cos\theta_j)=\begin{cases}\cos\theta_j, & \cos(\theta_{y_i}+m)\ge\cos\theta_j\\[2pt] t\cdot\cos\theta_j, & \cos(\theta_{y_i}+m)<\cos\theta_j\end{cases}$$
the gradient modulation coefficient t of the training sample is specifically expressed as:
[equation image in original defining the modulation coefficient t]
cos θ_j represents the cosine similarity for difficult samples. At the beginning of training, the classification difference between samples is large, i.e., the angle θ_j between different samples is large, so cos θ_j is small and t is small, and network training focuses more on simple samples. As the number of iterations increases, t increases, the gradient modulation coefficient of the difficult samples grows, and the model gradually focuses on training difficult samples. By adding t to the loss function, the network pays more attention to difficult samples as training progresses, achieving better recognition accuracy.
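The behaviour described above can be checked numerically. In the sketch below, a hard negative class (one whose cos θ_j exceeds the margined target logit) has its logit scaled by t, so its weight in the softmax grows as t grows over training; the angle, margin, and t values are made up for illustration.

```python
import math

def modulated_logit(cos_j, target_logit, t):
    # N(t, cos theta_j): unchanged for easy negatives, t * cos theta_j
    # for hard negatives, per the piecewise definition above.
    return cos_j if target_logit >= cos_j else t * cos_j

target = math.cos(math.acos(0.6) + 0.5)   # cos(theta_y + m) for a hard sample
hard_negative = 0.55                      # cos theta_j > target => hard negative
early, late = (modulated_logit(hard_negative, target, t) for t in (0.1, 0.9))
print(round(target, 3), round(early, 3), round(late, 3))  # 0.143 0.055 0.495
```

Early in training (small t) the hard negative contributes little; late in training (t near 1) it regains almost its full logit, which is the shift of attention toward difficult samples the passage describes.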
With the improved adaptive loss function, the convolutional neural network model can better distinguish difficult image samples from simple ones during training: simple samples receive attention early in training, and difficult samples are emphasized later, so the model converges faster and reaches higher accuracy. Because the adaptive loss function adjusts itself over the course of training, dependence on hyper-parameters and the effort of designing them are both reduced.
S3, inputting the preprocessed image data into a convolutional neural network model for forward propagation to obtain a feature vector and classification layer weight of the image;
in this embodiment, the normalized image data is input to a convolutional neural network model for forward propagation, so as to obtain a feature vector x and a classification layer weight W of the image.
S4, calculating an adaptive loss function according to the image feature vector and the classification layer weight obtained in the step S3, and judging whether the convolutional neural network model is converged; if so, ending the training; otherwise, executing step S5;
In the present embodiment, after obtaining the image feature vector x and the classification-layer weight W in step S3, the invention first L2-normalizes both to unit length, i.e., ||W_j|| = ||x_i|| = 1;
Then, the angle θ_j between the j-th column classification-layer weight W_j and the feature vector of the i-th image sample x_i is calculated according to the following formula:
$$\theta_j=\arccos\!\left(W_j^{T}x_i\right)$$
That is, the cosine similarity cos θ_j is obtained first, and then the arccosine of cos θ_j gives the angle θ_j.
Further, the value of the cosine similarity function is determined from the angle θ_j: when cos(θ_{y_i} + m) ≥ cos θ_j, the cosine similarity function N(t, cos θ_j) = cos θ_j; when cos(θ_{y_i} + m) < cos θ_j, the cosine similarity function N(t, cos θ_j) = t · cos θ_j.
Finally, the adaptive loss function L is computed from the cosine similarity function N(t, cos θ_j), and the invention judges whether the convolutional neural network model has converged according to L: if the adaptive loss function no longer decreases, the model has converged and training ends; otherwise, step S5 is executed.
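The S4 computation (L2 normalization, θ_j = arccos(W_j^T x_i), then the branch of the cosine similarity function) can be sketched as follows; the vectors, target angles, margin, and t below are illustrative, not from the patent.

```python
import math

def l2_normalize(v):
    n = math.sqrt(sum(c * c for c in v)) or 1.0
    return [c / n for c in v]

def cosine_branch(w_j, x_i, theta_y, m, t):
    # cos theta_j = W_j . x_i after L2 normalization of both vectors.
    cos_j = sum(a * b for a, b in zip(l2_normalize(w_j), l2_normalize(x_i)))
    cos_j = max(-1.0, min(1.0, cos_j))
    theta_j = math.acos(cos_j)          # theta_j = arccos(W_j^T x_i)
    if math.cos(theta_y + m) >= cos_j:  # easy case: N = cos theta_j
        return cos_j
    return t * cos_j                    # difficult case: N = t * cos theta_j

w = [3.0, 4.0]
x = [4.0, 3.0]
# With a small target angle theta_y, cos(theta_y + m) stays above cos theta_j
# (easy branch); with a large theta_y, the margined logit drops below it.
easy = cosine_branch(w, x, theta_y=0.05, m=0.1, t=0.5)
hard = cosine_branch(w, x, theta_y=1.3, m=0.1, t=0.5)
print(round(easy, 2), round(hard, 2))  # 0.96 0.48
```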
S5, performing back propagation on the convolutional neural network model according to the adaptive loss function, and updating the weight of the classification layer;
In this embodiment, updating the classification-layer weights specifically includes:
when j = y_i (the target class), set f_j = cos(θ_j + m);
when j ≠ y_i: if cos(θ_{y_i} + m) − cos θ_j ≥ 0, x_i is a simple image sample, and f_j = cos θ_j is set;
if cos(θ_{y_i} + m) − cos θ_j < 0, x_i is a difficult image sample, and f_j = t · cos θ_j is set;
The classification-layer weight parameters are updated according to the following formula:
$$W_j \leftarrow W_j-\eta\,\frac{\partial L}{\partial W_j}$$
where η is the learning rate.
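The classification-layer update in S5 is, at bottom, a gradient-descent step. The sketch below shows such a step with a numerical gradient on a stand-in scalar loss, purely to keep the example self-contained; in the patent the gradient comes from back-propagating the adaptive loss through the network.

```python
def loss_fn(W):
    # Toy scalar loss over a flat weight list (illustrative only);
    # its minimizer is W = [1.0, 1.0, ...].
    return sum((w - 1.0) ** 2 for w in W)

def sgd_step(W, lr=0.1, eps=1e-6):
    # One SGD update W_j <- W_j - lr * dL/dW_j, with the partial
    # derivatives taken by forward finite differences.
    grads = []
    for j in range(len(W)):
        bumped = W[:]
        bumped[j] += eps
        grads.append((loss_fn(bumped) - loss_fn(W)) / eps)
    return [w - lr * g for w, g in zip(W, grads)]

W = [0.0, 2.0]
for _ in range(100):
    W = sgd_step(W)
print([round(w, 3) for w in W])  # converges toward the minimizer [1.0, 1.0]
```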
and S6, increasing the iteration number, updating the adaptive loss function, and returning to the step S3.
In this embodiment, step S6 specifically includes:
adding 1 to the iteration number and updating the adaptive loss function by the formula
[equation image in original giving the update formula for the modulation coefficient t of the adaptive loss function]
Return is made to step S3.
The effect of the training method of the present invention will be described below with specific examples.
The LFW (Labeled Faces in the Wild) data set selected by the invention is a standard image data set collected in unconstrained environments. It contains 13,233 images of 5,749 people; 1,680 people have two or more images, 4,069 have only one, and the image resolution is 250 × 250. The trained network is the convolutional neural network model based on the ResNet32 structure.
The training parameters are: learning rate 0.01, SGD as the optimizer, batch size 30, training on a GTX 1080 Ti GPU, with PyTorch as the training framework. Under the same training parameters, the model of the invention converges after 2,000 iterations and reaches 99.82% accuracy, whereas the softmax loss converges only after 2,500 iterations and reaches 98.70%. The method of the invention therefore not only converges faster but also yields a more accurate model.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The principle and the implementation mode of the invention are explained by applying specific embodiments in the invention, and the description of the embodiments is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to assist the reader in understanding the principles of the invention and are to be construed as being without limitation to such specifically recited embodiments and examples. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from the spirit of the invention, and these changes and combinations are within the scope of the invention.

Claims (6)

1. An adaptive neural network training method oriented to image recognition is characterized by comprising the following steps:
s1, acquiring an image data set and preprocessing the image data set;
s2, constructing a convolutional neural network model, and setting a self-adaptive loss function;
s3, inputting the preprocessed image data into a convolutional neural network model for forward propagation to obtain a feature vector and classification layer weight of the image;
s4, calculating an adaptive loss function according to the image feature vector and the classification layer weight obtained in the step S3, and judging whether the convolutional neural network model is converged; if so, ending the training; otherwise, executing step S5;
s5, performing back propagation on the convolutional neural network model according to the adaptive loss function, and updating the weight of the classification layer;
and S6, increasing the iteration number, updating the adaptive loss function, and returning to the step S3.
2. The image recognition-oriented adaptive neural network training method of claim 1, wherein the adaptive loss function is specifically expressed as:
$$L = -\frac{1}{n}\sum_{i=1}^{n}\log\frac{e^{\cos(\theta_{y_i}+m)}}{e^{\cos(\theta_{y_i}+m)}+\sum_{j\neq y_i}e^{N(t,\cos\theta_j)}}$$
where θ_j is the angle between the j-th column of the classification-layer weights and the feature vector of the i-th image sample (whose true class is denoted y_i), m is the constraint coefficient, N(t, cos θ_j) is the cosine similarity function, t is the modulation coefficient, n is the number of image samples, and N is the number of image classes.
3. The image recognition-oriented adaptive neural network training method of claim 2, wherein the cosine similarity function is specifically expressed as:
$$N(t,\cos\theta_j)=\begin{cases}\cos\theta_j, & \cos(\theta_{y_i}+m)\ge\cos\theta_j\\[2pt] t\cdot\cos\theta_j, & \cos(\theta_{y_i}+m)<\cos\theta_j\end{cases}$$
4. the method for training an adaptive neural network for image recognition according to claim 3, wherein the step S4 of calculating an adaptive loss function according to the image feature vectors obtained in the step S3 and the classification layer weights specifically comprises:
normalizing the image feature vector x and the classification-layer weight W obtained in step S3 to unit L2 norm;
calculating the angle θ_j between the j-th column classification-layer weight W_j and the feature vector of the i-th image sample x_i according to:
$$\theta_j=\arccos\!\left(W_j^{T}x_i\right)$$
determining the value of the cosine similarity function according to the angle θ_j;
and calculating an adaptive loss function according to the cosine similarity function.
5. The image-recognition-oriented adaptive neural network training method of claim 4, wherein the step S5 of updating the classification layer weight specifically comprises:
when j = y_i (the target class), set f_j = cos(θ_j + m);
when j ≠ y_i: if cos(θ_{y_i} + m) − cos θ_j ≥ 0, x_i is a simple image sample, and f_j = cos θ_j is set;
if cos(θ_{y_i} + m) − cos θ_j < 0, x_i is a difficult image sample, and f_j = t · cos θ_j is set;
The classification-layer weight parameters are updated according to the following formula:
$$W_j \leftarrow W_j-\eta\,\frac{\partial L}{\partial W_j}$$
where η is the learning rate.
6. the adaptive neural network training method for image recognition according to claim 5, wherein the step S6 specifically comprises:
adding 1 to the iteration number and updating the adaptive loss function by the formula
[equation image in original giving the update formula for the modulation coefficient t of the adaptive loss function]
Return is made to step S3.
CN202110117616.7A 2021-01-28 2021-01-28 Self-adaptive neural network training method for image recognition Active CN112766399B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110117616.7A CN112766399B (en) 2021-01-28 2021-01-28 Self-adaptive neural network training method for image recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110117616.7A CN112766399B (en) 2021-01-28 2021-01-28 Self-adaptive neural network training method for image recognition

Publications (2)

Publication Number Publication Date
CN112766399A true CN112766399A (en) 2021-05-07
CN112766399B CN112766399B (en) 2021-09-28

Family

ID=75706392

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110117616.7A Active CN112766399B (en) 2021-01-28 2021-01-28 Self-adaptive neural network training method for image recognition

Country Status (1)

Country Link
CN (1) CN112766399B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113361346A (en) * 2021-05-25 2021-09-07 天津大学 Scale parameter self-adaptive face recognition method for replacing adjustment parameters
CN113469084A (en) * 2021-07-07 2021-10-01 西安电子科技大学 Hyperspectral image classification method based on contrast generation countermeasure network
CN113705647A (en) * 2021-08-19 2021-11-26 电子科技大学 Dynamic interval-based dual semantic feature extraction method
CN113763501A (en) * 2021-09-08 2021-12-07 上海壁仞智能科技有限公司 Iteration method of image reconstruction model and image reconstruction method
CN114529713A (en) * 2022-01-14 2022-05-24 电子科技大学 Underwater image enhancement method based on deep learning

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107886062A (en) * 2017-11-03 2018-04-06 北京达佳互联信息技术有限公司 Image processing method, system and server
CN108108807A (en) * 2017-12-29 2018-06-01 北京达佳互联信息技术有限公司 Learning-oriented image processing method, system and server
US20180247107A1 (en) * 2015-09-30 2018-08-30 Siemens Healthcare Gmbh Method and system for classification of endoscopic images using deep decision networks
CN109165566A (en) * 2018-08-01 2019-01-08 中国计量大学 A kind of recognition of face convolutional neural networks training method based on novel loss function
CN109214360A (en) * 2018-10-15 2019-01-15 北京亮亮视野科技有限公司 A kind of construction method of the human face recognition model based on ParaSoftMax loss function and application
CN109241995A (en) * 2018-08-01 2019-01-18 中国计量大学 A kind of image-recognizing method based on modified ArcFace loss function
CN110197102A (en) * 2018-02-27 2019-09-03 腾讯科技(深圳)有限公司 Face identification method and device
CN111967392A (en) * 2020-08-18 2020-11-20 广东电科院能源技术有限责任公司 Face recognition neural network training method, system, equipment and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180247107A1 (en) * 2015-09-30 2018-08-30 Siemens Healthcare Gmbh Method and system for classification of endoscopic images using deep decision networks
CN107886062A (en) * 2017-11-03 2018-04-06 北京达佳互联信息技术有限公司 Image processing method, system and server
CN108108807A (en) * 2017-12-29 2018-06-01 北京达佳互联信息技术有限公司 Learning-oriented image processing method, system and server
CN110197102A (en) * 2018-02-27 2019-09-03 腾讯科技(深圳)有限公司 Face identification method and device
CN109165566A (en) * 2018-08-01 2019-01-08 中国计量大学 A kind of recognition of face convolutional neural networks training method based on novel loss function
CN109241995A (en) * 2018-08-01 2019-01-18 中国计量大学 A kind of image-recognizing method based on modified ArcFace loss function
CN109214360A (en) * 2018-10-15 2019-01-15 北京亮亮视野科技有限公司 A kind of construction method of the human face recognition model based on ParaSoftMax loss function and application
CN111967392A (en) * 2020-08-18 2020-11-20 广东电科院能源技术有限责任公司 Face recognition neural network training method, system, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Qiuyu Zhu et al.: "A New Loss Function for CNN Classifier Based on Predefined Evenly-Distributed Class Centroids", IEEE Access *
Ji Dongfei et al.: "Research on a deep face recognition algorithm based on an adaptive angular loss function", Application Research of Computers *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113361346A (en) * 2021-05-25 2021-09-07 天津大学 Scale parameter self-adaptive face recognition method for replacing adjustment parameters
CN113361346B (en) * 2021-05-25 2022-12-23 天津大学 Scale parameter self-adaptive face recognition method for replacing adjustment parameters
CN113469084A (en) * 2021-07-07 2021-10-01 西安电子科技大学 Hyperspectral image classification method based on contrast generation countermeasure network
CN113705647A (en) * 2021-08-19 2021-11-26 电子科技大学 Dynamic interval-based dual semantic feature extraction method
CN113705647B (en) * 2021-08-19 2023-04-28 电子科技大学 Dual semantic feature extraction method based on dynamic interval
CN113763501A (en) * 2021-09-08 2021-12-07 上海壁仞智能科技有限公司 Iteration method of image reconstruction model and image reconstruction method
CN113763501B (en) * 2021-09-08 2024-02-27 上海壁仞智能科技有限公司 Iterative method of image reconstruction model and image reconstruction method
CN114529713A (en) * 2022-01-14 2022-05-24 电子科技大学 Underwater image enhancement method based on deep learning

Also Published As

Publication number Publication date
CN112766399B (en) 2021-09-28

Similar Documents

Publication Publication Date Title
CN112766399B (en) Self-adaptive neural network training method for image recognition
CN109345508B (en) Bone age evaluation method based on two-stage neural network
US11049011B2 (en) Neural network classifier
CN110245620B (en) Non-maximization inhibition method based on attention
CN112668483B (en) Single-target person tracking method integrating pedestrian re-identification and face detection
CN107784288A (en) A kind of iteration positioning formula method for detecting human face based on deep neural network
CN107871103B (en) Face authentication method and device
CN112149651B (en) Facial expression recognition method, device and equipment based on deep learning
WO2020168796A1 (en) Data augmentation method based on high-dimensional spatial sampling
JP2021082269A (en) Method and device for training classification model and classification method
CN113743474A (en) Digital picture classification method and system based on cooperative semi-supervised convolutional neural network
CN106203628A (en) A kind of optimization method strengthening degree of depth learning algorithm robustness and system
CN111476346A (en) Deep learning network architecture based on Newton conjugate gradient method
CN114998602A (en) Domain adaptive learning method and system based on low confidence sample contrast loss
Xue et al. Research on edge detection operator of a convolutional neural network
CN112597979B (en) Face recognition method for updating cosine included angle loss function parameters in real time
CN111445024A (en) Medical image recognition training method
CN115861625A (en) Self-label modifying method for processing noise label
CN115511061A (en) Knowledge distillation method based on YOLOv5 model
CN114897884A (en) No-reference screen content image quality evaluation method based on multi-scale edge feature fusion
CN117115825B (en) Method for improving license OCR recognition rate
CN115565051B (en) Lightweight face attribute recognition model training method, recognition method and device
CN113111957B (en) Anti-counterfeiting method, device, equipment, product and medium based on feature denoising
WO2022190301A1 (en) Learning device, learning method, and computer-readable medium
JP5834287B2 (en) Pattern classification learning device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant