CN113076833B - Training method of age identification model, face age identification method and related device - Google Patents
Training method of age identification model, face age identification method and related device
- Publication number
- CN113076833B CN113076833B CN202110317299.3A CN202110317299A CN113076833B CN 113076833 B CN113076833 B CN 113076833B CN 202110317299 A CN202110317299 A CN 202110317299A CN 113076833 B CN113076833 B CN 113076833B
- Authority
- CN
- China
- Prior art keywords
- face
- age
- images
- recognition model
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000012549 training Methods 0.000 title claims abstract description 98
- 238000000034 method Methods 0.000 title claims abstract description 78
- 230000008859 change Effects 0.000 claims abstract description 39
- 230000032683 aging Effects 0.000 claims abstract description 25
- 230000009466 transformation Effects 0.000 claims description 16
- 238000012545 processing Methods 0.000 claims description 12
- 230000002708 enhancing effect Effects 0.000 claims description 8
- 238000004590 computer program Methods 0.000 claims description 7
- 238000010606 normalization Methods 0.000 claims description 6
- 239000013598 vector Substances 0.000 description 26
- 238000013461 design Methods 0.000 description 12
- 230000008569 process Effects 0.000 description 11
- 238000001514 detection method Methods 0.000 description 8
- 238000010586 diagram Methods 0.000 description 7
- 238000000605 extraction Methods 0.000 description 7
- 230000006870 function Effects 0.000 description 6
- 230000000694 effects Effects 0.000 description 4
- 230000001965 increasing effect Effects 0.000 description 4
- 238000004364 calculation method Methods 0.000 description 3
- 230000003247 decreasing effect Effects 0.000 description 3
- 230000001815 facial effect Effects 0.000 description 3
- 239000011159 matrix material Substances 0.000 description 3
- 238000013256 GAN (generative adversarial network) model Methods 0.000 description 2
- 230000003044 adaptive effect Effects 0.000 description 2
- 238000002372 labelling Methods 0.000 description 2
- 238000012935 Averaging Methods 0.000 description 1
- 230000004913 activation Effects 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000004422 calculation algorithm Methods 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 210000000697 sensory organ Anatomy 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
- 230000037303 wrinkles Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/167—Detection; Localisation; Normalisation using comparisons between temporally consecutive images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/178—Human faces, e.g. facial parts, sketches or expressions estimating age from face image; using age information for improving recognition
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- Probability & Statistics with Applications (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
- Collating Specific Patterns (AREA)
- Image Processing (AREA)
Abstract
The application provides a training method for an age recognition model, a face age recognition method, and a related device. The training method of the age recognition model comprises the following steps: acquiring a first face image; performing age data enhancement on the first face image to obtain a plurality of second face images, wherein the faces in the second face images and the face in the first face image belong to the same person, and the second face images are face images of that person at different ages; and training a face age recognition model based on the plurality of second face images, so that the model learns the aging change characteristics among the plurality of second face images, thereby obtaining a target face age recognition model. By learning how the features of the same person's face change in the time dimension, the face age recognition model can finely distinguish and recognize the features of a face image in the time dimension, so the accuracy of age recognition can be improved.
Description
Technical Field
The present application relates to the field of facial image recognition, and in particular, to a training method for an age recognition model, a facial age recognition method, and a related apparatus.
Background
A face image contains various kinds of facial feature information, such as face shape, skin state, facial features, and face age. Among these, face age is important feature information and is widely used in the field of face image recognition. For example, some clients running on mobile devices provide a face age recognition function: the client acquires a face image and outputs the face age recognized from the acquired image as feedback to the user, so that the client interacts with the user and increases user stickiness.
For clients with a face age recognition function, the accuracy of age recognition, that is, the gap between the recognized age and the user's true age, is critical. In the current related technology of face age recognition, the real age of a face is generally used as a single piece of label information: the real age serves as the label of a face image, a one-to-one correspondence is established between the face image and the real age, and the face age recognition model is then trained on these pairs. Because the model learns only the commonalities among most faces at each age, it performs poorly for some special groups, such as people who are young but look old, or people who are older but look young, so the problem of low accuracy easily occurs.
Disclosure of Invention
The application provides a training method of an age identification model, a face age identification method and a related device, and aims to solve the technical problem of low accuracy in the existing face age identification technology.
In a first aspect, a training method of an age identification model is provided, the method comprising the steps of:
Acquiring a first face image;
Performing age data enhancement on the first face image to obtain a plurality of second face images, wherein faces in the second face images and faces in the first face image belong to the same person, and the second face images are face images of the same person at different ages respectively;
training a face age recognition model based on the plurality of second face images to learn aging change characteristics of the plurality of second face images through the face age recognition model, so as to obtain a target face age recognition model.
In this technical scheme, age data enhancement is performed on the acquired first face image to obtain a plurality of second face images that belong to the same person and depict that person at different ages; a face age recognition model is then trained on the plurality of second face images so that the model learns the aging change characteristics among them, yielding a target face age recognition model. Because the plurality of second face images are face images of the same person at different ages, learning their aging change characteristics means learning how the same person ages across different ages, and thus learning the feature change information of one person's face images in the time dimension. By learning this feature change information, the face age recognition model can finely distinguish and recognize the features of a face image in the time dimension, so the accuracy of age recognition can be improved.
With reference to the first aspect, in one possible implementation manner, performing age data enhancement on the first face image to obtain a plurality of second face images includes: inputting the first face image and the real face age corresponding to the first face image into a preset face generation model, and generating, through the face generation model, a plurality of face images that belong to the same person as the first face image but are not of the real face age; and determining the plurality of generated face images together with the first face image as the plurality of second face images. Generating these face images of other ages through a preset face generation model keeps the generated images consistent with the real image in facial characteristics and ensures that they are sufficiently similar, which helps the face age recognition model better learn the feature change information of the same face in the time dimension and improves model training accuracy.
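The enhancement step above can be sketched as follows; the `face_generator` callable and the age offsets are hypothetical stand-ins for the preset face generation model (for example, an aging GAN) and are not specified by the patent:

```python
def age_data_enhancement(first_image, real_age, face_generator,
                         offsets=(-10, -5, 5, 10)):
    """Sketch of age data enhancement.

    `face_generator(image, target_age)` stands in for the preset face
    generation model; its interface and the `offsets` values are
    assumptions. Returns (image, age) pairs: the first face image plus
    generated images of the same person at other ages, which together
    form the "plurality of second face images".
    """
    second_images = [(first_image, real_age)]
    for d in offsets:
        target_age = real_age + d
        if 1 <= target_age <= 100:  # stay inside the 1-100 range used later
            second_images.append((face_generator(first_image, target_age),
                                  target_age))
    return second_images
```

Note that the original image is kept in the output, matching the implementation in which the plurality of second face images includes the first face image.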
With reference to the first aspect, in one possible implementation manner, after the face ages corresponding to the plurality of second face images are sorted, the difference between any two adjacent ages is a preset value; that is, the ages form an arithmetic progression. Generating face images whose ages are ordered and evenly spaced helps the face age recognition model better learn the feature change information of the same face in the time dimension.
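The equal-difference age sequence can be illustrated with a short sketch; the number of images, the preset difference of 5 years, and the 1-100 range are illustrative assumptions, not values fixed by the patent:

```python
def arithmetic_ages(real_age, count=5, step=5, min_age=1, max_age=100):
    """Target ages in arithmetic progression centered on the real age.

    `count` and `step` (the preset difference) are assumed values. The
    whole sequence is shifted, rather than clamped, so that every age
    stays in range while adjacent ages keep the preset difference.
    """
    half = count // 2
    start = real_age - half * step
    start = max(min_age, min(start, max_age - (count - 1) * step))
    return [start + i * step for i in range(count)]
```

For example, a real age of 20 with a preset difference of 5 yields ages 10, 15, 20, 25, and 30.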
With reference to the first aspect, in one possible implementation manner, training the face age recognition model based on the plurality of second face images, so that the model learns the aging change features among the plurality of second face images, to obtain a target face age recognition model, includes: inputting the plurality of second face images into the face age recognition model to extract the face features corresponding to each of them; determining, based on these face features, an age probability prediction value for each second face image, where the age probability prediction value of a second face image comprises the probability that the face in that image belongs to each face age; and performing parameter adjustment on the face age recognition model according to the age probability prediction values until the model converges, then determining the converged model as the target face age recognition model. By extracting the face features of each second face image separately, the commonalities and differences among the face features of the same person at different ages can be determined; performing age probability prediction for each second face image and adjusting the model parameters accordingly allows the face age recognition model to learn the aging change information of the same person at different ages.
With reference to the first aspect, in a possible implementation manner, the plurality of second face images includes the first face image, and all of the second face images carry age labels; performing parameter adjustment on the face age recognition model according to the age probability prediction values corresponding to the plurality of second face images includes: calculating the age classification loss of the face age recognition model according to the age probability prediction value and the age label of each second face image; determining, according to the age probability prediction values, the predicted face ages corresponding to the face images other than the first face image among the second face images; calculating the age prediction loss of the face age recognition model according to these predicted face ages and the real face age corresponding to the first face image; and performing parameter adjustment on the face age recognition model according to the age classification loss and the age prediction loss. Calculating the age classification loss determines the gap between the predicted ages and the real ages, while the age prediction loss captures the deviation introduced by age data enhancement; adjusting the parameters of the whole face age recognition model based on both losses enables the model to learn the common features of faces at each age as well as the feature differences of the same person's face across different ages.
With reference to the first aspect, in one possible implementation manner, performing parameter adjustment on the face age recognition model according to the age classification loss and the age prediction loss to make the model converge includes: performing a weighted summation of the age classification loss and the age prediction loss to obtain the total loss of the face age recognition model; and performing parameter adjustment on the face age recognition model according to the total loss. Weighted summation reduces the influence of age data enhancement on the model loss and improves model training accuracy.
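The two losses and their weighted summation can be sketched as below. Cross-entropy for the classification loss, an absolute-error prediction loss, and the weight values are typical choices assumed here; the patent specifies only that the two losses are computed and combined by weighted summation:

```python
import math

def age_classification_loss(probs, label_age):
    """Cross-entropy against the one-hot age label (assumed loss form)."""
    return -math.log(max(probs[label_age - 1], 1e-12))

def predicted_age(probs):
    """Expected age under the 100-way probability distribution."""
    return sum((i + 1) * p for i, p in enumerate(probs))

def total_loss(probs_list, label_ages, real_age, w_cls=1.0, w_pred=0.5):
    """Weighted sum of the two losses (weights are hypothetical).

    `probs_list[0]` is the first face image; following the patent, the
    prediction loss compares the predicted ages of the *other* images
    against the first image's real face age.
    """
    cls = sum(age_classification_loss(p, a)
              for p, a in zip(probs_list, label_ages)) / len(probs_list)
    others = probs_list[1:] or probs_list
    pred = sum(abs(predicted_age(p) - real_age) for p in others) / len(others)
    return w_cls * cls + w_pred * pred
```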
With reference to the first aspect, in one possible implementation manner, before performing age data enhancement on the first face image to obtain a plurality of second face images, the method further includes: performing a coordinate affine transformation on the first face image to adjust the face in the first face image into a frontal state; and cropping the effective face area from the transformed first face image and normalizing the cropped face area. Converting the face image into a frontal state and cropping the face area before age data enhancement allows the face age recognition model to extract more accurate and effective face features, improving the training precision of the model.
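The preprocessing can be illustrated minimally as follows. The two-eye-landmark interface and the [0, 1] pixel normalization are assumptions: the patent specifies only a coordinate affine transformation into a frontal state and a normalization step, without fixing the landmarks or the normalization scheme:

```python
import math

def eye_alignment_angle(left_eye, right_eye):
    """Rotation angle (degrees) that makes the eye line horizontal.

    One common way to realize a coordinate affine transformation that
    brings a face into an upright, frontal state; the two-landmark
    interface is an assumption.
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

def normalize_pixels(region):
    """Scale 8-bit pixel values into [0, 1] -- one common normalization
    choice; the patent does not fix the exact scheme."""
    return [[v / 255.0 for v in row] for row in region]
```

In practice the angle would be fed to an affine warp (for example, OpenCV's `getRotationMatrix2D` plus `warpAffine`) before cropping the effective face area.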
In a second aspect, a face age identification method is provided, including the following steps:
Acquiring a face image to be identified;
inputting the face image to be identified into a target face age identification model, wherein the target face age identification model is obtained by training the face age identification model training method in the first aspect;
And outputting the face age corresponding to the face image to be recognized through the target face age recognition model.
Because the target face age recognition model is obtained through the training method of the face age recognition model in the first aspect, the face age it outputs for a face image has high accuracy.
In a third aspect, there is provided a training device for an age identification model, comprising:
The first acquisition module is used for acquiring a first face image;
The data enhancement module is used for enhancing the age data of the first face image to obtain a plurality of second face images, wherein faces in the second face images and faces in the first face image belong to the same person, and the second face images are face images of the same person at different ages respectively;
The training module is used for training the face age recognition model based on the plurality of second face images so as to learn aging change characteristics of the plurality of second face images through the face age recognition model and obtain a target face age recognition model.
With reference to the third aspect, in one possible design, the data enhancement module is specifically configured to: inputting the first face image and the real face age corresponding to the first face image into a preset face generation model, and generating a plurality of face images which belong to the same person as the first face image and are not of the real face age through the face generation model; and determining the plurality of face images and the first face image as the plurality of second face images.
With reference to the third aspect, in one possible design, the age difference between two adjacent ages after the ages of the faces corresponding to the plurality of second face images are arranged in sequence is a preset difference.
With reference to the third aspect, in one possible design, the training module is specifically configured to: respectively inputting the plurality of second face images into the face age recognition model to respectively extract face features corresponding to the plurality of second face images; determining age probability predicted values corresponding to the plurality of second face images based on face features corresponding to the plurality of second face images, wherein the age probability predicted value corresponding to one second face image comprises the probability that a face in the one second face image belongs to each face age; and according to the age probability predicted values corresponding to the plurality of second face images, carrying out parameter adjustment on the face age recognition model until the face age recognition model converges, and determining the converged face age recognition model as the target face age recognition model.
With reference to the third aspect, in one possible design, the plurality of second face images includes the first face image; the plurality of second face images all carry age labels; the training module is specifically used for: according to the age probability predicted value corresponding to each of the plurality of second face images and the age labels carried by each of the plurality of second face images, calculating age classification loss of the face age recognition model; according to the age probability predicted values corresponding to the second face images, determining the face predicted ages corresponding to the face images except the first face image in the second face images; according to the face predicted ages corresponding to the other face images and the real face ages corresponding to the first face image, calculating the age predicted loss of the face age recognition model; and carrying out parameter adjustment on the face age identification model according to the age classification loss and the age prediction loss.
With reference to the third aspect, in one possible design, the training module is specifically configured to: the age classification loss and the age prediction loss are weighted and summed to obtain the total loss of the face age recognition model; and carrying out parameter adjustment on the face age identification model according to the total loss.
With reference to the third aspect, in one possible design, the apparatus further includes: an affine transformation module, configured to perform a coordinate affine transformation on the first face image to adjust the face in the first face image into a frontal state; and a normalization processing module, configured to crop the effective face area from the first face image after the coordinate affine transformation and to normalize the cropped face area.
In a fourth aspect, a face age recognition apparatus is provided, including:
The second acquisition module is used for acquiring the face image to be identified;
the input module is used for inputting the face image to be recognized into a target face age recognition model, and the target face age recognition model is obtained through training by the training method of the age recognition model in the first aspect;
And the prediction module is used for determining the face age corresponding to the face image to be recognized through the target face age recognition model.
In a fifth aspect, a computer device is provided, comprising a memory and one or more processors configured to execute one or more computer programs stored in the memory; when executing the one or more computer programs, the one or more processors cause the computer device to implement the training method of the age recognition model of the first aspect or the face age recognition method of the second aspect.
In a sixth aspect, there is provided a computer readable storage medium storing a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the training method of the age identification model of the first aspect or the face age identification method of the second aspect.
The application can realize the following beneficial effects. Because the plurality of second face images are face images of the same person at different ages, learning their aging change characteristics means learning how the same person ages across different ages, and thus learning the feature change information of one person's face images in the time dimension. By learning this feature change information, the face age recognition model can finely distinguish and recognize the features of a face image in the time dimension, so the accuracy of age recognition can be improved.
Drawings
Fig. 1 is a flow chart of a training method of an age identification model according to an embodiment of the present application;
Fig. 2 is a flowchart of a method for enhancing age data of a face image according to an embodiment of the present application;
FIG. 3 is a flowchart of another training method of an age identification model according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a face age recognition model according to an embodiment of the present application;
Fig. 5 is a flow chart of a face age prediction method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a training device for an age identification model according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a face age identifying device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings in the embodiments of the present application.
The technical scheme of the application is applicable to various face recognition scenes, and in particular to recognizing the age of the face corresponding to a face image. In some implementations of a face recognition scene, a face image is recognized by a face age recognition model with an age recognition function to determine the face age corresponding to the image, where the face age recognition model is obtained through pre-training.
To obtain a trained face age recognition model, a large number of face images are acquired and each face image is annotated with a corresponding age label. Each face image and its age label are then input into the untrained face age recognition model for training, so that the age recognition results the model outputs for each face image approach the corresponding age labels as closely as possible. In this way, the model learns the feature commonalities among many face images of the same age and the feature changes among many face images of different ages, gains the ability to distinguish face images of different ages, and can then be used to recognize the face age corresponding to a face image.
To facilitate understanding of the technical scheme of the application, the training process of a face age recognition model is introduced through an example. Take an age range to be identified of 1 to 100, and train the face age recognition model with 5 face images at a time; assume that the face age corresponding to face image 1 is 20 years old, face image 2 is 25 years old, face image 3 is 37 years old, face image 4 is 55 years old, and face image 5 is 86 years old.
The primary training process is as follows:
1) Label each face image with a face age label. Specifically, since the age range to be identified is 1 to 100, a 100-dimensional vector is used as the age label of a face image, indicating the face age corresponding to that image. In the 100-dimensional vector for face image 1, the 20th position is 1 and the rest are 0; for face image 2, the 25th position is 1 and the rest are 0; for face image 3, the 37th position is 1 and the rest are 0; for face image 4, the 55th position is 1 and the rest are 0; for face image 5, the 86th position is 1 and the rest are 0. That is, in each 100-dimensional vector, the position corresponding to the image's face age is marked 1 and all other positions are marked 0. This way of labelling is known as one-hot coding.
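The one-hot labeling in step 1) can be written directly as:

```python
def one_hot_age_label(age, num_ages=100):
    """One-hot age label as in the walkthrough: the position for `age`
    (1-indexed) is 1, every other position is 0."""
    label = [0] * num_ages
    label[age - 1] = 1
    return label
```

For face image 1 (age 20), `one_hot_age_label(20)` sets the 20th position to 1 and leaves the other 99 positions at 0.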
2) Input each face image into the face age recognition model, which outputs an age detection result for each image; each detection result contains the probability that the face image corresponds to each age. Specifically, after the 5 face images are input, each detection result output by the model is a 100-dimensional vector containing 100 probability values, each in the range 0 to 1, indicating the probability that the corresponding face is each of the ages 1 to 100. That is, for each of face images 1 to 5, the model outputs one 100-dimensional vector that indicates the probability that the face in that image is 1 year old, 2 years old, 3 years old, and so on up to 100 years old.
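A softmax output layer is one standard way to produce the 100-dimensional probability vectors described in step 2); the patent does not name the activation, so this is an assumption:

```python
import math

def softmax(scores):
    """Turn 100 raw model scores into a probability vector whose entries
    lie in [0, 1] and sum to 1 (probabilities of ages 1-100).
    Subtracting the maximum score keeps the exponentials stable."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```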
3) The difference between the output results of the face age recognition model and the age labels is calculated to determine the loss of the age recognition model. The loss represents the accuracy of age recognition: the smaller the loss, the higher the accuracy of the age recognition model and the closer its output is to the real situation. Specifically, for each of the face images 1 to 5, the gap between the age label of the face image and the age detection result of that face image is calculated, yielding gaps 1 to 5; gaps 1 to 5 are then summed and averaged to obtain the loss of the face age recognition model.
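This one-pass loss computation can be sketched as follows. The section only states that a gap between each label and prediction is computed and averaged; the cross-entropy-style gap used here is an assumption, and the function names are illustrative:

```python
import math

def gap(probs, true_age):
    # Gap between the 100-dimensional prediction and the age label.
    # Negative log-probability of the labeled age is one plausible choice;
    # the exact distance measure is an assumption here.
    return -math.log(probs[true_age - 1])  # ages 1..100 map to indices 0..99

def model_loss(batch_probs, batch_ages):
    # Sum the per-image gaps and average them, as in step 3) above.
    gaps = [gap(p, a) for p, a in zip(batch_probs, batch_ages)]
    return sum(gaps) / len(gaps)
```

For a uniform 100-dimensional prediction (every probability 0.01), each gap is -log(0.01), so the averaged loss equals log(100) regardless of the labels.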
4) The internal parameters of the whole face age recognition model are adjusted according to the loss.
The above is a single training and parameter-adjustment pass of the face age recognition model. In the actual training process, a large number of face images are obtained, the real ages corresponding to the face images are used as face age labels, and repeated iterative parameter-adjustment training is carried out. Because a user has only one real age, generally only a face image at a single age can be obtained for each user. As a result, the face age recognition model can only learn the feature commonalities and feature differences between face images of different users at the same age and at different ages; it cannot learn the feature commonalities and feature differences between face images of the same user at different ages, so the recognition effect is poor for people who are young but look old, or who are older but look young.
In view of the above, the technical solution of the present application provides a new method for training a face age recognition model. After a large number of face images of different users are obtained, the face image of each user is subjected to age data enhancement to obtain face images of each user at different ages, and the face age recognition model is trained on these images. In this way, the model can learn not only the feature commonalities between face images of different users at the same age and the feature differences between face images of different users at different ages, but also the feature commonalities and feature differences between face images of the same user at different ages, thereby improving the accuracy of model training and, in turn, the recognition accuracy of the face age recognition model.
The technical scheme of the application is specifically described below.
Referring first to fig. 1, fig. 1 is a flowchart of a training method of an age identification model according to an embodiment of the present application, where the method may be applied to various face recognition devices, as shown in fig. 1, and the method includes the following steps:
s101, acquiring a first face image.
Here, the first face image refers to a sample image for training a face age recognition model. Generally, the number of sample images for training the face age recognition model is multiple, and the sample images are face images of users with different ages. In the actual training process, for each age, a plurality of different face images with the true age being the age are required to be obtained to be used as face sample subsets corresponding to the ages, the face sample subsets corresponding to the ages are combined to obtain a face sample set, and any one sample image in the face sample set can be called a first face image.
In a specific implementation, a face image set may be obtained from various public face databases to serve as a face sample set, so as to obtain a first face image.
S102, performing age data enhancement on the first face image to obtain a plurality of second face images.
In the embodiment of the application, performing age data enhancement on the first face image means performing data enhancement in the age dimension so as to obtain face images of the same person at different ages, i.e., a plurality of second face images; that is, the faces in the plurality of second face images all belong to the same person. For example, assume the acquired first face image is a face image of user A, whose real age is 25 years old. Age data enhancement is performed on the first face image to obtain face images of user A at the ages of 30, 34 and 40. These generated images, together with the face image of user A at the age of 25 (i.e., the first face image), are all referred to as second face images, i.e., the images obtained by performing age data enhancement on the first face image.
It should be understood that in the actual training process, data enhancement is required in the age dimension for each sample image in the face sample set, so as to obtain face images corresponding to a plurality of different ages of each sample image.
In the embodiment of the application, age data enhancement can be performed on the first face image in various ways. For example, interfaces of various image processing software may be invoked to perform age data enhancement on the first face image by increasing/decreasing the degree of wrinkles, increasing/decreasing age spots, increasing/decreasing skin smoothness, adjusting face proportions, and the like; for another example, various software capable of generating face aging change images may be used to enhance the age data of the first face image; for another example, a pre-trained face generation model may be used to perform age data enhancement on the first face image. The above manners are not limiting. In a specific implementation, one of these manners or a combination of several of them can be adopted; by combining manners, age data enhancement can be applied to the face image from different dimensions, increasing the diversity of the face images in age and thereby improving the accuracy of model training.
And S103, training the face age recognition model based on the plurality of second face images, so as to learn the aging change characteristics of the plurality of second face images through the face age recognition model and obtain a target face age recognition model.
In the embodiment of the application, because the plurality of second face images are face images of the same person at different ages, the face age recognition model is trained based on the plurality of second face images, so that aging change characteristics of the plurality of second face images can be learned through the face age recognition model, and feature commonalities and feature differences among the face images of the same user at different ages can be learned. It should be understood that in the actual training process, the face sample set is used for training, and the face sample set contains face images of different users in different ages, so that the face age identification model can learn feature commonalities between face images of different users corresponding to the same age and feature differences between face images of different users corresponding to different ages, which are learned in the general training process. By combining the learned feature commonalities and feature differences, the face image can be distinguished more finely when the face age recognition model is used for face age recognition, so that the face age can be recognized better.
Specifically, training the face age recognition model based on the plurality of second face images refers to calculating the loss of the face age recognition model based on the age labels corresponding to the plurality of second face images and the age prediction results obtained after the plurality of second face images are input into the face age recognition model, and adjusting the face age recognition model based on the loss, iteratively adjusting parameters until the age prediction results output by the face age recognition model are sufficiently close to the age labels. For the course of a single training pass, reference can be made to the examples described above.
In the embodiment of the application, age data enhancement is performed on the acquired first face image to obtain a plurality of second face images that belong to the same person at different ages, and the face age recognition model is then trained based on the plurality of second face images so as to learn the aging change characteristics of the plurality of second face images through the face age recognition model, thereby obtaining the target face age recognition model. Because the plurality of second face images are face images of the same person at different ages, learning their aging change characteristics amounts to learning how the same person's face changes across ages, so the model can learn the feature change information of the same person's face images in the time dimension. By learning this information, the face age recognition model can finely distinguish the features of face images in the time dimension, so the accuracy of age recognition can be improved.
In some possible embodiments, a person's appearance at different ages can be constructed through a model, thereby achieving age data enhancement. Referring to fig. 2, fig. 2 is a flowchart of a method for enhancing age data of a face image according to an embodiment of the present application, where the method is a feasible implementation of the foregoing step S102. As shown in fig. 2, the method includes the following steps:
S201, inputting the first face image and the real face age corresponding to the first face image into a preset face generation model, and generating a plurality of face images which belong to the same person as the first face image and are not the real face age through the face generation model.
Here, the preset face generation model is a model capable of generating a face image of the same person as the original image and having a different age from the original image based on the original image of the face image.
In a possible implementation, the preset face generation model may be a pre-trained generative adversarial network (GAN) model. Specifically, the GAN model may include a generative model (generator) and a discriminative model (discriminator). The generative model is used for generating, based on the input original image, a fake image similar to the original image and outputting it to the discriminative model; the discriminative model is used for examining the fake image generated by the generative model and judging whether the image similar to the original image is a real image. After iterative training of the GAN network, a game equilibrium is formed between the generative model and the discriminative model: the discriminative model has the capability of distinguishing real images from fakes, and the generative model has the capability of generating realistic fake images. In the embodiment of the application, after training, the generative model has the capability of generating a face image of the same person as the face in the original face image. The first face image and the real face age corresponding to it are input into the generative model of the GAN model, and by presetting a specified age, the generative model can output a face image of the same person as the face in the first face image at the specified age. In this way, the generated face image can differ from the first face image in the face characteristics of the age dimension while the face characteristics in the other dimensions remain identical.
S202, determining the plurality of face images and the first face image as a plurality of second face images.
Here, since the first face image is an original image, the plurality of second face images include the first face image and the plurality of face images generated by the model, that is, the face image that most truly reflects the face appearance is included, and when the plurality of second face images are used for training, the model is facilitated to be able to extract the most authentic face features. Alternatively, in other possible cases, the plurality of face images generated by the model may be directly determined as the plurality of second face images, and since the plurality of second face images are generated by the same model, feature commonalities among the plurality of second face images are enhanced, and when the plurality of second face images are used for training, the model is facilitated to better extract the feature commonalities among the plurality of second face images.
In the technical scheme corresponding to fig. 2, a plurality of face images with different real ages corresponding to the first face image are generated through a preset face generation model, so that the generated plurality of face images can be consistent with the real images on other face features except for the dimension of the ages, the generated plurality of face images are ensured to be similar enough mutually, the face age recognition model can learn the feature change information of the same face image in the dimension of the time better, and the model training precision is improved.
In the foregoing technical solution, since the plurality of second face images are face images of the same person at different ages, the plurality of second face images each correspond to one age, and in some possible embodiments, an age difference between two adjacent ages after the ages corresponding to the plurality of second face images are sorted according to the sizes of the ages is a preset difference. That is, the sequence obtained by age-ordered arrangement of the plurality of second face images obtained by age data enhancement is an arithmetic progression. Specifically, the preset difference value may be 1,2, 3, etc., and may be set according to specific situations.
For example, if the preset difference is 2, 4 face images are to be generated, the plurality of second face images include a first face image, and the age corresponding to the first face image is 35 years old, the face images at the ages of 37 years old, 39 years old and 41 years old may be generated based on the age corresponding to the first face image. By generating the face images which are orderly and have the ages distributed in an equidifferent manner, the face images are closer and orderly in ages, so that the face age recognition model can learn some finer characteristic differences of the same face image in the time dimension better, and the face ages corresponding to the face images can be distinguished more accurately when recognition is carried out.
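The arithmetic-progression age selection described above can be sketched as follows (the function name is illustrative):

```python
def enhanced_ages(base_age, preset_diff, num_generated):
    """Ages for the generated face images: an arithmetic progression
    starting from the real age of the first face image."""
    return [base_age + preset_diff * j for j in range(1, num_generated + 1)]
```

For the example above, `enhanced_ages(35, 2, 3)` yields the ages 37, 39 and 41 at which face images are generated.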
In some possible embodiments, in the process of training the face age recognition model, the parameters of the face age recognition model need to be continuously adjusted through multiple rounds of training until the face age recognition model converges. Referring to fig. 3, fig. 3 is a flowchart of another training method of a face age recognition model according to an embodiment of the present application, where the method is a feasible implementation of the foregoing step S103. As shown in fig. 3, the method includes the following steps:
S301, respectively inputting the plurality of second face images into a face age recognition model to respectively extract face features corresponding to the plurality of second face images.
In the embodiment of the application, the face age recognition model can be any model capable of realizing the face age recognition function. In a possible implementation manner, the face age recognition model may include a feature extraction module, where the feature extraction module is configured to extract various face features of the face image, such as color features, texture features, shape features, and so on, and facilitate determining feature commonalities and feature differences from the face features corresponding to each of the plurality of second face images by extracting the face features corresponding to each of the plurality of second face images.
The feature extraction module may be composed of a plurality of convolution layers, where each convolution layer may include a plurality of convolution kernels, each convolution kernel corresponds to an activation function, the number and size of the convolution kernels included in different convolution layers are different, and different convolution kernels may be used to extract different face features.
In a specific embodiment, as shown in fig. 4, the feature extraction module may be composed of 5 convolution layers. The convolution layer CL1 includes 32 convolution kernels and is configured to perform convolution processing on a face image input to the face age recognition model to obtain 32 first face feature maps with a size of 256×256. The convolution layer CL2 is connected to CL1 and includes 64 convolution kernels, and is configured to perform convolution processing on the first face feature maps output by CL1 to obtain 64 second face feature maps with a size of 128×128. The convolution layer CL3 is connected to CL2 and includes 128 convolution kernels, and is configured to perform convolution processing on the second face feature maps output by CL2 to obtain 128 third face feature maps with a size of 64×64. The convolution layer CL4 is connected to CL3 and includes 256 convolution kernels, and is configured to perform convolution processing on the third face feature maps output by CL3 to obtain 256 fourth face feature maps with a size of 32×32. The convolution layer CL5 is connected to CL4 and includes 512 convolution kernels, and is configured to perform convolution processing on the fourth face feature maps output by CL4 to obtain 512 fifth face feature maps with a size of 16×16. As can be seen from fig. 4, as the convolution layers deepen, the size of the feature maps gradually becomes smaller and their number gradually becomes larger, so that the features of each second face image can be extracted comprehensively, the face features that embody the feature commonalities and feature differences of the same person at different ages are determined, and the plurality of second face images can then be finely distinguished based on the extracted face features. The plurality of second face images that are simultaneously input into the face age recognition model all pass through the same feature extraction module, so that the model can learn the common features and the differing features of the same person at different ages.
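The channel counts and spatial sizes through CL1-CL5 can be checked with a small sketch. That CL2-CL5 each halve the spatial size (e.g. via stride-2 convolution or pooling) is an assumption made to reproduce the sizes stated above:

```python
def feature_map_shapes(input_size=256, channels=(32, 64, 128, 256, 512)):
    # CL1 keeps the 256x256 input size; each subsequent layer halves it,
    # ending at 512 feature maps of size 16x16 after CL5.
    shapes, size = [], input_size
    for i, ch in enumerate(channels):
        if i > 0:
            size //= 2
        shapes.append((ch, size, size))
    return shapes
```

Calling `feature_map_shapes()` reproduces the progression 32@256×256 → 64@128×128 → 128@64×64 → 256@32×32 → 512@16×16 described for fig. 4.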
S302, determining age probability predicted values corresponding to the plurality of second face images based on face features corresponding to the plurality of second face images.
In the embodiment of the application, a prediction result, namely an age probability prediction value, can be obtained for each second face image, and the age probability prediction value corresponding to one second face image comprises probabilities that the second face image belongs to each age respectively. The age probability predictor is typically embodied as an n-dimensional vector, n referring to the number of ages within the age range interval to be predicted. Regarding examples of age probability predictors, reference may be made to the description of the foregoing one-time training process.
In one possible implementation, the face features corresponding to each of the plurality of second face images may be processed based on the full connection layer and the softmax layer to determine the age probability predicted values corresponding to the plurality of second face images. The full connection layer converts the face features corresponding to each second face image into an n-dimensional vector, and each value in the n-dimensional vector reflects the likelihood that the second face image belongs to one age; the softmax layer connected to the full connection layer then normalizes the values in the n-dimensional vector (mapping them to the range 0-1 via the exponential function), thereby obtaining the age probability predicted value corresponding to each second face image.
In a specific embodiment, as shown in fig. 4, the full-connection layer and the softmax layer may be connected to the last convolutional layer CL5 in the feature extraction module, where the full-connection layer includes a full-connection matrix of (16×16) ×n, and the full-connection layer is configured to convert the fifth face feature map output by the convolutional layer CL5 into an n-dimensional vector of 1*n, and the softmax layer is configured to normalize the n-dimensional vector to obtain the age probability prediction value.
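The softmax normalization applied to the n-dimensional vector can be sketched as:

```python
import math

def softmax(logits):
    # Exponentiate each score (mapping any real value to a positive number),
    # then divide by the total so the outputs lie in 0..1 and sum to 1.
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

The largest value in the input vector keeps the largest probability after normalization, which is what later allows the predicted age to be read off as the maximum-probability entry.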
And S303, carrying out parameter adjustment on the face age recognition model according to the age probability predicted values corresponding to the second face images until the face age recognition model converges.
In the embodiment of the application, the plurality of second face images correspond to different face ages, and each face image carries an age label for indicating the corresponding face age when training is performed, and the description of the age labels can be seen in the foregoing description. And obtaining age probability predicted values corresponding to the plurality of second face images, and calculating the loss of the face age recognition model according to the age probability predicted values corresponding to the plurality of second face images and the age labels corresponding to the plurality of second face images.
In a possible embodiment, when the plurality of second face images includes the first face image and the ages corresponding to the plurality of second face images are arranged in an equidifferent manner, the manner of calculating the loss of the face age recognition model is as follows:
A1, calculating age classification loss of the face age recognition model according to age probability predicted values corresponding to the plurality of second face images and age labels carried by the plurality of second face images.
Specifically, the formula for calculating the age classification loss is as follows:

L1 = -(1 / (N·k)) · Σ_{i=1..N} Σ_{j=1..k} y_{i,j} · log(p_{i,j})

Where L1 is the age classification loss, N is the number of training samples (i.e., the number of first face images acquired from the aforementioned face sample set in a single training), k is the number of second face images, y_{i,j} is the age label of the j-th second face image corresponding to the i-th training sample, and p_{i,j} is the age probability predicted value corresponding to the j-th second face image corresponding to the i-th training sample. By calculating the age classification loss, the deviation between the predicted age and the true age can be determined.
A2, determining the predicted age of the face corresponding to each of the other face images except the first face image in the plurality of second face images based on the predicted age probability value corresponding to each of the plurality of second face images; and calculating the age prediction loss of the face age recognition model according to the face prediction ages corresponding to the other face images and the real face ages corresponding to the first face image.
Specifically, the formula for calculating the age prediction loss of the face age recognition model is as follows:

L2 = (1 / (N·(k-1))) · Σ_{i=1..N} Σ_{j=1..k-1} | â_{i,j} - (a_i + j·m) |

Where L2 is the age prediction loss, N is the number of training samples, k is the number of second face images, a_i is the real age of the first face image corresponding to the i-th training sample, â_{i,j} is the predicted face age corresponding to the j-th other face image corresponding to the i-th training sample, and m is the preset difference value. Calculating the age prediction loss makes it possible to determine the deviation introduced by the age data enhancement.
And A3, performing parameter adjustment on the face age recognition model according to the age classification loss and the age prediction loss until the face age recognition model converges. By adjusting the parameters of the whole face age recognition model based on the age classification loss and the age prediction loss, the model can learn the common characteristics of faces at each age, the similar characteristics of different faces at the same age, and the feature differences of the same person at different ages.
Specifically, the total loss of the face age recognition model can be calculated by a weighted summation mode, and then the parameter adjustment is carried out on the face age recognition model according to the total loss. The calculation formula of the total loss is as follows:
L=α*L1+β*L2
Wherein L is total loss, and alpha and beta are weights of age classification loss and age prediction loss respectively. Illustratively, α=0.7, β=0.3. By setting different weights for the age classification loss and the age prediction loss, the influence of the age data enhancement on the model loss can be reduced, and the model training precision is improved.
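The two losses and their weighted combination can be sketched as follows. The patent's exact formulas are not reproduced here, so the cross-entropy form for the classification loss and the absolute age deviation for the prediction loss are assumptions, as are the function names:

```python
import math

def age_classification_loss(batch_probs, batch_labels):
    # L1 (assumed form): mean cross-entropy against the labeled age,
    # averaged over all second face images in the batch.
    n = len(batch_probs)
    return -sum(math.log(p[a - 1]) for p, a in zip(batch_probs, batch_labels)) / n

def age_prediction_loss(pred_ages, real_ages, m):
    # L2 (assumed form): mean deviation of each generated image's predicted
    # age from its expected age (real age of the first image plus j*m).
    total, count = 0.0, 0
    for real, preds in zip(real_ages, pred_ages):
        for j, pred in enumerate(preds, start=1):
            total += abs(pred - (real + j * m))
            count += 1
    return total / count

def total_loss(l1, l2, alpha=0.7, beta=0.3):
    # L = alpha*L1 + beta*L2; the smaller beta down-weights the deviation
    # introduced by age data enhancement.
    return alpha * l1 + beta * l2
```

With a real age of 35, preset difference 2, and perfect predictions 37/39/41 for the generated images, the prediction loss is zero, so the total loss reduces to the classification term alone.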
Specifically, after the total loss of the face age recognition model is calculated, the parameters of the face age recognition model can be adjusted based on the total loss using the adaptive moment estimation (Adam) algorithm, and the steps of the methods shown in figs. 1-3 are then repeated until the face age recognition model converges. Convergence of the face age recognition model may mean that its total loss is smaller than a preset threshold, or that the number of parameter adjustments has reached a preset number. It should be noted that adjusting the parameters of the face age recognition model means adjusting the parameters of the feature extraction module and the full connection layer.
S304, determining the converged face age recognition model as a target face age recognition model.
In the technical scheme corresponding to fig. 3, by respectively extracting the face features corresponding to each second face image, the commonality and the difference of the face features of the same person at different ages can be determined, the age probability prediction is performed on each second face image, and the model parameters are adjusted, so that the face age recognition model can learn the aging change information of the same person at different ages.
Alternatively, in some embodiments, the training accuracy may be further improved by adjusting the training images to be "standard faces". Before the age data enhancement is performed on the first face image to obtain a plurality of second face images, the method may further include the steps of:
1. And carrying out coordinate affine transformation on the first face image so as to adjust the face in the first face image into a positive face state.
Specifically, face key points in the first face image may be determined, and left-eye center coordinates and right-eye center coordinates representing the positions of the left-eye center and the right-eye center are determined based on the key points of the face in the first face image; the angle θ by which the face in the first face image is rotated in-plane is then calculated based on the left-eye center coordinate and the right-eye center coordinate, where the calculation formula of the angle θ is as follows:

θ = arctan((y2 - y1) / (x2 - x1))

Where (x1, y1) represents the left-eye center coordinate and (x2, y2) represents the right-eye center coordinate.
Then, the first face image is adjusted according to the rotation transformation matrix so that the face in the first face image is adjusted to the frontal face state. The rotation transformation is as follows:

x' = x·cosθ + y·sinθ + tx
y' = -x·sinθ + y·cosθ + ty

Where (x, y) represents the coordinate value of a pixel point in the first face image, (x', y') represents the coordinate value of the pixel point after the coordinate affine transformation, the rotation direction is clockwise, and tx and ty represent translation values, which may be preset constants, for example both 1.
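A sketch of the eye-angle computation and the per-pixel clockwise rotation follows. The patent's rotation matrix is given as an image, so the clockwise form used here is an assumption, and `atan2` is used instead of a plain arctangent for numerical robustness when the eyes are vertically aligned:

```python
import math

def roll_angle(left_eye, right_eye):
    # In-plane rotation angle theta of the face, from the two eye centers.
    (x1, y1), (x2, y2) = left_eye, right_eye
    return math.atan2(y2 - y1, x2 - x1)

def rotate_point(x, y, theta, tx=0.0, ty=0.0):
    # Clockwise rotation by theta followed by a translation (tx, ty).
    xp = x * math.cos(theta) + y * math.sin(theta) + tx
    yp = -x * math.sin(theta) + y * math.cos(theta) + ty
    return xp, yp
```

Rotating the right-eye point clockwise by the computed angle brings the eye line back to horizontal, which is the "positive face" adjustment described above.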
2. And intercepting the face effective area of the first face image after the coordinate affine transformation, and carrying out normalization processing on the intercepted face image.
Specifically, the nose center position may be determined in the first face image after affine transformation of coordinates, the nose center position is taken as the center, the maximum distance between key points in the first face image after affine transformation is taken as the length, the face image is intercepted from the first face image after affine transformation of coordinates, and then the intercepted face image is adjusted to the size required by the face age recognition model, so as to obtain the normalized face image. For example, as shown in fig. 4, the face age recognition model needs to adjust the captured face image to 256×256.
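The crop region described above, a square centered on the nose whose side equals the maximum key-point distance, can be sketched as (resizing to 256×256 afterwards would be done by an image library and is omitted):

```python
import math

def crop_box(nose_center, keypoints):
    # Square crop centered on the nose center; its side length equals the
    # maximum pairwise distance between the face key points.
    cx, cy = nose_center
    side = max(math.dist(p, q) for p in keypoints for q in keypoints)
    half = side / 2
    return (cx - half, cy - half, cx + half, cy + half)  # left, top, right, bottom
```

The returned box would then be clipped to the image bounds and the cropped region resized to the model's input size.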
The face image is converted into the face image in the face state, and the face area is intercepted, so that the face age recognition model can extract more accurate and effective face features, and the training precision of the face age recognition model is improved.
After the training method of the age identification model provided by the application is adopted, the age identification model can be utilized for face age identification. Referring to fig. 5, fig. 5 is a flow chart of a face age prediction method according to an embodiment of the present application, as shown in fig. 5, the method includes the following steps:
S401, acquiring a face image to be recognized.
Here, the face image to be recognized refers to a face image of the age of the face to be recognized.
S402, inputting the face image to be recognized into a target face age recognition model.
Here, if the acquired face image to be recognized already has a size that the target face age recognition model can accept, it is input into the target face age recognition model directly; if it does not, the face image to be recognized can first be adjusted in the "standard face" manner described above so that the target face age recognition model can recognize it.
S403, outputting the face age corresponding to the face image to be recognized through the target face age recognition model.
After the face image to be recognized is input into the target face age recognition model, the model determines an age probability prediction value corresponding to the image, and the age corresponding to the maximum probability in that prediction value is determined as the face age corresponding to the image. For the way the target face age recognition model determines the age probability prediction value, reference may be made to the description of determining the age probability prediction values of the second face images in steps S301 to S302 above, which is not repeated here.
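The final step of taking the age with the maximum probability can be sketched as follows; the function name and the candidate-age list are illustrative assumptions, not the application's API:

```python
import numpy as np

def predict_age(age_probs, candidate_ages):
    """Return the candidate age whose predicted probability is largest.

    `age_probs[i]` is the model's probability that the face belongs to
    `candidate_ages[i]`; both names are illustrative.
    """
    return candidate_ages[int(np.argmax(age_probs))]
```

For example, `predict_age([0.1, 0.7, 0.2], [20, 21, 22])` returns `21`, the age with the largest predicted probability.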
In the technical scheme corresponding to fig. 5, the target face age recognition model is obtained through training by the method of figs. 1-3. The model learns not only the feature commonalities between face images of different users at the same age and the feature differences between face images of different users at different ages, but also the feature commonalities and differences between face images of the same user at different ages. It can therefore accurately recognize the face age corresponding to a face image and, for special groups such as people who look younger than their actual age or more mature than their actual age, make fine distinctions along the age dimension, thereby achieving a better recognition effect.
The method of the application has been described above; to better carry out the method, the apparatus of the application is described next.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an age identifying model training device according to an embodiment of the present application, as shown in fig. 6, the device 50 includes:
a first obtaining module 501, configured to obtain a first face image;
The data enhancement module 502 is configured to perform age data enhancement on the first face image to obtain a plurality of second face images, where the faces in the plurality of second face images and the face in the first face image belong to the same person, and the plurality of second face images are face images of that person at different ages;
the training module 503 is configured to train the face age recognition model based on the plurality of second face images, so as to learn aging characteristics of the plurality of second face images through the face age recognition model, and obtain a target face age recognition model.
In one possible design, the data enhancement module 502 is specifically configured to: inputting the first face image and the real face age corresponding to the first face image into a preset face generation model, and generating a plurality of face images which belong to the same person as the first face image and are not of the real face age through the face generation model; and determining the plurality of face images and the first face image as the plurality of second face images.
In one possible design, the age difference between two adjacent ages after the ages of the faces corresponding to the plurality of second face images are arranged in sequence is a preset difference. The face images which are orderly and have the ages distributed in the equal difference mode are generated, so that the face age identification model is facilitated to learn the characteristic change information of the same face image in the time dimension better.
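A sketch of generating such an arithmetic sequence of target ages is shown below; the common difference, the number of images, and the valid age range are illustrative assumptions, not values fixed by the application:

```python
def enhancement_ages(real_age, step=5, count=5, lo=1, hi=100):
    """Target ages for age data enhancement, arranged in an arithmetic
    progression around the real age with common difference `step`.
    All parameter values here are illustrative assumptions.
    """
    half = count // 2
    ages = [real_age + step * k for k in range(-half, count - half)]
    # Keep only plausible face ages.
    return [a for a in ages if lo <= a <= hi]
```

For example, `enhancement_ages(30)` yields `[20, 25, 30, 35, 40]`: the ages are in order and each adjacent pair differs by the preset difference of 5.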
In one possible design, the training module 503 is specifically configured to: respectively inputting the plurality of second face images into the face age recognition model to respectively extract face features corresponding to the plurality of second face images; determining age probability predicted values corresponding to the plurality of second face images based on face features corresponding to the plurality of second face images, wherein the age probability predicted value corresponding to one second face image comprises the probability that a face in the one second face image belongs to each face age; and according to the age probability predicted values corresponding to the plurality of second face images, carrying out parameter adjustment on the face age recognition model until the face age recognition model converges, and determining the converged face age recognition model as the target face age recognition model.
In one possible design, the plurality of second face images includes the first face image; the plurality of second face images all carry age labels; the training module 503 is specifically configured to: according to the age probability predicted value corresponding to each of the plurality of second face images and the age labels carried by each of the plurality of second face images, calculating age classification loss of the face age recognition model; according to the age probability predicted values corresponding to the second face images, determining the face predicted ages corresponding to the face images except the first face image in the second face images; according to the face predicted ages corresponding to the other face images and the real face ages corresponding to the first face image, calculating the age predicted loss of the face age recognition model; and carrying out parameter adjustment on the face age identification model according to the age classification loss and the age prediction loss.
In one possible design, the training module 503 is specifically configured to: the age classification loss and the age prediction loss are weighted and summed to obtain the total loss of the face age recognition model; and carrying out parameter adjustment on the face age identification model according to the total loss.
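As a sketch, the combined loss described in the two designs above might look like the following. The application does not fix the exact form of the age prediction loss or the weight values; the cross-entropy classification loss, the L1 prediction loss, and the weights used here are assumptions:

```python
import numpy as np

def total_loss(age_probs, age_labels, pred_ages, target_ages,
               w_cls=1.0, w_pred=0.5):
    """Weighted sum of age classification loss and age prediction loss.

    `age_probs` is (N, A): per-image probabilities over A candidate ages;
    `age_labels` is (N,): the index of each image's age label;
    `pred_ages` / `target_ages` are the predicted and target ages of the
    generated images. The cross-entropy / L1 loss forms and the weights
    are illustrative assumptions.
    """
    age_probs = np.asarray(age_probs, dtype=float)
    idx = np.arange(len(age_labels))
    # Age classification loss: mean negative log-probability of the label.
    cls_loss = -np.mean(np.log(age_probs[idx, age_labels] + 1e-12))
    # Age prediction loss: mean absolute error of the predicted ages.
    pred_loss = np.mean(np.abs(np.asarray(pred_ages, dtype=float)
                               - np.asarray(target_ages, dtype=float)))
    return w_cls * cls_loss + w_pred * pred_loss
```

Parameter adjustment would then back-propagate this scalar total loss, with the weights balancing the two objectives.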
In one possible design, the apparatus 50 further includes: an affine transformation module 504, configured to perform coordinate affine transformation on the first face image to adjust the face in the first face image to a frontal face state; and a normalization processing module 505, configured to crop the effective face region from the first face image after the coordinate affine transformation and normalize the cropped face region to obtain a normalized face image.
It should be noted that, what is not mentioned in the embodiment corresponding to fig. 6 may refer to the description of the method embodiment corresponding to fig. 1 to 4, and will not be repeated here.
According to the device, age data enhancement is performed on the acquired first face image to obtain a plurality of second face images that belong to the same person and show that person at different ages, and the face age recognition model is then trained on the plurality of second face images so that it learns their aging change characteristics, yielding the target face age recognition model. Because the second face images show the same person at different ages, learning their aging change characteristics means learning how the same person's face changes across ages, that is, the feature change information of the person's face images in the time dimension. By learning this feature change information, the face age recognition model can make fine distinctions among face image features in the time dimension, which improves the accuracy of age recognition.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a face age identifying device according to an embodiment of the present application, as shown in fig. 7, the device 60 includes:
A second acquiring module 601, configured to acquire a face image to be identified;
The input module 602 is configured to input the face image to be identified to a target face age identification model, where the target face age identification model is obtained by training the face age identification model according to the training method of the first aspect;
and the prediction module 603 is configured to determine, according to the target face age recognition model, a face age corresponding to the face image to be recognized.
It should be noted that, what is not mentioned in the embodiment corresponding to fig. 7 may refer to the description of the method embodiment corresponding to fig. 5, and will not be repeated here.
The target face age recognition model in the device learns not only the feature commonalities between face images of different users at the same age and the feature differences between face images of different users at different ages, but also the feature commonalities and differences between face images of the same user at different ages. It can therefore accurately recognize the face age corresponding to a face image and, for special groups such as people who look younger than their actual age or more mature than their actual age, make fine distinctions along the age dimension, so that a better recognition effect can be achieved.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present application, and the computer device 70 includes a processor 701 and a memory 702. The processor 701 is connected to a memory 702, for example the processor 701 may be connected to the memory 702 by a bus.
The processor 701 is configured to support the computer device 70 in performing the corresponding functions in the methods of figs. 1-4 or the method of fig. 5. The processor 701 may be a central processing unit (CPU), a network processor (NP), a hardware chip, or any combination thereof. The hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
The memory 702 is used for storing program code and the like. The memory 702 may include volatile memory (VM), such as random access memory (RAM); the memory 702 may also include non-volatile memory (NVM), such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 702 may also include a combination of the above types of memory.
In some possible cases, the processor 701 may call the program code to:
Acquiring a first face image;
Performing age data enhancement on the first face image to obtain a plurality of second face images, wherein faces in the second face images and faces in the first face image belong to the same person, and the second face images are face images of the same person at different ages respectively;
training a face age recognition model based on the plurality of second face images to learn aging change characteristics of the plurality of second face images through the face age recognition model, so as to obtain a target face age recognition model.
In other possible cases, the processor 701 may call the program code to:
Acquiring a face image to be identified;
inputting the face image to be identified into a target face age identification model, wherein the target face age identification model is obtained by training the face age identification model training method in the first aspect;
And outputting the face age corresponding to the face image to be recognized through the target face age recognition model.
It should be noted that, implementation of each operation may also correspond to the corresponding description referring to the above method embodiment; the processor 701 may also cooperate with other functional hardware to perform other operations in the method embodiments described above.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program comprising program instructions that, when executed by a computer, cause the computer to perform the method of the previous embodiments.
Those skilled in the art will appreciate that all or part of the methods in the above embodiments may be implemented by a computer program stored in a computer-readable storage medium which, when executed, may include the steps of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
The foregoing disclosure is illustrative of the present application and is not to be construed as limiting the scope of the application, which is defined by the appended claims.
Claims (10)
1. A method for training an age identification model, comprising:
Acquiring a first face image;
performing age data enhancement on the first face image to obtain a plurality of second face images, wherein faces in the second face images and faces in the first face image belong to the same person, and the second face images are respectively face images of the same person at different ages;
training a face age recognition model based on the plurality of second face images to learn aging change characteristics of the plurality of second face images through the face age recognition model so as to obtain a target face age recognition model;
the step of performing age data enhancement on the first face image to obtain a plurality of second face images includes:
Inputting the first face image and the real face age corresponding to the first face image into a preset face generation model, and generating a plurality of face images which belong to the same person as the first face image and are not of the real face age through the face generation model;
Determining the plurality of face images and the first face image as the plurality of second face images;
Training the face age recognition model based on the plurality of second face images to learn aging change characteristics of the plurality of second face images through the face age recognition model, thereby obtaining a target face age recognition model, comprising:
Respectively inputting the plurality of second face images into the face age recognition model to respectively extract face features corresponding to the plurality of second face images;
Determining age probability predicted values corresponding to the plurality of second face images based on face features corresponding to the plurality of second face images, wherein the age probability predicted value corresponding to one second face image comprises the probability that a face in the one second face image belongs to each face age;
And carrying out parameter adjustment on the face age recognition model according to the age probability predicted values corresponding to the second face images until the face age recognition model converges, and determining the converged face age recognition model as the target face age recognition model.
2. The method of claim 1, wherein the age difference between two adjacent ages after the ages of the faces corresponding to the plurality of second face images are sequentially arranged is a preset difference.
3. The method of claim 1, wherein the plurality of second face images includes the first face image; the plurality of second face images all carry age labels;
the step of performing parameter adjustment on the face age recognition model according to the age probability predicted values corresponding to the plurality of second face images respectively comprises the following steps:
according to the age probability predicted values corresponding to the second face images and the age labels carried by the second face images, calculating the age classification loss of the face age recognition model;
According to the age probability predicted values corresponding to the second face images, determining the face predicted ages corresponding to the face images except the first face image in the second face images;
According to the human face predicted ages corresponding to the other human face images and the real human face ages corresponding to the first human face image, calculating the age predicted loss of the human face age recognition model;
And carrying out parameter adjustment on the face age identification model according to the age classification loss and the age prediction loss.
4. A method according to claim 3, wherein said parameter adjustment of said face age recognition model based on said age classification loss and said age prediction loss comprises:
Carrying out weighted summation on the age classification loss and the age prediction loss to obtain the total loss of the face age recognition model;
and carrying out parameter adjustment on the face age identification model according to the total loss.
5. The method of claim 1, further comprising, prior to age data enhancement of the first face image to obtain a plurality of second face images:
Carrying out coordinate affine transformation on the first face image so as to adjust the face in the first face image into a frontal face state;
and intercepting the face effective area of the first face image after the coordinate affine transformation, and carrying out normalization processing on the face area obtained by interception.
6. A face age identification method, comprising:
Acquiring a face image to be identified;
Inputting the face image to be recognized into a target face age recognition model, wherein the target face age recognition model is obtained by training the face age recognition model according to any one of claims 1-5;
And outputting the face age corresponding to the face image to be recognized through the target face age recognition model.
7. An age identification model training device, comprising:
The first acquisition module is used for acquiring a first face image;
The data enhancement module is used for enhancing the age data of the first face image to obtain a plurality of second face images, wherein faces in the second face images and faces in the first face image belong to the same person, and the second face images are face images of the same person at different ages respectively;
The training module is used for training the face age recognition model based on the plurality of second face images so as to learn aging change characteristics of the plurality of second face images through the face age recognition model and obtain a target face age recognition model;
the data enhancement module is specifically configured to:
Inputting the first face image and the real face age corresponding to the first face image into a preset face generation model, and generating a plurality of face images which belong to the same person as the first face image and are not of the real face age through the face generation model;
Determining the plurality of face images and the first face image as the plurality of second face images;
the training module is specifically used for:
Respectively inputting the plurality of second face images into the face age recognition model to respectively extract face features corresponding to the plurality of second face images;
Determining age probability predicted values corresponding to the plurality of second face images based on face features corresponding to the plurality of second face images, wherein the age probability predicted value corresponding to one second face image comprises the probability that a face in the one second face image belongs to each face age;
And carrying out parameter adjustment on the face age recognition model according to the age probability predicted values corresponding to the second face images until the face age recognition model converges, and determining the converged face age recognition model as the target face age recognition model.
8. A face age recognition apparatus, comprising:
The second acquisition module is used for acquiring the face image to be identified;
The input module is used for inputting the face image to be recognized into a target face age recognition model, and the target face age recognition model is obtained by training the face age recognition model according to the training method of any one of claims 1-5;
And the prediction module is used for determining the face age corresponding to the face image to be recognized through the target face age recognition model.
9. A computer device comprising a memory and one or more processors configured to execute one or more computer programs stored in the memory, which when executed, cause the computer device to implement the method of any of claims 1-5 or claim 6.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1-5 or claim 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110317299.3A CN113076833B (en) | 2021-03-25 | 2021-03-25 | Training method of age identification model, face age identification method and related device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113076833A CN113076833A (en) | 2021-07-06 |
CN113076833B true CN113076833B (en) | 2024-05-31 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019109526A1 (en) * | 2017-12-06 | 2019-06-13 | 平安科技(深圳)有限公司 | Method and device for age recognition of face image, storage medium |
CN111209878A (en) * | 2020-01-10 | 2020-05-29 | 公安部户政管理研究中心 | Cross-age face recognition method and device |
CN111401339A (en) * | 2020-06-01 | 2020-07-10 | 北京金山云网络技术有限公司 | Method and device for identifying age of person in face image and electronic equipment |
CN112183326A (en) * | 2020-09-27 | 2021-01-05 | 深圳数联天下智能科技有限公司 | Face age recognition model training method and related device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110532965B (en) * | 2019-08-30 | 2022-07-26 | 京东方科技集团股份有限公司 | Age identification method, storage medium and electronic device |
Non-Patent Citations (2)
Title |
---|
Research on Region-based Age Estimation Models; Sun Jinguang; Rong Wenzhao; Computer Science (08); full text *
Face Age Estimation Method Based on Convolutional Neural Networks; Yang Guoliang; Zhang Yu; Journal of Beijing Union University (01); full text *
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||