CN112990145B - Group-sparse-based age estimation method and electronic equipment - Google Patents


Info

Publication number
CN112990145B
CN112990145B (application CN202110487414.1A)
Authority
CN
China
Prior art keywords
face
picture
side face
age
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110487414.1A
Other languages
Chinese (zh)
Other versions
CN112990145A (en)
Inventor
苏旋
郭轩
魏凤仙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guanchuan Network Technology Nanjing Co ltd
Original Assignee
Guanchuan Network Technology Nanjing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guanchuan Network Technology Nanjing Co ltd filed Critical Guanchuan Network Technology Nanjing Co ltd
Priority to CN202110487414.1A priority Critical patent/CN112990145B/en
Publication of CN112990145A publication Critical patent/CN112990145A/en
Application granted granted Critical
Publication of CN112990145B publication Critical patent/CN112990145B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/178 Human faces, e.g. facial parts, sketches or expressions: estimating age from face image; using age information for improving recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a group-sparsity-based age estimation method and an electronic device. The method extracts features from a preprocessed front face picture and side face picture, fuses the two sets of features based on group sparse theory, performs feature dimensionality reduction, and classifies the result to obtain an age expectation that completes the age estimation, improving both the accuracy and the efficiency of age estimation.

Description

Group-sparse-based age estimation method and electronic equipment
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a group-sparse-based age estimation method and electronic equipment.
Background
Predicting a person's apparent age from a face image is a classic problem in computer vision and artificial intelligence. Prior knowledge from bioinformatics tells us that face images contain rich age information, which opens a new avenue for the age estimation task, especially when only large-angle side faces are available and the facial information is incomplete.
In recent years, methods based on convolutional neural networks have been widely used for age classification. These studies typically use facial images taken at or near a frontal view; side-face images are rarely used for age estimation, which makes the estimated age inaccurate. Moreover, computation with a convolutional neural network has high computational complexity and low efficiency.
The prior art therefore lacks a method that fully exploits the available face information to estimate age accurately and quickly.
Disclosure of Invention
To solve the problems of insufficient accuracy and low computational efficiency of age estimation in the prior art, the present application provides an age identification method based on group sparsity.
The technical scheme adopted by the invention for solving the technical problems is as follows:
An age estimation method based on group sparsity, comprising the steps of: S1: obtaining a feature extraction model through training; S2: acquiring a front face picture and a side face picture; S3: inputting the front face picture and the side face picture into the feature extraction model respectively, and extracting features of the front face and the side face to obtain a front face feature vector and a side face feature vector; S4: fusing the front face feature vector and the side face feature vector using a group sparse algorithm and reducing the dimensionality to complete the age estimation.
Further, the feature extraction model specifically includes a forward feature extraction model and a lateral feature extraction model. Obtaining the feature extraction model through training includes: acquiring front-face pictures and side-face pictures from the network; recognizing each front-face picture with the shape_predictor_68_face_landmarks.dat model, calculating the coordinate positions of 68 forward feature points, and cropping the forward picture from the front-face picture according to those coordinates; marking the side face in each side-face picture with a labeling tool and training a side face detector with YOLOv4; screening side-face pictures containing side face information with that detector, recognizing them with the shape_predictor_68_face_landmarks.dat model, calculating the coordinate positions of 68 lateral feature points, and cropping the side-face picture according to those coordinates; dividing the obtained forward and lateral pictures into a training set, a verification set and a test set in the ratio 8:1:1; and training the forward and lateral feature extraction models with the training set.
Further, recognizing the front-face picture with the shape_predictor_68_face_landmarks.dat model and calculating the coordinate positions of the 68 forward feature points proceeds as follows: a. acquire a face detector, a forward feature point detection model, an input path and an output path; b. read a front-face picture from the input path; c. detect the picture with the face detector and judge whether it contains multiple faces; if it does, perform step d, and if there is only one face, skip to step e; d. enlarge the front-face picture by a factor of 2; e. detect the forward feature points with the forward feature point detection model and calculate their coordinate positions; f. output the coordinate positions to the output path. Recognizing the side-face picture with the shape_predictor_68_face_landmarks.dat model and calculating the coordinate positions of the 68 lateral feature points proceeds as follows: g. acquire a face detector, a lateral feature point detection model, an input path and an output path; h. read a side-face picture from the input path; i. detect the picture with the face detector and judge whether it contains multiple side faces; if it does, perform step j, and if there is only one side face, skip to step k; j. enlarge the side-face picture by a factor of 2; k. detect the lateral feature points with the lateral feature point detection model and calculate their coordinate positions; l. output the coordinate positions to the output path.
Preferably, after the front face feature vector and the side face feature vector are obtained, the front face feature vector and the side face feature vector are normalized respectively.
Further, determining the age prediction values corresponding to the front face picture and the side face picture with the group sparse algorithm further comprises: reducing the dimensionality of the features with the group sparse algorithm and sending the reduced features to a fully connected layer; measuring the distance from each feature to its relevant class center with a center loss function, calculating the center loss, and supplementing the center loss function with a softmax loss function; and outputting a probability value for each age label from 0 to 100, sorting the labels by probability from high to low, selecting the ten age labels with the highest probabilities, and calculating the age prediction value from the ages corresponding to those labels and their probabilities as

E(Y) = Σ_{i=1}^{10} Y_i · e^{Z_i} / Σ_j e^{Z_j}

where E(Y) denotes the age prediction value, Y_i the age corresponding to the i-th of the ten most probable age labels, and Z_i, Z_j the output values of the neural network neurons.
It is another object of the embodiments of the invention to provide an electronic device that includes a memory and at least one processor, with a computer program implementing the aforementioned method stored in the memory and configured to be executed by the at least one processor.
Compared with the prior art, the method extracts features from the front face picture and the side face picture, fuses the two sets of features based on group sparse theory, performs feature dimensionality reduction, obtains classification scores by age classification, and finally computes the age expectation to complete the age estimation. This greatly improves the accuracy and efficiency of age estimation and remedies the shortcomings of earlier research.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention.
Fig. 2 is a system architecture framework diagram of an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the present invention will be further described in detail with reference to the embodiments of the present invention and the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, belong to the scope of the present invention.
An age estimation method based on group sparsity is provided in embodiment 1 of the present application, as shown in fig. 1, fig. 2 is a structural block diagram of an electronic device using the method of the present invention, and the main steps are as follows:
s1: obtaining a feature extraction model through training;
firstly, a face front face picture and a face side face picture are obtained from a network, and the obtained pictures are used as training samples for training. And carrying out 68-point calibration on the obtained face front face picture by using a shape _ predictor _68_ face _ landworks. dat model trained by a Dlib library to obtain the coordinate information of 68 forward feature points. Specifically, the coordinate information of the 68 forward feature points can be written as FF (x1, y1), FF (x2, y2), … and FF (x68, y68), and then the forward picture is cut out from the face forward picture according to the coordinate information of the 68 forward feature points.
Second, the side face is marked in each side-face picture with a labeling tool, and a side face detector is trained with YOLOv4. This detector screens out side-face pictures with a clear side-face image, and the shape_predictor_68_face_landmarks.dat model performs a 68-point calibration on them, yielding the coordinates of 68 lateral feature points. Specifically, these coordinates can be written as SF(x1, y1), SF(x2, y2), ..., SF(x68, y68); the lateral picture is then cropped from the side-face picture according to these coordinates.
Recognizing the front-face picture with the shape_predictor_68_face_landmarks.dat model and calculating the coordinate positions of the 68 forward feature points comprises the following steps:
a. acquiring a face detector, a face forward characteristic point detection model, an input path and an output path;
b. reading a face front picture according to an input path;
c. detecting a face front picture by using a face detector, and judging whether the face front picture has a plurality of faces; if a plurality of faces exist, performing the step d, and if only one face exists, skipping to the step e;
d. enlarging the front-face picture by a factor of at least 2; the magnification may be 3 to 10 times;
e. detecting forward characteristic points in the face image by using a face forward characteristic point detection model, and calculating the coordinate positions of the forward characteristic points;
f. outputting the coordinate position of the forward characteristic point according to an output path;
Recognizing the side-face picture with the shape_predictor_68_face_landmarks.dat model and calculating the coordinate positions of the 68 lateral feature points comprises the following steps:
g. acquiring a face detector, a face lateral characteristic point detection model, an input path and an output path;
h. reading a side face picture according to an input path;
i. detecting a side face picture by using a human face detector, and judging whether the side face picture has a plurality of side faces; if a plurality of side faces exist, performing the step j, and if only one side face exists, skipping to the step k;
j. enlarging the side-face picture by a factor of at least 2; the magnification may be 3 to 10 times;
k. detecting lateral characteristic points in the face picture by using a face lateral characteristic point detection model, and calculating the coordinate positions of the lateral characteristic points;
and l, outputting the coordinate position of the lateral characteristic point according to the output path.
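Steps a-l above follow the same control flow for front and side pictures, and can be sketched as follows. The detector, feature-point model and enlargement routine are passed in as callables standing in for the Dlib face detector, the landmark model and an image-resize function; all names here are illustrative:

```python
def locate_feature_points(picture, detect_faces, detect_points, enlarge):
    """Steps a-f (or g-l): detect faces, enlarge the picture when more
    than one face is present, then compute feature-point coordinates."""
    faces = detect_faces(picture)            # step c / i
    if len(faces) > 1:                       # multiple faces detected
        picture = enlarge(picture, 2)        # step d / j: enlarge 2x
        faces = detect_faces(picture)
    return detect_points(picture, faces)     # steps e-f / k-l
```

Reading the picture from the input path (steps b/h) and writing the coordinates to the output path (steps f/l) would wrap this call.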
Finally, dividing the obtained forward pictures and the obtained lateral pictures into a training set, a verification set and a test set according to the proportion respectively; the ratio is 8:1: 1; and respectively training a forward characteristic extraction model and a lateral characteristic extraction model by using the training set.
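The 8:1:1 partition can be sketched as below; the fixed shuffle seed is an assumption, made only so the split is reproducible:

```python
import random

def split_8_1_1(samples, seed=0):
    """Shuffle the samples and split them into training, verification
    and test sets in the ratio 8:1:1."""
    rng = random.Random(seed)
    s = list(samples)
    rng.shuffle(s)
    n_train = int(len(s) * 0.8)
    n_val = int(len(s) * 0.1)
    return s[:n_train], s[n_train:n_train + n_val], s[n_train + n_val:]

train, val, test = split_8_1_1(range(100))
# len(train), len(val), len(test) == (80, 10, 10)
```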
By judging from the recognized front-face and side-face pictures whether there are multiple front/side faces to identify and selecting the corresponding processing mode, the preprocessing is performed early in the computation, which improves the overall recognition efficiency. This preprocessing also avoids converting side-face images into front-face images and reduces the complexity of fusing the two. In addition, enlarging the side face in this step overcomes the low side-face recognition accuracy of the prior art.
S2: acquiring a front face picture and a side face picture;
the front face picture and the side face picture of the same person are obtained from common face data sets such as Olivetti Faces, CFP, IMDB-wiki.
S3: and respectively inputting the front face picture and the side face picture into a feature extraction model, and respectively extracting features of the front face and the side face to obtain a front face feature vector and a side face feature vector.
Firstly, inputting a frontal face picture of a person into a forward characteristic extraction model, and performing characteristic extraction on the frontal face picture to obtain a frontal face characteristic vector; and inputting the side face picture of the person into a side feature extraction model, and performing feature extraction on the side face picture to obtain a side face feature vector. The feature extraction of the picture is realized through a convolutional neural network to obtain a corresponding feature vector.
Then, the obtained front face feature vector and side face feature vector are each normalized.
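The source does not name the normalization; assuming the common choice of scaling each feature vector to unit L2 norm, the step can be sketched as:

```python
import math

def l2_normalize(vec, eps=1e-12):
    """Scale a feature vector to (approximately) unit L2 norm; eps
    guards against division by zero for an all-zero vector."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / (norm + eps) for x in vec]

front = l2_normalize([3.0, 4.0])   # illustrative front face feature vector
# front ≈ [0.6, 0.8]
```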
S4: and fusing the front face feature vector and the side face feature vector by using a group sparse algorithm and reducing the dimension to finish the age estimation.
First, the normalized front face and side face feature vectors of the same person are fused; the group sparse algorithm then reduces the dimensionality of the fused features, which are sent to a fully connected layer of the convolutional neural network.
A center loss function measures the distance from each feature to its relevant class center and the center loss is calculated; a softmax loss function supplements the center loss function.
Finally, a probability value is output for each age label from 0 to 100. The labels are sorted by probability from high to low, the ten age labels with the highest probabilities are selected, and the age prediction value is calculated from the ages corresponding to those labels and their probabilities as

E(Y) = Σ_{i=1}^{10} Y_i · e^{Z_i} / Σ_j e^{Z_j}

where E(Y) denotes the age prediction value, Y_i the age corresponding to the i-th of the ten most probable age labels, and Z_i, Z_j the output values of the neural network neurons.
Because the technical scheme obtains the front-face and side-face pictures through preprocessing before fusion, the time complexity of the fusion step is low and the speed of the data fusion is improved.
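The patent does not spell out the group sparse algorithm used in step S4; one common realization of group-sparse dimensionality reduction is group-lasso shrinkage, sketched below. The partition of the fused front/side features into groups and the threshold `lam` are assumptions made for illustration:

```python
import math

def group_soft_threshold(groups, lam):
    """Proximal operator of the group-lasso penalty: zero out whole
    feature groups whose L2 norm is at most lam, shrink the rest."""
    out = []
    for g in groups:
        norm = math.sqrt(sum(x * x for x in g))
        if norm <= lam:
            out.append([0.0] * len(g))       # group is dropped entirely
        else:
            scale = 1.0 - lam / norm         # surviving groups shrink
            out.append([scale * x for x in g])
    return out

# Fused features split into two groups; the weak group vanishes, which
# is a dimensionality reduction in the group-sparse sense:
reduced = group_soft_threshold([[3.0, 4.0], [0.1, 0.1]], lam=1.0)
# reduced == [[2.4, 3.2], [0.0, 0.0]] (up to floating point)
```

Dropping whole groups rather than individual coordinates is what distinguishes group sparsity from ordinary sparsity: either all features from one source block survive together or none do.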
Based on the same inventive concept as embodiment 1, embodiment 2 of the present invention provides an electronic device comprising a memory, one or more processors, and one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for:
obtaining a feature extraction model through training;
firstly, a face front face picture and a face side face picture are obtained from a network, and the obtained pictures are used as training samples for training. And carrying out 68-point calibration on the obtained face front face picture by using a shape _ predictor _68_ face _ landworks. dat model trained by a Dlib library to obtain the coordinate information of 68 forward feature points. Specifically, the coordinate information of the 68 forward feature points can be written as FF (x1, y1), FF (x2, y2), … and FF (x68, y68), and then the forward picture is cut out from the face forward picture according to the coordinate information of the 68 forward feature points.
Secondly, marking out a side face from the side face picture of the human face by using a marking tool, and training a side face detector by using yolov 4; and screening a side face picture with a clear side face image from the obtained side face picture of the human face by using a side face detector, and carrying out 68-point calibration on the obtained side face picture of the human face by using a shape _ predictor _68_ face _ landworks. Specifically, the coordinate information of the 68 lateral feature points can be written as SF (x1, y1), SF (x2, y2), … and SF (x68, y68), and then the lateral image is cut out from the human face and lateral image according to the coordinate information of the 68 lateral feature points.
The method comprises the following steps of identifying a face front image by using a shape _ predictor _68_ face _ landworks. dat model, and calculating the coordinate positions of 68 forward feature points:
a. acquiring a face detector, a face forward characteristic point detection model, an input path and an output path; b. reading a face front picture according to an input path;
c. detecting a face front picture by using a face detector, and judging whether the face front picture has a plurality of faces; if a plurality of faces exist, performing the step d, and if only one face exists, skipping to the step e;
d. amplifying the face front picture by 2 times;
e. detecting forward characteristic points in the face image by using a face forward characteristic point detection model, and calculating the coordinate positions of the forward characteristic points;
f. outputting the coordinate position of the forward characteristic point according to an output path;
Recognizing the side-face picture with the shape_predictor_68_face_landmarks.dat model and calculating the coordinate positions of the 68 lateral feature points comprises the following steps:
g. acquiring a face detector, a face lateral characteristic point detection model, an input path and an output path;
h. reading a side face picture according to an input path;
i. detecting a side face picture by using a human face detector, and judging whether the side face picture has a plurality of side faces; if a plurality of side faces exist, performing the step j, and if only one side face exists, skipping to the step k;
j. amplifying the side face picture by 2 times;
k. detecting lateral characteristic points in the face picture by using a face lateral characteristic point detection model, and calculating the coordinate positions of the lateral characteristic points;
and l, outputting the coordinate position of the lateral characteristic point according to the output path.
Finally, dividing the obtained forward pictures and the obtained lateral pictures into a training set, a verification set and a test set according to the proportion respectively; the ratio is 8:1: 1; and respectively training a forward characteristic extraction model and a lateral characteristic extraction model by using the training set.
By judging from the recognized front-face and side-face pictures whether there are multiple front/side faces to identify and selecting the corresponding processing mode, the preprocessing is performed early in the computation, which improves the overall recognition efficiency. This preprocessing also avoids converting side-face images into front-face images and reduces the complexity of fusing the two. In addition, enlarging the side face in this step overcomes the low side-face recognition accuracy of the prior art.
Acquiring a front face picture and a side face picture;
the front face picture and the side face picture of the same person are obtained from common face data sets such as Olivetti Faces, CFP, IMDB-wiki.
And respectively inputting the front face picture and the side face picture into a feature extraction model, and respectively extracting features of the front face and the side face to obtain a front face feature vector and a side face feature vector.
Firstly, inputting a frontal face picture of a person into a forward characteristic extraction model, and performing characteristic extraction on the frontal face picture to obtain a frontal face characteristic vector; and inputting the side face picture of the person into a side feature extraction model, and performing feature extraction on the side face picture to obtain a side face feature vector. The feature extraction of the picture is realized through a convolutional neural network to obtain a corresponding feature vector.
Then, the obtained front face feature vector and side face feature vector are each normalized.
And fusing the front face feature vector and the side face feature vector by using a group sparse algorithm and reducing the dimension to finish the age estimation.
First, the normalized front face and side face feature vectors of the same person are fused; the group sparse algorithm then reduces the dimensionality of the fused features, which are sent to a fully connected layer of the convolutional neural network.
A center loss function measures the distance from each feature to its relevant class center and the center loss is calculated; a softmax loss function supplements the center loss function.
Finally, a probability value is output for each age label from 0 to 100. The labels are sorted by probability from high to low, the ten age labels with the highest probabilities are selected, and the age prediction value is calculated from the ages corresponding to those labels and their probabilities as

E(Y) = Σ_{i=1}^{10} Y_i · e^{Z_i} / Σ_j e^{Z_j}

where E(Y) denotes the age prediction value, Y_i the age corresponding to the i-th of the ten most probable age labels, and Z_i, Z_j the output values of the neural network neurons.
Because the technical scheme obtains the front-face and side-face pictures through preprocessing before fusion, the time complexity of the fusion step is low and the speed of the data fusion is improved.
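The center loss and its softmax supplement, as described in both embodiments, can be sketched per sample as follows; the weighting factor between the two losses is an assumed hyper-parameter, not a value given in the source:

```python
import math

def softmax_cross_entropy(logits, label):
    """Softmax (cross-entropy) loss for one sample."""
    m = max(logits)                      # numerical stabilization
    exps = [math.exp(z - m) for z in logits]
    return -math.log(exps[label] / sum(exps))

def center_loss(feature, center):
    """Half the squared L2 distance from a feature to its class center."""
    return 0.5 * sum((f - c) ** 2 for f, c in zip(feature, center))

def joint_loss(logits, label, feature, center, weight=0.5):
    """Softmax loss supplemented by the weighted center loss."""
    return softmax_cross_entropy(logits, label) + weight * center_loss(feature, center)
```

During training, the class centers would be updated alongside the network parameters, as in the original center-loss formulation.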
Tests were performed according to the examples provided by the invention and compared with the prior art. [Comparison table omitted.] According to the test results, both the accuracy and the computational efficiency of the age prediction are markedly improved, providing a better choice for age identification.
The invention provides a group-sparsity-based age estimation method and an electronic device. By extracting features from the front face picture and the side face picture, fusing the two sets of features based on group sparse theory, performing feature dimensionality reduction, obtaining classification scores by age classification, and finally computing the age expectation to complete the age estimation, the invention greatly improves the accuracy and efficiency of age estimation.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between them. Furthermore, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a(n) …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises it. In addition, parts of the technical solutions provided in the embodiments of the present application whose implementation principles match corresponding prior-art solutions are not described in detail, to avoid redundancy.
The principles and embodiments of the present invention are explained herein using specific examples, which are presented only to assist in understanding the method and its core concepts. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.

Claims (4)

1. An age estimation method based on group sparsity, characterized by comprising the steps of:
s1: obtaining a feature extraction model through training;
the feature extraction model specifically includes: a forward feature extraction model and a lateral feature extraction model;
recognizing a face front picture by using the shape_predictor_68_face_landmarks.dat model, and calculating the coordinate positions of 68 forward feature points, wherein the method specifically comprises the following steps:
a. acquiring a face detector, a face forward characteristic point detection model, an input path and an output path;
b. reading a face front picture according to an input path;
c. detecting the face front picture by using the face detector, and judging whether the face front picture contains a plurality of faces; if a plurality of faces exist, proceeding to step d, and if only one face exists, skipping to step e;
d. amplifying the face front image by N times, wherein N is not less than 2;
e. detecting forward characteristic points in the face image by using a face forward characteristic point detection model, and calculating the coordinate positions of the forward characteristic points;
f. outputting the coordinate position of the forward characteristic point according to an output path;
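The landmark-location flow in steps a–f can be sketched as follows. Here `detect_faces`, `predict_landmarks`, and `enlarge` are hypothetical stand-ins for a concrete face detector, the shape_predictor_68_face_landmarks.dat predictor, and an image-resize routine, since the claim does not fix a particular library:

```python
def locate_landmarks(picture, detect_faces, predict_landmarks, enlarge, n=2):
    """Steps c-f of the claim: detect faces, enlarge the picture N-fold (N >= 2)
    when several faces are found, then locate the 68 feature-point coordinates."""
    faces = detect_faces(picture)                # step c: run the face detector
    if len(faces) > 1:                           # step d: several faces -> enlarge
        picture = enlarge(picture, n)
        faces = detect_faces(picture)
    return predict_landmarks(picture, faces[0])  # steps e/f: 68 (x, y) coordinates
```

With dlib, `detect_faces` would be `dlib.get_frontal_face_detector()` and `predict_landmarks` a `dlib.shape_predictor` loaded from the .dat file.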
identifying a side face picture by using the shape_predictor_68_face_landmarks.dat model, and calculating the coordinate positions of 68 lateral feature points, wherein the method comprises the following specific steps:
g. acquiring a face detector, a face lateral characteristic point detection model, an input path and an output path;
h. reading a side face picture according to an input path;
i. detecting the side face picture by using the face detector, and judging whether the side face picture contains a plurality of side faces; if a plurality of side faces exist, proceeding to step j, and if only one side face exists, skipping to step k;
j. amplifying the side face picture by M times, wherein M is not less than 2;
k. detecting the lateral characteristic points in the side face picture by using the face lateral characteristic point detection model, and calculating the coordinate positions of the lateral characteristic points;
l. outputting the coordinate positions of the lateral characteristic points according to the output path;
intercepting a forward picture from the face front picture according to the coordinate positions of the forward feature points; intercepting a lateral picture from the side face picture according to the coordinate positions of the lateral feature points; and training the forward feature extraction model and the lateral feature extraction model;
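Intercepting a picture from the landmark coordinates amounts to cutting out the bounding box of the detected points. A minimal sketch, treating the picture as a list of pixel rows (the optional `margin` parameter is an assumption, not part of the claim):

```python
def crop_from_landmarks(picture, points, margin=0):
    """Cut the face region out of a picture (a list of pixel rows) using the
    bounding box of the landmark coordinate positions."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    x0, x1 = max(min(xs) - margin, 0), max(xs) + margin
    y0, y1 = max(min(ys) - margin, 0), max(ys) + margin
    return [row[x0:x1 + 1] for row in picture[y0:y1 + 1]]
```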
s2: acquiring a front face picture and a side face picture;
s3: respectively inputting the front face picture and the side face picture obtained in the step S2 into a feature extraction model, and respectively extracting features of the front face and the side face to obtain a front face feature vector and a side face feature vector;
s4: fusing the front face feature vector and the side face feature vector by using a group sparse algorithm and reducing dimensions to complete age estimation;
specifically, a front face feature vector and a side face feature vector of the same person after normalization are fused; reducing the dimension of the features by using a group sparse algorithm, and sending the features after dimension reduction to a full connection layer;
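Group sparsity is commonly realized through a group-lasso penalty, whose proximal operator shrinks entire feature groups to zero at once; this is one standard way to obtain the claimed dimension reduction over the fused front/side feature vector. A sketch under that assumption (the grouping and the threshold `lam` below are illustrative, not specified by the claim):

```python
import numpy as np

def group_soft_threshold(w, groups, lam):
    """Proximal step of the group-lasso penalty: each group of coefficients is
    either scaled down or zeroed out as a whole, so uninformative blocks of the
    fused feature vector are removed together."""
    out = np.zeros_like(w, dtype=float)
    for g in groups:
        norm = np.linalg.norm(w[g])
        if norm > lam:                     # group survives, shrunk toward zero
            out[g] = (1.0 - lam / norm) * w[g]
    return out                             # groups with norm <= lam are dropped
```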
measuring the distance from each feature to its class center with a center loss function to calculate the center loss, and supplementing the center loss with a softmax loss function;
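A NumPy sketch of the claimed loss combination: a center loss measuring the distance from each feature to its class center, supplemented by a softmax ("maximum flexible") cross-entropy loss. The weighting factor `lam` is an assumption, as the claim does not specify one:

```python
import numpy as np

def center_loss(features, labels, centers):
    """Mean squared distance from each feature to the center of its class."""
    diffs = features - centers[labels]
    return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))

def softmax_cross_entropy(logits, labels):
    """Softmax cross-entropy loss that supplements the center loss."""
    shifted = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -np.mean(log_probs[np.arange(len(labels)), labels])

def total_loss(features, logits, labels, centers, lam=0.5):
    # lam balances the two terms; its value here is an assumption
    return softmax_cross_entropy(logits, labels) + lam * center_loss(features, labels, centers)
```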
outputting a probability value for each age label from 0 to 100, sorting by probability value from high to low, selecting the ten age labels with the highest probability values, and calculating the age prediction value from the ages corresponding to those labels and their probabilities:

E(Y) = Σ_i y_i · e^{Z_i} / Σ_j e^{Z_j}

wherein E(Y) represents the age prediction value, y_i is the age corresponding to an age label among the ten with the highest probability values, and Z_i, Z_j represent the output values of the neural network neurons.
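The expectation step can be sketched as follows; whether the denominator of E(Y) renormalizes over only the ten selected labels or over all labels is ambiguous in the claim, so the renormalization below is an assumption:

```python
import numpy as np

def expected_age(logits, top_k=10):
    """E(Y): probability-weighted mean age over the top_k most probable of the
    0-100 age labels, with probabilities given by a softmax over the logits Z."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                      # softmax over all 101 age labels
    top = np.argsort(probs)[::-1][:top_k]     # labels with the highest probabilities
    # the label index doubles as the age; renormalize over the selected labels
    return float(np.sum(top * probs[top]) / probs[top].sum())
```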
2. The group sparsity-based age estimation method of claim 1, wherein the obtaining of the feature extraction model through training includes:
acquiring a face front picture and a face side picture from a network;
recognizing the face front picture by using the shape_predictor_68_face_landmarks.dat model, calculating the coordinate positions of the 68 forward feature points, and intercepting a forward picture from the face front picture according to the coordinate positions of the forward feature points;
marking side faces in the face side pictures by using an annotation tool, and training with yolov4 to obtain a side face detector; screening side face pictures containing side face information from the face side pictures by using the side face detector, identifying the side face pictures by using the shape_predictor_68_face_landmarks.dat model, calculating the coordinate positions of the 68 lateral feature points, and intercepting a lateral picture from each side face picture according to the coordinate positions of the lateral feature points;
dividing the obtained forward pictures and lateral pictures into a training set, a verification set, and a test set in a ratio of 8:1:1;
and respectively training a forward characteristic extraction model and a lateral characteristic extraction model by using the training set.
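The 8:1:1 split in claim 2 can be sketched as a shuffled partition (the fixed seed is an assumption added for reproducibility):

```python
import random

def split_dataset(pictures, seed=0):
    """Shuffle the pictures and split them into training, verification, and
    test sets at the claim's 8:1:1 ratio."""
    items = list(pictures)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train, n_val = int(n * 0.8), int(n * 0.1)
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]
```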
3. The group sparsity-based age estimation method as claimed in claim 1, wherein after the front face feature vector and the side face feature vector are obtained, the front face feature vector and the side face feature vector are normalized respectively.
4. An electronic device comprising a memory and a computer program, the computer program implementing the method according to any one of claims 1-3, being stored in the memory, and being configured to be executed by at least one processor.
CN202110487414.1A 2021-05-06 2021-05-06 Group-sparse-based age estimation method and electronic equipment Active CN112990145B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110487414.1A CN112990145B (en) 2021-05-06 2021-05-06 Group-sparse-based age estimation method and electronic equipment


Publications (2)

Publication Number Publication Date
CN112990145A CN112990145A (en) 2021-06-18
CN112990145B true CN112990145B (en) 2021-09-14

Family

ID=76336928

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110487414.1A Active CN112990145B (en) 2021-05-06 2021-05-06 Group-sparse-based age estimation method and electronic equipment

Country Status (1)

Country Link
CN (1) CN112990145B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109299701A (en) * 2018-10-15 2019-02-01 南京信息工程大学 Face age estimation method based on GAN-expanded multi-ethnic-group feature collaborative selection
CN110532965A (en) * 2019-08-30 2019-12-03 京东方科技集团股份有限公司 Age recognition methods, storage medium and electronic equipment
CN110874587A (en) * 2019-12-26 2020-03-10 浙江大学 Face characteristic parameter extraction system
CN111967383A (en) * 2020-08-14 2020-11-20 北京金山云网络技术有限公司 Age estimation method, and training method and device of age estimation model

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553254A (en) * 2020-04-26 2020-08-18 上海天诚比集科技有限公司 Face comparison preprocessing method


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Multi-View Facial Expression Recognition Based on Group Sparse Reduced-Rank Regression; Wenming Zheng et al.; IEEE Transactions on Affective Computing; 20140219; pp. 71-85 *
Facial Expression Recognition Based on Multi-View Information Fusion; Huang Haiyong; China Masters' Theses Full-text Database, Engineering Science and Technology; 20150315; pp. 33-44 *
Research on Facial Expression Image Recognition Based on Feature Fusion and Voting Models; Yang Fei; China Masters' Theses Full-text Database, Information Science and Technology; 20210415; full text *

Also Published As

Publication number Publication date
CN112990145A (en) 2021-06-18

Similar Documents

Publication Publication Date Title
CN112990432B (en) Target recognition model training method and device and electronic equipment
CN108898137B (en) Natural image character recognition method and system based on deep neural network
CN109086756B (en) Text detection analysis method, device and equipment based on deep neural network
CN109086811B (en) Multi-label image classification method and device and electronic equipment
CA3066029A1 (en) Image feature acquisition
CN111931859B (en) Multi-label image recognition method and device
CN106372624B (en) Face recognition method and system
CN111476315A (en) Image multi-label identification method based on statistical correlation and graph convolution technology
Haque et al. Two-handed bangla sign language recognition using principal component analysis (PCA) and KNN algorithm
CN111325237B (en) Image recognition method based on attention interaction mechanism
CN112560993A (en) Data screening method and device, electronic equipment and storage medium
Patel American sign language detection
CN113780145A (en) Sperm morphology detection method, sperm morphology detection device, computer equipment and storage medium
CN117197904A (en) Training method of human face living body detection model, human face living body detection method and human face living body detection device
CN111539456A (en) Target identification method and device
CN111144462A (en) Unknown individual identification method and device for radar signals
CN110717407A (en) Human face recognition method, device and storage medium based on lip language password
CN113283396A (en) Target object class detection method and device, computer equipment and storage medium
CN116152576B (en) Image processing method, device, equipment and storage medium
CN113255501A (en) Method, apparatus, medium, and program product for generating form recognition model
CN113158777A (en) Quality scoring method, quality scoring model training method and related device
CN117516937A (en) Rolling bearing unknown fault detection method based on multi-mode feature fusion enhancement
CN112990145B (en) Group-sparse-based age estimation method and electronic equipment
WO2023273570A1 (en) Target detection model training method and target detection method, and related device therefor
CN115457620A (en) User expression recognition method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant