CN112329607A - Age prediction method, system and device based on facial features and texture features - Google Patents

Age prediction method, system and device based on facial features and texture features Download PDF

Info

Publication number
CN112329607A
Authority
CN
China
Prior art keywords
facial
texture
age
features
facial feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011210058.0A
Other languages
Chinese (zh)
Other versions
CN112329607B (en)
Inventor
陈维洋 (Chen Weiyang)
王梦杰 (Wang Mengjie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qilu University of Technology
Original Assignee
Qilu University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qilu University of Technology filed Critical Qilu University of Technology
Priority to CN202011210058.0A priority Critical patent/CN112329607B/en
Publication of CN112329607A publication Critical patent/CN112329607A/en
Application granted granted Critical
Publication of CN112329607B publication Critical patent/CN112329607B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N20/10 Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/178 Human faces, e.g. facial parts, sketches or expressions estimating age from face image; using age information for improving recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an age prediction method, system and device based on facial features and texture features, belongs to the technical field of facial image processing, and aims to solve the technical problem of how to estimate age accurately, quickly and effectively from a face image. The method comprises the following steps: acquiring a plurality of face images, wherein each face image carries an age label; extracting texture feature values from the face images based on the gray level co-occurrence matrix; performing face detection based on the Viola-Jones algorithm and extracting facial feature points through the CHEHRA model; performing data standardization on the facial feature points based on the center coordinates of the facial feature point coordinates, and performing data standardization on the texture feature values; and constructing a training set from the standardized coordinates of the facial feature points and the standardized texture feature values, and performing regression prediction on the training set by an SVM regression prediction method to obtain the predicted age.

Description

Age prediction method, system and device based on facial features and texture features
Technical Field
The invention relates to the technical field of facial image processing, in particular to an age prediction method, system and device based on facial features and texture features.
Background
With the rapid development of computer technology and artificial intelligence, human biometric identification technology has advanced rapidly and attracts increasing attention. Age is an important human attribute and plays a key role in many real-world applications: different ages carry different rights, obligations, cognition, preferences and behavioural abilities, for example preventing minors from buying cigarettes and alcohol, enabling teenager modes, human-computer interaction, and biometric identification. Age prediction can not only safeguard rights and obligations but also reduce the occurrence of age-fraud incidents.
Aging-degree prediction based on facial features uses computer technology to study how a facial image changes with age; through modeling, a computer can judge a person's specific age or age range from the visual features of the facial image, realizing contact-free age estimation. The core problem is how to estimate the accurate age of a face from its facial features. Facial-feature-based aging prediction has important applications in systems such as age-based human-computer interaction, information push services, face image retrieval and filtering, public information collection and public safety, and computer vision, creating great economic and social benefits; it also helps improve face recognition accuracy and narrow the search range of face images. However, because human growth and aging are very complicated, influenced by many factors, and subject to large individual differences, age estimation from face images remains a difficult and important research topic in computer vision, pattern recognition and artificial intelligence.
How to estimate age accurately, quickly and effectively from a face image is the technical problem to be solved.
Disclosure of Invention
The technical task of the present invention is to provide an age prediction method, system and device based on facial features and texture features, so as to solve the technical problem of how to estimate age accurately, quickly and effectively from a face image.
In a first aspect, the present invention provides an age prediction method based on facial features and texture features, comprising the steps of:
acquiring a plurality of face images, wherein each face image is provided with an age label;
extracting texture characteristic values from the face image based on the gray level co-occurrence matrix;
performing face detection based on a Viola-Jones algorithm, and extracting facial feature points through a CHEHRA model;
carrying out data standardization on the facial feature points based on the center coordinates of the facial feature point coordinates to obtain standardized coordinates of the facial feature points, and carrying out data standardization on the texture feature values, uniformly mapping them to a predetermined value-domain interval to obtain standardized texture feature values;
and constructing a training set based on the standardized coordinates of the facial feature points and the standardized texture feature values, and performing regression prediction on the training set based on an SVM regression prediction method to obtain the predicted age.
Preferably, the face images are photographed images downloaded from the FGNET face age database.
Preferably, for an n × m face image I with the offset set to (Δa, Δb), the gray level co-occurrence matrix C(i, j) corresponding to the face image I is calculated as:

$$C(i,j)=\sum_{p=1}^{n}\sum_{q=1}^{m}\begin{cases}1,&\text{if }I(p,q)=i\ \text{and }I(p+\Delta a,\,q+\Delta b)=j\\0,&\text{otherwise}\end{cases}$$

where p and q are pixel indices, with p ranging over 1, ..., n and q over 1, ..., m; i denotes the gray level of the pixel at (p, q), and j denotes the gray level of the pixel at (p + Δa, q + Δb).
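For illustration only (not part of the claimed method), a minimal Python sketch of this co-occurrence count, assuming the image is an 8-bit grayscale NumPy array:

```python
import numpy as np

def glcm(image, offset, levels=256):
    """Gray co-occurrence matrix C(i, j) for one offset (da, db):
    counts pixel pairs with image[p, q] == i and image[p+da, q+db] == j,
    matching the formula above."""
    da, db = offset
    n, m = image.shape
    C = np.zeros((levels, levels), dtype=np.int64)
    for p in range(n):
        for q in range(m):
            if 0 <= p + da < n and 0 <= q + db < m:
                C[image[p, q], image[p + da, q + db]] += 1
    return C

# Example: offset (0, 1) pairs every pixel with its right-hand neighbour.
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
p = glcm(img, (0, 1)).astype(float)
p /= p.sum()  # normalized probabilities p(i, j) used by the texture features
```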
Preferably, the texture feature values comprise:
f1. Autocorrelation
$f_1=\sum_i\sum_j (ij)\,p(i,j)$
f2. Contrast
$f_2=\sum_i\sum_j (i-j)^2\,p(i,j)$
f3. First correlation measure
$f_3=\frac{\sum_i\sum_j (ij)\,p(i,j)-\mu_x\mu_y}{\sigma_x\sigma_y}$
f4. Second correlation measure
$f_4=\sum_i\sum_j \frac{(i-\mu_x)(j-\mu_y)\,p(i,j)}{\sigma_x\sigma_y}$
f5. Cluster prominence
$f_5=\sum_i\sum_j (i+j-\mu_x-\mu_y)^4\,p(i,j)$
f6. Cluster shade
$f_6=\sum_i\sum_j (i+j-\mu_x-\mu_y)^3\,p(i,j)$
f7. Dissimilarity
$f_7=\sum_i\sum_j |i-j|\,p(i,j)$
f8. Energy
$f_8=\sum_i\sum_j p(i,j)^2$
f9. Entropy
$f_9=-\sum_i\sum_j p(i,j)\log p(i,j)$
f10. First homogeneity / inverse difference measure
$f_{10}=\sum_i\sum_j \frac{p(i,j)}{1+|i-j|}$
f11. Second homogeneity / inverse difference moment measure
$f_{11}=\sum_i\sum_j \frac{p(i,j)}{1+(i-j)^2}$
f12. Maximum probability
$f_{12}=\max_{i,j} p(i,j)$
f13. Variance
$f_{13}=\sum_i\sum_j (i-\mu)^2\,p(i,j)$
f14. Sum average
$f_{14}=\sum_{k=2}^{2N_g} k\,p_{x+y}(k)$
f15. Sum variance
$f_{15}=\sum_{k=2}^{2N_g} (k-f_{16})^2\,p_{x+y}(k)$
f16. Sum entropy
$f_{16}=-\sum_{k=2}^{2N_g} p_{x+y}(k)\log p_{x+y}(k)$
f17. Difference variance
$f_{17}=\sum_{k=0}^{N_g-1}\bigl(k-\textstyle\sum_l l\,p_{x-y}(l)\bigr)^2\,p_{x-y}(k)$
f18. Difference entropy
$f_{18}=-\sum_{k=0}^{N_g-1} p_{x-y}(k)\log p_{x-y}(k)$
f19. First information measure of correlation
$f_{19}=\frac{HXY-HXY1}{\max\{HX,\,HY\}}$
wherein
$HXY=f_9=-\sum_i\sum_j p(i,j)\log p(i,j)$
$HXY1=-\sum_i\sum_j p(i,j)\log\bigl(p_x(i)\,p_y(j)\bigr)$
$HXY2=-\sum_i\sum_j p_x(i)\,p_y(j)\log\bigl(p_x(i)\,p_y(j)\bigr)$
$HX=-\sum_i p_x(i)\log p_x(i)$
$HY=-\sum_j p_y(j)\log p_y(j)$
f20. Second information measure of correlation
$f_{20}=\sqrt{1-\exp\bigl(-2\,(HXY2-HXY)\bigr)}$
f21. Normalized inverse difference
$f_{21}=\sum_i\sum_j \frac{p(i,j)}{1+|i-j|/N_g}$
f22. Normalized inverse difference moment
$f_{22}=\sum_i\sum_j \frac{p(i,j)}{1+(i-j)^2/N_g^2}$
wherein the gray co-occurrence matrix is represented as a relative frequency matrix of pairs of neighbouring pixels separated by a given distance on the image, one with gray level i and the other with gray level j, and is a function of the angular relationship and the distance between the neighbouring pixels; p(i, j) is the (i, j)-th entry of the normalized gray co-occurrence matrix; $p_x(i)=\sum_j p(i,j)$ is the i-th entry of the marginal obtained by summing the rows of p(i, j), and $p_y(j)=\sum_i p(i,j)$ is the corresponding column marginal; $N_g$ is the number of gray levels in the image; and $\mu$ is the mean of the gray co-occurrence matrix;
$p_{x+y}(k)=\sum_i\sum_j p(i,j)$ over all pairs with $i+j=k$, for $k=2,\dots,2N_g$
$p_{x-y}(k)=\sum_i\sum_j p(i,j)$ over all pairs with $|i-j|=k$, for $k=0,\dots,N_g-1$
Means:
$\mu_x=\sum_i\sum_j i\,p(i,j)$, $\mu_y=\sum_i\sum_j j\,p(i,j)$
Standard deviations (mean square error):
$\sigma_x=\sqrt{\sum_i\sum_j (i-\mu_x)^2\,p(i,j)}$, $\sigma_y=\sqrt{\sum_i\sum_j (j-\mu_y)^2\,p(i,j)}$
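As a non-limiting sketch, several of the quantities above follow directly from the normalized matrix p(i, j); the helper below (function name ours) computes a representative subset:

```python
import numpy as np

def texture_subset(p):
    """A few of the 22 quantities, computed from a normalized GLCM p."""
    levels = p.shape[0]
    i, j = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    eps = 1e-12  # guards log(0) in the entropy term
    return {
        "f1_autocorrelation": np.sum(i * j * p),
        "f2_contrast": np.sum((i - j) ** 2 * p),
        "f7_dissimilarity": np.sum(np.abs(i - j) * p),
        "f8_energy": np.sum(p * p),
        "f9_entropy": -np.sum(p * np.log(p + eps)),
        "f10_homogeneity": np.sum(p / (1.0 + np.abs(i - j))),
        "f12_max_probability": p.max(),
    }
```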
Preferably, the data standardization of the facial feature points based on the center coordinates of the facial feature point coordinates comprises:
calculating, for each face image, the center coordinates of all facial feature points identified in that image;
calculating, for each facial feature point identified in the face image, the difference between its original coordinates and the center coordinates, to obtain the standardized coordinates of the facial feature points.
Preferably, the texture characteristic value is subjected to data standardization processing by a normalization method or a regularization method;
the normalization methods include a min-max normalization method and a zero-mean normalization method.
Preferably, the regression prediction is performed on the training set based on the SVM regression prediction method, and the method comprises the following steps:
in the sample space, the dividing hyperplane is described by the following linear equation:
$w^{T}x+b=0$
wherein x represents a point in the sample space;
$w=(w_1;w_2;\dots;w_d)$ is the normal vector of the hyperplane;
b represents a displacement term, which determines the distance between the hyperplane and the origin;
the normal vector w and the displacement term b are denoted (w, b), and φ(x) denotes the feature vector after x is mapped;
the SVM regression model fits each point $(x_i,y_i)$ in the training set to a linear model, wherein the loss function of the SVM regression model is measured as:
$$err(x_i,y_i)=\begin{cases}0,&\text{if }|y_i-w\cdot\varphi(x_i)-b|\le\varepsilon\\|y_i-w\cdot\varphi(x_i)-b|-\varepsilon,&\text{if }|y_i-w\cdot\varphi(x_i)-b|>\varepsilon\end{cases}$$
the objective function is:
$$\min_{w,b}\ \frac{1}{2}\|w\|_2^2+C\sum_{i=1}^{m}err(x_i,y_i)$$
wherein ε is a constant with ε > 0, and C is a penalty coefficient.
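For clarity, a small sketch (names ours) of the ε-insensitive loss defined above, evaluated on a fitted model's predictions:

```python
import numpy as np

def eps_insensitive_loss(y, y_pred, eps):
    """err(x_i, y_i): zero inside the eps-tube, linear outside it."""
    residual = np.abs(y - y_pred)
    return np.where(residual <= eps, 0.0, residual - eps)

# Points within eps of the fit contribute no loss at all.
y = np.array([10.0, 20.0, 30.0])
y_pred = np.array([10.5, 25.0, 30.0])
print(eps_insensitive_loss(y, y_pred, eps=1.0))  # -> [0. 4. 0.]
```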
In a second aspect, the present invention provides an age prediction system based on facial features and texture features, for performing the age prediction method based on facial features and texture features according to any one of the first aspect, the system comprising:
the face image acquisition module, used for acquiring a plurality of face images from the FGNET face age database, the face images carrying age labels;
the texture feature extraction module is used for extracting a texture feature value from the face image based on the gray level co-occurrence matrix;
the facial feature extraction module is used for carrying out face detection based on the Viola-Jones algorithm and extracting facial feature points through a CHEHRA model;
the data calibration module is used for carrying out data standardization processing on the facial feature points based on the central coordinates of the facial feature point coordinates to obtain standardized coordinates of the facial feature points, carrying out data standardization processing on texture feature values through a normalization method or a regularization method, and uniformly mapping the texture feature values to a preset value domain interval to obtain standardized texture feature values;
and the age prediction module, configured with an SVM regression model and used for constructing a training set based on the standardized coordinates of the facial feature points and the standardized texture feature values, and performing regression prediction on the training set based on the SVM regression prediction method to obtain the predicted age.
In a third aspect, the present invention provides an apparatus comprising: at least one memory and at least one processor;
the at least one memory to store a machine readable program;
the at least one processor is configured to invoke the machine-readable program to perform the method of any of the first aspects.
In a fourth aspect, the present invention provides a computer readable medium having stored thereon computer instructions which, when executed by a processor, cause the processor to perform the method of any of the first aspects.
The age prediction method, the system and the device based on the facial features and the texture features have the following advantages:
1. Texture feature values are extracted based on the gray level co-occurrence matrix, face detection is performed by the Viola-Jones algorithm, facial feature points are extracted through the CHEHRA model, a training set is constructed from the standardized texture feature values and facial feature points, and regression prediction is performed on the training set by the SVM regression prediction method to obtain the predicted age, with high prediction accuracy;
2. data calibration of the facial feature points removes the differences in coordinate values across face images caused by face displacement and similar factors;
3. data calibration of the texture feature values scales the data into a small specific interval, removing their unit restrictions and turning them into dimensionless pure numbers, which makes it convenient to compare and weight indices of different units or orders of magnitude.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the embodiments or the prior-art description are briefly introduced below. The drawings in the following description are only some embodiments of the present invention; other drawings can be obtained from them by those skilled in the art without creative effort.
The invention is further described below with reference to the accompanying drawings.
Fig. 1 is a block flow diagram of an age prediction method based on facial features and texture features according to embodiment 1;
fig. 2 is a schematic diagram illustrating labeling of facial feature points in an age prediction method based on facial features and texture features according to embodiment 1;
fig. 3 shows the facial feature point labels of all images in the FGNET database in the age prediction method based on facial features and texture features according to embodiment 1;
fig. 4 is a diagram illustrating the result of regression prediction of age in the age prediction method based on facial features and texture features according to embodiment 1.
Detailed Description
The present invention is further described below with reference to the drawings and specific embodiments so that those skilled in the art can better understand and implement it; the embodiments, however, are not to be construed as limiting the invention, and the embodiments and their technical features can be combined with one another where no conflict arises.
The embodiments of the invention provide an age prediction method, system and device based on facial features and texture features, to solve the technical problem of how to estimate age accurately, quickly and effectively from a face image.
Example 1:
the age prediction method based on the facial features and the texture features comprises the following steps:
s100, acquiring a plurality of face images, wherein each face image is provided with an age label;
s200, extracting texture feature values from the face image based on the gray level co-occurrence matrix, and performing data standardization processing on the face feature points based on the center coordinates of the face feature points to obtain standardized coordinates of the face feature points;
performing face detection based on a Viola-Jones algorithm, extracting face feature points through a CHEHRA model, performing data standardization processing on texture feature values, and uniformly mapping coordinate data of the texture feature values to a predetermined value domain interval to obtain standardized texture feature values;
s300, constructing a training set based on the standardized coordinates of the facial feature points and the standardized texture feature values, and performing regression prediction on the training set based on an SVM regression prediction method to obtain the predicted age.
The collected face images are face images captured by a camera or other equipment. In this embodiment, 1002 face images of 82 persons are used, downloaded from public databases such as the FGNET face age database, each face image carrying an age label.
The gray co-occurrence matrix counts, for a given offset, the occurrences of each pair of gray levels in an image. Expressed mathematically, for an n × m face image I with the offset set to (Δa, Δb), the gray level co-occurrence matrix C(i, j) corresponding to the face image I is calculated as:

$$C(i,j)=\sum_{p=1}^{n}\sum_{q=1}^{m}\begin{cases}1,&\text{if }I(p,q)=i\ \text{and }I(p+\Delta a,\,q+\Delta b)=j\\0,&\text{otherwise}\end{cases}$$

where p and q are pixel indices, with p ranging over 1, ..., n and q over 1, ..., m; i denotes the gray level of the pixel at (p, q), and j denotes the gray level of the pixel at (p + Δa, q + Δb).
In this embodiment, the parameters of the gray level co-occurrence matrix are selected as follows: for an eight-bit gray level image, pixel gray values range from 0 to 255; the offset parameters Δa and Δb are each set to 0 or 1, i.e., pixel pairs at a distance of 1 are considered. Four angles are selected for computing the gray co-occurrence matrices, namely 0°, 45°, 90° and 135°, yielding four matrices G1, G2, G3 and G4. In principle the orientation can take any value between 0° and 360°, but the four orientations 0°, 45°, 90° and 135° generally represent the texture adequately. The texture features in this embodiment comprise twenty-two texture quantities per image (an illustrative library-based computation is sketched after the list), which are:
f1. Autocorrelation
$f_1=\sum_i\sum_j (ij)\,p(i,j)$
f2. Contrast
$f_2=\sum_i\sum_j (i-j)^2\,p(i,j)$
f3. First correlation measure
$f_3=\frac{\sum_i\sum_j (ij)\,p(i,j)-\mu_x\mu_y}{\sigma_x\sigma_y}$
f4. Second correlation measure
$f_4=\sum_i\sum_j \frac{(i-\mu_x)(j-\mu_y)\,p(i,j)}{\sigma_x\sigma_y}$
f5. Cluster prominence
$f_5=\sum_i\sum_j (i+j-\mu_x-\mu_y)^4\,p(i,j)$
f6. Cluster shade
$f_6=\sum_i\sum_j (i+j-\mu_x-\mu_y)^3\,p(i,j)$
f7. Dissimilarity
$f_7=\sum_i\sum_j |i-j|\,p(i,j)$
f8. Energy
$f_8=\sum_i\sum_j p(i,j)^2$
f9. Entropy
$f_9=-\sum_i\sum_j p(i,j)\log p(i,j)$
f10. First homogeneity / inverse difference measure
$f_{10}=\sum_i\sum_j \frac{p(i,j)}{1+|i-j|}$
f11. Second homogeneity / inverse difference moment measure
$f_{11}=\sum_i\sum_j \frac{p(i,j)}{1+(i-j)^2}$
f12. Maximum probability
$f_{12}=\max_{i,j} p(i,j)$
f13. Variance
$f_{13}=\sum_i\sum_j (i-\mu)^2\,p(i,j)$
f14. Sum average
$f_{14}=\sum_{k=2}^{2N_g} k\,p_{x+y}(k)$
f15. Sum variance
$f_{15}=\sum_{k=2}^{2N_g} (k-f_{16})^2\,p_{x+y}(k)$
f16. Sum entropy
$f_{16}=-\sum_{k=2}^{2N_g} p_{x+y}(k)\log p_{x+y}(k)$
f17. Difference variance
$f_{17}=\sum_{k=0}^{N_g-1}\bigl(k-\textstyle\sum_l l\,p_{x-y}(l)\bigr)^2\,p_{x-y}(k)$
f18. Difference entropy
$f_{18}=-\sum_{k=0}^{N_g-1} p_{x-y}(k)\log p_{x-y}(k)$
f19. First information measure of correlation
$f_{19}=\frac{HXY-HXY1}{\max\{HX,\,HY\}}$
wherein
$HXY=f_9=-\sum_i\sum_j p(i,j)\log p(i,j)$
$HXY1=-\sum_i\sum_j p(i,j)\log\bigl(p_x(i)\,p_y(j)\bigr)$
$HXY2=-\sum_i\sum_j p_x(i)\,p_y(j)\log\bigl(p_x(i)\,p_y(j)\bigr)$
$HX=-\sum_i p_x(i)\log p_x(i)$
$HY=-\sum_j p_y(j)\log p_y(j)$
f20. Second information measure of correlation
$f_{20}=\sqrt{1-\exp\bigl(-2\,(HXY2-HXY)\bigr)}$
f21. Normalized inverse difference
$f_{21}=\sum_i\sum_j \frac{p(i,j)}{1+|i-j|/N_g}$
f22. Normalized inverse difference moment
$f_{22}=\sum_i\sum_j \frac{p(i,j)}{1+(i-j)^2/N_g^2}$
wherein the gray co-occurrence matrix is represented as a relative frequency matrix of pairs of neighbouring pixels separated by a given distance on the image, one with gray level i and the other with gray level j, and is a function of the angular relationship and the distance between the neighbouring pixels; p(i, j) is the (i, j)-th entry of the normalized gray co-occurrence matrix; $p_x(i)=\sum_j p(i,j)$ is the i-th entry of the marginal obtained by summing the rows of p(i, j), and $p_y(j)=\sum_i p(i,j)$ is the corresponding column marginal; $N_g$ is the number of gray levels in the image; and $\mu$ is the mean of the gray co-occurrence matrix;
$p_{x+y}(k)=\sum_i\sum_j p(i,j)$ over all pairs with $i+j=k$, for $k=2,\dots,2N_g$
$p_{x-y}(k)=\sum_i\sum_j p(i,j)$ over all pairs with $|i-j|=k$, for $k=0,\dots,N_g-1$
Means:
$\mu_x=\sum_i\sum_j i\,p(i,j)$, $\mu_y=\sum_i\sum_j j\,p(i,j)$
Standard deviations (mean square error):
$\sigma_x=\sqrt{\sum_i\sum_j (i-\mu_x)^2\,p(i,j)}$, $\sigma_y=\sqrt{\sum_i\sum_j (j-\mu_y)^2\,p(i,j)}$
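For reference, the four matrices G1-G4 can also be obtained with an off-the-shelf library; a sketch using scikit-image, whose graycomatrix/graycoprops cover only a subset of the twenty-two quantities (the rest would be coded by hand from the formulas above; the function is spelled greycomatrix in older releases):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # placeholder image

# Four GLCMs at distance 1 and angles 0, 45, 90 and 135 degrees (G1..G4).
angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
glcms = graycomatrix(img, distances=[1], angles=angles, levels=256, normed=True)

contrast = graycoprops(glcms, "contrast")[0]            # f2, one value per angle
dissimilarity = graycoprops(glcms, "dissimilarity")[0]  # f7
energy = graycoprops(glcms, "ASM")[0]                   # f8 as defined above
homogeneity = graycoprops(glcms, "homogeneity")[0]      # f11
```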
When extracting facial feature points, face detection is first performed based on the Viola-Jones algorithm; the algorithm rests on Haar feature selection and builds a complete cascade classifier with AdaBoost training, and it is well established that it detects faces robustly under different illumination conditions. Facial feature detection is then performed using the CHEHRA model, a machine-learning model for detecting the facial feature points shown in fig. 2.
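A minimal sketch of the detection step with OpenCV's stock Viola-Jones cascade follows; CHEHRA itself is distributed as a MATLAB model without a standard Python binding, so the landmark-fitting call is a hypothetical placeholder:

```python
import cv2

# Viola-Jones detector using the Haar cascade bundled with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    roi = gray[y:y + h, x:x + w]
    # landmarks = chehra_fit(roi)  # hypothetical CHEHRA model-fitting call
```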
In order to predict more accurately, the obtained texture features and the obtained face features are subjected to standardization processing.
In order to remove the differences in coordinate values of the facial feature points recognized in different face images caused by face displacement and similar factors, this embodiment standardizes the obtained facial feature point data with a coordinate-center-based standardization method so as to improve the accuracy of age prediction. The specific operations, sketched in code below, are:
(1) for each face image, calculating the center coordinates of all identified facial feature point coordinates;
(2) taking the standardized coordinates of each facial feature point as the result of subtracting the center coordinates from its original coordinates;
(3) using the center-standardized coordinates as the facial feature point coordinates for subsequent age prediction.
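Operations (1)-(3) amount to subtracting the centroid; a minimal sketch, assuming landmarks is an (N, 2) array of (x, y) coordinates from one face image:

```python
import numpy as np

def center_standardize(landmarks):
    """Steps (1)-(3): subtract the centroid of all identified feature points."""
    center = landmarks.mean(axis=0)   # (1) center coordinates
    return landmarks - center         # (2)-(3) standardized coordinates
```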
Normalization of data (normalization) is to scale data to fall within a small specific interval. In some index processing for comparison and evaluation, unit limitation of data is removed and converted into a dimensionless pure numerical value, so that indexes of different units or orders can be compared and weighted conveniently. The most typical of these is the normalization of the data, i.e. the uniform mapping of the data onto the [0, 1] interval.
In this embodiment, normalization is selected for the data standardization of the texture features. The normalization methods include the min-max normalization method and the zero-mean (z-score) normalization method. The min-max normalization formula is:
y = ((x - MinValue) / (MaxValue - MinValue)) * (new_MaxValue - new_MinValue) + new_MinValue
The zero-mean normalization method removes the mean and scales by the standard deviation: y = (x - mean) / std.
The calculation is performed separately for each attribute/column: the mean is subtracted attribute-wise (column-wise) and the result is divided by the standard deviation, so that for every attribute/column the data cluster around 0 with variance 1.
The goals of normalization are: 1. converting numbers into decimals within [0, 1]: mapping the data into the 0-1 range makes processing more convenient and faster; 2. converting dimensional expressions into dimensionless ones: normalization is a way of simplifying calculation, transforming a dimensional expression into a dimensionless pure number.
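Both rescalings are available off the shelf; an illustrative column-wise sketch with scikit-learn, assuming one row of 22 texture values per image:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.random.rand(1002, 22) * 50.0  # placeholder texture-value matrix

X_minmax = MinMaxScaler(feature_range=(0, 1)).fit_transform(X)  # min-max
X_zscore = StandardScaler().fit_transform(X)  # zero mean, unit variance per column
```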
The process of regularization is to scale each sample to a unit norm (norm of 1 for each sample), which is useful if similarity between two samples is to be calculated later using, for example, quadratic (dot product) or other kernel methods.
Standardization is a transformation, such as scaling, performed to facilitate the subsequent processing of the data itself rather than to ease joint processing or comparison with other data; for example, after zero-mean standardization, the properties of the standard normal distribution can be exploited more readily.
Normalization eliminates the dimensional differences between data so that they can be compared and processed together. Parameter-based and distance-based models require normalization because parameters or distances must be computed.
Regularization draws on prior knowledge: a regularizer is introduced during processing to impose a guiding constraint, and in logistic regression, for example, regularization effectively reduces overfitting.
After the facial feature points and texture feature values are standardized, a training set is constructed from the standardized coordinates of the facial feature points and the standardized texture feature values, and regression prediction is performed on the training set by the SVM regression prediction method. The specific method is as follows:
in the sample space, the dividing hyperplane can be described by the following linear equation:
$w^{T}x+b=0$
where $w=(w_1;w_2;\dots;w_d)$ is the normal vector of the hyperplane, which determines its direction, and b is the displacement term, which determines the distance between the hyperplane and the origin. Clearly, the dividing hyperplane is determined by the normal vector w and the displacement term b, denoted (w, b). φ(x) denotes the feature vector after x is mapped; for the SVM regression model, the goal is to fit each point $(x_i,y_i)$ in the training set to the linear model $y_i=w\cdot\varphi(x_i)+b$.
The loss function of the SVM regression model differs from that of a general regression model. The SVM regression model defines a constant ε > 0; for a point $(x_i,y_i)$, if $|y_i-w\cdot\varphi(x_i)-b|\le\varepsilon$ there is no loss at all, whereas if $|y_i-w\cdot\varphi(x_i)-b|>\varepsilon$ the corresponding loss is $|y_i-w\cdot\varphi(x_i)-b|-\varepsilon$. This differs from the mean-squared-error loss, under which a loss is incurred whenever $y_i-w\cdot\varphi(x_i)-b\ne 0$. In summary, the loss function of the SVM regression model is measured as:
$$err(x_i,y_i)=\begin{cases}0,&\text{if }|y_i-w\cdot\varphi(x_i)-b|\le\varepsilon\\|y_i-w\cdot\varphi(x_i)-b|-\varepsilon,&\text{if }|y_i-w\cdot\varphi(x_i)-b|>\varepsilon\end{cases}$$
The objective function is:
$$\min_{w,b}\ \frac{1}{2}\|w\|_2^2+C\sum_{i=1}^{m}err(x_i,y_i)$$
Similar to the SVM classification model, the SVM regression model can add a slack variable $\xi_i\ge 0$ for each sample $(x_i,y_i)$.
The SVM regression algorithm is effective for classification and regression with high-dimensional features and still performs well when the feature dimension exceeds the number of samples; the hyperplane decision depends only on a subset of support vectors rather than on all the data; a wide range of kernel functions can be used, flexibly solving various nonlinear classification and regression problems; and when the sample size is not massive, classification accuracy is high and generalization is strong.
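An illustrative end-to-end sketch of the regression step with scikit-learn, where the ε of the loss above corresponds to the epsilon parameter of sklearn.svm.SVR; the feature dimensions are assumptions (e.g., 49 CHEHRA landmarks), and random arrays stand in for the real standardized features:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
landmark_feats = rng.normal(size=(1002, 98))  # assumed: 49 points x (x, y), centered
texture_feats = rng.random(size=(1002, 22))   # assumed: 22 normalized texture values
ages = rng.integers(1, 70, size=1002).astype(float)  # age labels

X = np.hstack([landmark_feats, texture_feats])  # training set
model = SVR(kernel="rbf", C=1.0, epsilon=1.0)   # epsilon plays the role of the constant above
model.fit(X, ages)

pred = model.predict(X)
print("MAE:", mean_absolute_error(ages, pred))
```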
In this embodiment, the obtained data sets of facial feature points and texture features are first standardized and then used for regression prediction. The experimental results show that the predicted age rises as the real age increases, whether the regression uses the facial feature points alone or the texture features combined with the facial feature points.
Example 2:
The age prediction system based on facial features and texture features is used for executing the above age prediction method based on facial features and texture features, and comprises a face image acquisition module, a texture feature extraction module, a facial feature extraction module, a data calibration module and an age prediction module, wherein the texture feature extraction module and the facial feature extraction module are each connected to the face image acquisition module, and the data calibration module and the age prediction module are connected downstream in turn.
The face image acquisition module is used for acquiring a plurality of face images from the FGNET face age database, each face image carrying an age label; the texture feature extraction module is used for extracting texture feature values from the face images based on the gray level co-occurrence matrix; the facial feature extraction module is used for performing face detection based on the Viola-Jones algorithm and extracting facial feature points through the CHEHRA model; the data calibration module is used for performing data standardization on the facial feature points based on the center coordinates of the facial feature point coordinates to obtain standardized coordinates of the facial feature points, and for performing data standardization on the texture feature values through a normalization method or a regularization method, uniformly mapping them to a preset value-domain interval to obtain standardized texture feature values; the age prediction module is configured with an SVM regression model and is used for constructing a training set from the standardized coordinates of the facial feature points and the standardized texture feature values and performing regression prediction on the training set by the SVM regression prediction method to obtain the predicted age.
The gray level co-occurrence matrix counts, for a given offset, the occurrences of each pair of gray levels in an image. Expressed mathematically, for an n × m face image I with the offset set to (Δa, Δb), the gray level co-occurrence matrix C(i, j) corresponding to the face image I is calculated as:

$$C(i,j)=\sum_{p=1}^{n}\sum_{q=1}^{m}\begin{cases}1,&\text{if }I(p,q)=i\ \text{and }I(p+\Delta a,\,q+\Delta b)=j\\0,&\text{otherwise}\end{cases}$$

where p and q are pixel indices, with p ranging over 1, ..., n and q over 1, ..., m; i denotes the gray level of the pixel at (p, q), and j denotes the gray level of the pixel at (p + Δa, q + Δb).
In this embodiment, the parameters of the gray level co-occurrence matrix are selected as follows: for an eight-bit gray level image, pixel gray values range from 0 to 255; the offset parameters Δa and Δb are each set to 0 or 1, i.e., pixel pairs at a distance of 1 are considered. Four angles are selected for computing the gray co-occurrence matrices, namely 0°, 45°, 90° and 135°, yielding four matrices G1, G2, G3 and G4. In principle the orientation can take any value between 0° and 360°, but the four orientations 0°, 45°, 90° and 135° generally represent the texture adequately. The texture features in this embodiment comprise twenty-two texture quantities per image, which are:
f1. Autocorrelation
$f_1=\sum_i\sum_j (ij)\,p(i,j)$
f2. Contrast
$f_2=\sum_i\sum_j (i-j)^2\,p(i,j)$
f3. First correlation measure
$f_3=\frac{\sum_i\sum_j (ij)\,p(i,j)-\mu_x\mu_y}{\sigma_x\sigma_y}$
f4. Second correlation measure
$f_4=\sum_i\sum_j \frac{(i-\mu_x)(j-\mu_y)\,p(i,j)}{\sigma_x\sigma_y}$
f5. Cluster prominence
$f_5=\sum_i\sum_j (i+j-\mu_x-\mu_y)^4\,p(i,j)$
f6. Cluster shade
$f_6=\sum_i\sum_j (i+j-\mu_x-\mu_y)^3\,p(i,j)$
f7. Dissimilarity
$f_7=\sum_i\sum_j |i-j|\,p(i,j)$
f8. Energy
$f_8=\sum_i\sum_j p(i,j)^2$
f9. Entropy
$f_9=-\sum_i\sum_j p(i,j)\log p(i,j)$
f10. First homogeneity / inverse difference measure
$f_{10}=\sum_i\sum_j \frac{p(i,j)}{1+|i-j|}$
f11. Second homogeneity / inverse difference moment measure
$f_{11}=\sum_i\sum_j \frac{p(i,j)}{1+(i-j)^2}$
f12. Maximum probability
$f_{12}=\max_{i,j} p(i,j)$
f13. Variance
$f_{13}=\sum_i\sum_j (i-\mu)^2\,p(i,j)$
f14. Sum average
$f_{14}=\sum_{k=2}^{2N_g} k\,p_{x+y}(k)$
f15. Sum variance
$f_{15}=\sum_{k=2}^{2N_g} (k-f_{16})^2\,p_{x+y}(k)$
f16. Sum entropy
$f_{16}=-\sum_{k=2}^{2N_g} p_{x+y}(k)\log p_{x+y}(k)$
f17. Difference variance
$f_{17}=\sum_{k=0}^{N_g-1}\bigl(k-\textstyle\sum_l l\,p_{x-y}(l)\bigr)^2\,p_{x-y}(k)$
f18. Difference entropy
$f_{18}=-\sum_{k=0}^{N_g-1} p_{x-y}(k)\log p_{x-y}(k)$
f19. First information measure of correlation
$f_{19}=\frac{HXY-HXY1}{\max\{HX,\,HY\}}$
wherein
$HXY=f_9=-\sum_i\sum_j p(i,j)\log p(i,j)$
$HXY1=-\sum_i\sum_j p(i,j)\log\bigl(p_x(i)\,p_y(j)\bigr)$
$HXY2=-\sum_i\sum_j p_x(i)\,p_y(j)\log\bigl(p_x(i)\,p_y(j)\bigr)$
$HX=-\sum_i p_x(i)\log p_x(i)$
$HY=-\sum_j p_y(j)\log p_y(j)$
f20. Second information measure of correlation
$f_{20}=\sqrt{1-\exp\bigl(-2\,(HXY2-HXY)\bigr)}$
f21. Normalized inverse difference
$f_{21}=\sum_i\sum_j \frac{p(i,j)}{1+|i-j|/N_g}$
f22. Normalized inverse difference moment
$f_{22}=\sum_i\sum_j \frac{p(i,j)}{1+(i-j)^2/N_g^2}$
wherein the gray co-occurrence matrix is represented as a relative frequency matrix of pairs of neighbouring pixels separated by a given distance on the image, one with gray level i and the other with gray level j, and is a function of the angular relationship and the distance between the neighbouring pixels; p(i, j) is the (i, j)-th entry of the normalized gray co-occurrence matrix; $p_x(i)=\sum_j p(i,j)$ is the i-th entry of the marginal obtained by summing the rows of p(i, j), and $p_y(j)=\sum_i p(i,j)$ is the corresponding column marginal; $N_g$ is the number of gray levels in the image; and $\mu$ is the mean of the gray co-occurrence matrix;
$p_{x+y}(k)=\sum_i\sum_j p(i,j)$ over all pairs with $i+j=k$, for $k=2,\dots,2N_g$
$p_{x-y}(k)=\sum_i\sum_j p(i,j)$ over all pairs with $|i-j|=k$, for $k=0,\dots,N_g-1$
Means:
$\mu_x=\sum_i\sum_j i\,p(i,j)$, $\mu_y=\sum_i\sum_j j\,p(i,j)$
Standard deviations (mean square error):
$\sigma_x=\sqrt{\sum_i\sum_j (i-\mu_x)^2\,p(i,j)}$, $\sigma_y=\sqrt{\sum_i\sum_j (j-\mu_y)^2\,p(i,j)}$
When the facial feature extraction module extracts facial feature points, face detection is first performed based on the Viola-Jones algorithm; the algorithm rests on Haar feature selection and builds a complete cascade classifier with AdaBoost training, and it is well established that it detects faces robustly under different illumination conditions. Facial feature detection is then performed using the CHEHRA model, a machine-learning model for detecting the facial feature points shown in fig. 2.
In order to predict more accurately, the obtained texture features and the obtained face features are subjected to standardization processing through a data calibration module.
In order to remove the differences in coordinate values of the facial feature points identified in different face images caused by face displacement and similar factors, the data calibration module of this embodiment standardizes the obtained facial feature point data with a coordinate-center-based standardization method so as to improve the accuracy of age prediction. The specific operations are:
(1) for each face image, calculating the center coordinates of all identified facial feature point coordinates;
(2) taking the standardized coordinates of each facial feature point as the result of subtracting the center coordinates from its original coordinates;
(3) using the center-standardized coordinates as the facial feature point coordinates for subsequent age prediction.
Normalization of data (normalization) is to scale data to fall within a small specific interval. In some index processing for comparison and evaluation, unit limitation of data is removed and converted into a dimensionless pure numerical value, so that indexes of different units or orders can be compared and weighted conveniently. The most typical of these is the normalization of the data, i.e. the uniform mapping of the data onto the [0, 1] interval.
In this embodiment, normalization is selected for the data standardization of the texture features. The normalization methods include the min-max normalization method and the zero-mean (z-score) normalization method. The min-max normalization formula is:
y = ((x - MinValue) / (MaxValue - MinValue)) * (new_MaxValue - new_MinValue) + new_MinValue
The zero-mean normalization method removes the mean and scales by the standard deviation: y = (x - mean) / std.
Regularization draws on prior knowledge: a regularizer is introduced during processing to impose a guiding constraint, and in logistic regression, for example, regularization effectively reduces overfitting.
In the age prediction module, the loss function of the SVM regression model is measured as:
$$err(x_i,y_i)=\begin{cases}0,&\text{if }|y_i-w\cdot\varphi(x_i)-b|\le\varepsilon\\|y_i-w\cdot\varphi(x_i)-b|-\varepsilon,&\text{if }|y_i-w\cdot\varphi(x_i)-b|>\varepsilon\end{cases}$$
and the objective function is:
$$\min_{w,b}\ \frac{1}{2}\|w\|_2^2+C\sum_{i=1}^{m}err(x_i,y_i)$$
example 3:
an embodiment of the present invention further provides an apparatus, including: at least one memory and at least one processor; the at least one memory for storing a machine-readable program; the at least one processor is used for calling the machine readable program and executing the method disclosed by the embodiment 1.
Example 4:
The embodiment of the present invention further provides a computer-readable medium having stored thereon computer instructions which, when executed by a processor, cause the processor to execute the method disclosed in embodiment 1. Specifically, a system or apparatus equipped with a storage medium storing software program codes that realize the functions of any of the above-described embodiments may be provided, and a computer (or a CPU or MPU) of the system or apparatus reads out and executes the program codes stored in the storage medium.
In this case, the program code itself read from the storage medium can realize the functions of any of the above-described embodiments, and thus the program code and the storage medium storing the program code constitute a part of the present invention.
Examples of the storage medium for supplying the program code include a floppy disk, a hard disk, a magneto-optical disk, an optical disk (e.g., CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD + RW), a magnetic tape, a nonvolatile memory card, and a ROM. Alternatively, the program code may be downloaded from a server computer via a communications network.
Further, it should be clear that the functions of any one of the above-described embodiments may be implemented not only by executing the program code read out by the computer, but also by causing an operating system or the like operating on the computer to perform a part or all of the actual operations based on instructions of the program code.
Further, it is to be understood that the program code read out from the storage medium is written to a memory provided in an expansion board inserted into the computer or to a memory provided in an expansion unit connected to the computer, and then causes a CPU or the like mounted on the expansion board or the expansion unit to perform part or all of the actual operations based on instructions of the program code, thereby realizing the functions of any of the above-described embodiments.
It should be noted that not all steps and modules in the above flows and system structure diagrams are necessary, and some steps or modules may be omitted according to actual needs. The execution order of the steps is not fixed and can be adjusted as required. The system structure described in the above embodiments may be a physical structure or a logical structure, that is, some modules may be implemented by the same physical entity, or some modules may be implemented by a plurality of physical entities, or some components in a plurality of independent devices may be implemented together.
While the invention has been shown and described in detail in the drawings and in the preferred embodiments, it is not intended to limit the invention to the embodiments disclosed, and it will be apparent to those skilled in the art that many more embodiments of the invention are possible that combine the features of the different embodiments described above and still fall within the scope of the invention.

Claims (10)

1. The age prediction method based on the facial features and the texture features is characterized by comprising the following steps of:
acquiring a plurality of face images, wherein each face image is provided with an age label;
extracting texture characteristic values from the face image based on the gray level co-occurrence matrix;
performing face detection based on a Viola-Jones algorithm, and extracting facial feature points through a CHEHRA model;
carrying out data standardization on the facial feature points based on the center coordinates of the facial feature point coordinates to obtain standardized coordinates of the facial feature points, and carrying out data standardization on the texture feature values, uniformly mapping them to a predetermined value-domain interval to obtain standardized texture feature values;
and constructing a training set based on the standardized coordinates of the facial feature points and the standardized texture feature values, and performing regression prediction on the training set based on an SVM regression prediction method to obtain the predicted age.
2. The age prediction method based on facial features and texture features as claimed in claim 1, wherein the face images are photographed images downloaded from the FGNET face age database.
3. The age prediction method based on facial features and texture features according to claim 1, wherein, for an n × m face image I with the offset set to (Δa, Δb), the formula for calculating the gray level co-occurrence matrix C(i, j) corresponding to the face image I is:

$$C(i,j)=\sum_{p=1}^{n}\sum_{q=1}^{m}\begin{cases}1,&\text{if }I(p,q)=i\ \text{and }I(p+\Delta a,\,q+\Delta b)=j\\0,&\text{otherwise}\end{cases}$$

where p and q are pixel indices, with p ranging over 1, ..., n and q over 1, ..., m; i denotes the gray level of the pixel at (p, q), and j denotes the gray level of the pixel at (p + Δa, q + Δb).
4. The age prediction method based on facial features and texture features as claimed in claim 1, wherein the texture feature values comprise:
f1. Autocorrelation
$f_1=\sum_i\sum_j (ij)\,p(i,j)$
f2. Contrast
$f_2=\sum_i\sum_j (i-j)^2\,p(i,j)$
f3. First correlation measure
$f_3=\frac{\sum_i\sum_j (ij)\,p(i,j)-\mu_x\mu_y}{\sigma_x\sigma_y}$
f4. Second correlation measure
$f_4=\sum_i\sum_j \frac{(i-\mu_x)(j-\mu_y)\,p(i,j)}{\sigma_x\sigma_y}$
f5. Cluster prominence
$f_5=\sum_i\sum_j (i+j-\mu_x-\mu_y)^4\,p(i,j)$
f6. Cluster shade
$f_6=\sum_i\sum_j (i+j-\mu_x-\mu_y)^3\,p(i,j)$
f7. Dissimilarity
$f_7=\sum_i\sum_j |i-j|\,p(i,j)$
f8. Energy
$f_8=\sum_i\sum_j p(i,j)^2$
f9. Entropy
$f_9=-\sum_i\sum_j p(i,j)\log p(i,j)$
f10. First homogeneity / inverse difference measure
$f_{10}=\sum_i\sum_j \frac{p(i,j)}{1+|i-j|}$
f11. Second homogeneity / inverse difference moment measure
$f_{11}=\sum_i\sum_j \frac{p(i,j)}{1+(i-j)^2}$
f12. Maximum probability
$f_{12}=\max_{i,j} p(i,j)$
f13. Variance
$f_{13}=\sum_i\sum_j (i-\mu)^2\,p(i,j)$
f14. Sum average
$f_{14}=\sum_{k=2}^{2N_g} k\,p_{x+y}(k)$
f15. Sum variance
$f_{15}=\sum_{k=2}^{2N_g} (k-f_{16})^2\,p_{x+y}(k)$
f16. Sum entropy
$f_{16}=-\sum_{k=2}^{2N_g} p_{x+y}(k)\log p_{x+y}(k)$
f17. Difference variance
$f_{17}=\sum_{k=0}^{N_g-1}\bigl(k-\textstyle\sum_l l\,p_{x-y}(l)\bigr)^2\,p_{x-y}(k)$
f18. Difference entropy
$f_{18}=-\sum_{k=0}^{N_g-1} p_{x-y}(k)\log p_{x-y}(k)$
f19. First information measure of correlation
$f_{19}=\frac{HXY-HXY1}{\max\{HX,\,HY\}}$
wherein
$HXY=f_9=-\sum_i\sum_j p(i,j)\log p(i,j)$
$HXY1=-\sum_i\sum_j p(i,j)\log\bigl(p_x(i)\,p_y(j)\bigr)$
$HXY2=-\sum_i\sum_j p_x(i)\,p_y(j)\log\bigl(p_x(i)\,p_y(j)\bigr)$
$HX=-\sum_i p_x(i)\log p_x(i)$
$HY=-\sum_j p_y(j)\log p_y(j)$
f20. Second information measure of correlation
$f_{20}=\sqrt{1-\exp\bigl(-2\,(HXY2-HXY)\bigr)}$
f21. Normalized inverse difference
$f_{21}=\sum_i\sum_j \frac{p(i,j)}{1+|i-j|/N_g}$
f22. Normalized inverse difference moment
$f_{22}=\sum_i\sum_j \frac{p(i,j)}{1+(i-j)^2/N_g^2}$
wherein the gray co-occurrence matrix is represented as a relative frequency matrix of pairs of neighbouring pixels separated by a given distance on the image, one with gray level i and the other with gray level j, and is a function of the angular relationship and the distance between the neighbouring pixels; p(i, j) is the (i, j)-th entry of the normalized gray co-occurrence matrix; $p_x(i)=\sum_j p(i,j)$ is the i-th entry of the marginal obtained by summing the rows of p(i, j), and $p_y(j)=\sum_i p(i,j)$ is the corresponding column marginal; $N_g$ is the number of gray levels in the image; and $\mu$ is the mean of the gray co-occurrence matrix;
$p_{x+y}(k)=\sum_i\sum_j p(i,j)$ over all pairs with $i+j=k$, for $k=2,\dots,2N_g$
$p_{x-y}(k)=\sum_i\sum_j p(i,j)$ over all pairs with $|i-j|=k$, for $k=0,\dots,N_g-1$
Means:
$\mu_x=\sum_i\sum_j i\,p(i,j)$, $\mu_y=\sum_i\sum_j j\,p(i,j)$
Standard deviations (mean square error):
$\sigma_x=\sqrt{\sum_i\sum_j (i-\mu_x)^2\,p(i,j)}$, $\sigma_y=\sqrt{\sum_i\sum_j (j-\mu_y)^2\,p(i,j)}$
5. The age prediction method based on facial features and texture features as claimed in claim 1, wherein performing data standardization on the facial feature points based on the center coordinates of the facial feature points comprises the steps of:
calculating, for each face image, the center coordinates of all facial feature points identified in that image;
calculating, for each facial feature point identified in the face image, the difference between its original coordinates and the center coordinates, to obtain the standardized coordinates of the facial feature points.
6. The age prediction method based on facial features and texture features according to claim 1, characterized in that the texture feature values are subjected to data normalization processing by a normalization method or a regularization method;
the normalization methods include a min-max normalization method and a zero-mean normalization method.
7. The age prediction method based on facial features and texture features as claimed in claim 1, wherein the regression prediction is performed on the training set based on SVM regression prediction method, comprising the steps of:
in the sample space, the dividing hyperplane is described by the following linear equation:
$w^{T}x+b=0$
wherein x represents a point in the sample space;
$w=(w_1;w_2;\dots;w_d)$ is the normal vector of the hyperplane;
b represents a displacement term, which determines the distance between the hyperplane and the origin;
the normal vector w and the displacement term b are denoted (w, b), and φ(x) denotes the feature vector after x is mapped;
the SVM regression model fits each point $(x_i,y_i)$ in the training set to a linear model, wherein the loss function of the SVM regression model is measured as:
$$err(x_i,y_i)=\begin{cases}0,&\text{if }|y_i-w\cdot\varphi(x_i)-b|\le\varepsilon\\|y_i-w\cdot\varphi(x_i)-b|-\varepsilon,&\text{if }|y_i-w\cdot\varphi(x_i)-b|>\varepsilon\end{cases}$$
the objective function is:
$$\min_{w,b}\ \frac{1}{2}\|w\|_2^2+C\sum_{i=1}^{m}err(x_i,y_i)$$
wherein ε is a constant with ε > 0, and C is a penalty coefficient.
8. Age prediction system based on facial features and texture features, characterized in that it is adapted to perform the age prediction method based on facial features and texture features according to any one of claims 1 to 7, said system comprising:
the face image acquisition module, used for acquiring a plurality of face images from the FGNET face age database, the face images carrying age labels;
the texture feature extraction module is used for extracting a texture feature value from the face image based on the gray level co-occurrence matrix;
the facial feature extraction module is used for carrying out face detection based on the Viola-Jones algorithm and extracting facial feature points through a CHEHRA model;
the data calibration module is used for carrying out data standardization processing on the facial feature points based on the central coordinates of the facial feature point coordinates to obtain standardized coordinates of the facial feature points, carrying out data standardization processing on texture feature values through a normalization method or a regularization method, and uniformly mapping the texture feature values to a preset value domain interval to obtain standardized texture feature values;
and the age prediction module, configured with an SVM regression model and used for constructing a training set based on the standardized coordinates of the facial feature points and the standardized texture feature values, and performing regression prediction on the training set based on the SVM regression prediction method to obtain the predicted age.
9. An apparatus, comprising: at least one memory and at least one processor;
the at least one memory to store a machine readable program;
the at least one processor, configured to invoke the machine readable program to perform the method of any of claims 1 to 7.
10. Computer readable medium, characterized in that it has stored thereon computer instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 7.
CN202011210058.0A 2020-11-03 2020-11-03 Age prediction method, system and device based on facial features and texture features Active CN112329607B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011210058.0A CN112329607B (en) 2020-11-03 2020-11-03 Age prediction method, system and device based on facial features and texture features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011210058.0A CN112329607B (en) 2020-11-03 2020-11-03 Age prediction method, system and device based on facial features and texture features

Publications (2)

Publication Number Publication Date
CN112329607A (en) 2021-02-05
CN112329607B (en) 2022-10-21

Family

ID=74323526

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011210058.0A Active CN112329607B (en) 2020-11-03 2020-11-03 Age prediction method, system and device based on facial features and texture features

Country Status (1)

Country Link
CN (1) CN112329607B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113011467A (en) * 2021-02-25 2021-06-22 南京中医药大学 Angelica sinensis medicinal material producing area identification method based on image structure texture information

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140369626A1 (en) * 2005-05-09 2014-12-18 Google Inc. System and method for providing objectified image renderings using recognition information from images
CN105550641A (en) * 2015-12-04 2016-05-04 康佳集团股份有限公司 Age estimation method and system based on multi-scale linear differential textural features
CN108197592A (en) * 2018-01-22 2018-06-22 百度在线网络技术(北京)有限公司 Information acquisition method and device
CN109299701A (en) * 2018-10-15 2019-02-01 南京信息工程大学 Expand the face age estimation method that more ethnic group features cooperate with selection based on GAN
CN109934287A (en) * 2019-03-12 2019-06-25 上海宝尊电子商务有限公司 A kind of clothing texture method for identifying and classifying based on LBP and GLCM
CN110458362A (en) * 2019-08-15 2019-11-15 中储粮成都储藏研究院有限公司 Grain quality index prediction technique based on SVM supporting vector machine model
CN110532970A (en) * 2019-09-02 2019-12-03 厦门瑞为信息技术有限公司 Age-sex's property analysis method, system, equipment and the medium of face 2D image


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
贾磊 (Jia Lei): "Research on Facial Expression Recognition Algorithms Based on LBP and HOG Feature Fusion", China Masters' Theses Full-text Database (Information Science and Technology) *


Also Published As

Publication number Publication date
CN112329607B (en) 2022-10-21

Similar Documents

Publication Publication Date Title
Xia et al. Multi-stage feature constraints learning for age estimation
CN110909651B (en) Method, device and equipment for identifying video main body characters and readable storage medium
Puzicha et al. Non-parametric similarity measures for unsupervised texture segmentation and image retrieval
US5901244A (en) Feature extraction system and face image recognition system
CN107545276B (en) Multi-view learning method combining low-rank representation and sparse regression
CN108415883B (en) Convex non-negative matrix factorization method based on subspace clustering
US5774576A (en) Pattern recognition by unsupervised metric learning
CN112070058A (en) Face and face composite emotional expression recognition method and system
CN116340796B (en) Time sequence data analysis method, device, equipment and storage medium
CN113569554B (en) Entity pair matching method and device in database, electronic equipment and storage medium
Adams et al. GEFE: Genetic & evolutionary feature extraction for periocular-based biometric recognition
CN116110089A (en) Facial expression recognition method based on depth self-adaptive metric learning
Li et al. Experimental evaluation of FLIR ATR approaches—a comparative study
Dias et al. A multirepresentational fusion of time series for pixelwise classification
CN112329607B (en) Age prediction method, system and device based on facial features and texture features
Brazdil et al. Dataset characteristics (metafeatures)
Kim et al. An integration scheme for image segmentation and labeling based on Markov random field model
Bindu et al. Hybrid feature descriptor and probabilistic neuro-fuzzy system for face recognition
Saeed et al. Classification of live scanned fingerprints using dense sift based ridge orientation features
Gowda Age estimation by LS-SVM regression on facial images
CN116152551A (en) Classification model training method, classification method, device, equipment and medium
Yao et al. Fingerprint quality assessment: Matching performance and image quality
Yan et al. A CNN-based fingerprint image quality assessment method
RU2789609C1 (en) Method for tracking, detection and identification of objects of interest and autonomous device with protection from copying and hacking for their implementation
Yang et al. Deep learning for video face recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant