CN106651978B - Face image prediction method and system - Google Patents

Info

Publication number
CN106651978B
CN106651978B (application CN201610886084.2A)
Authority
CN
China
Prior art keywords: face, face image, image, images, prediction model
Prior art date
Legal status
Active
Application number
CN201610886084.2A
Other languages
Chinese (zh)
Other versions
CN106651978A (en)
Inventor
吴子扬 (Wu Ziyang)
刘聪 (Liu Cong)
刘庆峰 (Liu Qingfeng)
Current Assignee
Iflytek Information Technology Co Ltd
Original Assignee
Iflytek Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Iflytek Information Technology Co Ltd filed Critical Iflytek Information Technology Co Ltd
Priority to CN201610886084.2A
Publication of CN106651978A
Application granted
Publication of CN106651978B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G06V40/172: Classification, e.g. identification


Abstract

The invention discloses a face image prediction method and system. The method comprises the following steps: acquiring a face image to be predicted and the time point for which the face image is to be predicted; extracting face attribute features from the face image; determining a corresponding face image prediction model using the face attribute features; and inputting the pixels of the face image into the face image prediction model to obtain a predicted face image. With the method and system, the predicted face image is highly correlated with the face image to be predicted, which greatly improves the user experience.

Description

Face image prediction method and system
Technical Field
The invention relates to the field of image processing, and in particular to a face image prediction method and system.
Background
With the continuous improvement of modern living standards and advances in science and technology, people's demands for entertainment and information services have become increasingly diverse. Predicting how a person looked in the past, or will look in the future, from a photograph has many applications. In entertainment, a user can upload a picture and see a prediction of his or her appearance at a given age, say 20 or 30; in film production, the young or old appearance of a star can be predicted to help find a suitable stand-in; in mobile applications, predicting a user's photo at a chosen age adds an entertaining feature. There are corresponding demands in other fields as well, such as public safety, where predicting how a criminal suspect will look after many years can help the police solve cases.
Existing face image prediction methods are generally based on face synthesis. To predict a face image, a three-dimensional reconstruction method synthesizes a face of the corresponding age group: after the facial key points are detected, the input image is modeled in three dimensions, the face texture in the image is mapped onto the 3D model, the detected key points are deformed in three dimensions, and the texture mapping is interpolated. Finally, pre-generated wrinkles of the corresponding age group are added to the processed 3D model and smoothed, yielding the reconstructed face image of that age group; the wrinkle model must be trained in advance on a large amount of collected data and is then used to simulate wrinkles. However, facial texture changes differ greatly from person to person. Because the pre-generated wrinkles added to the 3D model are unrelated to the face image input by the current user, the wrinkled face often differs from the face the user provided (the edges of the added wrinkles are especially noticeable), so the synthesized face image looks strange and unrealistic, and the user experience is poor.
Disclosure of Invention
The embodiments of the invention provide a face image prediction method and system, which improve the realism of the predicted face image and thereby the user experience.
To this end, the invention provides the following technical solution:
a face image prediction method comprises the following steps:
acquiring a face image to be predicted and a time point of predicting the face image;
extracting face attribute features from the face image;
determining a corresponding face image prediction model by using the face attribute characteristics;
and inputting the pixel points of the face image into the face prediction model to obtain a predicted face image.
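The four claimed steps can be sketched end to end as follows; every component here (the attribute names, the registry keyed by attribute values, the identity-stub model) is hypothetical scaffolding for illustration, not the patented implementation:

```python
# Minimal sketch of the claimed prediction pipeline. All components are
# illustrative stubs; a real system would use trained attribute classifiers
# and per-leaf-node prediction models as described later in the text.

def extract_attributes(image):
    # Stub for face attribute extraction (gender, glasses, ...).
    return {"gender": "female", "glasses": False}

# Hypothetical registry: attribute values -> prediction model for that leaf.
MODEL_REGISTRY = {
    ("female", False): lambda pixels, time_point: list(pixels),  # identity stub
}

def select_model(attrs):
    # Stub for traversing the decision tree down to a leaf-node model.
    return MODEL_REGISTRY[(attrs["gender"], attrs["glasses"])]

def predict_face(image, time_point):
    attrs = extract_attributes(image)    # step 2: extract attribute features
    model = select_model(attrs)          # step 3: pick the prediction model
    return model(image, time_point)      # step 4: pixels in, predicted face out

predicted = predict_face([0.1, 0.5, 0.9], time_point=30)
```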
Preferably, the face image prediction model comprises a forward time-flow model and/or a backward time-flow model; the forward time-flow model is used to predict the future appearance of the face, and the backward time-flow model is used to predict the past appearance of the face;
the method further comprises the following steps of constructing the face image prediction model:
collecting a large number of face images and constructing a time-lapse transformation database;
extracting face attribute features from the face images in the time-lapse transformation database;
normalizing the face images in the time-lapse transformation database to obtain normalized face images;
clustering the normalized face images according to the extracted face attribute features to obtain clustered face images;
and constructing a face prediction model according to the clustered face images.
Preferably, the face attribute features include any one or more of the following: gender, expression, whether glasses are worn, region, and occupation;
the extracting of the face attribute features from the face images in the time-lapse transformation database comprises:
performing face detection and facial feature point localization on each face image to obtain the positions of the local feature points of the face in the image;
and extracting the face attribute features of each face image according to the positions of the local feature points and pre-trained classification models.
Preferably, the normalizing of the face images in the time-lapse transformation database comprises:
normalizing the coordinates and scale of the face in each face image in the time-lapse transformation database.
Preferably, the clustering of the normalized face images according to the extracted face attribute features comprises:
(1) selecting one face attribute feature as the root node of a decision tree, determining the edges of the root node according to the values of the selected feature, and dividing the face images into several classes;
(2) using the extracted face attribute features, calculating the minimum variance of the values of the remaining face attribute features within each class of face images;
(3) judging whether the minimum variance is larger than a set value; if so, executing step (4); otherwise, executing step (5);
(4) taking the node of the class corresponding to the minimum variance as a leaf node and dividing it no further; then executing step (6);
(5) taking the attribute feature corresponding to the minimum variance in each class as the upper node of that class of face images, obtaining the edges of the upper node according to its values, and further dividing each class of face images into several classes;
(6) judging whether any attribute features have not yet been added to the decision tree; if so, executing step (2); otherwise, executing step (7);
(7) counting the number of face images under each leaf node; if the number of face images in a leaf node is smaller than a set threshold, deleting that leaf node and its sibling nodes, adding their face images to the parent node, and finishing the construction of the decision tree.
Preferably, the constructing of a face prediction model according to the clustered face images comprises:
for each leaf node in the decision tree, constructing a face prediction model corresponding to that leaf node, specifically comprising:
sorting the face images in the time-lapse transformation database, with the face images of the same person ordered by age;
initializing the face prediction model with the sorted data to obtain an initialized face prediction model;
and incrementally training the initialized face prediction model to obtain the final face prediction model.
Preferably, the determining of a corresponding face image prediction model using the face attribute features comprises:
traversing the decision tree according to the face attribute features to find the corresponding leaf node;
and acquiring the face prediction model corresponding to that leaf node.
Preferably, the method further comprises:
restoring the predicted face image according to the face attribute features to obtain a restored face image.
Preferably, the method further comprises:
synthesizing the predicted face image, or the restored face image, with the background of the face image to be predicted to obtain a synthesized face image.
Preferably, the method further comprises:
feeding back the predicted face image, the restored face image, or the synthesized face image to the user.
A face image prediction system comprises:
a receiving module, used for acquiring a face image to be predicted and the time point for which the face image is to be predicted;
a feature extraction module, used for extracting face attribute features from the face image;
a model selection module, used for determining a corresponding face image prediction model using the face attribute features;
and a prediction module, used for inputting the pixels of the face image into the face image prediction model to obtain a predicted face image.
Preferably, the face image prediction model comprises a forward time-flow model and/or a backward time-flow model; the forward time-flow model is used to predict the future appearance of the face, and the backward time-flow model is used to predict the past appearance of the face;
the system further comprises a prediction model construction module, which comprises:
an image collection unit, used for collecting a large number of face images and constructing a time-lapse transformation database;
a feature extraction unit, used for extracting face attribute features from the face images in the time-lapse transformation database;
a normalization unit, used for normalizing the face images in the time-lapse transformation database to obtain normalized face images;
a clustering unit, used for clustering the normalized face images according to the extracted face attribute features to obtain clustered face images;
and a model construction unit, used for constructing a face prediction model according to the clustered face images.
Preferably, the face attribute features include any one or more of the following: gender, expression, whether glasses are worn, region, and occupation;
the feature extraction unit comprises:
a localization subunit, used for performing face detection and facial feature point localization on each face image to obtain the positions of the local feature points of the face in the image;
and an extraction subunit, used for extracting the face attribute features of each face image according to the positions of the local feature points and pre-trained classification models.
Preferably, the normalization unit is specifically configured to normalize the coordinates and scale of the face in each face image in the time-lapse transformation database.
Preferably, the clustering unit is specifically configured to cluster the normalized face images in the following manner:
(1) selecting one face attribute feature as the root node of a decision tree, determining the edges of the root node according to the values of the selected feature, and dividing the face images into several classes;
(2) using the extracted face attribute features, calculating the minimum variance of the values of the remaining face attribute features within each class of face images;
(3) judging whether the minimum variance is larger than a set value; if so, executing step (4); otherwise, executing step (5);
(4) taking the node of the class corresponding to the minimum variance as a leaf node and dividing it no further; then executing step (6);
(5) taking the attribute feature corresponding to the minimum variance in each class as the upper node of that class of face images, obtaining the edges of the upper node according to its values, and further dividing each class of face images into several classes;
(6) judging whether any attribute features have not yet been added to the decision tree; if so, executing step (2); otherwise, executing step (7);
(7) counting the number of face images under each leaf node; if the number of face images in a leaf node is smaller than a set threshold, deleting that leaf node and its sibling nodes, adding their face images to the parent node, and finishing the construction of the decision tree.
Preferably, the model construction unit is specifically configured to construct, for each leaf node in the decision tree, a face prediction model corresponding to that leaf node; the model construction unit specifically comprises:
a sorting subunit, used for sorting the face images in the time-lapse transformation database, with the face images of the same person ordered by age;
an initialization subunit, used for initializing the face prediction model with the sorted data to obtain an initialized face prediction model;
and an incremental training subunit, used for incrementally training the initialized face prediction model to obtain the final face prediction model.
Preferably, the model selection module comprises:
a traversal unit, used for traversing the decision tree according to the face attribute features to find the corresponding leaf node;
and the model acquisition unit is used for acquiring the face prediction model corresponding to the leaf node.
Preferably, the system further comprises:
a restoration module, used for restoring the predicted face image according to the face attribute features to obtain a restored face image.
Preferably, the system further comprises:
a synthesis module, used for synthesizing the predicted face image, or the restored face image, with the background of the face image to be predicted to obtain a synthesized face image.
Preferably, the system further comprises:
a feedback module, used for feeding back the predicted face image, the restored face image, or the synthesized face image to the user.
The face image prediction method and system provided by the embodiments of the invention pre-construct a face image prediction model, which may comprise a forward time-flow model for predicting the future appearance of a face and/or a backward time-flow model for predicting the past appearance of a face. Face attribute features related to facial growth or change are extracted from the face image to be predicted, the corresponding face prediction model is determined from the extracted features, and the pixels of the face image are input into that model to obtain the face image at the requested time point. The predicted face image is therefore highly correlated with the face image to be predicted: it looks realistic, gives the user a sense of recognition, and greatly improves the user experience.
Drawings
To illustrate the embodiments of the present application and the prior-art technical solutions more clearly, the drawings needed in the embodiments are briefly described below. The drawings in the following description show only some embodiments of the invention; those skilled in the art can derive other drawings from them.
FIG. 1 is a flow chart of constructing a face image prediction model according to an embodiment of the present invention;
FIG. 2 is a flow chart of clustering face images using a decision tree in an embodiment of the present invention;
FIG. 3 is a flow chart of constructing the face prediction model corresponding to each leaf node in an embodiment of the present invention;
FIG. 4 is a flowchart of a face image prediction method according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a face image prediction system according to an embodiment of the present invention;
FIG. 6 is a block diagram of a prediction model building block according to an embodiment of the present invention;
fig. 7 is another schematic structural diagram of a face image prediction system according to an embodiment of the present invention.
Detailed Description
To help those skilled in the art better understand the solutions of the embodiments of the invention, the embodiments are described in further detail below with reference to the drawings and implementations.
The face image prediction method and system provided by the embodiments of the invention pre-construct a face image prediction model. Specifically, face attribute features extracted from a large number of images are clustered using a decision tree, and a face image prediction model is constructed from the face images falling into each leaf node of the clustered tree. The face image prediction model may comprise a forward time-flow model for predicting the future appearance of a face and/or a backward time-flow model for predicting the past appearance of a face. Face attribute features are extracted from the face image to be predicted, the corresponding face prediction model is obtained from the extracted features, and the face image at the specified time point is predicted with that model.
First, the construction process of the face image prediction model in the embodiment of the present invention is explained in detail below.
As shown in fig. 1, the flowchart of constructing a face image prediction model in the embodiment of the present invention includes the following steps:
Step 101: collect a large number of face images and construct a time-lapse transformation database.
Specifically, multiple face images of the same person at different ages are collected, and the images of many different people at different ages form the time-lapse transformation database. It should be noted that, when constructing the database, face detection can be used to discard collected images in which a large area of the face is missing, so that they do not affect the accuracy of the face image prediction model parameters.
Step 102: extract face attribute features from the face images in the time-lapse transformation database.
First, face detection and facial feature point localization are performed on each face image to obtain the positions of the local feature points of the face in the image; then the face attribute features of each image are extracted according to these positions and pre-trained classification models.
Face detection finds the position of the face in an image; the specific method is the same as in the prior art. For example, a large number of images containing faces can be collected in advance, Scale-Invariant Feature Transform (SIFT) features extracted, and a face/non-face classification model trained; this model is then used to detect faces in the database images. Facial feature point localization further determines the local facial structures (eyes, eyebrows, nose, mouth, outer contour of the face, and so on) on the basis of face detection. It is mainly performed by combining facial texture features with position constraints among the feature points; for example, an Active Shape Model (ASM) or Active Appearance Model (AAM) algorithm can be used to locate the facial feature points and obtain the positions of the local feature points of the face in the image.
In the embodiment of the present invention, the face attribute features are attribute features related to facial growth or change. The extracted features may include any one or more of the following: age, gender, expression, whether glasses are worn, region, and occupation. Their values are as follows:
age: the age range is divided into several intervals with a fixed span of years; for example, ages 0 to 99 are divided into 20 intervals with a span of 5 years, and age extraction directly yields the interval to which the face image belongs;
gender: male, female;
expression: facial expressions can be roughly divided into happiness, anger, sorrow, and joy;
whether glasses are worn: yes, no;
region: regions can be divided by province, and the province of the face in the image is obtained by prediction;
occupation: several different occupations can be distinguished, for example infant, student, farmer, office worker, and so on.
In the specific extraction process, a classification model can be trained in advance for each feature, and the value of each attribute is predicted with its classification model. The classification models can be described by deep neural networks; the extraction method itself is the same as in the prior art and is not detailed here.
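The attribute values listed above can be encoded as follows; the 5-year age span and the category lists come from the text, while the exact encoding (category strings, dict layout, the example province) is an assumption for illustration:

```python
# Illustrative encoding of the attribute values described above.

AGE_SPAN = 5
NUM_INTERVALS = 20          # ages 0-99 with a span of 5 years

def age_interval(age):
    """Map an age in years to its interval index (0..19)."""
    return min(age // AGE_SPAN, NUM_INTERVALS - 1)

EXPRESSIONS = ["happiness", "anger", "sorrow", "joy"]

def encode_attributes(age, gender, expression, glasses, region, occupation):
    return {
        "age_interval": age_interval(age),
        "gender": gender,          # "male" / "female"
        "expression": expression,  # one of EXPRESSIONS
        "glasses": glasses,        # True / False
        "region": region,          # e.g. a province name
        "occupation": occupation,  # e.g. "student", "farmer"
    }
```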
Step 103: normalize the face images in the time-lapse transformation database to obtain normalized face images.
Normalization mainly regularizes the coordinates and scale of the face in each image: for example, taking the nose tip as the center point, the line joining the two eyes as the x-axis, and the line through the nose tip perpendicular to it as the y-axis, the coordinates and scale of the face are normalized. Meanwhile, for face images with occlusion or glasses, an image without occlusion or glasses is predicted with an image smoothing technique, and the predicted image replaces the original image in the database; the smoothing can be based on deep-neural-network image reconstruction or on a sparse-representation method.
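The coordinate-and-scale normalization described above (nose tip as origin, eye line as x-axis) can be sketched with a 2D similarity transform; using the inter-eye distance as the scale unit is an added assumption, since the text does not fix the unit:

```python
import numpy as np

def normalize_landmarks(points, left_eye, right_eye, nose_tip):
    """Translate the nose tip to the origin, rotate the eye line to be
    horizontal, and scale by the inter-eye distance."""
    pts = np.asarray(points, dtype=float)
    l, r, n = (np.asarray(p, dtype=float) for p in (left_eye, right_eye, nose_tip))
    dx, dy = r - l
    angle = np.arctan2(dy, dx)            # tilt of the eye line
    c, s = np.cos(-angle), np.sin(-angle)
    rot = np.array([[c, -s], [s, c]])     # rotation that levels the eye line
    scale = np.hypot(dx, dy)              # inter-eye distance as the unit
    return (pts - n) @ rot.T / scale

# Landmarks here are left eye, right eye, nose tip (in that order).
out = normalize_landmarks([[0, 0], [3, 4], [1, 3]],
                          left_eye=[0, 0], right_eye=[3, 4], nose_tip=[1, 3])
```

After the transform, the nose tip sits at the origin, both eyes share the same y-coordinate, and the distance between them is 1.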
Step 104: cluster the normalized face images according to the extracted face attribute features to obtain clustered face images.
In the embodiment of the present invention, the normalized face images may be clustered with a decision tree whose nodes represent the face attribute features and whose edges represent specific values of those features; the clustering process is described in detail later. Of course, other prior-art clustering methods may also be used; the embodiment of the present invention is not limited in this respect.
Step 105: construct a face prediction model according to the clustered face images.
After the face images are clustered, a complete decision tree is obtained, and each face image falls into a leaf node of the tree according to the values of its attribute features. Face images falling into the same leaf node share one model, while the face images of different leaf nodes train different face prediction models. The face prediction model may adopt a deep neural network structure and is trained with an incremental training method; the construction process is described in detail later.
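The per-leaf training scheme can be sketched as grouping images by their leaf assignment and training one model per group; `train_model` here is only a placeholder for the deep-network training the text describes:

```python
from collections import defaultdict

def train_model(images):
    # Placeholder for the incremental deep-network training described below.
    return {"n_train": len(images)}

def train_per_leaf(images_with_leaf):
    """images_with_leaf: iterable of (leaf_id, image) pairs; one model per leaf."""
    groups = defaultdict(list)
    for leaf_id, image in images_with_leaf:
        groups[leaf_id].append(image)
    return {leaf: train_model(imgs) for leaf, imgs in groups.items()}

models = train_per_leaf([("leaf_a", "img1"), ("leaf_b", "img2"), ("leaf_a", "img3")])
```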
As shown in fig. 2, the flow of clustering face images with a decision tree in the embodiment of the present invention comprises the following steps:
Step 201: select one face attribute feature as the root node of the decision tree, determine the edges of the root node according to the values of the selected feature, and divide the face images into several classes.
The nodes of the decision tree represent the attribute features and its edges represent their specific values. Specifically, a strongly discriminative face attribute feature can be taken as the root node, and the edges of the root node obtained from that feature's values, dividing the face images into several classes. For example, according to the values of the gender attribute, the face images are divided into two classes: male face images and female face images.
Step 202: using the extracted face attribute features, calculate the minimum variance of the values of the remaining face attribute features within each class of face images.
Specifically, for each class of face images, the variance of each remaining attribute's values is first calculated per person over that person's images; then, for all the people in the class, the per-person variances of each remaining attribute are summed (one variance sum per attribute); finally, the smallest of these variance sums is taken as the minimum attribute-value variance of the class.
For example, if gender is the root node, the remaining face attribute features are age, expression, whether glasses are worn, region, and occupation. For the face images of each person, the variance of each remaining attribute's values is computed over that person's images. If Zhang San has 4 images, the variances of those 4 images' values on the 5 attributes (age, expression, glasses, region, occupation) are computed (called attribute variances for short); likewise, if Li Si has 3 images, the variances of those 3 images' values on the same 5 attributes are computed. Then, for each face attribute feature, the per-person variances of all the people in the class are summed, giving the variance sum of each remaining attribute for that class.
It should be noted that a large variance means a large spread in the attribute's values, i.e. a large prediction deviation during feature extraction. Therefore, if a person's computed attribute variance is larger than a set threshold, that person's values of the attribute can be disregarded when computing the minimum variance of the class.
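Step 202 and the outlier rule above can be sketched as follows; the numeric attribute encodings and the dictionary layout are assumptions for illustration:

```python
from statistics import pvariance

def min_variance_attribute(per_person_values, outlier_threshold=float("inf")):
    """per_person_values: {attribute: {person: [values over that person's images]}}.
    Sum the per-person variances for each attribute, skipping any person whose
    variance exceeds the threshold, and return the attribute with the smallest
    sum together with that sum."""
    sums = {}
    for attr, people in per_person_values.items():
        total = 0.0
        for values in people.values():
            v = pvariance(values)
            if v <= outlier_threshold:   # drop unreliable extractions
                total += v
        sums[attr] = total
    best = min(sums, key=sums.get)
    return best, sums[best]

example = {
    "expression": {"zhang_san": [1, 1, 2, 2], "li_si": [0, 1, 2]},
    "glasses":    {"zhang_san": [0, 0, 0, 0], "li_si": [1, 1, 1]},
}
best_attr, best_sum = min_variance_attribute(example)
```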
Step 203, judging whether the minimum variance is larger than a set value. If yes, go to step 204; otherwise, step 205 is performed.
Step 204, taking the node to which the class corresponding to the minimum variance belongs as a leaf node, and not continuing to divide; step 206 is then performed.
Step 205, taking the attribute features corresponding to the minimum variance in each class as upper nodes of each class of face images, obtaining corresponding edges of the upper nodes in each class according to the values of the upper nodes in each class, and continuously dividing each class of face images into multiple classes.
Step 206, judging whether there are attribute features that have not yet been added to the decision tree; if yes, go to step 202; otherwise, go to step 207.
Step 207, counting the number of face images under each leaf node. If the number of face images in a leaf node is smaller than a set number threshold, the leaf node and its sibling nodes are deleted, and the face images in the leaf node and its sibling nodes are added to the parent node of the leaf node. The construction of the decision tree is then complete, and the clustering of the face images is finished.
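Step 207's pruning rule can be sketched with a toy tree of nested dicts. The tree layout, and the simplification of collapsing a node only when every child is a leaf, are assumptions of this sketch, not details fixed by the patent:

```python
def prune(node, min_count):
    """Recursively prune the decision tree: if all children of a node are
    leaves and any leaf holds fewer images than min_count, that leaf and its
    siblings are collapsed, and their images move to the parent node."""
    if "children" not in node:              # already a leaf
        return node
    for child in node["children"].values():
        prune(child, min_count)
    children = list(node["children"].values())
    if all("children" not in c for c in children) and \
       any(len(c["images"]) < min_count for c in children):
        # merge the sibling leaves' images into the parent, making it a leaf
        node["images"] = [img for c in children for img in c["images"]]
        del node["children"]
    return node

# toy example: the "female" leaf is below the threshold of 2 images
tree = {"children": {
    "male":   {"images": ["a", "b", "c"]},
    "female": {"images": ["d"]},
}}
prune(tree, min_count=2)
```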
As shown in fig. 3, the process of constructing a face prediction model corresponding to each leaf node in the embodiment of the present invention includes the following steps:
Step 301, sorting the face images in the time-light transformation database, with the face images of the same person sorted by age.
Step 302, initializing the face prediction model using the sorted data to obtain an initialized face prediction model.
For training of the time-light forward flow model, the face image in the lowest age interval is used as input (for example, with 5 years as one age interval) and the face image in the next age interval is used as output; the model is trained to obtain a time-light forward flow model with a short-span forward prediction and reconstruction function.
For training of the time-light backward flow model, the face image in the lowest age interval is taken as the output of the model and the face image in the next age interval is taken as the input; the model is trained to obtain a time-light backward flow model with a short-span backward reconstruction function.
Step 303, performing incremental training on the initialized face prediction model to obtain a final face prediction model.
The face images of the remaining age intervals are then added sequentially, and the time-light forward flow model and the time-light backward flow model of step 302 are incrementally trained, as follows:
For the time-light forward flow model: first, the face image in the lowest age interval is taken as input to predict the face image in the next age interval; the predicted face image in the next age interval is then taken as input to predict the face image in the third age interval; the error between the predicted face image in the third age interval and the real face image in the third age interval (i.e., the face image in the time-light transformation database) is minimized, and the time-light forward flow model parameters are updated. Age intervals of the face images are then added sequentially: each time one or more age intervals are added, the time-light forward flow model is trained and updated, until all age intervals have been added and the final time-light forward flow model is obtained.
For the time-light backward flow model: first, the face image in a later age interval is taken as input, for example the face image in the third age interval, and the time-light backward flow model reconstructs the face image in the previous age interval; the reconstructed face image in the previous age interval is then taken as input, and the face image in the lowest age interval is reconstructed. The error between the reconstructed face image in the lowest age interval and the real face image in the lowest age interval (i.e., the face image in the time-light transformation database) is minimized, and the time-light backward flow model parameters are updated. Age intervals of the face images are then added sequentially: each time one or more age intervals are added, the time-light backward flow model is trained and updated, until all age intervals have been added and the final time-light backward flow model is obtained.
With the face prediction model, a flowchart of a face image prediction method provided by the embodiment of the present invention is shown in fig. 4, and includes the following steps:
step 401, obtaining a face image to be predicted and a time point of predicting the face image.
The image can be an image directly uploaded by a user or an image captured directly by a camera.
The time point of the predicted face image refers to the time point of the face image the user wants to predict; specifically, it can be expressed as an age. For example, the age estimated from the current face image is 20 years old, and the user wants to obtain the face image at 30 years old. The time point may be input or selected by the user.
Step 402, extracting face attribute features from the face image.
The face attribute features may include any one or more of: gender, expression, whether glasses are worn, region, occupation. The face attribute features may be extracted in the same way as when the face prediction model was constructed from a large number of face images: first, face detection and face feature point positioning are performed on the face image to obtain the positions of the local face feature points in the image; then, the face attribute features of the face image are extracted according to the positions of the local feature points and a pre-trained classification model. Of course, if the image provided by the user contains not only a face but also a background and additional information unrelated to the face (such as earrings or glasses), the background and the unrelated information need to be removed.
And step 403, determining a corresponding face image prediction model by using the face attribute characteristics.
Specifically, the decision tree may be traversed according to the face attribute features, leaf node positions corresponding to the face attribute features are found, and then a face prediction model corresponding to the leaf nodes is obtained.
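The leaf lookup of step 403 can be sketched as a simple traversal; the nested-dict tree layout, the attribute names, and the string model identifiers are all assumptions for illustration:

```python
def find_model(tree, attrs):
    """Walk the decision tree along the edges matching the extracted
    attribute values until a leaf is reached, then return its model."""
    node = tree
    while "children" in node:
        value = attrs[node["attribute"]]    # value of this node's attribute
        node = node["children"][value]      # follow the matching edge
    return node["model"]

# toy tree: gender at the root, glasses one level down on the male branch
tree = {
    "attribute": "gender",
    "children": {
        "male": {"attribute": "glasses",
                 "children": {True: {"model": "m1"}, False: {"model": "m2"}}},
        "female": {"model": "m3"},
    },
}
model = find_model(tree, {"gender": "male", "glasses": True})
```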
Step 404, inputting the pixel points of the face image into the face prediction model to obtain a predicted face image.
If the time point is after the time point of the current face image, the time-light forward flow model is used for prediction; if the time point is before the time point of the current image, the time-light backward flow model is used for backward reconstruction of the image. For example, an input of "20 years later" is after the time point corresponding to the current face image, and an input of "20 years earlier" is before it; alternatively, the user may directly input the current age and the target age. For example, if the age of the current face image is 30 years: to obtain a face image corresponding to any time point between 30 and 99 years, the time-light forward flow model is used to predict from the current image; to obtain a face image corresponding to any time point between 0 and 29 years, the time-light backward flow model is used to reconstruct backward from the current image.
During prediction, the age interval to which the current face image belongs and the age interval of the desired face image are first determined according to the pre-divided age intervals. The current face image is then taken as the input of the face prediction model, and the face image of one age interval is predicted or reconstructed at a time; the predicted or reconstructed face image is then taken as input to continue predicting or reconstructing the face image of the next age interval, until the age interval of the desired face image is reached. Finally, the predicted or reconstructed face image is taken as the final generated face image.
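The interval-by-interval loop of step 404 might look like the following; the 5-year intervals match the example above, while the two lambda "models" are stand-ins for trained forward-flow and backward-flow models:

```python
def predict_face(face, current_age, target_age, forward, backward, interval=5):
    """Step through age intervals one at a time, feeding each prediction or
    reconstruction back in as input, until the target interval is reached."""
    cur = current_age // interval
    tgt = target_age // interval
    step = forward if tgt > cur else backward  # forward-flow vs backward-flow
    for _ in range(abs(tgt - cur)):            # one age interval per pass
        face = step(face)
    return face

# toy "models" that just scale a scalar face representation
aged = predict_face(100.0, current_age=20, target_age=30,
                    forward=lambda f: f * 1.1, backward=lambda f: f * 0.9)
```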
After the predicted face image is obtained, the face image can be directly fed back to the user.
As mentioned above, the face image to be predicted provided by the user may include additional information unrelated to the face in addition to the face information. For such images, the predicted face image also needs to be restored with the corresponding attribute features (such as expression or whether glasses are worn) according to the extracted face attribute features. Specifically, edge information can be used to detect attachments such as glasses in the received image, and the pixels of the eye area can be copied directly to the corresponding positions of the generated face image; for expression features, facial key points can be used to control the deformation of the facial area of the generated face image, fitting the facial expression of the received image. Through this restoration, the restored face image is more realistic, and feeding it back to the user further improves the user experience.
In addition, if the face image to be predicted provided by the user further includes a background, the predicted face image needs to be synthesized with the background of the face image to be predicted, generating an image with the same background. Specifically, the predicted face image is first scale-normalized, i.e., scaled according to the face orientation information of the face image to be predicted, to obtain a scaled face image; the scaled face image is then interpolated with the background of the face image to be predicted to obtain a synthesized face image (the specific method is the same as in the prior art and is not described in detail here). Through this synthesis, the synthesized face image has a richer picture, and feeding it back to the user further improves the user experience.
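The paste-back part of the synthesis step can be sketched with NumPy. A real system would interpolate along the seam as described above; this sketch does a hard paste, and the face-box coordinates are made-up assumptions:

```python
import numpy as np

def composite(background, face, top, left):
    """Paste the (already scaled) predicted face over a copy of the
    background at the face region of the original image."""
    out = background.copy()
    h, w = face.shape[:2]
    out[top:top + h, left:left + w] = face
    return out

# toy 8x8 grayscale background with a 4x4 predicted face pasted at (2, 2)
bg = np.zeros((8, 8), dtype=np.uint8)
face = np.full((4, 4), 255, dtype=np.uint8)
img = composite(bg, face, top=2, left=2)
```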
In the face image prediction method provided by the embodiment of the present invention, a face prediction model is constructed in advance. The face prediction model may comprise a time-light forward flow model for predicting the future appearance of the face and/or a time-light backward flow model for predicting the past appearance of the face. Face attribute features related to face appearance or face change are extracted from the face image to be predicted, the corresponding face prediction model is determined using the extracted features, and the pixel points of the face image are input into the face prediction model to obtain the face image at the given time point. The predicted face image thus has high correlation with the face image to be predicted and high realism, giving the user a sense of immersion and greatly improving the user experience.
Correspondingly, an embodiment of the present invention further provides a face image prediction system, as shown in fig. 5, which is a schematic structural diagram of the face image prediction system according to the embodiment of the present invention, and includes the following modules:
a receiving module 501, configured to obtain a face image to be predicted and a time point of predicting the face image;
a feature extraction module 502, configured to extract a face attribute feature from the face image;
a model selection module 503, configured to determine a corresponding face image prediction model using the face attribute features;
the prediction module 504 is configured to input the pixel points of the face image into the face prediction model to obtain a predicted face image.
The facial image to be predicted and the time point of the predicted facial image may be directly input by the user, and the facial attribute features may include any one or more of the following: gender, expression, whether wearing glasses, region, occupation. Correspondingly, the feature extraction module 502 may perform face detection and face feature point positioning on the face image to obtain the position of the local feature point of the face in the image; and then, extracting the face attribute characteristics of each face image according to the position of each local characteristic point and a pre-trained classification model.
The model selection module 503 may obtain a corresponding face image prediction model according to a decision tree constructed during face image prediction model training, and specifically includes the following two units:
the traversal unit is used for traversing the decision tree according to the face attribute characteristics to find out corresponding leaf nodes;
and the model acquisition unit is used for acquiring the face prediction model corresponding to the leaf node.
The face image prediction system of the embodiment of the present invention performs face image prediction using a pre-constructed face prediction model, which may comprise a time-light forward flow model for predicting the future appearance of the face and/or a time-light backward flow model for predicting the past appearance of the face. The face prediction models can be constructed by a corresponding prediction model construction module; corresponding face prediction models need to be constructed for different face attribute features. In practical application, the prediction model construction module can construct face prediction models for various face attribute features by a clustering method. Moreover, the prediction model construction module may be a part of the system of the present invention or independent of it, i.e., a separate entity; the embodiment of the present invention is not limited in this respect.
As shown in fig. 6, the structure diagram of the prediction model building module in the embodiment of the present invention includes the following units:
an image collecting unit 61, used for collecting a large number of face images and constructing a time-light transformation database; specifically, multiple face images of the same person at different ages can be collected, and the face images of different persons at different ages form the time-light transformation database;
a feature extraction unit 62, configured to extract a face attribute feature from the face image in the time-light transformation database;
a regularization unit 63, configured to normalize the face images in the time-light transformation database to obtain normalized face images, specifically to normalize the coordinates and scale of the faces in the face images in the time-light transformation database;
the clustering unit 64 is configured to cluster the normalized face images according to the extracted face attribute features to obtain clustered face images;
and the model construction unit 65 is used for constructing a face prediction model according to the clustered face images.
The above-described feature extraction unit 62 may include: the positioning sub-unit is used for carrying out face detection and face characteristic point positioning on the face image to obtain the position of a local characteristic point of a face in the image; the extraction subunit is used for extracting the face attribute characteristics of each face image according to the positions of the local characteristic points and a pre-trained classification model.
The clustering unit 64 may specifically cluster the normalized face images according to the manner shown in fig. 2.
The model building unit 65 needs to build a face prediction model corresponding to each leaf node in the decision tree; a specific structure of the model building unit 65 may include the following sub-units:
the sorting subunit, used for sorting the face images in the time-light transformation database, with the face images of the same person sorted by age;
the initialization subunit is used for initializing the face prediction model by using the sorted data to obtain an initialized face prediction model;
and the increment training subunit is used for carrying out increment training on the initialized face prediction model to obtain a final face prediction model.
As shown in fig. 7, in another embodiment of the facial image prediction system of the present invention, the system may further include:
and the restoring module 701 is configured to restore the predicted face image according to the face attribute feature, so as to obtain a restored face image.
Further, the system may further include:
a synthesizing module 702, configured to synthesize the predicted face image or the restored face image with the background of the face image to be predicted, so as to obtain a synthesized face image.
Further, the system may further include:
a feedback module 703, configured to feed back the predicted face image, or the restored face image, or the synthesized face image to the user.
It should be noted that, in practical applications, the restoring module 701 and the synthesizing module 702 may be selected as needed according to whether a background is included in a to-be-predicted image provided by a user, whether the image further includes some additional information unrelated to a human face, and the like. Moreover, the feedback module 703 may also feed back the finally obtained face image to the user in a variety of ways, for example, directly display the face image on a screen, or store the image in a corresponding file, and the like, which is not limited in this embodiment of the present invention.
In the face image prediction system provided by the embodiment of the present invention, a face prediction model is constructed in advance. The face prediction model may comprise a time-light forward flow model for predicting the future appearance of the face and/or a time-light backward flow model for predicting the past appearance of the face. Face attribute features related to face appearance or face change are extracted from the face image to be predicted, the corresponding face prediction model is determined using the extracted features, and the pixel points of the face image are input into the face prediction model to obtain the face image at the given time point. The predicted face image thus has high correlation with the face image to be predicted and high realism, giving the user a sense of immersion and greatly improving the user experience.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, they are described in a relatively simple manner, and reference may be made to some descriptions of method embodiments for relevant points. The above-described system embodiments are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The above embodiments of the present invention have been described in detail, and the present invention is described herein using specific embodiments, but the above embodiments are only used to help understanding the method and system of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (20)

1. A face image prediction method is characterized by comprising the following steps:
acquiring a face image to be predicted and a time point of predicting the face image;
extracting face attribute features from the face image, wherein the face attribute features are attribute features related to face appearance or face change;
determining a corresponding face prediction model from a plurality of face prediction models which are divided based on the face attribute characteristics and aim at different face image categories by using the face attribute characteristics;
and inputting the pixel points of the face image into the face prediction model to obtain a predicted face image.
2. The method according to claim 1, wherein the face prediction model comprises a time-light forward flow model and/or a time-light backward flow model, the time-light forward flow model being used for predicting the future appearance of the face, and the time-light backward flow model being used for predicting the past appearance of the face;
the method further comprises constructing a face prediction model in the following manner:
collecting a large number of face images, and constructing a time-light transformation database;
extracting face attribute features from the face image in the time-light transformation database;
normalizing the face images in the time-light transformation database to obtain normalized face images;
clustering the normalized face images according to the extracted face attribute characteristics to obtain clustered face images;
and constructing a face prediction model according to the clustered face images.
3. The method of claim 2, wherein the face attribute features comprise any one or more of: gender, expression, whether wearing glasses, region, occupation;
the extracting of the face attribute features from the face image in the time-light transformation database comprises:
carrying out face detection and face feature point positioning on the face image to obtain the position of a local feature point of the face in the image;
and extracting the face attribute characteristics of each face image according to the position of each local characteristic point and a pre-trained classification model.
4. The method of claim 2, wherein the normalizing of the face images in the time-light transformation database comprises:
normalizing the coordinates and the scale of the faces in the face images in the time-light transformation database.
5. The method of claim 2, wherein the clustering of the normalized face images according to the extracted face attribute features comprises:
(1) selecting a face attribute feature as a root node of a decision tree, determining each edge of the root node according to the value of the selected face attribute feature, and dividing a face image into a plurality of classes;
(2) calculating the minimum variance of the values of the residual face attribute features in each type of face image by using the extracted face attribute features;
(3) judging whether the minimum variance is larger than a set value or not; if yes, executing the step (4); otherwise, executing the step (5);
(4) taking the node to which the class corresponding to the minimum variance belongs as a leaf node, and not continuously dividing; then executing the step (6);
(5) taking the attribute characteristics corresponding to the minimum variance in each class as upper nodes of each class of face images, obtaining the edges of the upper nodes according to the values of the upper nodes, and continuously dividing each class of face images into multiple classes;
(6) judging whether there are attribute features that have not been added to the decision tree; if yes, executing step (2); otherwise, executing step (7);
(7) counting the number of face images under each leaf node; if the number of face images in a leaf node is smaller than a set number threshold, deleting the leaf node and its sibling nodes, adding the face images in the leaf node and its sibling nodes to the parent node of the leaf node, and completing the construction of the decision tree.
6. The method of claim 5, wherein constructing the face prediction model from the clustered face images comprises:
for each leaf node in the decision tree, constructing a face prediction model corresponding to the leaf node, specifically comprising:
sorting the face images in the time-light transformation database, with the face images of the same person sorted by age;
initializing the face prediction model by using the sequenced data to obtain an initialized face prediction model;
and performing incremental training on the initialized face prediction model to obtain a final face prediction model.
7. The method of claim 5, wherein the determining the corresponding face prediction model using the face attribute features comprises:
traversing the decision tree according to the face attribute characteristics to find corresponding leaf nodes;
and acquiring a face prediction model corresponding to the leaf node.
8. The method according to any one of claims 1-7, further comprising:
and restoring the predicted face image according to the face attribute characteristics to obtain a restored face image.
9. The method of claim 8, further comprising:
and synthesizing the predicted face image or the restored face image with the background of the face image to be predicted to obtain a synthesized face image.
10. The method of claim 9, further comprising:
and feeding back the predicted face image, or the restored face image, or the synthesized face image to the user.
11. A face image prediction system, comprising:
the receiving module is used for acquiring a face image to be predicted and a time point of the predicted face image;
the feature extraction module, used for extracting face attribute features from the face image, wherein the face attribute features are attribute features related to face appearance or face change;
the model selection module is used for determining a corresponding face prediction model from a plurality of face prediction models which are divided based on the face attribute characteristics and aim at different face image categories by using the face attribute characteristics;
and the prediction module is used for inputting the pixel points of the face image into the face prediction model to obtain a predicted face image.
12. The system according to claim 11, wherein the face prediction model comprises a time-light forward flow model and/or a time-light backward flow model, the time-light forward flow model being used for predicting the future appearance of the face, and the time-light backward flow model being used for predicting the past appearance of the face;
the system further comprises a predictive model construction module, the predictive model construction module comprising:
the image collection unit, used for collecting a large number of face images and constructing a time-light transformation database;
the characteristic extraction unit is used for extracting human face attribute characteristics from the human face image in the time-light conversion database;
the regularization unit, used for normalizing the face images in the time-light transformation database to obtain normalized face images;
the clustering unit is used for clustering the normalized face images according to the extracted face attribute characteristics to obtain clustered face images;
and the model construction unit is used for constructing a face prediction model according to the clustered face images.
13. The system of claim 12, wherein the face attribute features comprise any one or more of: gender, expression, whether wearing glasses, region, occupation;
the feature extraction unit includes:
the positioning subunit is used for carrying out face detection and face feature point positioning on the face image to obtain the position of a local feature point of the face in the image;
and the extraction subunit is used for extracting the face attribute characteristics of each face image according to the position of each local characteristic point and a pre-trained classification model.
14. The system of claim 12,
and the regularization unit is specifically used for normalizing the coordinates and the scale of the faces in the face images in the time-light transformation database.
15. The system according to claim 12, wherein the clustering unit is specifically configured to cluster the normalized face images in the following manner:
(1) selecting a face attribute feature as a root node of a decision tree, determining each edge of the root node according to the value of the selected face attribute feature, and dividing a face image into a plurality of classes;
(2) calculating the minimum variance of the values of the residual face attribute features in each type of face image by using the extracted face attribute features;
(3) judging whether the minimum variance is larger than a set value or not; if yes, executing the step (4); otherwise, executing the step (5);
(4) taking the node to which the class corresponding to the minimum variance belongs as a leaf node, and not continuously dividing; then executing the step (6);
(5) taking the attribute characteristics corresponding to the minimum variance in each class as upper nodes of each class of face images, obtaining the edges of the upper nodes according to the values of the upper nodes, and continuously dividing each class of face images into multiple classes;
(6) judging whether there are attribute features that have not been added to the decision tree; if yes, executing step (2); otherwise, executing step (7);
(7) counting the number of face images under each leaf node; if the number of face images in a leaf node is smaller than a set number threshold, deleting the leaf node and its sibling nodes, adding the face images in the leaf node and its sibling nodes to the parent node of the leaf node, and completing the construction of the decision tree.
16. The system of claim 15,
the model construction unit is specifically configured to construct, for each leaf node in the decision tree, a face prediction model corresponding to the leaf node; the model building unit specifically comprises:
the sorting subunit, used for sorting the face images in the time-light transformation database, with the face images of the same person sorted by age;
the initialization subunit is used for initializing the face prediction model by using the sorted data to obtain an initialized face prediction model;
and the increment training subunit is used for carrying out increment training on the initialized face prediction model to obtain a final face prediction model.
17. The system of claim 15, wherein the model selection module comprises:
a traversal unit, configured to traverse the decision tree according to the face attribute features to find the corresponding leaf node; and
a model acquisition unit, configured to acquire the face prediction model corresponding to that leaf node.
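The traversal of claim 17 amounts to following, at each internal node, the edge that matches the face's value for that node's attribute until a leaf is reached. A sketch, assuming a dict-based tree layout with a `model` field on each leaf (both are illustrative, not the patent's data structure):

```python
def select_model(tree, face_attrs):
    """Walk the decision tree by the face's attribute features and
    return the prediction model stored at the reached leaf node."""
    node = tree
    while node.get("children"):
        edge = face_attrs[node["attr"]]   # the face's value for this attribute
        node = node["children"][edge]     # follow the matching edge
    return node["model"]
```

Selection therefore costs one dictionary lookup per attribute on the path, independent of the number of stored models.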
18. The system according to any one of claims 11-17, further comprising:
a restoration module, configured to restore the predicted face image according to the face attribute features to obtain a restored face image.
19. The system of claim 18, further comprising:
a synthesis module, configured to synthesize the predicted face image or the restored face image with the background of the face image to be predicted to obtain a synthesized face image.
20. The system of claim 19, further comprising:
a feedback module, configured to feed back the predicted face image, the restored face image, or the synthesized face image to the user.
CN201610886084.2A 2016-10-10 2016-10-10 Face image prediction method and system Active CN106651978B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610886084.2A CN106651978B (en) 2016-10-10 2016-10-10 Face image prediction method and system


Publications (2)

Publication Number Publication Date
CN106651978A CN106651978A (en) 2017-05-10
CN106651978B true CN106651978B (en) 2021-04-02

Family

ID=58854367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610886084.2A Active CN106651978B (en) 2016-10-10 2016-10-10 Face image prediction method and system

Country Status (1)

Country Link
CN (1) CN106651978B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107194868A (en) * 2017-05-19 2017-09-22 成都通甲优博科技有限责任公司 A kind of Face image synthesis method and device
CN107563122B (en) * 2017-09-20 2020-05-19 长沙学院 Crime prediction method based on interleaving time sequence local connection cyclic neural network
CN107729886B (en) * 2017-11-24 2021-03-02 北京小米移动软件有限公司 Method and device for processing face image
CN108171167B (en) * 2017-12-28 2019-10-08 百度在线网络技术(北京)有限公司 Method and apparatus for exporting image
CN108520036B (en) * 2018-03-30 2020-08-14 Oppo广东移动通信有限公司 Image selection method and device, storage medium and electronic equipment
CN111340932A (en) * 2018-12-18 2020-06-26 富士通株式会社 Image processing method and information processing apparatus
CN110035271B (en) * 2019-03-21 2020-06-02 北京字节跳动网络技术有限公司 Fidelity image generation method and device and electronic equipment
CN111339991A (en) * 2020-03-12 2020-06-26 北京爱笔科技有限公司 Human body attribute identification method and device
CN113808010B (en) * 2021-09-24 2023-08-11 深圳万兴软件有限公司 Cartoon portrait generating method, device, equipment and medium without attribute deviation

Citations (1)

Publication number Priority date Publication date Assignee Title
US20120288166A1 (en) * 2011-05-13 2012-11-15 Microsoft Corporation Association and prediction in facial recognition

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN100386778C (en) * 2006-06-15 2008-05-07 西安交通大学 Human face image age changing method based on average face and senile proportional image
CN101556701A (en) * 2009-05-15 2009-10-14 陕西盛世辉煌智能科技有限公司 Human face image age changing method based on average face and aging scale map
CN105787974B (en) * 2014-12-24 2018-12-25 中国科学院苏州纳米技术与纳米仿生研究所 Bionic human face aging model method for building up


Non-Patent Citations (1)

Title
A Compositional and Dynamic Model for Face Aging; Jinli Suo et al.; IEEE Transactions on Pattern Analysis and Machine Intelligence; 31 March 2010; Vol. 32, No. 3; pp. 385-401 *


Similar Documents

Publication Publication Date Title
CN106651978B (en) Face image prediction method and system
CN105005777B (en) Audio and video recommendation method and system based on human face
Du et al. Representation learning of temporal dynamics for skeleton-based action recognition
CN107169454B (en) Face image age estimation method and device and terminal equipment thereof
CN108288051B (en) Pedestrian re-recognition model training method and device, electronic equipment and storage medium
CN109815826A (en) The generation method and device of face character model
CN102567716B (en) Face synthetic system and implementation method
CN109657554A (en) A kind of image-recognizing method based on micro- expression, device and relevant device
CN111325851A (en) Image processing method and device, electronic equipment and computer readable storage medium
JP6207210B2 (en) Information processing apparatus and method
CN110765863B (en) Target clustering method and system based on space-time constraint
CN111582342B (en) Image identification method, device, equipment and readable storage medium
WO2015070764A1 (en) Face positioning method and device
CN111401339B (en) Method and device for identifying age of person in face image and electronic equipment
CN111680550B (en) Emotion information identification method and device, storage medium and computer equipment
CN111028216A (en) Image scoring method and device, storage medium and electronic equipment
CN114360067A (en) Dynamic gesture recognition method based on deep learning
Núñez et al. Multiview 3D human pose estimation using improved least-squares and LSTM networks
CN111680544B (en) Face recognition method, device, system, equipment and medium
Kumar Jain et al. (Retracted) Modeling of human action recognition using hyperparameter tuned deep learning model
Zhang et al. A Gaussian mixture based hidden Markov model for motion recognition with 3D vision device
CN110598097B (en) Hair style recommendation system, method, equipment and storage medium based on CNN
CN104881647B (en) Information processing method, information processing system and information processing unit
CN111597894A (en) Face database updating method based on face detection technology
CN107886568B (en) Method and system for reconstructing facial expression by using 3D Avatar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant