CN108985215B - Picture processing method, picture processing device and terminal equipment - Google Patents


Info

Publication number
CN108985215B
CN108985215B (granted publication of application CN201810746936.7A)
Authority
CN
China
Prior art keywords
face
picture
model
trained
processing model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810746936.7A
Other languages
Chinese (zh)
Other versions
CN108985215A (en)
Inventor
张弓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oppo Chongqing Intelligent Technology Co Ltd
Original Assignee
Oppo Chongqing Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo Chongqing Intelligent Technology Co Ltd filed Critical Oppo Chongqing Intelligent Technology Co Ltd
Priority to CN201810746936.7A
Publication of CN108985215A
Application granted
Publication of CN108985215B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 — Feature extraction; Face representation
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/21 — Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a picture processing method, a picture processing device, and terminal equipment. The method comprises the following steps: acquiring a face picture and age information; inputting the face picture and the age information into an initial face processing model to obtain a generated picture; inputting the generated picture into a trained first discrimination model and a trained second discrimination model, and continuously adjusting the parameters of the current face processing model until the first element and the second element of the generated picture output by the current face processing model meet a first condition and a second condition, respectively; and taking the current face processing model as the trained face processing model and processing the face picture to be processed through the trained face processing model. The method and the device can obtain a face processing model that meets the requirements, and can therefore satisfy the different face-processing requirements of different users or developers.

Description

Picture processing method, picture processing device and terminal equipment
Technical Field
The present application belongs to the field of computer vision technology, and in particular, to a picture processing method, a picture processing apparatus, a terminal device, and a computer-readable storage medium.
Background
Currently, many applications that simulate face aging and rejuvenation exist on the market, such as Oldify, Old Face, and Time Camera, which can predict, from an input face picture, how the face will look at a given age.
To realize aging and rejuvenation of a human face, the method commonly used at present is to locate feature points on the face in a face picture, find the positions of the facial features and facial muscles, and then reconstruct their shapes, thereby aging or rejuvenating the face. This conventional method is relatively fixed and lacks flexibility; in many cases, however, different users or developers have different processing requirements for the face, which the conventional method cannot satisfy.
Disclosure of Invention
In view of this, the present application provides a picture processing method, a picture processing apparatus, a terminal device and a computer readable storage medium, which can meet different processing requirements of different users or developers for faces.
A first aspect of the present application provides a picture processing method, including:
acquiring a face picture, and acquiring age information to be converted of a face in the face picture;
inputting the face picture and the age information into an initial face processing model to obtain a generated picture, wherein the face processing model is used for converting a face in the face picture into a face corresponding to the age information;
inputting the generated picture into a trained first discrimination model, judging whether a first element of the generated picture meets a first condition, inputting the generated picture into a trained second discrimination model, judging whether a second element of the generated picture meets a second condition, and continuously adjusting parameters of a current face processing model until the first element and the second element of the generated picture output by the current face processing model respectively meet the first condition and the second condition;
and taking the current face processing model as a trained face processing model, and processing the face picture to be processed through the trained face processing model.
A second aspect of the present application provides a picture processing apparatus, including:
the training data acquisition module is used for acquiring a face picture and acquiring age information to be converted of a face in the face picture;
a generated image obtaining module, configured to input the face image and the age information into an initial face processing model to obtain a generated image, where the face processing model is configured to convert a face in the face image into a face corresponding to the age information;
a model training module, configured to input the generated picture into a trained first discrimination model, determine whether a first element of the generated picture meets a first condition, input the generated picture into a trained second discrimination model, determine whether a second element of the generated picture meets a second condition, and continuously adjust parameters of a current face processing model until the first element and the second element of the generated picture output by the current face processing model meet the first condition and the second condition, respectively;
and the image processing module is used for taking the current face processing model as a trained face processing model and processing the face image to be processed through the trained face processing model.
A third aspect of the present application provides a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to the first aspect when executing the computer program.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect as described above.
A fifth aspect of the present application provides a computer program product comprising a computer program which, when executed by one or more processors, performs the steps of the method of the first aspect as described above.
In view of the above, the present application provides a picture processing method which, first, obtains a face picture and the age information to be converted of the face in the face picture; secondly, inputs the face picture and the age information into an initial face processing model to obtain a generated picture, wherein the face processing model is used for converting the face in the face picture into a face corresponding to the age information; then, inputs the generated picture into a trained first discrimination model to judge whether a first element of the generated picture meets a first condition, inputs the generated picture into a trained second discrimination model to judge whether a second element of the generated picture meets a second condition, and continuously adjusts the parameters of the current face processing model until the first element and the second element of the generated picture output by the current face processing model meet the first condition and the second condition, respectively; and finally, takes the current face processing model as the trained face processing model and processes the face picture to be processed through it. Unlike the traditional method, this method trains the face processing model for face aging and rejuvenation in advance, continuously adjusting the parameters of the current face processing model until it meets the requirements; it can therefore obtain a face processing model that meets the requirements and satisfy the different processing requirements of different users or developers for faces.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic flow chart illustrating an implementation of a picture processing method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a training process of a face processing model according to an embodiment of the present application;
fig. 3 is a schematic flow chart illustrating an implementation of another picture processing method according to the second embodiment of the present application;
FIG. 4 is a schematic diagram of a training process of a first discriminant model according to a second embodiment of the present disclosure;
fig. 5 is a schematic diagram of a training process of a second discrimination model provided in the second embodiment of the present application;
fig. 6 is a schematic diagram of a training process of a face processing model according to a second embodiment of the present application;
fig. 7 is a schematic structural diagram of a picture processing apparatus according to a third embodiment of the present application;
fig. 8 is a schematic structural diagram of a terminal device according to a fourth embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
The image processing method provided by the embodiment of the application can be applied to terminal equipment, and the terminal equipment includes, but is not limited to: smart phones, tablet computers, learning machines, intelligent wearable devices, and the like.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [described condition or event]" or "in response to detecting [described condition or event]".
In particular implementations, the terminal devices described in embodiments of the present application include, but are not limited to, other portable devices such as mobile phones, laptop computers, or tablet computers having touch sensitive surfaces (e.g., touch screen displays and/or touch pads). It should also be understood that in some embodiments, the devices described above are not portable communication devices, but rather are desktop computers having touch-sensitive surfaces (e.g., touch screen displays and/or touch pads).
In the discussion that follows, a terminal device that includes a display and a touch-sensitive surface is described. However, it should be understood that the terminal device may include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
The terminal device supports various applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disc burning application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a web browsing application, a digital music player application, and/or a digital video player application.
Various applications that may be executed on the terminal device may use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the terminal can be adjusted and/or changed between applications and/or within respective applications. In this way, a common physical architecture (e.g., touch-sensitive surface) of the terminal can support various applications with user interfaces that are intuitive and transparent to the user.
In addition, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not intended to indicate or imply relative importance.
In order to explain the technical solution of the present application, the following description will be given by way of specific examples.
Example one
Referring to fig. 1, a picture processing method provided in a first embodiment of the present application is described below, where the picture processing method in the first embodiment of the present application includes:
in step S101, a face picture is obtained, and age information to be converted of a face in the face picture is obtained;
in the first embodiment of the present application, the face picture may be obtained from a sample database, where the sample database may include a plurality of different sample face pictures, and any one of the same face pictures in the sample database may be selected as a face picture of an initial face processing model for training, and because the face processing model in the present application is used to implement aging and rejuvenation of a face, in order to train the face processing model in the present application, it is also necessary to obtain age information to be converted of the face, such as age information of 5 years, age information of 30 years, age information of 50 years, age information of 80 years, and the like.
In addition, a face processing model can generally only process pictures of a fixed size; therefore, in the first embodiment of the present application, after the face picture is obtained, it may also be preprocessed. For example, face alignment is performed on the face picture first; the picture is then cropped and scaled according to the aligned face position to obtain a picture of uniform size. This preprocessing removes a large amount of interference information in the face picture and meets the input requirements of the face processing model.
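The crop-and-scale step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the face alignment step is assumed to have already produced a bounding box, the helper name `preprocess_face` and the output size of 128 are illustrative assumptions, and nearest-neighbour resampling is used only to keep the sketch dependency-free.

```python
import numpy as np

def preprocess_face(img, box, out_size=128):
    """Crop the aligned face region and rescale it to a fixed model input size.

    `img` is an H x W x 3 array; `box` = (top, left, height, width) is the
    face bounding box assumed to come from a prior alignment step (not shown).
    Nearest-neighbour resampling is used here for simplicity.
    """
    t, l, h, w = box
    face = img[t:t + h, l:l + w]
    # Map each output pixel back to a source pixel (nearest neighbour).
    rows = np.arange(out_size) * h // out_size
    cols = np.arange(out_size) * w // out_size
    return face[rows][:, cols]
```

Whatever the bounding box, every picture comes out at the same fixed size, which is the property the face processing model's input layer requires.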
In step S102, inputting the face image and the age information into an initial face processing model to obtain a generated image, where the face processing model is used to convert a face in the face image into a face corresponding to the age information;
in the first embodiment of the present application, the face picture and the age information obtained in step S101 are input into the initial face processing model, so that the model converts the face in the face picture into a face corresponding to the age information and outputs a generated picture. As shown in fig. 2, suppose the sample face picture selected from the sample database is face picture 201 and the age information indicates that the face in face picture 201 is to be converted into a face at age 80; the initial face processing model processes face picture 201 according to the age information and outputs generated picture 202. Because the initial face processing model has not yet been trained, the face in its generated picture may look stiff and unnatural, or may fail to meet certain requirements of a user or developer. In that case, the initial face processing model needs to be trained using the subsequent steps of the technical solution provided in the first embodiment of the present application.
In step S103, inputting the generated picture into a trained first decision model, determining whether a first element of the generated picture meets a first condition, inputting the generated picture into a trained second decision model, determining whether a second element of the generated picture meets a second condition, and continuously adjusting parameters of a current face processing model until the first element and the second element of the generated picture output by the current face processing model meet the first condition and the second condition, respectively;
in the embodiment of the present application, after a generated picture output by the initial face processing model is obtained, the generated picture is input into the trained first discrimination model and the trained second discrimination model.
The execution subject that determines whether the first element of the generated picture meets the first condition may be the trained first discrimination model, and the execution subject that determines whether the second element meets the second condition may be the trained second discrimination model. That is, the trained first discrimination model judges whether the first element of the generated picture meets a first condition; for example, it may judge whether the leftmost face in the generated picture wears a hat, whether the rightmost face wears glasses, whether the face in the generated picture is made up, or whether the generated picture belongs to the sample database. The first element and the first condition are not limited herein. The trained second discrimination model judges whether the second element of the generated picture meets a second condition; for example, it may judge whether the hat worn by the face in the generated picture is red, whether the glasses worn by the face have round frames, or whether the face in the generated picture conforms to the face aging law. The second element and the second condition are likewise not limited herein.
And continuously adjusting the parameters of the current face processing model until the trained first discrimination model judges that the generated picture of the current face processing model accords with the first condition, and the trained second discrimination model judges that the generated picture of the current face processing model accords with the second condition.
To describe the training process of step S103 more intuitively, it is detailed below with reference to fig. 2. As shown in fig. 2, suppose the trained first discrimination model judges whether the face in the generated picture of the current face processing model wears glasses, and the trained second discrimination model judges whether those glasses have round frames. The generated picture 202 output by the initial face processing model is input into the trained first discrimination model, which judges whether the face in generated picture 202 wears glasses. If not, the parameters of the initial face processing model are adjusted, and the generated picture output by the adjusted face processing model is input into the trained first discrimination model again; the parameters of the current face processing model are adjusted repeatedly in this way until the trained first discrimination model judges that the face in the generated picture output by the current face processing model wears glasses. Then the generated picture of the face processing model trained by the trained first discrimination model is input into the trained second discrimination model, which judges whether the glasses worn by the face have round frames, and the parameters of the current face processing model are continuously adjusted until the second discrimination model judges that they do. The above training process is only an example and does not limit the training method of the face processing model. That is, in the embodiment of the present application, the trained first discrimination model may be used to train the current face processing model first, after which the trained second discrimination model may be used to train the resulting model further.
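The sequential adjust-until-accepted procedure just described can be written as a generic skeleton. This is an illustrative sketch only: `generate`, `adjust`, and the discriminator callables are hypothetical stand-ins for the real model, its parameter update, and the two trained discrimination models, and the patent does not prescribe this particular control flow.

```python
def train_until_accepted(generate, adjust, discriminators, max_steps=1000):
    """Skeleton of the training loop in step S103.

    `generate()` produces a picture from the current face processing model,
    each entry of `discriminators` returns True when its condition is met
    (e.g. "wears glasses", then "round frames"), and `adjust()` updates the
    model parameters. The discriminators are applied one after another, as
    in the walkthrough of fig. 2.
    """
    for disc in discriminators:          # first discrimination model first
        for _ in range(max_steps):
            if disc(generate()):         # condition met: move to next judge
                break
            adjust()                     # otherwise keep tweaking parameters
        else:
            raise RuntimeError("discriminator never accepted the output")
    return generate()
```

A toy run with a counter standing in for the model parameters shows the two conditions being satisfied in order:

```python
state = {"p": 0}
result = train_until_accepted(
    generate=lambda: state["p"],
    adjust=lambda: state.update(p=state["p"] + 1),
    discriminators=[lambda x: x >= 3, lambda x: x % 2 == 0],
)
# result is 4: raised to 3 for the first condition, then to 4 for the second
```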
In addition, the faces in the generated pictures output by the trained face processing model can be more natural and real by using the trained first discrimination model and the trained second discrimination model, which may be specifically referred to in the description of the second embodiment of the present application.
In step S104, the current face processing model is used as a trained face processing model, and a face picture to be processed is processed through the trained face processing model;
in the embodiment of the present application, the current face processing model trained by using the trained first and second discrimination models is used as the trained face processing model, and the trained face processing model can be used to process the face picture to be processed.
The processing of the face picture to be processed by the trained face processing model may be:
acquiring a face picture to be processed and age information to be converted of a face in the face picture to be processed;
and inputting the face picture to be processed and age information to be converted of the face in the face picture to be processed into the trained face processing model to obtain the face picture output by the trained face processing model.
In addition, in this embodiment of the application, after obtaining the face picture output by the trained face processing model, the method may further include:
judging whether the face picture output by the trained face processing model meets the user requirements or not;
if not, the trained face processing model is used as the initial face processing model and the process returns to step S101, so that the currently trained face processing model is retrained.
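The accept-or-retrain fallback above can be sketched as a small loop. Everything here is an illustrative assumption: `train`, `satisfied`, and the bounded number of rounds stand in for the full training pipeline of steps S101 to S103, the user's acceptance check, and whatever stopping policy an implementation would choose.

```python
def process_with_feedback(train, model, picture, age, satisfied, max_rounds=3):
    """Sketch of steps S104 back to S101: if the output picture does not meet
    the user's requirements, the current model becomes the new initial model
    and training is repeated, up to `max_rounds` attempts."""
    for _ in range(max_rounds):
        out = model(picture, age)
        if satisfied(out):
            return out
        model = train(model)  # retrain, starting from the current model
    return out                # give back the last attempt if rounds run out
```

In a toy run where each retraining round nudges the output by one, the loop stops as soon as the acceptance check passes:

```python
out = process_with_feedback(
    train=lambda m: (lambda p, a: m(p, a) + 1),
    model=lambda p, a: p + a,
    picture=10, age=0,
    satisfied=lambda o: o >= 12,
)
# out is 12 after two retraining rounds
```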
The training process of the face processing model in steps S101 to S103 of the first embodiment may be performed before the terminal device leaves the factory: the trained face processing model is obtained in advance and stored in the memory of the terminal device, so that a user can later process a face picture to be processed directly with the stored model. Alternatively, when the terminal device is about to process a face picture to be processed, it may first train the initial face processing model to obtain the trained face processing model and then use it for processing. This is not limited herein.
In the first embodiment of the present application, a face processing model for face aging and face rejuvenation needs to be trained in advance, and parameters of the current face processing model are continuously adjusted until the trained first discrimination model and the trained second discrimination model determine that the current face processing model meets requirements. Therefore, the method and the device can obtain the human face processing model meeting the requirements, and can meet different processing requirements of different users or developers on the human face.
Example two
Referring to fig. 3, another picture processing method provided in the second embodiment of the present application is described below, where the picture processing method in the second embodiment of the present application includes:
in step S201, acquiring any face picture from a sample database, and acquiring age information of a face to be converted in the face picture;
in step S202, the face image and the age information are input into an initial face processing model to obtain a generated image, where the face processing model is used to convert a face in the face image into a face corresponding to the age information;
in the second embodiment of the present application, the steps S201 to S202 are performed in the same manner as the steps S101 to S102 in the first embodiment, and specific reference may be made to the description of the first embodiment, which is not repeated herein.
In step S203, inputting the generated picture into the trained first discrimination model, and determining whether the generated picture of the current face processing model belongs to the sample database;
in the second embodiment of the present application, the judgment of whether the first element of the generated picture meets the first condition (as in the first embodiment) is specified as judging whether the generated picture belongs to the sample database. The execution subject of this judgment may be the trained first discrimination model; that is, the trained first discrimination model judges whether the generated picture output by the current face processing model belongs to the sample database. For example, if the sample database contains photos of different faces, the trained first discrimination model continuously trains the current face processing model so that its generated pictures look as much as possible like the face photos in the sample database; the generated pictures output by the face processing model trained with the trained first discrimination model are therefore more natural.
Before step S203, an initial first discriminant model may be trained using the initial face processing model and the sample database to obtain the trained first discriminant model.
As shown in fig. 4, the training process of the trained first discriminant model may be as follows:
First, one or more sample face pictures, such as picture A and picture B, are selected from the sample database and processed with the initial face processing model to produce corresponding generated pictures, such as picture A1 and picture B1. Second, the labels of picture A1 and picture B1 are set to "not belonging to the sample database"; one or more sample pictures, such as picture C, picture D and picture E, are then selected from the sample database and labeled "belonging to the sample database". Finally, picture A1, picture B1, picture C, picture D, picture E and their labels are used to train the initial first discrimination model, so that the trained first discrimination model can correctly classify input pictures, i.e., identify whether an input picture belongs to the sample database.
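The labeling scheme of fig. 4 can be sketched as a small batch builder. This is an assumption-laden illustration: the function name, the 0/1 label encoding, and treating pictures as plain values are all placeholders for a real training-data pipeline.

```python
def build_discriminator_batch(generator, sample_db, n_fake=2, n_real=3):
    """Assemble (picture, label) pairs per the scheme of fig. 4.

    Pictures produced by the generator (face processing model) get label 0,
    i.e. "not belonging to the sample database"; pictures drawn directly
    from the sample database get label 1, "belonging to the sample database".
    The resulting pairs would be used to train the initial first
    discrimination model.
    """
    fakes = [(generator(p), 0) for p in sample_db[:n_fake]]      # A1, B1
    reals = [(p, 1) for p in sample_db[n_fake:n_fake + n_real]]  # C, D, E
    return fakes + reals
```

With a toy generator that tags its input, the batch mirrors the A1/B1 versus C/D/E split described above:

```python
batch = build_discriminator_batch(lambda p: p + "_gen",
                                  ["A", "B", "C", "D", "E"])
# [("A_gen", 0), ("B_gen", 0), ("C", 1), ("D", 1), ("E", 1)]
```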
In step S204, continuously adjusting parameters of the current face processing model until a generated picture of the current face processing model belongs to the sample database;
in this embodiment, the generated picture output by the initial face processing model for the face picture and the age information is first input into the trained first discrimination model, and the parameters of the current face processing model are then continuously adjusted until the trained first discrimination model determines that the generated picture output by the current face processing model belongs to the sample database. In this way, the generated picture output by the face processing model trained against the trained first discrimination model is similar to the sample face pictures in the sample database.
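As a rough numeric analogy for step S204 (the patent does not spell out the adjustment mechanism; in practice it would be gradient-based back-propagation), one can picture a generator parameter being nudged until the frozen discriminator's confidence crosses an acceptance threshold. Every name and value below is hypothetical.

```python
# Toy sketch of step S204: keep adjusting the current face processing
# model's parameter until the (frozen) first discrimination model judges
# its output as "belonging to the sample database" (confidence >= 0.5).
# A fixed step size stands in for a real gradient update.

def adjust_until_accepted(param, discriminator, step=0.1, max_iters=1000):
    for _ in range(max_iters):
        if discriminator(param) >= 0.5:   # judged as belonging to the database
            break
        param += step                     # continue adjusting the parameters
    return param

# Hypothetical discriminator whose confidence grows with parameter quality:
disc = lambda p: min(1.0, max(0.0, p / 2.0))
final = adjust_until_accepted(0.0, disc)
```

The `max_iters` guard mirrors the practical need to bound training even when the stopping condition is "until the discriminator accepts".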
In step S205, inputting the generated picture of the current face processing model trained by the trained first discrimination model into the trained second discrimination model, and judging whether the face in the generated picture of the current face processing model conforms to the face aging rule;
in the second embodiment of the present application, the determination in the first embodiment of whether the second element of the generated picture meets the second condition is instantiated as "judging whether the face in the generated picture conforms to the face aging rule". In addition, in the embodiment of the present application, the execution subject of this judgment may be the trained second discrimination model; that is, the trained second discrimination model is used to judge whether the face in the generated picture output by the current face processing model conforms to the face aging rule. The following describes the training process of the trained second discrimination model for the case where "face aging rule" refers specifically to "the face aging rule of the same person".
Before step S205, an initial second discrimination model may be trained using the initial face processing model and the sample database to obtain the trained second discrimination model.
As shown in fig. 5, the training process of the trained second discrimination model may be as follows:
firstly, one or more sample face pictures, such as picture F and picture G, are selected from the sample database, where the sample database contains sample face pictures of a plurality of different sample individuals in different age groups. For example, the sample database may take face pictures of Xiaohong, Xiaoming and Xiaojun at ages 0-5, 5-10, 10-15, 15-20, 20-30, 30-40, 40-50, 50-60, 60-70 and 70-80 as sample face pictures; picture F may be a face picture of Xiaohong at age 0-5, and picture G may be a face picture of Xiaoming at age 15-20. Secondly, the initial face processing model is used to process picture F and picture G into corresponding generated pictures, assumed to be picture F1 (with corresponding age information X, for example 55 years, falling in the 50-60 age group) and picture G1 (with corresponding age information Y, for example 80 years). Thirdly, a sample face picture F2 of Xiaohong in an age group adjacent to that of age information X is acquired from the sample database (that is, a face picture of Xiaohong at age 40-50 or at age 60-70; which adjacent age group is chosen is not limited in the present application), and a sample face picture G2 of the sample individual corresponding to picture G in an age group adjacent to that of age information Y is likewise acquired from the sample database (that is, a face picture of Xiaoming at age 60-70). Then, picture F1 and picture F2 are regarded as picture group 1, and the label of picture group 1 is set to "not conforming to the face aging rule"; picture G1 and picture G2 are regarded as picture group 2, and the label of picture group 2 is also set to "not conforming to the face aging rule". Next, picture groups of one or more sample individuals in adjacent age groups are selected from the sample database and labelled "conforming to the face aging rule"; for example, face pictures H and I of one sample individual at ages 40-50 and 50-60 may be selected as picture group 3, and the label of picture group 3 is set to "conforming to the face aging rule". Finally, the initial second discrimination model is trained with each picture group and its corresponding label, so that the trained second discrimination model can identify whether the faces in an input picture group conform to the face aging rule of one and the same person.
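The picture-group construction above is just pairing and labelling. The sketch below uses the letter names from the example, with labels as plain strings; all names are illustrative, not part of the patent.

```python
# Illustrative sketch: build the second discrimination model's training
# groups. A (generated picture, adjacent-age sample picture) pair is
# labelled "not conforming" to the face aging rule; a real same-person
# pair from adjacent age groups is labelled "conforming".

def build_second_discrimination_set(generated_pairs, real_pairs):
    dataset = []
    for gen_pic, adjacent_sample in generated_pairs:   # e.g. (F1, F2), (G1, G2)
        dataset.append(((gen_pic, adjacent_sample), "not conforming"))
    for younger, older in real_pairs:                  # e.g. (H, I)
        dataset.append(((younger, older), "conforming"))
    return dataset

groups = build_second_discrimination_set([("F1", "F2"), ("G1", "G2")],
                                         [("H", "I")])
# groups[0] == (("F1", "F2"), "not conforming")
```

Note the asymmetry with the first discriminator: here the classifier judges pairs of pictures, not single pictures.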
In step S206, continuously adjusting parameters of the current face processing model until the face in the generated picture of the current face processing model conforms to the face aging rule;
in this embodiment, the trained second discrimination model is used to judge whether an input conforms to the "face aging rule of the same person", its training process being as described under step S205. To train the current face processing model using the trained second discrimination model, a training individual and a training age group may first be obtained, where the training individual is the sample individual corresponding to the face picture in the sample database, and the training age group is the age group corresponding to the age information in the sample database; secondly, a reference picture is acquired, the reference picture being a sample face picture of the training individual in the sample database in an age group adjacent to the training age group; thirdly, the generated picture output by the current face processing model and the reference picture are input into the trained second discrimination model together, and the parameters of the current face processing model are continuously adjusted until the trained second discrimination model judges that the face in the generated picture output by the current face processing model conforms to the face aging rule.
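Selecting the reference picture can be sketched as a lookup in the sample database. The age-group list and the database layout below are assumptions for illustration; the individual names echo the earlier example.

```python
# Illustrative sketch: pick the reference picture for step S206, i.e. a
# sample face picture of the training individual in an age group adjacent
# to the training age group. Database layout is an assumed simplification:
# it maps (individual, age_group) -> sample face picture.

AGE_GROUPS = ["0-5", "5-10", "10-15", "15-20", "20-30",
              "30-40", "40-50", "50-60", "60-70", "70-80"]

def reference_picture(database, individual, training_age_group):
    i = AGE_GROUPS.index(training_age_group)
    for j in (i - 1, i + 1):   # either adjacent age group may be used
        if 0 <= j < len(AGE_GROUPS) and (individual, AGE_GROUPS[j]) in database:
            return database[(individual, AGE_GROUPS[j])]
    return None                # no adjacent-group sample available

db = {("Xiaohong", "40-50"): "F2", ("Xiaoming", "60-70"): "G2"}
ref = reference_picture(db, "Xiaohong", "50-60")   # picks the 40-50 picture
```

For the top age group (70-80) only the younger neighbour exists, which the bounds check handles naturally.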
In step S207, the current face processing model is used as a trained face processing model, and a face picture to be processed is processed through the trained face processing model;
in the second embodiment of the present application, the execution manner of the step S207 is the same as that of the step S104 in the first embodiment, and reference may be specifically made to the description of the first embodiment, and details are not repeated here.
In the second embodiment of the present application, the above steps S201 to S206 provide a training method for a face processing model, that is:
firstly, training an initial first discrimination model and an initial second discrimination model by using an initial face processing model and a sample database, and generating the trained first discrimination model and the trained second discrimination model;
and then, training the initial face processing model by using the trained first discrimination model and the trained second discrimination model, so as to generate the trained face processing model.
The generated picture output by a trained face processing model produced by this training method is similar to the sample face pictures in the sample database, and the face in the generated picture conforms to the face aging rule, so the aging and rejuvenation that the trained face processing model applies to faces is more realistic and vivid. In addition, in the embodiment of the present application, the training process may be iterated in a loop to generate a face processing model with better performance, as shown in fig. 6:
firstly, the same training method as in steps S201-S206 is applied: an initial first discrimination model and an initial second discrimination model are trained using an initial face processing model and a sample database to generate the trained first discrimination model and the trained second discrimination model; then the initial face processing model is trained using the trained first discrimination model and the trained second discrimination model, thereby generating a trained face processing model;
secondly, the initial face processing model, the initial first discrimination model and the initial second discrimination model are updated: the trained face processing model obtained in the previous step is taken as the initial face processing model, the trained first discrimination model obtained in the previous step is taken as the initial first discrimination model, and the trained second discrimination model obtained in the previous step is taken as the initial second discrimination model; the previous step is then executed again, generating the trained first discrimination model, the trained second discrimination model and the trained face processing model anew;
and finally, the initial face processing model, the initial first discrimination model and the initial second discrimination model are continuously updated until the number of loop iterations meets a preset requirement; the face processing model obtained in the last iteration is taken as the finally trained face processing model, and the face picture to be processed is processed using the finally trained face processing model.
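The loop iteration of fig. 6 can be sketched as alternating retraining for a fixed number of rounds. The `train_*` callables below are placeholders for the procedures described above, and the toy usage merely counts passes; nothing here is from the patent itself.

```python
# Illustrative sketch of the loop iteration in fig. 6: each round retrains
# the two discrimination models against the current generator, then
# retrains the generator against the updated discrimination models.

def iterate_training(generator, disc1, disc2,
                     train_discriminations, train_generator, rounds=3):
    for _ in range(rounds):
        disc1, disc2 = train_discriminations(generator, disc1, disc2)
        generator = train_generator(generator, disc1, disc2)
    return generator   # the finally trained face processing model

# Toy placeholders that simply count generator training passes:
final_model = iterate_training(0, None, None,
                               lambda g, d1, d2: (d1, d2),
                               lambda g, d1, d2: g + 1,
                               rounds=3)
```

This alternating structure is the standard way adversarial models are iterated: each side is frozen while the other is updated.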
In the second embodiment of the present application, a face processing model for face aging and rejuvenation needs to be trained in advance: the parameters of the current face processing model are continuously adjusted until the trained first discrimination model determines that the generated picture output by the current face processing model belongs to the sample database, and the trained second discrimination model determines that the face in that generated picture conforms to the face aging rule. A face processing model whose face aging and rejuvenation is more natural and vivid can thereby be obtained, and user experience can be improved.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
EXAMPLE III
In the third embodiment of the present application, a picture processing apparatus is provided. For convenience of description, only the parts related to the present application are shown. As shown in fig. 7, the picture processing apparatus 300 includes:
a training data obtaining module 301, configured to obtain a face picture, and obtain age information to be converted of a face in the face picture;
a generated picture obtaining module 302, configured to input the face picture and the age information into an initial face processing model to obtain a generated picture, where the face processing model is configured to convert the face in the face picture into a face corresponding to the age information;
a model training module 303, configured to input the generated picture into a trained first discrimination model, judge whether a first element of the generated picture meets a first condition, input the generated picture into a trained second discrimination model, judge whether a second element of the generated picture meets a second condition, and continuously adjust parameters of a current face processing model until the first element and the second element of the generated picture output by the current face processing model meet the first condition and the second condition, respectively;
the image processing module 304 is configured to use the current face processing model as a trained face processing model, and process a face image to be processed through the trained face processing model.
Optionally, the face picture is any sample face picture in a sample database, and the model training module 303 includes:
a first judgment input unit, configured to input the generated picture into the trained first discrimination model and judge whether the generated picture of the current face processing model belongs to the sample database;
the first model training unit is used for continuously adjusting the parameters of the current face processing model until the generated picture of the current face processing model belongs to the sample database;
a second judgment input unit, configured to input the generated picture of the current face processing model trained by the trained first discrimination model into the trained second discrimination model, and judge whether the face in the generated picture of the current face processing model conforms to the face aging rule;
and the second model training unit is used for continuously adjusting the parameters of the current face processing model until the face in the generated picture of the current face processing model conforms to the face aging rule.
Optionally, the sample database includes sample face pictures of a plurality of different sample individuals in different age groups, and the second judgment input unit includes:
a training data obtaining subunit, configured to obtain a training individual and a training age group, where the training individual is a sample individual corresponding to the face image in the sample database, and the training age group is an age group corresponding to the age information in the sample database;
a reference picture obtaining subunit, configured to obtain a reference picture, where the reference picture is a sample face picture of the training individual in the sample database in an age group adjacent to the training age group;
and the second judgment input subunit is used for inputting the generated picture of the current face processing model after the training of the trained first judgment model and the reference picture into the trained second judgment model and judging whether the face in the generated picture conforms to the face aging rule or not.
Optionally, the model training module 303 further includes:
a first discrimination training unit, configured to train an initial first discrimination model using the initial face processing model and the sample database to obtain the trained first discrimination model;
and a second discrimination training unit, configured to train an initial second discrimination model using the initial face processing model and the sample database to obtain the trained second discrimination model.
Optionally, the image processing module 304 includes:
the system comprises a to-be-processed data acquisition unit, a conversion unit and a conversion unit, wherein the to-be-processed data acquisition unit is used for acquiring a to-be-processed face picture and age information of a face to be converted in the to-be-processed face picture;
and a picture processing unit, configured to input the face picture to be processed and the age information to be converted of the face in the face picture to be processed into the trained face processing model, to obtain the face picture output by the trained face processing model.
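The inference path of these two units is a single call once the model is trained. The sketch below is illustrative; the model interface (picture plus target age in, picture out) is an assumption, not an API defined by the patent.

```python
# Illustrative sketch of the inference path: feed the picture to be
# processed and the target age information to the trained face
# processing model and return its output picture.

def process_picture(trained_model, face_picture, target_age):
    return trained_model(face_picture, target_age)

# Hypothetical trained model that tags the picture with the target age:
model = lambda pic, age: f"{pic}@{age}"
output = process_picture(model, "portrait.jpg", "60-70")
# output == "portrait.jpg@60-70"
```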
Optionally, the image processing module 304 further includes:
a satisfaction judging unit, configured to judge whether the face picture output by the trained face processing model meets the user's requirements;
and an updating unit, configured to take the trained face processing model as the initial face processing model if the satisfaction judging unit judges that the face picture output by the trained face processing model does not meet the user's requirements.
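The interaction of these two units forms a feedback loop, which can be sketched as follows. `retrain` and `satisfied` are placeholders for the training procedure and the user's judgment, and the `max_rounds` bound is an added safeguard, not something the patent specifies.

```python
# Illustrative sketch of the satisfaction feedback loop: while the output
# does not meet the user's requirements, the trained model is reused as
# the initial model and training is run again.

def refine_until_satisfied(model, retrain, satisfied, max_rounds=5):
    for _ in range(max_rounds):
        if satisfied(model):
            return model
        model = retrain(model)   # trained model becomes the new initial model
    return model

# Toy usage: each retrain "improves" the model by 1; user is satisfied at 2.
result = refine_until_satisfied(0, lambda m: m + 1, lambda m: m >= 2)
```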
It should be noted that the information interaction between the above devices/units, their execution processes, and their specific functions and technical effects are based on the same concept as the method embodiments of the present application; reference may be made to the method embodiment section for details, which are not repeated here.
Example four
Fig. 8 is a schematic diagram of a terminal device according to a fourth embodiment of the present application. As shown in fig. 8, the terminal device 4 of this embodiment includes: a processor 40, a memory 41 and a computer program 42 stored in said memory 41 and executable on said processor 40. The processor 40 implements the steps of the various method embodiments described above, such as steps S101 to S104 shown in fig. 1, when executing the computer program 42. Alternatively, the processor 40, when executing the computer program 42, implements the functions of the modules/units in the device embodiments, such as the functions of the modules 301 to 304 shown in fig. 7.
Illustratively, the computer program 42 may be divided into one or more modules/units, which are stored in the memory 41 and executed by the processor 40 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 42 in the terminal device 4. For example, the computer program 42 may be divided into a training data acquisition module, a generated image acquisition module, a model training module, and an image processing module, and each module has the following functions:
acquiring a face picture, and acquiring age information to be converted of a face in the face picture;
inputting the face picture and the age information into an initial face processing model to obtain a generated picture, wherein the face processing model is used for converting a face in the face picture into a face corresponding to the age information;
inputting the generated picture into a trained first discrimination model, judging whether a first element of the generated picture meets a first condition, inputting the generated picture into a trained second discrimination model, judging whether a second element of the generated picture meets a second condition, and continuously adjusting parameters of a current face processing model until the first element and the second element of the generated picture output by the current face processing model respectively meet the first condition and the second condition;
and taking the current face processing model as a trained face processing model, and processing the face picture to be processed through the trained face processing model.
The terminal device 4 may be a smart phone, a tablet computer, a learning machine, an intelligent wearable device, or other computing device. The terminal device may include, but is not limited to, the processor 40 and the memory 41. Those skilled in the art will appreciate that fig. 8 is merely an example of the terminal device 4 and does not constitute a limitation of the terminal device 4; the terminal device may include more or fewer components than shown, combine some components, or have different components. For example, the terminal device may also include input/output devices, network access devices, buses, etc.
The Processor 40 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 41 may be an internal storage unit of the terminal device 4, such as a hard disk or a memory of the terminal device 4. The memory 41 may alternatively be an external storage device of the terminal device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card (Flash Card) provided on the terminal device 4. Further, the memory 41 may include both an internal storage unit and an external storage device of the terminal device 4. The memory 41 is used for storing the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the above modules or units is only one logical function division, and there may be other division manners in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units described above, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow in the methods of the above embodiments may be implemented by a computer program; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (7)

1. An image processing method, comprising:
acquiring a face picture, and acquiring age information to be converted of a face in the face picture;
inputting the face picture and the age information into an initial face processing model to obtain a generated picture, wherein the face processing model is used for converting a face in the face picture into a face corresponding to the age information;
inputting the generated picture into a trained first discrimination model, judging whether a first element of the generated picture meets a first condition, inputting the generated picture into a trained second discrimination model, judging whether a second element of the generated picture meets a second condition, and continuously adjusting parameters of a current face processing model until the first element and the second element of the generated picture output by the current face processing model respectively meet the first condition and the second condition;
taking the current face processing model as a trained face processing model, and processing a face picture to be processed through the trained face processing model;
the method includes the steps of inputting the generated picture into a trained first discrimination model, judging whether a first element of the generated picture meets a first condition, inputting the generated picture into a trained second discrimination model, judging whether a second element of the generated picture meets a second condition, and continuously adjusting parameters of a current face processing model until the first element and the second element of the generated picture output by the current face processing model respectively meet the first condition and the second condition, wherein the steps include:
inputting the generated picture into the trained first discrimination model, and judging whether the generated picture of the current face processing model belongs to the sample database;
continuously adjusting parameters of the current face processing model until a generated picture of the current face processing model belongs to the sample database;
inputting the generated picture of the current face processing model after the training of the trained first discrimination model into the trained second discrimination model, and judging whether the face in the generated picture of the current face processing model conforms to the face aging rule or not;
continuously adjusting parameters of the current face processing model until the face in the generated picture of the current face processing model accords with the face aging rule;
the method comprises the following steps that a sample database comprises sample face pictures of a plurality of different sample individuals in different age groups, a generated picture of a current face processing model after the training of the trained first discrimination model is input into a trained second discrimination model, and whether the face in the generated picture of the current face processing model accords with a face aging rule or not is judged, and the method comprises the following steps:
acquiring a training individual and a training age group, wherein the training individual is a sample individual corresponding to the face picture in the sample database, and the training age group is an age group corresponding to the age information in the sample database;
acquiring a reference picture, wherein the reference picture is a sample face picture of the training individual in the sample database in an age group adjacent to the training age group;
and inputting the generated picture of the current face processing model after the training of the trained first discrimination model and the reference picture into the trained second discrimination model, and judging whether the face in the generated picture conforms to the face aging rule.
2. The method according to claim 1, wherein before the step of inputting the generated picture into the trained first discriminant model and determining whether the generated picture of the current face processing model belongs to the sample database, the method further comprises:
training an initial first discriminant model by using the initial face processing model and the sample database, thereby obtaining the trained first discriminant model;
before the step of inputting the generated picture of the current face processing model after the training of the trained first discrimination model into the trained second discrimination model and judging whether the face in the generated picture of the current face processing model conforms to the face aging rule, the method further comprises:
and training an initial second judgment model by using the initial face processing model and the sample database so as to obtain the trained second judgment model.
3. The image processing method according to claim 1 or 2, wherein the processing the face image to be processed by the trained face processing model comprises:
acquiring a face picture to be processed and age information to be converted of a face in the face picture to be processed;
and inputting the face picture to be processed and age information to be converted of the face in the face picture to be processed into the trained face processing model to obtain the face picture output by the trained face processing model.
4. The picture processing method according to claim 3, wherein after obtaining the face picture output by the trained face processing model, the method further comprises:
judging whether the face picture output by the trained face processing model meets the user requirement;
and if not, taking the trained face processing model as the initial face processing model, and returning to the step of acquiring a face picture and acquiring age information to be converted of the face in the face picture, and the subsequent steps.
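Claim 4's feedback loop can be sketched as below (all names and the satisfaction test are hypothetical): if the output does not meet the user requirement, the trained model becomes the new initial model and the training/processing steps are repeated.

```python
def process_until_satisfied(face, age, model, retrain, satisfied, max_rounds=10):
    output = model(face, age)
    for _ in range(max_rounds):
        if satisfied(output):
            break
        model = retrain(model)     # trained model becomes the new initial model
        output = model(face, age)  # return to the processing steps
    return output
```

A capped round count stands in for the claim's open-ended "return and execute" loop.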
5. A picture processing apparatus, comprising:
the training data acquisition module is used for acquiring a face picture and acquiring age information to be converted of a face in the face picture;
the generated image acquisition module is used for inputting the face image and the age information into an initial face processing model to obtain a generated image, and the face processing model is used for converting the face in the face image into a face corresponding to the age information;
the model training module is used for inputting the generated picture into a trained first discrimination model, judging whether a first element of the generated picture meets a first condition or not, inputting the generated picture into a trained second discrimination model, judging whether a second element of the generated picture meets a second condition or not, and continuously adjusting parameters of the current face processing model until the first element and the second element of the generated picture output by the current face processing model respectively meet the first condition and the second condition;
the image processing module is used for taking the current face processing model as a trained face processing model and processing a face image to be processed through the trained face processing model;
the face picture is any sample face picture in a sample database;
the model training module comprises:
the first discrimination input unit is used for inputting the generated picture into the trained first discrimination model and judging whether the generated picture of the current face processing model belongs to the sample database;
the first model training unit is used for continuously adjusting the parameters of the current face processing model until the generated picture of the current face processing model belongs to the sample database;
the second discrimination input unit is used for inputting the generated picture of the current face processing model that has passed the trained first discrimination model into the trained second discrimination model, and judging whether the face in the generated picture of the current face processing model conforms to the face aging rule;
the second model training unit is used for continuously adjusting the parameters of the current face processing model until the face in the generated picture of the current face processing model conforms to the face aging rule;
the sample database comprises a plurality of sample face pictures of different sample individuals in different age groups, and the second discrimination input unit comprises:
a training data acquisition subunit, configured to acquire a training individual and a training age group, wherein the training individual is the sample individual in the sample database corresponding to the face picture, and the training age group is the age group in the sample database corresponding to the age information;
a reference picture acquisition subunit, configured to acquire a reference picture, wherein the reference picture is a sample face picture of the training individual in the sample database in an age group adjacent to the training age group;
and the second discrimination input subunit is used for inputting the generated picture of the current face processing model that has passed the trained first discrimination model, together with the reference picture, into the trained second discrimination model, and judging whether the face in the generated picture conforms to the face aging rule.
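As a structural illustration only, claim 5's apparatus can be modeled as a composition of the four modules, here injected as callables (the module names follow the claim; the implementations are hypothetical stand-ins, not the patented apparatus).

```python
class PictureProcessingApparatus:
    def __init__(self, training_data_acquisition, generated_picture_acquisition,
                 model_training, picture_processing):
        self.acquire = training_data_acquisition    # face picture + age info
        self.generate = generated_picture_acquisition
        self.train = model_training
        self.process = picture_processing

    def run(self, raw_input):
        face, age = self.acquire(raw_input)
        generated = self.generate(face, age)   # initial model's output
        model = self.train(generated)          # adjust until discriminators accept
        return self.process(model, face, age)  # process with the trained model
```

This mirrors how the device claim decomposes the method claim into cooperating modules.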
6. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 4 when executing the computer program.
7. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 4.
CN201810746936.7A 2018-07-09 2018-07-09 Picture processing method, picture processing device and terminal equipment Active CN108985215B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810746936.7A CN108985215B (en) 2018-07-09 2018-07-09 Picture processing method, picture processing device and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810746936.7A CN108985215B (en) 2018-07-09 2018-07-09 Picture processing method, picture processing device and terminal equipment

Publications (2)

Publication Number Publication Date
CN108985215A CN108985215A (en) 2018-12-11
CN108985215B true CN108985215B (en) 2020-05-22

Family

ID=64536465

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810746936.7A Active CN108985215B (en) 2018-07-09 2018-07-09 Picture processing method, picture processing device and terminal equipment

Country Status (1)

Country Link
CN (1) CN108985215B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110321802B (en) * 2019-06-10 2021-10-01 达闼机器人有限公司 Face image generation method and apparatus, storage device and electronic device
CN111145080B (en) * 2019-12-02 2023-06-23 北京达佳互联信息技术有限公司 Training method of image generation model, image generation method and device
CN111553838A (en) * 2020-05-08 2020-08-18 深圳前海微众银行股份有限公司 Model parameter updating method, device, equipment and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN105787974A (en) * 2014-12-24 2016-07-20 中国科学院苏州纳米技术与纳米仿生研究所 Establishment method for establishing bionic human facial aging model
CN107169454A (en) * 2017-05-16 2017-09-15 中国科学院深圳先进技术研究院 A kind of facial image age estimation method, device and its terminal device

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN107563283B (en) * 2017-07-26 2023-01-06 百度在线网络技术(北京)有限公司 Method, device, equipment and storage medium for generating attack sample
CN108021905A (en) * 2017-12-21 2018-05-11 广东欧珀移动通信有限公司 image processing method, device, terminal device and storage medium

Also Published As

Publication number Publication date
CN108985215A (en) 2018-12-11

Similar Documents

Publication Publication Date Title
US11158057B2 (en) Device, method, and graphical user interface for processing document
CN109345553B (en) Palm and key point detection method and device thereof, and terminal equipment
CN109118447B (en) Picture processing method, picture processing device and terminal equipment
CN108985215B (en) Picture processing method, picture processing device and terminal equipment
CN108961267B (en) Picture processing method, picture processing device and terminal equipment
CN108898082B (en) Picture processing method, picture processing device and terminal equipment
CN108924440B (en) Sticker display method, device, terminal and computer-readable storage medium
CN109086742A (en) scene recognition method, scene recognition device and mobile terminal
CN109471626B (en) Page logic structure, page generation method, page data processing method and device
CN110363084A (en) A kind of class state detection method, device, storage medium and electronics
CN110266994B (en) Video call method, video call device and terminal
CN112839223B (en) Image compression method, image compression device, storage medium and electronic equipment
CN111009031B (en) Face model generation method, model generation method and device
CN107193598A (en) One kind application startup method, mobile terminal and computer-readable recording medium
CN115661912B (en) Image processing method, model training method, electronic device, and readable storage medium
CN113553039A (en) Method and device for generating executable code of operator
CN111984803B (en) Multimedia resource processing method and device, computer equipment and storage medium
CN110555102A (en) media title recognition method, device and storage medium
CN111176533A (en) Wallpaper switching method, device, storage medium and terminal
CN113192639A (en) Training method, device and equipment of information prediction model and storage medium
CN110929159A (en) Resource delivery method, device, equipment and medium
CN116741197B (en) Multi-mode image generation method and device, storage medium and electronic equipment
CN108810401A (en) Guide the method and system taken pictures
CN110059739B (en) Image synthesis method, image synthesis device, electronic equipment and computer-readable storage medium
CN104469092A (en) Image acquisition method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant