CN108985215A - Image processing method, image processing apparatus and terminal device - Google Patents
- Publication number
- CN108985215A (application number CN201810746936.7A)
- Authority
- CN
- China
- Prior art keywords
- face
- picture
- training
- model
- processing model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Processing Or Creating Images (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The present application provides an image processing method, an image processing apparatus, and a terminal device. The method comprises: obtaining a face picture and age information; inputting the face picture and the age information into an initial face processing model to obtain a generated picture; inputting the generated picture into a trained first discrimination model and a trained second discrimination model, and continually adjusting the parameters of the current face processing model until the first element and the second element of the generated picture output by the current face processing model meet the first condition and the second condition respectively; taking the current face processing model as the trained face processing model, and processing a face picture to be processed with the trained face processing model. The present application can obtain a face processing model that meets the requirements, and can therefore satisfy the different face-processing requirements of different users or developers.
Description
Technical field
The present application relates to the technical field of computer vision, and in particular to an image processing method, an image processing apparatus, a terminal device, and a computer-readable storage medium.
Background technique
Currently, many applications on the market simulate face aging and rejuvenation, for example Oldify, Old-Age Face, and Time Camera. Given an input face picture, these applications can predict the appearance of the face in that picture at certain age ranges.
To achieve face aging and rejuvenation, the commonly used approach at present is to locate feature points on the face in the face picture, find the positions of the facial features and facial muscles, and then reconstruct the face shape and facial muscles, thereby achieving aging or rejuvenation. This traditional processing method for face aging and rejuvenation is relatively fixed and lacks flexibility; however, in many cases different users or developers have different face-processing requirements, so the traditional method cannot satisfy them.
Summary of the invention
In view of this, the present application provides an image processing method, an image processing apparatus, a terminal device, and a computer-readable storage medium, which can satisfy the different face-processing requirements of different users or developers.
A first aspect of the present application provides an image processing method, comprising:
obtaining a face picture, and obtaining age information to which the face in the face picture is to be converted;
inputting the face picture and the age information into an initial face processing model to obtain a generated picture, the face processing model being used to convert the face in the face picture into a face corresponding to the age information;
inputting the generated picture into a trained first discrimination model and judging whether the first element of the generated picture meets a first condition; inputting the generated picture into a trained second discrimination model and judging whether the second element of the generated picture meets a second condition; and continually adjusting the parameters of the current face processing model until the first element and the second element of the generated picture output by the current face processing model meet the first condition and the second condition respectively;
taking the current face processing model as the trained face processing model, and processing a face picture to be processed with the trained face processing model.
A second aspect of the present application provides an image processing apparatus, comprising:
a training data obtaining module, configured to obtain a face picture and obtain age information to which the face in the face picture is to be converted;
a generated picture obtaining module, configured to input the face picture and the age information into an initial face processing model to obtain a generated picture, the face processing model being used to convert the face in the face picture into a face corresponding to the age information;
a model training module, configured to input the generated picture into a trained first discrimination model and judge whether the first element of the generated picture meets a first condition, input the generated picture into a trained second discrimination model and judge whether the second element of the generated picture meets a second condition, and continually adjust the parameters of the current face processing model until the first element and the second element of the generated picture output by the current face processing model meet the first condition and the second condition respectively;
a picture processing module, configured to take the current face processing model as the trained face processing model and process a face picture to be processed with the trained face processing model.
A third aspect of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the steps of the method of the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method of the first aspect.
A fifth aspect of the present application provides a computer program product, including a computer program which, when executed by one or more processors, implements the steps of the method of the first aspect.
In summary, the present application provides an image processing method. First, a face picture is obtained, together with age information to which the face in the picture is to be converted. Second, the face picture and the age information are input into an initial face processing model to obtain a generated picture, the face processing model being used to convert the face in the face picture into a face corresponding to the age information. Then, the generated picture is input into a trained first discrimination model to judge whether its first element meets a first condition, and into a trained second discrimination model to judge whether its second element meets a second condition; the parameters of the current face processing model are continually adjusted until the first element and the second element of the generated picture output by the current face processing model meet the first condition and the second condition respectively. Finally, the current face processing model is taken as the trained face processing model, and the face picture to be processed is processed with it. Compared with the conventional method, the present application pre-trains a face processing model for face aging and rejuvenation, continually adjusting the parameters of the current face processing model until it meets the requirements. The present application can therefore obtain a face processing model that meets the requirements, and can satisfy the different face-processing requirements of different users or developers.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present application more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of an image processing method provided by Embodiment 1 of the present application;
Fig. 2 is a schematic diagram of the training process of the face processing model provided by Embodiment 1 of the present application;
Fig. 3 is a schematic flowchart of another image processing method provided by Embodiment 2 of the present application;
Fig. 4 is a schematic diagram of the training process of a first discrimination model provided by Embodiment 2 of the present application;
Fig. 5 is a schematic diagram of the training process of a second discrimination model provided by Embodiment 2 of the present application;
Fig. 6 is a schematic diagram of the training process of a face processing model provided by Embodiment 2 of the present application;
Fig. 7 is a schematic structural diagram of an image processing apparatus provided by Embodiment 3 of the present application;
Fig. 8 is a schematic structural diagram of a terminal device provided by Embodiment 4 of the present application.
Detailed description of the embodiments
The following description provides specific details, such as particular system structures and techniques, for the purpose of illustration rather than limitation, so as to facilitate a thorough understanding of the embodiments of the present application. However, it will be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
The image processing method provided by the embodiments of the present application is applicable to terminal devices. By way of example, the terminal devices include, but are not limited to: smartphones, tablet computers, learning machines, smart wearable devices, and the like.
It should be understood that, when used in this specification and the appended claims, the term "comprising" indicates the presence of the described features, integers, steps, operations, elements, and/or components, but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or sets thereof.
It should also be understood that the terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes these combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrases "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
In specific implementations, the terminal devices described in the embodiments of the present application include, but are not limited to, portable devices such as mobile phones, laptop computers, or tablet computers with touch-sensitive surfaces (for example, touch-screen displays and/or touchpads). It should also be understood that, in some embodiments, the device is not a portable communication device but a desktop computer with a touch-sensitive surface (for example, a touch-screen display and/or a touchpad).
In the following discussion, a terminal device including a display and a touch-sensitive surface is described. However, it should be understood that the terminal device may include one or more other physical user interface devices, such as a physical keyboard, a mouse, and/or a joystick.
The terminal device supports various applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disc burning application, a spreadsheet application, a game application, a telephony application, a video conferencing application, an email application, an instant messaging application, a fitness support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a video player application.
The various applications executable on the terminal device may use at least one common physical user interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface, and the corresponding information displayed on the terminal, may be adjusted and/or changed between applications and/or within a corresponding application. In this way, a common physical architecture of the terminal (for example, the touch-sensitive surface) can support various applications with user interfaces that are intuitive and transparent to the user.
In addition, in the description of the present application, the terms "first", "second", and the like are used only to distinguish between descriptions and are not to be understood as indicating or implying relative importance.
To illustrate the technical solutions of the present application described above, specific embodiments are described below.
Embodiment one
The image processing method provided by Embodiment 1 of the present application is described below. Referring to Fig. 1, the image processing method in Embodiment 1 of the present application includes:
In step S101, a face picture is obtained, and age information to which the face in the face picture is to be converted is obtained;
In Embodiment 1 of the present application, the face picture may be obtained from a sample database, which may contain multiple different sample face pictures; any one of the sample face pictures in the sample database may be chosen as the face picture for training the initial face processing model. Since the face processing model in the present application is intended to achieve face aging and rejuvenation, training the face processing model also requires obtaining the age information to which the face is to be converted, such as 5 years old, 30 years old, 50 years old, or 80 years old.
Furthermore, in general, a face processing model can only process pictures of a fixed size. Therefore, in Embodiment 1 of the present application, after the face picture is obtained, it may also be preprocessed. For example, first, face alignment is performed on the face picture; second, the face picture is cropped according to the face location obtained by the alignment and scaled to a picture of uniform size. The preprocessing can remove a large amount of interference information in the face picture and satisfy the input requirements of the face processing model.
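The crop-and-scale step above can be sketched as follows. This is a minimal sketch under stated assumptions: the patent does not name a face detector or an output size, so the face box is taken as given here (in practice it would come from a detector or a landmark-based alignment step), and nearest-neighbour sampling stands in for whatever interpolation a real system uses.

```python
import numpy as np

def center_crop_and_scale(img, box, out_size=64):
    """Crop an H x W x 3 picture to the face box (x, y, w, h) and scale
    it to out_size x out_size by nearest-neighbour sampling, so that
    every training picture has the uniform size the model expects.
    The box and out_size are illustrative assumptions."""
    x, y, w, h = box
    crop = img[y:y + h, x:x + w]
    ys = np.arange(out_size) * h // out_size  # source rows to sample
    xs = np.arange(out_size) * w // out_size  # source columns to sample
    return crop[ys][:, xs]
```

Feeding every sample through the same function guarantees the fixed input size the face processing model requires.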
In step S102, the face picture and the age information are input into the initial face processing model to obtain a generated picture, the face processing model being used to convert the face in the face picture into a face corresponding to the age information;
In Embodiment 1 of the present application, the face picture and the age information obtained in step S101 are input into the initial face processing model, so that the initial face processing model converts the face in the face picture into a face corresponding to the age information and outputs a generated picture. As shown in Fig. 2, suppose the sample face picture chosen from the sample database is face picture 201, and the age information instructs that the face in face picture 201 be converted into the corresponding face at 80 years old; the initial face processing model processes face picture 201 according to the age information and outputs generated picture 202. The initial face processing model is a face processing model that has not yet finished training; the face in the generated picture it outputs may therefore be stiff and unnatural, or may fail to satisfy some requirements of users or developers. In this case, the initial face processing model needs to be trained using the subsequent steps of the technical solution provided by Embodiment 1 of the present application.
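One common way for such a model to accept both a picture and an age is to encode the age as an extra conditioning plane concatenated to the image channels, as conditional generators often do. The patent does not fix the conditioning mechanism, so the normalised-age plane below is only an illustrative assumption.

```python
import numpy as np

def condition_on_age(img, age, num_ages=100):
    """Attach the age information to the picture as an extra input
    plane, so a single network input carries both the face and the
    target age. The normalisation by num_ages is an assumption."""
    h, w, _ = img.shape
    age_plane = np.full((h, w, 1), age / num_ages, dtype=np.float32)
    return np.concatenate([img.astype(np.float32), age_plane], axis=-1)
```

The conditioned tensor is what the initial face processing model would consume to produce the generated picture.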
In step S103, the generated picture is input into the trained first discrimination model and whether the first element of the generated picture meets the first condition is judged; the generated picture is input into the trained second discrimination model and whether the second element of the generated picture meets the second condition is judged; the parameters of the current face processing model are continually adjusted until the first element and the second element of the generated picture output by the current face processing model meet the first condition and the second condition respectively;
In the embodiments of the present application, after the generated picture output by the initial face processing model is obtained, the generated picture is input into the trained first discrimination model and the trained second discrimination model.
The subject that judges whether the first element of the generated picture meets the first condition may be the trained first discrimination model, and the subject that judges whether the second element of the generated picture meets the second condition may be the trained second discrimination model. That is, the trained first discrimination model is used to determine whether the first element of the generated picture meets the first condition; for example, it may be used to determine whether the leftmost face in the generated picture wears a hat; or whether the rightmost face in the generated picture wears glasses; or whether the face in the generated picture wears makeup; or whether the generated picture belongs to the sample database. The first element and the first condition are not limited herein. The trained second discrimination model is used to determine whether the second element of the generated picture meets the second condition; for example, it may be used to determine whether the hat worn by the face in the generated picture is red; or whether the glasses worn by the face in the generated picture are round-framed; or whether the face in the generated picture conforms to the law of facial aging. The second element and the second condition are not limited herein.
The parameters of the current face processing model are continually adjusted until the trained first discrimination model determines that the generated picture of the current face processing model meets the first condition, and the trained second discrimination model determines that the generated picture of the current face processing model meets the second condition.
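The adjust-until-both-conditions-hold procedure of step S103 can be sketched abstractly. Every callable below is a placeholder: `generator` stands for the current face processing model, `judge_first` and `judge_second` for the two trained discrimination models, and `update_params` for whatever parameter-update rule (in practice a gradient step) an implementation actually uses; the step cap is an added safeguard not stated in the patent.

```python
def train_generator(generator, update_params, judge_first, judge_second,
                    face_picture, age_info, max_steps=1000):
    """Keep adjusting the face processing model until its generated
    picture satisfies both trained discrimination models."""
    for _ in range(max_steps):
        generated = generator(face_picture, age_info)
        ok_first = judge_first(generated)    # first element vs. first condition
        ok_second = judge_second(generated)  # second element vs. second condition
        if ok_first and ok_second:
            return generator                 # training is complete
        generator = update_params(generator, ok_first, ok_second)
    raise RuntimeError("did not satisfy both conditions within max_steps")
```

The loop returns the current face processing model once both conditions hold, which is exactly the model that step S104 then uses.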
To describe the training process of the face processing model in step S103 more intuitively, step S103 is described in detail below with reference to Fig. 2. As shown in Fig. 2, suppose the trained first discrimination model is used to determine whether the face in the generated picture of the current face processing model wears glasses, and the trained second discrimination model is used to determine whether the glasses worn by the face in the generated picture of the current face processing model are round-framed. The generated picture 202 output by the initial face processing model is input into the trained first discrimination model, which determines whether the face in generated picture 202 wears glasses. If not, the parameters of the initial face processing model are adjusted, and the generated picture output by the adjusted face processing model is re-input into the trained first discrimination model, so that the first discrimination model continues to determine whether the face in the generated picture of the adjusted face processing model wears glasses; the parameters of the current face processing model are continually adjusted until the trained first discrimination model determines that the face in the generated picture output by the current face processing model wears glasses. Then, the generated picture of the face processing model that has completed training under the first discrimination model is input into the trained second discrimination model, which determines whether the glasses worn by the face in the generated picture of the current face processing model are round-framed; the parameters of the current face processing model are continually adjusted until the second discrimination model determines that the glasses worn by the face in the generated picture of the current face processing model are round-framed. The above training process is merely an example and does not limit the training method of the face processing model. That is, in the embodiments of the present application, the current face processing model may first be trained with the trained first discrimination model, and after that training is complete, the face processing model may then be further trained with the trained second discrimination model.
Furthermore, the trained first discrimination model and the trained second discrimination model can also be used to make the face in the generated picture output by the trained face processing model more natural and realistic; for details, refer to the description of Embodiment 2 of the present application.
In step S104, the current face processing model is taken as the trained face processing model, and the face picture to be processed is processed by the trained face processing model;
In the embodiments of the present application, the current face processing model that has completed training with the trained first discrimination model and the trained second discrimination model is taken as the trained face processing model, and the trained face processing model can be used to process the face picture to be processed.
Processing the face picture to be processed with the trained face processing model may be as follows: obtaining the face picture to be processed and the age information to which the face in the face picture to be processed is to be converted; inputting the face picture to be processed and the age information into the trained face processing model, and obtaining the face picture output by the trained face processing model.
In addition, in the embodiments of the present application, after the face picture output by the trained face processing model is obtained, the method may also include: judging whether the face picture output by the trained face processing model meets the user demand; if not, taking the trained face processing model as the initial face processing model, and returning to step S101 to retrain the currently trained face processing model.
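The user-demand check and the return to step S101 form a small control loop, which can be sketched as follows. `train` and `meets_demand` are hypothetical callables standing in for the full training procedure of steps S101-S103 and for the user's acceptance criterion; the retry cap is an added safeguard not stated in the patent.

```python
def process_with_retraining(train, model, picture, age, meets_demand,
                            max_rounds=3):
    """If the output picture does not meet the user demand, take the
    trained model as the new initial model and train it again."""
    output = None
    for _ in range(max_rounds):
        output = model(picture, age)
        if meets_demand(output):
            return output
        model = train(model)  # re-train, starting from the current model
    return output             # best effort after max_rounds
```

This mirrors the loop in the text: each unsatisfactory output sends the trained model back through the training procedure rather than starting from scratch.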
The training process of the face processing model in steps S101-S103 of Embodiment 1 may be carried out before the terminal device leaves the factory: the trained face processing model is obtained in advance and stored in the memory of the terminal device, so that the user can subsequently process the face picture to be processed directly with the trained face processing model in the memory. Alternatively, the initial face processing model may be trained only when the terminal device is about to process a face picture, obtaining the trained face processing model and then processing the face picture to be processed with it; this is not limited herein.
In Embodiment 1 of the present application, a face processing model for face aging and rejuvenation is pre-trained, and the parameters of the current face processing model are continually adjusted until the trained first discrimination model and the trained second discrimination model determine that the current face processing model meets the requirements. The present application can therefore obtain a face processing model that meets the requirements, and can satisfy the different face-processing requirements of different users or developers.
Embodiment two
Another image processing method provided by Embodiment 2 of the present application is described below. Referring to Fig. 3, the image processing method in Embodiment 2 of the present application includes:
In step S201, any face picture is obtained from the sample database, and the age information to which the face in the face picture is to be converted is obtained;
In step S202, the face picture and the age information are input into the initial face processing model to obtain a generated picture, the face processing model being used to convert the face in the face picture into a face corresponding to the age information;
In Embodiment 2 of the present application, steps S201-S202 are executed in the same manner as steps S101-S102 in Embodiment 1; for details, refer to the description of Embodiment 1, which is not repeated here.
In step S203, the generated picture is input into the trained first discrimination model, and whether the generated picture of the current face processing model belongs to the sample database is judged;
In Embodiment 2 of the present application, "judging whether the first element of the generated picture meets the first condition" in Embodiment 1 is restricted to "judging whether the generated picture belongs to the sample database". In addition, in the embodiments of the present application, the subject that judges whether the generated picture of the current face processing model belongs to the sample database may be the trained first discrimination model; that is, the trained first discrimination model is used to judge whether the generated picture output by the current face processing model belongs to the sample database. For example, if the sample database contains photos of various different faces, the trained first discrimination model is used to continually train the current face processing model, so that the generated pictures output by the face processing model resemble the face photos in the sample database as much as possible; the generated pictures output by the face processing model trained with the trained first discrimination model are thus more natural.
In addition, before step S203, the initial face processing model and the sample database may be used to train the initial first discrimination model, so as to obtain the trained first discrimination model.
As shown in Figure 4, the training process of the first discrimination model may be as follows:
First, one or more sample face pictures are chosen from the sample database, for example picture A and picture B, and the initial face processing model is used to process picture A and picture B to generate the corresponding generated pictures, such as picture A1 and picture B1. Second, pictures A1 and B1 are labeled "not belonging to the sample database", one or more further sample pictures, such as picture C, picture D and picture E, are chosen from the sample database, and pictures C, D and E are labeled "belonging to the sample database". Finally, the initial first discrimination model is trained on pictures A1, B1, C, D and E together with their corresponding labels, so that the trained first discrimination model can correctly classify an input picture, i.e. can identify whether the input picture belongs to the sample database.
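The labeling scheme just described (generated pictures labeled as outside the sample database, real sample pictures labeled as inside it) can be sketched in plain Python. This is an illustrative sketch only; the function name and the toy model below are hypothetical and not part of the patent.

```python
# Sketch of assembling a labeled batch for the first discrimination model:
# generated pictures get label 0 ("not belonging to the sample database"),
# real sample pictures get label 1 ("belonging to the sample database").
def build_discriminator_batch(face_model, sample_pictures, real_pictures):
    batch = []
    for pic in sample_pictures:
        generated = face_model(pic)   # e.g. picture A -> picture A1
        batch.append((generated, 0))  # label: not in sample database
    for pic in real_pictures:
        batch.append((pic, 1))        # label: in sample database
    return batch

# Toy stand-in for the initial face processing model: tag the input name.
toy_model = lambda pic: pic + "_generated"

batch = build_discriminator_batch(toy_model, ["A", "B"], ["C", "D", "E"])
```

A real implementation would feed such labeled batches into a binary classifier; the list of `(picture, label)` pairs is only the data-preparation step.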
In step S204, the parameters of the current face processing model are continually adjusted until the generated picture of the current face processing model belongs to the sample database;
In this embodiment, the generated picture output by the initial face processing model from the face picture and the age information is first input into the trained first discrimination model, and the parameters of the current face processing model are then continually adjusted until the trained first discrimination model determines that the generated picture output by the current face processing model belongs to the sample database. The generated pictures output by the face processing model trained against the first discrimination model are thereby made similar to the sample face pictures in the sample database.
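The adjustment loop of step S204 can be sketched abstractly: a frozen discriminator repeatedly scores the current model, and the model's parameters are nudged until the discriminator accepts its output. The scalar parameter, step size and acceptance threshold below are toy stand-ins, not the patent's actual model weights or loss function.

```python
# Sketch of the S204 loop: keep adjusting the face processing model's
# parameters until the (frozen) first discrimination model classifies
# its output as belonging to the sample database.
def train_until_accepted(param, discriminator, step=0.1, max_iters=1000):
    iters = 0
    while not discriminator(param) and iters < max_iters:
        param += step   # stand-in for a gradient-based parameter update
        iters += 1
    return param, iters

# Toy discriminator: "belongs to the sample database" once param >= 1.0.
accepts = lambda p: p >= 1.0
param, iters = train_until_accepted(0.0, accepts)
```

In practice this alternates with retraining the discriminator itself, which is the loop-iteration scheme described later in this embodiment.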
In step S205, the generated picture of the current face processing model, i.e. the model obtained after training against the trained first discrimination model, is input into the trained second discrimination model, and it is judged whether the face in the generated picture of the current face processing model meets the facial senescence law;
In Embodiment Two of the present application, the judgment in Embodiment One of whether the second element of the generated picture meets the second condition is instantiated as "judging whether the face in the generated picture meets the facial senescence law". In addition, in this embodiment, the subject that judges whether the face in the generated picture of the current face processing model meets the facial senescence law may be the trained second discrimination model; that is, the trained second discrimination model is used to judge whether the face in the generated picture output by the current face processing model meets the facial senescence law. In this embodiment, in order to ensure that the face in the picture generated by the trained face processing model is more lifelike, the facial senescence law may refer to "the facial senescence law of the same person". The training process of the second discrimination model when "facial senescence law" refers specifically to "the facial senescence law of the same person" is described below.
Before step S205, the initial face processing model and the sample database may be used to train the initial second discrimination model, so as to obtain the trained second discrimination model.
As shown in Figure 5, the training process of the second discrimination model may be as follows:
First, one or more sample face pictures are chosen from the sample database, for example picture F and picture G. Here the sample database contains sample face pictures of multiple different sample individuals in different age brackets; for example, it may take photos of Xiaohong, Xiaoming and Xiaogang at ages 0-5, 5-10, 10-15, 15-20, 20-30, 30-40, 40-50, 50-60, 60-70 and 70-80 as its sample face pictures, in which case picture F may be the photo of Xiaohong at age 0-5 and picture G may be the photo of Xiaoming at age 15-20. Second, the initial face processing model processes picture F and picture G to produce the corresponding generated pictures, say picture F1 (with corresponding age information X, for example 55) and picture G1 (with corresponding age information Y, for example 80). Third, a sample face picture F2 of the sample individual corresponding to picture F (i.e. Xiaohong) is obtained from the sample database in an age bracket (i.e. 40-50 or 60-70) adjacent to the bracket containing age information X (i.e. 50-60); that is, the photo of Xiaohong at age 40-50 is obtained from the sample database, or alternatively the photo of Xiaohong at age 60-70 may be obtained, and the present application does not limit this choice. Likewise, a sample face picture G2 of the sample individual corresponding to picture G is obtained in an age bracket adjacent to the bracket containing age information Y (i.e. the photo of Xiaoming at age 60-70 is obtained from the sample database). Fourth, picture F1 and picture F2 form picture group 1, which is labeled "does not meet the facial senescence law", and picture G1 and picture G2 form picture group 2, which is likewise labeled "does not meet the facial senescence law". Fifth, picture groups of one or more sample individuals in adjacent age brackets are chosen from the sample database and labeled "meets the facial senescence law"; for example, picture H and picture I, the photos of Xiaogang at ages 40-50 and 50-60, are chosen as picture group 3, with the label "meets the facial senescence law". Finally, the initial second discrimination model is trained on the above picture groups and their corresponding labels, so that the trained second discrimination model can identify whether the faces in an input picture group meet the facial senescence law of the same person.
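The age brackets used in this example, and the notion of an "adjacent" bracket on which the picture pairs depend, can be made concrete with small helpers. The bracket boundaries follow the ages given in the text; the helper names are illustrative, not from the patent.

```python
# Age brackets from the example; "adjacent" brackets are the immediate
# neighbours in this list, as used when picking pictures F2 and G2.
BRACKETS = [(0, 5), (5, 10), (10, 15), (15, 20), (20, 30),
            (30, 40), (40, 50), (50, 60), (60, 70), (70, 80)]

def bracket_of(age):
    """Return the age bracket containing `age` (e.g. 55 -> (50, 60))."""
    for lo, hi in BRACKETS:
        if lo <= age < hi:
            return (lo, hi)
    raise ValueError("age outside all brackets")

def adjacent_brackets(bracket):
    """Brackets immediately before/after `bracket`."""
    i = BRACKETS.index(bracket)
    return [BRACKETS[j] for j in (i - 1, i + 1) if 0 <= j < len(BRACKETS)]
```

For age information X = 55, `bracket_of(55)` gives (50, 60) and `adjacent_brackets((50, 60))` gives the two candidate brackets 40-50 and 60-70, matching the choice of picture F2 above.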
In step S206, the parameter of current face processing model is constantly adjusted, until current face processing model
Generation picture in face meet facial senescence law;
In this embodiment, if the trained second discrimination model judges whether an input picture group meets "the facial senescence law of the same person" and was trained as described in step S205, then training the current face processing model with the trained second discrimination model may proceed as follows. First, the training individual and the training age bracket are obtained, where the training individual is the sample individual corresponding to the face picture in the sample database, and the training age bracket is the age bracket in the sample database corresponding to the age information. Second, a reference picture is obtained, where the reference picture is a sample face picture, in the sample database, of the training individual in an age bracket adjacent to the training age bracket. Third, the generated picture output by the current face processing model and the reference picture are input together into the trained second discrimination model, and the parameters of the current face processing model are continually adjusted until the trained second discrimination model judges that the face in the generated picture output by the current face processing model meets the facial senescence law.
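The reference-picture lookup of this step (a real picture of the training individual from a bracket adjacent to the training age bracket) can be sketched as a dictionary lookup. The database layout, the keys and the names below are hypothetical stand-ins for whatever storage the sample database actually uses.

```python
# Sketch of fetching the reference picture for step S206.
# `database` maps (individual, age_bracket) -> picture.
BRACKETS = [(0, 5), (5, 10), (10, 15), (15, 20), (20, 30),
            (30, 40), (40, 50), (50, 60), (60, 70), (70, 80)]

def get_reference_picture(database, individual, target_bracket):
    """Return a picture of `individual` from a bracket adjacent to
    `target_bracket`, or None if the database has none."""
    i = BRACKETS.index(target_bracket)
    for j in (i - 1, i + 1):
        if 0 <= j < len(BRACKETS) and (individual, BRACKETS[j]) in database:
            return database[(individual, BRACKETS[j])]
    return None

db = {("xiaohong", (40, 50)): "photo_xh_40_50",
      ("xiaohong", (60, 70)): "photo_xh_60_70"}
ref = get_reference_picture(db, "xiaohong", (50, 60))
```

The generated picture and `ref` would then be fed as a pair into the second discrimination model, which drives the parameter adjustment.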
In step S207, the current face processing model is taken as the trained face processing model, and the face picture to be processed is processed by the trained face processing model;
In Embodiment Two of the present application, step S207 is executed in the same manner as step S104 in Embodiment One; for details, refer to the description of Embodiment One, which is not repeated here.
In Embodiment Two of the present application, steps S201-S206 provide a training method for the face processing model, namely: first, the initial face processing model and the sample database are used to train the initial first discrimination model and the initial second discrimination model, producing the trained first and second discrimination models; then, the trained first and second discrimination models are used to train the initial face processing model, producing the trained face processing model.
With the face processing model produced by the above training method, the generated pictures it outputs are more similar to the sample face pictures in the sample database, and the faces in those generated pictures meet the facial senescence law, so that the pictures generated by the trained face processing model when aging or rejuvenating a face are more lifelike.
In addition, in this embodiment, the above training process may also be iterated in a loop to produce a face processing model with better performance, as shown in Figure 6. First, as in the training method disclosed in steps S201-S206, the initial face processing model and the sample database are used to train the initial first and second discrimination models, producing the trained first and second discrimination models; then the trained first and second discrimination models are used to train the initial face processing model, producing the trained face processing model. Second, the initial face processing model and the initial first and second discrimination models are updated: the trained face processing model obtained in the previous step is taken as the initial face processing model, the trained first discrimination model obtained in the previous step is taken as the initial first discrimination model, and the trained second discrimination model obtained in the previous step is taken as the initial second discrimination model; the previous step is then repeated to again produce a trained first discrimination model, a trained second discrimination model and a trained face processing model. Finally, the initial face processing model and the initial first and second discrimination models are continually updated in this way until the number of loop iterations reaches a certain requirement; the trained face processing model obtained in the last iteration is taken as the finally trained face processing model, which is used to process the face picture to be processed.
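The loop iteration of Figure 6 amounts to: each round produces trained models, which become the next round's initial models, until a required number of rounds is reached. The sketch below uses integers as placeholders for real parameter sets; `train_round` stands in for the whole of steps S201-S206 and is purely illustrative.

```python
# Sketch of the Figure 6 loop iteration over (generator, disc1, disc2),
# where "generator" stands for the face processing model and disc1/disc2
# for the first and second discrimination models.
def train_round(generator, disc1, disc2):
    # Stand-in for steps S201-S206: each "model" improves by one unit.
    return generator + 1, disc1 + 1, disc2 + 1

def iterate_training(generator, disc1, disc2, rounds):
    for _ in range(rounds):
        # Trained models become the next round's initial models.
        generator, disc1, disc2 = train_round(generator, disc1, disc2)
    return generator, disc1, disc2

final_gen, final_d1, final_d2 = iterate_training(0, 0, 0, rounds=5)
```

The stopping rule here is a fixed round count, matching the patent's "until the number of loop iterations reaches a certain requirement"; a convergence criterion would be an equally valid choice.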
In Embodiment Two of the present application, the face processing model used for aging and rejuvenating faces needs to be pre-trained: the parameters of the current face processing model are continually adjusted until the trained first discrimination model determines that the generated picture output by the current face processing model belongs to the sample database, and the trained second discrimination model determines that the face in that generated picture meets the facial senescence law. The present application can therefore obtain a face processing model whose aging and rejuvenation of faces is more natural and lifelike, which can improve the user experience.
It should be understood that the serial numbers of the steps in the above embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and does not constitute any limitation on the implementation process of the embodiments of the present application.
Embodiment Three
Embodiment Three of the present application provides a picture processing unit; for ease of description, only the parts relevant to the present application are shown. The picture processing unit 300 shown in Figure 7 includes:
a training data obtaining module 301, used to obtain a face picture and to obtain the age information to which the face in the face picture is to be converted;
a generated picture obtaining module 302, used to input the face picture and the age information into the initial face processing model to obtain a generated picture, the face processing model being used to convert the face in the face picture into a face corresponding to the age information;
a model training module 303, used to input the generated picture into the trained first discrimination model and judge whether the first element of the generated picture meets the first condition, to input the generated picture into the trained second discrimination model and judge whether the second element of the generated picture meets the second condition, and to continually adjust the parameters of the current face processing model until the first element and the second element of the generated picture output by the current face processing model respectively meet the first condition and the second condition;
a picture processing module 304, used to take the current face processing model as the trained face processing model and to process the face picture to be processed by the trained face processing model.
Optionally, the face picture is any sample face picture in the sample database, and the model training module 303 includes:
a first discrimination input unit, used to input the generated picture into the trained first discrimination model and judge whether the generated picture of the current face processing model belongs to the sample database;
a first model training unit, used to continually adjust the parameters of the current face processing model until the generated picture of the current face processing model belongs to the sample database;
a second discrimination input unit, used to input the generated picture of the current face processing model, after training against the trained first discrimination model, into the trained second discrimination model, and judge whether the face in the generated picture of the current face processing model meets the facial senescence law;
a second model training unit, used to continually adjust the parameters of the current face processing model until the face in the generated picture of the current face processing model meets the facial senescence law.
Optionally, the sample database contains sample face pictures of multiple different sample individuals in different age brackets, and the second discrimination input unit includes:
a training data obtaining subunit, used to obtain the training individual and the training age bracket, the training individual being the sample individual corresponding to the face picture in the sample database, and the training age bracket being the age bracket in the sample database corresponding to the age information;
a reference picture obtaining subunit, used to obtain a reference picture, the reference picture being a sample face picture, in the sample database, of the training individual in an age bracket adjacent to the training age bracket;
a second discrimination input subunit, used to input the generated picture of the current face processing model, after training against the trained first discrimination model, together with the reference picture into the trained second discrimination model, and judge whether the face in the generated picture meets the facial senescence law.
Optionally, the model training module 303 further includes:
a first discrimination training unit, used to train the initial first discrimination model with the initial face processing model and the sample database, so as to obtain the trained first discrimination model;
a second discrimination training unit, used to train the initial second discrimination model with the initial face processing model and the sample database, so as to obtain the trained second discrimination model.
Optionally, the picture processing module 304 includes:
a to-be-processed data obtaining unit, used to obtain the face picture to be processed and the age information to which the face in the face picture to be processed is to be converted;
a picture processing unit, used to input the face picture to be processed and the age information to which the face in the face picture to be processed is to be converted into the trained face processing model, and obtain the face picture output by the trained face processing model.
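The inference path of the picture processing unit reduces to a single call: the to-be-processed picture and the target age information go into the trained face processing model, which returns the age-converted picture. The sketch below is a toy stand-in, with hypothetical names, for that call.

```python
# Sketch of the picture processing unit's inference path.
def process_picture(trained_model, face_picture, target_age):
    """Feed the to-be-processed picture and target age information into
    the trained face processing model; return its output picture."""
    return trained_model(face_picture, target_age)

# Toy trained model: tags the picture with the requested age.
toy_trained_model = lambda pic, age: f"{pic}@age{age}"
out = process_picture(toy_trained_model, "input_face.png", 80)
```

The optional satisfaction check described next simply inspects `out` and, if the user is unsatisfied, feeds the trained model back in as a new initial model for further training.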
Optionally, the picture processing module 304 further includes:
a satisfaction judging unit, used to judge whether the face picture output by the trained face processing model meets the user's demand;
an updating unit, used to take the trained face processing model as the initial face processing model if the satisfaction judging unit judges that the face picture output by the trained face processing model does not meet the user's demand.
It should be noted that, since the information exchange and execution processes between the above apparatus/units are based on the same concept as the method embodiments of the present application, their specific functions and technical effects can be found in the method embodiment section and are not repeated here.
Embodiment Four
Fig. 8 is a schematic diagram of the terminal device provided by Embodiment Four of the present application. As shown in Fig. 8, the terminal device 4 of this embodiment includes a processor 40, a memory 41, and a computer program 42 stored in the memory 41 and runnable on the processor 40. When the processor 40 executes the computer program 42, the steps in the above method embodiments are realized, for example steps S101 to S104 shown in Fig. 1. Alternatively, when the processor 40 executes the computer program 42, the functions of the modules/units in the above apparatus embodiments are realized, for example the functions of modules 301 to 304 shown in Fig. 7.
Illustratively, the computer program 42 can be divided into one or more modules/units, which are stored in the memory 41 and executed by the processor 40 to complete the present application. The one or more modules/units can be a series of computer program instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the computer program 42 in the terminal device 4. For example, the computer program 42 can be divided into a training data obtaining module, a generated picture obtaining module, a model training module and a picture processing module, whose specific functions are as follows:
obtaining a face picture, and obtaining the age information to which the face in the face picture is to be converted;
inputting the face picture and the age information into the initial face processing model to obtain a generated picture, the face processing model being used to convert the face in the face picture into a face corresponding to the age information;
inputting the generated picture into the trained first discrimination model and judging whether the first element of the generated picture meets the first condition, inputting the generated picture into the trained second discrimination model and judging whether the second element of the generated picture meets the second condition, and continually adjusting the parameters of the current face processing model until the first element and the second element of the generated picture output by the current face processing model respectively meet the first condition and the second condition;
taking the current face processing model as the trained face processing model, and processing the face picture to be processed by the trained face processing model.
The terminal device 4 can be a computing device such as a smart phone, a tablet computer, a learning machine or an intelligent wearable device. The terminal device may include, but is not limited to, the processor 40 and the memory 41. Those skilled in the art will understand that Fig. 8 is only an example of the terminal device 4 and does not constitute a limitation on the terminal device 4; it may include more or fewer components than illustrated, combine certain components, or use different components. For example, the terminal device may also include input-output equipment, network access equipment, a bus, etc.
The processor 40 can be a central processing unit (Central Processing Unit, CPU), and can also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor can be a microprocessor, or the processor can be any conventional processor or the like.
The memory 41 can be an internal storage unit of the terminal device 4, such as a hard disk or memory of the terminal device 4. The memory 41 can also be an external storage device of the terminal device 4, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card or a flash card (Flash Card) equipped on the terminal device 4. Further, the memory 41 can also include both the internal storage unit and the external storage device of the terminal device 4. The memory 41 is used to store the computer program and the other programs and data required by the terminal device. The memory 41 can also be used to temporarily store data that has been output or will be output.
It is apparent to those skilled in the art that, for convenience and brevity of description, only the division of the above functional units and modules is illustrated by example. In practical applications, the above functions can be allocated to different functional units and modules as needed; that is, the internal structure of the above apparatus can be divided into different functional units or modules to complete all or part of the functions described above. The functional units in the embodiments can be integrated into one processing unit, or each unit can physically exist alone, or two or more units can be integrated into one unit; the integrated unit can be realized in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other and are not intended to limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference can be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, the description of each embodiment has its own emphasis; for parts that are not described in detail in a certain embodiment, reference can be made to the relevant descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be realized by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled professionals may use different methods to realize the described functions for each specific application, but such realization should not be considered to exceed the scope of the present application.
In the embodiments provided by the present application, it should be understood that the disclosed apparatus/terminal device and method can be realized in other ways. For example, the apparatus/terminal device embodiments described above are only illustrative; the division of the modules or units is only a logical function division, and there can be other division manners in actual implementation: multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed can be an indirect coupling or communication connection through some interfaces, apparatuses or units, and can be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they can be located in one place or distributed over multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application can be integrated into one processing unit, or each unit can physically exist alone, or two or more units can be integrated into one unit. The integrated unit can be realized in the form of hardware or in the form of a software functional unit.
If the above integrated module/unit is realized in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes by which the present application realizes the above embodiment methods can also be completed by instructing relevant hardware through a computer program. The computer program can be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the above method embodiments can be realized. The computer program includes computer program code, which can be in source code form, object code form, an executable file, certain intermediate forms, etc. The computer-readable medium may include any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash disk, a mobile hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunication signal, a software distribution medium, etc. It should be noted that the content included in the computer-readable medium can be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in certain jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments, or equivalently replace some of the technical features therein; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present application, and should all be included within the protection scope of the present application.
Claims (10)
1. An image processing method, characterized by comprising:
obtaining a face picture, and obtaining age information to which a face in the face picture is to be converted;
inputting the face picture and the age information into an initial face processing model to obtain a generated picture, the face processing model being used to convert the face in the face picture into a face corresponding to the age information;
inputting the generated picture into a trained first discrimination model and judging whether a first element of the generated picture meets a first condition, inputting the generated picture into a trained second discrimination model and judging whether a second element of the generated picture meets a second condition, and continually adjusting parameters of the current face processing model until the first element and the second element of the generated picture output by the current face processing model respectively meet the first condition and the second condition;
taking the current face processing model as a trained face processing model, and processing a face picture to be processed by the trained face processing model.
2. The picture processing method according to claim 1, wherein the face picture is any sample face picture in a sample database, and the steps of inputting the generated picture into the trained first discrimination model and determining whether the first element of the generated picture meets the first condition, inputting the generated picture into the trained second discrimination model and determining whether the second element of the generated picture meets the second condition, and continually adjusting the parameters of the current face processing model until the first element and the second element of the generated picture output by the current face processing model respectively meet the first condition and the second condition comprise:
inputting the generated picture into the trained first discrimination model, and determining whether the generated picture of the current face processing model belongs to the sample database;
continually adjusting the parameters of the current face processing model until the generated picture of the current face processing model belongs to the sample database;
inputting the generated picture of the current face processing model trained by the trained first discrimination model into the trained second discrimination model, and determining whether the face in the generated picture of the current face processing model conforms to a facial aging law; and
continually adjusting the parameters of the current face processing model until the face in the generated picture of the current face processing model conforms to the facial aging law.
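Claim 2 splits the adjustment into two successive criteria: first until the generated picture is judged to belong to the sample database, then until the face obeys the facial aging law. A sketch of that two-stage loop (the generator, both judges, and the adjustment rule are illustrative assumptions, not the patent's implementation):

```python
def two_stage_adjustment(params, generate, in_sample_db, follows_aging_law,
                         adjust, max_steps=100):
    """Stage 1: adjust until the generated picture belongs to the sample DB.
    Stage 2: adjust until the face in it conforms to the facial aging law."""
    for _ in range(max_steps):                    # stage 1: realism check
        if in_sample_db(generate(params)):
            break
        params = adjust(params)
    for _ in range(max_steps):                    # stage 2: aging-law check
        if follows_aging_law(generate(params)):
            break
        params = adjust(params)
    return params

# Toy usage: realism is reached at value 2, the aging law at value 4.
result = two_stage_adjustment(0, lambda p: p,
                              lambda g: g >= 2, lambda g: g >= 4,
                              lambda p: p + 1)
```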
3. The picture processing method according to claim 2, wherein the sample database contains sample face pictures of a plurality of different sample individuals at different age brackets, and the steps of inputting the generated picture of the current face processing model trained by the trained first discrimination model into the trained second discrimination model and determining whether the face in the generated picture of the current face processing model conforms to the facial aging law comprise:
obtaining a training individual and a target age bracket, wherein the training individual is the sample individual corresponding to the face picture in the sample database, and the target age bracket is the age bracket corresponding to the age information in the sample database;
obtaining a reference picture, wherein the reference picture is a sample face picture, in the sample database, of the training individual at an age bracket adjacent to the target age bracket; and
inputting the generated picture of the current face processing model trained by the trained first discrimination model, together with the reference picture, into the trained second discrimination model, and determining whether the face in the generated picture conforms to the facial aging law.
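The reference-picture lookup in claim 3 can be sketched as a dictionary query keyed by individual and age bracket. Representing age brackets as consecutive integers (so "adjacent" means one bracket up or down) is an assumption made here for illustration; the claim does not fix an encoding:

```python
def get_reference_picture(sample_db, training_individual, target_bracket):
    """Return the training individual's sample picture from an age bracket
    adjacent to the target bracket, or None if neither neighbor exists."""
    for neighbor in (target_bracket - 1, target_bracket + 1):
        picture = sample_db.get((training_individual, neighbor))
        if picture is not None:
            return picture
    return None

# Toy sample database: (individual, age bracket) -> sample face picture.
db = {("alice", 2): "alice_20s.png", ("alice", 4): "alice_40s.png"}
```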
4. The picture processing method according to claim 2, wherein before the step of inputting the generated picture into the trained first discrimination model and determining whether the generated picture of the current face processing model belongs to the sample database, the method further comprises:
training an initial first discrimination model using the initial face processing model and the sample database, to obtain the trained first discrimination model; and
before the step of inputting the generated picture of the current face processing model trained by the trained first discrimination model into the trained second discrimination model and determining whether the face in the generated picture of the current face processing model conforms to the facial aging law, the method further comprises:
training an initial second discrimination model using the initial face processing model and the sample database, to obtain the trained second discrimination model.
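Claim 4 only states that each discrimination model is trained from the initial face processing model and the sample database. One conventional (GAN-style) reading, assumed here rather than taken from the patent, is to label real database samples as positives and pictures produced by the initial model as negatives:

```python
def pretrain_discriminator(initial_model, sample_db, fit):
    """Build (picture, label) pairs from the sample database (label 1) and
    the initial model's outputs (label 0), then fit the discriminator."""
    examples = [(picture, 1) for picture in sample_db]
    examples += [(initial_model(picture), 0) for picture in sample_db]
    return fit(examples)

# Toy usage: the "model" tags its input, and "fit" just returns the pairs.
pairs = pretrain_discriminator(lambda p: p + "_generated",
                               ["a.png", "b.png"],
                               lambda ex: ex)
```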
5. The picture processing method according to any one of claims 1 to 4, wherein processing the to-be-processed face picture through the trained face processing model comprises:
obtaining a to-be-processed face picture, and obtaining age information to which a face in the to-be-processed face picture is to be converted; and
inputting the to-be-processed face picture and the age information to which the face in the to-be-processed face picture is to be converted into the trained face processing model, and obtaining a face picture output by the trained face processing model.
6. The picture processing method according to claim 5, wherein after obtaining the face picture output by the trained face processing model, the method further comprises:
determining whether the face picture output by the trained face processing model meets a user demand; and
if not, taking the trained face processing model as the initial face processing model, and returning to the step of obtaining a face picture and obtaining the age information to which the face in the face picture is to be converted, and the subsequent steps.
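Claim 6 describes a feedback loop: when the output fails the user's demand, the trained model becomes the new initial model and training is repeated. A sketch with hypothetical stand-ins for the model, the retraining step, and the user-demand check:

```python
def process_until_satisfied(model, picture, target_age, retrain, satisfied,
                            max_rounds=5):
    """Re-run processing, retraining from the current trained model each
    time the output picture fails to meet the user demand."""
    output = model(picture, target_age)
    for _ in range(max_rounds):
        if satisfied(output):
            break
        model = retrain(model)             # trained model -> new initial model
        output = model(picture, target_age)
    return output

# Toy usage: each retraining round raises an integer "quality level" by one,
# and the user is satisfied at level 2.
make_model = lambda level: (lambda pic, age: level)
final = process_until_satisfied(make_model(0), "face.png", 60,
                                lambda m: make_model(m("", 0) + 1),
                                lambda o: o >= 2)
```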
7. A picture processing apparatus, comprising:
a training data obtaining module, configured to obtain a face picture and obtain age information to which a face in the face picture is to be converted;
a generated picture obtaining module, configured to input the face picture and the age information into an initial face processing model to obtain a generated picture, wherein the face processing model is configured to convert the face in the face picture into a face corresponding to the age information;
a model training module, configured to input the generated picture into a trained first discrimination model and determine whether a first element of the generated picture meets a first condition, input the generated picture into a trained second discrimination model and determine whether a second element of the generated picture meets a second condition, and continually adjust parameters of the current face processing model until the first element and the second element of the generated picture output by the current face processing model respectively meet the first condition and the second condition; and
a picture processing module, configured to take the current face processing model as a trained face processing model and process a to-be-processed face picture through the trained face processing model.
8. The picture processing apparatus according to claim 7, wherein the model training module comprises:
a first discrimination input unit, configured to input the generated picture into the trained first discrimination model and determine whether the generated picture of the current face processing model belongs to the sample database;
a first model training unit, configured to continually adjust the parameters of the current face processing model until the generated picture of the current face processing model belongs to the sample database;
a second discrimination input unit, configured to input the generated picture of the current face processing model trained by the trained first discrimination model into the trained second discrimination model and determine whether the face in the generated picture of the current face processing model conforms to a facial aging law; and
a second model training unit, configured to continually adjust the parameters of the current face processing model until the face in the generated picture of the current face processing model conforms to the facial aging law.
9. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 6.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810746936.7A CN108985215B (en) | 2018-07-09 | 2018-07-09 | Picture processing method, picture processing device and terminal equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108985215A true CN108985215A (en) | 2018-12-11 |
CN108985215B CN108985215B (en) | 2020-05-22 |
Family
ID=64536465
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810746936.7A Active CN108985215B (en) | 2018-07-09 | 2018-07-09 | Picture processing method, picture processing device and terminal equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108985215B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105787974A (en) * | 2014-12-24 | 2016-07-20 | 中国科学院苏州纳米技术与纳米仿生研究所 | Establishment method for establishing bionic human facial aging model |
CN107169454A (en) * | 2017-05-16 | 2017-09-15 | 中国科学院深圳先进技术研究院 | A kind of facial image age estimation method, device and its terminal device |
CN107563283A (en) * | 2017-07-26 | 2018-01-09 | 百度在线网络技术(北京)有限公司 | Method, apparatus, equipment and the storage medium of generation attack sample |
CN108021905A (en) * | 2017-12-21 | 2018-05-11 | 广东欧珀移动通信有限公司 | image processing method, device, terminal device and storage medium |
- 2018-07-09 CN CN201810746936.7A patent/CN108985215B/en active Active
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110321802A (en) * | 2019-06-10 | 2019-10-11 | 深圳前海达闼云端智能科技有限公司 | Face image generation method and apparatus, storage device and electronic device |
CN110321802B (en) * | 2019-06-10 | 2021-10-01 | 达闼机器人有限公司 | Face image generation method and apparatus, storage device and electronic device |
CN111145080A (en) * | 2019-12-02 | 2020-05-12 | 北京达佳互联信息技术有限公司 | Training method of image generation model, image generation method and device |
CN111145080B (en) * | 2019-12-02 | 2023-06-23 | 北京达佳互联信息技术有限公司 | Training method of image generation model, image generation method and device |
CN111553838A (en) * | 2020-05-08 | 2020-08-18 | 深圳前海微众银行股份有限公司 | Model parameter updating method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN108985215B (en) | 2020-05-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109165249B (en) | Data processing model construction method and device, server and user side | |
CN110490213A (en) | Image-recognizing method, device and storage medium | |
CN108319592A (en) | A kind of method, apparatus and intelligent terminal of translation | |
CN110489582A (en) | Personalization shows the generation method and device, electronic equipment of image | |
CN108073851B (en) | Grabbing gesture recognition method and device and electronic equipment | |
CN109309878A (en) | The generation method and device of barrage | |
CN108985215A (en) | A kind of image processing method, picture processing unit and terminal device | |
CN105279494A (en) | Human-computer interaction system, method and equipment capable of regulating user emotion | |
CN109471626A (en) | Page logic structure, page generation method, page data processing method and device | |
CN110347876A (en) | Video classification methods, device, terminal device and computer readable storage medium | |
CN109214333A (en) | Convolutional neural networks structure, face character recognition methods, device and terminal device | |
CN108898082A (en) | Image processing method, picture processing unit and terminal device | |
CN109783749A (en) | A kind of Material for design intelligent recommendation method, apparatus and terminal device | |
CN109118447A (en) | A kind of image processing method, picture processing unit and terminal device | |
CN108961267A (en) | Image processing method, picture processing unit and terminal device | |
CN110266994A (en) | A kind of video call method, video conversation apparatus and terminal | |
CN107920162A (en) | Control method, mobile terminal and the computer-readable recording medium of alarm clock | |
CN111538852A (en) | Multimedia resource processing method, device, storage medium and equipment | |
CN107357782A (en) | One kind identification user's property method for distinguishing and terminal | |
CN109848052A (en) | A kind of method and terminal device of goods sorting | |
CN116229188B (en) | Image processing display method, classification model generation method and equipment thereof | |
CN113052246A (en) | Method and related device for training classification model and image classification | |
CN107071553A (en) | A kind of method, device and computer-readable recording medium for changing video speech | |
CN109376602A (en) | A kind of finger vein identification method, device and terminal device | |
CN108932704A (en) | Image processing method, picture processing unit and terminal device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||