CN108509846A - Image processing method, device, computer equipment and storage medium - Google Patents
- Publication number
- CN108509846A CN108509846A CN201810134026.3A CN201810134026A CN108509846A CN 108509846 A CN108509846 A CN 108509846A CN 201810134026 A CN201810134026 A CN 201810134026A CN 108509846 A CN108509846 A CN 108509846A
- Authority
- CN
- China
- Prior art keywords
- facial image
- characteristic point
- human face
- face characteristic
- mapping
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Image Processing (AREA)
Abstract
The present invention proposes an image processing method, apparatus, computer device and storage medium. The method includes: obtaining a first face image captured while a first user's face is in a non-makeup state and a second face image captured while it is in a makeup state; extracting first facial feature points from the first face image and second facial feature points from the second face image; deforming the first and second face images according to a preset set of reference facial feature points together with the first and second facial feature points, to obtain a first mapped face image and a second mapped face image; extracting the first user's makeup information from the two mapped face images; and applying the makeup information to a selected target face to obtain a made-up target face image. In this way, makeup material matching each user's own aesthetic can be provided to different users, meeting personalized needs and improving the relevance and realism of the makeup material.
Description
Technical field
The present invention relates to the field of image processing technology, and in particular to an image processing method, apparatus, computer device and storage medium.
Background technology
Currently, photo-beautification apps offer an ever richer set of functions: not only collage, face retouching and special effects, but in some apps a virtual-makeup function as well. For example, a typical virtual-makeup feature offers everyday face retouching, eye enlargement, face slimming, eye brightening, foundation, lip gloss, eye shadow, hair coloring and similar capabilities.
Existing virtual-makeup functions mostly work by compositing a cut-out of makeup color or material (as shown in Fig. 1) onto the face image to be made up. Because the makeup cut-outs supplied by beautification apps tend to be exaggerated and of low realism, the made-up face image differs markedly from the bare-faced image: it is obviously a beautified image, lacks a sense of reality, and may not match the user's expectation. Compositing can also suffer from makeup drift and from a size mismatch between the makeup cut-out and the face image.
Summary of the invention
The present invention aims to solve, at least to some extent, one of the technical problems in the related art.
To this end, a first object of the present invention is to propose an image processing method that obtains a first face image of a user without makeup and a second face image with makeup, extracts first facial feature points from the first image and second facial feature points from the second, deforms both images according to a preset set of reference facial feature points to obtain a first and a second mapped face image, and extracts the user's makeup information from the two mapped images. This makes it possible to provide makeup material matching each user's own aesthetic, meeting the user's personalized needs and improving the relevance and realism of the makeup material.
A second object of the present invention is to propose an image processing apparatus.
A third object of the present invention is to propose a computer device.
A fourth object of the present invention is to propose a non-transitory computer-readable storage medium.
A fifth object of the present invention is to propose a computer program product.
To achieve the above objects, an embodiment of the first aspect of the present invention proposes an image processing method, including:
obtaining a first face image and a second face image, where the first face image is captured while the first user's face is in a non-makeup state and the second face image is captured while the first user's face is in a makeup state;
extracting first facial feature points from the first face image and second facial feature points from the second face image;
deforming the first face image according to a preset set of reference facial feature points and the first facial feature points, generating a first mapped face image;
deforming the second face image according to the reference facial feature points and the second facial feature points, generating a second mapped face image;
extracting the first user's makeup information from the first mapped face image and the second mapped face image; and
selecting a target face and applying the first user's makeup information to it, obtaining the made-up target face image.
With the image processing method of this embodiment of the invention, a first face image captured while the first user's face is without makeup and a second face image captured while it is with makeup are obtained; first facial feature points are extracted from the first face image and second facial feature points from the second; the two images are deformed according to a preset set of reference facial feature points together with the first and second facial feature points, generating a first and a second mapped face image; the first user's makeup information is extracted from the two mapped images and applied to a selected target face, obtaining the made-up target face image. By extracting makeup information from an image of the user's face in a makeup state and offering it for selection, the makeup information becomes closer to reality, so makeup material matching each user's own aesthetic can be provided to different users, meeting personalized needs, improving the relevance and realism of the makeup material, and solving the prior-art problems of exaggerated, unrealistic material and made-up images that fail to meet user expectations.
To achieve the above objects, an embodiment of the second aspect of the present invention proposes an image processing apparatus, including:
an image acquisition module, configured to obtain a first face image and a second face image, where the first face image is captured while the first user's face is in a non-makeup state and the second face image is captured while the first user's face is in a makeup state;
a feature point extraction module, configured to extract first facial feature points from the first face image and second facial feature points from the second face image;
a mapping module, configured to deform the first face image according to a preset set of reference facial feature points and the first facial feature points to generate a first mapped face image, and to deform the second face image according to the reference facial feature points and the second facial feature points to generate a second mapped face image;
a makeup extraction module, configured to extract the first user's makeup information from the first and second mapped face images; and
a makeup application module, configured to select a target face and apply the first user's makeup information to it, obtaining the made-up target face image.
With the image processing apparatus of this embodiment of the invention, a first face image captured while the first user's face is without makeup and a second face image captured while it is with makeup are obtained; first facial feature points are extracted from the first face image and second facial feature points from the second; the two images are deformed according to a preset set of reference facial feature points together with the first and second facial feature points, generating a first and a second mapped face image; the first user's makeup information is extracted from the two mapped images and applied to a selected target face, obtaining the made-up target face image. By extracting makeup information from an image of the user's face in a makeup state and offering it for selection, the makeup information becomes closer to reality, so makeup material matching each user's own aesthetic can be provided to different users, meeting personalized needs, improving the relevance and realism of the makeup material, and solving the prior-art problems of exaggerated, unrealistic material and made-up images that fail to meet user expectations.
To achieve the above objects, an embodiment of the third aspect of the present invention proposes a computer device, including a processor and a memory, where the processor reads executable program code stored in the memory and runs the program corresponding to that code, so as to implement the image processing method described in the first-aspect embodiment.
To achieve the above objects, an embodiment of the fourth aspect of the present invention proposes a non-transitory computer-readable storage medium on which a computer program is stored, the program implementing the image processing method described in the first-aspect embodiment when executed by a processor.
To achieve the above objects, an embodiment of the fifth aspect of the present invention proposes a computer program product; when instructions in the computer program product are executed by a processor, the image processing method described in the first-aspect embodiment is implemented.
Additional aspects and advantages of the present invention will be set forth in part in the following description, and in part will become apparent from the description or be learned by practice of the invention.
Description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic diagram of an existing makeup material cut-out;
Fig. 2 is a flow diagram of a first image processing method provided by an embodiment of the present invention;
Fig. 3 is a flow diagram of a second image processing method provided by an embodiment of the present invention;
Fig. 4 is a flow diagram of a third image processing method provided by an embodiment of the present invention;
Fig. 5 is a flow diagram of a fourth image processing method provided by an embodiment of the present invention;
Fig. 6 is a flow diagram of a fifth image processing method provided by an embodiment of the present invention;
Fig. 7 is an example of a face-shape to makeup-information mapping table;
Fig. 8 is a schematic structural diagram of a device for implementing the image processing method of an embodiment of the present invention;
Fig. 9 is an example of a viewfinder with a portrait-outline guide image;
Fig. 10 is an example of facial feature points extracted by a face analyzer;
Fig. 11 is a schematic structural diagram of a first image processing apparatus provided by an embodiment of the present invention;
Fig. 12 is a schematic structural diagram of a second image processing apparatus provided by an embodiment of the present invention;
Fig. 13 is a schematic structural diagram of a third image processing apparatus provided by an embodiment of the present invention;
Fig. 14 is a schematic structural diagram of a fourth image processing apparatus provided by an embodiment of the present invention;
Fig. 15 is a schematic structural diagram of a fifth image processing apparatus provided by an embodiment of the present invention; and
Fig. 16 is a schematic structural diagram of a computer device provided by an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements, or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary and intended to explain the present invention; they are not to be construed as limiting the invention.
The image processing method, apparatus, computer device and storage medium of the embodiments of the present invention are described below with reference to the accompanying drawings.
Existing virtual-makeup functions mostly apply preset makeup material to the user's face image, which easily leads to a size mismatch between the material and the face in the image, and hence to a poor makeup effect. Most users wear light makeup in daily life, while preset makeup material tends to be exaggerated and unsuited to everyday needs, so the made-up face image usually fails to meet the user's expectation and lacks realism.
In view of these problems, an embodiment of the present invention provides an image processing method that extracts makeup information from the user's own makeup image and offers it as makeup material for the user. This provides material that matches the user's makeup habits and aesthetic, so that the made-up face image meets the user's expectation and the user experience is improved.
Fig. 2 is a flow diagram of a first image processing method provided by an embodiment of the present invention.
As shown in Fig. 2, the image processing method includes the following steps.
Step 101: obtain a first face image and a second face image, where the first face image is captured while the first user's face is in a non-makeup state and the second face image is captured while the first user's face is in a makeup state.
Most female users are in the habit of wearing makeup in daily life, and some, such as working women, often change their makeup to suit the occasion: everyday work makeup tends to be light, while makeup for a party is usually heavier.
Almost all current electronic devices have a camera, and camera features are rich (filters, face retouching, special effects, and so on), so users, female users in particular, increasingly enjoy taking photos anywhere and at any time. For example, a user who thinks today's makeup looks especially good may open the device's camera and take a selfie; a user on a trip may shoot a photo with a scenic spot as a souvenir. The user's electronic device may therefore store many photos: solo shots of the user, selfies, group photos with others, and so on. In this embodiment, different images can thus be obtained from the photo album of the user's electronic device.
Specifically, an image captured while the first user's face is in a non-makeup state can be obtained as the first face image, and an image captured while the first user's face is in a makeup state can be obtained as the second face image. The first user may be the owner of the electronic device, or a relative or friend of the owner.
It should be noted that one first face image may be obtained while several second face images are obtained, so as to capture the first user's face in different makeup states.
Step 102: extract first facial feature points from the first face image and second facial feature points from the second face image.
In this embodiment, an appropriate facial feature point extraction algorithm can be used to extract the first facial feature points from the first face image and the second facial feature points from the second face image. For example, an Active Shape Model (ASM) or an Active Appearance Model (AAM) may be used to extract the feature points. In general, 68, 83, or even more feature points are extracted; they outline the face contour and the key positions of the facial features, such as the eyes, nose and mouth.
Step 103: deform the first face image according to a preset set of reference facial feature points and the first facial feature points, generating a first mapped face image.
The reference facial feature points are obtained by calibrating feature points on a preset reference face image; the calibration method can use the same algorithm as the extraction of the first and second facial feature points, so that the deformation result is more accurate. The preset reference face image can be a standard model face image, used as the reference when triangulating and fitting the input face images.
In this embodiment, after the first facial feature points have been extracted, the first face image can be deformed according to the reference facial feature points and the first facial feature points to obtain the first mapped face image.
As one possible implementation, a triangulation algorithm can first be applied: the user's face is triangulated based on the first facial feature points, forming multiple triangular regions, and the reference face is triangulated based on the reference facial feature points in the same way, so that the triangular regions on the user's face correspond one-to-one to those on the reference face. Each triangular region formed by the first facial feature points is then mapped and deformed onto its corresponding region formed by the reference facial feature points; once every triangular region has been deformed, a deformed first face image is obtained and used as the first mapped face image. Taking the triangular region formed by the feature points on the nose as an example: if the nose region on the user's face is larger than the nose region on the reference face, the user's nose region is shrunk during deformation, so that the nose on the user's face becomes smaller under the constraint of the reference nose.
It should be noted that triangulation is a mature subdivision technique, for example the common Delaunay triangulation algorithm; to avoid redundancy, it is not elaborated here.
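The per-triangle mapping described above can be sketched as solving, for each pair of corresponding triangles, the affine transform that carries one onto the other. The coordinates below are made up for illustration (the user's triangle is exactly twice the reference one, mirroring the nose-shrinking example); a full warp would repeat this for every triangle and resample pixels accordingly.

```python
import numpy as np

# Sketch of the per-triangle deformation in step 103: for each pair of
# corresponding triangles, solve the 2x3 affine transform M such that
# dst = M @ [x, y, 1]^T for the three vertex pairs, then apply it to
# points inside the triangle. Coordinates are synthetic.

def affine_from_triangles(src, dst):
    """Solve the affine transform mapping triangle src onto triangle dst."""
    A = np.hstack([src, np.ones((3, 1))])    # rows are [x, y, 1]
    M = np.linalg.solve(A, dst).T            # 2x3 affine matrix
    return M

def apply_affine(M, pts):
    """Apply a 2x3 affine transform to an (n, 2) array of points."""
    pts = np.atleast_2d(pts)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M.T

# The user's "nose" triangle is larger than the reference one, so the
# transform shrinks it to satisfy the reference constraint.
user_tri = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 4.0]])
ref_tri  = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 2.0]])

M = affine_from_triangles(user_tri, ref_tri)
print(apply_affine(M, user_tri))       # vertices land exactly on ref_tri
print(apply_affine(M, [[2.0, 1.0]]))   # interior points move consistently
```

In practice the triangle pairs would come from a Delaunay triangulation of the feature points, and image pixels inside each source triangle would be resampled through the corresponding inverse transform.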
Step 104: deform the second face image according to the reference facial feature points and the second facial feature points, generating a second mapped face image.
Using the same method as for generating the first mapped face image, after the second facial feature points have been extracted from the second face image, the second face image can be deformed according to the reference facial feature points and the second facial feature points to obtain the second mapped face image.
Step 105: extract the first user's makeup information from the first mapped face image and the second mapped face image.
The first mapped face image is derived from an image captured while the first user's face was without makeup, and the second mapped face image from an image captured while it was with makeup. Therefore, in this embodiment, once the two mapped face images have been obtained, the first user's makeup information can be extracted from them. The makeup information may include the changes in color and in light reflection of the face image caused by the makeup.
As one possible implementation, the pixel values of corresponding pixels can be extracted from the first and second mapped face images, and the ratio of the value extracted from the second mapped face image to the value of the corresponding pixel extracted from the first mapped face image taken as the first user's makeup information.
As another possible implementation, the difference between the value extracted from the second mapped face image and the value of the corresponding pixel extracted from the first mapped face image can be taken as the first user's makeup information.
Step 106: select a target face and apply the first user's makeup information to it, obtaining the made-up target face image.
In this embodiment, once the first user's makeup information has been extracted, it can be used to make up an image. When applying makeup, the face to be made up is first determined as the target face; an image containing the target face is then shot with the camera, or selected from the gallery of the electronic device, the target face is extracted from the image, and the first user's makeup information is applied to it to obtain the made-up target face image.
Optionally, when several faces are extracted from the image, one of them can be selected as the target face, and the first user's makeup information then applied to it to obtain the made-up target face image.
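The application step can be sketched for the ratio-style makeup information: under the assumption (not spelled out in this step) that the target face has already been warped to the same reference layout, applying the makeup reduces to a per-pixel multiply. All values are synthetic.

```python
import numpy as np

# Sketch of step 106 with synthetic data: apply ratio-style makeup
# parameters to a target face image that is assumed to be already mapped
# to the reference layout, then clip to the valid 8-bit pixel range.

target = np.array([[ 90.0, 130.0],
                   [ 60.0, 250.0]])          # mapped target face image
makeup = np.array([[ 1.1,  1.25],
                   [ 0.5,  1.0 ]])           # ratio-style makeup parameters

made_up = np.clip(target * makeup, 0, 255)   # made-up target face image
print(made_up)
```

For difference-style parameters the multiply would become an add; in either case clipping keeps bright regions from overflowing the pixel range after the makeup is applied.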
With the image processing method of this embodiment, a first face image captured while the first user's face is without makeup and a second face image captured while it is with makeup are obtained; first facial feature points are extracted from the first face image and second facial feature points from the second; the two images are deformed according to a preset set of reference facial feature points together with the first and second facial feature points, generating a first and a second mapped face image; the first user's makeup information is extracted from the two mapped images and applied to a selected target face, obtaining the made-up target face image. By extracting makeup information from an image of the user's face in a makeup state and offering it for selection, the makeup information becomes closer to reality, so makeup material matching each user's own aesthetic can be provided to different users, meeting personalized needs, improving the relevance and realism of the makeup material, and solving the prior-art problems of exaggerated, unrealistic material and made-up images that fail to meet user expectations.
To clearly describe how the first user's makeup information is obtained from the first and second mapped face images in the foregoing embodiment, an embodiment of the present invention proposes another image processing method. Fig. 3 is a flow diagram of a second image processing method provided by an embodiment of the present invention.
As shown in Fig. 3, on the basis of the embodiment shown in Fig. 2, step 105 may include the following steps.
Step 201: extract the first pixel values of the pixels of the first mapped face image.
Step 202: extract the second pixel values of the pixels of the second mapped face image.
In this embodiment, after the first and second mapped face images have been obtained, the first pixel value of each pixel can be extracted from the first mapped face image, and the second pixel value of each pixel from the second mapped face image.
Step 203: for each pair of corresponding pixels, take the ratio of the second pixel value to the first pixel value as the makeup parameter of that pixel.
The first and second mapped face images are obtained by deforming the first and second face images according to the same reference facial feature points, so their pixels correspond one-to-one. Thus, in this embodiment, for each pair of corresponding pixels in the two mapped images, the ratio of the second pixel value to the first pixel value can be computed and used as the makeup parameter of that pixel.
Assuming the first mapped face image is A and the second mapped face image is A', the makeup parameter of each pair of corresponding pixels can be expressed by formula (1):

c_p = a'_p / a_p (1)

where a'_p denotes the pixel value of pixel p in the second mapped face image A' (the second pixel value), a_p denotes the pixel value of pixel p in the first mapped face image A (the first pixel value), and c_p denotes the makeup parameter at pixel p.
Step 204: use the dressing parameters of all corresponding pixels to constitute the dressing information of the first user.
In this embodiment, after the dressing parameter of each pair of corresponding pixels in the first mapping facial image and the second mapping facial image has been determined, the dressing parameters of all pixels may be used to constitute the dressing information of the first user.
Specifically, the set of the dressing parameters of all corresponding pixels may be taken as the dressing information of the first user. Assuming the dressing information of the first user is denoted by a set C, the elements of C are the dressing parameters c_p at the pixels p.
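The per-pixel extraction of formula (1) can be sketched as follows. This is an illustrative sketch only: the tiny single-channel NumPy arrays stand in for the two mapping facial images (real images would be HxWx3 arrays), and the guard against a zero denominator is an added assumption not stated in the method.

```python
import numpy as np

# A: first mapping facial image (non-makeup), A_prime: second (with makeup).
A       = np.array([[100., 200.], [50., 150.]])
A_prime = np.array([[120., 180.], [60., 150.]])

# Dressing parameters c_p = a'_p / a_p for every corresponding pixel;
# np.maximum guards against division by zero at fully black pixels.
C = A_prime / np.maximum(A, 1.0)
print(C)  # the set C of dressing parameters, one per pixel
```

How truly black pixels should be handled is not specified by the method; the guard above is one simple choice.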
In the image processing method of this embodiment, the first pixel values and second pixel values of corresponding pixels are extracted from the first mapping facial image and the second mapping facial image; for each pair of corresponding pixels, the ratio of the second pixel value to the first pixel value is taken as the dressing parameter of that pixel; and the dressing parameters of all corresponding pixels constitute the dressing information of the first user. A dressing material for the first user can thus be obtained, so that after the user applies makeup to a facial image with this dressing material, a dressing image that meets expectations can be obtained, improving the user experience.
Fig. 4 is a flow diagram of the third image processing method provided by an embodiment of the present invention.
As shown in Fig. 4, the image processing method may include the following steps:
Step 301: obtain a first facial image and a second facial image, where the first facial image is an image acquired when the face of the first user is in a non-makeup state, and the second facial image is an image acquired when the face of the first user is in a makeup state.
Step 302: extract the first human face characteristic points of the first facial image and the second human face characteristic points of the second facial image, respectively.
It should be noted that, for the description of steps 301-302 in this embodiment, reference may be made to the description of steps 101-102 in the previous embodiment; to avoid repetition, details are not described here again.
Step 303: perform face recognition on the first facial image to obtain the first face shape of the first user.
Specifically, a relevant face recognition technique may be used to perform face recognition on the first facial image: a face is identified in the first facial image, the contour data of the face is determined, and the first face shape of the first user is then determined according to the contour data of the face.
As an example, a correspondence between face contour data and face shapes may be preset and stored in the electronic device; after the contour data of the face of the first user is determined, the first face shape of the first user is determined by querying this correspondence.
Step 304: from multiple preset candidate reference facial images, choose a candidate reference facial image similar to the first face shape as the reference facial image.
The preset candidate reference facial images may be standard facial images prepared for different face shapes; for example, facial images of different face shapes such as an oval face, an egg-shaped face, a round face and a square face may be prestored as the candidate reference facial images.
In this embodiment, after the first face shape of the first user has been determined, a candidate reference facial image similar to the first face shape may accordingly be chosen from the multiple preset candidate reference facial images as the reference facial image.
As one possible implementation, for each candidate reference facial image, a relevant face recognition technique may be used to perform face recognition on the candidate reference facial image to obtain its second face shape; each second face shape is then compared with the first face shape, and the candidate reference facial image whose second face shape is consistent with the first face shape is taken as the reference facial image.
As another possible implementation, a correspondence between each candidate reference facial image and its matching second face shape may be prestored; the correspondence is then queried according to the first face shape, and the candidate reference facial image corresponding to the second face shape consistent with the first face shape is determined as the reference facial image.
Step 305: perform characteristic point calibration on the preset reference facial image to obtain the reference human face characteristic points.
Specifically, a relevant feature point extraction algorithm (such as the ASM algorithm or the AAM algorithm) may be used to calibrate characteristic points on the reference facial image, obtaining the reference human face characteristic points.
It should be noted here that steps 303-305 may be executed at any time after the first facial image is obtained and before the mapping facial images are generated; executing steps 303-305 after step 302 in this embodiment merely serves to explain the present invention and should not be taken as a limitation of the present invention.
Step 306: based on triangulation, under the constraint of the reference human face characteristic points and the first human face characteristic points, perform face texture mapping on the first facial image and the reference facial image to obtain the first mapping facial image.
Here, the reference facial image is the facial image corresponding to the reference human face characteristic points.
For example, the Delaunay triangulation algorithm may first be used to connect the first human face characteristic points, and likewise the reference human face characteristic points, into triangles; the principle of the triangulation is that the circumscribed circle of any triangle contains no other vertex.
The steps of the Delaunay triangulation algorithm are:
(1) construct an initial triangle that contains all the human face characteristic points, and insert the constructed triangle into a triangle linked list;
(2) insert the human face characteristic points one by one: in the current triangle linked list, find all triangles whose circumscribed circles contain the newly inserted human face characteristic point; these triangles are called the influenced triangles;
(3) delete the common edges of the influenced triangles, connect all vertices of the influenced triangles with the newly inserted human face characteristic point to form new triangles, and insert the newly formed triangles into the triangle linked list;
(4) repeat steps (2) and (3) until all human face characteristic points have been inserted.
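The empty-circumcircle property that the steps above rely on can be illustrated with a brute-force sketch: instead of the incremental insertion described, every candidate triangle is simply tested against the property that its circumscribed circle contains no other input point. This is an illustrative stand-in only (far too slow for real landmark sets), not the algorithm as described; the toy point set is assumed.

```python
from itertools import combinations

def circumcircle(a, b, c):
    # Center and squared radius of the circle through a, b, c; None if collinear.
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), (ax - ux)**2 + (ay - uy)**2

def delaunay(points):
    # Keep every triangle whose circumcircle contains no other point
    # (the empty-circumcircle property of a Delaunay triangulation).
    tris = []
    for i, j, k in combinations(range(len(points)), 3):
        cc = circumcircle(points[i], points[j], points[k])
        if cc is None:
            continue
        (ux, uy), r2 = cc
        if all((px - ux)**2 + (py - uy)**2 >= r2 - 1e-9
               for m, (px, py) in enumerate(points) if m not in (i, j, k)):
            tris.append((i, j, k))
    return tris

# Four corner "landmarks" plus one interior point: the result is the
# four-triangle fan around the interior point.
pts = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0), (1.0, 0.9)]
print(delaunay(pts))
```

Production code would use the incremental algorithm above or an existing library routine rather than this O(n^4) check.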
After the first human face characteristic points and the reference human face characteristic points are each triangulated with the Delaunay triangulation algorithm, both point sets are divided into a series of triangles, and the triangles of the first human face characteristic points correspond one-to-one with those of the reference human face characteristic points. Under the constraint of the reference human face characteristic points and the first human face characteristic points, a relevant face texture mapping method may then be used to perform face texture mapping on the first facial image and the reference facial image, so that the first facial image is deformed and mapped onto the reference facial image, obtaining the first mapping facial image.
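The per-triangle deformation underlying such a texture mapping can be sketched as an affine transform estimated from each pair of corresponding triangles; the function name and toy coordinates below are illustrative assumptions, not part of the described method.

```python
import numpy as np

def affine_from_triangles(src, dst):
    # Solve for the 3x2 matrix M with [x, y, 1] @ M = [x', y'] at all
    # three vertices; src and dst are 3x2 arrays of triangle vertices.
    A = np.hstack([src, np.ones((3, 1))])
    return np.linalg.solve(A, dst)

# A triangle of the first human face characteristic points (src) and its
# corresponding triangle of the reference human face characteristic points (dst).
src = np.array([[0., 0.], [1., 0.], [0., 1.]])
dst = np.array([[2., 1.], [4., 1.], [2., 3.]])
M = affine_from_triangles(src, dst)

# Any point inside the source triangle is carried along by the same map.
p = np.array([0.5, 0.5, 1.0]) @ M
print(p)  # -> [3. 2.]
```

In a full mapper, every pixel inside each source triangle is moved by that triangle's affine map; image libraries provide equivalent warping routines.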
Step 307: based on triangulation, under the constraint of the reference human face characteristic points and the second human face characteristic points, perform face texture mapping on the second facial image and the reference facial image to obtain the second mapping facial image.
In this embodiment, the second mapping facial image may be obtained with the same method used to obtain the first mapping facial image; the realization principle is similar and is not described here again.
Step 308: extract the dressing information of the first user according to the first mapping facial image and the second mapping facial image.
Step 309: select a target face, apply the dressing information of the first user to the target face, and obtain the target facial image after makeup.
It should be noted that, for the description of steps 308-309 in this embodiment, reference may be made to the description of steps 105-106 in the previous embodiment; to avoid repetition, details are not described here again.
In the image processing method of this embodiment, face recognition is performed on the acquired first facial image to obtain the first face shape of the first user, and, according to the first face shape, a candidate reference facial image similar to the first face shape is selected from the multiple preset candidate reference facial images as the reference facial image. A facial image similar to the user's face shape can thus be obtained as the reference facial image, laying a foundation for mapping the first facial image onto the reference facial image and reducing the mapping difficulty. By calibrating characteristic points on the reference facial image to obtain the reference human face characteristic points, and performing triangulation under the constraint of the reference human face characteristic points together with the first and second human face characteristic points, the first mapping facial image and the second mapping facial image are respectively obtained, and the dressing information of the first user is then extracted from the first mapping facial image and the second mapping facial image. A dressing material that matches the user's makeup habits can thus be obtained, which is well targeted and more realistic.
In the embodiments of the present invention, after the dressing information of the first user has been extracted, the dressing information may be used as a dressing material to apply makeup to a facial image to be made up, so that the facial image after makeup meets the user's expectations and the realism of the facial image after makeup is improved. In order to clearly describe the specific implementation process, in the previous embodiments, of selecting a target face, applying the dressing information of the first user to the target face, and obtaining the target facial image after makeup, the embodiments of the present invention propose another image processing method. Fig. 5 is a flow diagram of the fourth image processing method provided by an embodiment of the present invention.
As shown in Fig. 5, on the basis of the previous embodiments, selecting a target face, applying the dressing information of the first user to the target face, and obtaining the target facial image after makeup may include the following steps:
Step 401: obtain a third facial image of the target face to be made up.
When a user wants a makeup image of a certain face, that face may be taken as the target face, and a facial image to be made up that contains the target face may be selected from the photo album of the electronic device as the third facial image. The third facial image is any photo stored in the album that contains the target face; it may be a facial image of the first user or of another person, that is, the target face may be the user's own face or another person's face. In order to ensure the makeup effect and improve the fidelity of the dressing migration, preferably the third facial image is a user-selected facial image containing the first user's face.
When a user wants to apply makeup to his or her own photo, besides selecting a photo of himself or herself from the album, the user may also open the self-timer function of the camera in the electronic device and capture, by shooting, a third facial image containing his or her own face to be made up. Preferably, in order to obtain a better makeup effect, a shooting guide outline may be displayed to the user in the shooting interface of the camera during shooting; the guide outline guides the user to pose correctly and reminds the user to position the face within the guide outline as much as possible, so that a frontal face photo of the user is obtained as the third facial image.
Step 402: extract the third human face characteristic points of the third facial image.
For example, the ASM algorithm, the AAM algorithm or the like may be used to calibrate characteristic points on the third facial image, thereby extracting the third human face characteristic points.
Step 403: deform the third facial image according to the reference human face characteristic points and the third human face characteristic points to generate the third mapping facial image.
In this embodiment, after the third human face characteristic points have been extracted, the same method described in the previous embodiments for obtaining the first mapping facial image may be used to deform the third facial image according to the reference human face characteristic points and the third human face characteristic points, generating the third mapping facial image; to avoid repetition, the detailed process of obtaining the third mapping facial image is not described again here.
Step 404: use the dressing information to perform makeup processing on the third mapping facial image, obtaining a fourth facial image.
As one possible implementation, when the dressing parameters constituting the dressing information are the ratios of the second pixel values to the first pixel values, makeup processing of the third mapping facial image with the dressing information may proceed as follows: first obtain the third pixel value of each pixel of the third mapping facial image; for each pixel, multiply the third pixel value by the dressing parameter of the corresponding pixel in the dressing information to obtain the fourth pixel value of the pixel; and then use the fourth pixel values of all pixels to generate the fourth facial image.
Assuming the third mapping facial image is B and the fourth facial image is B', the fourth pixel value at a pixel p of B' can be calculated with formula (2):

b'_p = c_p * b_p (2)

where b_p denotes the pixel value at pixel p in the third mapping facial image B, i.e. the third pixel value; c_p denotes the dressing parameter at pixel p in the dressing information C; and b'_p denotes the pixel value at pixel p in the fourth facial image, i.e. the fourth pixel value. The set of the values b'_p constitutes the fourth facial image B'. It should be appreciated that B' is the with-makeup image of the third facial image under the reference facial image.
As another possible implementation, when the dressing parameters constituting the dressing information are the differences between the second pixel values and the first pixel values, makeup processing of the third mapping facial image with the dressing information may proceed as follows: first obtain the third pixel value of each pixel of the third mapping facial image; for each pixel, add the dressing parameter of the corresponding pixel in the dressing information to the third pixel value to obtain the fourth pixel value of the pixel; and then use the fourth pixel values of all pixels to generate the fourth facial image.
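Both makeup-processing variants of step 404 can be sketched per pixel as follows. The arrays are toy single-channel stand-ins for the dressing information C and the third mapping facial image B, and the clipping to the valid pixel range is an added assumption not stated in the method.

```python
import numpy as np

C = np.array([[1.2, 0.9], [1.2, 1.0]])     # dressing parameters stored as ratios
B = np.array([[110., 190.], [40., 160.]])  # third mapping facial image B

# Ratio variant, formula (2): b'_p = c_p * b_p, clipped to [0, 255].
B_prime = np.clip(C * B, 0, 255)
print(B_prime)

# Difference variant: parameters are second minus first pixel values,
# applied by addition instead of multiplication.
C_diff = np.array([[20., -20.], [10., 0.]])
B_add = np.clip(B + C_diff, 0, 255)
print(B_add)
```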
Step 405: extract the fourth human face characteristic points of the fourth facial image.
In this embodiment, the ASM algorithm, the AAM algorithm or the like may be used to calibrate characteristic points on the fourth facial image, thereby extracting the fourth human face characteristic points.
Step 406: perform a face restoring operation on the fourth facial image according to the third human face characteristic points and the fourth human face characteristic points, obtaining the target facial image after makeup.
As one possible implementation, for each pair of corresponding subdivision triangles formed by the human face characteristic points, the difference information between the subdivision triangle of the third human face characteristic points and the corresponding subdivision triangle of the fourth human face characteristic points may be obtained; the difference information is then used to adjust the shape of the subdivision triangles of the fourth human face characteristic points, generating the target facial image. The target facial image is the facial image restored from the reference facial image; it can also be understood that the characteristic points of the target facial image are consistent with those of the third facial image, i.e. the target facial image is the third facial image after makeup.
As another possible implementation, while the third mapping facial image is generated, the deformation data of each subdivision triangle of the third human face characteristic points relative to the corresponding subdivision triangle of the reference human face characteristic points may be recorded. Then, when the face restoring operation is performed on the fourth facial image according to the third and fourth human face characteristic points to obtain the target facial image after makeup, the corresponding subdivision triangles of the fourth human face characteristic points may be inversely deformed according to the recorded deformation data of each subdivision triangle, generating the target facial image. For example, if the deformation data of a subdivision triangle of the third human face characteristic points relative to the corresponding subdivision triangle of the reference human face characteristic points is a reduction by a certain size, the corresponding subdivision triangle of the fourth human face characteristic points may be enlarged by the same size.
By recording the deformation data of the subdivision triangles of the third human face characteristic points relative to the corresponding subdivision triangles of the reference human face characteristic points, and inversely transforming the corresponding subdivision triangles of the fourth human face characteristic points according to this deformation data, the processing load of the restoring operation on the fourth facial image can be reduced and the restoring efficiency improved.
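Recording the deformation data and applying its inverse can be sketched with a single affine map per subdivision triangle, here in homogeneous row-vector form; the concrete matrix and vertex coordinates are illustrative assumptions.

```python
import numpy as np

# Forward deformation recorded for one subdivision triangle during the
# normalization mapping: scale by 0.5 and translate by (1, 2), as a
# homogeneous map applied to row vectors [x, y, 1].
M = np.array([[0.5, 0.0, 0.0],
              [0.0, 0.5, 0.0],
              [1.0, 2.0, 1.0]])
M_inv = np.linalg.inv(M)  # the inverse deformation used for restoring

# Vertices of the triangle after makeup, still in normalized coordinates.
warped = np.array([[2., 2., 1.],
                   [4., 2., 1.],
                   [2., 6., 1.]])

# Restoring operation: apply the inverse map to return to the original
# face geometry of the third facial image.
restored = warped @ M_inv
print(restored[:, :2])  # -> [[2. 0.] [6. 0.] [2. 8.]]
```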
In the image processing method of this embodiment, the third human face characteristic points of the third facial image to be made up are extracted; the third facial image is deformed according to the reference human face characteristic points and the third human face characteristic points to generate the third mapping facial image; and makeup processing is performed on the third mapping facial image with the dressing information to obtain the fourth facial image, so that the image to be made up takes on the dressing desired by the user and the accuracy of the dressing migration is improved. By extracting the fourth human face characteristic points of the fourth facial image and performing the face restoring operation on the fourth facial image according to the third and fourth human face characteristic points, the target facial image after makeup is obtained; the makeup operation on the image to be made up can thus be realized while improving the fidelity and realism of the dressing.
The electronic device of a user may store not only photos of the user but also photos of the user's friends and relatives, so that multiple pieces of dressing information for multiple persons, such as the user himself or herself and the user's friends and relatives, can be obtained from the photos stored in the electronic device. Since each person's face shape may be different, in one possible implementation of the embodiments of the present invention, before the third facial image of the target face to be made up is obtained, mapping relations between dressing information and the face shapes corresponding to the first facial images may first be established; when makeup is applied to a target face, the dressing information matching the face shape of the target face is obtained and applied to the target face, improving the fidelity of the dressing information. Thus, on the basis of the previous embodiments, as shown in Fig. 6, selecting a target face, applying the dressing information of the first user to the target face, and obtaining the target facial image after makeup may further include the following steps:
Step 501: establish the mapping relation between the dressing information and the face shape corresponding to the first facial image.
For the different pieces of dressing information extracted from the first and second facial images of different first users, the face shape may be extracted from each first facial image, the mapping relation between the dressing information and the face shape corresponding to the first facial image established, and the mapping relations stored in a face shape-dressing information mapping table, so that the corresponding dressing information can subsequently be queried by face shape.
Fig. 7 is an example of the face shape-dressing information mapping table. As shown in Fig. 7, each face shape corresponds to different dressing information: face shape A corresponds to dressing information 1, face shape B corresponds to dressing information 2, and so on. Once the face shape is known, the corresponding dressing information can be uniquely determined by querying the mapping table.
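The query of such a mapping table amounts to a simple lookup; the keys and values below are illustrative placeholders mirroring the example of Fig. 7, not data from the method itself.

```python
# Hypothetical face shape -> dressing information mapping table (cf. Fig. 7).
mapping_table = {
    "face shape A": "dressing information 1",
    "face shape B": "dressing information 2",
}

def query_dressing(face_shape):
    # Once the face shape is known, the dressing information is uniquely
    # determined by a table lookup; returns None for an unknown shape.
    return mapping_table.get(face_shape)

print(query_dressing("face shape B"))  # -> dressing information 2
```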
Step 502: obtain the third facial image of the target face to be made up.
Step 503: extract the third human face characteristic points of the third facial image.
Step 504: deform the third facial image according to the reference human face characteristic points and the third human face characteristic points to generate the third mapping facial image.
It should be noted that, for the description of steps 502-504 in this embodiment, reference may be made to the description of steps 401-403 in the previous embodiment; details are not described here again.
Step 505: perform face recognition on the third facial image to obtain the third face shape of the third facial image.
Step 506: query the mapping relations according to the third face shape to obtain the dressing information matching the third face shape.
For example, a relevant face recognition technique may be used to recognize the third facial image: a face is identified in the third facial image, the contour data of the face is determined, and the third face shape of the face contained in the third facial image is then determined according to the contour data. Then, according to the third face shape, the dressing information matching the third face shape is determined by querying the mapping relations between face shapes and dressing information.
For example, with the mapping relations between face shapes and dressing information shown in Fig. 7, if the third face shape is determined to be face shape B, then by querying the face shape-dressing information mapping table of Fig. 7 it can be determined that the dressing information corresponding to the third face shape is dressing information 2.
Step 507: use the dressing information to perform makeup processing on the third mapping facial image, obtaining the fourth facial image.
In this embodiment, after the dressing information matching the third face shape has been determined, makeup processing is performed on the third mapping facial image with the determined dressing information, obtaining the fourth facial image with makeup.
Step 508: extract the fourth human face characteristic points of the fourth facial image.
Step 509: perform the face restoring operation on the fourth facial image according to the third human face characteristic points and the fourth human face characteristic points, obtaining the target facial image after makeup.
It should be noted that, for the description of steps 508-509 in this embodiment, reference may be made to the description of steps 405-406 in the previous embodiment; details are not described here again.
In the image processing method of this embodiment, the mapping relations between dressing information and the face shapes corresponding to the first facial images are established; when applying makeup to the third facial image to be made up, the face shape corresponding to the third facial image is first determined, the mapping relations are then queried by face shape to determine the matching dressing information, and makeup processing is performed on the third mapping facial image with that dressing information to obtain the fourth facial image. The makeup effect of the fourth facial image can thus be ensured, and the fidelity of the dressing information and the validity of the facial image after makeup are improved.
Fig. 8 is a schematic structural diagram of a device for realizing the image processing method of the embodiments of the present invention. As shown in Fig. 8, the device consists of a view finder, a human face analyzer, a face normalization mapper, a face dressing extractor and a face makeup device. A portrait profile guide image may be provided in the view finder. It can be understood that the more standardized the obtained facial image is, the more accurate the dressing information extracted from it, the more detailed information the dressing information carries, and the better the effect of transferring the dressing information to the facial image to be made up; the portrait profile guide image in the view finder can guide the user to pose correctly so as to obtain a frontal face photo, laying a foundation for extracting accurate dressing information. The view finder with the portrait profile guide image is shown in Fig. 9: it can guide the user to place the frontal face within the portrait profile guide image as much as possible when taking a photo, so as to obtain a standardized frontal face photo.
The human face analyzer is used to calibrate the human face characteristic points of a facial image. The user facial images currently obtained by the view finder and/or obtained before (including non-makeup facial images and with-makeup facial images) are input into the human face analyzer, and the human face characteristic points of the input user facial images are obtained; as shown in Fig. 10, for example, the human face characteristic points delineate the key positions of the face and the facial features.
The face normalization mapper is used to map user facial images. The user facial image, the human face characteristic points of the user facial image, the standard facial image and the human face characteristic points of the standard facial image are input into the face normalization mapper; the face normalization mapper performs triangulation, mapping and fitting on the user facial image and the standard facial image according to their human face characteristic points, so that the user facial image is deformed and mapped onto the standard face model, obtaining a standardized user facial image.
The face dressing extractor is used to extract the dressing image. The standardized user facial images (including the standardized non-makeup facial image and the standardized with-makeup facial image) are input into the face dressing extractor, and the face dressing extractor extracts the dressing image according to the standardized non-makeup user facial image and the standardized with-makeup user facial image.
The face makeup device uses the dressing image to apply makeup to the facial image to be made up. The facial image to be made up may be the facial image currently obtained by the view finder, or a facial image obtained and stored before by the view finder. When the makeup operation is performed on the facial image to be made up, the facial image to be made up is first input into the human face analyzer to obtain its human face characteristic points; the facial image to be made up and its human face characteristic points are then input into the face normalization mapper to obtain the standardized facial image to be made up; the standardized facial image to be made up and the obtained dressing image are input into the face makeup device to obtain the standardized with-makeup image after makeup of the facial image to be made up; and an inverse restoring transform is performed on the obtained with-makeup image to obtain and output the target with-makeup image matching the facial image to be made up.
Further, the human face analyzer can not only extract the human face characteristic points of a face but also recognize the user's face shape, and send the relevant face shape information to the face dressing extractor; the extracted dressing information and the face shape are put into correspondence and stored in the mapping table of face shapes and dressing information. The mapping table may be stored in the face dressing extractor so that it can be queried during subsequent use.
Applying makeup to a facial image to be made up with the above device can obtain a with-makeup image that meets the user's expectations, improving the dressing migration effect and the validity of the with-makeup image, and enhancing the user experience.
In order to realize the above embodiments, the present invention also proposes an image processing apparatus.
Fig. 11 is a schematic structural diagram of the first image processing apparatus provided by an embodiment of the present invention.
As shown in Fig. 11, the image processing apparatus includes an image collection module 910, a feature point extraction module 920, a mapping module 930, a dressing extraction module 940 and a makeup module 950. Among them:
The image collection module 910 is used to obtain the first facial image and the second facial image, where the first facial image is an image acquired when the face of the first user is in a non-makeup state, and the second facial image is an image acquired when the face of the first user is in a makeup state.
The feature point extraction module 920 is used to extract the first human face characteristic points of the first facial image and the second human face characteristic points of the second facial image, respectively.
The mapping module 930 is used to deform the first facial image according to the preset reference human face characteristic points and the first human face characteristic points to generate the first mapping facial image, and to deform the second facial image according to the reference human face characteristic points and the second human face characteristic points to generate the second mapping facial image.
The dressing extraction module 940 is used to extract the dressing information of the first user according to the first mapping facial image and the second mapping facial image.
The makeup module 950 is used to select a target face, apply the dressing information of the first user to the target face, and obtain the target facial image after makeup.
Further, in a kind of possible realization method of the embodiment of the present invention, as shown in figure 12, as shown in figure 11 real
On the basis of applying example, dressing extraction module 940 includes:
A pixel value extraction unit 941, configured to extract the first pixel values of the pixels of the first mapped facial image, and to extract the second pixel values of the pixels of the second mapped facial image.
A dressing extraction unit 942, configured to, for each pair of corresponding pixels, take the ratio of the second pixel value to the first pixel value as the dressing parameter of that pixel, and to assemble the dressing parameters of all corresponding pixels into the dressing information of the first user.
By extracting the first and second pixel values of corresponding pixels from the first and second mapped facial images, taking the ratio of the second pixel value to the first pixel value for each pair of corresponding pixels as that pixel's dressing parameter, and assembling the dressing parameters of all corresponding pixels into the dressing information of the first user, a dressing material specific to the first user is obtained. Applying this dressing material to a facial image therefore yields a dressing image that matches the user's expectations, improving the user experience.
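The per-pixel ratio computation described above can be sketched as follows. This is a minimal illustration under our own assumptions (float images in [0, 1], already aligned on the same reference face; the function name and the `eps` guard are ours), not the patented implementation:

```python
import numpy as np

def extract_dressing_params(bare, madeup, eps=1e-6):
    """Per-pixel dressing parameters: ratio of the made-up (second) pixel
    value to the bare-face (first) pixel value, for aligned float images."""
    bare = np.asarray(bare, dtype=float)
    madeup = np.asarray(madeup, dtype=float)
    if bare.shape != madeup.shape:
        raise ValueError("both mapped images must share the reference shape")
    return madeup / (bare + eps)  # eps guards against division by zero

# A 2x2 single-channel toy example: a parameter > 1 brightens, < 1 darkens.
bare = np.array([[0.5, 0.8], [0.25, 1.0]])
madeup = np.array([[0.6, 0.4], [0.5, 1.0]])
params = extract_dressing_params(bare, madeup)
```

A parameter near 1 means that pixel is essentially unchanged by makeup, which is why the same ratios can later be multiplied onto a different bare face.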
In a possible implementation of the embodiment of the present invention, as shown in Figure 13 and building on the embodiment shown in Figure 11, the image processing apparatus further includes:
A reference image selection module 900, configured to perform face recognition on the first facial image to obtain the first face shape of the first user, and to choose, from multiple preset candidate reference facial images, a candidate reference facial image similar to the first face shape as the reference facial image.
A reference feature point calibration module 901, configured to perform feature point calibration on the preset reference facial image to obtain the reference facial feature points.
In this embodiment, the mapping module 930 is specifically configured to: based on triangulation and under the constraint of the reference facial feature points and the first facial feature points, perform face texture mapping between the first facial image and the reference facial image to obtain the first mapped facial image; and, under the constraint of the reference facial feature points and the second facial feature points, based on triangulation, perform face texture mapping between the second facial image and the reference facial image to obtain the second mapped facial image. Here, the reference facial image is the facial image corresponding to the reference facial feature points.
Face recognition is performed on the acquired first facial image to obtain the first face shape of the first user, and a candidate reference facial image similar to the first face shape is chosen from the multiple preset candidates as the reference facial image. Because the reference facial image resembles the user's face shape, it lays a good foundation for mapping the first facial image onto it and reduces mapping difficulty. Feature point calibration on the reference facial image yields the reference facial feature points; based on triangulation, under the constraint of the reference facial feature points together with the first and second facial feature points, the first and second mapped facial images are obtained, and the dressing information of the first user is then extracted from them. The resulting dressing material matches the user's makeup habits and is both targeted and realistic.
In a possible implementation of the embodiment of the present invention, as shown in Figure 14 and building on the embodiment shown in Figure 11, the makeup module 950 may include:
An acquiring unit 951, configured to obtain a third facial image of the target face awaiting makeup, and to extract the third facial feature points of the third facial image.
A mapping unit 952, configured to deform the third facial image according to the reference facial feature points and the third facial feature points, generating a third mapped facial image.
A makeup unit 953, configured to apply makeup processing to the third mapped facial image using the dressing information, obtaining a fourth facial image.
Specifically, the makeup unit 953 is configured to obtain the third pixel value of each pixel in the third mapped facial image, multiply each third pixel value by the dressing parameter of the corresponding pixel in the dressing information to obtain the fourth pixel value of that pixel, and generate the fourth facial image from the fourth pixel values of all pixels.
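Under the same toy conventions as the extraction example (float images in [0, 1]; the function name and the clipping step are our assumptions, not the patent's), the multiplication step can be sketched as:

```python
import numpy as np

def apply_dressing(target, params):
    """Multiply each aligned target pixel by its dressing parameter and
    clip to the valid range, yielding the made-up (fourth) image."""
    return np.clip(np.asarray(target, dtype=float) * params, 0.0, 1.0)

target = np.array([[0.5, 0.9]])   # bare target face, aligned to the reference
params = np.array([[1.2, 1.5]])   # dressing parameters from the first user
made_up = apply_dressing(target, params)
```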
A reduction unit 954, configured to extract the fourth facial feature points of the fourth facial image, and to perform a face restoration operation on the fourth facial image according to the third and fourth facial feature points, obtaining the target facial image after makeup.
As a possible implementation, the reduction unit 954 may, for each corresponding subdivision triangle formed by facial feature points, obtain the difference information between the subdivision triangle under the third facial feature points and the subdivision triangle under the fourth facial feature points, and use the difference information to adjust the shape of the subdivision triangle under the fourth facial feature points, generating the target facial image.
As another possible implementation, the mapping module 930 may, while generating the third mapped facial image, record the deformation data of each subdivision triangle under the third facial feature points relative to the corresponding subdivision triangle under the reference facial feature points. The reduction unit 954 can then apply a reverse deformation to the corresponding subdivision triangles under the fourth facial feature points according to the recorded deformation data, generating the target facial image.
By recording the deformation data of the subdivision triangles under the third facial feature points relative to the corresponding subdivision triangles under the reference facial feature points, the restoration of the fourth facial image reduces to an inverse transform of the corresponding subdivision triangles under the fourth facial feature points, which lowers the processing load and improves restoration efficiency.
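One way to record and reverse per-triangle deformation data is to store the 2x3 affine matrix that maps each reference triangle onto its deformed counterpart, then apply its inverse during restoration. The representation below is a sketch under our own conventions; the patent does not prescribe this encoding:

```python
import numpy as np

def affine_between(src_tri, dst_tri):
    """2x3 affine matrix A with A @ [x, y, 1] mapping each src_tri vertex
    onto the corresponding dst_tri vertex."""
    S = np.hstack([np.asarray(src_tri, dtype=float), np.ones((3, 1))])
    D = np.asarray(dst_tri, dtype=float)
    return np.linalg.solve(S, D).T

def invert_affine(A):
    """Inverse of a 2x3 affine map: reverses a recorded deformation."""
    R, t = A[:, :2], A[:, 2]
    Rinv = np.linalg.inv(R)
    return np.hstack([Rinv, (-Rinv @ t)[:, None]])

# Forward: scale x by 2 and translate by (1, 1); the inverse recovers the
# original vertex from its deformed position.
src_tri = [(0, 0), (1, 0), (0, 1)]
dst_tri = [(1, 1), (3, 1), (1, 2)]
A = affine_between(src_tri, dst_tri)
back = invert_affine(A) @ np.array([3.0, 1.0, 1.0])  # deformed vertex (3, 1)
```

Storing one small matrix per triangle is cheaper than re-deriving the deformation from feature points at restoration time, which matches the efficiency argument above.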
The third facial feature points of the third facial image of the target face awaiting makeup are extracted; the third facial image is deformed according to the reference facial feature points and the third facial feature points to generate a third mapped facial image; makeup is applied to the third mapped facial image using the dressing information, obtaining a fourth facial image; the fourth facial feature points of the fourth facial image are extracted; and a face restoration operation is performed on the fourth facial image according to the third and fourth facial feature points, obtaining the target facial image after makeup. This realizes the makeup operation on the image awaiting makeup, gives it a dressing that meets the user's expectations, and improves the accuracy and realism of the dressing transfer.
A user's electronic device may store not only photos of the user but also photos of the user's friends and relatives, so multiple sets of dressing information, for the user, friends, relatives, and so on, can be obtained from the stored photos. Since face shapes differ from person to person, in a possible implementation of the embodiment of the present invention, as shown in Figure 15 and building on the embodiment shown in Figure 14, the makeup module 950 may further include:
A mapping relation establishing unit 955, configured to establish a mapping relation between the dressing information and the face shape corresponding to the first facial image.
A dressing information matching unit 956, configured to perform face recognition on the third facial image to obtain the third face shape of the third facial image, and to query the mapping relation with the third face shape, obtaining the dressing information matching the third face shape.
By establishing a mapping relation between the dressing information and the face shape corresponding to the first facial image, when applying makeup to the third facial image of the target face, the face shape of the third facial image is first determined, the mapping relation is then queried by face shape to find the matching dressing information, and that dressing information is applied to the third mapped facial image to obtain the fourth facial image. This ensures the makeup effect of the fourth facial image and improves both the fidelity of the dressing information and the validity of the facial image after makeup.
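The face-shape lookup can be sketched as a simple keyed registry. The labels, function names, and string keys below are illustrative assumptions; the patent does not specify the data structure:

```python
# Hypothetical registry mapping a coarse face-shape label to the dressing
# information extracted from that person's image pair.
dressing_by_shape = {}

def register_dressing(face_shape, dressing_info):
    """Record the mapping relation between a face shape and its dressing."""
    dressing_by_shape[face_shape] = dressing_info

def lookup_dressing(face_shape):
    """Return the dressing information matching the recognised face shape,
    or None when no stored dressing fits."""
    return dressing_by_shape.get(face_shape)

register_dressing("oval", {"source": "first user"})
match = lookup_dressing("oval")   # dressing stored for this face shape
miss = lookup_dressing("square")  # no dressing registered for this shape
```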
It should be noted that the foregoing explanation of the image processing method embodiments also applies to the image processing apparatus of this embodiment; the implementation principles are similar and are not repeated here.
In the image processing apparatus of this embodiment, a first facial image of the first user's face in a non-makeup state and a second facial image in a makeup state are acquired; the first facial feature points of the first facial image and the second facial feature points of the second facial image are extracted; the first and second facial images are deformed according to preset reference facial feature points together with the first and second facial feature points, generating first and second mapped facial images; the dressing information of the first user is extracted from the first and second mapped facial images; and the dressing information is applied to a selected target face, obtaining the target facial image after makeup. Because the dressing information is extracted from a facial image of a user actually wearing makeup, it is more realistic, dressing materials matching different users' aesthetic standards can be provided, and personalized needs are met. This improves the specificity and authenticity of dressing materials and solves the prior-art problems of exaggerated materials, low authenticity, and made-up images that fail to meet user expectations.
To implement the above embodiments, the present invention further proposes a computer device.
Figure 16 is a structural schematic diagram of a computer device provided by an embodiment of the present invention. As shown in Figure 16, the computer device 10 includes a processor 110 and a memory 120, where the processor 110 reads the executable program code stored in the memory 120 and runs a program corresponding to that code, thereby implementing the image processing method of the foregoing embodiments.
To implement the above embodiments, the present invention further proposes a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the image processing method of the foregoing embodiments.
To implement the above embodiments, the present invention further proposes a computer program product; when the instructions in the computer program product are executed by a processor, the image processing method of the foregoing embodiments is implemented.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples", and the like means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, provided they do not contradict each other, those skilled in the art may combine the features of different embodiments or examples described in this specification.
In addition, the terms "first" and "second" are used for description purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. A feature qualified by "first" or "second" may thus explicitly or implicitly include at least one such feature. In the description of the present invention, "plurality" means at least two, for example two or three, unless specifically defined otherwise.
Any process or method described in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing custom logic functions or steps of the process. The scope of the preferred embodiments of the present invention includes additional implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in a flowchart or otherwise described herein, for example an ordered list of executable instructions for implementing logic functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transport a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it if necessary, and then stored in a computer memory.
It should be understood that each part of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques known in the art may be used: a discrete logic circuit with logic gates for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiment methods may be performed by relevant hardware under the instruction of a program, which may be stored in a computer-readable storage medium and which, when executed, performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present invention may be integrated in one processing module, may exist physically as separate units, or two or more units may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If implemented in the form of a software functional module and sold or used as an independent product, the integrated module may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and shall not be construed as limiting the present invention; those of ordinary skill in the art may change, modify, replace, and vary the above embodiments within the scope of the present invention.
Claims (20)
1. An image processing method, characterized by comprising:
acquiring a first facial image and a second facial image, wherein the first facial image is an image captured while the face of a first user is in a non-makeup state, and the second facial image is an image captured while the face of the first user is in a makeup state;
extracting first facial feature points of the first facial image and second facial feature points of the second facial image, respectively;
deforming the first facial image according to preset reference facial feature points and the first facial feature points, generating a first mapped facial image;
deforming the second facial image according to the reference facial feature points and the second facial feature points, generating a second mapped facial image;
extracting dressing information of the first user according to the first mapped facial image and the second mapped facial image;
selecting a target face, and applying the dressing information of the first user to the target face, obtaining a target facial image after makeup.
2. The method according to claim 1, characterized in that extracting the dressing information of the first user according to the first mapped facial image and the second mapped facial image comprises:
extracting first pixel values of pixels of the first mapped facial image;
extracting second pixel values of pixels of the second mapped facial image;
for each pair of corresponding pixels, taking the ratio of the second pixel value to the first pixel value as the dressing parameter of that pixel;
assembling the dressing parameters of all corresponding pixels into the dressing information of the first user.
3. The method according to claim 1, characterized in that, before deforming the first facial image according to the preset reference facial feature points to generate the first mapped facial image, the method further comprises:
performing feature point calibration on a preset reference facial image, obtaining the reference facial feature points.
4. The method according to claim 3, characterized in that, before extracting the reference facial feature points from the preset reference facial image, the method further comprises:
performing face recognition on the first facial image, obtaining a first face shape of the first user;
choosing, from multiple preset candidate reference facial images, a candidate reference facial image similar to the first face shape as the reference facial image.
5. The method according to claim 1, characterized in that deforming the first facial image according to the preset reference facial feature points and the first facial feature points to generate the first mapped facial image comprises:
based on triangulation, under the constraint of the reference facial feature points and the first facial feature points, performing face texture mapping between the first facial image and a reference facial image, obtaining the first mapped facial image, wherein the reference facial image is the facial image corresponding to the reference facial feature points;
and deforming the second facial image according to the reference facial feature points and the second facial feature points to generate the second mapped facial image comprises:
under the constraint of the reference facial feature points and the second facial feature points, based on triangulation, performing face texture mapping between the second facial image and the reference facial image, obtaining the second mapped facial image.
6. The method according to any one of claims 1-5, characterized in that selecting the target face and applying the dressing information of the first user to the target face to obtain the target facial image after makeup comprises:
acquiring a third facial image of the target face awaiting makeup;
extracting third facial feature points of the third facial image;
deforming the third facial image according to the reference facial feature points and the third facial feature points, generating a third mapped facial image;
applying makeup processing to the third mapped facial image using the dressing information, obtaining a fourth facial image;
extracting fourth facial feature points of the fourth facial image;
performing a face restoration operation on the fourth facial image according to the third facial feature points and the fourth facial feature points, obtaining the target facial image after makeup.
7. The method according to claim 6, characterized in that applying makeup processing to the third mapped facial image using the dressing information to obtain the fourth facial image comprises:
obtaining a third pixel value of each pixel in the third mapped facial image;
for each pixel, multiplying the third pixel value by the dressing parameter of the corresponding pixel in the dressing information, obtaining a fourth pixel value of the pixel;
generating the fourth facial image from the fourth pixel values of all pixels.
8. The method according to claim 6, characterized in that, before acquiring the third facial image of the target face awaiting makeup, the method further comprises:
establishing a mapping relation between the dressing information and a face shape corresponding to the first facial image;
and before applying makeup processing to the third mapped facial image using the dressing information to obtain the fourth facial image, the method further comprises:
performing face recognition on the third facial image, obtaining a third face shape of the third facial image;
querying the mapping relation with the third face shape, obtaining the dressing information matching the third face shape.
9. The method according to any one of claims 6-8, characterized in that performing the face restoration operation on the fourth facial image according to the third facial feature points and the fourth facial feature points to obtain the target facial image after makeup comprises:
for each corresponding subdivision triangle formed by facial feature points, obtaining difference information between the subdivision triangle under the third facial feature points and the subdivision triangle under the fourth facial feature points;
adjusting the shape of the subdivision triangle under the fourth facial feature points using the difference information, generating the target facial image.
10. The method according to any one of claims 6-8, characterized in that, while generating the third mapped facial image, deformation data of each subdivision triangle under the third facial feature points relative to the corresponding subdivision triangle under the reference facial feature points is recorded;
and performing the face restoration operation on the fourth facial image according to the third facial feature points and the fourth facial feature points to obtain the target facial image after makeup comprises:
applying a reverse deformation to the corresponding subdivision triangle under the fourth facial feature points according to the deformation data of each subdivision triangle, generating the target facial image.
11. An image processing apparatus, characterized by comprising:
an image acquisition module, configured to acquire a first facial image and a second facial image, wherein the first facial image is an image captured while the face of a first user is in a non-makeup state, and the second facial image is an image captured while the face of the first user is in a makeup state;
a feature point extraction module, configured to extract first facial feature points of the first facial image and second facial feature points of the second facial image, respectively;
a mapping module, configured to deform the first facial image according to preset reference facial feature points and the first facial feature points to generate a first mapped facial image, and to deform the second facial image according to the reference facial feature points and the second facial feature points to generate a second mapped facial image;
a dressing extraction module, configured to extract dressing information of the first user according to the first mapped facial image and the second mapped facial image;
a makeup module, configured to select a target face and apply the dressing information of the first user to the target face, obtaining a target facial image after makeup.
12. The apparatus according to claim 11, characterized in that the dressing extraction module comprises:
a pixel value extraction unit, configured to extract first pixel values of pixels of the first mapped facial image and second pixel values of pixels of the second mapped facial image;
a dressing extraction unit, configured to, for each pair of corresponding pixels, take the ratio of the second pixel value to the first pixel value as the dressing parameter of that pixel, and to assemble the dressing parameters of all corresponding pixels into the dressing information of the first user.
13. The apparatus according to claim 11, characterized in that the mapping module is specifically configured to:
based on triangulation, under the constraint of the reference facial feature points and the first facial feature points, perform face texture mapping between the first facial image and a reference facial image to obtain the first mapped facial image, wherein the reference facial image is the facial image corresponding to the reference facial feature points; and, under the constraint of the reference facial feature points and the second facial feature points, based on triangulation, perform face texture mapping between the second facial image and the reference facial image to obtain the second mapped facial image.
14. The apparatus according to any one of claims 11-13, characterized in that the makeup module comprises:
an acquiring unit, configured to acquire a third facial image of the target face awaiting makeup and to extract third facial feature points of the third facial image;
a mapping unit, configured to deform the third facial image according to the reference facial feature points and the third facial feature points, generating a third mapped facial image;
a makeup unit, configured to apply makeup processing to the third mapped facial image using the dressing information, obtaining a fourth facial image;
a reduction unit, configured to extract fourth facial feature points of the fourth facial image, and to perform a face restoration operation on the fourth facial image according to the third facial feature points and the fourth facial feature points, obtaining the target facial image after makeup.
15. The apparatus according to claim 14, characterized in that the makeup unit is specifically configured to:
obtain a third pixel value of each pixel in the third mapped facial image;
for each pixel, multiply the third pixel value by the dressing parameter of the corresponding pixel in the dressing information, obtaining a fourth pixel value of the pixel;
generate the fourth facial image from the fourth pixel values of all pixels.
16. The apparatus according to claim 14 or 15, characterized in that the reduction unit is specifically configured to:
for each corresponding subdivision triangle formed by facial feature points, obtain difference information between the subdivision triangle under the third facial feature points and the subdivision triangle under the fourth facial feature points;
adjust the shape of the subdivision triangle under the fourth facial feature points using the difference information, generating the target facial image.
17. The apparatus according to claim 14 or 15, characterized in that the mapping module is further configured to:
while generating the third mapped facial image, record deformation data of each subdivision triangle under the third facial feature points relative to the corresponding subdivision triangle under the reference facial feature points;
and the reduction unit is specifically configured to:
apply a reverse deformation to the corresponding subdivision triangle under the fourth facial feature points according to the deformation data of each subdivision triangle, generating the target facial image.
18. A computer device, characterized by comprising a processor and a memory, wherein the processor reads the executable program code stored in the memory and runs a program corresponding to the executable program code, implementing the image processing method according to any one of claims 1-10.
19. a kind of non-transitorycomputer readable storage medium, is stored thereon with computer program, which is characterized in that the program
The image processing method as described in any one of claim 1-10 is realized when being executed by processor.
20. a kind of computer program product, which is characterized in that when the instruction in the computer program product is executed by processor
Image processing methods of the Shi Shixian as described in any one of claim 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810134026.3A CN108509846B (en) | 2018-02-09 | 2018-02-09 | Image processing method, image processing apparatus, computer device, storage medium, and computer program product |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108509846A true CN108509846A (en) | 2018-09-07 |
CN108509846B CN108509846B (en) | 2022-02-11 |
Family
ID=63374622
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810134026.3A Active CN108509846B (en) | 2018-02-09 | 2018-02-09 | Image processing method, image processing apparatus, computer device, storage medium, and computer program product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108509846B (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104205162A (en) * | 2013-02-01 | 2014-12-10 | 松下电器产业株式会社 | Makeup application assistance device, makeup application assistance method, and makeup application assistance program |
CN104463938A (en) * | 2014-11-25 | 2015-03-25 | 福建天晴数码有限公司 | Three-dimensional virtual make-up trial method and device |
CN104732506A (en) * | 2015-03-27 | 2015-06-24 | 浙江大学 | Character picture color style converting method based on face semantic analysis |
CN105427238A (en) * | 2015-11-30 | 2016-03-23 | 维沃移动通信有限公司 | Image processing method and mobile terminal |
CN105488472A (en) * | 2015-11-30 | 2016-04-13 | 华南理工大学 | Digital make-up method based on sample template |
CN105787981A (en) * | 2016-02-25 | 2016-07-20 | 上海斐讯数据通信技术有限公司 | Method and system for assisting in makeup through mobile terminal |
CN107403185A (en) * | 2016-05-20 | 2017-11-28 | 北京大学 | Portrait color changeover method and portrait color conversion system |
CN107622472A (en) * | 2017-09-12 | 2018-01-23 | 北京小米移动软件有限公司 | Face dressing moving method and device |
Non-Patent Citations (1)
Title |
---|
Dickson Tong, Chi-Keung Tang, Michael S. Brown, Ying-Qing Xu: "Example-Based Cosmetic Transfer", IEEE Computer Society *
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109712090A (en) * | 2018-12-18 | 2019-05-03 | 维沃移动通信有限公司 | Image processing method and device, and mobile terminal |
CN109858392A (en) * | 2019-01-11 | 2019-06-07 | 复旦大学 | Automatic identification method for face images before and after makeup |
CN109858392B (en) * | 2019-01-11 | 2021-02-02 | 复旦大学 | Automatic face image identification method before and after makeup |
CN110349239A (en) * | 2019-07-05 | 2019-10-18 | 厦门大学 | Feature-preserving stippling drawing method |
US11295115B2 (en) | 2019-08-15 | 2022-04-05 | Boe Technology Group Co., Ltd. | Method and device for generating face image, electronic device and computer readable storage medium |
CN110458121A (en) * | 2019-08-15 | 2019-11-15 | 京东方科技集团股份有限公司 | Method and device for generating a face image |
CN110458121B (en) * | 2019-08-15 | 2023-03-14 | 京东方科技集团股份有限公司 | Method and device for generating face image |
CN110825765A (en) * | 2019-10-23 | 2020-02-21 | 中国建设银行股份有限公司 | Face recognition method and device |
CN110825765B (en) * | 2019-10-23 | 2022-10-04 | 中国建设银行股份有限公司 | Face recognition method and device |
CN111507259A (en) * | 2020-04-17 | 2020-08-07 | 腾讯科技(深圳)有限公司 | Face feature extraction method and device and electronic equipment |
CN111507259B (en) * | 2020-04-17 | 2023-03-24 | 腾讯科技(深圳)有限公司 | Face feature extraction method and device and electronic equipment |
CN111563855A (en) * | 2020-04-29 | 2020-08-21 | 百度在线网络技术(北京)有限公司 | Image processing method and device |
CN111583163B (en) * | 2020-05-07 | 2023-06-13 | 厦门美图之家科技有限公司 | AR-based face image processing method, device, equipment and storage medium |
CN111583163A (en) * | 2020-05-07 | 2020-08-25 | 厦门美图之家科技有限公司 | AR-based face image processing method, device, equipment and storage medium |
CN111815533A (en) * | 2020-07-14 | 2020-10-23 | 厦门美图之家科技有限公司 | Dressing method, device, electronic apparatus, and readable storage medium |
CN111815533B (en) * | 2020-07-14 | 2024-01-19 | 厦门美图之家科技有限公司 | Dressing processing method, device, electronic equipment and readable storage medium |
WO2022089272A1 (en) * | 2020-10-28 | 2022-05-05 | 维沃移动通信有限公司 | Image processing method and apparatus |
CN112734661A (en) * | 2020-12-30 | 2021-04-30 | 维沃移动通信有限公司 | Image processing method and device |
WO2022143382A1 (en) * | 2020-12-30 | 2022-07-07 | 维沃移动通信有限公司 | Image processing method and apparatus |
CN113780047A (en) * | 2021-01-11 | 2021-12-10 | 北京沃东天骏信息技术有限公司 | Virtual makeup trying method and device, electronic equipment and storage medium |
CN112734894A (en) * | 2021-01-25 | 2021-04-30 | 腾讯科技(深圳)有限公司 | Virtual hair drawing method and device, storage medium and electronic equipment |
CN112734894B (en) * | 2021-01-25 | 2023-07-14 | 腾讯科技(深圳)有限公司 | Virtual hair drawing method and device, storage medium and electronic equipment |
CN113313660A (en) * | 2021-05-14 | 2021-08-27 | 北京市商汤科技开发有限公司 | Makeup migration method, device, equipment and computer readable storage medium |
WO2022258013A1 (en) * | 2021-06-11 | 2022-12-15 | 维沃移动通信有限公司 | Image processing method and apparatus, electronic device and readable storage medium |
CN113781330A (en) * | 2021-08-23 | 2021-12-10 | 北京旷视科技有限公司 | Image processing method, device and electronic system |
Also Published As
Publication number | Publication date |
---|---|
CN108509846B (en) | 2022-02-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108509846A (en) | Image processing method, device, computer equipment and storage medium | |
US10467779B2 (en) | System and method for applying a reflectance modifying agent to change a person's appearance based on a digital image | |
CN110807836B (en) | Three-dimensional face model generation method, device, equipment and medium | |
CN108765273A (en) | Virtual face-lifting method and apparatus for face photographing | |
CN105404392B (en) | Virtual try-on method and system based on a monocular camera | |
US20130307848A1 (en) | Techniques for processing reconstructed three-dimensional image data | |
CN108537126B (en) | Face image processing method | |
CN101779218A (en) | Makeup simulation system, makeup simulation apparatus, makeup simulation method, and makeup simulation program | |
CN107479801A (en) | Displaying method of terminal, device and terminal based on user's expression | |
WO2008100878A1 (en) | System and method for applying a reflectance modifying agent to change a person's appearance based on a digital image | |
WO2022237081A1 (en) | Makeup look transfer method and apparatus, and device and computer-readable storage medium | |
KR101823869B1 (en) | Real-time video makeup implementation system based Augmented Reality using Facial camera device | |
CN107341762A (en) | Photographing processing method, device and terminal device | |
CN109697749A (en) | Method and apparatus for three-dimensional modeling | |
CN108629821A (en) | Animation producing method and device | |
CN110866139A (en) | Cosmetic treatment method, device and equipment | |
CN103260036B (en) | Image processing apparatus, image processing method, storage medium and image processing system | |
CN109242760A (en) | Facial image processing method, device and electronic equipment | |
CN113870404B (en) | Skin rendering method of 3D model and display equipment | |
CN110276735A (en) | Method, device and equipment for generating image color retention effect and storage medium | |
CN107341774A (en) | Facial image beautification processing method and device | |
CN110188699A (en) | Face recognition method and system based on a binocular camera | |
CN107392099A (en) | Method, apparatus and terminal device for extracting hair detail information | |
CN108510538A (en) | Three-dimensional image synthesis method, device and computer-readable storage medium | |
Castelán et al. | Acquiring height maps of faces from a single image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||