CN105787878A - Beauty processing method and device - Google Patents

Beauty processing method and device

Info

Publication number
CN105787878A
CN105787878A
Authority
CN
China
Prior art keywords
face
key point
picture
face key
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610105295.8A
Other languages
Chinese (zh)
Other versions
CN105787878B (en)
Inventor
蔡苗苗
谢衍涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Gexiang Technology Co Ltd
Original Assignee
Hangzhou Gexiang Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Gexiang Technology Co Ltd filed Critical Hangzhou Gexiang Technology Co Ltd
Priority to CN201610105295.8A priority Critical patent/CN105787878B/en
Publication of CN105787878A publication Critical patent/CN105787878A/en
Application granted granted Critical
Publication of CN105787878B publication Critical patent/CN105787878B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/162Detection; Localisation; Normalisation using pixel segmentation or colour matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses a beauty processing method and device. The method comprises: performing face detection on a picture to be processed and determining the face region in the picture; performing face key point detection on the face region, determining the face key points, and generating a face key point layer from the face key points; detecting the skin color region of the picture and generating a skin color processing layer; splicing at least one set region of the face key point layer onto the corresponding position of the skin color processing layer to synthesize a beauty processing layer; and performing beauty processing on the picture according to the beauty processing layer to generate a beautified picture. The method and device can beautify a portrait in a picture while preserving the details of the set regions of the face.

Description

Beauty processing method and device
Technical field
Embodiments of the present invention relate to picture processing technology, and in particular to a beauty processing method and device.
Background technology
With the rapid development of digital technology in recent years, people record everyday life with camera-equipped devices, and retouching software for improving the look of photos has multiplied accordingly; the most demanded retouching is, naturally, of the face in self-portraits.
Existing beauty solutions typically apply blurring and hue adjustment to the whole picture to achieve a visual whitening and skin-smoothing effect. However, as smart-device hardware keeps improving, camera resolution keeps rising and photos keep growing larger, so these whole-image blurring and smoothing algorithms are increasingly time-consuming. Moreover, blurring the whole image loses facial detail (eyes, eyebrows, nose and so on), leaving the processed image indistinct and the face looking lifeless. The eyes are the windows of the soul; after whole-image blurring they too become blurred and lose their original vitality.
Summary of the invention
The present invention provides a beauty processing method and device that beautify the portrait in a picture while preserving the details of key facial regions.
In a first aspect, an embodiment of the present invention provides a beauty processing method, comprising:
performing face detection on a picture to be processed, and determining the face region in the picture;
performing face key point detection on the face region, determining the face key points, and generating a face key point layer from the face key points;
detecting the skin color region of the picture, and generating a skin color processing layer;
splicing at least one set region of the face key point layer onto the corresponding position of the skin color processing layer to synthesize a beauty processing layer;
performing beauty processing on the picture according to the beauty processing layer, and generating a beautified picture.
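For orientation, the claimed steps can be sketched as a toy pipeline. Every function name and body below is an illustrative stand-in, not the patented implementation; a picture is modeled as a list of grayscale rows.

```python
# Toy end-to-end sketch of the claimed method; only the order of the
# operations mirrors the claim, all bodies are placeholders.

def detect_face(image):
    # Step 1: face detection -- assume one face covering the whole picture.
    return {"x": 0, "y": 0, "w": len(image[0]), "h": len(image)}

def detect_key_points(image, face):
    # Step 2: key point detection -- fixed illustrative points (x, y).
    return {"eye": (2, 2), "brow": (2, 1), "mouth": (2, 4)}

def skin_layer(image):
    # Step 3: mark every pixel above a brightness threshold as skin (1).
    return [[1 if px > 128 else 0 for px in row] for row in image]

def beauty_layer(key_points, skin, set_regions=("eye", "brow", "mouth")):
    # Step 4: splice the set regions into the skin layer; a single pixel
    # per region stands in for a spliced region. 0 = preserve detail.
    layer = [row[:] for row in skin]
    for name in set_regions:
        x, y = key_points[name]
        layer[y][x] = 0
    return layer

def beautify(image):
    # Step 5: "smooth" (here: overwrite with a flat tone) only where the
    # beauty processing layer allows it; set regions pass through intact.
    face = detect_face(image)
    layer = beauty_layer(detect_key_points(image, face), skin_layer(image))
    return [[200 if layer[y][x] else image[y][x]
             for x in range(len(image[0]))] for y in range(len(image))]
```

Running `beautify` on a uniform bright picture smooths every skin pixel except the three stand-in set-region pixels, which keep their original values.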
Further, performing face key point detection on the face region, determining the face key points, and generating a face key point layer from the face key points comprises:
obtaining estimated face key points;
inputting the estimated face key points into a first convolutional neural network for global regression training, generating first face key points;
inputting the first face key points into a second convolutional neural network for locally correlated regression training, generating second face key points;
inputting the key points of at least one set region among the second face key points into a third convolutional neural network for local fine-tuning regression training, generating facial detail key points; merging the facial detail key points with the key points outside the set region(s) among the second face key points into third face key points; and generating the face key point layer.
Further, detecting the skin color region of the picture and generating the skin color processing layer comprises:
generating the skin color processing layer from the picture using a skin color detection method based on region diffusion.
Further, performing beauty processing on the picture according to the beauty processing layer and generating a beautified picture comprises:
obtaining the region to be processed in the picture;
performing beauty processing on the part of that region outside the set region(s), generating a pre-processed picture;
splicing the pre-processed picture onto the corresponding position of the picture, generating the beautified picture.
Further, the set region includes at least one of an eye region, an eyebrow region and a mouth region.
In a second aspect, an embodiment of the present invention further provides a beauty processing device, comprising:
a face detection module, configured to perform face detection on a picture to be processed and determine the face region in the picture;
a face key point layer generation module, configured to perform face key point detection on the face region, determine the face key points, and generate a face key point layer from the face key points;
a skin color processing layer generation module, configured to detect the skin color region of the picture and generate a skin color processing layer;
a beauty processing layer generation module, configured to splice at least one set region of the face key point layer onto the corresponding position of the skin color processing layer to synthesize a beauty processing layer;
a beautified picture generation module, configured to perform beauty processing on the picture according to the beauty processing layer and generate a beautified picture.
Further, the face key point layer generation module includes:
a face key point estimation unit, configured to obtain estimated face key points;
a first face key point generation unit, configured to input the estimated face key points into a first convolutional neural network for global regression training and generate first face key points;
a second face key point generation unit, configured to input the first face key points into a second convolutional neural network for locally correlated regression training and generate second face key points;
a third face key point generation unit, configured to input the key points of at least one set region among the second face key points into a third convolutional neural network for local fine-tuning regression training, generate facial detail key points, merge the facial detail key points with the key points outside the set region(s) among the second face key points into third face key points, and generate the face key point layer.
Further, the skin color processing layer generation module is configured to generate the skin color processing layer from the picture using a skin color detection method based on region diffusion.
Further, the beautified picture generation module includes:
a region-to-be-processed determination unit, configured to obtain the region to be processed in the picture;
a processed sub-picture generation unit, configured to perform beauty processing on the part of the region to be processed outside the set region(s) and generate a pre-processed picture;
a beautified picture generation unit, configured to splice the pre-processed picture onto the corresponding position of the picture and generate the beautified picture.
Further, the set region includes at least one of an eye region, an eyebrow region and a mouth region.
By detecting face key points, the present invention identifies and preserves the details of the set regions, solving the total loss of picture detail caused by beautifying the whole picture: the portrait in a picture is beautified while the details of the set facial regions are retained.
Brief description of the drawings
Fig. 1 is a flow chart of a beauty processing method in embodiment one of the present invention;
Fig. 2 is a picture to be processed with face key points marked, in embodiment one;
Fig. 3 is a flow chart of a beauty processing method in embodiment two;
Fig. 4 is a schematic diagram of the first convolutional neural network structure in embodiment two;
Fig. 5 is a picture to be processed with the face region partitioned, in embodiment two;
Fig. 6 is a schematic diagram of the second convolutional neural network structure in embodiment two;
Fig. 7 is a schematic diagram of the third convolutional neural network structure in embodiment two;
Fig. 8 is a flow chart of a beauty processing method in embodiment three;
Fig. 9 is a schematic diagram of the structure of a beauty processing device in embodiment four.
Detailed description
The present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the present invention and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire structure.
Embodiment one
Fig. 1 is a flow chart of the beauty processing method provided by embodiment one of the present invention. This embodiment applies to beautifying the portrait in a picture; the method can be performed by a beauty processing device integrated in a camera-equipped smartphone or digital camera, and comprises the following steps.
Step 110: perform face detection on the picture to be processed, and determine the face region in the picture.
The picture to be processed can be a local picture or one captured in real time by the camera. Face detection is performed on the picture; if no face is detected, the device returns to acquiring the next picture to be processed. Preferably, face detection is performed with an open-source computer vision library, and the face region in the picture is determined.
Step 120: perform face key point detection on the face region, determine the face key points, and generate a face key point layer from the face key points.
Face key points are manually set points that mark the cheek contour, eyebrow region, eye region, nose region and mouth region of the face in the picture. Preferably, as shown in Fig. 2, the picture contains a face on which 74 face key points are manually marked to identify the cheek contour, eyebrow region, eye region, nose region and mouth region. The number of face key points is not limited to 74; with more than 74 points these regions can be marked more accurately. No fewer than 4 key points identify each eyebrow, at the inner end, the outer end, the upper-edge midpoint and the lower-edge midpoint; no fewer than 4 identify each eye, at the inner corner, the outer corner, the upper-lid midpoint and the lower-lid midpoint; no fewer than 4 identify the nose, at the top of the bridge, the tip and the two nose-wing edge points; no fewer than 6 identify the mouth, at the two corners and at the upper- and lower-edge midpoints of both the upper and lower lips; and no fewer than 5 identify the cheek contour, at the temples at the hairline, at a point on each side of the cheeks, and at the chin midpoint. The cheek contour of the face is determined from the face key points, as are the eye, eyebrow and mouth regions; the partitioned face region forms the face key point layer, in which the eye, eyebrow and mouth regions can be extracted from within the cheek contour.
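The region minimums described above can be summarized in a small illustrative table; the comments paraphrase the placements in the text, and the table itself is only a sketch, not part of the claimed method.

```python
# Minimum key-point counts per facial region as described in the text.
# The 74-point total is one example layout.
MIN_POINTS = {
    "eyebrow": 4,  # inner end, outer end, upper- and lower-edge midpoints
    "eye": 4,      # inner corner, outer corner, upper- and lower-lid midpoints
    "nose": 4,     # top of bridge, tip, two nose-wing edge points
    "mouth": 6,    # two corners plus edge midpoints of upper and lower lips
    "cheek": 5,    # temples at hairline, both mid-cheek points, chin midpoint
}

def valid_layout(counts):
    # A key-point layout is acceptable when every region meets its minimum.
    return all(counts.get(region, 0) >= m for region, m in MIN_POINTS.items())
```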
Step 130: detect the skin color region of the picture, and generate a skin color processing layer.
Skin color detection is performed on the face and other exposed skin in the picture. Preferably, the human skin region of the picture is determined; this region forms the skin color processing layer, which fixes the area to be treated in the subsequent beauty processing.
Note that steps 120 and 130 can be executed in either order, or simultaneously; step 140 is executed after both have completed.
Step 140: splice at least one set region of the face key point layer onto the corresponding position of the skin color processing layer to synthesize the beauty processing layer.
Set regions can be extracted from the face key point layer; preferably, a set region is at least one of the eye region, the eyebrow region and the mouth region. A set region of the face key point layer has a corresponding position in the original picture, and the region of the skin color processing layer corresponding to that position is the set region's corresponding position in the skin color processing layer. Once the set regions of the face key point layer are spliced onto the skin color processing layer, the synthesized beauty processing layer is used to process the portrait in the picture. For example, the eye region is extracted from the face key point layer and spliced onto the corresponding eye region of the skin color processing layer, synthesizing the beauty processing layer.
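A minimal sketch of the splicing in step 140, assuming binary masks and rectangular set regions; a real implementation would carve the regions along the key-point contours rather than boxes.

```python
def splice_regions(skin_mask, regions):
    # Mark rectangular set regions as "preserve" (0) inside a binary skin
    # mask (1 = skin to be beautified); regions are (x, y, w, h) boxes.
    out = [row[:] for row in skin_mask]
    for x, y, w, h in regions:
        for yy in range(y, y + h):
            for xx in range(x, x + w):
                out[yy][xx] = 0
    return out
```

Splicing an "eye" box into a 4x4 skin mask zeroes exactly the covered pixels, leaving the rest of the mask marked for processing.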
Step 150: perform beauty processing on the picture according to the beauty processing layer, and generate a beautified picture.
The portrait in the picture is beautified according to the beauty processing layer. The regions of the picture corresponding to the set regions that were spliced from the face key point layer onto the skin color processing layer are left untouched, i.e. the details of the set regions are preserved. The region of the picture corresponding to the rest of the skin color processing layer is beautified, and the processed region is spliced with the untreated part of the picture into the beautified picture.
In the technical scheme of this embodiment, detecting face key points identifies and preserves the details of the set regions, solving the total loss of picture detail caused by beautifying the whole picture: the portrait is beautified while the details of the set facial regions are retained.
Embodiment two
Fig. 3 is a flow chart of the beauty processing method provided by embodiment two of the present invention. On the basis of the scheme above, step 120 preferably includes:
Step 121: obtain estimated face key points.
Specifically, the estimated face key points can be obtained from a face picture data set annotated with face key points. The data set includes at least twenty thousand face pictures with manually marked key points. Face key point localization can be understood as follows: let the true face key points be S' = (x1, y1, ..., xn, yn), where (x1, y1) is the coordinate of the first key point in the picture and (xn, yn) that of the n-th. Predicting the key point positions S then becomes the optimization problem min ||S − S'||, i.e. minimizing the norm of the difference between the predicted position vector S and the true position vector S'. The mean of the face key points over all samples in the data set is taken as the estimated face key points.
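The mean-shape initialization and the objective min ||S − S'|| can be sketched as follows; shapes are flat coordinate lists, and these helpers are illustrative rather than the patented training code.

```python
def mean_shape(shapes):
    # Estimated face key points: the coordinate-wise mean over all annotated
    # training shapes, each a flat list (x1, y1, ..., xn, yn).
    n = len(shapes)
    return [sum(s[i] for s in shapes) / n for i in range(len(shapes[0]))]

def shape_error(pred, truth):
    # ||S - S'||: the Euclidean norm that the regression stages minimize.
    return sum((p - t) ** 2 for p, t in zip(pred, truth)) ** 0.5
```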
Step 122: input the estimated face key points into the first convolutional neural network for global regression training, generating the first face key points.
The first convolutional neural network is a global regression network. As an example, its structure is shown in Fig. 4. A scaling layer 401 uniformly rescales the input face region, marked with the estimated face key points, to 64×64. The first convolutional layer 402 has a 5×5 filter; convolving the input face region outputs 20 feature maps of size 60×60. These 20 feature maps are downsampled by a factor of 2 by the first max pooling layer 403, giving output maps of size 30×30. The second convolutional layer 404 has a 5×5 filter; after convolving the input feature maps it outputs 12 feature maps of size 26×26, which the second max pooling layer 405 downsamples by 2 to 13×13. The third convolutional layer 406 then convolves the feature maps with a 2×2 filter, and the third max pooling layer 407 downsamples by 2, outputting 40 feature maps of size 6×6. Finally, the fourth convolutional layer 408 convolves the feature maps with a 3×3 filter and the fourth max pooling layer 409 downsamples by 2, outputting 60 feature maps of size 2×2. All output feature maps are flattened into a column vector connected to the first fully connected layer 410 of 120 neuron nodes, which is in turn connected to the second fully connected layer 411 of 148 dimensions, outputting the regressed face key point coordinates (the first face key points). Finally, the parameters of the whole network structure are adjusted by back propagation. The trained global regression can roughly estimate the approximate positions of the face key points in the training sample picture (i.e. the face region of the picture to be processed).
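Assuming valid (unpadded), stride-1 convolutions, the feature-map sizes quoted above can be reproduced arithmetically; this trace is a consistency check on the description of Fig. 4, not network code.

```python
def conv(size, k):
    # A valid, stride-1 convolution shrinks each side by k - 1.
    return size - k + 1

def pool(size):
    # 2x max pooling halves each side.
    return size // 2

s = 64                # scaling layer 401: input rescaled to 64x64
s = pool(conv(s, 5))  # conv 402 (5x5) -> 60, pool 403 -> 30  (20 maps)
s = pool(conv(s, 5))  # conv 404 (5x5) -> 26, pool 405 -> 13  (12 maps)
s = pool(conv(s, 2))  # conv 406 (2x2) -> 12, pool 407 -> 6   (40 maps)
s = pool(conv(s, 3))  # conv 408 (3x3) -> 4,  pool 409 -> 2   (60 maps)
flat = 60 * s * s     # 240-dim vector -> FC 410 (120) -> FC 411 (148)
```

The 148-dimensional output matches 74 key points times 2 coordinates each.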
Step 123: input the first face key points into the second convolutional neural network for locally correlated regression training, generating the second face key points.
The second convolutional neural network is a locally correlated network. Preferably, the 74 first face key points are divided into 4 regions, shown as regions 1 to 4 in Fig. 5; regions 1 and 2 contain 22 face key points each, and regions 3 and 4 contain 15 each. As an example, the structure of the second network is shown in Fig. 6: apart from the last layer, it is identical to the first convolutional neural network. The last fully connected layer is split into one network branch per region: a first branch 611 of 48 dimensions, a second branch 612 of 48 dimensions, a third branch 613 of 30 dimensions and a fourth branch 614 of 30 dimensions. During back propagation, the parameters of a branch's fully connected layer are learned only when the key points of its own region are trained, whereas the parameters of all convolutional layers and fully connected layers before the branches are updated in every training pass. The branched fully connected structure of Fig. 6 preserves the individual characteristics of each set region marked by the face key points, while the shared convolutional layers keep learning the correlations between those regions, strengthening the global constraints among the face key points. The first face key points, after regression training through the second convolutional neural network, yield the second face key points.
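The per-region branching can be sketched as below. Which key-point indices fall in which region is an assumption made for illustration; only the counts 22, 22, 15 and 15 come from the text.

```python
# Illustrative split of the 74 first face key points into the four
# regional branches of Fig. 6.
REGIONS = {
    1: list(range(0, 22)),
    2: list(range(22, 44)),
    3: list(range(44, 59)),
    4: list(range(59, 74)),
}

def branch_targets(points, region):
    # Each fully connected branch regresses only its own region's points;
    # the shared convolutional layers before the branches see all of them.
    return [points[i] for i in REGIONS[region]]
```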
Step 124: input the key points of at least one set region among the second face key points into the third convolutional neural network for local fine-tuning regression training, generating facial detail key points; merge the facial detail key points with the key points outside the set region(s) among the second face key points into the third face key points, and generate the face key point layer.
The localization accuracy requirements are higher for the face key points of the set regions, preferably the eye, eyebrow and mouth regions, so the third convolutional neural network adjusts these key points separately. With the second face key points as initial values, the key points of the set regions (48 points in total) are extracted and processed together: because the positions of most people's eyes, eyebrows and mouth obey certain mutual constraints, regressing them jointly effectively prevents individual key points from drifting too far and improves localization accuracy. The eye, eyebrow and mouth regions are first cropped according to the second face key points and used as the input pictures of the third network; since this is local fine-tuning, only the surrounding regions need processing. The structure of the third network is shown in Fig. 7: a scaling layer 701 uniformly rescales the input picture to 32×32; the first convolutional layer 702 has a 3×3 filter and the second convolutional layer 703 a 2×2 filter; the first max pooling layer 704 downsamples by 2; the third convolutional layer 705 has a 3×3 filter and the fourth convolutional layer 706 a 2×2 filter; the second max pooling layer 707 downsamples the convolved feature maps by 2, and the fifth convolutional layer 708 then convolves with a 3×3 filter. All output feature maps are flattened into a column vector connected to the first fully connected layer 709 of 80 neuron nodes, which is connected to the second fully connected layer 710 of 96 dimensions, outputting the regressed face key point coordinates (the facial detail key points). The adjusted key points of the eye, eyebrow and mouth regions are merged with the key points outside the set region(s) among the second face key points; the result of the final face key point localization is the third face key points. From the third face key points the face contour can be delineated and the eyes, eyebrows and mouth extracted; these interconnected yet relatively independent regions constitute the face key point layer.
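The merge into the third face key points amounts to an index-wise replacement, sketched here under the assumption that key points are stored as a list of (x, y) tuples.

```python
def merge_key_points(second_stage, detail, set_region_indices):
    # Third face key points: the locally refined detail points replace the
    # set-region entries of the second-stage result; all others are kept.
    merged = list(second_stage)
    for idx, pt in zip(set_region_indices, detail):
        merged[idx] = pt
    return merged
```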
Further, as shown in Fig. 3, step 130 includes:
Step 131: generate the skin color processing layer from the picture using a skin color detection method based on region diffusion.
The region-diffusion skin color detection method first chooses evenly distributed, high-confidence pixels around the detected face region as skin color seed points; from the selected seeds it diffuses outward, detecting the connected surrounding region and generating the skin template parameters. The skin color processing layer is generated from the detected skin region.
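A minimal sketch of region diffusion as a seed-driven flood fill, assuming a grayscale picture and a scalar similarity test in place of a real skin-tone model; the patent does not specify these details.

```python
from collections import deque

def diffuse_skin(image, seeds, tol):
    # Grow a binary skin mask outward from seed pixels, adding 4-connected
    # neighbours whose value is within `tol` of the pixel they spread from.
    h, w = len(image), len(image[0])
    mask = [[0] * w for _ in range(h)]
    queue = deque(seeds)
    for x, y in seeds:
        mask[y][x] = 1
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < w and 0 <= ny < h and not mask[ny][nx]
                    and abs(image[ny][nx] - image[y][x]) <= tol):
                mask[ny][nx] = 1
                queue.append((nx, ny))
    return mask
```

On a toy picture with a bright "skin" block next to dark background, a single seed inside the block marks exactly the connected similar region.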
In the technical scheme of this embodiment, the face key points are located by convolutional neural networks, identifying and preserving the details of the set regions; accurate key point localization determines which facial regions keep their detail. The skin color processing layer is generated by region-diffusion skin color detection, fixing the part of the picture that needs beauty processing. The portrait is beautified while facial detail is retained, making the beautified picture more lifelike.
Embodiment three
Fig. 8 is a flow chart of the beauty processing method provided by embodiment three of the present invention. On the basis of the scheme above, step 150 is further refined to include:
Step 151: obtain the region to be processed in the picture.
The region covered by the beauty processing layer in the picture is taken as the region to be processed; beautifying the picture means beautifying this region, and the rest of the picture is not beautified.
Step 152: perform beauty processing on the part of the region to be processed outside the set region(s), generating a pre-processed picture.
The face key point layer and the skin color processing layer are composited into the beauty processing layer, and the set regions of the face key point layer keep their detail during beauty processing. Therefore only the part of the region to be processed outside the set regions is beautified: it is blurred by filtering to achieve a skin-smoothing effect that makes the portrait's skin smoother and finer, and its brightness is then raised to give the skin a fairer look. The processed skin region is spliced with the set regions into a pre-processed picture matching the position of the region to be processed.
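A sketch of step 152 under simple assumptions — a 3×3 box blur standing in for the smoothing filter, a flat brightness lift standing in for whitening, and binary masks; the actual filter and adjustment are not specified by the text.

```python
def beautify_region(image, skin_mask, preserve_mask, brighten=20):
    # Blur then brighten pixels that are skin but not in a preserved set
    # region; everything else is copied unchanged. `brighten` is illustrative.
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if skin_mask[y][x] and not preserve_mask[y][x]:
                # 3x3 box blur (clamped at the borders), then a lift.
                vals = [image[yy][xx]
                        for yy in range(max(0, y - 1), min(h, y + 2))
                        for xx in range(max(0, x - 1), min(w, x + 2))]
                out[y][x] = min(255, sum(vals) // len(vals) + brighten)
    return out
```

On a uniform picture the preserved center pixel keeps its value while the surrounding skin is brightened.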
Step 153: splice the pre-processed picture onto the corresponding position of the picture, generating the beautified picture.
The pre-processed picture is spliced onto the position of the region to be processed in the picture, synthesizing the beautified picture.
In the technical scheme of this embodiment, the skin region is beautified while the details of the set regions are preserved, generating a pre-processed picture that is then spliced back into the picture to produce the beautified picture. The portrait is beautified and facial detail retained, improving the viewing quality of the beautified picture.
Embodiment four
Fig. 9 is a structural schematic diagram of a beautification processing apparatus provided by Embodiment Four of the present invention. The beautification processing apparatus includes:
a face detection module 11, configured to perform face detection on a picture to be processed and determine the face region in the picture to be processed;
a face key point layer generation module 12, configured to perform face key point detection on the face region, determine face key points, and generate a face key point layer according to the face key points;
a skin tone processing layer generation module 13, configured to detect the skin color area of the picture to be processed and generate a skin tone processing layer;
a beautification processing layer generation module 14, configured to splice at least one set region of the face key point layer onto the corresponding position of the skin tone processing layer to synthesize a beautification processing layer;
a beautification picture generation module 15, configured to perform beautification processing on the picture to be processed according to the beautification processing layer and generate a beautified picture.
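The cooperation of modules 11-15 can be sketched as a simple pipeline. The stage functions below are stand-ins for the real detectors and networks described above; all names are hypothetical:

```python
def beautify_pipeline(picture, detect_face, detect_key_points,
                      detect_skin, merge_layers, apply_beauty):
    """Chain the five stages of the apparatus into one call."""
    face_region = detect_face(picture)                        # module 11
    key_point_layer = detect_key_points(face_region)          # module 12
    skin_layer = detect_skin(picture)                         # module 13
    beauty_layer = merge_layers(key_point_layer, skin_layer)  # module 14
    return apply_beauty(picture, beauty_layer)                # module 15
```

Passing the stages in as callables keeps the sketch independent of any particular detector or network implementation.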
Further, the face key point layer generation module includes:
a face key point estimation unit, configured to obtain estimated face key points;
a first face key point generation unit, configured to input the estimated face key points into a first convolutional neural network for global regression training and generate first face key points;
a second face key point generation unit, configured to input the first face key points into a second convolutional neural network for local correlation regression training and generate second face key points;
a third face key point generation unit, configured to input the key points of at least one set region among the second face key points into a third convolutional neural network for local fine-adjustment regression training, generate face detail key points, merge the face detail key points with the key points outside the at least one set region among the second face key points into third face key points, and generate the face key point layer.
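The three-stage coarse-to-fine cascade performed by these units can be sketched as follows, with plain callables standing in for the trained convolutional neural networks (all names hypothetical):

```python
def cascade(initial_points, net1, net2, net3, set_region_idx):
    """Refine key points in three stages, as in the unit description.

    net1/net2/net3 stand in for the first/second/third networks;
    set_region_idx lists the indices of set-region key points.
    """
    first = [net1(p) for p in initial_points]   # global regression
    second = [net2(p) for p in first]           # local correlation regression
    third = list(second)
    for i in set_region_idx:                    # fine local adjustment only
        third[i] = net3(second[i])              # face detail key points
    return third  # detail points merged with the untouched remainder
```

The key property mirrored here is that the third stage touches only the set-region points; everything else is carried over from the second stage unchanged.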
Further, the skin tone processing layer generation module is configured to generate the skin tone processing layer from the picture to be processed by using a region-diffusion-based skin color detection method.
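A hedged illustration of region-diffusion-style skin detection: grow a skin mask outward from a seed pixel by breadth-first search, accepting neighbours whose value is close to the seed's. This is a deliberate simplification — a single scalar channel, and similarity measured against the seed — whereas the cited region-diffusion method works on colour channels; all names are hypothetical.

```python
from collections import deque

def diffuse_skin(image, seed, tolerance=10):
    """Grow a skin mask from seed over pixels near the seed's value."""
    h, w = len(image), len(image[0])
    sy, sx = seed
    mask = [[False] * w for _ in range(h)]
    mask[sy][sx] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        # diffuse to the four direct neighbours
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny][nx]
                    and abs(image[ny][nx] - image[sy][sx]) <= tolerance):
                mask[ny][nx] = True
                queue.append((ny, nx))
    return mask
```

Pixels whose value differs from the seed by more than the tolerance stop the diffusion, so disconnected bright or dark areas are excluded from the skin mask.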
Further, the beautification picture generation module includes:
a region-to-be-processed determination unit, configured to obtain the region to be processed in the picture to be processed;
a processed sub-picture generation unit, configured to perform beautification processing on the area outside the set regions within the region to be processed and generate a preprocessed picture;
a beautified picture generation unit, configured to splice the preprocessed picture onto the corresponding position of the picture to be processed and generate the beautified picture.
Preferably, the set regions include at least one of an eye region, an eyebrow region, and a mouth region.
The above apparatus can perform the method provided by any embodiment of the present invention, and has functional modules and beneficial effects corresponding to the performed method.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will appreciate that the invention is not limited to the specific embodiments described here; various obvious changes, readjustments, and substitutions can be made by a person skilled in the art without departing from the protection scope of the invention. Therefore, although the invention has been described in further detail through the above embodiments, it is not limited to them; other equivalent embodiments may be included without departing from the inventive concept, and the scope of the invention is determined by the appended claims.

Claims (10)

1. A beautification processing method, characterized by comprising:
performing face detection on a picture to be processed, and determining a face region in the picture to be processed;
performing face key point detection on the face region, determining face key points, and generating a face key point layer according to the face key points;
detecting a skin color area of the picture to be processed, and generating a skin tone processing layer;
splicing at least one set region of the face key point layer onto a corresponding position of the skin tone processing layer to synthesize a beautification processing layer;
performing beautification processing on the picture to be processed according to the beautification processing layer, and generating a beautified picture.
2. The method according to claim 1, characterized in that performing face key point detection on the face region, determining face key points, and generating a face key point layer according to the face key points comprises:
obtaining estimated face key points;
inputting the estimated face key points into a first convolutional neural network for global regression training to generate first face key points;
inputting the first face key points into a second convolutional neural network for local correlation regression training to generate second face key points;
inputting the key points of at least one set region among the second face key points into a third convolutional neural network for local fine-adjustment regression training to generate face detail key points, merging the face detail key points with the key points outside the at least one set region among the second face key points into third face key points, and generating the face key point layer.
3. The method according to claim 1, characterized in that detecting the skin color area of the picture to be processed and generating a skin tone processing layer comprises:
generating the skin tone processing layer from the picture to be processed by using a region-diffusion-based skin color detection method.
4. The method according to claim 1, characterized in that performing beautification processing on the picture to be processed according to the beautification processing layer and generating a beautified picture comprises:
obtaining a region to be processed in the picture to be processed;
performing beautification processing on the area outside the set regions within the region to be processed to generate a preprocessed picture;
splicing the preprocessed picture onto a corresponding position of the picture to be processed to generate the beautified picture.
5. The method according to any one of claims 1-4, characterized in that the set regions comprise at least one of an eye region, an eyebrow region, and a mouth region.
6. A beautification processing apparatus, characterized by comprising:
a face detection module, configured to perform face detection on a picture to be processed and determine a face region in the picture to be processed;
a face key point layer generation module, configured to perform face key point detection on the face region, determine face key points, and generate a face key point layer according to the face key points;
a skin tone processing layer generation module, configured to detect a skin color area of the picture to be processed and generate a skin tone processing layer;
a beautification processing layer generation module, configured to splice at least one set region of the face key point layer onto a corresponding position of the skin tone processing layer to synthesize a beautification processing layer;
a beautification picture generation module, configured to perform beautification processing on the picture to be processed according to the beautification processing layer and generate a beautified picture.
7. The apparatus according to claim 6, characterized in that the face key point layer generation module comprises:
a face key point estimation unit, configured to obtain estimated face key points;
a first face key point generation unit, configured to input the estimated face key points into a first convolutional neural network for global regression training and generate first face key points;
a second face key point generation unit, configured to input the first face key points into a second convolutional neural network for local correlation regression training and generate second face key points;
a third face key point generation unit, configured to input the key points of at least one set region among the second face key points into a third convolutional neural network for local fine-adjustment regression training, generate face detail key points, merge the face detail key points with the key points outside the at least one set region among the second face key points into third face key points, and generate the face key point layer.
8. The apparatus according to claim 6, characterized in that the skin tone processing layer generation module is configured to generate the skin tone processing layer from the picture to be processed by using a region-diffusion-based skin color detection method.
9. The apparatus according to claim 6, characterized in that the beautification picture generation module comprises:
a region-to-be-processed determination unit, configured to obtain a region to be processed in the picture to be processed;
a processed sub-picture generation unit, configured to perform beautification processing on an area outside the set regions within the region to be processed and generate a preprocessed picture;
a beautified picture generation unit, configured to splice the preprocessed picture onto a corresponding position of the picture to be processed and generate the beautified picture.
10. The apparatus according to any one of claims 6-9, characterized in that the set regions comprise at least one of an eye region, an eyebrow region, and a mouth region.
CN201610105295.8A 2016-02-25 2016-02-25 Beautification processing method and apparatus Active CN105787878B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610105295.8A CN105787878B (en) 2016-02-25 2016-02-25 Beautification processing method and apparatus


Publications (2)

Publication Number Publication Date
CN105787878A true CN105787878A (en) 2016-07-20
CN105787878B CN105787878B (en) 2018-12-28

Family

ID=56402956

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610105295.8A Active CN105787878B (en) 2016-02-25 2016-02-25 Beautification processing method and apparatus

Country Status (1)

Country Link
CN (1) CN105787878B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1777913A (en) * 2003-03-20 2006-05-24 欧姆龙株式会社 Image processing device
US20140147003A1 (en) * 2012-11-23 2014-05-29 Nokia Corporation Method and Apparatus for Facial Image Processing
CN103839250A (en) * 2012-11-23 2014-06-04 诺基亚公司 Facial image processing method and device
CN103824049A (en) * 2014-02-17 2014-05-28 北京旷视科技有限公司 Cascaded neural network-based face key point detection method
CN104537612A (en) * 2014-08-05 2015-04-22 华南理工大学 Method for automatically beautifying skin of facial image
CN105354793A (en) * 2015-11-25 2016-02-24 小米科技有限责任公司 Facial image processing method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
朱睿 (ZHU Rui): "Improved Region-Diffusion Skin Color Detection Method", 《HTTP://WWW.PAPER.EDU.CN/RELEASEPAPER/CONTENT/200912-806》 *
杨海燕 et al. (YANG Haiyan et al.): "Research on Face Key Point Localization Methods Based on Parallel Convolutional Neural Networks", Application Research of Computers (《计算机应用研究》) *

Cited By (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018019068A1 (en) * 2016-07-27 2018-02-01 中兴通讯股份有限公司 Photographing method and device, and mobile terminal
CN106447638A (en) * 2016-09-30 2017-02-22 北京奇虎科技有限公司 Beauty treatment method and device thereof
CN106778751B (en) * 2017-02-20 2020-08-21 迈吉客科技(北京)有限公司 Non-facial ROI (region of interest) identification method and device
CN106778751A (en) * 2017-02-20 2017-05-31 迈吉客科技(北京)有限公司 Non-facial ROI recognition method and device
WO2018149350A1 (en) * 2017-02-20 2018-08-23 迈吉客科技(北京)有限公司 Method and apparatus for recognising non-facial roi
US11132824B2 (en) 2017-04-14 2021-09-28 Shenzhen Sensetime Technology Co., Ltd. Face image processing method and apparatus, and electronic device
CN108229278B (en) * 2017-04-14 2020-11-17 深圳市商汤科技有限公司 Face image processing method and device and electronic equipment
CN108229278A (en) * 2017-04-14 2018-06-29 深圳市商汤科技有限公司 Face image processing method, device and electronic equipment
US11250241B2 (en) 2017-04-14 2022-02-15 Shenzhen Sensetime Technology Co., Ltd. Face image processing methods and apparatuses, and electronic devices
CN107424115A (en) * 2017-05-31 2017-12-01 成都品果科技有限公司 Skin tone correction algorithm based on face key points
CN107358573A (en) * 2017-06-16 2017-11-17 广东欧珀移动通信有限公司 Image beautification processing method and apparatus
CN107341774A (en) * 2017-06-16 2017-11-10 广东欧珀移动通信有限公司 Face image beautification processing method and device
CN107341777A (en) * 2017-06-26 2017-11-10 北京小米移动软件有限公司 Image processing method and device
WO2019000777A1 (en) * 2017-06-27 2019-01-03 五邑大学 Internet-based face beautification system
CN107578380A (en) * 2017-08-07 2018-01-12 北京金山安全软件有限公司 Image processing method and device, electronic equipment and storage medium
CN109583277B (en) * 2017-09-29 2021-04-20 大连恒锐科技股份有限公司 Gender determination method of barefoot footprint based on CNN
CN109583277A (en) * 2017-09-29 2019-04-05 大连恒锐科技股份有限公司 CNN-based gender determination method for barefoot or socked footprints
CN107730465A (en) * 2017-10-09 2018-02-23 武汉斗鱼网络科技有限公司 Face beautification method and device in an image
CN107730465B (en) * 2017-10-09 2020-09-04 武汉斗鱼网络科技有限公司 Face beautifying method and device in image
CN107945134A (en) * 2017-11-30 2018-04-20 北京小米移动软件有限公司 Image processing method and device
CN107945134B (en) * 2017-11-30 2020-10-09 北京小米移动软件有限公司 Image processing method and device
CN110020982B (en) * 2018-01-09 2022-12-27 武汉斗鱼网络科技有限公司 Cheek automatic beautifying method, storage medium, electronic device and system
CN110020982A (en) * 2018-01-09 2019-07-16 武汉斗鱼网络科技有限公司 Automatic cheek beautifying method, storage medium, electronic device and system
CN108320266A (en) * 2018-02-09 2018-07-24 北京小米移动软件有限公司 Method and apparatus for generating a beautified picture
CN108492348A (en) * 2018-03-30 2018-09-04 北京金山安全软件有限公司 Image processing method, image processing device, electronic equipment and storage medium
WO2019184715A1 (en) * 2018-03-30 2019-10-03 北京金山安全软件有限公司 Image processing method and device, electronic device and storage medium
CN108875594B (en) * 2018-05-28 2023-07-18 腾讯科技(深圳)有限公司 Face image processing method, device and storage medium
CN108875594A (en) * 2018-05-28 2018-11-23 腾讯科技(深圳)有限公司 Face image processing method, device and storage medium
CN112330824A (en) * 2018-05-31 2021-02-05 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN109345480A (en) * 2018-09-28 2019-02-15 广州云从人工智能技术有限公司 Automatic face acne removal method based on an inpainting model
CN109345480B (en) * 2018-09-28 2020-11-27 广州云从人工智能技术有限公司 Face automatic acne removing method based on image restoration model
CN109389076B (en) * 2018-09-29 2022-09-27 深圳市商汤科技有限公司 Image segmentation method and device
CN109389076A (en) * 2018-09-29 2019-02-26 深圳市商汤科技有限公司 Image segmentation method and device
CN109360176A (en) * 2018-10-15 2019-02-19 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and computer readable storage medium
CN109712082A (en) * 2018-12-05 2019-05-03 厦门美图之家科技有限公司 Method and device for collaboratively retouching a picture
CN109712082B (en) * 2018-12-05 2020-08-07 厦门美图之家科技有限公司 Method and device for collaboratively repairing picture
CN109819318A (en) * 2019-02-02 2019-05-28 广州虎牙信息科技有限公司 Image processing and live broadcast method, device, computer equipment and storage medium
CN109819318B (en) * 2019-02-02 2022-03-22 广州虎牙信息科技有限公司 Image processing method, live broadcast method, device, computer equipment and storage medium
CN110084219A (en) * 2019-05-07 2019-08-02 厦门美图之家科技有限公司 Interface interaction method and device
CN110363717B (en) * 2019-06-28 2021-07-23 北京字节跳动网络技术有限公司 Method, device, medium and electronic equipment for processing face image
CN110363717A (en) * 2019-06-28 2019-10-22 北京字节跳动网络技术有限公司 Method, apparatus, medium and electronic device for processing a face image
CN110378254B (en) * 2019-07-03 2022-04-19 中科软科技股份有限公司 Method and system for identifying vehicle damage image modification trace, electronic device and storage medium
CN110378254A (en) * 2019-07-03 2019-10-25 中科软科技股份有限公司 Recognition method, system, electronic device and storage medium for vehicle damage image modification traces
CN110348358A (en) * 2019-07-03 2019-10-18 网易(杭州)网络有限公司 Skin color detection system, method, medium and computing device
CN111179156A (en) * 2019-12-23 2020-05-19 北京中广上洋科技股份有限公司 Video beautifying method based on face detection
CN111179156B (en) * 2019-12-23 2023-09-19 北京中广上洋科技股份有限公司 Video beautifying method based on face detection
CN113596314B (en) * 2020-04-30 2022-11-11 北京达佳互联信息技术有限公司 Image processing method and device and electronic equipment
WO2021218118A1 (en) * 2020-04-30 2021-11-04 北京达佳互联信息技术有限公司 Image processing method and apparatus
CN113596314A (en) * 2020-04-30 2021-11-02 北京达佳互联信息技术有限公司 Image processing method and device and electronic equipment
CN111583280A (en) * 2020-05-13 2020-08-25 北京字节跳动网络技术有限公司 Image processing method, device, equipment and computer readable storage medium
CN111784611A (en) * 2020-07-03 2020-10-16 厦门美图之家科技有限公司 Portrait whitening method, portrait whitening device, electronic equipment and readable storage medium
CN111784611B (en) * 2020-07-03 2023-11-03 厦门美图之家科技有限公司 Portrait whitening method, device, electronic equipment and readable storage medium
CN113743243A (en) * 2021-08-13 2021-12-03 厦门大学 Face beautifying method based on deep learning
CN114418901A (en) * 2022-03-30 2022-04-29 江西中业智能科技有限公司 Image beautifying processing method, system, storage medium and equipment based on Retinaface algorithm
CN114418901B (en) * 2022-03-30 2022-08-09 江西中业智能科技有限公司 Image beautifying processing method, system, storage medium and equipment based on Retinaface algorithm

Also Published As

Publication number Publication date
CN105787878B (en) 2018-12-28

Similar Documents

Publication Publication Date Title
CN105787878A (en) Beauty processing method and device
EP3338217B1 (en) Feature detection and masking in images based on color distributions
CN108229278B (en) Face image processing method and device and electronic equipment
CN105184249B (en) Method and apparatus for face image processing
CN105469379B (en) Video target area shielding method and device
CN107123083A (en) Face edit methods
Baskan et al. Projection based method for segmentation of human face and its evaluation
CN103456010B (en) A kind of human face cartoon generating method of feature based point location
WO2018225061A1 (en) System and method for image de-identification
CN104318558B (en) Hand Gesture Segmentation method based on Multi-information acquisition under complex scene
CN106960202A (en) A kind of smiling face's recognition methods merged based on visible ray with infrared image
CN104992402A (en) Facial beautification processing method and device
CN108875462A (en) Eyebrow moulding guidance device and its method
CN109584153A (en) Modify the methods, devices and systems of eye
CN111243051B (en) Portrait photo-based simple drawing generation method, system and storage medium
CN113850169B (en) Face attribute migration method based on image segmentation and generation countermeasure network
CN107194869A (en) A kind of image processing method and terminal, computer-readable storage medium, computer equipment
CN108073851A (en) A kind of method, apparatus and electronic equipment for capturing gesture identification
CN110531853A (en) A kind of E-book reader control method and system based on human eye fixation point detection
CN111179156B (en) Video beautifying method based on face detection
US10803677B2 (en) Method and system of automated facial morphing for eyebrow hair and face color detection
Montazeri et al. Automatic extraction of eye field from a gray intensity image using intensity filtering and hybrid projection function
Jin et al. Facial makeup transfer combining illumination transfer
CN113379623A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112488965A (en) Image processing method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant