CN110473295A - Method and apparatus for performing beautification processing based on a three-dimensional face model - Google Patents
Method and apparatus for performing beautification processing based on a three-dimensional face model
- Publication number: CN110473295A
- Application number: CN201910726111.3A
- Authority
- CN
- China
- Prior art keywords
- face
- model
- thin
- effect
- obtains
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Architecture (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a method and apparatus for performing beautification processing based on a three-dimensional face model. The method comprises: scanning a real human face to obtain a corresponding three-dimensional face model; detecting the three-dimensional face model to determine a preset number of key feature points; determining a target beautification effect according to the user's selection, and taking the key feature points corresponding to the target beautification effect as beautification feature points; and adjusting the feature values of the beautification feature points according to a preset adjustment range, to obtain a beautified three-dimensional face model. Efficient beautification processing of three-dimensional face models is thereby achieved, giving the user a better beautification experience.
Description
Technical field
The present invention relates to the technical field of facial image processing, and in particular to a method and apparatus for performing beautification processing based on a three-dimensional face model.
Background technique
Many beautification techniques and beautification software products exist in the prior art, but they all apply a unified set of default beautification parameters to a real human face, so the beautification results look uniform. A three-dimensional face model can be generated by scanning a target real face, but because every person's features differ, the mesh wiring of the generated model also differs, which makes it difficult to beautify different three-dimensional face models.
Summary of the invention
The present invention provides a method for performing beautification processing based on a three-dimensional face model, to solve the prior-art problems that beautification can only be applied to a real face with unified default parameters and that it is difficult to beautify different three-dimensional face models. The method comprises:
Scanning a real human face to obtain a corresponding three-dimensional face model;
Detecting the three-dimensional face model to determine a preset number of key feature points;
Determining a target beautification effect according to the user's selection, and taking the key feature points corresponding to the target beautification effect as beautification feature points;
Adjusting the feature values of the beautification feature points according to a preset adjustment range, to obtain a beautified three-dimensional face model.
Preferably, scanning to obtain the three-dimensional face model corresponding to the real face specifically comprises:
Scanning the real face to acquire original images of its front, left side and right side;
Using the original images as the three-dimensional face model, and/or generating a model mesh from the original images and using it as the three-dimensional face model.
Preferably, when the target beautification effect is whitening and skin smoothing, adjusting the feature values of the beautification feature points according to the preset adjustment range specifically comprises:
Processing the original image with an optimized surface-blur filtering algorithm to obtain a second image, wherein the optimization specifically consists in reducing the algorithm's complexity;
Performing high-contrast-retain processing on the original image and the second image to obtain a third image;
Performing a hard-light operation on the third image and amplifying its contrast to obtain a fourth image;
Darkening the shadow areas of the fourth image through a levels adjustment and selecting the blemish areas of the facial skin to obtain a fifth image;
Blending the second image with the fifth image and then unwrapping the result by the UV texture coordinates of the original image to obtain three UV texture maps;
Compositing the three UV texture maps to obtain a sixth image;
Removing the nose shadow in the sixth image based on a fixed mask;
Selecting a skin-color-region mask with a skin-detection smoothing algorithm based on the HSV color space;
Determining the skin color based on a linear amplification operation in the HSV color space.
Preferably, when the target beautification effect is eye enlargement, adjusting the feature values of the beautification feature points according to the preset adjustment range specifically comprises:
Step A, determining the eye layout in the model mesh;
Step B, determining the eye-enlargement action region from the eye layout and the distribution pattern of eyes on the face;
Step C, selecting the mesh vertices within the eye-enlargement action region;
Step D, processing the mesh vertices with a local scaling warp algorithm;
Step E, adjusting the feature values of the processed mesh vertices according to the preset adjustment range of the eye-enlargement parameter, wherein steps A to D are repeated with different eye-enlargement parameters to tune the enlargement effect, and the preset adjustment range of the eye-enlargement parameter is determined from the results of that tuning.
Preferably, when the target beautification effect is face slimming, adjusting the feature values of the beautification feature points according to the preset adjustment range specifically comprises:
Step a, generating a face-slimming blendshape from the model mesh;
Step b, blending the blendshape and the model mesh into a slimmed model mesh using a mixing coefficient;
Step c, adjusting the feature values of the slimmed model mesh according to the preset adjustment range of the slimming parameter, wherein steps a to b are repeated with different slimming parameters to tune the slimming effect, and the preset adjustment range of the slimming parameter is determined from the results of that tuning.
Correspondingly, the present application also proposes an apparatus for performing beautification processing based on a three-dimensional face model, comprising:
A scanning module, configured to scan a real human face to obtain a corresponding three-dimensional face model;
A detection module, configured to detect the three-dimensional face model and determine a preset number of key feature points;
A determining module, configured to determine a target beautification effect according to the user's selection and take the key feature points corresponding to the target beautification effect as beautification feature points;
An adjustment module, configured to adjust the feature values of the beautification feature points according to a preset adjustment range, to obtain a beautified three-dimensional face model.
Preferably, the scanning module is specifically configured to:
Scan the real face to acquire original images of its front, left side and right side;
Use the original images as the three-dimensional face model, and/or generate a model mesh from the original images and use it as the three-dimensional face model.
Preferably, when the target beautification effect is whitening and skin smoothing, the adjustment module is specifically configured to:
Process the original image with an optimized surface-blur filtering algorithm to obtain a second image, wherein the optimization specifically consists in reducing the algorithm's complexity;
Perform high-contrast-retain processing on the original image and the second image to obtain a third image;
Perform a hard-light operation on the third image and amplify its contrast to obtain a fourth image;
Darken the shadow areas of the fourth image through a levels adjustment and select the blemish areas of the facial skin to obtain a fifth image;
Blend the second image with the fifth image and then unwrap the result by the UV texture coordinates of the original image to obtain three UV texture maps;
Composite the three UV texture maps to obtain a sixth image;
Remove the nose shadow in the sixth image based on a fixed mask;
Select a skin-color-region mask with a skin-detection smoothing algorithm based on the HSV color space;
Determine the skin color based on a linear amplification operation in the HSV color space.
Preferably, when the target beautification effect is eye enlargement, the adjustment module is specifically configured to perform:
Step A, determining the eye layout in the model mesh;
Step B, determining the eye-enlargement action region from the eye layout and the distribution pattern of eyes on the face;
Step C, selecting the mesh vertices within the eye-enlargement action region;
Step D, processing the mesh vertices with a local scaling warp algorithm;
Step E, adjusting the feature values of the processed mesh vertices according to the preset adjustment range of the eye-enlargement parameter, wherein steps A to D are repeated with different eye-enlargement parameters to tune the enlargement effect, and the preset adjustment range of the eye-enlargement parameter is determined from the results of that tuning.
Preferably, when the target beautification effect is face slimming, the adjustment module is specifically configured to perform:
Step a, generating a face-slimming blendshape from the model mesh;
Step b, blending the blendshape and the model mesh into a slimmed model mesh using a mixing coefficient;
Step c, adjusting the feature values of the slimmed model mesh according to the preset adjustment range of the slimming parameter, wherein steps a to b are repeated with different slimming parameters to tune the slimming effect, and the preset adjustment range of the slimming parameter is determined from the results of that tuning.
It can be seen that, by applying the above technical solution, a three-dimensional face model corresponding to a real face is obtained by scanning; the three-dimensional face model is detected to determine a preset number of key feature points; a target beautification effect is determined according to the user's selection, and the key feature points corresponding to the target effect are taken as beautification feature points; and the feature values of the beautification feature points are adjusted according to a preset adjustment range to obtain a beautified three-dimensional face model. Efficient beautification processing of three-dimensional face models is thereby achieved, giving the user a better beautification experience.
Brief description of the drawings
Fig. 1 is a flow diagram of the method for performing beautification processing based on a three-dimensional face model proposed by an embodiment of the present application;
Fig. 2 is a structural diagram of the apparatus for performing beautification processing based on a three-dimensional face model proposed by an embodiment of the present application;
Fig. 3 is the original image for whitening and skin smoothing obtained in a specific embodiment of the present application;
Fig. 4 is the model mesh for eye enlargement obtained in a specific embodiment of the present application;
Fig. 5 is the model mesh for face slimming obtained in a specific embodiment of the present application;
Fig. 6 is the result image after processing by the optimized surface-blur filtering algorithm in a specific embodiment of the present application;
Fig. 7 is the result image after high-contrast-retain processing in a specific embodiment of the present application;
Fig. 8 is the facial shadow result image after the hard-light operation in a specific embodiment of the present application;
Fig. 9 is the result image after the levels adjustment in a specific embodiment of the present application;
Fig. 10 is the result image after skin smoothing and blemish removal in a specific embodiment of the present application;
Fig. 11 is the first UV texture map in a specific embodiment of the present application;
Fig. 12 is the second UV texture map in a specific embodiment of the present application;
Fig. 13 is the third UV texture map in a specific embodiment of the present application;
Fig. 14 is the result image after compositing the three UV texture maps in a specific embodiment of the present application;
Fig. 15 is the result image after removing the nose shadow in a specific embodiment of the present application;
Fig. 16 is the result image after selecting the skin-color-region mask in a specific embodiment of the present application;
Fig. 17 is the result image after determining the skin color in a specific embodiment of the present application;
Fig. 18 is the result image of the three-dimensional face model after whitening and skin smoothing in a specific embodiment of the present application;
Fig. 19 is the result image after determining the eye-enlargement action region in a specific embodiment of the present application;
Fig. 20 is the result image after selecting the mesh vertices in the eye-enlargement action region in a specific embodiment of the present application;
Fig. 21 is the result image of the three-dimensional face model after eye enlargement in a specific embodiment of the present application;
Fig. 22 is the result image of the three-dimensional face model after face slimming in a specific embodiment of the present application.
Specific embodiment
As stated in the background, in the prior art beautification can only be applied to a real face with unified default parameters, and it is difficult to beautify different three-dimensional face models.
To solve the above problems, an embodiment of the present application proposes a method for performing beautification processing based on a three-dimensional face model: key feature points of the face are obtained by detection, and the feature values of those key feature points are adjusted according to a preset adjustment range, so that different three-dimensional face models can be beautified in real time with better beautification results.
As shown in Fig. 1, a flow diagram of the method for performing beautification processing based on a three-dimensional face model proposed by the present application, the method comprises the following steps:
S101, scanning a real human face to obtain a corresponding three-dimensional face model.
Specifically, the real face is three-dimensionally scanned with a scanning device such as a camera or scanner to obtain the corresponding three-dimensional face model.
It should be noted that those skilled in the art can use different scanning devices to scan the real face as the situation requires; this does not affect the protection scope of the present application.
Considering that different beautification effects need to be based on different forms of the three-dimensional face model, in a preferred embodiment of the present application, scanning to obtain the three-dimensional face model corresponding to the real face specifically comprises:
Scanning the real face to acquire original images of its front, left side and right side;
Using the original images as the three-dimensional face model, and/or generating a model mesh from the original images and using it as the three-dimensional face model.
Specifically, original images of the front, left side and right side of the real face are obtained by scanning it. These original images can serve as the three-dimensional face model, on which beautification such as whitening and skin smoothing can be performed; and/or a model mesh generated from the original images can serve as the three-dimensional face model, on which beautification such as eye enlargement and face slimming can be performed.
When generating the model mesh from the original images, those skilled in the art can flexibly choose among different approaches, such as the AR development platform ARKit or depth-map techniques; the choice of approach does not affect the protection scope of the present application.
S102, detecting the three-dimensional face model to determine a preset number of key feature points.
Specifically, the three-dimensional face model obtained in the previous step is detected to determine a preset number of key feature points. A key feature point is a point at a position that has a major influence on the shape of the three-dimensional face model, such as the points corresponding to the main facial features: the face contour, the eye corners, the nose tip, the nose wings, the mouth corners and the eyebrows.
Those skilled in the art can determine different preset numbers of key feature points according to actual needs; this does not affect the protection scope of the present application.
S103, determining a target beautification effect according to the user's selection, and taking the key feature points corresponding to the target beautification effect as beautification feature points.
Specifically, the user can select different target beautification effects as desired; for example, when the user wants to apply whitening and skin smoothing to the three-dimensional face model, whitening and skin smoothing can be selected as the target beautification effect. Since different target effects operate on different key feature points, the key feature points corresponding to the target effect are taken as the beautification feature points on which the subsequent processing acts.
Those skilled in the art can choose different correspondences between key feature points and a given target beautification effect according to actual needs; this does not affect the protection scope of the present application.
S104, adjusting the feature values of the beautification feature points according to a preset adjustment range, to obtain a beautified three-dimensional face model.
Specifically, the corresponding beautification processing is performed by adjusting the feature values of the beautification feature points. An adjustment range can be preset, and the feature values of the beautification feature points are adjusted within that range to obtain the beautified three-dimensional face model.
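As a minimal sketch of step S104 (our own illustration, not a formula given in the patent), adjusting a feature value "according to a preset adjustment range" amounts to applying the requested change and clamping the result to that range:

```python
def adjust_feature(value, delta, lo, hi):
    """Apply an adjustment delta to a beautification feature value,
    clamping the result to the preset adjustment range [lo, hi]
    so the effect cannot overshoot the range."""
    return min(hi, max(lo, value + delta))
```

For example, with a preset range of [0.0, 0.8], an adjustment that would push the value to 0.9 is clamped back to 0.8.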
To ensure the whitening and skin-smoothing beautification effect on the three-dimensional face model, in a preferred embodiment of the present application, when the target beautification effect is whitening and skin smoothing, adjusting the feature values of the beautification feature points according to the preset adjustment range specifically comprises:
Processing the original image with an optimized surface-blur filtering algorithm to obtain a second image, wherein the optimization specifically consists in reducing the algorithm's complexity;
Performing high-contrast-retain processing on the original image and the second image to obtain a third image;
Performing a hard-light operation on the third image and amplifying its contrast to obtain a fourth image;
Darkening the shadow areas of the fourth image through a levels adjustment and selecting the blemish areas of the facial skin to obtain a fifth image;
Blending the second image with the fifth image and then unwrapping the result by the UV texture coordinates of the original image to obtain three UV texture maps;
Compositing the three UV texture maps to obtain a sixth image;
Removing the nose shadow in the sixth image based on a fixed mask;
Selecting a skin-color-region mask with a skin-detection smoothing algorithm based on the HSV color space;
Determining the skin color based on a linear amplification operation in the HSV color space.
As described above, when the target beautification effect selected by the user is whitening and skin smoothing, the feature values of the beautification feature points are adjusted within the preset adjustment range through a pipeline: the original image is filtered with the optimized surface-blur algorithm, followed by high-contrast retain, a hard-light operation with contrast amplification, a levels adjustment, splitting and compositing of the UV texture maps, removal of the nose shadow with a fixed mask, selection of a skin-color-region mask in the HSV color space, and determination of the skin color. Together these steps produce the adjustment corresponding to whitening and skin smoothing.
It should be noted that the scheme of the above preferred embodiment is only one specific implementation proposed by the present application; other ways of adjusting the feature values of the beautification feature points according to a preset adjustment range also fall within the protection scope of the present application.
To ensure the eye-enlargement beautification effect on the three-dimensional face model, in a preferred embodiment of the present application, when the target beautification effect is eye enlargement, adjusting the feature values of the beautification feature points according to the preset adjustment range specifically comprises:
Step A, determining the eye layout in the model mesh;
Step B, determining the eye-enlargement action region from the eye layout and the distribution pattern of eyes on the face;
Step C, selecting the mesh vertices within the eye-enlargement action region;
Step D, processing the mesh vertices with a local scaling warp algorithm;
Step E, adjusting the feature values of the processed mesh vertices according to the preset adjustment range of the eye-enlargement parameter.
As described above, when the target beautification effect selected by the user is eye enlargement, the feature values are adjusted within the preset range as follows: A, the eye layout in the model mesh is determined, where the eye layout represents the position and rotation of the eyes; B, the eye-enlargement action region is determined from the eye layout and the distribution pattern of eyes on the face; C, the mesh vertices within the action region are selected; D, those vertices are processed with a local scaling warp algorithm; E, the processed vertices are adjusted according to the preset adjustment range of the eye-enlargement parameter. The preset adjustment range of the eye-enlargement parameter can be determined as follows: operations A to D are repeated with different parameter values to tune the enlargement effect, and the range is chosen from the results of that tuning.
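Step D above can be illustrated with a small Python sketch of a local scaling warp on 2D mesh vertices. The patent does not specify the exact warp function; the quadratic falloff below is one common choice and is our own assumption, as are the function and parameter names:

```python
import math

def enlarge_region(vertices, center, radius, strength):
    """Local scaling warp sketch: vertices inside the action region are
    pushed away from the eye center, with the displacement falling
    smoothly to zero at the region boundary so the mesh stays continuous.
    strength in [0, 1] plays the role of the eye-enlargement parameter."""
    cx, cy = center
    out = []
    for (x, y) in vertices:
        dx, dy = x - cx, y - cy
        d = math.hypot(dx, dy)
        if 0.0 < d < radius:
            t = d / radius                       # 0 at center, 1 at boundary
            scale = 1.0 + strength * (1.0 - t) ** 2
            out.append((cx + dx * scale, cy + dy * scale))
        else:
            out.append((x, y))                   # outside the action region
    return out
```

Repeating this with different `strength` values and inspecting the result corresponds to the tuning loop of steps A to D described above.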
It should be noted that the scheme of the above preferred embodiment is only one specific implementation proposed by the present application; other ways of adjusting the feature values of the beautification feature points according to a preset adjustment range also fall within the protection scope of the present application.
To ensure the face-slimming beautification effect on the three-dimensional face model, in a preferred embodiment of the present application, when the target beautification effect is face slimming, adjusting the feature values of the beautification feature points according to the preset adjustment range specifically comprises:
Generating a face-slimming blendshape (morph target) from the model mesh;
Blending the blendshape with the model mesh using a mixing coefficient to obtain a slimmed model mesh;
Adjusting the slimmed model mesh according to the preset adjustment range of the slimming parameter.
As described above, when the target beautification effect selected by the user is face slimming, the feature values are adjusted within the preset range as follows: a, a face-slimming blendshape is generated from the model mesh; b, the blendshape and the model mesh are blended into a slimmed model mesh using a mixing coefficient; c, the feature values of the slimmed model mesh are adjusted according to the preset adjustment range of the slimming parameter. The preset adjustment range of the slimming parameter can be determined as follows: steps a to b are repeated with different parameter values to tune the slimming effect, and the range is chosen from the results of that tuning.
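Step b, blending a blendshape into the base mesh with a mixing coefficient, can be sketched as per-vertex linear interpolation. The function and variable names are our own illustration; the patent only names the blendshape and the mixing coefficient:

```python
def mix_blendshape(base, blendshape, coeff):
    """Blend the neutral mesh with the face-slimming blendshape target:
    each vertex is linearly interpolated, v = base + coeff * (target - base),
    where coeff in [0, 1] is the slimming mixing coefficient."""
    return [tuple(b + coeff * (t - b) for b, t in zip(bv, tv))
            for bv, tv in zip(base, blendshape)]
```

With `coeff = 0` the result is the original mesh, with `coeff = 1` the full slimming target; intermediate values give the graded slimming effect tuned in step c.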
It should be noted that the scheme of the above preferred embodiment is only one specific implementation proposed by the present application; other ways of adjusting the feature values of the beautification feature points according to a preset adjustment range also fall within the protection scope of the present application.
By applying the above technical solution, a three-dimensional face model corresponding to a real face is obtained by scanning; the three-dimensional face model is detected to determine a preset number of key feature points; a target beautification effect is determined according to the user's selection, and the key feature points corresponding to the target effect are taken as beautification feature points; and the feature values of the beautification feature points are adjusted according to a preset adjustment range to obtain a beautified three-dimensional face model. Efficient beautification processing of three-dimensional face models is thereby achieved, giving the user a better beautification experience.
To further explain the technical idea of the present invention, the technical solution is now illustrated in combination with a specific application scenario.
An embodiment of the present application provides a method for performing beautification processing based on a three-dimensional face model: a three-dimensional face model is generated by scanning a real face; multiple key feature points of the model are determined; the beautification feature points corresponding to the target beautification effect selected by the user are determined; and the feature values of those points are adjusted within a preset range, so that different three-dimensional face models can be beautified in real time with better beautification results.
The three-dimensional face model can be acquired as follows: the real face is scanned with a scanning device such as a camera to obtain original images of its front, left side and right side, as shown in Fig. 3; a model mesh is obtained from the original images using the AR development platform ARKit, as shown in Fig. 4 and Fig. 5 (those skilled in the art can also obtain the model mesh in other ways, such as depth-map techniques); and the original images or the model mesh are used as the three-dimensional face model to be processed.
When the target beautification effect selected by the user is whitening and skin smoothing, the beautification processing can proceed through the following steps:
In the first step, to preserve the edge information of the original image, an edge-preserving filter is applied to the original image by a surface-blur filtering algorithm, whose principle is given by the formula:
X_out = Σ_i w_i · x_i / Σ_i w_i, where w_i = max(0, 1 - |x_i - x| / (2.5 · Y)) and the sum runs over the pixels in the neighborhood of radius r,
wherein:
r denotes the neighborhood radius;
Y denotes the threshold, with range [0, 255];
x denotes the current pixel value;
x_i denotes the i-th pixel value in the neighborhood of radius r;
X_out denotes the output value of the surface blur.
However, the complexity of surface-blur filtering is O(n²) per pixel. To improve processing efficiency, the embodiment of the present application optimizes it: using O(1) vector add/subtract operations from the x86/ARM instruction sets and O(1)-per-pixel operations on a running convolution histogram, the complexity of the surface-blur filter is reduced to O(1) per pixel. The specific optimization process is prior art and is not described here. Edge-preserving filtering of the original image with the optimized surface-blur algorithm thus improves processing efficiency; the processed result is shown in Fig. 6.
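To make the weighting concrete, here is a direct, unoptimized (O(r²) per pixel) Python sketch of the surface blur described above, operating on a single-channel image stored as nested lists. The function names and grayscale-only handling are our own assumptions, not part of the patent:

```python
def surface_blur_pixel(img, x, y, r, Y):
    """Surface blur for one pixel: neighbors whose values are close to
    the center pixel get high weight; neighbors across a strong edge
    (difference >= 2.5 * Y) get zero weight, so edges are preserved."""
    h, w = len(img), len(img[0])
    center = img[y][x]
    num = den = 0.0
    for j in range(max(0, y - r), min(h, y + r + 1)):
        for i in range(max(0, x - r), min(w, x + r + 1)):
            v = img[j][i]
            # weight 1 - |v - center| / (2.5 * Y), clamped at 0
            wgt = max(0.0, 1.0 - abs(v - center) / (2.5 * Y))
            num += wgt * v
            den += wgt
    return num / den if den > 0 else center

def surface_blur(img, r=2, Y=15):
    """Apply the per-pixel surface blur over the whole image."""
    h, w = len(img), len(img[0])
    return [[surface_blur_pixel(img, x, y, r, Y) for x in range(w)]
            for y in range(h)]
```

Uniform regions are averaged (and so smoothed), while pixels on opposite sides of a strong edge do not mix, which is exactly the edge-preserving behavior the first step relies on.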
In the second step, a high-contrast-retain operation is performed on the original image and the image processed in the first step.
High-contrast retain keeps only the high-contrast parts of the image, mainly its edge and contour components: pixels with high contrast against their surroundings are retained, while all other areas turn grey. For a portrait this keeps the eyes, mouth, skin blemishes and contours; the processed result is shown in Fig. 7.
The principle of high-contrast retain is given by the formula:
f(x) = x - blur(x) + 0.5
wherein each pixel value x is in the range [0, 1], and blur is the surface-blur function of the previous step.
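The formula above can be sketched directly in Python; the clamping to [0, 1] is our own addition to keep pixel values in range:

```python
def high_pass(image, blurred):
    """High-contrast retain: subtract the blurred image from the
    original and re-center around mid-grey 0.5. Flat areas land on
    0.5 (grey); edges and blemishes keep their local contrast.
    Results are clamped to [0, 1]."""
    return [[min(1.0, max(0.0, x - b + 0.5))
             for x, b in zip(row, brow)]
            for row, brow in zip(image, blurred)]
```

A pixel identical to its blurred value maps to 0.5, which is why the output of this step is mostly grey except at edges.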
In the third step, a hard-light operation is applied to the image processed in the second step and its contrast is amplified, producing the shadow map of the face; the processed result is shown in Fig. 8.
The principle of the hard-light operation is given by the formula:
f(x) = 2 · x · x, if x ≤ 0.5
f(x) = 1 - 2 · (1 - x) · (1 - x), if x > 0.5
wherein each pixel value x is in the range [0, 1]: if x is at most 0.5, the operation f(x) = 2·x·x is applied, and if x is greater than 0.5, the operation f(x) = 1 - 2·(1-x)·(1-x) is applied, making the shadows and edges of the image clearer.
Contrast is a measure of the difference in brightness between the brightest white and the darkest black regions of an image: the larger the difference range, the greater the contrast; the smaller the range, the lower the contrast. Amplifying the contrast improves the clarity, detail rendition and grey-level rendition of the image.
In the fourth step, a levels adjustment darkens the shadow part produced in the third step, and a histogram adjustment is performed on the selected blemish parts of the facial skin; the processed effect is shown in Figure 9.
Levels is a standard index expressing the brightness intensity of an image; the colour richness and fineness of an image are determined by its levels. Levels concerns brightness and is independent of colour, although the brightest level appears only as white and the darkest only as black.
The principle formula of the histogram adjustment curve for levels is as follows:
f(x) = curve(x)
where the specific form of curve(x) may follow a trigonometric function or a power function.
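The exact shape of curve(x) is left open; one possible power-function choice that darkens shadows (gamma = 1.5 is an illustrative value, not one taken from the patent) is:

```python
def levels_curve(x, gamma=1.5):
    """One possible curve(x) for the levels adjustment: a power function.

    gamma > 1 darkens midtones and shadows while keeping the endpoints
    0 and 1 fixed; gamma < 1 would brighten instead.
    """
    return x ** gamma
```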
In the fifth step, the figure processed in the first step is blended with the figure processed in the fourth step, achieving the skin-smoothing and blemish-removal effect; the processed effect is shown in Figure 10. The specific principle formula is as follows:
f(u, v) = src(u, v)*a + dest(u, v)*(1-a)
where, for texture sampling coordinates u, v in [0, 1], the operation f(u, v) = src(u, v)*a + dest(u, v)*(1-a) is performed, src(u, v) being the colour of the original image at (u, v), dest(u, v) the colour of the target figure at (u, v), and a a constant blending weight.
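The per-pixel blend is a plain linear interpolation:

```python
import numpy as np

def blend(src, dest, a):
    """f(u, v) = src*a + dest*(1-a): linear blend of the surface-blurred base
    layer with the adjusted detail layer; the constant a in [0, 1] controls
    how strong the smoothing effect is (a = 1 keeps src, a = 0 keeps dest)."""
    return np.asarray(src, dtype=float) * a + np.asarray(dest, dtype=float) * (1.0 - a)
```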
In the sixth step, the figure processed in the fifth step is unfolded according to the UV texture sampling coordinates of the original figure, obtaining three UV texture maps; the processed effects are shown in Figures 11 to 13.
UV is the abbreviation of the u, v texture sampling coordinates, which define the position information of every point in the figure; these points are interconnected with the three-dimensional model to determine the position of the surface texture mapping.
In the seventh step, the three UV texture maps from the sixth step are synthesized; the processed effect is shown in Figure 14.
In the eighth step, the nose shadow in the figure processed in the seventh step is removed based on a fixed mask; the processed effect is shown in Figure 15.
Because faces vary greatly, algorithms that find shadows in a photograph are comparatively complex; for a UV texture map, however, the face information is relatively fixed, so a fixed mask can be used to remove the shadows by filling the masked part with the skin colour of the neighbouring region. The principle formulas are as follows:
mask = tex2d(mask, (u, v))
ret = tex2d(src, (u, v) + mask.a*offset)
where, for u, v in [0, 1], the operation f(u, v) = tex2d(mask, (u, v)) is performed to fetch the colour value of the mask.
Then, for u, v in [0, 1], the operation f(u, v) = tex2d(src, (u, v) + mask.a*offset) is performed: the original image colour is fetched at coordinates offset by the value of the mask's transparency (alpha) channel, and the final colour is calculated; tex2d is the function that fetches the colour of a texture map.
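A CPU sketch of the mask-offset sampling (nearest-neighbour fetch standing in for the GPU sampler; the function names are illustrative):

```python
import numpy as np

def tex2d(tex, uv):
    """Nearest-neighbour texture fetch for uv in [0, 1] x [0, 1]."""
    h, w = tex.shape[:2]
    x = min(int(uv[0] * w), w - 1)
    y = min(int(uv[1] * h), h - 1)
    return tex[y, x]

def sample_with_mask_offset(src, mask_alpha, uv, offset):
    """ret = tex2d(src, uv + mask.a * offset): where the mask alpha is 1 the
    sample is displaced by `offset`, replacing the nose-shadow pixel with the
    neighbouring skin colour; where alpha is 0 the pixel is left untouched."""
    a = tex2d(mask_alpha, uv)
    return tex2d(src, (uv[0] + a * offset[0], uv[1] + a * offset[1]))
```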
In the ninth step, a skin-colour detection and smoothing algorithm based on the HSV (Hue, Saturation, Value) colour space selects the skin-colour region mask; the processed effect is shown in Figure 16.
In the RGB (red, green, blue) colour space, colour variation is discontinuous, whereas the HSV colour space is comparatively stable for colour detection. After converting RGB to HSV, judging normal human skin colour becomes simple: colours within certain intervals are the skin colour, and offsets at the interval boundaries are used to soften the edges. The principle formulas are as follows:
inverselerp(a, b, x) = (x - a)/(b - a)
f(x) = saturate(min(inverselerp(7 - length, 7, x), inverselerp(20 + length, 20, x)))
f(y) = saturate(inverselerp(28, 28 + length, y))
f(z) = saturate(inverselerp(20, 20 + length, z))
skin(x, y, z) = min(f(x), f(y), f(z))
where inverselerp(a, b, x) = (x - a)/(b - a) expresses the ratio of x within the interval from a to b.
The above formulas express the value ranges, in HSV space, of the colour of a given point in the figure.
For the H channel, set a = 7 - length and b = 7 to obtain one value of f(x), where length is a constant;
for the H channel, set a = 20 + length and b = 20 to obtain the other value of f(x);
for the S channel, set a = 28 and b = 28 + length to obtain the value of f(y);
for the V channel, set a = 20 and b = 20 + length to obtain the value of f(z);
the minimum of all the above values is then taken, which makes it convenient to clamp the result.
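The soft skin mask can be sketched as follows (channel units follow the patent's thresholds; length = 3.0 is an illustrative softening width, not a value given in the patent):

```python
import numpy as np

def inverselerp(a, b, x):
    # ratio of x within the interval from a to b (may fall outside [0, 1])
    return (x - a) / (b - a)

def saturate(x):
    # clamp to [0, 1]
    return float(np.clip(x, 0.0, 1.0))

def skin(h, s, v, length=3.0):
    """Soft skin-colour mask: the hue factor combines a rising edge at 7 and a
    falling edge at 20; saturation and value factors rise at 28 and 20; the
    three factors are combined with min, giving 1 inside the skin intervals,
    0 outside, and a smooth ramp of width `length` at the boundaries."""
    fh = saturate(min(inverselerp(7 - length, 7, h),
                      inverselerp(20 + length, 20, h)))
    fs = saturate(inverselerp(28, 28 + length, s))
    fv = saturate(inverselerp(20, 20 + length, v))
    return min(fh, fs, fv)
```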
In the tenth step, the skin colour is whitened based on a linear amplification operation in the HSV colour space; the processed effect is shown in Figure 17.
For the skin-colour selection, the average colour in HSV space is computed by traversal and compared with the target HSV colour, and the magnification by which to scale up or down is calculated to perform the whitening. The principle formulas are as follows:
color = tex2d(main, uv)
hsv = RGB2HSV(color.rgb)
value = skin(hsv)
light = hsv*Whitening
result(r, g, b) = HSV2RGB(lerp(hsv, light, value))
where, for u, v in [0, 1], the operation f(u, v) = src(u, v) is performed to obtain the colour of the original image;
for colour values x, y, z in [0, 1], the colour conversion operation f(h, s, v) = RGB2HSV(x, y, z) is performed;
for colour values h, s, v in [0, 1], the amplification operation f(h, s, v) = a*(h, s, v) is performed;
for h, s, v in [0, 1], skin-colour detection using the formulas of the previous step yields the skin value;
finally, the lerp function interpolates between the amplified HSV and the original HSV by the skin value.
After the above series of processing, the whitened and skin-smoothed three-dimensional face model is obtained; its effect is shown in Figure 18.
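The whitening interpolation for a single HSV pixel can be sketched as follows (whitening = 1.15 is an illustrative magnification, not a constant from the patent):

```python
import numpy as np

def lerp(a, b, t):
    # linear interpolation between a and b by factor t
    return a + (b - a) * t

def whiten(hsv, skin_value, whitening=1.15):
    """light = hsv * Whitening; result = lerp(hsv, light, skin_value).

    Pixels fully inside the skin mask (skin_value = 1) are scaled up by the
    whitening magnification, non-skin pixels (skin_value = 0) are unchanged,
    and partial mask values blend smoothly between the two."""
    hsv = np.asarray(hsv, dtype=float)
    return lerp(hsv, hsv * whitening, skin_value)
```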
When the target beautification effect selected by the user is eye enlargement, beautification processing can proceed through the following steps:
In the first step, the human-eye layout in the model mesh is determined.
In the second step, the x/y-plane coordinates and the radius of the eye-enlargement effect region are set according to the human-eye layout and the regularity of eye positions on the face; the processed effect is shown in Figure 19.
In the third step, the mesh vertices of the model mesh are selected by filtering with the region set in the second step; the processed effect is shown in Figure 20.
In the fourth step, the mesh vertices are processed based on a local scaling warp algorithm, whose principle formula takes the commonly used form
fs(r) = r*(1 - a*(r/rmax - 1)^2)
where r is the distance of a pixel from the circle centre, rmax is the maximum radius of the deformation, a is the zoom factor, and fs(r) is the distance from the circle centre after scaling.
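A sketch of the local scaling warp, assuming the widely used form fs(r) = r·(1 − a·(r/rmax − 1)²) that matches the variable list above:

```python
def local_scale(r, rmax, a):
    """Local scaling warp fs(r) = r * (1 - a * (r / rmax - 1)**2).

    The boundary r = rmax maps to itself, so the deformation region blends
    seamlessly into its surroundings, while points nearer the centre are
    pulled inward; used as an inverse mapping this magnifies the eye region.
    """
    return r * (1.0 - a * (r / rmax - 1.0) ** 2)
```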
In the fifth step, steps one to four are repeated with different parameter values for adjustment, and the minimum and maximum parameter values for eye enlargement are set.
In the sixth step, the parameter value is converted into a ratio from 0 to 100, where 0 is the initial state of the model's eyes and 100 is the maximum after eye enlargement. Letting the user adjust this parameter affects the initial parameter of the previous step, and adjusting the parameter value to different ratios performs the eye-enlargement beautification. A user-adjustable minimum-to-maximum range can also be preset, within which the user can carry out the beautification personally to reach an optimum, accepted effect. The effect of the three-dimensional face model after eye enlargement is shown in Figure 21.
When the target beautification effect selected by the user is face slimming, beautification processing can proceed through the following steps:
In the first step, the model mesh is taken as the reference head, and a slim-face model blendshape is made with this reference head.
In the second step, the blendshape is mixed into the model mesh of the first step through a mixing coefficient; the principle formula is as follows:
Vtarget = Vsrc + dVblendshape*t
where Vtarget is the mixed vertex position;
Vsrc is the original vertex position of the model mesh;
dVblendshape is the vertex-displacement data of the slim-face blendshape;
t is the mixing coefficient.
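The blendshape mixing formula above is a per-vertex linear displacement:

```python
import numpy as np

def mix_blendshape(v_src, dv_blendshape, t):
    """V_target = V_src + dV_blendshape * t: every vertex of the model mesh is
    displaced by the slim-face blendshape's vertex-displacement data, scaled
    by the mixing coefficient t (t = 0 keeps the original face shape, t = 1
    applies the full slim-face shape)."""
    return np.asarray(v_src, dtype=float) + np.asarray(dv_blendshape, dtype=float) * t
```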
In the third step, steps one and two are repeated with different parameter values for effect adjustment, and the minimum and maximum parameter values for face slimming are set.
In the fourth step, the parameter value is converted into a ratio from 0 to 100, where 0 is the initial face shape of the model and 100 is the maximum after face slimming. Letting the user adjust this parameter affects the initial parameter of the previous step, and adjusting the parameter value to different ratios performs the face-slimming beautification. A user-adjustable minimum-to-maximum range can also be preset, within which the user can carry out the beautification personally to reach an optimum, accepted effect. The effect of the three-dimensional face model after face slimming is shown in Figure 22.
The beautification processes of whitening and skin smoothing, eye enlargement, and face slimming based on a three-dimensional face model have been described above. The embodiments of the present application detect the key feature points of the face and adjust the characteristic values of those key feature points according to preset adjustment ranges. Other beautification functions, such as eyebrow drawing, lip gloss, blush, and cosmetic-pupil operations, can also be added by sustainable extension, realizing efficient beautification processing of the three-dimensional face model so that the user obtains a better beautification experience.
To achieve the above technical purpose, the present application proposes a device for performing beautification processing based on a three-dimensional face model, as shown in Figure 2, comprising:
a scanning module 201, configured to scan and obtain a three-dimensional face model corresponding to a real human face;
a detection module 202, configured to detect the three-dimensional face model and determine a preset number of key feature points;
a determining module 203, configured to determine a target beautification effect according to the user's selection and take the key feature points corresponding to the target beautification effect as beautification feature points;
an adjustment module 204, configured to adjust the characteristic values of the beautification feature points according to a preset adjustment range, obtaining the beautified three-dimensional face model.
In a specific application scenario, the scanning module 201 is specifically configured to:
scan the real human face and collect original figures comprising the front, left side, and right side of the real human face;
take the original figures as the three-dimensional face model, and/or generate a model mesh based on the original figures as the three-dimensional face model.
In a specific application scenario, when the target beautification effect is whitening and skin smoothing, the adjustment module 204 is specifically configured to:
process the original figure based on the optimized surface blur filtering algorithm to obtain a second figure, wherein the optimization specifically reduces the algorithmic complexity;
perform high-contrast retention processing on the original figure and the second figure to obtain a third figure;
perform the hard-light operation on the third figure and amplify the contrast to obtain a fourth figure;
darken the shadow part of the fourth figure through the levels adjustment and select the blemish parts of the facial skin to obtain a fifth figure;
blend the second figure and the fifth figure and then unfold the result by the UV texture sampling coordinates of the original figure to obtain three UV texture maps;
perform the synthesis operation on the three UV texture maps to obtain a sixth figure;
remove the nose shadow in the sixth figure based on the fixed mask;
select the skin-colour region mask by the skin-colour detection and smoothing algorithm based on the HSV colour space;
determine the skin colour based on the linear amplification operation in the HSV colour space.
In a specific application scenario, when the target beautification effect is eye enlargement, the adjustment module 204 is specifically configured to:
step A, determine the human-eye layout in the model mesh;
step B, determine the eye-enlargement action region according to the human-eye layout and the distribution regularity of the eyes on the face;
step C, select the mesh vertices in the eye-enlargement action region;
step D, process the mesh vertices based on the local scaling warp algorithm;
step E, adjust the characteristic values of the processed mesh vertices according to the preset adjustment range of the eye-enlargement parameter, wherein steps A to D are repeated based on different eye-enlargement parameters to adjust the eye-enlargement effect, and the preset adjustment range of the eye-enlargement parameter is determined according to the result of the eye-enlargement effect adjustment.
In a specific application scenario, when the target beautification effect is face slimming, the adjustment module 204 is specifically configured to:
step a, generate a slim-face model blendshape based on the model mesh;
step b, mix the blendshape and the model mesh into a slim-face model mesh using a mixing coefficient;
step c, adjust the characteristic values of the slim-face model mesh according to the preset adjustment range of the face-slimming parameter, wherein steps a to b are repeated based on different face-slimming parameters to adjust the face-slimming effect, and the preset adjustment range of the face-slimming parameter is determined according to the result of the face-slimming effect adjustment.
Through the above description of the embodiments, those skilled in the art can clearly understand that the present invention may be implemented by hardware, or by software plus a necessary general hardware platform. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash disk, a removable hard disk, etc.) and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the implementation scenarios of the present invention.
Those skilled in the art will understand that the accompanying drawings are only schematic diagrams of a preferred implementation scenario, and that the modules or processes in the drawings are not necessarily required for implementing the present invention.
Those skilled in the art will understand that the modules in the device may be distributed within the device of the implementation scenario as described, or may be correspondingly changed to be located in one or more devices different from this implementation scenario. The modules of the above implementation scenario may be merged into one module, or may be further split into multiple sub-modules.
The above serial numbers of the present invention are for description only and do not represent the superiority or inferiority of the implementation scenarios.
The above discloses only several specific implementation scenarios of the present invention; however, the present invention is not limited thereto, and any variation conceivable by those skilled in the art shall fall within the protection scope of the present invention.
Claims (10)
1. A method for performing beautification processing based on a three-dimensional face model, characterized in that the method comprises:
scanning to obtain a three-dimensional face model corresponding to a real human face;
detecting the three-dimensional face model to determine a preset number of key feature points;
determining a target beautification effect according to the user's selection, and taking key feature points corresponding to the target beautification effect as beautification feature points;
adjusting characteristic values of the beautification feature points according to a preset adjustment range to obtain a beautified three-dimensional face model.
2. The method according to claim 1, characterized in that the scanning to obtain a three-dimensional face model corresponding to a real human face specifically comprises:
scanning the real human face to collect original figures comprising the front, left side, and right side of the real human face;
taking the original figures as the three-dimensional face model, and/or generating a model mesh based on the original figures as the three-dimensional face model.
3. The method according to claim 2, characterized in that when the target beautification effect is whitening and skin smoothing, adjusting the characteristic values of the beautification feature points according to the preset adjustment range specifically comprises:
processing the original figure based on an optimized surface blur filtering algorithm to obtain a second figure, wherein the optimization specifically reduces the algorithmic complexity;
performing high-contrast retention processing on the original figure and the second figure to obtain a third figure;
performing a hard-light operation on the third figure and amplifying the contrast to obtain a fourth figure;
darkening the shadow part of the fourth figure through a levels adjustment and selecting the blemish parts of the facial skin to obtain a fifth figure;
blending the second figure and the fifth figure and then unfolding the result by the UV texture sampling coordinates of the original figure to obtain three UV texture maps;
performing a synthesis operation on the three UV texture maps to obtain a sixth figure;
removing the nose shadow in the sixth figure based on a fixed mask;
selecting a skin-colour region mask by a skin-colour detection and smoothing algorithm based on the HSV colour space;
determining the skin colour based on a linear amplification operation in the HSV colour space.
4. The method according to claim 2, characterized in that when the target beautification effect is eye enlargement, adjusting the characteristic values of the beautification feature points according to the preset adjustment range specifically comprises:
step A, determining the human-eye layout in the model mesh;
step B, determining the eye-enlargement action region according to the human-eye layout and the distribution regularity of the eyes on the face;
step C, selecting the mesh vertices in the eye-enlargement action region;
step D, processing the mesh vertices based on a local scaling warp algorithm;
step E, adjusting the characteristic values of the processed mesh vertices according to a preset adjustment range of an eye-enlargement parameter, wherein steps A to D are repeated based on different eye-enlargement parameters to adjust the eye-enlargement effect, and the preset adjustment range of the eye-enlargement parameter is determined according to the result of the eye-enlargement effect adjustment.
5. The method according to claim 2, characterized in that when the target beautification effect is face slimming, adjusting the characteristic values of the beautification feature points according to the preset adjustment range specifically comprises:
step a, generating a slim-face model blendshape based on the model mesh;
step b, mixing the blendshape and the model mesh into a slim-face model mesh using a mixing coefficient;
step c, adjusting the characteristic values of the slim-face model mesh according to a preset adjustment range of a face-slimming parameter, wherein steps a to b are repeated based on different face-slimming parameters to adjust the face-slimming effect, and the preset adjustment range of the face-slimming parameter is determined according to the result of the face-slimming effect adjustment.
6. A device for performing beautification processing based on a three-dimensional face model, characterized in that the device comprises:
a scanning module, configured to scan and obtain a three-dimensional face model corresponding to a real human face;
a detection module, configured to detect the three-dimensional face model and determine a preset number of key feature points;
a determining module, configured to determine a target beautification effect according to the user's selection and take key feature points corresponding to the target beautification effect as beautification feature points;
an adjustment module, configured to adjust characteristic values of the beautification feature points according to a preset adjustment range to obtain a beautified three-dimensional face model.
7. The device according to claim 6, characterized in that the scanning module is specifically configured to:
scan the real human face and collect original figures comprising the front, left side, and right side of the real human face;
take the original figures as the three-dimensional face model, and/or generate a model mesh based on the original figures as the three-dimensional face model.
8. The device according to claim 7, characterized in that when the target beautification effect is whitening and skin smoothing, the adjustment module is specifically configured to:
process the original figure based on an optimized surface blur filtering algorithm to obtain a second figure, wherein the optimization specifically reduces the algorithmic complexity;
perform high-contrast retention processing on the original figure and the second figure to obtain a third figure;
perform a hard-light operation on the third figure and amplify the contrast to obtain a fourth figure;
darken the shadow part of the fourth figure through a levels adjustment and select the blemish parts of the facial skin to obtain a fifth figure;
blend the second figure and the fifth figure and then unfold the result by the UV texture sampling coordinates of the original figure to obtain three UV texture maps;
perform a synthesis operation on the three UV texture maps to obtain a sixth figure;
remove the nose shadow in the sixth figure based on a fixed mask;
select a skin-colour region mask by a skin-colour detection and smoothing algorithm based on the HSV colour space;
determine the skin colour based on a linear amplification operation in the HSV colour space.
9. The device according to claim 7, characterized in that when the target beautification effect is eye enlargement, the adjustment module is specifically configured to:
step A, determine the human-eye layout in the model mesh;
step B, determine the eye-enlargement action region according to the human-eye layout and the distribution regularity of the eyes on the face;
step C, select the mesh vertices in the eye-enlargement action region;
step D, process the mesh vertices based on a local scaling warp algorithm;
step E, adjust the characteristic values of the processed mesh vertices according to a preset adjustment range of an eye-enlargement parameter, wherein steps A to D are repeated based on different eye-enlargement parameters to adjust the eye-enlargement effect, and the preset adjustment range of the eye-enlargement parameter is determined according to the result of the eye-enlargement effect adjustment.
10. The device according to claim 7, characterized in that when the target beautification effect is face slimming, the adjustment module is specifically configured to:
step a, generate a slim-face model blendshape based on the model mesh;
step b, mix the blendshape and the model mesh into a slim-face model mesh using a mixing coefficient;
step c, adjust the characteristic values of the slim-face model mesh according to a preset adjustment range of a face-slimming parameter, wherein steps a to b are repeated based on different face-slimming parameters to adjust the face-slimming effect, and the preset adjustment range of the face-slimming parameter is determined according to the result of the face-slimming effect adjustment.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910726111.3A CN110473295B (en) | 2019-08-07 | 2019-08-07 | Method and equipment for carrying out beautifying treatment based on three-dimensional face model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110473295A true CN110473295A (en) | 2019-11-19 |
CN110473295B CN110473295B (en) | 2023-04-25 |
Family
ID=68510355
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910726111.3A Active CN110473295B (en) | 2019-08-07 | 2019-08-07 | Method and equipment for carrying out beautifying treatment based on three-dimensional face model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110473295B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112381729A (en) * | 2020-11-12 | 2021-02-19 | 广州繁星互娱信息科技有限公司 | Image processing method, device, terminal and storage medium |
CN112634126A (en) * | 2020-12-22 | 2021-04-09 | 厦门美图之家科技有限公司 | Portrait age reduction processing method, portrait age reduction training device, portrait age reduction equipment and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105513007A (en) * | 2015-12-11 | 2016-04-20 | 惠州Tcl移动通信有限公司 | Mobile terminal based photographing beautifying method and system, and mobile terminal |
CN108447026A (en) * | 2018-01-31 | 2018-08-24 | 上海思愚智能科技有限公司 | Acquisition methods, terminal and the computer readable storage medium of U.S. face parameter attribute value |
US20180260643A1 (en) * | 2017-03-07 | 2018-09-13 | Eyn Limited | Verification method and system |
CN108765273A (en) * | 2018-05-31 | 2018-11-06 | Oppo广东移动通信有限公司 | The virtual lift face method and apparatus that face is taken pictures |
CN109190503A (en) * | 2018-08-10 | 2019-01-11 | 珠海格力电器股份有限公司 | beautifying method, device, computing device and storage medium |
CN109584146A (en) * | 2018-10-15 | 2019-04-05 | 深圳市商汤科技有限公司 | U.S. face treating method and apparatus, electronic equipment and computer storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN110473295B (en) | 2023-04-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108447017B (en) | Face virtual face-lifting method and device | |
CN108765273B (en) | Virtual face-lifting method and device for face photographing | |
AU2003204466B2 (en) | Method and system for enhancing portrait images | |
KR101455950B1 (en) | Image-processing device, image-processing method, and recording medium for control program | |
CN110443747B (en) | Image processing method, device, terminal and computer readable storage medium | |
JP4862955B1 (en) | Image processing apparatus, image processing method, and control program | |
CN106909875B (en) | Face type classification method and system | |
US7184578B2 (en) | Method and system for enhancing portrait images that are processed in a batch mode | |
CN103839250B (en) | The method and apparatus processing for face-image | |
CN108229279A (en) | Face image processing process, device and electronic equipment | |
US20110115786A1 (en) | Image processing apparatus, image processing method, and program | |
WO2001026050A2 (en) | Improved image segmentation processing by user-guided image processing techniques | |
CN110738732A (en) | three-dimensional face model generation method and equipment | |
CN110473295A (en) | A kind of method and apparatus that U.S. face processing is carried out based on three-dimensional face model | |
CN114155569B (en) | Cosmetic progress detection method, device, equipment and storage medium | |
Kang et al. | Preferred skin color reproduction based on y-dependent gaussian modeling of skin color |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||