CN107424125A - Image blurring method and terminal - Google Patents
- Publication number
- CN107424125A (application number CN201710243802.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- skin color
- area
- target
- target image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
Abstract
An embodiment of the invention discloses an image blurring method and terminal. The method includes: obtaining a target image; determining a skin color range threshold of the target image; determining the skin color area of the target image according to the skin color range threshold; and blurring the non-skin-color area outside the skin color area in the target image. The method can improve the accuracy of skin color detection and thereby improve the image blurring effect.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to an image blurring method and terminal.
Background technology
With the development of the mobile Internet and the continuous improvement of mobile phone cameras, people take more and more photos, and their expectations for image quality keep rising.
When taking photos, users generally focus on the subject of interest. Especially when shooting portraits, the background-blur (bokeh) effect is very popular: under this effect the subject is emphasized while the background becomes blurred.
However, before background blurring, the skin color area of the photographed portrait must first be detected in order to identify the background area. The prior art generally detects skin with a skin color detection algorithm that uses a fixed skin color threshold, but no fixed threshold suits every image. For example, for images of dark-skinned and light-skinned subjects, the same skin color threshold cannot detect the correct skin color area in both, which reduces the accuracy of skin color detection and in turn degrades the image blurring effect.
Summary of the invention
The embodiments of the present invention provide an image blurring method and terminal that can improve the accuracy of skin color detection and thereby improve the image blurring effect.
An embodiment of the invention provides an image blurring method, including:
obtaining a target image;
determining a skin color range threshold of the target image;
determining the skin color area of the target image according to the skin color range threshold; and
blurring the non-skin-color area outside the skin color area in the target image.
An embodiment of the invention also provides a terminal, including:
an acquiring unit, configured to obtain a target image;
a determining unit, configured to determine a skin color range threshold of the target image;
the determining unit, further configured to determine the skin color area of the target image according to the skin color range threshold; and
a blurring unit, configured to blur the non-skin-color area outside the skin color area in the target image.
In the embodiments of the invention, the terminal first obtains a target image, then determines the skin color range threshold of that image and uses it to determine the skin color area, and finally blurs the non-skin-color area outside the skin color area in the target image. By determining a skin color range threshold specific to the target image, the skin color area is detected accurately, improving the accuracy of skin color detection and, in turn, the image blurring effect.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; persons of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flow diagram of an image blurring method provided by an embodiment of the invention;
Fig. 2(a) is a schematic diagram of a target image provided by an embodiment of the invention;
Fig. 2(b) is a schematic diagram of a face region image provided by an embodiment of the invention;
Fig. 3 is a schematic diagram of the region image after skin color detection provided by an embodiment of the invention;
Fig. 4 is a schematic diagram of a target image provided by another embodiment of the invention;
Fig. 5 is a schematic diagram of the region image after skin color area filling provided by an embodiment of the invention;
Fig. 6 is a flow diagram of an image blurring method provided by another embodiment of the invention;
Fig. 7 is a schematic diagram of feature templates provided by an embodiment of the invention;
Fig. 8 is a structural diagram of subwindows provided by an embodiment of the invention;
Fig. 9 is a structural diagram of subwindows provided by another embodiment of the invention;
Fig. 10 is a structural diagram of a strong classifier provided by another embodiment of the invention;
Fig. 11 is a structural diagram of a terminal provided by an embodiment of the invention;
Fig. 12 is a structural diagram of a terminal provided by another embodiment of the invention.
Detailed description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention.
It should be understood that the terms "comprising" and "including", when used in this specification and the appended claims, indicate the presence of the described features, entireties, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, entireties, steps, operations, elements, components and/or collections thereof.
It should also be understood that the terminology used in this description of the invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in the description of the invention and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in the description of the invention and the appended claims refers to and encompasses any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
In specific implementations, the terminal described in the embodiments of the invention includes, but is not limited to, portable devices such as mobile phones, laptop computers or tablet computers having a touch-sensitive surface (for example, a touch-screen display and/or a touchpad). It should also be understood that in certain embodiments the device is not a portable communication device but a desktop computer having a touch-sensitive surface (for example, a touch-screen display and/or a touchpad).
In the discussion below, a terminal including a display and a touch-sensitive surface is described. It should be understood, however, that the terminal may include one or more other physical user-interface devices such as a physical keyboard, a mouse and/or a joystick.
The terminal supports various applications, such as one or more of the following: a drawing application, a presentation application, a word-processing application, a website-creation application, a disc-burning application, a spreadsheet application, a game application, a telephone application, a video-conferencing application, an e-mail application, an instant-messaging application, an exercise-support application, a photo-management application, a digital-camera application, a digital-video-camera application, a web-browsing application, a digital-music-player application and/or a video-player application.
The various applications executable on the terminal can share at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface and the corresponding information displayed on the terminal can be adjusted and/or varied between applications and/or within a given application. In this way, a common physical architecture of the terminal (such as the touch-sensitive surface) can support the various applications with user interfaces that are intuitive and transparent to the user.
Referring to Fig. 1, which is a flow diagram of an image blurring method provided by an embodiment of the invention, the method may include the following steps.
S101: obtain a target image.
It should be noted that the target image is the image captured when the terminal receives an instruction to open the camera application, or a partial image of the captured original, and is stored in the terminal in the form of a cache.
Specifically, the user can send the instruction to open the camera application to the terminal by touch, voice or other means. When the terminal receives the instruction, it can open the camera application to obtain the target image, or obtain the original image and crop out a partial image as the target image.
The target image includes a skin color area and a non-skin-color area; the non-skin-color area may include a background area, a hair area and so on, without specific limitation.
The above terminal can be any device with a camera function, such as a smartphone, camera, tablet computer or smart wearable device; the embodiments of the invention are not limited in this respect.
S102: determine the skin color range threshold of the target image.
Specifically, in one feasible embodiment, the face region image of the target image is obtained, then the skin color area of the face region image is obtained, and then the skin color range threshold of that skin color area is determined. The face region image contains only the facial feature regions (eyes, mouth, nose, etc.) and the skin color area.
The face region image of the target image can be obtained by cropping according to the ROI (region of interest) selected by the user, and accuracy can be improved through repeated selection. The cropping formula is:
Rect_crop = { 1 + floor(width/para), 1 + floor(height/para), floor(width/0.5*para), floor(height/0.5*para) }
where width is the width of the target image, height is its height, para is the cropping coefficient, and floor is the rounding-down (floor) function.
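The cropping formula can be sketched as follows. The grouping of `width/0.5*para` is ambiguous as printed; this sketch reads it as `width/(0.5*para)`, so the ROI is `2/para` of each dimension, which is an assumption rather than something the text states. The function name and sample values are illustrative, not from the patent.

```python
import math

def crop_rect(width, height, para):
    """Sketch of the Rect_crop formula: returns (x, y, w, h) of the face ROI.
    Reads width/0.5*para as width/(0.5*para) -- an interpretation, see above."""
    return (1 + math.floor(width / para),
            1 + math.floor(height / para),
            math.floor(width / (0.5 * para)),
            math.floor(height / (0.5 * para)))

# e.g. a 400x600 target image with a cropping coefficient para = 4
print(crop_rect(400, 600, 4))  # (101, 151, 200, 300)
```

With para = 4 the ROI starts a quarter of the way in and spans half of each dimension, which is consistent with cropping a centered face region.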
For example, Fig. 2(a) shows a target image, which may include skin, hair, eyebrow, eye and beard regions, and possibly part of the background. Since the background is uncontrollable and varied, the target image shown in Fig. 2(a) can be cropped to obtain the face region image shown in Fig. 2(b), which contains only the facial feature regions (eyes, mouth, nose, etc.) and the skin color area.
Further, obtaining the skin color area of the face region image can proceed as follows: first obtain the hue value and brightness value of each pixel in the face region image, then obtain the target pixels whose hue value exceeds a preset hue threshold and whose brightness value exceeds a preset brightness threshold, and determine that the region formed by those target pixels is the skin color area of the face region image.
Specifically, because the face region image still contains regions such as the eyebrows and eyes, the skin color area is identified from the differences in hue value and brightness value between the skin and regions such as the eyebrows and eyes.
A pixel is regarded as skin when its hue value exceeds the preset threshold, and clustering on the hue value yields the skin color area image. Further, for a more accurate determination, the brightness information can be clustered as well. The skin color decision formula is therefore: index = { h > h_thresh & v > v_thresh }, where h_thresh is the hue threshold, v_thresh is the brightness threshold, h is the hue value of a pixel in the face region image, and v is the brightness value of a pixel in the face region image. A result such as that shown in Fig. 3 can thus be detected, where the grey part is the skin color area.
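The per-pixel decision index = { h > h_thresh & v > v_thresh } can be sketched with NumPy boolean masks. The hue/brightness planes and the threshold values below are made-up sample data, not values from the patent:

```python
import numpy as np

# Hypothetical 4x4 face-region image: per-pixel hue (h) and brightness (v) planes.
h = np.array([[0.10, 0.60, 0.62, 0.58],
              [0.12, 0.61, 0.63, 0.59],
              [0.11, 0.60, 0.64, 0.60],
              [0.09, 0.15, 0.61, 0.62]])
v = np.array([[0.20, 0.80, 0.82, 0.79],
              [0.25, 0.81, 0.80, 0.78],
              [0.22, 0.83, 0.85, 0.80],
              [0.18, 0.30, 0.82, 0.81]])

h_thresh, v_thresh = 0.5, 0.7   # preset hue and brightness thresholds (assumed values)

# index = {h > h_thresh & v > v_thresh}: a pixel is skin only if both conditions hold
skin_mask = (h > h_thresh) & (v > v_thresh)
print(int(skin_mask.sum()))     # number of pixels classified as skin
```

Clustering the surviving pixels (e.g. taking their connected region) then gives the skin color area image of Fig. 3.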
After the skin color area of the face region image has been determined, the hue values of that area are collected to obtain its hue maximum Max_h = max(skin_h) and hue minimum Min_h = min(skin_h), where skin_h denotes the hue values of the skin part. The skin hue range threshold is therefore Min_h = min(skin_h) to Max_h = max(skin_h).
Likewise, to judge the skin color area more accurately, the brightness values of the area are collected to obtain its brightness maximum Max_v = max(skin_v) and brightness minimum Min_v = min(skin_v), where skin_v denotes the brightness values of the skin part. The skin brightness range threshold is therefore Min_v = min(skin_v) to Max_v = max(skin_v).
The skin color threshold range of the target image is therefore (Min_h, Max_h) and (Min_v, Max_v).
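Computing the range thresholds from the detected skin pixels, and applying them to a target-image pixel as in S103, might look like this; the sample values are hypothetical, and the strict inequalities follow the wording of S103:

```python
import numpy as np

# Hue and brightness values of the pixels already classified as skin in the
# face region image (hypothetical sample values, not from the patent).
skin_h = np.array([0.58, 0.60, 0.61, 0.63, 0.64])
skin_v = np.array([0.78, 0.80, 0.82, 0.83, 0.85])

# Skin color range threshold of the target image:
Min_h, Max_h = skin_h.min(), skin_h.max()
Min_v, Max_v = skin_v.min(), skin_v.max()

def is_skin(h, v):
    """A target-image pixel belongs to the skin color area when its hue lies
    in (Min_h, Max_h) and its brightness lies in (Min_v, Max_v)."""
    return Min_h < h < Max_h and Min_v < v < Max_v

print(is_skin(0.60, 0.80), is_skin(0.20, 0.80))
```

Because the ranges are learned from the face region of this particular image, the same code adapts to different skin tones without a fixed global threshold, which is the point made in the background section.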
S103: determine the skin color area of the target image according to the skin color range threshold.
Specifically, for the target image, the region whose hue is greater than Min_h = min(skin_h) and less than Max_h = max(skin_h) is the skin color area of the target image.
Likewise, the region whose brightness is greater than Min_v = min(skin_v) and less than Max_v = max(skin_v) is the skin color area of the target image.
That is, the region whose hue value lies in (Min_h, Max_h) and whose brightness value lies in (Min_v, Max_v) is the skin color area of the target image.
For example, in the target image shown in Fig. 4, the skin color area determined according to the skin color range threshold is A, and the remaining region B is the non-skin-color area.
Optionally, if the target image is a partial image of the captured original, the skin color area of the original image is determined according to the skin color range threshold, so that detection is more comprehensive and more accurate.
S104: blur the non-skin-color area outside the skin color area in the target image.
Specifically, for the target image shown in Fig. 4, region B is blurred to obtain the target blurred image.
Optionally, before blurring the non-skin-color area, the terminal fills the facial feature regions within the skin color area of the target image, to avoid mistakenly blurring those regions (such as the eye and eyebrow regions) and thereby improve blurring precision. For example, filling the facial feature regions in region A of Fig. 4 yields the image shown in Fig. 5.
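A minimal sketch of the S103-S104 mask-and-blur step. The naive box blur below is a stand-in for whatever blur kernel the terminal actually applies, which the text does not specify:

```python
import numpy as np

def box_blur(img, k=3):
    """Naive k x k box blur with edge padding (illustrative only)."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

img = np.arange(36, dtype=float).reshape(6, 6)   # hypothetical grayscale target image
skin = np.zeros((6, 6), dtype=bool)
skin[2:5, 2:5] = True                            # region A: detected skin color area

blurred = box_blur(img)
result = np.where(skin, img, blurred)            # region B (non-skin) gets the blur

print(np.array_equal(result[skin], img[skin]))   # skin pixels pass through unchanged
```

Filling the facial feature regions (eyes, eyebrows) into the skin mask before this step, as the text suggests, simply means setting those mask entries to True so they escape the blur.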
Optionally, the first blurring degree is less than the second blurring degree.
In the embodiments of the invention, the terminal first obtains a target image, then determines the skin color range threshold of that image and uses it to determine the skin color area, and finally blurs the non-skin-color area outside the skin color area in the target image. Determining a skin color range threshold specific to the target image allows the skin color area to be detected accurately, improving the accuracy of skin color detection and, in turn, the image blurring effect.
Referring to Fig. 6, which is a flow diagram of an image blurring method provided by another embodiment of the invention, the method may include the following steps.
S201: obtain a target image.
It should be noted that the target image is the image captured when the terminal receives an instruction to open the camera application, or a partial image of the captured original, and is stored in the terminal in the form of a cache.
Specifically, the user can send the instruction to open the camera application to the terminal by touch, voice or other means. When the terminal receives the instruction, it can open the camera application to obtain the target image, or obtain the original image and crop out a partial image as the target image.
The target image includes a skin color area and a non-skin-color area; the non-skin-color area may be a background area, a hair area and so on, without specific limitation.
S202: detect the upper-body region of the target image.
It should be noted that the detection method may include learning-based upper-body region detection, feature-based upper-body region detection, and so on. Learning-based upper-body region detection methods include methods based on AdaBoost, methods based on Bayesian criteria, methods based on artificial neural networks (ANN) and methods based on support vector machines (SVM). Feature-based upper-body region detection methods include low-level feature analysis, grouped-feature characterization, deformable template methods and the like.
For example, the terminal can divide the target image into a number of cells, each cell being one pixel, extract the color information of each pixel of the target image, and compare the color information of each pixel against an upper-body-region color information database. If a pixel's color information matches the database, the pixel belongs to the upper-body region; the pixels belonging to the upper-body region in the target image are then clustered to obtain the upper-body region image.
As an optional embodiment, detection can use the AdaBoost algorithm, which can specifically include the following steps S2021-S2024.
S2021: divide the target image multiple times to obtain multiple first images, each first image including multiple subwindows.
It should be noted that the terminal can divide the target image multiple times to obtain multiple first images, each containing multiple subwindows. The more subwindows each division produces, the more Haar feature values are computed and the more accurate the detected upper-body region image is; but more subwindows also increase the time needed to compute the Haar feature values. In addition, the maximum number of subwindows cannot exceed the maximum number the strong classifier can handle. The number of subwindows per division can therefore be chosen by weighing factors such as the accuracy of the detected upper-body region image, the time to compute Haar feature values, and the strong classifier's subwindow capacity. A Haar feature value is computed from the pixel values of an image's subwindows and describes the grayscale variation of the image.
For example, the terminal can first divide the target image into 20*20 subwindows and then expand the number of subwindows proportionally, for example by a factor of 3 each time, dividing the target image into 60*60 subwindows, 180*180 subwindows, 540*540 subwindows and so on.
S2022: calculate the Haar feature value of each subwindow in each first image according to the integral image.
Calculating a Haar feature value requires knowing the pixel value of each subwindow, and the pixel value of each subwindow can be calculated from the integral image at the subwindow's endpoints, so the Haar feature values of each first image can be calculated from the integral image.
As an optional embodiment, calculating the Haar feature value of each subwindow according to the integral image may include: calculating the pixel value corresponding to each subwindow according to the integral image; and calculating the Haar feature value of each subwindow according to its pixel value.
It should be noted that the integral image at any point in a grayscale image is the sum of the pixel values of all points in the rectangle from the top-left corner of the image to that point. Similarly, for a first image made up of subwindows, the integral image at a subwindow endpoint is the sum of the pixel values of all subwindows contained between that endpoint and the top-left corner of the image. Once the integral image at every subwindow endpoint has been calculated, the pixel value of each subwindow can be computed from the integral image, and the Haar feature value of each subwindow can be computed from its pixel value.
Further, calculating Haar feature values first requires selecting suitable feature templates. A feature template is a combination of two or more rectangles, containing both black and white rectangles; common feature templates are shown in Fig. 7. Each feature template corresponds to exactly one feature, though one feature may correspond to several templates; common features include edge features, linear features, point features and diagonal features. The feature template is then placed, according to preset rules, over the corresponding subwindows of the grayscale image, and the Haar feature value of the region covered by the template is calculated as the pixel sum of the template's white rectangle region minus the pixel sum of its black rectangle region. The preset rules include the size of the feature template and the position where it is placed within a subwindow, and are determined by the number of subwindows into which the grayscale image is divided.
Given a chosen feature template, the template sizes and placement positions within each first image's subwindows vary, so one feature template corresponds to multiple Haar features per first image, and multiple feature templates can be used simultaneously to compute the Haar features of each first image. Moreover, since the first images are divided into different numbers of subwindows, the number of Haar feature values differs from one first image to another.
For example, the terminal can scale the grayscale image down by a factor of 1000 and divide the reduced image into 20*20 subwindows, then calculate the pixel value of each subwindow from the integral image in the following steps.
1. Calculate the integral image at each subwindow endpoint. Taking the integral image at endpoint (i, j) of subwindow D in Fig. 8 as an example, the integral image at (i, j) is the pixel sum of all subwindows between that point and the top-left corner of the grayscale image, which can be expressed as:
Integral(i, j) = pixel value of subwindow D + pixel value of subwindow C + pixel value of subwindow B + pixel value of subwindow A;
Because Integral(i-1, j-1) = pixel value of subwindow A;
Integral(i-1, j) = pixel value of subwindow A + pixel value of subwindow C;
Integral(i, j-1) = pixel value of subwindow B + pixel value of subwindow A;
Integral(i, j) can be further expressed as:
Integral(i, j) = Integral(i, j-1) + Integral(i-1, j) - Integral(i-1, j-1) + pixel value of subwindow D;
where Integral(·) denotes the integral image at a point. By observation, the integral image at point (i, j) can also be obtained from the integral image at point (i, j-1) plus the cumulative sum ColumnSum(j) of column j, i.e. the integral image at (i, j) can be expressed as:
Integral(i, j) = Integral(i, j-1) + ColumnSum(j);
where ColumnSum(0) = 0 and Integral(0, j) = 0. Thus for 20*20 subwindows, the integral images at all subwindow endpoints of the grayscale image can be obtained with 19 + 19 + 2*19*19 = 760 iterations.
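The column-sum recurrence can be checked numerically. The sketch below uses 0-based indices instead of the text's 1-based ones, and confirms that the recurrence reproduces a plain double cumulative sum; the final line evaluates the iteration-count formula quoted above for a 20*20 grid:

```python
import numpy as np

def integral_by_recurrence(img):
    """Integral(i, j) = Integral(i, j-1) + ColumnSum(j), where ColumnSum(j)
    is the running sum of column j down to the current row."""
    rows, cols = img.shape
    out = np.zeros((rows, cols))
    col_sum = np.zeros(cols)          # ColumnSum(j), updated row by row
    for i in range(rows):
        row_acc = 0.0                 # plays the role of Integral(i, j-1)
        for j in range(cols):
            col_sum[j] += img[i, j]
            row_acc += col_sum[j]
            out[i, j] = row_acc
    return out

img = np.arange(1.0, 17.0).reshape(4, 4)
ii = integral_by_recurrence(img)
print(np.array_equal(ii, img.cumsum(axis=0).cumsum(axis=1)))  # both constructions agree

n = 20
print((n - 1) + (n - 1) + 2 * (n - 1) ** 2)                   # 760, as quoted
```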
2. Calculate the pixel value of each subwindow from the integral images at its endpoints. Taking the pixel value of subwindow D as an example, it follows from step 1 that the pixel value of subwindow D can be calculated from the integral images at endpoints (i, j), (i, j-1), (i-1, j) and (i-1, j-1), i.e. the pixel value of subwindow D can be expressed as:
pixel value of subwindow D = Integral(i, j) + Integral(i-1, j-1) - Integral(i-1, j) - Integral(i, j-1);
As the formula shows, once the integral image at each subwindow endpoint is known, the pixel value of each subwindow can be calculated.
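The four-endpoint formula for a window's pixel sum, and a two-rectangle edge feature built on it (left half minus right half, in the spirit of the Fig. 9 example), can be sketched as follows. The patch values are made up so the edge feature responds strongly:

```python
import numpy as np

def integral_image(img):
    """Integral(i, j): sum of all pixels from the top-left corner through (i, j)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, top, left, bottom, right):
    """Window pixel sum via the four-endpoint formula:
    D = Integral(i, j) + Integral(i-1, j-1) - Integral(i-1, j) - Integral(i, j-1)."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def haar_edge_value(ii, top, left, h, w):
    """Edge feature: pixel sum of the left half (A) minus the right half (B).
    Assumes w is even."""
    half = w // 2
    a = box_sum(ii, top, left, top + h - 1, left + half - 1)
    b = box_sum(ii, top, left + half, top + h - 1, left + w - 1)
    return a - b

# Hypothetical patch: bright left half, dark right half.
img = np.array([[9, 9, 1, 1],
                [9, 9, 1, 1],
                [9, 9, 1, 1],
                [9, 9, 1, 1]])
ii = integral_image(img)
print(box_sum(ii, 1, 1, 2, 2))          # sum of img[1:3, 1:3]
print(haar_edge_value(ii, 0, 0, 4, 4))  # strong response at the vertical edge
```

Every window sum costs at most four lookups and three additions regardless of window size, which is why the integral image makes exhaustive Haar-feature evaluation affordable.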
Further, after the pixel value of each subwindow has been obtained, the Haar feature value can be calculated from those pixel values; different feature templates, placement positions and template sizes yield different Haar feature values. Taking the feature template corresponding to the edge feature in Fig. 8 as an example, as shown in Fig. 9, the Haar feature value of the region covered by this template is the pixel value of subwindow A minus the pixel value of subwindow B.
S2023: detect multiple target upper-body region images according to the strong classifier and the Haar feature values obtained from each first image.
It should be noted that after the Haar feature value of each subwindow in each first image has been calculated, the terminal can detect multiple target upper-body region images according to the strong classifier and the Haar feature values of each first image; that is, one target upper-body region image can be detected from the Haar feature values of one first image together with the strong classifier. Specifically, the strong classifier can be composed of several weak classifiers. The Haar feature values of the subwindows of each first image are fed into the strong classifier and pass through the weak classifiers stage by stage; each weak classifier judges whether a Haar feature value satisfies its preset target upper-body region feature condition, letting the value through if it does and blocking it if it does not. If a Haar feature value fails at any stage, its corresponding subwindow is rejected and classified as a non-target upper-body region. If it passes every stage, further processing locates the subwindow corresponding to that Haar feature value and classifies it as a target upper-body region. The subwindows classified as target upper-body regions in each first image are then merged to obtain the target upper-body region image corresponding to that first image (for example, the target upper-body region subwindows detected in the first image with 20*20 subwindows are merged to obtain one corresponding target upper-body region image).
For example, as shown in Figure 10, suppose the strong classifier is composed of 3 cascaded weak classifiers. The Haar feature value of each sub-window of the first image whose sub-window count is 24*24 is input into the 3 weak classifiers in sequence; each weak classifier judges whether the Haar feature value meets the corresponding preset target upper-body region feature condition, allowing it to pass if it does and rejecting it otherwise. If a Haar feature value fails at any stage, its corresponding sub-window is rejected and classified as a non-target upper-body region; if it passes every stage, its corresponding sub-window is found by further processing and classified as a target upper-body region. The sub-windows classified as target upper-body regions in the first image whose sub-window count is 24*24 are merged to obtain the target upper-body region image corresponding to that first image. The target upper-body region image corresponding to the first image whose sub-window count is 36*36 can likewise be calculated according to the above steps.
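The stage-by-stage filtering described above can be sketched as a cascade in which a sub-window survives only if every weak-classifier condition accepts it. The predicate form and the three threshold values below are illustrative placeholders, not values from the patent:

```python
def cascade_detect(windows, stages):
    """Pass each sub-window through a cascade of weak-classifier stages.

    windows: list of (window_id, haar_value) pairs.
    stages:  list of predicates; a window is kept only if every stage accepts it.
    A window rejected at any stage is classified as a non-target region.
    """
    accepted = []
    for wid, value in windows:
        if all(stage(value) for stage in stages):
            accepted.append(wid)  # classified as a target upper-body region
    return accepted

# Three illustrative cascaded conditions (thresholds are made up for the sketch).
stages = [lambda v: v > 0.1, lambda v: v > 0.25, lambda v: v > 0.4]
hits = cascade_detect([("w1", 0.6), ("w2", 0.3), ("w3", 0.05)], stages)
# only "w1" passes all three stages
```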
Further, before the multiple target upper-body region images are detected, the strong classifier also needs to be obtained. Specifically, obtaining the strong classifier is described in detail as follows:
1. Select training samples T = {(x_1, y_1), (x_2, y_2), ..., (x_i, y_i), ..., (x_N, y_N)} and store them in a specified location, such as a sample database. Here x_i denotes the i-th sample; y_i = 0 indicates a negative sample (a non-target upper-body region, i.e. the sample does not contain a target upper-body region), and y_i = 1 indicates a positive sample (a target upper-body region, i.e. the sample contains a target upper-body region). N is the number of training samples.
2. Initialize the weight distribution D_1 of the training samples, i.e. assign each training sample the same weight, which can be expressed as:

D_1 = (w_11, w_12, ..., w_1i, ..., w_1N), w_1i = 1/N, i = 1, 2, ..., N
3. Set the number of iterations T, with t = 1, 2, ..., T denoting the current iteration.
4. Normalize the weights:

q_t(i) = D_t(i) / Σ_{j=1..N} D_t(j)

where D_t(i) is the weight of the i-th sample in the t-th iteration and q_t(i) is the normalized weight of the i-th sample in the t-th iteration.
5. Learn from the training samples to obtain multiple weak classifiers, and calculate the classification error rate of each weak classifier on the training samples: using the training samples with weight distribution D_t, learn a weak classifier h(x_i, f_i, p_i, θ_i) and calculate its classification error rate ε_t:

ε_t = Σ_i q_t(i) · |h(x_i, f_i, p_i, θ_i) − y_i|

Here a weak classifier h(x_i, f_i, p_i, θ_i) is composed of a feature f_i, a threshold θ_i, and a parity p_i:

h(x, f, p, θ) = 1 if p·f(x) ≤ p·θ, and 0 otherwise

In addition, x_i is a training sample; the feature f_i corresponds one-to-one with the weak classifier h_i(x_i, f_i, p_i, θ_i); the role of the parity p_i is to control the direction of the inequality so that the inequality sign is a less-than-or-equal sign. Training a weak classifier is the process of finding the optimal threshold θ_i.
6. From the weak classifiers determined in step 5, find the weak classifier h_t with the minimum classification error rate ε_t.
7. Calculate the coefficient β_t of the weak classifier according to the classification error rate:

β_t = ε_t / (1 − ε_t)

This coefficient represents the weight of each weak classifier within the strong classifier. When x_i is correctly classified, e_i takes the value 0; when x_i is misclassified, e_i takes the value 1. The weights of all training samples are then updated with this coefficient:

D_{t+1}(i) = q_t(i) · β_t^(1 − e_i)
8. After the weights of all training samples are updated, steps 4 to 7 are executed in a loop; after T iterations, the iteration ends and the strong classifier H(x) is obtained:

H(x) = 1 if Σ_{t=1..T} α_t·h_t(x) ≥ (1/2)·Σ_{t=1..T} α_t, and 0 otherwise

where α_t = log(1/β_t).
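Steps 1-8 can be sketched as follows for scalar features, assuming the standard Viola-Jones form of the weak classifier (a threshold stump with parity) and the weight update D_{t+1}(i) = q_t(i)·β_t^(1−e_i); the toy data, function names, and the small clamp on ε_t are illustrative choices, not from the patent:

```python
import numpy as np

def train_adaboost(x, y, T):
    """Sketch of steps 1-8 on scalar features x with labels y in {0, 1}.
    Each weak classifier is a threshold stump h = 1 if p*x <= p*theta else 0."""
    N = len(x)
    w = np.full(N, 1.0 / N)                 # step 2: identical initial weights
    classifiers = []
    for t in range(T):                      # step 3: T iterations
        q = w / w.sum()                     # step 4: normalize the weights
        best = None
        for theta in x:                     # candidate thresholds
            for p in (1, -1):               # parity controls inequality direction
                h = (p * x <= p * theta).astype(int)
                eps = float(np.sum(q * np.abs(h - y)))  # step 5: weighted error rate
                if best is None or eps < best[0]:
                    best = (eps, theta, p, h)           # step 6: minimum-error stump
        eps, theta, p, h = best
        eps = max(eps, 1e-10)               # clamp so beta stays positive
        beta = eps / (1 - eps)              # step 7: beta_t
        e = (h != y).astype(int)            # e_i = 0 if correctly classified, else 1
        w = q * beta ** (1 - e)             # update the sample weights
        classifiers.append((np.log(1 / beta), theta, p))  # alpha_t = log(1/beta_t)
    return classifiers

def strong_classify(classifiers, x):
    """Step 8: H(x) = 1 if sum(alpha_t * h_t(x)) >= 0.5 * sum(alpha_t)."""
    votes = sum(a * int(p * x <= p * theta) for a, theta, p in classifiers)
    return int(votes >= 0.5 * sum(a for a, _, _ in classifiers))
```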
S2024, merging the multiple target upper-body region images to obtain the upper-body region.
It should be noted that merging the multiple target upper-body region images to obtain the upper-body region means that the multiple target upper-body region images obtained from the first images with different sub-window counts are merged into the upper-body region. Specifically, the different target upper-body region images are compared with one another. If the overlapping area of two target upper-body region images is greater than a preset threshold, the two images are considered to represent the same target upper-body region; the two regions are merged, i.e. the averages of their positions and sizes are taken as the position and size of the merged target upper-body region. If the overlapping area of two target upper-body region images is smaller than the preset threshold, the two images are considered to represent two different target upper-body regions, and they are merged into one image that contains two target upper-body regions. The upper-body region can be obtained by performing the merge operation multiple times.
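The overlap test of S2024 can be sketched as follows; the greedy pairwise merge, the (x, y, w, h) box representation, and the overlap threshold are illustrative choices, not specified by the patent:

```python
def overlap_area(a, b):
    # a, b: (x, y, w, h) axis-aligned boxes; returns their intersection area.
    ox = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    oy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    return ox * oy

def merge_detections(boxes, min_overlap):
    """Average two boxes whose overlap exceeds min_overlap (same region);
    keep both as distinct regions otherwise."""
    merged = []
    for box in boxes:
        for i, m in enumerate(merged):
            if overlap_area(box, m) > min_overlap:
                # same target upper-body region: average position and size
                merged[i] = tuple((u + v) / 2 for u, v in zip(m, box))
                break
        else:
            merged.append(box)  # a distinct upper-body region
    return merged
```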
S203, determining the skin color range threshold of the target image.
Specifically, in one feasible embodiment, the face region image of the target image is obtained, then the skin color region of the face region image is obtained, and then the skin color range threshold of the skin color region of the face region image is determined. The face region image contains only the facial feature regions (eyes, mouth, nose, etc.) and the skin color region.
The face region image of the target image can be obtained by cropping according to the ROI (region of interest) selected by the user, and an accurate selection can be reached through repeated cropping. The cropping formula is:

Rect_crop = {1 + floor(width/para), 1 + floor(height/para), floor(width/(0.5*para)), floor(height/(0.5*para))}

where width is the width of the target image, height is the height of the target image, para is the cropping coefficient, and floor is the round-down function.
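The cropping formula can be sketched as below. Note that the grouping of `width/0.5*para` is ambiguous in the source text; this sketch assumes the divisor is `0.5*para`, which yields a roughly centered face crop (the function name and the 100x80 example are illustrative):

```python
import math

def crop_rect(width, height, para):
    """Rect_crop = {x, y, w, h} for the user-selected ROI; para is the crop coefficient."""
    return (1 + math.floor(width / para),        # left edge
            1 + math.floor(height / para),       # top edge
            math.floor(width / (0.5 * para)),    # crop width
            math.floor(height / (0.5 * para)))   # crop height
```

For a 100x80 target image with para = 4, the crop keeps roughly the central half of each axis.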
For example, Fig. 2(a) shows a target image, which may include regions such as skin, hair, eyebrows, eyes, and beard, and may also include part of the background region. Since the background region is uncontrollable and varied, the target image shown in Fig. 2(a) can be cropped to obtain the face region image shown in Fig. 2(b); the face region image contains only the facial feature regions (eyes, mouth, nose, etc.) and the skin color region.
Further, obtaining the skin color region of the face region image can be performed as follows: first obtain the hue value and brightness value of each pixel in the face region image, then obtain the target pixels whose hue values are greater than a preset hue threshold and whose brightness values are greater than a preset brightness threshold, and determine the region corresponding to the target pixels as the skin color region of the face region image.
Specifically, since the face region image still contains regions such as eyebrows and eyes, the skin color region is identified according to the differences in hue value and brightness value between the skin and regions such as eyebrows and eyes. A pixel whose hue value is greater than the preset threshold is regarded as belonging to the skin color region, and clustering on the hue values yields the skin color region image. Further, in order to reach a more accurate determination, the brightness information can be clustered as well. The skin color judgment formula is therefore: Index = {h > h_thresh & v > v_thresh}, where h_thresh is the hue threshold, v_thresh is the brightness threshold, h is the hue value of a pixel in the face region image, and v is the brightness value of a pixel in the face region image. The detection result shown in Fig. 3 can thus be obtained, where the grey part is the skin color region.
After the skin color region of the face region image is determined, the hue values of the region are collected to obtain the region's hue maximum Max_h = max(skin_h) and hue minimum Min_h = min(skin_h), where skin_h denotes the hue values of the skin color part. The skin hue range threshold is therefore bounded by Min_h = min(skin_h) and Max_h = max(skin_h). Likewise, in order to judge the skin color region more accurately, the brightness values of the region are collected to obtain the region's brightness maximum Max_v = max(skin_v) and brightness minimum Min_v = min(skin_v), where skin_v denotes the brightness values of the skin color part. The skin brightness range threshold is therefore bounded by Min_v = min(skin_v) and Max_v = max(skin_v). The skin color threshold range of the target image is thus (Min_h, Max_h) and (Min_v, Max_v).
S204, determining the skin color region of the target image according to the skin color range threshold.
Specifically, for the target image, the region whose hue is greater than Min_h = min(skin_h) and less than Max_h = max(skin_h) belongs to the skin color region of the target image. Likewise, the region whose brightness is greater than Min_v = min(skin_v) and less than Max_v = max(skin_v) belongs to the skin color region of the target image. That is, the region whose hue values fall within (Min_h, Max_h) and whose brightness values fall within (Min_v, Max_v) is the skin color region of the target image. For example, in the target image shown in Fig. 4, the skin color region determined according to the skin color range threshold is A, and the remaining region B is the non-skin-color region.
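Steps S203 and S204 together can be sketched as: derive (Min_h, Max_h) and (Min_v, Max_v) from the face-region pixels that satisfy Index = {h > h_thresh & v > v_thresh}, then mark the target-image pixels that fall inside both ranges. The array names and toy values are illustrative:

```python
import numpy as np

def skin_range(face_h, face_v, h_thresh, v_thresh):
    """Derive (Min_h, Max_h) and (Min_v, Max_v) from the face-region pixels
    passing the Index = {h > h_thresh & v > v_thresh} test."""
    mask = (face_h > h_thresh) & (face_v > v_thresh)
    skin_h, skin_v = face_h[mask], face_v[mask]
    return (skin_h.min(), skin_h.max()), (skin_v.min(), skin_v.max())

def skin_region(img_h, img_v, h_range, v_range):
    """Mark target-image pixels whose hue and brightness fall inside both ranges."""
    return ((img_h > h_range[0]) & (img_h < h_range[1]) &
            (img_v > v_range[0]) & (img_v < v_range[1]))
```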
Optionally, if the target image is a partial image of the captured original image, the skin color region of the original image is determined according to the skin color range threshold, so that the detection is more comprehensive and more accurate.
S205, blurring the upper-body region according to a first blur degree and blurring the background region according to a second blur degree, where the non-skin-color region outside the skin color region includes the upper-body region and the background region.

Specifically, in the target image shown in Fig. 4, B is the non-skin-color region, which includes the upper-body region C detected in S202 as well as the background region D. Generally, the upper-body region includes regions such as the shoulders and hair, while the background region usually consists of objects other than the photographed subject. Region C is blurred using the first blur degree and region D is blurred using the second blur degree, so as to produce a layered blur effect.
Optionally, the first blur degree is smaller than the second blur degree.
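S205 can be sketched with a simple mean filter standing in for whatever blur kernel the terminal actually uses; the box filter, the boolean-mask representation, and the kernel radii are illustrative assumptions (a real implementation would more likely use a Gaussian or lens-shaped kernel):

```python
import numpy as np

def box_blur(img, k):
    """Naive box blur with a (2k+1)-wide mean filter; larger k = stronger blur."""
    pad = np.pad(img, k, mode="edge").astype(float)
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += pad[k + dy:k + dy + h, k + dx:k + dx + w]
    return out / (2 * k + 1) ** 2

def layered_blur(img, skin, upper_body, k_body, k_bg):
    """Blur the upper-body part of the non-skin region with degree k_body and the
    background part with degree k_bg (k_body < k_bg), leaving skin untouched."""
    out = img.astype(float).copy()
    background = ~skin & ~upper_body
    out[upper_body & ~skin] = box_blur(img, k_body)[upper_body & ~skin]
    out[background] = box_blur(img, k_bg)[background]
    return out
```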
In this embodiment of the present invention, the terminal first obtains a target image and then detects the upper-body region in the target image; it then determines the skin color range threshold from the face region image and, after the skin color range threshold is determined, obtains the skin color region of the target image; finally, the upper-body region and the background region in the non-skin-color region outside the skin color region of the target image are blurred separately. By first obtaining the face region image to derive the skin color range threshold, on the one hand the skin color region of a portrait image can be detected accurately, which improves the accuracy of skin color detection and the image blurring effect; on the other hand, the skin color range threshold does not need to be extracted from the face region image in real time, so the amount of computation is reduced and the skin color region of the target image can be determined quickly, which improves detection efficiency. In addition, the upper-body region and the background region can be blurred with different blur degrees, so that the blur processing has a stronger sense of layering.
Referring to Figure 11, Figure 11 is a structural diagram of a terminal provided in an embodiment of the present invention. The terminal described in this embodiment includes:
Acquiring unit 10, configured to obtain a target image.
Determining unit 20, configured to determine the skin color range threshold of the target image.
Optionally, the determining unit 20 is specifically configured to:
obtain the face region image of the target image;
obtain the skin color region of the face region image;
determine the skin color range threshold of the skin color region of the face region image.
Optionally, the determining unit 20 is specifically configured to:
obtain the hue value and brightness value of each pixel in the face region image;
obtain target pixels, where the hue value of each target pixel is greater than a preset hue threshold and the brightness value of each target pixel is greater than a preset brightness threshold;
determine the region corresponding to the target pixels as the skin color region of the face region image.
The determining unit 20 is further configured to determine the skin color region of the target image according to the skin color range threshold.
Blurring unit 30, configured to blur the non-skin-color region outside the skin color region in the target image.
Optionally, the terminal further includes:
Detection unit 40, configured to detect the upper-body region of the target image.
The blurring unit 30 is specifically configured to:
blur the upper-body region according to a first blur degree;
blur the background region according to a second blur degree.
Optionally, the detection unit 40 is specifically configured to:
divide the target image multiple times to obtain multiple first images, each of which includes multiple sub-windows;
calculate the Haar feature value of each sub-window in each first image according to an integral image;
detect multiple target upper-body region images according to a strong classifier and the Haar feature values obtained from each first image;
merge the multiple target upper-body region images to obtain the upper-body region.
In this embodiment of the present invention, the terminal first obtains a target image and then detects the upper-body region in the target image; it then determines the skin color range threshold from the face region image and, after the skin color range threshold is determined, obtains the skin color region of the target image; finally, the upper-body region and the background region in the non-skin-color region outside the skin color region of the target image are blurred separately. By first obtaining the face region image to derive the skin color range threshold, on the one hand the skin color region of a portrait image can be detected accurately, which improves the accuracy of skin color detection and the image blurring effect; on the other hand, the skin color range threshold does not need to be extracted from the face region image in real time, so the amount of computation is reduced and the skin color region of the target image can be determined quickly, which improves detection efficiency. In addition, the upper-body region and the background region can be blurred with different blur degrees, so that the blur processing has a stronger sense of layering.
Referring to Figure 12, Figure 12 is a structural diagram of a terminal provided by another embodiment of the present invention. The terminal described in this embodiment can include one or more processors 903, one or more input interfaces 901, one or more output interfaces 902, and a memory 904. The processor 903, input interface 901, output interface 902, and memory 904 are connected through a bus 805. The input interface 901 can include a touchpad, a fingerprint sensor (used to collect a user's fingerprint information and fingerprint direction information), a microphone, and the like; the output interface 902 can include a display (an LCD, etc.), a loudspeaker, and the like.
The processor 903 can be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor can be a microprocessor, or the processor can be any conventional processor or the like.
The memory 904 can be a high-speed RAM memory or a non-volatile memory, such as a magnetic disk memory. The memory 904 is used to store a set of program code; the input interface 901, the output interface 902, and the processor 903 can call the program code stored in the memory 904.
The processor 903 calls the code in the memory 904 to perform the following operations:
obtain a target image;
determine the skin color range threshold of the target image;
determine the skin color region of the target image according to the skin color range threshold;
blur the non-skin-color region outside the skin color region in the target image.
As an optional implementation, the processor 903 calls the code in the memory 904 to further perform the following operations:
obtain the face region image of the target image (which specifically contains only the face and the skin color, and can be selected accurately through repeated cropping);
obtain the skin color region of the face region image;
determine the skin color range threshold of the skin color region of the face region image.
As an optional implementation, the processor 903 calls the code in the memory 904 to further perform the following operations:
obtain the hue value and brightness value of each pixel in the face region image;
obtain target pixels, where the hue value of each target pixel is greater than a preset hue threshold and the brightness value of each target pixel is greater than a preset brightness threshold;
determine the region corresponding to the target pixels as the skin color region of the face region image.
As an optional implementation, the processor 903 calls the code in the memory 904 to further perform the following operations:
detect the upper-body region of the target image;
where blurring the non-skin-color region outside the skin color region in the target image includes:
blurring the upper-body region according to a first blur degree;
blurring the background region according to a second blur degree.
As an optional implementation, the processor 903 calls the code in the memory 904 to further perform the following operations:
divide the target image multiple times to obtain multiple first images, each of which includes multiple sub-windows;
calculate the Haar feature value of each sub-window in each first image according to an integral image;
detect multiple target upper-body region images according to a strong classifier and the Haar feature values obtained from each first image;
merge the multiple target upper-body region images to obtain the upper-body region.
In this embodiment of the present invention, the terminal first obtains a target image and then detects the upper-body region in the target image; it then determines the skin color range threshold from the face region image and, after the skin color range threshold is determined, obtains the skin color region of the target image; finally, the upper-body region and the background region in the non-skin-color region outside the skin color region of the target image are blurred separately. By first obtaining the face region image to derive the skin color range threshold, on the one hand the skin color region of a portrait image can be detected accurately, which improves the accuracy of skin color detection and the image blurring effect; on the other hand, the skin color range threshold does not need to be extracted from the face region image in real time, so the amount of computation is reduced and the skin color region of the target image can be determined quickly, which improves detection efficiency. In addition, the upper-body region and the background region can be blurred with different blur degrees, so that the blur processing has a stronger sense of layering.
In a specific implementation, the processor 903, input interface 901, and output interface 902 described in this embodiment of the present invention can perform the implementations described in the first and second embodiments of the image blurring method provided by the embodiments of the present invention, and can also perform the implementation of the terminal described in the embodiments of the present invention, which will not be repeated here.
A person of ordinary skill in the art can appreciate that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.
In addition, in the several embodiments provided in this application, it should be understood that the disclosed terminal and method can be implemented in other ways. For example, the device embodiments described above are merely illustrative; for instance, the division of the units is only a division by logical function, and there can be other divisions in actual implementation, e.g. multiple units or components can be combined or integrated into another system, or some features can be ignored or not performed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed can be indirect couplings or communication connections through some interfaces, devices, or units, or can be electrical, mechanical, or other forms of connection.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they can be located in one place or distributed over multiple network elements. Some or all of the units can be selected according to actual needs to achieve the purpose of the solutions of the embodiments of the present invention.
In addition, the functional units in the embodiments of the present invention can be integrated into one processing unit, or each unit can exist physically on its own, or two or more units can be integrated into one unit. The integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
The steps in the methods of the embodiments of the present invention can be reordered, combined, and deleted according to actual needs. The units in the terminal of the embodiments of the present invention can be combined, divided, and deleted according to actual needs. The above is only an embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any person skilled in the art can readily conceive of various equivalent modifications or substitutions within the technical scope disclosed by the present invention, and these modifications or substitutions shall all fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the scope of the claims.
Claims (10)
- 1. An image blurring method, characterized by including: obtaining a target image; determining the skin color range threshold of the target image; determining the skin color region of the target image according to the skin color range threshold; and blurring the non-skin-color region outside the skin color region in the target image.
- 2. The method according to claim 1, characterized in that determining the skin color range threshold of the target image includes: obtaining the face region image of the target image; obtaining the skin color region of the face region image; and determining the skin color range threshold of the skin color region of the face region image.
- 3. The method according to claim 2, characterized in that obtaining the skin color region of the face region image includes: obtaining the hue value and brightness value of each pixel in the face region image; obtaining target pixels, where the hue value of each target pixel is greater than a preset hue threshold and the brightness value of each target pixel is greater than a preset brightness threshold; and determining the region corresponding to the target pixels as the skin color region of the face region image.
- 4. The method according to claim 1, characterized in that the non-skin-color region in the target image includes a background region and an upper-body region, and after obtaining the target image the method further includes: detecting the upper-body region of the target image; and blurring the non-skin-color region outside the skin color region in the target image includes: blurring the upper-body region according to a first blur degree; and blurring the background region according to a second blur degree.
- 5. The method according to claim 4, characterized in that detecting the upper-body region of the target image includes: dividing the target image multiple times to obtain multiple first images, each of which includes multiple sub-windows; calculating the Haar feature value of each sub-window in each first image according to an integral image; detecting multiple target upper-body region images according to a strong classifier and the Haar feature values obtained from each first image; and merging the multiple target upper-body region images to obtain the upper-body region.
- 6. A terminal, characterized by including: an acquiring unit, configured to obtain a target image; a determining unit, configured to determine the skin color range threshold of the target image, and further configured to determine the skin color region of the target image according to the skin color range threshold; and a blurring unit, configured to blur the non-skin-color region outside the skin color region in the target image.
- 7. The terminal according to claim 6, characterized in that the determining unit is specifically configured to: obtain the face region image of the target image; obtain the skin color region of the face region image; and determine the skin color range threshold of the skin color region of the face region image.
- 8. The terminal according to claim 7, characterized in that the determining unit is specifically configured to: obtain the hue value and brightness value of each pixel in the face region image; obtain target pixels, where the hue value of each target pixel is greater than a preset hue threshold and the brightness value of each target pixel is greater than a preset brightness threshold; and determine the region corresponding to the target pixels as the skin color region of the face region image.
- 9. The terminal according to claim 6, characterized in that the terminal further includes: a detection unit, configured to detect the upper-body region of the target image; and the blurring unit is specifically configured to: blur the upper-body region according to a first blur degree; and blur the background region according to a second blur degree.
- 10. The terminal according to claim 9, characterized in that the detection unit is specifically configured to: divide the target image multiple times to obtain multiple first images, each of which includes multiple sub-windows; calculate the Haar feature value of each sub-window in each first image according to an integral image; detect multiple target upper-body region images according to a strong classifier and the Haar feature values obtained from each first image; and merge the multiple target upper-body region images to obtain the upper-body region.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710243802.9A CN107424125A (en) | 2017-04-14 | 2017-04-14 | A kind of image weakening method and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107424125A true CN107424125A (en) | 2017-12-01 |
Family
ID=60423381
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710243802.9A Withdrawn CN107424125A (en) | 2017-04-14 | 2017-04-14 | A kind of image weakening method and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107424125A (en) |
- 2017-04-14 CN CN201710243802.9A patent/CN107424125A/en not_active Withdrawn
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109639982A (en) * | 2019-01-04 | 2019-04-16 | Oppo广东移动通信有限公司 | A kind of image denoising method, device, storage medium and terminal |
CN109639982B (en) * | 2019-01-04 | 2020-06-30 | Oppo广东移动通信有限公司 | Image noise reduction method and device, storage medium and terminal |
WO2020140986A1 (en) * | 2019-01-04 | 2020-07-09 | Oppo广东移动通信有限公司 | Image denoising method and apparatus, storage medium and terminal |
CN110070080A (en) * | 2019-03-12 | 2019-07-30 | 上海肇观电子科技有限公司 | A kind of character detecting method and device, equipment and computer readable storage medium |
CN111541924A (en) * | 2020-04-30 | 2020-08-14 | 海信视像科技股份有限公司 | Display apparatus and display method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111260665B (en) | Image segmentation model training method and device | |
CN110751043B (en) | Face recognition method and device based on face visibility and storage medium | |
Martin | An empirical approach to grouping and segmentation | |
CN109637664A (en) | A kind of BMI evaluating method, device and computer readable storage medium | |
WO2018210124A1 (en) | Clothing recommendation method and clothing recommendation device | |
US20140153832A1 (en) | Facial expression editing in images based on collections of images | |
CN108229318A (en) | The training method and device of gesture identification and gesture identification network, equipment, medium | |
JP2018055470A (en) | Facial expression recognition method, facial expression recognition apparatus, computer program, and advertisement management system | |
WO2020187160A1 (en) | Cascaded deep convolutional neural network-based face recognition method and system | |
CN110414428A (en) | A method of generating face character information identification model | |
CN103164687B (en) | A kind of method and system of pornographic image detecting | |
CN109117760A (en) | Image processing method, device, electronic equipment and computer-readable medium | |
CN106803077A (en) | A kind of image pickup method and terminal | |
CN109345553A (en) | A kind of palm and its critical point detection method, apparatus and terminal device | |
US11468709B2 (en) | Image forming apparatus | |
CN107633205A (en) | lip motion analysis method, device and storage medium | |
CN107424125A (en) | A kind of image weakening method and terminal | |
CN111126347B (en) | Human eye state identification method, device, terminal and readable storage medium | |
CN108492301A (en) | A kind of Scene Segmentation, terminal and storage medium | |
CN106878614A (en) | A kind of image pickup method and terminal | |
CN110197238A (en) | A kind of recognition methods, system and the terminal device of font classification | |
CN113179421A (en) | Video cover selection method and device, computer equipment and storage medium | |
CN107146203A (en) | A kind of image weakening method and terminal | |
WO2011096010A1 (en) | Pattern recognition device | |
CN107423663A (en) | A kind of image processing method and terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication |
Application publication date: 20171201 |