CN107154030A - Image processing method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN107154030A (application CN201710348772.8A)
- Authority
- CN
- China
- Prior art keywords
- deformation
- deformed region
- image
- point
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000003672 processing method Methods 0.000 title claims abstract description 23
- 238000000034 method Methods 0.000 claims abstract description 37
- 238000012545 processing Methods 0.000 claims description 34
- 230000008859 change Effects 0.000 claims description 22
- 230000004927 fusion Effects 0.000 claims description 16
- 238000004590 computer program Methods 0.000 claims description 12
- 230000015572 biosynthetic process Effects 0.000 claims description 11
- 230000008569 process Effects 0.000 description 15
- 238000004364 calculation method Methods 0.000 description 8
- 238000010586 diagram Methods 0.000 description 7
- 238000004891 communication Methods 0.000 description 6
- 238000004422 calculation algorithm Methods 0.000 description 5
- 230000001815 facial effect Effects 0.000 description 5
- 210000000056 organ Anatomy 0.000 description 5
- 210000000887 face Anatomy 0.000 description 4
- 230000006870 function Effects 0.000 description 4
- 238000012360 testing method Methods 0.000 description 4
- 238000010168 coupling process Methods 0.000 description 3
- 238000005859 coupling reaction Methods 0.000 description 3
- 230000036544 posture Effects 0.000 description 3
- 238000006243 chemical reaction Methods 0.000 description 2
- 230000008878 coupling Effects 0.000 description 2
- 230000004069 differentiation Effects 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 230000002093 peripheral effect Effects 0.000 description 2
- 230000009466 transformation Effects 0.000 description 2
- 230000009471 action Effects 0.000 description 1
- 238000013480 data collection Methods 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 210000001061 forehead Anatomy 0.000 description 1
- 239000011521 glass Substances 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 230000000750 progressive effect Effects 0.000 description 1
- 230000001737 promoting effect Effects 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 238000000844 transformation Methods 0.000 description 1
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
An embodiment of the invention discloses an image processing method and device, an electronic device, and a storage medium. The method includes: obtaining an original image; determining a deformed region based on feature points of a first target object to be deformed in the original image, where the feature points are used to represent the contour and/or texture features of the first target object; selecting multiple pixels from the deformed region as fixed deformation constraint source points; determining target points based on the deformation constraint source points and a deformation intensity, where a target point is a pixel of the deformed image formed after the first target object is deformed, and the pixel parameters of the target point are equal to the pixel parameters of its deformation constraint source point; determining, based on the original pixel parameters of each pixel in the deformed region and the target points, the pixel parameters of each pixel in the deformed region after deformation, so as to obtain a deformed-region image; and fusing the deformed-region image into the deformed region of the original image to obtain the deformed image.
Description
Technical field
The present invention relates to the field of information technology, and in particular to an image processing method and device, an electronic device, and a storage medium.
Background technology
Image processing includes deforming sub-regions of an existing image, beautifying an image, and the like. For example, common image processing may include face beautification, during which a target organ may need to be deformed. Existing face beautification generally involves a large amount of computation and high complexity, which causes problems such as long response latency, low processing efficiency, or poor image quality after processing.
Summary of the invention
In view of this, embodiments of the present invention are expected to provide an image processing method and device, an electronic device, and a storage medium that solve the problems of poor image quality of processed images and/or the large amount of computation and high complexity of the image processing process.
To achieve the above purpose, the technical solution of the present invention is realized as follows:
A first aspect of an embodiment of the present invention provides an image processing method, including:
obtaining an original image;
determining a deformed region based on feature points of a first target object to be deformed in the original image, where the feature points are used to represent the contour and/or texture features of the first target object;
selecting multiple pixels from the deformed region as fixed deformation constraint source points;
determining target points based on the deformation constraint source points and a deformation intensity, where a target point is a pixel of the deformed image formed after the first target object is deformed, and the pixel parameters of the target point are equal to the pixel parameters of its deformation constraint source point;
determining, based on the original pixel parameters of each pixel in the deformed region and the target points, the pixel parameters of each pixel in the deformed region after deformation, so as to obtain a deformed-region image;
fusing the deformed-region image into the deformed region of the original image to obtain the deformed image.
Based on the above scheme, determining the deformed region based on the feature points of the first target object to be deformed in the original image includes:
obtaining multiple feature points of the first target object;
selecting, according to the coordinate parameters of the multiple feature points in the original image, the intermediate feature point of the first target object as the center point of the deformed region, where the intermediate feature point is the feature point located in the most central position among the multiple feature points;
obtaining a deformation size parameter;
determining the deformed region based on the deformation size parameter and the center point.
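The scheme above can be sketched in code. This is a minimal illustration, not the patent's prescribed formulas: picking the feature point nearest the centroid as the intermediate feature point, and letting a hypothetical `margin` factor stand in for the deformation size parameter, are assumptions.

```python
import math

def deformed_region(feature_points, margin=1.0):
    """Sketch of determining a circular deformed region from feature points:
    the feature point closest to the centroid serves as the intermediate
    feature point (the region's center point), and the radius is the distance
    to the farthest feature point scaled by `margin` (an assumed size param).
    `feature_points` is a list of (x, y) pixel coordinates."""
    cx = sum(p[0] for p in feature_points) / len(feature_points)
    cy = sum(p[1] for p in feature_points) / len(feature_points)
    # Intermediate feature point: the one in the most central position.
    center = min(feature_points,
                 key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
    # Radius chosen so the region covers all feature points.
    radius = margin * max(math.dist(center, p) for p in feature_points)
    return center, radius
```

The region so obtained covers all feature points of the object, consistent with the requirement stated later that the deformed region should at least include all the feature points of the first target object.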
Based on the above scheme, obtaining the deformation size parameter includes:
determining a first deformation radius according to the edge feature points and the central feature point of the first target object, where the edge feature points are the feature points located at edge positions among the multiple feature points;
determining a second deformation radius according to the first deformation radius and a first adjusting parameter;
and determining the deformed region based on the deformation size parameter and the center point includes:
determining the deformed region based on the second deformation radius and the center point.
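The two-radius derivation above can be sketched as follows. Taking the first deformation radius as the mean distance from the central feature point to the edge feature points is one plausible reading, and the default adjusting parameter is illustrative; both are assumptions.

```python
import math

def second_deformation_radius(center_pt, edge_pts, first_adjust=1.2):
    """Sketch: first deformation radius from the central feature point and
    the edge feature points (mean distance, an assumed choice), then the
    second deformation radius scales it by the first adjusting parameter."""
    r1 = sum(math.dist(center_pt, p) for p in edge_pts) / len(edge_pts)
    return r1 * first_adjust
```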
Based on the above scheme, the method further includes:
determining a first deformation intensity according to the feature points of a second target object in which the first target object is located;
determining a second deformation intensity according to the first deformation intensity and a second adjusting parameter;
and determining the target points based on the deformation constraint source points and the deformation intensity includes:
determining the target points based on the deformation constraint points and the second deformation intensity.
Based on the above scheme, the second target object is a face and the first target object is a nose.
Based on the above scheme, the method further includes:
obtaining the bounding rectangle of the deformed region;
cropping the original image according to the bounding rectangle to obtain a cropped image;
converting the cropped image based on the deformed region to obtain a mask image, where in the mask image the pixel value of the pixels located inside the deformed region is a first value and the pixel value of the pixels outside the deformed region is a second value;
and determining, based on the original pixel parameters of each pixel in the deformed region and the target points, the pixel parameters of each pixel in the deformed region after deformation, so as to obtain the deformed-region image, includes:
obtaining the deformed-region image based on the mask image, the original pixel parameters of the cropped image, and the target points.
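The mask image described above can be sketched for a circular deformed region as follows; the concrete first/second values (255 and 0) are illustrative choices, not mandated by the scheme.

```python
import numpy as np

def circular_mask(rect_h, rect_w, center, radius,
                  first_value=255, second_value=0):
    """Sketch of the mask image: pixels inside the (circular) deformed region
    take the first value, pixels outside take the second value. `center` is
    (x, y) in the cropped image's own coordinates."""
    ys, xs = np.mgrid[0:rect_h, 0:rect_w]
    inside = (xs - center[0]) ** 2 + (ys - center[1]) ** 2 <= radius ** 2
    return np.where(inside, first_value, second_value).astype(np.uint8)
```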
Based on the above scheme, the method further includes:
blurring the mask image to obtain a gradient image of the cropped image;
and fusing the deformed-region image into the deformed region of the original image to obtain the deformed image includes:
obtaining fusion weight parameters based on the gradient image;
fusing the original image and the deformed-region image based on the fusion weight parameters.
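The blur-then-blend fusion above can be sketched as follows. A simple box blur stands in for whatever blur is used, and treating all arrays as float grayscale of the same shape is a simplification; both are assumptions.

```python
import numpy as np

def feather_blend(original, deformed, mask, blur=3):
    """Sketch of the fusion step: the binary mask is blurred into a gradient
    image, normalized into per-pixel fusion weights in [0, 1], and the
    deformed-region image is alpha-blended into the original so the
    replacement boundary is not sharp."""
    g = mask.astype(float)
    for _ in range(blur):
        # One pass of a cross-shaped neighbor average (edge padding).
        p = np.pad(g, 1, mode='edge')
        g = (p[:-2, 1:-1] + p[2:, 1:-1] +
             p[1:-1, :-2] + p[1:-1, 2:] + p[1:-1, 1:-1]) / 5.0
    w = g / g.max() if g.max() > 0 else g  # fusion weights
    return w * deformed + (1.0 - w) * original
```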
Based on the above scheme, obtaining the deformed-region image based on the mask image, the original pixel parameters of the cropped image, and the target points includes:
determining the pending pixels of the cropped image based on the mask image;
obtaining the deformed-region image based on the original pixel parameters of the pending pixels and the target points.
A second aspect of an embodiment of the present invention provides an image processing device, including:
a first obtaining unit, configured to obtain an original image;
a first determining unit, configured to determine a deformed region based on feature points of a first target object to be deformed in the original image, where the feature points are used to represent the contour and/or texture features of the first target object;
a selecting unit, configured to select multiple pixels from the deformed region as fixed deformation constraint source points;
a second determining unit, configured to determine target points based on the deformation constraint source points and a deformation intensity, where a target point is a pixel of the deformed image formed after the first target object is deformed, and the pixel parameters of the target point are equal to the pixel parameters of its deformation constraint source point;
a forming unit, configured to determine, based on the original pixel parameters of each pixel in the deformed region and the target points, the pixel parameters of each pixel in the deformed region after deformation, so as to obtain a deformed-region image;
a fusion unit, configured to fuse the deformed-region image into the deformed region of the original image to obtain the deformed image.
Based on the above scheme, the first determining unit is configured to: obtain multiple feature points of the first target object; select, according to the coordinate parameters of the multiple feature points in the original image, the intermediate feature point of the first target object as the center point of the deformed region, where the intermediate feature point is the feature point located in the most central position among the multiple feature points; obtain a deformation size parameter; and determine the deformed region based on the deformation size parameter and the center point.
Based on the above scheme, the first determining unit is configured to: determine a first deformation radius according to the edge feature points and the central feature point of the first target object, where the edge feature points are the feature points located at edge positions among the multiple feature points; determine a second deformation radius according to the first deformation radius and a first adjusting parameter; and determine the deformed region based on the second deformation radius and the center point.
Based on the above scheme, the device further includes:
a third determining unit, configured to determine a first deformation intensity according to the feature points of a second target object in which the first target object is located, and determine a second deformation intensity according to the first deformation intensity and a second adjusting parameter;
the second determining unit is specifically configured to determine the target points based on the deformation constraint points and the second deformation intensity.
Based on the above scheme, the second target object is a face and the first target object is a nose.
Based on the above scheme, the device further includes:
a second obtaining unit, configured to obtain the bounding rectangle of the deformed region;
a cropping unit, configured to crop the original image according to the bounding rectangle to obtain a cropped image;
a converting unit, configured to convert the cropped image based on the deformed region to obtain a mask image, where in the mask image the pixel value of the pixels located inside the deformed region is a first value and the pixel value of the pixels outside the deformed region is a second value;
the forming unit is configured to obtain the deformed-region image based on the mask image, the original pixel parameters of the cropped image, and the target points.
Based on the above scheme, the device further includes:
a blurring unit, configured to blur the mask image to obtain a gradient image of the cropped image;
the fusion unit is configured to obtain fusion weight parameters based on the gradient image, and fuse the original image and the deformed-region image based on the fusion weight parameters.
Based on the above scheme, the forming unit is specifically configured to determine the pending pixels of the cropped image based on the mask image, and obtain the deformed-region image based on the original pixel parameters of the pending pixels and the target points.
A third aspect of an embodiment of the present invention provides an electronic device, including:
a memory, configured to store a computer program;
a processor, connected with the memory and configured to implement, by executing the computer program, the image processing method provided by any of the foregoing.
A fourth aspect of an embodiment of the present invention provides a computer storage medium storing a computer program; after the computer program is executed by a processor, the image processing method provided by any of the foregoing can be realized.
With the image processing method and device, electronic device, and storage medium provided by embodiments of the present invention, when performing image processing, the feature points of the first target object that needs deformation processing are obtained first, and the deformation range is determined based on these feature points, so a deformed region including the first target object can be accurately located. The deformation within the deformed region is then carried out using the correspondence between the deformation constraint source points and the target points, and the deformed image is fused into the original image. Compared with image processing that cannot accurately locate the deformed region, a better image effect can be obtained; moreover, the parts that should not be deformed keep their original appearance throughout the process, the computational complexity is low, and the resource consumption of the image processing device is small.
Brief description of the drawings
Fig. 1 is a schematic flowchart of a first image processing method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of determining the deformed region provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of the feature points of a nose provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of changing the deformed region based on user input provided by an embodiment of the present invention;
Fig. 5 is a display schematic diagram of an image processing method provided by an embodiment of the present invention;
Fig. 6 is the display schematic diagram after nose thinning is performed on the image shown in Fig. 5;
Fig. 7 is a schematic structural diagram of an image processing device provided by an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention;
Fig. 9 is a schematic flowchart of another image processing method provided by an embodiment of the present invention;
Fig. 10 is a schematic diagram of the differences between images provided by an embodiment of the present invention.
Embodiments
The technical scheme of the present invention is further elaborated below with reference to the accompanying drawings and specific embodiments.
As shown in Fig. 1, this embodiment provides an image processing method, including:
Step S110: obtaining an original image;
Step S120: determining a deformed region based on feature points of a first target object to be deformed in the original image, where the feature points are used to represent the contour and/or texture features of the first target object;
Step S130: selecting multiple pixels from the deformed region as fixed deformation constraint source points;
Step S140: determining target points based on the deformation constraint source points and a deformation intensity, where a target point is a pixel of the deformed image formed after the first target object is deformed, and the pixel parameters of the target point are equal to the pixel parameters of its deformation constraint source point;
Step S150: determining, based on the original pixel parameters of each pixel in the deformed region and the target points, the pixel parameters of each pixel in the deformed region after deformation, so as to obtain a deformed-region image;
Step S160: fusing the deformed-region image into the deformed region of the original image to obtain the deformed image.
The image processing method provided by this embodiment can be applied in various image processing equipment, for example, electronic devices that run image processing applications, such as mobile phones, tablet computers, or wearable devices.
Obtaining the original image in step S110 may include: collecting the original image by a camera; receiving the original image from another electronic device through a communication interface; or extracting the original image from a local storage medium of the image processing equipment. In this embodiment the original image includes the pixel parameters of each pixel. The pixel parameters of each pixel here may include color values and a transparency value; the color values may include the trichromatic color values of red (r), green (g), and blue (b), and parameters such as the transparency of each pixel.
In step S120 the feature points of the first target object to be deformed are obtained and the deformed region is determined. The feature points of the first target object here can be pixels, obtained by various feature point extraction methods, that characterize the contour and/or texture of the first target object. For example, the feature points of the first target object may be extracted by the FAST feature extraction algorithm; FAST is short for Features from Accelerated Segment Test, but the method is not limited to this algorithm. In some embodiments a feature point is a pixel whose gray value differs from the gray values around it within a preset range.
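The feature-point criterion just quoted can be illustrated with a small check. Comparing against the mean of the 8 neighbors and the concrete thresholds are hypothetical simplifications of segment-test detectors such as FAST, not the patent's exact test.

```python
def is_feature_point(gray, x, y, low=15, high=60):
    """Illustrative sketch: a pixel counts as a feature point when the
    difference between its gray value and the mean gray value of its 8
    neighbors falls within a preset range [low, high]. Thresholds are
    assumed values. `gray` is a 2D list of gray values."""
    neighbors = [gray[y + dy][x + dx]
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                 if (dx, dy) != (0, 0)]
    diff = abs(gray[y][x] - sum(neighbors) / len(neighbors))
    return low <= diff <= high
```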
The feature points of the first target object include feature points characterizing the edge positions of the first target object and feature points at intermediate positions. In this embodiment the edge of the deformed region can be determined at least based on the edge feature points; in certain embodiments the deformed region at least needs to include all the feature points of the first target object.
On the one hand, when determining the deformed region in this embodiment, the feature points of the first target object are obtained, and the deformed region is determined based on the distribution positions or pixel coordinates of the feature points. In this way, compared with random cropping or user-delineated cropping in the prior art, an accurate deformed region can be obtained for the first target object under different postures or forms; compared with the inaccuracy of random cropping or manual operation, the deformed region can be accurately determined, so that images of different postures can obtain a better image processing effect.
On the other hand, because the deformed region in this embodiment is determined based on the feature points of the first target object, even if the shooting angle or the like causes the first target object to present different postures in the original image, the graphics region including the first target object can be adaptively extracted as the deformed region, thereby improving the stability and accuracy of the deformation.
In certain embodiments, step S130 determines the deformation constraint source points based on the deformed region. In some embodiments the number and positions of the deformation constraint source points can be determined according to the area of the deformed region. In short, in this embodiment the deformed region can be the image region of the original image that includes only the first target object. For example, if the first target object is a nose, the deformed region can be the graphics region that includes the whole nose and only the nose; if the first target object is an eye, the deformed region can be the graphics region of the original image that includes only the eye.
For example, when the deformed region is a circular deformed region, the deformation constraint source points can be selected at equal angles on the periphery of the deformed region according to a predetermined angle. When the area of the deformed region is larger than a first area or the deformation radius is larger than a first radius, the deformation constraint source points are taken at equal angles using a first predetermined angle; when the area of the deformed region is not larger than the first area or the deformation radius is not larger than the first radius, the deformation constraint source points are taken at equal angles using a second predetermined angle. The first predetermined angle is smaller than the second predetermined angle. Certainly, the circular deformed region can also be an elliptical region.
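The equal-angle selection above can be sketched as follows; the concrete area threshold and the two predetermined angles are illustrative assumptions (the scheme only requires the first angle to be smaller than the second).

```python
import math

def constraint_source_points(center, radius, first_area=2000.0,
                             first_angle=30, second_angle=45):
    """Sketch: pick deformation constraint source points at equal angles on
    the periphery of a circular deformed region. Larger regions use the
    smaller first predetermined angle (more source points), smaller regions
    the larger second angle (fewer points)."""
    area = math.pi * radius ** 2
    step = first_angle if area > first_area else second_angle
    return [(center[0] + radius * math.cos(math.radians(a)),
             center[1] + radius * math.sin(math.radians(a)))
            for a in range(0, 360, step)]
```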
The target points can be determined based on the deformation constraint source points and the deformation intensity, and the deformation of the target object is then carried out in step S150. In this way, locating the deformed region based on feature points and combining the coordinates of the deformation constraint source points and the target points yields an algorithm that is simple and of low computational complexity, so the image processing method provided by this embodiment is easy to implement.
In this embodiment the deformation intensity can be a parameter characterizing the degree of deformation produced after the first target object is deformed. The deformation intensity may include a scaling ratio for scaling the first target object; it may also include a magnification value or reduction value for scaling the first target object, or a radian adjustment value of the radian of part of the contour of the first target object. In short, there are many kinds of specific parameters corresponding to the deformation intensity, and the deformation intensity is not limited to any of the above.
For example, when the deformation intensity is a scaling ratio: if the first target object is a nose, it can be the scaling ratio for scaling the nose; if the first target object is a face, it can be the scaling ratio for scaling the face.
In this embodiment the target point is the pixel that, after deformation, carries the pixel parameters of its deformation constraint source point. The pixel parameters here may include color values, a transparency value, and the like.
In step S150 the pixel parameters of each pixel in the deformed region are redetermined based on the original pixel parameters and the target points, so that the image of the deformed region after deformation is obtained. In this embodiment the image of the deformed region after deformation is called the deformed-region image.
In step S150 only the deformed region is deformed; in this way, the obtained deformed-region image is fused to the corresponding position of the deformed region of the original image. This obviously avoids performing graphics processing on the other regions of the original image beyond the deformed region, so the original presentation of the parts that should not be deformed is maintained, and the other graphic objects beyond the first target object are protected.
In step S160 the original image and the deformed-region image can be fused to obtain the deformed image. In this embodiment step S160 may include replacing the image of the deformed region in the original image with the deformed-region image, which directly yields the deformed image. However, to avoid the problem that the replacement boundary is too sharp after the deformed-region image replaces the deformed region, in this embodiment, when fusing the original image and the deformed-region image, edge blurring is performed after the deformed-region image replaces the deformed region of the original image, so that the edge region transitions smoothly and the image quality of the deformed image is further improved.
Optionally, as shown in Fig. 2, step S120 may include:
Step S121: obtaining multiple feature points of the first target object;
Step S122: selecting, according to the coordinate parameters of the multiple feature points in the original image, the intermediate feature point of the first target object as the center point of the deformed region, where the intermediate feature point is the feature point located in the most central position among the multiple feature points;
Step S123: obtaining a deformation size parameter;
Step S124: determining the deformed region based on the deformation size parameter and the center point.
Obtaining the feature points in this embodiment includes obtaining the pixel coordinates in the original image of each feature point of the first target object. After each pixel coordinate is obtained, the feature point located at the central position of these feature points can be selected according to the coordinates; this feature point is called the intermediate feature point, and in this embodiment it serves as the center point of the deformed region. For example, if the deformed region is a circular region, the pixel coordinates of the intermediate feature point serve as the center of the circle; if the deformed region is a rectangular region, the pixel coordinates of the intermediate feature point serve as the center point of the rectangle; if the deformed region is an elliptical region, the pixel coordinates of the intermediate feature point serve as the center of the ellipse.
In Fig. 3 the first target object is the nose of a face; it can be seen that the feature points of the nose can be used to describe the contour of the nose in the facial image. The feature points of the nose in Fig. 3 include edge feature points located at the edge of the nose and an intermediate feature point at the middle position of the nose. Generally the edge feature points surround the intermediate feature point. The dashed circle shown in Fig. 3 can be the circular deformed region formed around the intermediate feature point.
In this embodiment the preset shape of the deformed region corresponds to the first target object. If the current first target object is a nose or an eye, the preset shape of the deformed region is a circle; if the first target object is a face or lips, the preset shape of the deformed region is an ellipse. Certainly the above are only examples, and the implementation is not confined to these cases.
In this embodiment the deformation intensity can be determined by the electronic equipment according to preset rules; for example, when some organ of the face is deformed, the preset rules are transformation rules determined based on human aesthetic preferences.
The deformation size parameter here can essentially be a parameter describing the deformed region, for example, the radius of a circular deformed region, the side length of a rectangular deformed region, the pixel coordinates of the vertex pixels of a rectangular deformed region, or the values of the major and minor axes of an elliptical deformed region.
After the deformation size parameter is determined in step S120, the deformed region can obviously be accurately delineated based on the center point, so a deformed region of high accuracy is obtained.
Optionally, step S123 may include:
determining a first deformation radius according to the edge feature points and the central feature point of the first target object, where the edge feature points are the feature points located at edge positions among the multiple feature points;
determining a second deformation radius according to the first deformation radius and a first adjusting parameter;
and step S124 may include:
determining the deformed region based on the second deformation radius and the center point.
First adjusting parameter, may be based on user and indicates input in the present embodiment.
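The two-radius determination above can be sketched as follows. The rule that the first deformation radius is the maximum distance from the central feature point to the edge feature points is an assumption for illustration, as are all names; the patent only says the first radius is determined from the edge feature points and the central feature point.

```python
import math

def deformation_region(center, edge_points, adjust=1.0):
    # First deformation radius: here taken as the largest distance from the
    # central feature point to any edge feature point (assumed rule).
    r1 = max(math.hypot(x - center[0], y - center[1]) for x, y in edge_points)
    # Second deformation radius: first radius scaled by the first
    # adjusting parameter.
    r2 = r1 * adjust
    return center, r2

center, r2 = deformation_region((50, 50), [(60, 50), (50, 65), (38, 50)], adjust=1.2)
```

The pair (center, r2) then fully describes a circular deformed region.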
As shown in Fig. 4, after the central point and the first deformation parameter are determined, the electronic device displays, overlaid on the original image, a reference deformed region corresponding to the first deformation radius, and displays an adjustment control — the adjustment bar shown in Fig. 4. The adjustment bar includes a guide rail and a slider on the rail; by touch or mouse operation the user can move the slider along the rail, and the electronic device determines the first adjusting parameter or the second parameter according to the slider's movement parameters. For example, the scaling ratio of the first deformation parameter is determined based on at least one of the slider's movement amount and movement direction. Specifically, the movement amount may determine a scaling increment B, and the movement direction may determine the sign of that increment: when the movement direction is the first direction, the first adjusting parameter A = 1 + B; when the movement direction is the second direction, A = 1 - B. The first direction and the second direction are opposite, and the slider's home position is the midpoint of the rail. Of course, the above is merely an example; in implementation, the first adjusting parameter may also be formed by a user's moving operation on the edge of the reference deformed region. For example, the user clicks the edge of the reference deformed region on the screen and pushes it; when the pushing stops, the first adjusting parameter can be determined based on the push amount, and the second deformation radius determined from it.
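The slider-to-parameter mapping above (A = 1 + B in the first direction, A = 1 - B in the second) can be sketched as follows; taking B proportional to the slider's offset from the rail midpoint is an assumption:

```python
def first_adjust_param(offset, rail_half_len):
    # B is the scaling increment, taken here as the slider's movement amount
    # relative to half the rail length (assumed proportionality); the sign
    # of the offset encodes the movement direction.
    b = abs(offset) / rail_half_len
    if offset > 0:          # first direction: A = 1 + B
        return 1.0 + b
    if offset < 0:          # second direction: A = 1 - B
        return 1.0 - b
    return 1.0              # home position at the rail midpoint
```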
In Fig. 4 the adjustment bar is the second adjustment control, usable for adjusting the deformation intensity. Fig. 4 also shows a first adjustment control, used to adjust the extent of the deformed region.
In the left part of Fig. 4 the slider sits at the far left of the rail, and the dashed circle shown there represents the deformed region determined based on the first deformation radius; in the right part the slider has been moved toward the middle of the rail, which can be understood as an increased deformation intensity. The first adjustment control in Fig. 4 includes multiple child controls, and selecting different child controls corresponds to different first adjusting parameters: in the left part the fourth child control of the first adjustment control is selected, and the corresponding deformed region is the dashed circle there; in the right part the fifth child control is selected, yielding a deformed region determined from the resulting second deformation radius. Clearly the area of the deformed region in the left part is smaller than that in the right part.
In some embodiments, the method further includes:
determining a first deformation intensity according to the feature points of the second target object in which the first target object is located;
determining a second deformation intensity according to the first deformation intensity and a second adjusting parameter.
Step S140 may include:
determining the target points based on the deformation constraint points and the second deformation intensity.
In some embodiments the deformation intensity may be specified directly; in the present embodiment the electronic device may first recommend a deformation intensity according to preset rules — this may be the first deformation intensity — and then, based on the user's individual needs, the second deformation intensity is obtained by adjustment on the basis of the first deformation intensity.
The second adjusting parameter here may be an adjusting parameter determined in various ways.
For example, if the current scheme is used to reduce a nose, the deformation intensity may be used to determine the ratio by which the nose is reduced, the area of the nose after reduction, and so on.
Therefore, in the present embodiment the target point corresponding to each deformation constraint source point is determined based on the deformation intensity.
For example, the second target object is a face and the first target object is a nose. If the second target object is a face, then based on popular aesthetics a larger face pairs more attractively with a larger nose, and the electronic device can provide a recommended deformation intensity — the first deformation intensity — according to a preset proportional relationship. Some users may then have their own needs and indicate that a greater intensity is required, in which case the first deformation intensity is adjusted in the present embodiment based on the second adjusting parameter to obtain the second deformation intensity.
In some embodiments, an optimal deformation intensity and a recommended range may be determined based on the preset proportional relationship between the face and the nose; the optimal deformation intensity is usually the median of the recommended range, and is one kind of first deformation intensity. The second adjusting parameter then adjusts the second deformation intensity within the recommended range, so that an unfamiliar user or a misoperation does not make the scaling of the nose too large or too small and produce, on the contrary, a photo that does not conform to human aesthetics.
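A minimal sketch of confining the second deformation intensity to the recommended range; the multiply-then-clamp form is an assumption about how the second adjusting parameter is applied:

```python
def second_intensity(first_intensity, second_adjust, lo, hi):
    # Scale the recommended (first) intensity by the second adjusting
    # parameter, then clamp to the recommended range [lo, hi] so that a
    # misoperation cannot push the scaling outside aesthetic bounds.
    return min(max(first_intensity * second_adjust, lo), hi)
```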
When adding a fun effect to a photo, the first deformation intensity may likewise be a recommended intensity and recommended range for the fun deformation, with the second adjusting parameter changing the recommended intensity within the recommended range, thereby ensuring the deformed image has a large enough deformation amount to form an amusing image.
In implementation, if the second target object is a face, the first target object may be any organ of the face; that is, it is not limited to the nose, and may also be another facial organ or region such as the eyes, lips or forehead.
Optionally, the method further includes:
obtaining the bounding rectangle of the deformed region;
cropping the original image according to the bounding rectangle to obtain an intercepted image;
converting the intercepted image based on the deformed region to obtain a mask image, wherein in the mask image the pixel value of pixels inside the deformed region is a first value and the pixel value of pixels outside the deformed region is a second value.
Step S150 may include:
obtaining the deformed-region image based on the mask image, the original pixel parameters of the intercepted image and the target points.
As shown in Fig. 10, the original image (imgA) is cropped: the image where imgA's deformed region is located is cut out, forming the intercepted image (imgI).
To keep the part of the original image outside the deformed region unchanged, in the present embodiment only the intercepted image undergoes subsequent processing, forming the deformed-region image that replaces the deformed region in the original image.
As shown in Fig. 10, in the present embodiment the mask image (imgM) may be a binarized image. The gray values of the pixels in the mask image include only the first value and the second value. In the present embodiment the gray value of the pixels inside the deformed region of the mask image may be 255, and the gray value of the pixels outside the deformed region may be 0. Once the electronic device obtains the mask image, it knows which pixels of the intercepted image require pixel-parameter conversion.
Of course, the first value differs from the second value; they are not limited to 255 and 0 and may also specifically be 0 and 1, etc.
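The binary mask described above can be sketched in pure Python; the circular region shape and all names here are illustrative:

```python
def make_mask(w, h, center, radius, first_val=255, second_val=0):
    # Single-channel mask: first value inside the circular deformed region,
    # second value outside (255/0 here, but any distinct pair such as 1/0
    # works, as noted above).
    cx, cy = center
    return [[first_val if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
             else second_val
             for x in range(w)] for y in range(h)]

mask = make_mask(5, 5, (2, 2), 1.5)
```

Reading the mask then tells the device which pixels of the intercepted image need pixel-parameter conversion.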
Optionally, the method further includes: blurring the mask image to obtain a gradient map of the intercepted image.
Step S160 may include: obtaining fusion weight parameters based on the gradient map; and fusing the original image and the deformed-region image based on the fusion parameters.
In the present embodiment a gradient map is also obtained; the gradient map is derived from the mask image. For example, with the gray values of the pixels inside the deformed region of the mask image all set to 255 and those outside set to 0, the pixel gray values in the mask image change abruptly at the edge of the deformed region; in the present embodiment blurring makes the gray values of the pixels at the edge of the deformed region change gradually, e.g. stepping progressively from 0 outside the deformed region up to 255.
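A toy blur illustrating how the hard 0/255 mask edge becomes a gradient; a real implementation would typically use e.g. a Gaussian blur, and the box-blur kernel here is only an assumption for illustration:

```python
def box_blur(img, k=1):
    # Average each pixel with its (2k+1)x(2k+1) neighborhood, clipped at
    # the image border; this softens the binary edge into a gradient.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - k), min(h, y + k + 1))
                    for xx in range(max(0, x - k), min(w, x + k + 1))]
            out[y][x] = sum(vals) // len(vals)
    return out

mask = [[0, 0, 0], [0, 255, 0], [0, 0, 0]]
soft = box_blur(mask)
```

After blurring, pixels near the edge take intermediate gray values between 0 and 255, which is exactly what the fusion weighting needs.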
In the present embodiment a weight parameter is determined from the gray value of each pixel in the gradient map. For example, if the pixel at coordinates (a, b) in the gradient map has gray value c, then when the deformed-region image is fused with the original image, c serves as the weighting parameter for fusing the pixel at (a, b) with the corresponding pixel of the original image. The weighting parameters may include a first weighting parameter and a second weighting parameter: the first weighting parameter is the influence value of the corresponding original pixel's parameters in the fusion, and the second weighting parameter is the influence value of the corresponding pixel parameters of the deformed-region image in the fusion. The pixel parameters of a pixel of the fused image are then the original image's pixel-parameter value multiplied by the first weighting parameter, plus the corresponding pixel parameters of the deformed-region image multiplied by the second weighting parameter. Of course, this is merely an example, and implementations are not limited to it.
Optionally, obtaining the deformed-region image based on the mask image, the original pixel parameters of the intercepted image and the target points includes:
determining the to-be-processed pixels of the intercepted image based on the mask image;
obtaining the deformed-region image based on the original pixel parameters of the to-be-processed pixels and the target points.
In the present embodiment the mask image is used not only for determining the aforementioned weighting parameters but also for delineating the pixels whose pixel parameters are to be converted; clearly one mask image realizes both functions, reusing the data and simplifying the device's processing flow.
Fig. 5 is a schematic diagram of a face image before nose slimming; in Fig. 5 the dashed line at the nose position represents the outline of the nose after slimming, and the solid line at the nose position represents the outline of the nose before slimming.
Fig. 6 is a schematic diagram of the face image after nose slimming.
As shown in Fig. 7, the present embodiment provides an image processing apparatus, including:
a first acquisition unit 110 for obtaining an original image;
a first determining unit 120 for determining a deformed region based on the feature points of a first target object to be deformed in the original image, wherein the feature points embody the outline and/or texture features of the first target object;
a selecting unit 130 for selecting multiple pixels from the deformed region as fixed deformation constraint source points;
a second determining unit 140 for determining target points based on the deformation constraint source points and the deformation intensity, wherein the target points are pixels of the deformed image formed after the first target object is deformed, and a target point's pixel parameters equal those of its deformation constraint source point;
a forming unit 150 for determining, based on the original pixel parameters of each pixel in the deformed region and the target points, the pixel parameters of each pixel in the deformed region after deformation, so as to obtain the deformed-region image;
a fusion unit 160 for fusing the deformed-region image into the deformed region of the original image to obtain the deformed image.
The image processing apparatus provided by the present embodiment can be applied in various image processing devices. The first acquisition unit 110 may include a communication interface, usable for receiving the original image from a peripheral; the first acquisition unit 110 may also include a camera, usable for capturing the original image.
The first determining unit 120, selecting unit 130, second determining unit 140, forming unit 150 and fusion unit 160 may each correspond to a processor or processing circuit. The processor may be a central processing unit, microprocessor, digital signal processor, application processor or programmable array; the processing circuit may include an application-specific integrated circuit.
The processor or processing circuit realizes the above functions by executing executable code such as a computer program.
Optionally, the first determining unit 120 is configured to obtain multiple feature points of the first target object; select, according to the coordinate parameters of the multiple feature points in the original image, the intermediate feature point of the first target object as the central point of the deformed region, wherein the intermediate feature point is the feature point located at the most central position among the multiple feature points; obtain a deformation size parameter; and determine the deformed region based on the deformation size parameter and the central point.
In the present embodiment, multiple feature points of the first target object are obtained first; the feature points here are edge feature points located at edge positions and an intermediate feature point located in the central region of the first target object. In the present embodiment the deformed region is determined based on the overall distribution of the feature points: the intermediate feature point is used as the central point of the deformed region, and the deformation size parameter is then formed based on the edge feature points, so that the deformed region formed surrounds at least all the feature points.
Optionally, the first determining unit 120 is configured to determine a first deformation radius according to the edge feature points and the central feature point of the first target object, wherein the edge feature points are the feature points located at edge positions among the multiple feature points; determine a second deformation radius according to the first deformation radius and a first adjusting parameter; and determine the deformed region based on the second deformation radius and the central point.
In the present embodiment the first adjusting parameter may be determined based on user input, which conveniently lets the user control the deformed region corresponding to the first target object and so meet individual needs.
Optionally, the apparatus further includes:
a third determining unit for determining a first deformation intensity according to the feature points of the second target object in which the first target object is located, and determining a second deformation intensity according to the first deformation intensity and a second adjusting parameter;
the second determining unit 140 being specifically configured to determine the target points based on the deformation constraint points and the second deformation intensity.
In the present embodiment the third determining unit may also correspond to a processor or processing circuit, which again obtains the second deformation intensity by executing code. The second determining unit 140 determines the target points specifically based on the second deformation intensity.
Optionally, the second target object is a face and the first target object is a nose.
In some embodiments, the apparatus further includes:
a second acquisition unit for obtaining the bounding rectangle of the deformed region;
an interception unit for cropping the original image according to the bounding rectangle to obtain the intercepted image;
a converting unit for converting the intercepted image based on the deformed region to obtain the mask image, wherein in the mask image the pixel value of pixels inside the deformed region is the first value and the pixel value of pixels outside the deformed region is the second value;
the forming unit 150 being configured to obtain the deformed-region image based on the mask image, the original pixel parameters of the intercepted image and the target points.
In the present embodiment the second acquisition unit, interception unit and converting unit may all correspond to a processor or processing circuit, and can realize the above functions simply by executing the corresponding code.
The forming unit 150, based on the mask image, determines the deformed region and re-determines the pixel parameters of the pixels in the corresponding deformed region pixel by pixel, thereby obtaining the deformed-region image after deformation.
Optionally, the apparatus further includes:
a blurring unit for blurring the mask image to obtain the gradient map of the intercepted image;
the fusion unit 160 being configured to obtain fusion weight parameters based on the gradient map, and to fuse the original image and the deformed-region image based on the fusion parameters.
The blurring unit may likewise correspond to a processor or processing circuit, which realizes, by executing code, the generation of the new deformed image.
Further, the forming unit 150 is specifically configured to determine the to-be-processed pixels of the intercepted image based on the mask image, and to obtain the deformed-region image based on the original pixel parameters of the to-be-processed pixels and the target points.
As shown in Fig. 8, the present embodiment also provides an electronic device, characterized by including:
a memory 210 for storing a computer program;
a processor 220, connected to the memory 210, for realizing, by executing the computer program, the image processing method provided by any of the foregoing embodiments.
The memory 210 may include various types of storage media; the storage medium may be a non-transitory storage medium such as a read-only storage medium, and the memory may also include flash memory, etc.
The processor 220 may include a central processing unit, microprocessor, digital signal processor, application processor, programmable array, etc.
The processor 220 and the memory 210 are connected by a bus 230; the bus 230 may be an inter-integrated circuit (I2C) bus or a peripheral component interconnect (PCI) bus, and serves for information exchange between the memory and the processor.
In some embodiments, the electronic device further includes a display 240 for displaying image information and/or text information, making it convenient to show the original image, the intercepted image, the deformed-region image, the fused deformed image and so on.
In some embodiments, as illustrated, the electronic device further includes a communication interface 250 for exchanging information with other electronic devices.
The present embodiment also provides a computer storage medium storing a computer program; after the computer program is executed by a processor, the image processing method provided by any of the foregoing embodiments can be realized.
A concrete scheme is described below in connection with any of the above embodiments:
This example provides an image processing method with input face image imgA and output result image imgR.
Step 1: perform facial feature extraction on imgA to obtain the face feature points. The face feature points Fi here characterize the outline information of the face in imgA and/or the position information of each facial organ. For example, Fi includes M points, where M may equal 80, among which Fi (i = 56, ..., 64) are the feature points of the nose. In this example a face recognition algorithm is used to obtain the 9 connected points locating the nose region, i.e. i = 56-64.
Step 2: obtain the deformation radius parameter R and the deformation intensity parameter M.
Step 3: determine the deformed region, which may include: according to the input image's anchor point information Fi (i = 64) and the deformation radius R, calculate the rectangular region Rect(x, y, w, h) corresponding to the nose slimming, and the corresponding region image imgI. The coordinates (x, y) represent one vertex position of the rectangular region, w represents the width of the rectangle, and h represents its height. In this example (x, y) may be the coordinates of the rectangle's top-left vertex.
The pixel region to which the rectangular region corresponds in imgA is calculated as follows:
R0 = R*ratio(1), where ratio(1) is an empirical numerical constant from testing, e.g. 1.3; ratio(1) is the first adjusting parameter and may be an empirical value or a value determined from user input;
x = Fi.x - R0; y = Fi.y - R0; w = 2.0*R0; h = 2.0*R0, where Fi.x is the abscissa of the i-th nose feature point and Fi.y its ordinate.
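The rectangle computation above transcribes directly into code (the function name is assumed):

```python
def deform_rect(fx, fy, R, ratio1=1.3):
    # R0 = R * ratio(1); the rectangle is the axis-aligned square of side
    # 2*R0 whose top-left corner is the nose feature point offset by R0
    # in both axes, exactly as the formulas above state.
    r0 = R * ratio1
    return (fx - r0, fy - r0, 2.0 * r0, 2.0 * r0)

rect = deform_rect(100.0, 80.0, 10.0)
```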
Step 4: deform the deformed region, which may include:
obtaining the nose-slimming deformed-region mask image imgM based on the position information of Fi, processed as follows:
create a single-channel image imgM whose width and height equal those of Rect (w, h), and set the pixel values of all its pixels to 0; then, with Fi (i = 64) as the center and r0 as the radius, draw a circle on imgM and set the pixel parameters of the pixels inside the circular region to 255. The pixel parameters here at least include the gray value.
In the above, r0 = R*c0, where c0 is a preset parameter value taken from test experience, e.g. 0.9.
The result is the deformed-region mask image imgM: a single-channel image in which the pixel value G at any position (x, y) is defined as G > 0: deformed region; G = 0: non-deformed region.
The deformation constraint source points Sj (j = 0, 1, 2, ..., 7) and the target points Dj (j = 0, 1, 2, ..., 7) are calculated from the deformation radius parameter R and the feature point Fi (i = 64). Taking Dj as (dx, dy), Fi (i = 64) as (fx, fy) and Sj as (sx, sy): dx is the abscissa of the target point and dy its ordinate; fx is the abscissa of the feature point and fy its ordinate.
The calculation of the j-th point's conversion is given as an example below:
sx = fx + R1*cos(Aj), sy = fy + R1*sin(Aj);
dx = fx + R2*cos(Aj), dy = fy + R2*sin(Aj);
R1 is calculated as R1 = R*c1, where c1 is a preset parameter value drawn from test experience, e.g. 0.8;
R2 is calculated as R2 = R*(1.0 - M*ratio(2)), where ratio(2) is an empirical numerical constant, e.g. 0.6, and M is the input deformation intensity parameter.
Aj takes the angle values of the deformation constraint source points at equal intervals, specifically 0, 45, 90, ..., 315 degrees — 8 angle values taken in 45-degree increments.
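The source/target point formulas above can be sketched as follows (function name assumed). For a positive intensity M, R2 < R1, which pulls the constraint points inward toward the feature point and so slims the nose:

```python
import math

def constraint_and_target_points(fx, fy, R, M, c1=0.8, ratio2=0.6):
    # R1 = R*c1 places the 8 source points Sj on a circle around the
    # feature point; R2 = R*(1 - M*ratio(2)) places the matching target
    # points Dj. Angles Aj step by 45 degrees from 0 to 315.
    r1 = R * c1
    r2 = R * (1.0 - M * ratio2)
    src, dst = [], []
    for j in range(8):
        a = math.radians(45.0 * j)
        src.append((fx + r1 * math.cos(a), fy + r1 * math.sin(a)))
        dst.append((fx + r2 * math.cos(a), fy + r2 * math.sin(a)))
    return src, dst

src, dst = constraint_and_target_points(0.0, 0.0, 10.0, 0.5)
```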
According to the input image (imgI), Sj, Fi (i = 64), Dj and imgM, the deformation result image imgR0 is calculated based on a deformation algorithm.
Step 5: fuse the deformed image with the original image, including:
blurring imgM, the result being expressed as the gradient map (imgAlpha); using the imgAlpha image as the weight parameter, fuse imgR0 with the input image imgA to obtain the result image imgR. The calculation process is as follows:
define the pixel values imgAlpha(G), imgA(r, g, b), imgR(r, g, b) and imgR0(r, g, b) at input image position (x, y); the calculation formula is:
R(r, g, b) = A(r, g, b) * (255 - G) + R0(r, g, b) * (G);
where r represents the red color value of a pixel, with a value range of 0 to 255; g represents the green color value, with a value range of 0 to 255; b represents the blue color value, with a value range of 0 to 255; and G represents the gray value of the pixel, likewise ranging from 0 to 255.
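The fusion formula above can be sketched per channel; note that the division by 255 is an assumption — without some normalization of this kind the stated formula would leave results outside the 0-255 range:

```python
def fuse_pixel(a, r0, g):
    # Per-channel fusion: the gradient-map gray value g (0..255) weights
    # the deformed pixel r0, and (255 - g) weights the original pixel a.
    # The // 255 normalization keeps the result in 0..255 (assumed).
    return (a * (255 - g) + r0 * g) // 255
```

Applied per channel at every position, this yields imgR with a seamless transition at the deformed region's edge.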
As shown in Fig. 9, the image processing method provided by this example includes:
Step S1: input the image imgA, the nose feature point Fi (i = 64), the deformation radius R and the deformation intensity mag;
Step S2: with Fi (i = 64) as the center and R*ratio(1) as the radius, take 8 points at 45-degree angular intervals as the deformation constraint source points Sj (j = 0, ..., 7);
Step S3: based on the deformation constraint source points and the deformation intensity mag, calculate the deformation target points Dj (j = 0, ..., 7);
Step S4: calculate the deformation rectangular region Rect(x, y, w, h), and copy the Rect(x, y, w, h) image from imgA as imgI, used as the deformation input;
Step S5: based on Rect(x, y, w, h) and imgI, calculate the nose-slimming deformation mask image imgM;
Step S6: based on imgM, calculate the single-channel gray-scale map imgAlpha, used for fusing the deformation output result;
Step S7: input Sj, Dj, imgI and imgM, and obtain the output image imgR0 using the deformation algorithm;
Step S8: based on imgAlpha, imgR0 and imgA, fuse to obtain the output result image imgR;
Step S9: output the result image imgR.
Fig. 10 shows the derivation relationships between the images in the image processing process: first, imgI is cropped from the original image imgA; imgI is processed to obtain the mask image imgM; the mask image imgM is processed to obtain the gradient map imgAlpha; the nose-slimmed imgR0 is obtained based on imgI and imgM; and imgA, imgAlpha and imgR0 are fused to obtain imgR.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be realized in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a division by logical function, and other division modes are possible in actual realization: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be indirect coupling or communication connection of devices or units through some interfaces, and may be electrical, mechanical or of other forms.
The units described above as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units, and some or all of the units may be selected according to actual needs to realize the purpose of the scheme of this embodiment.
In addition, the functional units in the various embodiments of the present invention may all be integrated into one processing module, or each unit may individually serve as one unit, or two or more units may be integrated into one unit; the integrated unit may be realized in the form of hardware, or in the form of hardware plus software functional units.
One of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be completed by program instructions together with related hardware; the foregoing program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The foregoing storage medium includes various media capable of storing program code, such as a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The foregoing is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any change or replacement that readily occurs to those familiar with the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be determined by the protection scope of the claims.
Claims (15)
1. An image processing method, characterized in that it includes:
obtaining an original image;
determining a deformed region based on the feature points of a first target object to be deformed in the original image, wherein the feature points embody the outline and/or texture features of the first target object;
selecting multiple pixels from the deformed region as fixed deformation constraint source points;
determining target points based on the deformation constraint source points and a deformation intensity, wherein the target points are pixels of the deformed image formed after the first target object is deformed, and a target point's pixel parameters equal those of its deformation constraint source point;
determining, based on the original pixel parameters of each pixel in the deformed region and the target points, the pixel parameters of each pixel in the deformed region after deformation, so as to obtain a deformed-region image;
fusing the deformed-region image into the deformed region of the original image to obtain the deformed image.
2. The method according to claim 1, characterized in that
determining the deformed region based on the feature points of the first target object to be deformed in the original image includes:
obtaining multiple feature points of the first target object;
selecting, according to the coordinate parameters of the multiple feature points in the original image, the intermediate feature point of the first target object as the central point of the deformed region, wherein the intermediate feature point is the feature point located at the most central position among the multiple feature points;
obtaining a deformation size parameter;
determining the deformed region based on the deformation size parameter and the central point.
3. The method according to claim 2, characterized in that
obtaining the deformation size parameter comprises:
determining a first deformation radius according to an edge feature point and the central feature point of the first target object, wherein the edge feature point is the feature point located at an edge position among the plurality of feature points;
determining a second deformation radius according to the first deformation radius and a first adjustment parameter;
and determining the deformed region based on the deformation size parameter and the centre point comprises:
determining the deformed region based on the second deformation radius and the centre point.
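Claims 2 and 3 derive the deformed region from the feature points themselves: a middle feature point becomes the centre, the centre-to-edge distance gives a first deformation radius, and an adjustment parameter scales it into the second radius. A small sketch under assumed definitions ("middle" = closest to the centroid, "edge" = farthest from the centre; the `adjust` default is invented for illustration):

```python
import math

def deformation_region(feature_points, adjust=1.2):
    """Return (centre point, second deformation radius) for a set of
    (x, y) feature points.  The centroid-based choice of the middle
    point and the farthest-point choice of the edge point are
    assumptions about what the claims leave unspecified."""
    n = len(feature_points)
    cx = sum(p[0] for p in feature_points) / n
    cy = sum(p[1] for p in feature_points) / n
    # Middle feature point: the one closest to the centroid.
    center = min(feature_points,
                 key=lambda p: math.hypot(p[0] - cx, p[1] - cy))
    # Edge feature point: the one farthest from the centre.
    edge = max(feature_points,
               key=lambda p: math.hypot(p[0] - center[0], p[1] - center[1]))
    r1 = math.hypot(edge[0] - center[0], edge[1] - center[1])
    return center, r1 * adjust   # first radius scaled into the second
```

Scaling the first radius (rather than hard-coding a size) keeps the deformed region proportional to the detected object, which is presumably why the claims introduce the first adjustment parameter.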
4. The method according to claim 1, 2 or 3, characterized in that the method further comprises:
determining a first deformation intensity according to feature points of a second target object in which the first target object is located;
determining a second deformation intensity according to the first deformation intensity and a second adjustment parameter;
and determining the target points based on the deformation constraint source points and the deformation intensity comprises:
determining the target points based on the deformation constraint source points and the second deformation intensity.
5. The method according to claim 4, characterized in that
the second target object is a face and the first target object is a nose.
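Claims 4 and 5 tie the deformation intensity to the enclosing second target object (the face) rather than the deformed object itself, so the effect scales with face size across images. A hedged sketch; deriving the first intensity from the face width and the `adjust` default are assumptions, not the patent's formula:

```python
def deformation_intensity(face_points, adjust=0.5):
    """Derive a second deformation intensity from the feature points of
    the second target object (e.g. a face).  First intensity = width of
    the feature-point bounding span (an assumed heuristic); the second
    adjustment parameter then scales it."""
    xs = [p[0] for p in face_points]
    first = max(xs) - min(xs)    # first deformation intensity
    return first * adjust        # second deformation intensity
```

A nose-reshaping intensity derived this way stays, say, half the face width regardless of whether the face occupies 100 or 1000 pixels, which is the practical point of anchoring the intensity to the second target object.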
6. The method according to claim 1, 2 or 3, characterized in that the method further comprises:
obtaining a bounding rectangle of the deformed region;
cropping the original image according to the bounding rectangle to obtain a cropped image;
converting the cropped image based on the deformed region to obtain a mask image, wherein in the mask image the pixel value of pixels inside the deformed region is a first value and the pixel value of pixels outside the deformed region is a second value;
and determining, based on the original pixel parameters of each pixel in the deformed region and the target points, the pixel parameters of each pixel in the deformed region after deformation so as to obtain the deformed region image comprises:
obtaining the deformed region image based on the mask image, the original pixel parameters of the cropped image and the target points.
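Claim 6 restricts processing to the deformed region's bounding rectangle and builds a binary mask over that crop. A minimal sketch for a circular region; the dict representation and the first/second values 255 and 0 are illustrative choices:

```python
def region_mask(rect, center, radius, first=255, second=0):
    """Build a mask over the bounding rectangle `rect` = (x0, y0, x1, y1):
    pixels inside the circular deformed region get `first`, pixels
    outside get `second`.  Names and defaults are assumptions."""
    x0, y0, x1, y1 = rect
    mask = {}
    for y in range(y0, y1):
        for x in range(x0, x1):
            inside = (x - center[0]) ** 2 + (y - center[1]) ** 2 <= radius ** 2
            mask[(x, y)] = first if inside else second
    return mask
```

Cropping to the bounding rectangle before masking means the warp touches only the pixels that can change, rather than the whole original image.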
7. The method according to claim 6, characterized in that the method further comprises:
blurring the mask image to obtain a gradient map of the cropped image;
and fusing the deformed region image into the deformed region of the original image to obtain the deformed image comprises:
obtaining a fusion weight parameter based on the gradient map;
fusing the original image and the deformed region image based on the fusion weight parameter.
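Claim 7 softens the binary mask into a gradient map (e.g. by blurring) and reads a per-pixel fusion weight from it, so the deformed region blends into the original without a hard seam. A sketch of the final alpha blend; the blur step that would produce `grad` is omitted for brevity, and all names are illustrative:

```python
def blend(original, deformed, grad, scale=255.0):
    """Fuse the deformed region image into the original using the
    blurred mask `grad` (values in 0..scale) as per-pixel fusion
    weights.  Images are dicts {(x, y): value}."""
    out = {}
    for p, v in original.items():
        w = grad.get(p, 0) / scale            # fusion weight in [0, 1]
        out[p] = (1 - w) * v + w * deformed.get(p, v)
    return out
```

Pixels where the gradient map is at full value take the deformed result, pixels where it is zero keep the original, and the blurred transition band interpolates between the two.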
8. The method according to claim 6, characterized in that
obtaining the deformed region image based on the mask image, the original pixel parameters of the cropped image and the target points comprises:
determining pixels to be processed of the cropped image based on the mask image;
obtaining the deformed region image based on the original pixel parameters of the pixels to be processed and the target points.
9. An image processing apparatus, characterized in that it comprises:
a first acquisition unit, configured to obtain an original image;
a first determination unit, configured to determine a deformed region based on feature points of a first target object to be deformed in the original image, wherein the feature points embody the contour and/or texture features of the first target object;
a selection unit, configured to select a plurality of pixels from the deformed region as fixed deformation constraint source points;
a second determination unit, configured to determine target points based on the deformation constraint source points and a deformation intensity, wherein the target points are pixels of the deformed image formed after the first target object is deformed, and the pixel parameters of each target point are equal to the pixel parameters of its deformation constraint source point;
a formation unit, configured to determine, based on the original pixel parameters of each pixel in the deformed region and the target points, the pixel parameters of each pixel in the deformed region after deformation, so as to obtain a deformed region image;
a fusion unit, configured to fuse the deformed region image into the deformed region of the original image to obtain the deformed image.
10. The apparatus according to claim 9, characterized in that
the first determination unit is configured to obtain a plurality of feature points of the first target object; select, according to the coordinate parameters of the plurality of feature points in the original image, the middle feature point of the first target object as the centre point of the deformed region, wherein the middle feature point is the feature point located closest to the middle among the plurality of feature points; obtain a deformation size parameter; and determine the deformed region based on the deformation size parameter and the centre point.
11. The apparatus according to claim 10, characterized in that
the first determination unit is configured to determine a first deformation radius according to an edge feature point and the central feature point of the first target object, wherein the edge feature point is the feature point located at an edge position among the plurality of feature points; determine a second deformation radius according to the first deformation radius and a first adjustment parameter; and determine the deformed region based on the second deformation radius and the centre point.
12. The apparatus according to claim 9, 10 or 11, characterized in that the apparatus further comprises:
a third determination unit, configured to determine a first deformation intensity according to feature points of a second target object in which the first target object is located, and to determine a second deformation intensity according to the first deformation intensity and a second adjustment parameter;
the second determination unit being specifically configured to determine the target points based on the deformation constraint source points and the second deformation intensity.
13. The apparatus according to claim 9, 10 or 11, characterized in that the apparatus further comprises:
a second acquisition unit, configured to obtain a bounding rectangle of the deformed region;
a cropping unit, configured to crop the original image according to the bounding rectangle to obtain a cropped image;
a conversion unit, configured to convert the cropped image based on the deformed region to obtain a mask image, wherein in the mask image the pixel value of pixels inside the deformed region is a first value and the pixel value of pixels outside the deformed region is a second value;
the formation unit being configured to obtain the deformed region image based on the mask image, the original pixel parameters of the cropped image and the target points.
14. An electronic device, characterized in that it comprises:
a memory, configured to store a computer program;
a processor, connected to the memory and configured to implement, by executing the computer program, the image processing method provided by any one of claims 1 to 8.
15. A computer storage medium storing a computer program which, when executed by a processor, implements the image processing method provided by any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710348772.8A CN107154030B (en) | 2017-05-17 | 2017-05-17 | Image processing method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107154030A true CN107154030A (en) | 2017-09-12 |
CN107154030B CN107154030B (en) | 2023-06-09 |
Family
ID=59792877
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710348772.8A Active CN107154030B (en) | 2017-05-17 | 2017-05-17 | Image processing method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107154030B (en) |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107203963A (en) * | 2016-03-17 | 2017-09-26 | 腾讯科技(深圳)有限公司 | A kind of image processing method and device, electronic equipment |
CN107526504A (en) * | 2017-08-10 | 2017-12-29 | 广州酷狗计算机科技有限公司 | Method and device, terminal and the storage medium that image is shown |
CN107707818A (en) * | 2017-09-27 | 2018-02-16 | 努比亚技术有限公司 | Image processing method, device and computer-readable recording medium |
CN107730445A (en) * | 2017-10-31 | 2018-02-23 | 广东欧珀移动通信有限公司 | Image processing method, device, storage medium and electronic equipment |
CN108765274A (en) * | 2018-05-31 | 2018-11-06 | 北京市商汤科技开发有限公司 | A kind of image processing method, device and computer storage media |
CN108830784A (en) * | 2018-05-31 | 2018-11-16 | 北京市商汤科技开发有限公司 | A kind of image processing method, device and computer storage medium |
CN109146769A (en) * | 2018-07-24 | 2019-01-04 | 北京市商汤科技开发有限公司 | Image processing method and device, image processing equipment and storage medium |
CN109242765A (en) * | 2018-08-31 | 2019-01-18 | 腾讯科技(深圳)有限公司 | A kind of face image processing process, device and storage medium |
CN109658360A (en) * | 2018-12-25 | 2019-04-19 | 北京旷视科技有限公司 | Method, apparatus, electronic equipment and the computer storage medium of image procossing |
CN109685015A (en) * | 2018-12-25 | 2019-04-26 | 北京旷视科技有限公司 | Processing method, device, electronic equipment and the computer storage medium of image |
CN110111240A (en) * | 2019-04-30 | 2019-08-09 | 北京市商汤科技开发有限公司 | A kind of image processing method based on strong structure, device and storage medium |
CN110136224A (en) * | 2018-02-09 | 2019-08-16 | 三星电子株式会社 | Image interfusion method and equipment |
CN110555794A (en) * | 2018-05-31 | 2019-12-10 | 北京市商汤科技开发有限公司 | image processing method and device, electronic equipment and storage medium |
CN110766603A (en) * | 2018-07-25 | 2020-02-07 | 北京市商汤科技开发有限公司 | Image processing method and device and computer storage medium |
CN110852932A (en) * | 2018-08-21 | 2020-02-28 | 北京市商汤科技开发有限公司 | Image processing method and apparatus, image device, and storage medium |
CN110956679A (en) * | 2018-09-26 | 2020-04-03 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and computer readable storage medium |
CN111028137A (en) * | 2018-10-10 | 2020-04-17 | Oppo广东移动通信有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
WO2020087731A1 (en) * | 2018-10-30 | 2020-05-07 | 北京市商汤科技开发有限公司 | Image processing method and apparatus, computer device and computer storage medium |
CN111736788A (en) * | 2020-06-28 | 2020-10-02 | 广州励丰文化科技股份有限公司 | Image processing method, electronic device, and storage medium |
CN111968050A (en) * | 2020-08-07 | 2020-11-20 | Oppo(重庆)智能科技有限公司 | Human body image processing method and related product |
CN112087648A (en) * | 2019-06-14 | 2020-12-15 | 腾讯科技(深圳)有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
WO2021012596A1 (en) * | 2019-07-24 | 2021-01-28 | 广州视源电子科技股份有限公司 | Image adjustment method, device, storage medium, and apparatus |
CN113096022A (en) * | 2019-12-23 | 2021-07-09 | RealMe重庆移动通信有限公司 | Image blurring processing method and device, storage medium and electronic equipment |
CN113706369A (en) * | 2020-05-21 | 2021-11-26 | 北京达佳互联信息技术有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN113986105A (en) * | 2020-07-27 | 2022-01-28 | 北京达佳互联信息技术有限公司 | Face image deformation method and device, electronic equipment and storage medium |
WO2024120223A1 (en) * | 2022-12-09 | 2024-06-13 | 北京字跳网络技术有限公司 | Image processing method and apparatus, and device, storage medium and computer program product |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050162419A1 (en) * | 2002-03-26 | 2005-07-28 | Kim So W. | System and method for 3-dimension simulation of glasses |
FR2920560A1 (en) * | 2007-09-05 | 2009-03-06 | Botton Up Soc Responsabilite L | Three-dimensional synthetic actor i.e. avatar, constructing and immersing method, involves constructing psychic profile from characteristic points and features, and fabricating animated scene from head of profile and animation base |
CN102221954A (en) * | 2010-04-15 | 2011-10-19 | ***通信集团公司 | Zooming displayer as well as electronic device comprising same and zoom displaying method |
CN102999929A (en) * | 2012-11-08 | 2013-03-27 | 大连理工大学 | Triangular gridding based human image face-lift processing method |
CN104036453A (en) * | 2014-07-03 | 2014-09-10 | 上海斐讯数据通信技术有限公司 | Image local deformation method and image local deformation system and mobile phone with image local deformation method |
CN104067311A (en) * | 2011-12-04 | 2014-09-24 | 数码装饰有限公司 | Digital makeup |
US20150117719A1 (en) * | 2013-10-29 | 2015-04-30 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and storage medium |
CN104637078A (en) * | 2013-11-14 | 2015-05-20 | 腾讯科技(深圳)有限公司 | Image processing method and device |
CN104657974A (en) * | 2013-11-25 | 2015-05-27 | 腾讯科技(上海)有限公司 | Image processing method and device |
CN106296590A (en) * | 2015-05-11 | 2017-01-04 | 福建天晴数码有限公司 | Skin coarseness self adaptation mill skin method, system and client |
CN106303153A (en) * | 2015-05-29 | 2017-01-04 | 腾讯科技(深圳)有限公司 | A kind of image processing method and device |
US20170076474A1 (en) * | 2014-02-23 | 2017-03-16 | Northeastern University | System for Beauty, Cosmetic, and Fashion Analysis |
2017-05-17: application CN201710348772.8A filed in China; granted as CN107154030B (status: active)
Non-Patent Citations (1)
Title |
---|
CHEN, Weihua: "Human body animation and face exaggeration based on image deformation", Master's Theses Electronic Journal * |
Cited By (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107203963A (en) * | 2016-03-17 | 2017-09-26 | 腾讯科技(深圳)有限公司 | A kind of image processing method and device, electronic equipment |
CN107203963B (en) * | 2016-03-17 | 2019-03-15 | 腾讯科技(深圳)有限公司 | A kind of image processing method and device, electronic equipment |
CN107526504A (en) * | 2017-08-10 | 2017-12-29 | 广州酷狗计算机科技有限公司 | Method and device, terminal and the storage medium that image is shown |
CN107707818A (en) * | 2017-09-27 | 2018-02-16 | 努比亚技术有限公司 | Image processing method, device and computer-readable recording medium |
CN107707818B (en) * | 2017-09-27 | 2020-09-29 | 努比亚技术有限公司 | Image processing method, image processing apparatus, and computer-readable storage medium |
CN107730445A (en) * | 2017-10-31 | 2018-02-23 | 广东欧珀移动通信有限公司 | Image processing method, device, storage medium and electronic equipment |
CN107730445B (en) * | 2017-10-31 | 2022-02-18 | Oppo广东移动通信有限公司 | Image processing method, image processing apparatus, storage medium, and electronic device |
CN110136224A (en) * | 2018-02-09 | 2019-08-16 | 三星电子株式会社 | Image interfusion method and equipment |
CN108765274A (en) * | 2018-05-31 | 2018-11-06 | 北京市商汤科技开发有限公司 | A kind of image processing method, device and computer storage media |
CN108830784A (en) * | 2018-05-31 | 2018-11-16 | 北京市商汤科技开发有限公司 | A kind of image processing method, device and computer storage medium |
CN110555794A (en) * | 2018-05-31 | 2019-12-10 | 北京市商汤科技开发有限公司 | image processing method and device, electronic equipment and storage medium |
US11216904B2 (en) | 2018-05-31 | 2022-01-04 | Beijing Sensetime Technology Development Co., Ltd. | Image processing method and apparatus, electronic device, and storage medium |
CN109146769A (en) * | 2018-07-24 | 2019-01-04 | 北京市商汤科技开发有限公司 | Image processing method and device, image processing equipment and storage medium |
CN110766603B (en) * | 2018-07-25 | 2024-04-12 | 北京市商汤科技开发有限公司 | Image processing method, device and computer storage medium |
CN110766603A (en) * | 2018-07-25 | 2020-02-07 | 北京市商汤科技开发有限公司 | Image processing method and device and computer storage medium |
CN110852932B (en) * | 2018-08-21 | 2024-03-08 | 北京市商汤科技开发有限公司 | Image processing method and device, image equipment and storage medium |
CN110852932A (en) * | 2018-08-21 | 2020-02-28 | 北京市商汤科技开发有限公司 | Image processing method and apparatus, image device, and storage medium |
CN109242765B (en) * | 2018-08-31 | 2023-03-10 | 腾讯科技(深圳)有限公司 | Face image processing method and device and storage medium |
CN109242765A (en) * | 2018-08-31 | 2019-01-18 | 腾讯科技(深圳)有限公司 | A kind of face image processing process, device and storage medium |
CN110956679A (en) * | 2018-09-26 | 2020-04-03 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and computer readable storage medium |
CN110956679B (en) * | 2018-09-26 | 2023-07-14 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and computer readable storage medium |
CN111028137A (en) * | 2018-10-10 | 2020-04-17 | Oppo广东移动通信有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN111028137B (en) * | 2018-10-10 | 2023-08-15 | Oppo广东移动通信有限公司 | Image processing method, apparatus, electronic device, and computer-readable storage medium |
WO2020087731A1 (en) * | 2018-10-30 | 2020-05-07 | 北京市商汤科技开发有限公司 | Image processing method and apparatus, computer device and computer storage medium |
CN109658360A (en) * | 2018-12-25 | 2019-04-19 | 北京旷视科技有限公司 | Method, apparatus, electronic equipment and the computer storage medium of image procossing |
CN109685015A (en) * | 2018-12-25 | 2019-04-26 | 北京旷视科技有限公司 | Processing method, device, electronic equipment and the computer storage medium of image |
CN109658360B (en) * | 2018-12-25 | 2021-06-22 | 北京旷视科技有限公司 | Image processing method and device, electronic equipment and computer storage medium |
CN110111240A (en) * | 2019-04-30 | 2019-08-09 | 北京市商汤科技开发有限公司 | A kind of image processing method based on strong structure, device and storage medium |
CN112087648A (en) * | 2019-06-14 | 2020-12-15 | 腾讯科技(深圳)有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
WO2021012596A1 (en) * | 2019-07-24 | 2021-01-28 | 广州视源电子科技股份有限公司 | Image adjustment method, device, storage medium, and apparatus |
CN113096022B (en) * | 2019-12-23 | 2022-12-30 | RealMe重庆移动通信有限公司 | Image blurring processing method and device, storage medium and electronic device |
CN113096022A (en) * | 2019-12-23 | 2021-07-09 | RealMe重庆移动通信有限公司 | Image blurring processing method and device, storage medium and electronic equipment |
CN113706369A (en) * | 2020-05-21 | 2021-11-26 | 北京达佳互联信息技术有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN111736788A (en) * | 2020-06-28 | 2020-10-02 | 广州励丰文化科技股份有限公司 | Image processing method, electronic device, and storage medium |
CN113986105A (en) * | 2020-07-27 | 2022-01-28 | 北京达佳互联信息技术有限公司 | Face image deformation method and device, electronic equipment and storage medium |
CN113986105B (en) * | 2020-07-27 | 2024-05-31 | 北京达佳互联信息技术有限公司 | Face image deformation method and device, electronic equipment and storage medium |
CN111968050A (en) * | 2020-08-07 | 2020-11-20 | Oppo(重庆)智能科技有限公司 | Human body image processing method and related product |
CN111968050B (en) * | 2020-08-07 | 2024-02-20 | Oppo(重庆)智能科技有限公司 | Human body image processing method and related products |
WO2024120223A1 (en) * | 2022-12-09 | 2024-06-13 | 北京字跳网络技术有限公司 | Image processing method and apparatus, and device, storage medium and computer program product |
Also Published As
Publication number | Publication date |
---|---|
CN107154030B (en) | 2023-06-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107154030A (en) | Image processing method and device, electronic equipment and storage medium | |
CN103914699B (en) | A kind of method of the image enhaucament of the automatic lip gloss based on color space | |
CN110705448B (en) | Human body detection method and device | |
CN101052996B (en) | Face image display, face image display method | |
CN106971165B (en) | A kind of implementation method and device of filter | |
CN110163640A (en) | A kind of method and computer equipment of product placement in video | |
CN107507217B (en) | Method and device for making certificate photo and storage medium | |
JP7129502B2 (en) | Face image processing method and device, image equipment and storage medium | |
CN108229279A (en) | Face image processing process, device and electronic equipment | |
CN103686125A (en) | Depth estimation device, depth estimation method, depth estimation program, image processing device, image processing method, and image processing program | |
CN107507216A (en) | The replacement method of regional area, device and storage medium in image | |
US20110050685A1 (en) | Image processing apparatus, image processing method, and program | |
CN107610202A (en) | Marketing method, equipment and the storage medium replaced based on facial image | |
CN106652015B (en) | Virtual character head portrait generation method and device | |
CN107203963A (en) | A kind of image processing method and device, electronic equipment | |
CN103168316A (en) | User interface control device, user interface control method, computer program, and integrated circuit | |
CN109426767A (en) | Informer describes guidance device and its method | |
CN103413339B (en) | Method for reconstructing and displaying gigapixel high dynamic range images | |
CN106254859A (en) | Light field display control method and device, light field display device | |
CN112686820A (en) | Virtual makeup method and device and electronic equipment | |
CN108596992B (en) | Rapid real-time lip gloss makeup method | |
CN116648733A (en) | Method and system for extracting color from facial image | |
JP4775067B2 (en) | Digital content creation system, digital content creation program, and digital content creation method | |
CN117157673A (en) | Method and system for forming personalized 3D head and face models | |
CN106846399A (en) | A kind of method and device of the vision center of gravity for obtaining image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||