CN108234882A - Image blurring method and mobile terminal - Google Patents
Image blurring method and mobile terminal
- Publication number
- CN108234882A CN108234882A CN201810143060.7A CN201810143060A CN108234882A CN 108234882 A CN108234882 A CN 108234882A CN 201810143060 A CN201810143060 A CN 201810143060A CN 108234882 A CN108234882 A CN 108234882A
- Authority
- CN
- China
- Prior art keywords
- area
- parameter
- blurring
- target image
- luminance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
Abstract
The present invention provides an image blurring method and a mobile terminal. The method includes: determining a background area in a target image; dividing the background area into at least two subregions according to preset parameter information of the background area; determining different blur parameters corresponding to the at least two subregions according to their preset parameter information; and blurring the at least two subregions of the target image to different degrees according to the blur parameters, to obtain a blurred target image. The present invention can blur each subregion to a different degree according to the characteristics of each subregion of the background, satisfying users' personalized image-shooting needs and improving the image-shooting effect.
Description
Technical field
Embodiments of the present invention relate to the technical field of image processing, and in particular to an image blurring method and a mobile terminal.
Background technology
With the progress of mobile technology and the development of society, mobile terminals play an ever larger role in daily life, and the camera has become one of their basic functions.
To make the shooting subject stand out and improve the photographic effect, current mobile terminals generally support a background-blurring function. However, when this function is used for image or video shooting, the same degree of blur is applied to every background image, so all photos and videos captured by the mobile terminal are blurred identically. Since the background of each image differs, and even the same background can express different content, applying a uniform degree of blur to all backgrounds cannot satisfy users' personalized shooting needs and degrades the shooting effect.
Summary of the invention
Embodiments of the present invention provide an image blurring method and a mobile terminal, to solve the problem in related image-blurring schemes that blurring the background image with a single, uniform degree of blur cannot satisfy users' personalized image-shooting needs and degrades the shooting effect.
To solve the above technical problem, the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides an image blurring method applied to a mobile terminal, the method including:
determining a background area in a target image;
dividing the background area into at least two subregions according to preset parameter information of the background area;
determining different blur parameters corresponding to the at least two subregions according to their preset parameter information; and
blurring the at least two subregions of the target image to different degrees according to the blur parameters, to obtain a blurred target image.
In a second aspect, an embodiment of the present invention further provides a mobile terminal, including:
a first determining module, configured to determine a background area in a target image;
a division module, configured to divide the background area into at least two subregions according to preset parameter information of the background area;
a second determining module, configured to determine different blur parameters corresponding to the at least two subregions according to their preset parameter information; and
a first blurring module, configured to blur the at least two subregions of the target image to different degrees according to the blur parameters, to obtain a blurred target image.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the image blurring method described above.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the image blurring method described above.
In the embodiments of the present invention, the background area of a target image is divided into at least two subregions according to its preset parameter information, different blur parameters are set according to the preset parameter information of the at least two subregions, and finally each subregion of the target image is blurred according to its own blur parameter. The present invention can thus blur each subregion to a different degree according to the characteristics of each subregion of the background image, satisfying users' personalized image-shooting needs and improving the image-shooting effect.
Description of the drawings
Fig. 1 is a flowchart of an image blurring method according to an embodiment of the present invention;
Fig. 2 is a block diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the hardware structure of a mobile terminal according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to Fig. 1, a flowchart of an image blurring method according to an embodiment of the present invention is shown. The method is applied to a mobile terminal and may specifically include the following steps:
Optionally, step 101: obtaining a target image.
The target image may be an image acquired or shot after the mobile terminal receives an instruction to open the camera application, or a part of that image; the target image is stored in the terminal as a cache. Specifically, a user may send an instruction to open the camera application to the mobile terminal by touch, voice, or other means; upon receiving the instruction triggered by the user, the mobile terminal opens the camera application to obtain the target image, or a part of the original image as the target image. The target image may also be a preview image, a captured image, or a part of either.
Step 102: determining the background area in the target image.
The background area includes the non-portrait area. The background area may be determined with any known algorithm, which is not described here again.
Optionally, step 103: detecting the preset parameter information of the background area.
The preset parameter information may include, but is not limited to, characteristic information of the image region such as luminance information and color information. The preset parameter information of the image region may likewise be detected with any known algorithm, which is not described here again.
Step 104: dividing the background area into at least two subregions according to the preset parameter information.
The method of this embodiment divides the background area into regions according to its preset parameter information, so as to obtain at least two subregions with different preset parameter information.
For example, when the preset parameter information includes luminance information, the background area can be divided into a high-luminance region and a low-luminance region. When the preset parameter information includes color information, the background area can be divided into one subregion in which a target color is concentrated and another subregion in which the target color is not concentrated.
Of course, the number of subregions obtained in this step is not limited to two and may be more than two.
In addition, subregions divided according to different characteristics may overlap; the present invention does not limit this, and such overlap does not affect its implementation. For example, a high-luminance region and a subregion in which the target color is concentrated may share an overlapping part.
Step 105: determining, according to the preset parameter information of the at least two subregions, the different blur parameters corresponding to them respectively.
Since the preset parameter information of each subregion is known, the blur parameter of each subregion can be determined from its preset parameter information. Because the subregions are divided according to differences in preset parameter information (for example, luminance information), their blur parameters also differ; the blur parameter of a subregion is related to its preset parameter information.
Step 106: blurring the at least two subregions of the target image to different degrees according to the blur parameters, to obtain the blurred target image.
Each subregion can be blurred according to the blur parameter set for it. Since the blur parameters of the subregions differ, the degree of blur of each subregion also differs, and the blurred target image is finally obtained.
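As a concrete illustration of step 106, the sketch below applies a different blur strength to each subregion through per-region masks. The box filter, the mask layout, and the kernel sizes are assumptions made for illustration; the patent does not specify any particular blur algorithm.

```python
import numpy as np

def box_blur(img, k):
    """Blur a 2-D image with a k x k box filter (k odd), edges padded."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def blur_by_region(img, masks, kernel_sizes):
    """Blur each masked subregion with its own kernel size (its blur parameter)."""
    out = img.astype(float).copy()
    for mask, k in zip(masks, kernel_sizes):
        out[mask] = box_blur(img.astype(float), k)[mask]
    return out
```

A larger kernel size plays the role of a stronger blur parameter, so two subregions with different parameters end up blurred to visibly different degrees.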
In the embodiments of the present invention, the background area of a target image is divided into at least two subregions according to its preset parameter information, different blur parameters are set according to the preset parameter information of the at least two subregions, and finally each subregion of the target image is blurred according to its own blur parameter. The present invention can thus blur each subregion to a different degree according to the characteristics of each subregion of the background image, satisfying users' personalized image-shooting needs and improving the image-shooting effect.
Optionally, in one embodiment, the preset parameter information includes luminance information.
In that case, step 103 can be implemented by detecting the luminance value of each pixel in the background area.
Step 104 can then be implemented by the following sub-steps:
S41: determining at least one first luminance region in the background area according to the luminance values of its pixels, where the absolute difference between the luminance value of a first luminance region and the luminance value of the target image is greater than a first luminance threshold.
Since the luminance value of the target image is known and can be collected, this step finds at least one high-luminance region in the background area, i.e., a region whose luminance differs from that of the target image by more than the first luminance threshold in absolute value.
In one example, the values of the RGB channels of the background area of the target image can be extracted and plotted as a histogram, where larger RGB values mean higher image brightness. The histogram is then searched for regions where RGB values are high and pixels are relatively concentrated. Finally, among these regions, at least one high-luminance region is selected whose absolute luminance difference from the target image exceeds the first luminance threshold.
S42: determining at least one second luminance region in the background area according to the luminance values of its pixels, where the absolute difference between the luminance value of a second luminance region and the luminance value of the target image is less than a second luminance threshold.
Similarly, this step finds at least one low-luminance region in the background area, i.e., a region whose luminance differs from that of the target image by less than the second luminance threshold in absolute value.
In one example, the RGB channel values of the background area can be extracted and plotted as a histogram, where smaller RGB values mean lower image brightness. The histogram is then searched for regions where RGB values are low and pixels are relatively concentrated. Finally, among these regions, at least one low-luminance region is selected whose absolute luminance difference from the target image is less than the second luminance threshold.
S43: dividing the background area into the at least one first luminance region and the at least one second luminance region.
The background area of the target image can be divided into regions according to the determined first and second luminance regions, yielding at least one first luminance region and at least one second luminance region.
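Sub-steps S41 through S43 amount to thresholding each pixel's luminance deviation from the overall image luminance. A minimal sketch, with the function name and threshold values assumed for illustration:

```python
import numpy as np

def split_by_luminance(background, image_luma, t_first, t_second):
    """Split a grayscale background area into first (S41) and second (S42)
    luminance masks by the absolute deviation from the image luminance."""
    diff = np.abs(background.astype(float) - image_luma)
    first_mask = diff > t_first    # |luma - image luma| > first threshold
    second_mask = diff < t_second  # |luma - image luma| < second threshold
    return first_mask, second_mask
```

Pixels in neither mask form the normal-luminance region mentioned below.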
Step 105 can then be implemented by the following sub-steps:
S51: determining, according to the luminance value of the at least one first luminance region, at least one first blur parameter corresponding to it.
In this embodiment, corresponding blur parameters can be determined in advance for different luminance values or luminance ranges. Therefore, when performing S51, each first blur parameter corresponding to the luminance value of each first luminance region can be determined from the preset correspondence between luminance values (or luminance ranges) and blur parameters.
S52: determining, according to the luminance value of the at least one second luminance region, at least one second blur parameter corresponding to it.
Likewise, when performing S52, each second blur parameter corresponding to the luminance value of each second luminance region can be determined from the same preset correspondence.
The degree of blur for the first and second luminance regions can be lower than that of the normal-luminance region, i.e., the part of the background area outside the first and second luminance regions. Because over-bright and over-dark regions are already less sharp than normal-luminance regions, the method of this embodiment not only satisfies the requirement of blurring the background area but also, by lowering the blur degree of those regions, improves performance and reduces power consumption.
In this way, this embodiment divides the background area into at least one low-luminance region and at least one high-luminance region according to the luminance of each subregion, and blurs the high- and low-luminance regions to different degrees. Thus, in accordance with users' personalized shooting needs, different luminance regions are blurred to different degrees according to the luminance of the background area.
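The preset correspondence between luminance ranges and blur parameters used in S51/S52 can be sketched as a simple lookup table. The range bounds and parameter values below are invented for illustration; note the weaker blur at the dim and bright extremes, as discussed above:

```python
import bisect

# Hypothetical correspondence table: luminance range upper bounds -> blur parameter.
LUMA_BOUNDS = [64, 128, 192, 256]
BLUR_PARAMS = [1, 3, 5, 3]  # over-dark and over-bright ranges get weaker blur

def blur_param_for(luma):
    """Look up the preset blur parameter for a region's mean luminance (S51/S52)."""
    return BLUR_PARAMS[bisect.bisect_right(LUMA_BOUNDS[:-1], luma)]
```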
Optionally, in another embodiment, the preset parameter information may also include color information.
In that case, when performing step 104, the background area can be divided by color according to its color information, obtaining at least one target-color region and a non-target-color region.
Specifically, color segmentation can be performed on the background area according to its color information (i.e., the red, green, and blue component values), so as to determine at least one target-color region in the background area (for example green, (0, 255, 0)) as well as the region of other colors outside the at least one target color (i.e., the non-target-color region).
In one example, the degree of blur applied to scenery in the background can be determined according to user demand, e.g., according to whether the background of the target image is scenery. Judging scenery can be embodied as judging whether the background area contains target-color regions such as a green region (the color of green plants, e.g., turf) or a blue region (the color of the sky).
This example is illustrated with green as the target color.
First, the background area is filtered with a Gaussian function as preprocessing, so as to eliminate noise that would affect image quality. Then color segmentation is performed using the red, green, and blue components of the image, roughly determining the green region in the background area. Since the contour of the green region so determined is not yet clear enough, the green region can further be converted to grayscale, and the grayscale image binarized with a set threshold (that is, pixels whose gray value is greater than or equal to the threshold are binarized to 255, and pixels whose gray value is below the threshold are binarized to 0), thereby determining an accurate green region. Optionally, stray points left after binarization can be removed with a morphological opening operation, and finally holes in the extracted green region are filled, completing the detection of the green-plant region.
After at least one green region in the background area is detected by the above scheme, the remaining part of the background area outside the green region serves as the non-target-color region.
Here, this example is illustrated with pure green as the target color; in other embodiments, the target color may include any shade of green and is not limited to pure green, so the number of target-color regions may be one or more.
Then, when performing step 105, a third blur parameter corresponding to the target color can be determined as the blur parameter of the at least one target-color region, and a fourth blur parameter corresponding to the non-target color can be determined as the blur parameter of the non-target-color region.
In this embodiment, a corresponding blur parameter can be set in advance for greens of different depths or different depth ranges. Therefore, when performing step 105, the system can set, for the green region of each depth, the third blur parameter corresponding to that depth according to the preset correspondence between greens of different depths (or depth ranges) and blur parameters. For the non-target-color region, i.e., the region of colors other than any of the above shades of green, a corresponding fourth blur parameter can likewise be preset; this step then sets the fourth blur parameter for the non-target-color region in the background area according to that preset correspondence.
The relative degree of blur between the third blur parameter and the fourth blur parameter can be set flexibly according to the user's shooting demand for the image; the present invention does not limit this.
In this way, this embodiment can divide the background area into regions according to whether it contains a region of a certain target color. In practice, when the background area contains a scenery region, the scenery and non-scenery regions of the background can be blurred to different degrees, realizing different degrees of blur for differently colored regions of the background area, so that the processed image is more attractive and meets the user's personalized aesthetic requirements.
Optionally, in one embodiment, when the preset parameter information includes both luminance information and color information, the two classes of subregions determined from the two kinds of parameters may overlap. This does not affect the solution of the present invention: the blur parameter of an overlapping region is set by superposing the blur parameters. For example, if a subregion of high-luminance region 1 overlaps a subregion of green region 1, the overlapping region being region X, and the blur parameter of high-luminance region 1 is M while that of green region 1 is N, then the blur parameter of region X is M + N.
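The superposition rule for overlapping regions can be sketched as building a per-pixel blur-parameter map in which overlaps sum their parameters, as in the M + N example above; the function name and values are illustrative:

```python
import numpy as np

def combine_blur_params(shape, regions):
    """Build a per-pixel blur-parameter map; where regions overlap, their
    parameters add. regions: list of (boolean mask, blur parameter) pairs."""
    param_map = np.zeros(shape, dtype=float)
    for mask, p in regions:
        param_map[mask] += p
    return param_map
```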
Optionally, in one embodiment, after step 101, the method of this embodiment may further include:
determining the portrait area in the target image.
Any known portrait detection algorithm may be used to determine the portrait area in the target image; this example shows one human detection algorithm.
The illustration here uses human detection based on HOG (Histogram of Oriented Gradients). HOG features are a feature descriptor used for object detection in computer vision and image processing; the HOG feature extraction method proceeds as follows:
1) Grayscale conversion: the target image is converted to grayscale, treating the image as a three-dimensional function of x, y, and z (the gray level);
2) The color space of the image output by 1) is standardized (normalized) using Gamma correction, in order to adjust the contrast of the image, reduce the influence of local shadows and illumination variation, and suppress noise interference;
3) The gradient (magnitude and orientation) of each pixel in the image output by 2) is computed, mainly to capture contour information while further weakening the interference of illumination;
4) The image output by 3) is divided into small cell units (cells), e.g., 6*6 pixels per cell;
5) The gradient histogram (the counts of different gradient orientations) of each cell is computed, forming the descriptor of each cell;
6) A predetermined number of cells are grouped into a block (e.g., 3*3 cells per block); concatenating the feature descriptors of all cells in a block gives the HOG feature descriptor of that block;
7) Concatenating the HOG feature descriptors of all blocks in the target image gives the HOG feature descriptor of the target image;
8) The human region and non-human region in the target image are determined according to the HOG feature descriptor of the target image.
The so-called human region is the portrait area.
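Steps 3) through 7) above can be sketched as a minimal, NumPy-only HOG descriptor. The cell size, bin count, and the omission of block grouping and normalization from step 6) are simplifications for illustration; this is not the patent's implementation:

```python
import numpy as np

def hog_descriptor(gray, cell=6, bins=9):
    """Minimal HOG sketch: per-pixel gradients (step 3), per-cell orientation
    histograms (steps 4-5), concatenated into one descriptor (step 7)."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned gradient orientation
    h, w = gray.shape
    cells_y, cells_x = h // cell, w // cell
    desc = np.zeros((cells_y, cells_x, bins))
    for cy in range(cells_y):
        for cx in range(cells_x):
            m = mag[cy*cell:(cy+1)*cell, cx*cell:(cx+1)*cell]
            a = ang[cy*cell:(cy+1)*cell, cx*cell:(cx+1)*cell]
            idx = np.minimum((a / (180.0 / bins)).astype(int), bins - 1)
            for i in range(bins):
                desc[cy, cx, i] = m[idx == i].sum()  # magnitude-weighted bins
    return desc.ravel()
```

Step 8) would feed such descriptors to a trained classifier (e.g., a linear SVM) to label windows as human or non-human; that classification stage is outside this sketch.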
Face detection is performed on the portrait area to determine the face region and non-face region in the portrait area.
In this step, in order to further highlight the face region in the target image, face detection can also be performed on the portrait area, i.e., the human region, so as to determine the face region within the human region and the non-face region (non-face parts such as the limbs).
Any known face detection algorithm may be used in this step; the illustration here uses principal component analysis (PCA) based on eigenfaces. The algorithm constructs a principal component subspace from a set of face training samples, where the subspace represents the features belonging to faces. Then, when detecting face regions in an image, the test image is projected onto the principal component subspace to obtain a set of projection coefficients, which are compared with the projection coefficients of each known face image sample, thereby determining the face region and non-face region within the human region.
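The eigenface-style PCA detection described above can be sketched as follows. The SVD-based subspace construction, the nearest-coefficient distance test, and all names and tolerances are assumptions for illustration:

```python
import numpy as np

def build_eigenface_space(samples, k=4):
    """Build a principal component subspace from flattened face samples
    (one sample per row). Returns the mean face and the top-k components."""
    mean = samples.mean(axis=0)
    centered = samples - mean
    # SVD of the centered sample matrix yields the principal components.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(patch, mean, components):
    """Project a candidate patch onto the subspace -> projection coefficients."""
    return components @ (patch - mean)

def is_face(patch, mean, components, known_coeffs, tol):
    """Compare the patch's coefficients with those of known face samples."""
    c = project(patch, mean, components)
    return min(np.linalg.norm(c - kc) for kc in known_coeffs) < tol
```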
A target blur parameter is set for the non-face region.
The target blur parameter may be the same as or different from the blur parameters set for the subregions of the background area in step 105, handled flexibly according to actual needs.
The non-face region in the target image is then blurred according to the target blur parameter.
In this way, this embodiment can further highlight the face region in the captured image while blurring the non-face region within the portrait area and the non-portrait area (i.e., the background area). Moreover, the blurring of the background area can still apply different degrees of blur according to the different characteristics of each subregion, adjusting the degree of blur individually per region.
Optionally, in one embodiment, when it is detected that the portrait area includes multiple face regions, the method of the embodiment of the present invention may further include:
Calculating the area of each face region;
determining the maximum area among the areas of the multiple face regions;
classifying any face region whose ratio of area to the maximum area is less than a preset ratio threshold as a non-face region.
For example, suppose five face regions A to E are determined by the above PCA detection, where face region E has the largest area, and the ratios of the areas of face regions A, B, C and D to the maximum area are each less than, for example, 1/4. The method of the embodiment of the present invention then reclassifies face regions A, B, C and D as non-face regions, so that the recognized face region includes only face region E. In this way, the head portrait of the main subject in the captured image is highlighted, and the head portraits of the other persons are all blurred.
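The area-ratio filtering above can be sketched as follows; the dictionary of face areas and the 1/4 threshold mirror the example, while the function name and data layout are hypothetical.

```python
def filter_face_regions(face_areas, ratio_threshold=0.25):
    """Reclassify faces whose area / max-area ratio is below the threshold
    as non-face regions, keeping only the dominant face(s)."""
    max_area = max(face_areas.values())
    kept = {name for name, a in face_areas.items()
            if a / max_area >= ratio_threshold}
    reclassified = set(face_areas) - kept
    return kept, reclassified

# areas of the five detected face regions A..E from the example
areas = {"A": 100, "B": 80, "C": 60, "D": 90, "E": 500}
kept, dropped = filter_face_regions(areas)
```

Here A through D fall below 1/4 of E's area and are moved to the non-face (blurred) set, so only E remains a protected face region.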
Optionally, in another embodiment, after step 101, the method of the embodiment of the present invention may further include:
Determining the portrait area and the non-portrait area in the target image;
For details, refer to the above embodiment; the description is not repeated here.
Performing face detection on the portrait area to determine the face region and the non-face region in the portrait area;
For details, refer to the above embodiment; the description is not repeated here.
Then, when performing step 102, the non-portrait area and the non-face region may be determined as the background area in the target image. That is, the non-face region and the non-portrait area may jointly be classified as the background area in the target image, after which the processing of steps 103 to 105 is performed.
In this way, by also assigning the non-face region to the background area of the target image, the embodiment of the present invention can highlight the face region in the target image more distinctly and improve the shooting effect.
In addition, in practical applications, when blurring any of the above subregions or the non-face region with its respective blurring parameter, any known blurring algorithm may be used, such as the box blur algorithm (Box Blur). In this example, the blurring process is briefly illustrated by taking the Gaussian blur algorithm as an example.
Since Gaussian blur is a conventional algorithm, it is only briefly introduced here rather than described in detail. In Gaussian blur, the weights of the surrounding pixels are taken according to a Gaussian distribution; that is, each pixel's weight is determined by its distance from the current pixel. A brief introduction to the Gaussian blur algorithm follows.
The one-dimensional normal distribution is defined by the density function f(x) = (1/(σ√(2π))) · e^(−(x−μ)²/(2σ²)). It describes the distribution of a random variable: the first parameter μ is the mean of the random variable following the normal distribution, and the second parameter σ² is its variance, so the normal distribution is denoted N(μ, σ²). For a random variable following the normal distribution, the probability of taking values near μ is large, and the probability of taking values farther from μ is smaller; the smaller σ is, the more the distribution concentrates near μ, and the larger σ is, the more the distribution disperses. The density function of the normal distribution has the following characteristics: it is symmetric about μ, reaches its maximum at μ, tends to 0 at positive (and negative) infinity, and has inflection points at μ ± σ. Its shape is high in the middle and low on both sides, a bell-shaped curve lying above the x-axis. When μ = 0 and σ² = 1, it is called the standard normal distribution, denoted N(0, 1). The two constants thus mean: μ is the expectation and σ² is the variance.
The smaller σ is, the taller and sharper the bell curve; the larger σ is, the lower and flatter the bell curve. Therefore, the smaller the Gaussian radius, the weaker the blur; the larger the Gaussian radius, the greater the degree of blurring.
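The relation between σ and blur strength can be illustrated with the 1-D Gaussian weights themselves: a smaller σ concentrates the weight on the centre pixel (weak blur), while a larger σ spreads it to the neighbours (strong blur). A minimal sketch, not the patent's code; the function name and parameter values are illustrative.

```python
import math

def gaussian_weights(radius, sigma):
    """1-D Gaussian weights: a neighbour's weight depends only on its
    distance from the centre pixel; the weights are normalized to sum to 1."""
    w = [math.exp(-(d * d) / (2.0 * sigma * sigma))
         for d in range(-radius, radius + 1)]
    total = sum(w)
    return [v / total for v in w]

sharp = gaussian_weights(3, 0.8)  # small sigma: weight piles up at the centre
soft = gaussian_weights(3, 3.0)   # large sigma: weight spreads to neighbours
```

Comparing the centre weight `sharp[3]` against `soft[3]` shows directly why a larger σ (or Gaussian radius) yields a stronger blur.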
When performing the blurring processing, each segmented region to be blurred can be traversed to obtain the Nᵢ value of each region (where the Nᵢ value is the blurring parameter value of the i-th region to be blurred), and then each region is blurred using the Gaussian blur algorithm.
When blurring, each region may be extracted individually and blurred; after the blurring of each region is completed, the regions are merged back into one image and restored to their original positions in the original target image.
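The extract-blur-merge procedure just described can be sketched as follows, assuming each region is given as a boolean mask over the image and its Nᵢ value is interpreted as a Gaussian σ. This is an illustrative NumPy sketch using a simple separable Gaussian, not the patent's implementation; all names are hypothetical.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur: convolve rows, then columns, with 1-D weights."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, out)
    return out

def blur_regions(img, masks, sigmas):
    """Blur each masked region with its own sigma (the region's N_i value),
    then merge the results back at their original positions."""
    out = img.astype(float).copy()
    for mask, sigma in zip(masks, sigmas):
        blurred = gaussian_blur(img.astype(float), sigma)
        out[mask] = blurred[mask]   # restore the blurred region in place
    return out

rng = np.random.default_rng(1)
img = rng.random((32, 32))
cols = np.arange(32)
left = np.tile(cols < 16, (32, 1))        # lightly blurred region
right = ~left                             # strongly blurred region
out = blur_regions(img, [left, right], [0.5, 3.0])
```

The more strongly blurred half ends up with visibly lower pixel variance, matching the idea that a larger Nᵢ value produces a greater degree of blurring.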
When current intelligent mobile devices capture images with blurring, the degree of blurring is the same for all the captured content, which can hardly meet users' shooting demands. By means of the technical solution of the above embodiment of the present invention, the captured content can be segmented according to the parameter information (brightness, color, etc.) of the actual content in the captured image, and the segmented regions can then be blurred to different degrees according to the characteristics (i.e., the parameter information) of their content. Without adding hardware to the mobile terminal, this extends the functions of the mobile terminal, satisfies users' shooting needs, and improves the user experience.
Referring to Fig. 2, a block diagram of a mobile terminal according to an embodiment of the present invention is shown. The mobile terminal of the embodiment of the present invention can realize the details of the image blurring method in the above embodiments and achieve the same effect. The mobile terminal shown in Fig. 2 includes:
a first determining module 21 for determining the background area in the target image;
a division module 22 for dividing the background area according to the preset parameter information of the background area to obtain at least two subregions;
a second determining module 23 for determining, according to the preset parameter information of the at least two subregions, the different blurring parameters corresponding to the at least two subregions;
a first blurring module 24 for blurring the at least two subregions in the target image to different degrees according to the blurring parameters, to obtain the blurred target image.
Optionally, the preset parameter information includes luminance information;
the division module 22 includes:
a first determination submodule for determining at least one first luminance area in the background area according to the brightness value of each pixel in the background area, wherein the absolute value of the difference between the brightness value of the first luminance area and the brightness value of the target image is greater than a first luminance threshold;
a second determination submodule for determining at least one second luminance area in the background area according to the brightness value of each pixel in the background area, wherein the absolute value of the difference between the brightness value of the second luminance area and the brightness value of the target image is less than a second luminance threshold;
a first division submodule for dividing the background area into the at least one first luminance area and the at least one second luminance area;
the second determining module 23 includes:
a third determination submodule for determining, according to the brightness value of the at least one first luminance area, at least one first blurring parameter corresponding to the at least one first luminance area;
a fourth determination submodule for determining, according to the brightness value of the at least one second luminance area, at least one second blurring parameter corresponding to the at least one second luminance area.
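The luminance-based division performed by these submodules can be sketched as follows. The concrete thresholds, the use of the image mean as the "brightness value of the target image", and the brightness-to-σ mapping are all illustrative assumptions, not values specified by the patent.

```python
import numpy as np

def split_by_luminance(gray, bg_mask, t1=60, t2=20):
    """Split the background into 'first' luminance areas (brightness far
    from the image-level brightness) and 'second' areas (close to it)."""
    ref = gray.mean()                       # assumed image-level brightness value
    diff = np.abs(gray.astype(float) - ref)
    first = bg_mask & (diff > t1)           # |pixel - image brightness| > t1
    second = bg_mask & (diff < t2)          # |pixel - image brightness| < t2
    return first, second

def luminance_to_sigma(gray, mask, lo=1.0, hi=4.0):
    """One possible policy (not from the patent): map a region's mean
    brightness to a blur strength, darker regions blurred more."""
    b = gray[mask].mean() / 255.0
    return hi - (hi - lo) * b

gray = np.full((10, 10), 128.0)
gray[0:2, 0:2] = 0.0                        # a small dark background patch
bg = np.ones((10, 10), dtype=bool)          # pretend everything is background
first, second = split_by_luminance(gray, bg)
s1 = luminance_to_sigma(gray, first)
s2 = luminance_to_sigma(gray, second)
```

Each resulting area then receives its own blurring parameter, as the third and fourth determination submodules describe.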
Optionally, the preset parameter information includes color information;
the division module 22 includes:
a second division submodule for performing color division on the background area according to the color information of the background area, to obtain at least one target color area and a non-target color area;
the second determining module 23 includes:
a fifth determination submodule for determining a third blurring parameter corresponding to the target color as the blurring parameter of the at least one target color area;
a sixth determination submodule for determining a fourth blurring parameter corresponding to the non-target color as the blurring parameter of the non-target color area.
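The color-based division can be sketched similarly, assuming "target color" means pixels within a fixed RGB distance of a chosen reference color; the reference color, tolerance, and function name are hypothetical, not taken from the patent.

```python
import numpy as np

def split_by_color(rgb, bg_mask, target=(0, 128, 0), tol=60):
    """Divide the background into a target-color area (pixels close to
    `target` in RGB distance) and a non-target-color area."""
    dist = np.linalg.norm(rgb.astype(float) - np.array(target), axis=-1)
    close = dist < tol
    target_area = bg_mask & close
    non_target_area = bg_mask & ~close
    return target_area, non_target_area

rgb = np.zeros((4, 4, 3), dtype=np.uint8)
rgb[:, :2] = (0, 128, 0)     # half the background matches the target color
rgb[:, 2:] = (200, 0, 0)     # the other half does not
bg = np.ones((4, 4), dtype=bool)
tgt, non = split_by_color(rgb, bg)
```

The third blurring parameter would then be applied to `tgt` and the fourth to `non`, as the fifth and sixth determination submodules describe.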
Optionally, the device further includes:
a third determining module for determining the portrait area in the target image;
a fourth determining module for performing face detection on the portrait area to determine the face region and the non-face region in the portrait area;
a setup module for setting a target blurring parameter for the non-face region;
a second blurring module for blurring the non-face region in the target image according to the target blurring parameter.
Optionally, the device further includes:
a fifth determining module for determining the portrait area and the non-portrait area in the target image;
a sixth determining module for performing face detection on the portrait area to determine the face region and the non-face region in the portrait area;
the first determining module 21 includes:
a seventh determination submodule for determining the non-portrait area and the non-face region as the background area in the target image.
The mobile terminal provided in the embodiment of the present invention can realize each process realized by the mobile terminal in the above method embodiments; to avoid repetition, the description is not repeated here.
Fig. 3 is a hardware structure diagram of a mobile terminal for implementing each embodiment of the present invention. The mobile terminal 300 has a screen fingerprint identification function and includes, but is not limited to, components such as a radio frequency unit 301, a network module 302, an audio output unit 303, an input unit 304, a sensor 305, a display unit 306, a user input unit 307, an interface unit 308, a memory 309, a processor 310 and a power supply 311. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 3 does not limit the mobile terminal; the mobile terminal may include more or fewer components than illustrated, combine certain components, or arrange the components differently. In embodiments of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a laptop, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer and the like.
The radio frequency unit 301 is configured to obtain the target image.
The processor 310 is configured to determine the background area in the target image; divide the background area according to the preset parameter information of the background area to obtain at least two subregions; determine, according to the preset parameter information of the at least two subregions, the different blurring parameters corresponding to the at least two subregions; and blur the at least two subregions in the target image to different degrees according to the blurring parameters, to obtain the blurred target image.
In the embodiment of the present invention, the background area is divided into at least two subregions according to the preset parameter information of the background area of the target image, different blurring parameters are set according to the preset parameter information of the at least two subregions, and finally each subregion in the target image is blurred according to its blurring parameter. The present invention can blur each subregion of the background image to a different degree according to the different properties of the subregions, satisfying users' personalized image shooting demands and improving the image shooting effect.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 301 may be used to send and receive signals during messaging or a call; specifically, downlink data from a base station is received and then forwarded to the processor 310 for processing, and uplink data is sent to the base station. In general, the radio frequency unit 301 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer and the like. In addition, the radio frequency unit 301 can also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides wireless broadband Internet access to the user through the network module 302, for example helping the user to send and receive e-mails, browse web pages, access streaming video and the like.
The audio output unit 303 can convert audio data received by the radio frequency unit 301 or the network module 302, or stored in the memory 309, into an audio signal and output it as sound. Moreover, the audio output unit 303 can also provide audio output related to a specific function performed by the mobile terminal 300 (for example, a call signal reception sound, a message reception sound, etc.). The audio output unit 303 includes a loudspeaker, a buzzer, a receiver and the like.
The input unit 304 is used to receive audio or video signals. The input unit 304 may include a graphics processor (Graphics Processing Unit, GPU) 3041 and a microphone 3042. The graphics processor 3041 processes the image data of still pictures or video obtained by an image capture apparatus (such as a camera) in a video acquisition mode or an image capture mode. The processed image frames may be displayed on the display unit 306. The image frames processed by the graphics processor 3041 may be stored in the memory 309 (or other storage medium) or sent via the radio frequency unit 301 or the network module 302. The microphone 3042 can receive sound and process it into audio data. In a telephone call mode, the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 301 and output.
The mobile terminal 300 further includes at least one sensor 305, such as an optical sensor, a motion sensor and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 3061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 3061 and/or the backlight when the mobile terminal 300 is moved to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), and can detect the magnitude and direction of gravity when stationary; it can be used to identify the posture of the mobile terminal (for example, landscape/portrait switching, related games, magnetometer pose calibration) and for vibration-identification related functions (such as pedometer, tapping). The sensor 305 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor and the like, which are not described in detail here.
The display unit 306 is used to display information input by the user or information supplied to the user. The display unit 306 may include a display panel 3061, and the display panel 3061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED) display and the like.
The user input unit 307 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 307 includes a touch panel 3071 and other input devices 3072. The touch panel 3071, also referred to as a touch screen, collects the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 3071 with a finger, a stylus or any other suitable object or attachment). The touch panel 3071 may include two parts: a touch detecting apparatus and a touch controller. The touch detecting apparatus detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detecting apparatus, converts it into contact coordinates, sends them to the processor 310, and receives and executes commands sent by the processor 310. Furthermore, the touch panel 3071 may be realized in various types such as resistive, capacitive, infrared and surface acoustic wave. In addition to the touch panel 3071, the user input unit 307 may also include other input devices 3072. Specifically, the other input devices 3072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, a switch key, etc.), a trackball, a mouse and a joystick, which are not described in detail here.
Further, the touch panel 3071 may cover the display panel 3061. After the touch panel 3071 detects a touch operation on or near it, it transmits the operation to the processor 310 to determine the type of the touch event; the processor 310 then provides a corresponding visual output on the display panel 3061 according to the type of the touch event. Although in Fig. 3 the touch panel 3071 and the display panel 3061 are two independent components realizing the input and output functions of the mobile terminal, in some embodiments the touch panel 3071 and the display panel 3061 may be integrated to realize the input and output functions of the mobile terminal, which is not specifically limited here.
The interface unit 308 is the interface through which an external device connects with the mobile terminal 300. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port and the like. The interface unit 308 may be used to receive input (for example, data information, electric power, etc.) from an external device and to transfer the received input to one or more elements in the mobile terminal 300, or may be used to transmit data between the mobile terminal 300 and an external device.
The memory 309 may be used to store software programs and various data. The memory 309 may mainly include a program storage area and a data storage area; the program storage area may store an operating system, application programs required by at least one function (such as a sound playing function, an image playing function, etc.) and the like, and the data storage area may store data created according to the use of the mobile phone (such as audio data, a phone directory, etc.) and the like. In addition, the memory 309 may include a high-speed random access memory, and may also include a non-volatile memory, for example at least one disk memory, flash memory device or other volatile solid-state storage component.
The processor 310 is the control center of the mobile terminal. It connects each part of the entire mobile terminal using various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 309 and calling the data stored in the memory 309, thereby monitoring the mobile terminal as a whole. The processor 310 may include one or more processing units; preferably, the processor 310 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs and the like, and the modem processor mainly handles wireless communication. It can be understood that the above modem processor may also not be integrated into the processor 310.
The mobile terminal 300 may also include a power supply 311 (such as a battery) for supplying power to each component. Preferably, the power supply 311 may be logically connected to the processor 310 through a power management system, thereby realizing functions such as charging management, discharging management and power consumption management through the power management system.
In addition, the mobile terminal 300 includes some function modules that are not shown, which are not described in detail here.
Preferably, the embodiment of the present invention also provides a mobile terminal, including a processor 310, a memory 309, and a computer program stored in the memory 309 and runnable on the processor 310. When executed by the processor 310, the computer program realizes each process of the above image blurring method embodiments and can achieve the same technical effect; to avoid repetition, the description is not repeated here.
The embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program realizes each process of the above image blurring method embodiments and can achieve the same technical effect; to avoid repetition, the description is not repeated here. The computer-readable storage medium is, for example, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc or the like.
It should be noted that, herein, the terms "comprising", "including" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device including that element.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be realized by software plus a necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solution of the present invention, in essence or the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device or the like) to perform the methods described in the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the invention is not limited to the above specific embodiments. The above specific embodiments are only illustrative rather than restrictive; under the enlightenment of the present invention, those of ordinary skill in the art can also make many other forms without departing from the inventive concept and the scope of the claimed protection, all of which belong to the protection of the present invention.
Claims (10)
1. An image blurring method applied to a mobile terminal, characterized in that the method comprises:
determining the background area in a target image;
dividing the background area according to preset parameter information of the background area to obtain at least two subregions;
determining, according to the preset parameter information of the at least two subregions, different blurring parameters corresponding to the at least two subregions;
blurring the at least two subregions in the target image to different degrees according to the blurring parameters, to obtain the blurred target image.
2. The method according to claim 1, characterized in that the preset parameter information comprises luminance information;
the dividing the background area according to the preset parameter information of the background area to obtain at least two subregions comprises:
determining at least one first luminance area in the background area according to the brightness value of each pixel in the background area, wherein the absolute value of the difference between the brightness value of the first luminance area and the brightness value of the target image is greater than a first luminance threshold;
determining at least one second luminance area in the background area according to the brightness value of each pixel in the background area, wherein the absolute value of the difference between the brightness value of the second luminance area and the brightness value of the target image is less than a second luminance threshold;
dividing the background area into the at least one first luminance area and the at least one second luminance area;
the determining, according to the preset parameter information of the at least two subregions, the different blurring parameters corresponding to the at least two subregions comprises:
determining, according to the brightness value of the at least one first luminance area, at least one first blurring parameter corresponding to the at least one first luminance area;
determining, according to the brightness value of the at least one second luminance area, at least one second blurring parameter corresponding to the at least one second luminance area.
3. The method according to claim 1, characterized in that the preset parameter information comprises color information;
the dividing the background area according to the preset parameter information of the background area to obtain at least two subregions comprises:
performing color division on the background area according to the color information of the background area, to obtain at least one target color area and a non-target color area;
the determining, according to the preset parameter information of the at least two subregions, the different blurring parameters corresponding to the at least two subregions comprises:
determining a third blurring parameter corresponding to the target color as the blurring parameter of the at least one target color area;
determining a fourth blurring parameter corresponding to the non-target color as the blurring parameter of the non-target color area.
4. The method according to claim 1, characterized in that the method further comprises:
determining the portrait area in the target image;
performing face detection on the portrait area to determine the face region and the non-face region in the portrait area;
setting a target blurring parameter for the non-face region;
blurring the non-face region in the target image according to the target blurring parameter.
5. The method according to claim 1, characterized in that the method further comprises:
determining the portrait area and the non-portrait area in the target image;
performing face detection on the portrait area to determine the face region and the non-face region in the portrait area;
the determining the background area in the target image comprises:
determining the non-portrait area and the non-face region as the background area in the target image.
6. a kind of mobile terminal, which is characterized in that the mobile terminal includes:
First determining module, for determining the background area in the target image;
Division module for the parameter preset information of the background area, divides the background area, obtains at least two
Sub-regions;
Second determining module for the parameter preset information according at least two subregion, determines described two at least sub
The corresponding different virtualization parameters in region;
First blurring module, for being distinguished according to the virtualization parameter at least two subregion in the target image
Carry out different degrees of virtualization processing, the target image after being blurred.
7. device according to claim 6, which is characterized in that the parameter preset information includes luminance information;
The division module includes:
First determination sub-module, for being determined in the background area according to the brightness value of pixel each in the background area
At least one first luminance area, wherein, the brightness value of first luminance area and the brightness value of the target image
The absolute value of difference is more than the first luminance threshold;
Second determination sub-module, for being determined in the background area according to the brightness value of pixel each in the background area
At least one second luminance area, wherein, the brightness value of second luminance area and the brightness value of the target image
The absolute value of difference is less than the second luminance threshold;
First divides submodule, for the background area to be divided at least one first luminance area and at least one second
Luminance area;
Second determining module includes:
Third determination sub-module for the brightness value according at least one first luminance area, determines described at least one
The corresponding at least one first virtualization parameter of first luminance area;
4th determination sub-module for the brightness value according at least one second luminance area, determines described at least one
The corresponding at least one second virtualization parameter of second luminance area.
8. device according to claim 6, which is characterized in that the parameter preset information includes color information;
The division module includes:
Second divides submodule, and carrying out color to the background area for the color information according to the background area draws
Point, obtain at least one object color component region and non-targeted color area;
Second determining module includes:
5th determination sub-module, for the third virtualization parameter for corresponding to object color component to be determined as at least one object color component
The virtualization parameter in region;
6th determination sub-module is determined as the non-targeted color area for that will correspond to the 4th of non-targeted color the virtualization parameter
Virtualization parameter.
9. The device according to claim 6, wherein the device further comprises:
a third determining module, configured to determine a portrait area in the target image;
a fourth determining module, configured to perform face detection on the portrait area and determine a face region and a non-face region in the portrait area;
a setting module, configured to set a target blurring parameter for the non-face region;
a second blurring module, configured to perform blurring processing on the non-face region in the target image according to the target blurring parameter.
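Claim 9's face-protection step can be sketched as below: face detection yields a face mask inside the portrait area, and only the portrait's non-face part receives the target blurring parameter. The boolean masks and the tag-based "blur" are illustrative stand-ins for real detection and filtering.

```python
# Hypothetical sketch of claim 9: blur the portrait's non-face region only.
def blur_non_face(pixels, portrait_mask, face_mask, target_parameter):
    """Tag each pixel as ('blur', parameter, value) or ('keep', 0, value)."""
    out = []
    for value, in_portrait, in_face in zip(pixels, portrait_mask, face_mask):
        if in_portrait and not in_face:
            out.append(('blur', target_parameter, value))  # e.g. hair, shoulders
        else:
            out.append(('keep', 0, value))                 # face, or outside portrait
    return out

tags = blur_non_face([10, 20, 30],
                     portrait_mask=[True, True, False],
                     face_mask=[False, True, False],
                     target_parameter=5)
# -> [('blur', 5, 10), ('keep', 0, 20), ('keep', 0, 30)]
```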
10. The device according to claim 6, wherein the device further comprises:
a fifth determining module, configured to determine a portrait area and a non-portrait area in the target image;
a sixth determining module, configured to perform face detection on the portrait area and determine a face region and a non-face region in the portrait area;
the first determining module comprises:
a seventh determination sub-module, configured to determine the non-portrait area and the non-face region as the background area in the target image.
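Claim 10 determines the background as the non-portrait area together with the portrait's non-face region; since every pixel is either inside or outside the portrait, that union reduces to "every pixel that is not a detected face pixel". A minimal sketch with boolean masks (the mask representation is an illustrative assumption):

```python
# Hypothetical sketch of claim 10's seventh determination sub-module.
def background_mask(portrait_mask, face_mask):
    """True where a pixel belongs to the background area.

    (not portrait) or (portrait and not face) simplifies to
    not (portrait and face), i.e. everything except face pixels."""
    return [not (p and f) for p, f in zip(portrait_mask, face_mask)]

mask = background_mask(portrait_mask=[False, True, True],
                       face_mask=[False, True, False])
# -> [True, False, True]: only the face pixel is excluded from the background
```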
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810143060.7A CN108234882B (en) | 2018-02-11 | 2018-02-11 | Image blurring method and mobile terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108234882A true CN108234882A (en) | 2018-06-29 |
CN108234882B CN108234882B (en) | 2020-09-29 |
Family
ID=62661620
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810143060.7A Active CN108234882B (en) | 2018-02-11 | 2018-02-11 | Image blurring method and mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108234882B (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109447942A (en) * | 2018-09-14 | 2019-03-08 | 平安科技(深圳)有限公司 | Image blur determination method and apparatus, computer device, and storage medium |
CN109727192A (en) * | 2018-12-28 | 2019-05-07 | 北京旷视科技有限公司 | Image processing method and device |
CN110991298A (en) * | 2019-11-26 | 2020-04-10 | 腾讯科技(深圳)有限公司 | Image processing method and device, storage medium and electronic device |
WO2020078027A1 (en) * | 2018-10-15 | 2020-04-23 | 华为技术有限公司 | Image processing method, apparatus and device |
CN111275630A (en) * | 2020-01-07 | 2020-06-12 | 中国人民解放军陆军军医大学第二附属医院 | Cell image adjusting method and device and electron microscope |
CN113313626A (en) * | 2021-05-20 | 2021-08-27 | Oppo广东移动通信有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN113673474A (en) * | 2021-08-31 | 2021-11-19 | Oppo广东移动通信有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
WO2022241728A1 (en) * | 2021-05-20 | 2022-11-24 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method, electronic device and non–transitory computer–readable media |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11893668B2 (en) | 2021-03-31 | 2024-02-06 | Leica Camera Ag | Imaging system and method for generating a final digital image via applying a profile to image information |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5524162A (en) * | 1991-07-22 | 1996-06-04 | Levien; Raphael L. | Method and apparatus for adaptive sharpening of images |
CN106060423A (en) * | 2016-06-02 | 2016-10-26 | 广东欧珀移动通信有限公司 | Bokeh photograph generation method and device, and mobile terminal |
CN106101544A (en) * | 2016-06-30 | 2016-11-09 | 维沃移动通信有限公司 | Image processing method and mobile terminal |
CN106993112A (en) * | 2017-03-09 | 2017-07-28 | 广东欧珀移动通信有限公司 | Depth-of-field-based background blurring method and device, and electronic apparatus |
CN107038681A (en) * | 2017-05-31 | 2017-08-11 | 广东欧珀移动通信有限公司 | Image blurring method and device, computer-readable storage medium, and computer equipment |
CN107197146A (en) * | 2017-05-31 | 2017-09-22 | 广东欧珀移动通信有限公司 | Image processing method and related product |
CN107481186A (en) * | 2017-08-24 | 2017-12-15 | 广东欧珀移动通信有限公司 | Image processing method and device, computer-readable storage medium, and computer equipment |
Worldwide applications
2018-02-11 | CN | CN201810143060.7A | granted as CN108234882B | Active
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109447942B (en) * | 2018-09-14 | 2024-04-23 | 平安科技(深圳)有限公司 | Image blur determination method and apparatus, computer device, and storage medium |
CN109447942A (en) * | 2018-09-14 | 2019-03-08 | 平安科技(深圳)有限公司 | Image blur determination method and apparatus, computer device, and storage medium |
CN112840376A (en) * | 2018-10-15 | 2021-05-25 | 华为技术有限公司 | Image processing method, device and equipment |
US12026863B2 (en) | 2018-10-15 | 2024-07-02 | Huawei Technologies Co., Ltd. | Image processing method and apparatus, and device |
WO2020078027A1 (en) * | 2018-10-15 | 2020-04-23 | 华为技术有限公司 | Image processing method, apparatus and device |
CN109727192B (en) * | 2018-12-28 | 2023-06-27 | 北京旷视科技有限公司 | Image processing method and device |
CN109727192A (en) * | 2018-12-28 | 2019-05-07 | 北京旷视科技有限公司 | Image processing method and device |
CN110991298A (en) * | 2019-11-26 | 2020-04-10 | 腾讯科技(深圳)有限公司 | Image processing method and device, storage medium and electronic device |
CN111275630A (en) * | 2020-01-07 | 2020-06-12 | 中国人民解放军陆军军医大学第二附属医院 | Cell image adjusting method and device and electron microscope |
CN113313626A (en) * | 2021-05-20 | 2021-08-27 | Oppo广东移动通信有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
WO2022241728A1 (en) * | 2021-05-20 | 2022-11-24 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method, electronic device and non–transitory computer–readable media |
CN113673474A (en) * | 2021-08-31 | 2021-11-19 | Oppo广东移动通信有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN113673474B (en) * | 2021-08-31 | 2024-01-12 | Oppo广东移动通信有限公司 | Image processing method, device, electronic equipment and computer readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN108234882B (en) | 2020-09-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108234882A (en) | Image blurring method and mobile terminal | |
CN107580184B (en) | Image capture method and mobile terminal |
CN104135609B (en) | Auxiliary photographing method, apparatus and terminal |
CN110177221A (en) | Image capture method and device for high-dynamic-range images |
CN107707827A (en) | High-dynamic-range image capture method and mobile terminal |
CN108989678A (en) | Image processing method and mobile terminal |
CN107592471A (en) | High-dynamic-range image capture method and mobile terminal |
CN108111754A (en) | Method for determining an image acquisition mode, and mobile terminal |
CN110930329B (en) | Star image processing method and device |
CN109544486A (en) | Image processing method and terminal device |
CN107566749A (en) | Image capture method and mobile terminal |
CN107895352A (en) | Image processing method and mobile terminal |
CN107977652A (en) | Method for extracting screen display content, and mobile terminal |
CN108307110A (en) | Image blurring method and mobile terminal |
CN107886321A (en) | Payment method and mobile terminal |
CN108462826A (en) | Auxiliary photographing method and mobile terminal |
CN108040209A (en) | Image capture method and mobile terminal |
CN108513067A (en) | Shooting control method and mobile terminal |
CN109144361A (en) | Image processing method and terminal device |
CN109104578B (en) | Image processing method and mobile terminal |
CN108347558A (en) | Image optimization method, apparatus and mobile terminal |
CN108259746A (en) | Image color detection method and mobile terminal |
CN108053371A (en) | Image processing method, terminal, and computer-readable storage medium |
CN108038431A (en) | Image processing method, device, computer equipment, and computer-readable storage medium |
CN108174081B (en) | Image capture method and mobile terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||

Effective date of registration: 2022-08-15
Address after: Floors 1-3, No. 57 Boxia Road, Pilot Free Trade Zone, Pudong New Area, Shanghai 200120
Patentee after: Aiku software technology (Shanghai) Co.,Ltd.
Address before: No. 283 BBK Avenue, Chang'an Town, Dongguan, Guangdong 523860
Patentee before: VIVO MOBILE COMMUNICATION Co.,Ltd.