CN107424198B - Image processing method, image processing device, mobile terminal and computer readable storage medium - Google Patents


Info

Publication number
CN107424198B
CN107424198B (application CN201710624525.6A)
Authority
CN
China
Prior art keywords
image
processed
region
color
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710624525.6A
Other languages
Chinese (zh)
Other versions
CN107424198A (en)
Inventor
袁全 (Yuan Quan)
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710624525.6A priority Critical patent/CN107424198B/en
Publication of CN107424198A publication Critical patent/CN107424198A/en
Application granted granted Critical
Publication of CN107424198B publication Critical patent/CN107424198B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/90: Determination of colour characteristics
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10024: Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application relates to an image processing method, an image processing device, a mobile terminal and a computer readable storage medium. The method comprises the following steps: extracting color features of an image to be processed; performing region division on the image to be processed according to the color features; determining the dominant hue of each divided region, and calculating the defogging parameters of the corresponding region according to the dominant hue of each region; and performing defogging processing on the corresponding regions according to the defogging parameters of the regions. The image processing method, the image processing device, the mobile terminal and the computer readable storage medium can effectively remove the fog in the image and improve its definition, contrast and saturation, while making the colors of the defogged image more natural and real, improving the overall defogging effect.

Description

Image processing method, image processing device, mobile terminal and computer readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method and apparatus, a mobile terminal, and a computer-readable storage medium.
Background
In foggy weather, imaging equipment is affected by particles suspended in the air, so that features of the captured images, such as color and texture, are severely weakened: the definition of the images is often low, and the overall tone of the images tends toward gray. Images captured in foggy weather therefore generally suffer from low contrast, low saturation and hue shift caused by atmospheric particles.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, a mobile terminal and a computer readable storage medium, which can effectively remove fog in an image, make the image clearer and improve the effects of contrast, saturation and the like of the image.
An image processing method comprising:
extracting color features of an image to be processed;
performing region division on the image to be processed according to the color characteristics;
determining the main tone of each divided region, and calculating the defogging parameters of the corresponding region according to the main tone of each region;
and carrying out defogging treatment on the corresponding regions according to the defogging parameters of the regions.
In one embodiment, the extracting color features of the image to be processed includes:
converting an image to be processed from a first color space to a second color space;
and determining the characteristic value of each component of each pixel point of the image to be processed in the second color space.
In one embodiment, the performing region division on the image to be processed according to the color features includes:
acquiring a segmentation component range preset in the second color space;
extracting pixel points with characteristic values conforming to the segmentation component range, and segmenting the image to be processed into a plurality of objects;
and merging the objects according to the similarity between the objects to obtain the divided areas.
In one embodiment, the calculating the defogging parameters of the corresponding regions according to the main tones of the respective regions includes:
acquiring an atmospheric light value of the image to be processed;
calculating the original transmittance of the image to be processed according to the atmospheric light value;
acquiring adjusting coefficients respectively corresponding to the main tones of the regions;
and calculating the regional transmissivity of the corresponding region according to the original transmissivity and the adjusting coefficient.
In one embodiment, the acquiring the atmospheric light value of the image to be processed includes:
obtaining a dark channel image of the image to be processed;
sorting all pixel points of the dark channel image according to brightness, and extracting a preset proportion of pixel points in descending order of brightness;
determining the brightness value corresponding to each extracted pixel point in the image to be processed;
calculating the average brightness value of the image to be processed according to the brightness value corresponding to each extracted pixel point;
and if the average brightness value is smaller than a preset threshold value, determining the atmospheric light value as the average brightness value, otherwise, determining the atmospheric light value as the preset threshold value.
An image processing apparatus comprising:
the extraction module is used for extracting the color characteristics of the image to be processed;
the dividing module is used for dividing the area of the image to be processed according to the color characteristics;
the computing module is used for determining the main tone of each divided region and computing the defogging parameters of the corresponding region according to the main tone of each region;
and the defogging module is used for defogging the corresponding regions according to the defogging parameters of the regions.
In one embodiment, the extraction module includes:
the conversion unit is used for converting the image to be processed from a first color space to a second color space;
the characteristic value determining unit is used for determining the characteristic value of each component of each pixel point of the image to be processed in the second color space;
the dividing module comprises:
a component range acquisition unit configured to acquire a divided component range preset in the second color space;
the segmentation unit is used for extracting pixel points with characteristic values conforming to the segmentation component range and segmenting the image to be processed into a plurality of objects;
and the merging unit is used for merging the objects according to the similarity among the objects to obtain the divided areas.
In one embodiment, the calculation module includes:
the atmospheric light value acquisition unit is used for acquiring the atmospheric light value of the image to be processed;
the original transmittance calculation unit is used for calculating the original transmittance of the image to be processed according to the atmospheric light value;
an adjustment coefficient acquisition unit configured to acquire adjustment coefficients respectively corresponding to the dominant hues of the respective regions;
and the region transmittance calculation unit is used for calculating the region transmittance of the corresponding region according to the original transmittance and the adjusting coefficient.
In one embodiment, the atmospheric light value obtaining unit includes:
the calculating subunit is used for calculating a dark channel image of the image to be processed;
the sorting subunit is used for sorting all the pixel points of the dark channel image according to brightness and extracting a preset proportion of pixel points in descending order of brightness;
a brightness value determining subunit, configured to determine, in the image to be processed, a brightness value corresponding to each extracted pixel point;
the average brightness value calculating subunit is used for calculating the average brightness value of the image to be processed according to the brightness value corresponding to each extracted pixel point;
and the atmospheric light value determining subunit is configured to determine, if the average brightness value is smaller than a preset threshold, the atmospheric light value as the average brightness value, and otherwise, determine the atmospheric light value as the preset threshold.
A mobile terminal comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method as described above when executing the program.
A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method as set forth above.
According to the image processing method, the image processing device, the mobile terminal and the computer readable storage medium, the image to be processed is divided into regions according to its color features, the dominant hue of each divided region is determined, and the defogging parameters of the corresponding region are calculated according to the dominant hue of each region. Because the image is divided by color and defogging of different degrees is performed according to the dominant hue of each divided region, the fog in the image can be effectively removed and the definition, contrast and saturation of the image improved, while the colors of the defogged image become more natural and real and the defogging effect is improved.
Drawings
FIG. 1 is a block diagram of a mobile terminal in one embodiment;
FIG. 2 is a flow diagram illustrating a method for image processing according to one embodiment;
FIG. 3 is a schematic flow chart illustrating the process of dividing the region of the image to be processed according to the color feature in one embodiment;
FIG. 4 is a schematic flow chart illustrating the calculation of defogging parameters of the divided regions according to one embodiment;
FIG. 5 is a schematic diagram illustrating a process of obtaining an atmospheric light value of an image to be processed according to an embodiment;
FIG. 6 is a block diagram of an image processing apparatus in one embodiment;
FIG. 7 is a block diagram of a compute module in one embodiment;
FIG. 8 is a schematic diagram of an image processing circuit in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first client may be referred to as a second client, and similarly, a second client may be referred to as a first client, without departing from the scope of the present application. Both the first client and the second client are clients, but they are not the same client.
Fig. 1 is a block diagram of a mobile terminal in one embodiment. As shown in fig. 1, the mobile terminal includes a processor, a non-volatile storage medium, an internal memory, a network interface, a display screen, and an input device, which are connected through a system bus. The non-volatile storage medium of the mobile terminal stores an operating system and computer-executable instructions, and the computer-executable instructions are executed by the processor to implement the image processing method provided in the embodiment of the application. The processor provides computing and control capabilities to support the operation of the entire mobile terminal. The internal memory in the mobile terminal provides an environment for the execution of the computer-executable instructions in the non-volatile storage medium. The network interface is used for network communication with a server. The display screen of the mobile terminal can be a liquid crystal display screen or an electronic ink display screen, and the input device can be a touch layer covering the display screen, a key, a trackball or a touch pad arranged on the housing of the mobile terminal, or an external keyboard, touch pad or mouse. The mobile terminal can be a mobile phone, a tablet computer, a personal digital assistant or a wearable device. Those skilled in the art will appreciate that the architecture shown in fig. 1 is only a block diagram of a portion of the architecture associated with the present application and does not constitute a limitation on the mobile terminal to which the present application applies; a particular mobile terminal may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
As shown in fig. 2, in one embodiment, there is provided an image processing method including the steps of:
step 210, extracting color features of the image to be processed.
In the present embodiment, the image to be processed may be a color image containing fog. The image to be processed may be described by a color space, which may also be referred to as a color model, used to describe colors in a generally accepted manner under certain standards; commonly used color spaces include RGB (red, green, blue), HSI (hue, saturation, intensity), HSV (hue, saturation, value), YCrCb (a luma and chroma representation used for video signals), and the like. The mobile terminal can extract the color features of the image to be processed. The color features are generally pixel-level features, and the extracted color features can include the feature values of all pixel points of the image to be processed in each component of a color space, the proportion of pixel points of different colors among all pixel points, and the like.
And step 220, performing region division on the image to be processed according to the color characteristics.
The mobile terminal can divide the image to be processed into regions according to the extracted color features, such that the color features of the pixel points within each divided region satisfy a certain similarity and different regions do not intersect. One or more division conditions may be set in advance, each defining a condition that the color features need to satisfy, such as a component range of the color space. The mobile terminal can extract pixel points whose color features meet a division condition, and assign pixel points that meet the same division condition and are connected to the same region.
And step 230, determining the main tone of each divided region, and calculating the defogging parameters of the corresponding regions according to the main tone of each region.
The different division conditions may correspond to different colors respectively, and in the present embodiment, the divided colors may include three colors, i.e., red, green, and blue. After the image to be processed is divided into regions according to the color features, the dominant hue of each region can be determined respectively. The dominant hue refers to the main color of the region and may correspond to the division condition used to divide the region; further, a histogram of the region can be drawn, the color distribution and proportion of the pixels in the region determined from the histogram, and the color with the largest proportion taken as the dominant hue of the region. In the present embodiment, the dominant hues may include red, green, blue, and others. Fog affects the three RGB bands differently: for fog of the same concentration, the influence increases from the red band to the green band to the blue band. When the dominant hue of a region is red, the values of its pixel points in the R channel are higher and the values in the G and B channels are lower; when the dominant hue is green, the values in the G channel are higher and the values in the R and B channels are lower; when the dominant hue is blue, the values in the B channel are higher and the values in the R and G channels are lower. If the entire image to be processed is defogged to the same degree, and the image includes large green or blue areas, such as green trees, grasslands, seawater or blue sky, the fog in the green and blue areas may not be completely removed.
Therefore, for regions of the image to be processed with different dominant hues, the defogging parameters of the corresponding regions can be calculated respectively according to the dominant hue of each region, and defogging of different degrees can be performed accordingly.
The mobile terminal can perform defogging processing on each region in the image to be processed according to a defogging algorithm. Defogging algorithms include those based on image enhancement and those based on image restoration: image-enhancement-based algorithms include the Retinex-based algorithm, histogram-equalization-based algorithms and the like, while image-restoration-based algorithms include those based on the atmospheric scattering model. In this embodiment, the mobile terminal may defog the image to be processed by using the dark channel prior algorithm, which belongs to the defogging algorithms based on image restoration. The dark channel prior algorithm adopts the atmospheric scattering model to describe a fog-containing image, and the atmospheric scattering model can be shown as formula (1):
I(x)=J(x)t(x)+A(1-t(x)) (1);
wherein I(x) represents the fog-containing image to be defogged, J(x) represents the fog-free image obtained after defogging, x represents the spatial position of a pixel point in the image, t(x) represents the transmittance, and A represents the atmospheric light value. The defogging parameters of the respective regions may include the atmospheric light value, the region transmittance, and the like, wherein the region transmittance of a region whose dominant hue is red is greater than that of a region whose dominant hue is green, the region transmittance of a region whose dominant hue is green is greater than that of a region whose dominant hue is blue, and different region transmittances indicate different defogging intensities.
And 240, performing defogging treatment on the corresponding regions according to the defogging parameters of the regions.
The mobile terminal can substitute the defogging parameters of each region, such as the atmospheric light value and the region transmittance, into formula (1), perform defogging processing on each region to obtain each fog-free region, and synthesize the defogged regions to obtain the final fog-free image. Since the region transmittances satisfy red > green > blue, the defogging intensity of a region whose dominant hue is red is smaller than that of a region whose dominant hue is green, which in turn is smaller than that of a region whose dominant hue is blue. Performing defogging of different degrees on regions of different colors allows the fog in the green and blue regions of the image to be removed completely, improving the defogging effect.
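As a minimal illustration of this step (a sketch, not the patent's exact implementation; the clamp `t_min` is an added safeguard, not specified in the patent), inverting the atmospheric scattering model of formula (1) for one region can be written as:

```python
import numpy as np

def defog_region(region, A, t, t_min=0.1):
    """Invert formula (1), I = J*t + A*(1 - t), for one region:
    J = (I - A) / t + A.
    `region` is a float array with values in [0, 1], `A` the atmospheric
    light value and `t` the region transmittance. Clamping t from below
    (a common safeguard, not part of the patent text) avoids amplifying
    noise where the transmittance is very small."""
    t = max(t, t_min)
    return np.clip((region - A) / t + A, 0.0, 1.0)
```

Each region would be processed with its own `t`, and the defogged regions composited back into the full image.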
According to the image processing method, the image to be processed is divided into regions according to its color features, the dominant hue of each divided region is determined, and the defogging parameters of the corresponding region are calculated according to the dominant hue of each region. Because the image is divided by color and defogging of different degrees is performed according to the dominant hue of each divided region, the fog in the image can be effectively removed and the definition, contrast and saturation of the image improved, while the colors of the defogged image become more natural and real and the defogging effect is improved.
In one embodiment, step 210 extracts color features of the image to be processed, including (a) and (b):
(a) and converting the image to be processed from the first color space to the second color space.
The mobile terminal may convert the image to be processed from the first color space to the second color space and extract color features in the second color space. In this embodiment, the first color space may be RGB, the second color space may be HSV, the color image is generally described by RGB, and in the RGB color space, the color of the pixel is described by the common effect of three components of RGB, the correlation of the three components is very high, and the threshold values of the three components of R, G, B cannot be directly set to divide the color region. If color feature extraction is directly performed in an RGB color space, an image to be processed needs to be converted into a grayscale image, and image region division is performed in the grayscale image, but a large amount of color features are lost at the same time.
In the HSV color space, the components may include H (Hue), S (Saturation), and V (Value). H is measured as an angle in the range 0° to 360°, counted counterclockwise from red: red is 0°, green is 120°, and blue is 240°. S represents how close the color is to a pure spectral color; the larger the proportion of the spectral color, the higher the saturation, and highly saturated colors generally appear deep and vivid. V represents the brightness of the color: for a light-source color, the value is related to the brightness of the luminous body; for an object color, the value is related to the transmittance or reflectance of the object. V typically ranges from 0% (black) to 100% (white).
The mobile terminal can convert the image to be processed from RGB to HSV according to the RGB-to-HSV conversion formulas, which can be shown as formula (2):

H = 0°, if max = min;
H = 60° × (G - B) / (max - min) mod 360°, if max = R;
H = 60° × (B - R) / (max - min) + 120°, if max = G;
H = 60° × (R - G) / (max - min) + 240°, if max = B;

S = 0 if max = 0, otherwise S = (max - min) / max;

V = max (2);

wherein max represents the maximum of the R, G, B values of the pixel point and min represents the minimum of the R, G, B values of the pixel point (with R, G, B normalized to [0, 1]).
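A per-pixel sketch of the standard RGB-to-HSV conversion that formula (2) denotes, assuming R, G, B already normalized to [0, 1]:

```python
def rgb_to_hsv(r, g, b):
    """Convert one pixel from RGB (each in [0, 1]) to HSV per formula (2):
    H in degrees [0, 360), S in [0, 1], V = max(R, G, B)."""
    mx, mn = max(r, g, b), min(r, g, b)
    d = mx - mn
    if d == 0:
        h = 0.0                                  # achromatic: hue undefined, use 0
    elif mx == r:
        h = (60.0 * (g - b) / d) % 360.0
    elif mx == g:
        h = 60.0 * (b - r) / d + 120.0
    else:
        h = 60.0 * (r - g) / d + 240.0
    s = 0.0 if mx == 0 else d / mx
    return h, s, mx
```

For example, a pure-green pixel maps to H = 120°, matching the hue angles given above.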
(b) And determining the characteristic value of each component of each pixel point of the image to be processed in the second color space.
After the mobile terminal converts the image to be processed from RGB to HSV, characteristic values of H, S, V three components of each pixel point in HSV color space can be determined.
As shown in fig. 3, in one embodiment, the step 220 of performing region division on the image to be processed according to the color features includes the following steps:
step 302, obtaining a segmentation component range preset in a second color space.
The segmentation component ranges of red, green and blue in HSV can be preset, and further, the segmentation component ranges of red, green and blue in H component can be set, and the image to be processed is divided into regions according to the segmentation component ranges of H component, for example, the H component range corresponding to green can be 60 ° to 180 °. It will be appreciated that the image to be processed may also be converted from RGB to other color spaces, and is not limited to HSV as described above.
And 304, extracting pixel points with characteristic values conforming to the range of the segmentation components, and segmenting the image to be processed into a plurality of objects.
The mobile terminal can divide the region of the image to be processed according to one or more preset segmentation component ranges. For each segmentation component range, the mobile terminal can judge whether the characteristic values of the three components of the pixel point at H, S, V are in accordance with the segmentation component range, extract the pixel points of which the characteristic values are in accordance with the same segmentation component range, and segment the image to be processed into a plurality of objects.
And step 306, merging the objects according to the similarity among the objects to obtain the divided areas.
For each segmented object, the mobile terminal can calculate the similarity between the object and its adjacent objects: it can calculate the mean and variance of the hue, brightness, saturation and the like of the pixel points in the object, calculate the same statistics for each adjacent object, and determine the similarity by comparing the two sets of means and variances. When the similarity between two objects is greater than a preset value, the object and the adjacent object can be merged, the merged pair treated as a new object, and the similarity between the new object and its neighbors calculated again. By merging objects according to their similarity to obtain the divided regions, the mobile terminal can ensure that the regions do not overlap and that all pixel points within a region satisfy a certain similarity.
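A simplified sketch of this merging step, comparing only the hue statistics of each object (the patent also compares saturation and brightness; the thresholds and the greedy one-pass strategy here are illustrative assumptions):

```python
import numpy as np

def similar(obj_a, obj_b, mean_tol=10.0, var_tol=50.0):
    """Compare two segmented objects (here arrays of per-pixel H values)
    by the difference of their means and variances. Thresholds are
    illustrative, not taken from the patent."""
    return (abs(obj_a.mean() - obj_b.mean()) < mean_tol
            and abs(obj_a.var() - obj_b.var()) < var_tol)

def merge_adjacent(objects, adjacency, **kw):
    """Greedily merge adjacent similar objects. `adjacency` maps an
    object index to the set of its neighbours; returns a list of merged
    pixel arrays. A sketch only: a full implementation would update the
    adjacency relation as objects merge and iterate to a fixed point."""
    merged, used = [], set()
    for i, obj in enumerate(objects):
        if i in used:
            continue
        group = obj
        for j in adjacency.get(i, ()):
            if j not in used and similar(group, objects[j], **kw):
                group = np.concatenate([group, objects[j]])
                used.add(j)
        used.add(i)
        merged.append(group)
    return merged
```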
In this embodiment, the image to be processed can be converted from RGB to HSV, color features extracted, and the image divided into regions according to those features. This facilitates defogging each region to a different degree according to its color, so that the fog in the image can be removed effectively, the colors of the defogged image appear more natural and real, and the defogging effect is improved.
As shown in fig. 4, in one embodiment, the step 230 of determining the dominant hue of each divided region and calculating the defogging parameter of the corresponding region according to the dominant hue of each region includes the following steps:
step 402, obtaining an atmospheric light value of an image to be processed.
In the dark channel prior algorithm, for a fog-free image, at least one of the three RGB color channels of some pixels in most local regions has a very low value, close to zero. Thus, for any image, its dark channel image can be defined as shown in formula (3):
J^dark(x) = min_{y∈Ω(x)} ( min_{c∈{r,g,b}} J^c(y) ) (3);

wherein J^dark(x) represents the dark channel image, J^c(y) represents the value of color channel c of pixel y, and Ω(x) represents a window centered on pixel x. According to formula (1) and formula (3), the calculation formula of the transmittance can be derived as shown in formula (4):

t(x) = 1 - min_{y∈Ω(x)} ( min_c ( I^c(y) / A^c ) ) (4);
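Formula (3) can be sketched with plain NumPy as a channel-wise minimum followed by a local minimum filter over Ω(x). The window size of 15 is a typical choice from the dark-channel literature, not fixed by the patent:

```python
import numpy as np

def dark_channel(image, window=15):
    """Dark channel of formula (3): for each pixel, the minimum over the
    three colour channels, then the minimum over a local window Omega(x).
    `image` is an H x W x 3 float array."""
    per_pixel_min = image.min(axis=2)            # min over c in {r, g, b}
    h, w = per_pixel_min.shape
    pad = window // 2
    # Pad with +inf so border minima come only from real pixels.
    padded = np.pad(per_pixel_min, pad, constant_values=np.inf)
    out = np.empty_like(per_pixel_min)
    for i in range(h):                           # min over Omega(x)
        for j in range(w):
            out[i, j] = padded[i:i + window, j:j + window].min()
    return out
```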
In real life, even on a clear day there are some particles in the air, so distant objects still appear slightly hazy, and this residual haze gives the viewer a sense of depth of field. Therefore, a factor between 0 and 1 can be introduced to adjust the obtained transmittance, and formula (4) becomes formula (5):

t(x) = 1 - ω · min_{y∈Ω(x)} ( min_c ( I^c(y) / A^c ) ) (5);
in this embodiment, ω represents a factor for adjusting the transmittance, and ω may have a value of 0.95 or other values, but is not limited thereto, and smaller ω represents smaller defogging degree, and larger ω represents larger defogging degree.
After the mobile terminal divides the regions according to the color features of the image to be processed, the dark channel image of the image to be processed can be obtained according to formula (3), and the atmospheric light value of the image to be processed can then be obtained and used as the atmospheric light value of all the regions during defogging. In one embodiment, the mobile terminal may sort the pixel points of the dark channel image according to brightness, extract the top 0.1% of pixel points in descending order of brightness, determine the brightness values at the corresponding positions in the image to be processed, and use the brightness value of the brightest such pixel as the atmospheric light value.
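The averaged-and-capped estimate of the atmospheric light value described in the claims can be sketched as follows (the cap of 0.95 and the use of the mean of R, G, B as the per-pixel brightness are assumptions; the patent leaves the preset threshold unspecified):

```python
import numpy as np

def atmospheric_light(image, dark, top_frac=0.001, cap=0.95):
    """Estimate A: take the brightest `top_frac` (0.1%) of dark-channel
    pixels, average the corresponding brightness values in the image,
    and cap the result at a preset threshold. `cap=0.95` is an assumed
    threshold value."""
    luma = image.mean(axis=2)                  # brightness per pixel (assumed metric)
    n = max(1, int(top_frac * dark.size))
    idx = np.argsort(dark, axis=None)[-n:]     # brightest dark-channel pixels
    avg = luma.flat[idx].mean()
    return min(avg, cap)
```

The claims' rule, "A equals the average brightness if it is below the threshold, otherwise A equals the threshold", is exactly the final `min`.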
Step 404, calculating the original transmittance of the image to be processed according to the atmospheric light value.
After the mobile terminal obtains the atmospheric light value, the original transmittance of the image to be processed can be calculated according to the formula (5), and the original transmittance is the transmittance of the whole image to be processed.
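As a sketch, equation (5) can be evaluated once the atmospheric light value A is known. The per-pixel formulation below, the window size, and the function name are my own illustrative choices under the assumption that A is a single scalar value, as in this embodiment.

```python
import numpy as np

def original_transmittance(image, A, omega=0.95, patch=3):
    """Equation (5): t(x) = 1 - omega * min over the window and channels of I^c(y) / A."""
    norm_min = (image / A).min(axis=2)           # inner minimum over the three channels
    pad = patch // 2
    padded = np.pad(norm_min, pad, mode="edge")
    h, w = norm_min.shape
    t = np.empty_like(norm_min)
    for i in range(h):                           # outer minimum over the window Omega(x)
        for j in range(w):
            t[i, j] = 1.0 - omega * padded[i:i + patch, j:j + patch].min()
    return t
```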
In step 406, adjustment coefficients corresponding to the dominant hues of the respective regions are obtained.
Adjustment coefficients corresponding to the different dominant hues may be introduced, and the corresponding region transmittances calculated based on the adjustment coefficient for each region's dominant hue. In one embodiment, in the image to be processed, the adjustment coefficient for a region with a red dominant hue is greater than that for a region with a green dominant hue, which in turn is greater than that for a region with a blue dominant hue. In one embodiment, the adjustment coefficient Wr for a red dominant hue can be 1, and the adjustment coefficient Wg for a green dominant hue and the adjustment coefficient Wb for a blue dominant hue can be calculated according to equations (6) and (7):
Wg = (0.9 + 0.1*t)^2 (6);
Wb = (0.7 + 0.3*t)^2 (7);
where t represents the original transmittance of the image to be processed.
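Equations (6) and (7), with Wr fixed at 1 as in this embodiment, reduce to a few lines. The function name is mine; t may be a scalar or a per-pixel array.

```python
def adjustment_coefficients(t):
    """Equations (6) and (7); the red coefficient Wr is 1 in this embodiment."""
    w_r = 1.0
    w_g = (0.9 + 0.1 * t) ** 2   # equation (6)
    w_b = (0.7 + 0.3 * t) ** 2   # equation (7)
    return w_r, w_g, w_b
```

For any t in [0, 1] this yields Wr >= Wg >= Wb, matching the red > green > blue ordering stated above.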
Step 408, calculating the regional transmittance of the corresponding region according to the original transmittance and the adjustment coefficient.
The mobile terminal may multiply the adjustment coefficient corresponding to a region's dominant hue by the original transmittance to obtain the region transmittance of that region. For regions whose dominant hue is red, green, and blue respectively, the region transmittances are given by equation (8):
t(r) = Wr * t
t(g) = Wg * t
t(b) = Wb * t (8)
For other color regions whose dominant hue is not red, green, or blue, the original transmittance of the image to be processed may be used directly for the defogging process. It is to be understood that the adjustment coefficients corresponding to different dominant hues are not limited to the calculation methods of equations (6) and (7), and the region transmittance is not limited to the calculation method of equation (8); other calculation methods may be used.
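Equation (8) and the fallback for other hues can be combined in one helper. This is a sketch under the embodiment's formulas; the hue labels and the function name are illustrative.

```python
def region_transmittance(t, dominant_hue):
    """Equation (8): scale the original transmittance t by the region's adjustment coefficient.
    Regions whose dominant hue is not red, green, or blue keep the original transmittance."""
    coefficients = {
        "red": 1.0,                      # Wr
        "green": (0.9 + 0.1 * t) ** 2,   # Wg, equation (6)
        "blue": (0.7 + 0.3 * t) ** 2,    # Wb, equation (7)
    }
    return coefficients.get(dominant_hue, 1.0) * t
```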
In the embodiment, the adjustment coefficients respectively corresponding to different dominant hues are respectively introduced, the regional transmittance of each region is calculated, and the defogging treatment of different degrees is performed on each region divided according to the color characteristics, so that the fog in the image can be effectively removed, the color of the defogged image can be more natural and real, and the defogging effect is improved.
As shown in FIG. 5, in one embodiment, step 402 of obtaining the atmospheric light value of the image to be processed comprises the following steps:
step 502, obtaining a dark channel image of the image to be processed.
And step 504, sequencing all the pixel points of the dark channel image according to the brightness, and extracting the pixel points in a preset proportion from large to small according to the brightness.
The mobile terminal can obtain the brightness of each pixel point in the dark channel image of the image to be processed and sort the pixel points by brightness. After sorting, it extracts a preset proportion of the pixel points in descending order of brightness, i.e. the pixel points with the highest brightness in the dark channel image. The preset proportion can be set according to actual requirements, for example the first 0.1% or 0.2%.
Step 506, determining the brightness value corresponding to each extracted pixel point in the image to be processed.
After the mobile terminal extracts the pixel points with the preset proportion from large to small according to the brightness in the dark channel image, the brightness value corresponding to each extracted pixel point can be determined from the position corresponding to the extracted pixel point in the image to be processed.
And step 508, calculating the average brightness value of the image to be processed according to the brightness value corresponding to each extracted pixel point.
The mobile terminal can average the brightness values corresponding to the extracted pixel points to obtain the average brightness value of the image to be processed, and compare this average with a preset threshold. If the average brightness value is smaller than the preset threshold, the atmospheric light value of the image to be processed is determined to be the average brightness value; otherwise, the atmospheric light value is determined to be the preset threshold. When the atmospheric light value is too high, the image obtained after the defogging process may show color cast and color spots, so the preset threshold can be set and used as the maximum atmospheric light value in the defogging process.
Step 510, if the average brightness value is smaller than the preset threshold, determining the atmospheric light value as the average brightness value, otherwise, determining the atmospheric light value as the preset threshold.
In this embodiment, the maximum atmospheric light value is set to prevent color cast and color spots after the defogging process, so that the defogged fogless image is more real and natural.
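Steps 502 through 510 can be sketched as follows. The 0.1% fraction and the threshold value 0.86 are illustrative assumptions (the patent leaves the preset threshold unspecified), and the function name is mine.

```python
import numpy as np

def clamped_atmospheric_light(image, dark, top_frac=0.001, threshold=0.86):
    """Steps 504-510: average brightness of the brightest dark-channel pixels, capped at a threshold."""
    n = max(1, int(dark.size * top_frac))
    idx = np.argsort(dark.ravel())[-n:]             # step 504: top fraction by dark-channel brightness
    brightness = image.reshape(-1, 3).mean(axis=1)  # step 506: brightness in the image to be processed
    avg = float(brightness[idx].mean())             # step 508: average brightness value
    return avg if avg < threshold else threshold    # step 510: clamp to the preset threshold
```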
As shown in fig. 6, in one embodiment, an image processing apparatus 600 is provided, which includes an extraction module 610, a division module 620, a calculation module 630, and a defogging module 640.
And the extracting module 610 is used for extracting the color features of the image to be processed.
And a dividing module 620, configured to perform region division on the image to be processed according to the color features.
A calculating module 630, configured to determine the dominant hue of each divided region, and calculate a defogging parameter of the corresponding region according to the dominant hue of each region.
And the defogging module 640 is used for performing defogging processing on the corresponding regions according to the defogging parameters of the regions.
According to the image processing device, the image to be processed is subjected to region division according to the color characteristics of the image to be processed, the dominant hue of each divided region is determined, the defogging parameters of the corresponding region are calculated according to the dominant hue of each region, the image to be processed is divided according to the colors, and the defogging processing of different degrees is carried out according to the dominant hue of each divided region, so that the fog in the image can be effectively removed, the effects such as the definition, the contrast and the saturation of the image are improved, meanwhile, the color of the defogged image is more natural and real, and the defogging effect is improved.
In one embodiment, the extraction module 610 includes a conversion unit and a feature value determination unit.
The conversion unit is used for converting the image to be processed from the first color space to the second color space.
And the characteristic value determining unit is used for determining the characteristic value of each component of each pixel point of the image to be processed in the second color space.
In one embodiment, the partitioning module 620 includes a component range obtaining unit, a dividing unit, and a merging unit.
A component range acquisition unit configured to acquire a divided component range preset in the second color space.
And the segmentation unit is used for extracting pixel points of which the characteristic values accord with the segmentation component range and segmenting the image to be processed into a plurality of objects.
And the merging unit is used for merging the objects according to the similarity among the objects to obtain the divided areas.
In the embodiment, the image to be processed can be converted into HSV from RGB, the color features are extracted, the image to be processed is divided into regions according to the color features, defogging processing of the regions in different degrees is facilitated according to the colors of the regions, fog in the image can be effectively removed, the color of the defogged image can be more natural and real, and the defogging effect is improved.
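As a toy illustration of the HSV-based division, each pixel can be mapped to a coarse hue bucket and a region's dominant hue taken as the most common bucket. The bucket boundaries and function names are my own simplifications of the segment-and-merge procedure described above.

```python
import colorsys

def hue_bucket(r, g, b):
    """Map an RGB pixel (components in [0, 1]) to a coarse dominant-hue bucket via HSV."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    degrees = h * 360.0
    if degrees < 60.0 or degrees >= 300.0:
        return "red"
    if degrees < 180.0:
        return "green"
    return "blue"

def dominant_hue(pixels):
    """Return the most common hue bucket among a region's pixels."""
    counts = {}
    for r, g, b in pixels:
        key = hue_bucket(r, g, b)
        counts[key] = counts.get(key, 0) + 1
    return max(counts, key=counts.get)
```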
As shown in fig. 7, in one embodiment, the calculation module 630 includes an atmospheric light value obtaining unit 632, a raw transmittance calculating unit 634, an adjustment coefficient obtaining unit 636, and a region transmittance calculating unit 638.
An atmospheric light value obtaining unit 632 is configured to obtain an atmospheric light value of the image to be processed.
And an original transmittance calculation unit 634 for calculating an original transmittance of the image to be processed according to the atmospheric light value.
An adjustment coefficient acquisition unit 636, configured to acquire the adjustment coefficients corresponding to the dominant hues of the respective regions.
And a region transmittance calculation unit 638 for calculating the region transmittance of the corresponding region according to the original transmittance and the adjustment coefficient.
In the embodiment, the adjustment coefficients respectively corresponding to different dominant hues are respectively introduced, the regional transmittance of each region is calculated, and the defogging treatment of different degrees is performed on each region divided according to the color characteristics, so that the fog in the image can be effectively removed, the color of the defogged image can be more natural and real, and the defogging effect is improved.
In one embodiment, the atmospheric light value obtaining unit 632 includes an obtaining subunit, a sorting subunit, a luminance value determining subunit, an average luminance value calculating subunit, and an atmospheric light value determining subunit.
And the obtaining subunit is used for obtaining the dark channel image of the image to be processed.
And the sequencing subunit is used for sequencing all the pixel points of the dark channel image according to the brightness and extracting the pixel points in a preset proportion from large to small according to the brightness.
And the brightness value determining subunit is used for determining the brightness value corresponding to each extracted pixel point in the image to be processed.
And the average brightness value calculation operator unit is used for calculating the average brightness value of the image to be processed according to the brightness value corresponding to each extracted pixel point.
And the atmospheric light value determining subunit is configured to determine, if the average brightness value is smaller than a preset threshold, that the atmospheric light value is the average brightness value, and otherwise, that the atmospheric light value is the preset threshold.
In this embodiment, the maximum atmospheric light value is set to prevent color cast and color spots after the defogging process, so that the defogged fogless image is more real and natural.
The embodiment of the application also provides the mobile terminal. The mobile terminal includes an Image Processing circuit, which may be implemented using hardware and/or software components, and may include various Processing units defining an ISP (Image Signal Processing) pipeline. FIG. 8 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 8, for convenience of explanation, only aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in fig. 8, the image processing circuit includes an ISP processor 840 and control logic 850. Image data captured by imaging device 810 is first processed by ISP processor 840, and ISP processor 840 analyzes the image data to capture image statistics that may be used to determine and/or control one or more parameters of imaging device 810. Imaging device 810 may include a camera having one or more lenses 812 and an image sensor 814. Image sensor 814 may include an array of color filters (e.g., Bayer filters), and image sensor 814 may acquire light intensity and wavelength information captured with each imaging pixel of image sensor 814 and provide a set of raw image data that may be processed by ISP processor 840. The sensor 820 may provide raw image data to the ISP processor 840 based on the sensor 820 interface type. The sensor 820 interface may utilize a SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the above.
The ISP processor 840 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and ISP processor 840 may perform one or more image processing operations on the raw image data, collecting statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
ISP processor 840 may also receive pixel data from image memory 830. For example, raw pixel data is sent from the sensor 820 interface to the image memory 830, and the raw pixel data in the image memory 830 is then provided to the ISP processor 840 for processing. The image Memory 830 may be a portion of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the sensor 820 interface or from the image memory 830, the ISP processor 840 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to image memory 830 for additional processing before being displayed. ISP processor 840 may also receive processed data from image memory 830 for image data processing in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display 880 for viewing by a user and/or further processing by a graphics engine or GPU (Graphics Processing Unit). Further, the output of ISP processor 840 may also be sent to image memory 830, and display 880 may read image data from image memory 830. In one embodiment, image memory 830 may be configured to implement one or more frame buffers. Further, the output of the ISP processor 840 may be transmitted to an encoder/decoder 870 for encoding/decoding the image data. The encoded image data may be saved and decompressed prior to display on a display 880 device.
The step of the ISP processor 840 processing the image data includes: the image data is subjected to VFE (Video Front End) Processing and CPP (Camera Post Processing). The VFE processing of the image data may include modifying the contrast or brightness of the image data, modifying digitally recorded lighting status data, performing compensation processing (e.g., white balance, automatic gain control, gamma correction, etc.) on the image data, performing filter processing on the image data, etc. CPP processing of image data may include scaling an image, providing a preview frame and a record frame to each path. Among other things, the CPP may use different codecs to process the preview and record frames.
The image data processed by the ISP processor 840 may be sent to the defogging module 860 for defogging of the image before being displayed. The defogging module 860 may extract color features of an image to be processed, perform region division on the image to be processed according to the color features, determine a dominant hue of each divided region, calculate a defogging parameter of a corresponding region according to the dominant hue of each region, and perform defogging processing and the like on the corresponding region according to the defogging parameter of each region. The defogging module 860 may be a Central Processing Unit (CPU), a GPU, a coprocessor, or the like. After the defogging module 860 defogges the image data, the defogged image data may be transmitted to the encoder/decoder 870 to encode/decode the image data. The encoded image data may be saved and decompressed prior to display on a display 880 device. It is understood that the image data processed by the defogging module 860 may be directly transmitted to the display 880 for display without passing through the encoder/decoder 870. The image data processed by ISP processor 840 may also be processed by encoder/decoder 870 and then processed by defogging module 860. The encoder/decoder can be a CPU, a GPU, a coprocessor or the like in the mobile terminal.
The statistics determined by ISP processor 840 may be sent to control logic 850 unit. For example, the statistical data may include image sensor 814 statistical information such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 812 shading correction, and the like. Control logic 850 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of imaging device 810 and ISP processor 840 based on the received statistical data. For example, the control parameters may include sensor 820 control parameters (e.g., gain, integration time for exposure control), camera flash control parameters, lens 812 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 812 shading correction parameters.
In the present embodiment, the image processing method described above can be realized using the image processing technique in fig. 8.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the above-mentioned image processing method.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), or the like.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (11)

1. An image processing method, comprising:
extracting color features of an image to be processed;
performing region division on the image to be processed according to the color characteristics;
determining the main tone of each divided region, and calculating the defogging parameters of the corresponding region according to the main tone of each region; the main tone refers to the main color of the area, and the main tone comprises red, green and blue;
and carrying out defogging treatment on the corresponding regions according to the defogging parameters of the regions.
2. The method according to claim 1, wherein the extracting color features of the image to be processed comprises:
converting an image to be processed from a first color space to a second color space;
and determining the characteristic value of each component of each pixel point of the image to be processed in the second color space.
3. The method according to claim 2, wherein the area division of the image to be processed according to the color features comprises:
acquiring a segmentation component range preset in the second color space;
extracting pixel points with characteristic values conforming to the segmentation component range, and segmenting the image to be processed into a plurality of objects;
and merging the objects according to the similarity between the objects to obtain the divided areas.
4. The method according to any one of claims 1 to 3, wherein said calculating the defogging parameters of the corresponding regions according to the main hues of the respective regions comprises:
acquiring an atmospheric light value of the image to be processed;
calculating the original transmittance of the image to be processed according to the atmospheric light value;
acquiring adjusting coefficients respectively corresponding to the main tones of the regions;
and calculating the regional transmissivity of the corresponding region according to the original transmissivity and the adjusting coefficient.
5. The method of claim 4, wherein the obtaining atmospheric light values for the image to be processed comprises:
obtaining a dark channel image of the image to be processed;
sequencing all pixel points of the dark channel image according to the brightness, and extracting pixel points in a preset proportion from large to small according to the brightness;
determining the brightness value corresponding to each extracted pixel point in the image to be processed;
calculating the average brightness value of the image to be processed according to the brightness value corresponding to each extracted pixel point;
and if the average brightness value is smaller than a preset threshold value, determining the atmospheric light value as the average brightness value, otherwise, determining the atmospheric light value as the preset threshold value.
6. An image processing apparatus characterized by comprising:
the extraction module is used for extracting the color characteristics of the image to be processed;
the dividing module is used for dividing the area of the image to be processed according to the color characteristics;
the computing module is used for determining the main tone of each divided region and computing the defogging parameters of the corresponding region according to the main tone of each region; the main tone refers to the main color of the area, and the main tone comprises red, green and blue;
and the defogging module is used for defogging the corresponding regions according to the defogging parameters of the regions.
7. The apparatus of claim 6, wherein the extraction module comprises:
the conversion unit is used for converting the image to be processed from a first color space to a second color space;
the characteristic value determining unit is used for determining the characteristic value of each component of each pixel point of the image to be processed in the second color space;
the dividing module comprises:
a component range acquisition unit configured to acquire a divided component range preset in the second color space;
the segmentation unit is used for extracting pixel points with characteristic values conforming to the segmentation component range and segmenting the image to be processed into a plurality of objects;
and the merging unit is used for merging the objects according to the similarity among the objects to obtain the divided areas.
8. The apparatus of claim 6 or 7, wherein the computing module comprises:
the atmospheric light value acquisition unit is used for acquiring the atmospheric light value of the image to be processed;
the original transmittance calculation unit is used for calculating the original transmittance of the image to be processed according to the atmospheric light value;
an adjustment coefficient acquisition unit configured to acquire adjustment coefficients respectively corresponding to the main tones of the respective regions;
and the region transmittance calculation unit is used for calculating the region transmittance of the corresponding region according to the original transmittance and the adjusting coefficient.
9. The apparatus of claim 8, wherein the atmospheric light value obtaining unit comprises:
the calculating subunit is used for calculating a dark channel image of the image to be processed;
the sequencing subunit is used for sequencing all the pixel points of the dark channel image according to the brightness and extracting the pixel points in a preset proportion from large to small according to the brightness;
a brightness value determining subunit, configured to determine, in the image to be processed, a brightness value corresponding to each extracted pixel point;
the average brightness value calculation operator unit is used for calculating the average brightness value of the image to be processed according to the brightness value corresponding to each extracted pixel point;
and the atmospheric light value determining subunit is configured to determine, if the average brightness value is smaller than a preset threshold, the atmospheric light value as the average brightness value, and otherwise, determine the atmospheric light value as the preset threshold.
10. A mobile terminal comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor when executing the program implementing the method according to any of claims 1 to 5.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 5.
CN201710624525.6A 2017-07-27 2017-07-27 Image processing method, image processing device, mobile terminal and computer readable storage medium Active CN107424198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710624525.6A CN107424198B (en) 2017-07-27 2017-07-27 Image processing method, image processing device, mobile terminal and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN107424198A CN107424198A (en) 2017-12-01
CN107424198B true CN107424198B (en) 2020-03-27

Family

ID=60430468

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710624525.6A Active CN107424198B (en) 2017-07-27 2017-07-27 Image processing method, image processing device, mobile terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN107424198B (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108022239B (en) * 2017-12-05 2020-03-10 郑州轻工业学院 Bubbling wine browning process detection device and method based on machine vision
CN107958470A (en) * 2017-12-18 2018-04-24 维沃移动通信有限公司 A kind of color correcting method, mobile terminal
CN108320265B (en) * 2018-01-31 2021-09-21 努比亚技术有限公司 Image processing method, terminal and computer readable storage medium
CN109472839B (en) * 2018-10-26 2023-10-13 Oppo广东移动通信有限公司 Image generation method and device, computer equipment and computer storage medium
CN109584301B (en) * 2018-11-28 2023-05-23 常州大学 Method for obtaining fruit area with non-uniform color
CN109903294B (en) * 2019-01-25 2020-05-29 北京三快在线科技有限公司 Image processing method and device, electronic equipment and readable storage medium
CN109934781B (en) * 2019-02-27 2020-10-23 合刃科技(深圳)有限公司 Image processing method, image processing device, terminal equipment and computer readable storage medium
CN110175967B (en) * 2019-06-05 2020-07-17 邓诗雨 Image defogging processing method, system, computer device and storage medium
CN110930326A (en) * 2019-11-15 2020-03-27 浙江大华技术股份有限公司 Image and video defogging method and related device
CN111062993B (en) * 2019-12-12 2023-09-26 广东智媒云图科技股份有限公司 Color combined painting image processing method, device, equipment and storage medium
CN111539891A (en) * 2020-04-27 2020-08-14 高小翎 Wave band self-adaptive demisting optimization processing method for single remote sensing image
CN112184581B (en) * 2020-09-27 2023-09-05 腾讯科技(深圳)有限公司 Image processing method, device, computer equipment and medium
JP2022057784A (en) * 2020-09-30 2022-04-11 キヤノン株式会社 Imaging apparatus, imaging system, and imaging method
CN112419431A (en) * 2020-10-26 2021-02-26 杭州君辰机器人有限公司 Method and system for detecting product color
CN112950453B (en) * 2021-01-25 2023-10-20 北京达佳互联信息技术有限公司 Image processing method and image processing apparatus
WO2022188014A1 (en) * 2021-03-09 2022-09-15 Oppo广东移动通信有限公司 Chroma adjustment method, chroma adjustment apparatus, electronic device and readable storage medium
CN113379631B (en) * 2021-06-11 2024-05-17 百果园技术(新加坡)有限公司 Image defogging method and device
CN114359305A (en) * 2021-12-31 2022-04-15 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN115439494B (en) * 2022-11-08 2023-01-31 山东大拇指喷雾设备有限公司 Spray image processing method for quality inspection of sprayer
CN116787022B (en) * 2023-08-29 2023-10-24 深圳市鑫典金光电科技有限公司 Heat dissipation copper bottom plate welding quality detection method and system based on multi-source data

Citations (6)

Publication number Priority date Publication date Assignee Title
CN104463816A (en) * 2014-12-02 2015-03-25 苏州大学 Image processing method and device
CN104715456A (en) * 2015-03-17 2015-06-17 北京环境特性研究所 Image defogging method
CN104794688A (en) * 2015-03-12 2015-07-22 北京航空航天大学 Single image defogging method and device based on depth information separation sky region
CN105631823A (en) * 2015-12-28 2016-06-01 西安电子科技大学 Dark channel sky area defogging method based on threshold segmentation optimization
CN106570839A (en) * 2016-11-04 2017-04-19 天津大学 Red Channel prior based underwater image sharpening method
CN106886985A (en) * 2017-04-25 2017-06-23 哈尔滨工业大学 A kind of self adaptation enhancement method of low-illumination image for reducing colour cast

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
JP5157753B2 (en) * 2008-08-27 2013-03-06 カシオ計算機株式会社 Image processing apparatus, image processing method, and image processing program


Non-Patent Citations (1)

Title
Underwater image restoration based on transmission optimization and color temperature adjustment; Ni Jinyan et al.; Laser & Optoelectronics Progress; 2016-10-31; 011001-1 to 011001-8 *

Also Published As

Publication number Publication date
CN107424198A (en) 2017-12-01

Similar Documents

Publication Publication Date Title
CN107424198B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN107451969B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN107317967B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN107730446B (en) Image processing method, image processing device, computer equipment and computer readable storage medium
CN107563976B (en) Beauty parameter obtaining method and device, readable storage medium and computer equipment
CN107509031B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN108419028B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN107862659B (en) Image processing method, image processing device, computer equipment and computer readable storage medium
CN107993209B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN107395991B (en) Image synthesis method, image synthesis device, computer-readable storage medium and computer equipment
CN107481186B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN107800966A (en) Image processing method, apparatus, computer-readable storage medium and electronic device
CN107911625A (en) Light metering method, device, readable storage medium and computer equipment
CN107194900A (en) Image processing method, device, computer-readable storage medium and mobile terminal
CN109242794B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
Pei et al. Effective image haze removal using dark channel prior and post-processing
CN107277299B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN107945106B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN107341782B (en) Image processing method, image processing device, computer equipment and computer readable storage medium
CN107194901B (en) Image processing method, image processing device, computer equipment and computer readable storage medium
CN107392870B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN107454318B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN107424134B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN107277369B (en) Image processing method, device, computer readable storage medium and computer equipment
CN109191398B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18, Wusha Beach Road, Chang'an Town, Dongguan, Guangdong Province 523860

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., Ltd.

Address before: No. 18, Wusha Beach Road, Chang'an Town, Dongguan, Guangdong Province 523860

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., Ltd.

GR01 Patent grant