CN115118947A - Image processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115118947A
Authority
CN
China
Prior art keywords
color, rectangle, weight, determining, value corresponding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110310827.2A
Other languages
Chinese (zh)
Other versions
CN115118947B (en)
Inventor
林威丞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202110310827.2A priority Critical patent/CN115118947B/en
Publication of CN115118947A publication Critical patent/CN115118947A/en
Application granted granted Critical
Publication of CN115118947B publication Critical patent/CN115118947B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to an image processing method, an image processing device, an electronic device and a storage medium, wherein the method comprises the following steps: preprocessing an original image acquired by an image acquisition device to generate a corresponding color statistical graph, wherein the color statistical graph comprises a plurality of color statistical points and color values corresponding to the color statistical points; determining the positions of a plurality of weight rectangles corresponding to each face frame and the reference weight corresponding to each weight rectangle in the original image when the original image contains a face; determining a first weight value corresponding to each color statistical point according to the position relation between each color statistical point and each weight rectangle and the reference weight corresponding to each weight rectangle; determining a white balance gain value corresponding to the original image according to the first weight value corresponding to each color statistical point and the color value corresponding to each color statistical point; and carrying out white balance processing on the original image based on the white balance gain value. This can improve the accuracy of the white balance processing.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
White balance is a very important concept in the field of image capture: it addresses a series of problems of color restoration and color tone processing, and is one of the important indexes for evaluating color. White balance itself is an abstract concept, and the process of adjusting it is called white balance adjustment.
In the related art, an Automatic White Balance (AWB) algorithm is generally used for white balance adjustment. In actual use, however, the AWB's determination of the light source color temperature is easily affected by the skin tone of a human face, so the AWB processing may produce a large error and have low accuracy, which in turn may affect the user experience.
Disclosure of Invention
The present disclosure is directed to solving, at least in part, one of the technical problems in the related art.
An embodiment of a first aspect of the present disclosure provides an image processing method, including:
preprocessing an original image acquired by an image acquisition device to generate a corresponding color statistical graph, wherein the color statistical graph comprises a plurality of color statistical points and color values corresponding to the color statistical points;
determining the positions of a plurality of weight rectangles corresponding to each face frame contained in the original image and the reference weight corresponding to each weight rectangle under the condition that the original image contains a face, wherein the central point and the aspect ratio of each weight rectangle are respectively the same as those of the corresponding face frame;
determining a first weight value corresponding to each color statistical point according to the position relation between each color statistical point and each weight rectangle and the reference weight corresponding to each weight rectangle;
determining a white balance gain value corresponding to the original image according to a first weight value corresponding to each color statistical point and a color value corresponding to each color statistical point;
and carrying out white balance processing on the original image based on the white balance gain value.
An embodiment of a second aspect of the present disclosure provides an apparatus for processing an image, including:
the generating module is used for preprocessing an original image acquired by the image acquisition device to generate a corresponding color statistical graph, wherein the color statistical graph comprises a plurality of color statistical points and color values corresponding to the color statistical points;
a first determining module, configured to determine, when the original image includes a face, positions of a plurality of weight rectangles corresponding to each face frame included in the original image and a reference weight corresponding to each weight rectangle, where a center point and an aspect ratio of each weight rectangle are the same as a center point and an aspect ratio of the corresponding face frame, respectively;
the second determining module is used for determining a first weight value corresponding to each color statistical point according to the position relation between each color statistical point and each weight rectangle and the reference weight corresponding to each weight rectangle;
a third determining module, configured to determine a white balance gain value corresponding to the original image according to the first weight value corresponding to each color statistical point and the color value corresponding to each color statistical point;
and the processing module is used for carrying out white balance processing on the original image based on the white balance gain value.
An embodiment of a third aspect of the present disclosure provides an electronic device, including: a processor; a memory for storing executable instructions of the processor; the processor is configured to call and execute the executable instructions stored in the memory to implement the image processing method proposed in the embodiment of the first aspect of the present disclosure.
A fourth aspect of the present disclosure provides a non-transitory computer-readable storage medium, where instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method set forth in the first aspect of the present disclosure.
A fifth aspect of the present disclosure provides a computer program product, which when executed by a processor of an electronic device, enables the electronic device to perform the method for processing an image provided in the first aspect of the present disclosure.
The image processing method, the image processing device, the electronic device and the storage medium provided by the disclosure are characterized in that an original image acquired by an image acquisition device is preprocessed to generate a corresponding color statistical map, under the condition that the original image contains a face, the positions of a plurality of weight rectangles corresponding to each face frame contained in the original image and a reference weight corresponding to each weight rectangle are determined, then a first weight value corresponding to each color statistical point can be determined according to the position relation between each color statistical point and each weight rectangle and the reference weight corresponding to each weight rectangle, then a white balance gain value corresponding to the original image is determined according to the first weight value corresponding to each color statistical point and a color value corresponding to each color statistical point, and therefore white balance processing is performed on the original image based on the white balance gain value. Therefore, when the original image containing the human face is subjected to white balance processing, the first weight value corresponding to each color statistical point is determined according to the positions of the different color statistical points in the human face frame, and then the white balance gain corresponding to the original image is determined, so that the influence of the human face skin color on the white balance gain can be effectively reduced, the accuracy of the white balance processing is improved, and good experience can be given to a user.
Additional aspects and advantages of the disclosure will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
FIG. 1 is a flow chart of a method of processing an image according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of a method of processing an image according to an embodiment of the present disclosure;
FIG. 3 is a flow chart of a method of processing an image according to an embodiment of the present disclosure;
FIG. 4 is a flow chart of a method of processing an image according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the accompanying drawings are illustrative and intended to explain the present disclosure, and should not be construed as limiting the present disclosure.
Generally, ambient light sources can be roughly classified into high color temperature light sources, medium color temperature light sources and low color temperature light sources; when images are acquired under these different types of light sources, the resulting image may be influenced by the light source type. For example, an image acquired under a high color temperature light source is biased toward blue, an image acquired under a medium color temperature light source is biased toward white, and an image acquired under a low color temperature light source is biased toward yellow.
The three optical primary colors are Red (R), Green (G) and Blue (B). According to the type of light source present when the photo is taken, the AWB can generate appropriate RGB gain values and adjust the three RGB primaries of the photo so that a white object appears white in the photo, as it does to the human eye.
Generally speaking, human face skin color is biased toward yellow, and when a large area of face skin appears, the AWB's calculation of the light source color temperature is easily influenced by the face skin tone. For example, in a high color temperature light source environment, the AWB may mistake the face skin color for light from a medium or low color temperature light source reflected off a white object, leading it to conclude that it is in a medium or low color temperature environment and thus generate an excessive B gain value, which causes the image color to be bluish.
The present disclosure provides an image processing method, which aims to reduce the influence of human face skin color on the white balance gain and improve the accuracy of white balance processing, thereby giving users a good experience.
A method, an apparatus, an electronic device, and a storage medium for processing an image according to an embodiment of the present disclosure are described below with reference to the drawings.
The image processing method according to the embodiment of the disclosure may be executed by an image processing apparatus provided in the embodiment of the disclosure, and the apparatus may be configured in an electronic device.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure.
As shown in fig. 1, the image processing method may include the steps of:
step 101, preprocessing an original image acquired by an image acquisition device to generate a corresponding color statistical graph, wherein the color statistical graph comprises a plurality of color statistical points and color values corresponding to each color statistical point.
The image capturing device may be any device having a photographing function, such as a camera, a video camera, a scanner, a mobile phone, a tablet computer, or any device that captures an image into a computer through a video capture card, and the like, which is not limited in this disclosure.
It can be understood that pixel values at adjacent positions in the original image may differ only slightly. If every pixel point were processed identically, pixel points at adjacent positions would yield nearly identical results; therefore, to reduce data processing and improve efficiency, the original image can be uniformly divided into m × n small blocks of equal area. The values of m and n may be preset values, or may be adjusted according to the pixel size of the original image, which is not limited in this disclosure.
In addition, the original image collected by the image collecting device is preprocessed, so that m × n small blocks can be converted into a color space, and corresponding color statistical points and color values of the small blocks in the color statistical chart are obtained.
It is understood that there may be various color spaces, such as YUV color space, RgBg color space, or color difference component space, such as YCbCr, YPbPr, etc., and accordingly, there may be various representations of the color histogram, which is not limited by the disclosure.
For convenience of explanation, the RgBg color histogram is used as an example in this disclosure.
For example, the original image may be divided into m × n small blocks, and for each small block the average value Ravg of the R component, the average value Gavg of the G component, and the average value Bavg of the B component are determined. The m × n small blocks are then converted into the color space, yielding m × n color statistic points in the RgBg color statistical map and the color value (Rg, Bg) corresponding to each color statistic point, where Rg and Bg satisfy: Rg = Ravg/Gavg, Bg = Bavg/Gavg.
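As an illustration only (not part of the disclosed embodiments), the block-averaging preprocessing described above can be sketched in Python; the function name and the equal-block indexing scheme are assumptions made for this example:

```python
import numpy as np

def color_statistics(image, m, n):
    """Divide an RGB image (H x W x 3) into m x n equal-area blocks and
    return an (m, n, 2) array of (Rg, Bg) color values per block, where
    Rg = Ravg/Gavg and Bg = Bavg/Gavg (assumes Gavg is nonzero)."""
    h, w, _ = image.shape
    img = image.astype(np.float64)
    stats = np.zeros((m, n, 2))
    for i in range(m):
        for j in range(n):
            block = img[i * h // m:(i + 1) * h // m,
                        j * w // n:(j + 1) * w // n]
            # Per-block channel averages Ravg, Gavg, Bavg
            r_avg, g_avg, b_avg = block.reshape(-1, 3).mean(axis=0)
            stats[i, j] = (r_avg / g_avg, b_avg / g_avg)  # (Rg, Bg)
    return stats
```

For a uniform gray image every block yields Rg = Bg = 1, which matches the intuition that a neutral scene needs no channel correction.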
It should be noted that the color histogram, the color statistic points, the color values, and the like are only illustrative, and are not intended to limit the determination, representation, and the like of the color values in the embodiment of the present disclosure.
And 102, determining the positions of a plurality of weight rectangles corresponding to each face frame and the reference weight corresponding to each weight rectangle in the original image under the condition that the original image contains the face.
The center point and the length-width ratio of each weight rectangle are respectively the same as those of the corresponding face frame.
In addition, a single original image may contain multiple faces. When the original image is detected, the number of weight rectangles corresponding to the face frame of each face, and the reference weights corresponding to those weight rectangles, should take the same values across all faces.
For example, when an original image is detected, two face frames appear simultaneously; the 1st face frame corresponds to 3 weight rectangles whose reference weights are 1, 5 and 15, and the 2nd face frame also corresponds to 3 weight rectangles whose reference weights are 1, 5 and 15. It is understood that the aspect ratio of each weight rectangle is the same as that of the corresponding face frame, so the sizes of the weight rectangles may differ according to the sizes of the face frames, which is not limited in this disclosure.
In addition, there are various ways to determine the positions of the weighted rectangles corresponding to the face frame included in the original image.
Optionally, the ambient brightness corresponding to the original image may be determined first, then the number of weight rectangles corresponding to the original image and the weight value corresponding to each weight rectangle are determined according to the ambient brightness, and then the positions of the multiple weight rectangles corresponding to each face frame in the original image are determined based on the number of weight rectangles.
For example, the relationship between the ambient brightness and the number of the weight rectangles and the weight value corresponding to each weight rectangle can be set in advance, so that the number of the weight rectangles corresponding to the original image and the weight value corresponding to each weight rectangle can be determined relatively quickly according to the ambient brightness corresponding to the original image.
For example, according to the ambient brightness, it is determined that the number of weight rectangles corresponding to the original image is 4, and the weight values corresponding to the weight rectangles are 20, 15, 10 and 1. Then, for each face frame in the original image, the center points of its 4 weight rectangles can be made to coincide with the center point of the face frame, and the positions of the 4 weight rectangles in the original image can then be determined according to the size of each weight rectangle.
Or the illuminance corresponding to the original image may be determined first, then the number of the weight rectangles corresponding to each face frame and the weight value corresponding to each weight rectangle are determined according to the illuminance, and then each weight rectangle corresponding to each face frame is overlapped with the central point of the face frame, so that the position of each weight rectangle corresponding to each face frame in the original image can be determined according to the size of each weight rectangle.
It should be noted that the above examples are only examples, and cannot be taken as limitations on the number, size, position, and the like of the weighting rectangles corresponding to each face frame in the embodiments of the present disclosure.
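The geometric constraint above (each weight rectangle shares the face frame's center point and aspect ratio) can be sketched as follows; the `scales` parameter is a hypothetical stand-in for however an embodiment derives each rectangle's size, which this passage leaves open:

```python
def weight_rectangles(face_box, scales):
    """face_box = (cx, cy, w, h): center point and size of a face frame.
    Each weight rectangle keeps the face frame's center and aspect ratio;
    here we assume each is the frame scaled by a factor from `scales`.
    Returns (left, top, right, bottom) tuples in image coordinates."""
    cx, cy, w, h = face_box
    rects = []
    for s in scales:
        rw, rh = w * s, h * s  # rw/rh == w/h, so the aspect ratio is preserved
        rects.append((cx - rw / 2, cy - rh / 2, cx + rw / 2, cy + rh / 2))
    return rects
```

Because every rectangle is centered on (cx, cy), the rectangles are nested around the face regardless of the face frame's size.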
And 103, determining a first weight value corresponding to each color statistical point according to the position relation between each color statistical point and each weight rectangle and the reference weight corresponding to each weight rectangle.
There are various possible positional relationships between a color statistic point and the weight rectangles: for example, the color statistic point may be located inside every weight rectangle, or inside only some of the weight rectangles, and so on, which is not limited in this disclosure.
For example, if a color statistic point is located inside only one weight rectangle, the reference weight corresponding to that weight rectangle is the first weight value corresponding to the color statistic point.
Alternatively, when any color statistic point is located in at least two weight rectangles, the largest of the reference weights respectively corresponding to those weight rectangles may be determined as the first weight value corresponding to that color statistic point.
For example, if any color statistic point a is located in the weight rectangle 1 and the weight rectangle 2, the reference weight corresponding to the weight rectangle 1 is 15, and the reference weight corresponding to the weight rectangle 2 is 10, it can be determined that the first weight value corresponding to the any color statistic point a is 15.
Or, in a case where any color statistic point is located in a plurality of weight rectangles, an average value of reference weights corresponding to the plurality of weight rectangles may be determined as the first weight value corresponding to any color statistic point.
For example, if any color statistic point a is located in the weight rectangles 1, 2, and 3, the reference weight corresponding to the weight rectangle 1 is 15, the reference weight corresponding to the weight rectangle 2 is 10, and the reference weight corresponding to the weight rectangle 3 is 5, and the reference weights corresponding to the weight rectangles are averaged, it can be determined that the first weight value corresponding to any color statistic point a is 10.
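The two combination alternatives just described (taking the largest reference weight, or averaging the reference weights) can be sketched as one helper; note that the `default_weight` for points lying outside every rectangle is an assumption for the sketch, since this passage does not specify that case:

```python
def first_weight(point, rects, ref_weights, combine="max", default_weight=1):
    """Determine the first weight value for one color statistic point.
    `rects` are (left, top, right, bottom) weight rectangles with matching
    `ref_weights`. Reference weights of all rectangles containing the point
    are combined by "max" or "mean", per the two alternatives in the text."""
    x, y = point
    hits = [w for (l, t, r, b), w in zip(rects, ref_weights)
            if l <= x <= r and t <= y <= b]
    if not hits:
        return default_weight  # assumed behavior outside every rectangle
    return max(hits) if combine == "max" else sum(hits) / len(hits)
```

Reproducing the examples: a point inside rectangles with reference weights 15 and 10 gets 15 under "max"; a point inside rectangles with weights 15, 10 and 5 gets 10 under "mean".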
It should be noted that the above example is only an example, and cannot be taken as a limitation for determining the first weight value corresponding to any color statistical point in the embodiment of the present disclosure.
In the embodiment of the present disclosure, by using the reference weight corresponding to each weight rectangle, each color statistic point located in the weight rectangle may be assigned with a corresponding first weight value, so that the color statistic points in different weight rectangles in the image have respective corresponding weight values, thereby providing conditions for subsequent processing.
And 104, determining a white balance gain value corresponding to the original image according to the first weight value corresponding to each color statistical point and the color value corresponding to each color statistical point.
The color values corresponding to the color statistical points can be updated according to the first weighted values corresponding to the color statistical points, and then the white balance gain values corresponding to the original image are determined according to the updated color values.
For example, in the RgBg color statistical chart, color statistic point A corresponds to a first weight value of 10 and a color value of (Rg1, Bg1), color statistic point B corresponds to a first weight value of 5 and a color value of (Rg2, Bg2), and color statistic point C corresponds to a first weight value of 15 and a color value of (Rg3, Bg3). The updated color values of color statistic points A, B and C can thereby be determined: Rg_avg = (10 × Rg1 + 5 × Rg2 + 15 × Rg3)/(10 + 5 + 15), Bg_avg = (10 × Bg1 + 5 × Bg2 + 15 × Bg3)/(10 + 5 + 15). The corresponding white balance gains can then be determined: R gain value = 1/Rg_avg, G gain value = 1.0, B gain value = 1/Bg_avg.
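A minimal sketch of this gain computation, assuming the weighted mean is normalized by the sum of the first weight values and the G gain is fixed at 1.0 as stated above:

```python
def white_balance_gains(weights, colors):
    """weights: first weight value per color statistic point.
    colors: (Rg, Bg) color value per color statistic point.
    Returns (R gain, G gain, B gain) with the G gain fixed at 1.0."""
    total = sum(weights)
    rg_avg = sum(w * rg for w, (rg, _) in zip(weights, colors)) / total
    bg_avg = sum(w * bg for w, (_, bg) in zip(weights, colors)) / total
    return 1.0 / rg_avg, 1.0, 1.0 / bg_avg
```

For instance, if every point has color value (2.0, 0.5), the weighted averages are Rg_avg = 2.0 and Bg_avg = 0.5 regardless of the weights, giving gains (0.5, 1.0, 2.0) that pull both channels back toward the green channel.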
It can be understood that, since the human eye is most sensitive to light in the green portion of the spectrum (480 nm-600 nm) and green pixels are the most numerous in the Bayer array, the adjustment is usually realized by fixing the gain value of the green component and then adjusting the gain values of the red component and the blue component respectively.
It should be noted that the above examples are only illustrative and should not be taken as limiting the color histogram, the color statistic points, the color values, etc. in the embodiment of the present disclosure.
In the embodiment of the present disclosure, the reference weight of each weight rectangle is referred to by the first weight value corresponding to each color statistical point, so that the determined first weight value corresponding to each color statistical point is more reasonable and accurate, and thus the white balance gain value determined by using the first weight value is also more accurate and reliable.
Step 105, white balance processing is performed on the original image based on the white balance gain value.
Optionally, each gain value may be multiplied by each color component corresponding to the original image, so as to implement white balance processing on the original image.
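Multiplying each color component by its gain value can be sketched as follows (assuming an 8-bit RGB image and simple clipping to the valid range; the clipping choice is an assumption, not stated in this passage):

```python
import numpy as np

def apply_white_balance(image, gains):
    """Multiply each channel of an H x W x 3 RGB image by its gain value
    (r_gain, g_gain, b_gain) and clip back to the image's valid range."""
    out = image.astype(np.float64) * np.asarray(gains)  # broadcast over channels
    return np.clip(out, 0, 255).astype(image.dtype)
```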
The method includes the steps of preprocessing an original image acquired by an image acquisition device to generate a corresponding color statistical graph, determining positions of a plurality of weight rectangles corresponding to each face frame and reference weights corresponding to each weight rectangle in the original image under the condition that the original image contains a face, determining a first weight value corresponding to each color statistical point according to the position relation between each color statistical point and each weight rectangle and the reference weights corresponding to each weight rectangle, and determining a white balance gain value corresponding to the original image according to the first weight value corresponding to each color statistical point and the color value corresponding to each color statistical point, so that the original image is subjected to white balance processing based on the white balance gain value. Therefore, when the original image containing the human face is subjected to white balance processing, the first weight value corresponding to each color statistical point is determined according to the positions of the different color statistical points in the human face frame, and then the white balance gain corresponding to the original image is determined, so that the influence of the human face skin color on the white balance gain can be effectively reduced, the accuracy of the white balance processing is improved, and good experience can be given to a user.
In the embodiment, the original image is preprocessed to generate the corresponding color statistical graph, and under the condition that the original image contains the face, the first weight value corresponding to each color statistical point can be adjusted according to the multiple weight rectangles corresponding to each face frame, so that the influence of the skin color of the face and the like on the white balance gain can be effectively reduced, and the accuracy of white balance processing is improved. In a possible implementation manner, the first segmentation granularity and the second segmentation granularity may be determined according to performance parameters of the image acquisition device under different light sources and the current ambient brightness, and then the color statistical graph is segmented into a plurality of color statistical rectangles, which is further described with reference to fig. 2.
Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure. As shown in fig. 2, the image processing method may include the steps of:
step 201, preprocessing an original image acquired by an image acquisition device to generate a corresponding color statistical graph, wherein the color statistical graph includes a plurality of color statistical points and a color value corresponding to each color statistical point.
In step 202, when the original image includes a face, positions of a plurality of weight rectangles corresponding to each face frame included in the original image and reference weights corresponding to each weight rectangle are determined.
The center point and the length-width ratio of each weight rectangle are respectively the same as those of the corresponding face frame.
Step 203, determining a first weight value corresponding to each color statistic point according to the position relationship between each color statistic point and each weight rectangle and the reference weight corresponding to each weight rectangle.
For specific implementation of the steps 201 to 203, reference may be made to descriptions of other embodiments of the disclosure, and details are not described here.
And 204, determining a first segmentation granularity and a second segmentation granularity according to performance parameters of the image acquisition device under different light sources and the current environment brightness, wherein the performance parameters are used for representing color values of the image acquired by the image acquisition device in different dimensions.
For different types of light sources, the performance parameters of the image acquired by the image acquisition device may be the same or may also be different, which is not limited in this disclosure.
Optionally, the correspondence between the light source type and the ambient brightness and each division granularity may be set in advance, and then the first division granularity and the second division granularity may be determined by searching for the correspondence according to the light source type and the ambient brightness.
Or, the first color value of the image acquired by the image acquisition device under the first designated light source in the first dimension and the second color value of the image acquired by the image acquisition device in the second dimension may be determined, and then the third color value of the image acquired by the image acquisition device under the second designated light source in the first dimension and the fourth color value of the image acquired by the image acquisition device in the second dimension may be determined. And then, determining a reference distance value according to the first color value, the second color value, the third color value and the fourth color value, then determining a first coefficient and a second coefficient according to the current ambient brightness, then determining a first segmentation granularity based on the reference distance value and the first coefficient, and determining a second segmentation granularity based on the reference distance value and the second coefficient.
The first designated light source and the second designated light source may be any light source, which is not limited in the present disclosure.
In addition, for different color spaces, the types of the color values corresponding to the collected images may be different, which is not limited in this disclosure.
In addition, when determining the reference distance value, there may be various ways, for example, an euclidean distance formula or a manhattan distance formula may be used, which is not limited in this disclosure.
It can be understood that the corresponding relationship between the ambient brightness and the first coefficient and the second coefficient may be set in advance, so that the corresponding first coefficient and the second coefficient may be determined by searching the corresponding relationship according to the current ambient brightness. Alternatively, a set formula or the like may be used to determine the corresponding first coefficient, second coefficient, and the like according to the ambient brightness, which is not limited in this disclosure.
In addition, there are various ways to determine the first segmentation granularity and the second segmentation granularity from the reference distance value and the first and second coefficients.
For example, the product of the reference distance value and the first coefficient may be determined as the first segmentation granularity, and the product of the reference distance value and the second coefficient as the second segmentation granularity. Alternatively, the quotient of the reference distance value divided by the first coefficient may be determined as the first segmentation granularity and, correspondingly, the quotient of the reference distance value divided by the second coefficient as the second segmentation granularity, and so on, which is not limited in this disclosure.
For example, the image acquisition device acquires an image 1 under the first designated light source, where the average values of the R, G, and B components of image 1 are Ravg1, Gavg1, and Bavg1. The first color value in the first dimension may be expressed as Rg1 = Ravg1/Gavg1, and the second color value in the second dimension as Bg1 = Bavg1/Gavg1. The image acquisition device acquires an image 2 under the second designated light source, where the average values of the R, G, and B components of image 2 are Ravg2, Gavg2, and Bavg2. The third color value in the first dimension may be expressed as Rg2 = Ravg2/Gavg2, and the fourth color value in the second dimension as Bg2 = Bavg2/Gavg2. The reference distance value may then be expressed as:
D = sqrt((Rg1 - Rg2)^2 + (Bg1 - Bg2)^2)
Then, the first coefficient and the second coefficient can be determined by looking up the preset correspondence according to the current ambient brightness, the first segmentation granularity can be expressed as D × first coefficient, the second segmentation granularity as D × second coefficient, and the two segmentation granularities are thereby determined.
It should be noted that the above examples are merely illustrative and are not intended to limit the manner of determining the first segmentation granularity and the second segmentation granularity in the embodiments of the present disclosure.
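The worked example above (R/G and B/G ratios under two light sources, a Euclidean reference distance, and multiplication by the two brightness-dependent coefficients) can be sketched as follows. The Euclidean distance is one of the options the disclosure mentions, not the only one, and the function names are mine:

```python
import math

def channel_ratios(r_avg, g_avg, b_avg):
    """R/G and B/G ratios of an image captured under one light source,
    i.e. (Rg, Bg) as defined in the example above."""
    return r_avg / g_avg, b_avg / g_avg

def segmentation_granularities(ratios1, ratios2, coeff1, coeff2):
    """Reference distance D between the two light sources' (Rg, Bg) points
    (Euclidean variant), scaled by the two coefficients to give the first
    and second segmentation granularities."""
    rg1, bg1 = ratios1
    rg2, bg2 = ratios2
    d = math.hypot(rg1 - rg2, bg1 - bg2)  # sqrt((Rg1-Rg2)^2 + (Bg1-Bg2)^2)
    return d * coeff1, d * coeff2
```

Substituting the Manhattan distance would only change the `math.hypot` line to `abs(rg1 - rg2) + abs(bg1 - bg2)`.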
Step 205, based on the first segmentation granularity and the second segmentation granularity, segmenting the color statistical graph to determine a plurality of color statistical rectangles included in the color statistical graph.
The width of each color statistic rectangle may be the first segmentation granularity and its height the second segmentation granularity; the color statistical graph may then be evenly divided into a plurality of color statistic rectangles according to the first segmentation granularity and the second segmentation granularity.
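The even division described in this step amounts to mapping each color statistic point to a grid cell. A minimal sketch, assuming the color statistics plane is divided starting from the origin (an assumption, since the disclosure does not fix the grid origin):

```python
def rectangle_index(point, granularity_x, granularity_y):
    """Map a color statistic point (x, y) in the color statistics plane to
    the (column, row) index of the color statistic rectangle containing it,
    for a grid whose cells are granularity_x wide and granularity_y tall."""
    x, y = point
    return int(x // granularity_x), int(y // granularity_y)
```

Grouping points by this index yields, for each color statistic rectangle, the set of color statistic points it contains, which step 206 then aggregates into a second weight value.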
Step 206, determining a second weight value corresponding to each color statistic rectangle according to the first weight value corresponding to each color statistic point and the position relation between each color statistic point and each color statistic rectangle.
There may be multiple ways of determining the second weight value corresponding to each color statistic rectangle.
Optionally, the color statistic points contained in each color statistic rectangle may be determined according to the position relation between each color statistic point and the color statistic rectangles, and the sum of the first weight values corresponding to the color statistic points contained in each color statistic rectangle may then be determined as the second weight value corresponding to that rectangle.
For example, if color statistic points A, B, and C are located inside color statistic rectangle 1 and the first weight values corresponding to A, B, and C are 10, 20, and 30, respectively, the second weight value corresponding to color statistic rectangle 1 may be the sum of those first weight values: 60.
It should be noted that the above example is merely illustrative and cannot be taken as a limitation on the position relation between the color statistic points and the color statistic rectangles, the second weight value corresponding to each color statistic rectangle, and the like in the embodiments of the present disclosure.
Alternatively, the color statistic points contained in each color statistic rectangle may be determined first, and the largest first weight value among the color statistic points contained in each color statistic rectangle may then be determined as the second weight value corresponding to that rectangle.
For example, if color statistic points A, B, and C are located inside color statistic rectangle 1 and the first weight values corresponding to A, B, and C are 50, 30, and 10, respectively, the second weight value corresponding to color statistic rectangle 1 may be: 50.
or, first weight values respectively corresponding to the color statistic points included in each color statistic rectangle may be determined, and then, second weight values corresponding to the color statistic rectangles may be determined according to the magnitude of each first weight value.
For example, it may be set in advance, when the number of each color statistic point with the first weight value greater than 100 exceeds half, it may be determined that the second weight value corresponding to the second color statistic rectangle corresponding to the color statistic rectangle 1 is 120, and when the number of each color statistic point with the first weight value less than 100 is greater than half, it may be determined that the second weight value corresponding to the color statistic rectangle is 80. For example, if there are 10 color statistic points in the color statistic rectangle 1 and there are 7 color statistic points with the first weight value greater than 100, it may be determined that the second weight value corresponding to the color statistic rectangle is 120.
It should be noted that the above examples are merely illustrative and cannot be taken as a limitation on the position relation between the color statistic points and the color statistic rectangles, the first weight value corresponding to each color statistic point, the second weight value corresponding to each color statistic rectangle, and the like in the embodiments of the present disclosure.
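The three alternative aggregations described above (sum, maximum, and majority vote) can be sketched as follows; the threshold 100 and the outputs 120/80 in the majority-vote variant are the illustrative values from the examples above, not fixed by the disclosure:

```python
def second_weight_sum(first_weights):
    """Option 1: sum of the first weight values of the points in a rectangle."""
    return sum(first_weights)

def second_weight_max(first_weights):
    """Option 2: the largest first weight value among the points."""
    return max(first_weights) if first_weights else 0

def second_weight_majority(first_weights, threshold=100, high=120, low=80):
    """Option 3: majority vote against a preset threshold (values illustrative)."""
    if not first_weights:
        return 0
    above = sum(1 for w in first_weights if w > threshold)
    return high if above > len(first_weights) / 2 else low
```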
Step 207, updating the first weight value corresponding to each color statistic point based on the second weight value corresponding to the color statistic rectangle in which it is located, so as to obtain the updated first weight value corresponding to each color statistic point.
In order to reduce errors that a single color statistic point may introduce, for each color statistic point located in a color statistic rectangle, the second weight value corresponding to that rectangle may be determined as the updated first weight value of the point.
And 208, determining a white balance gain value corresponding to the original image according to the updated first weight value corresponding to each color statistical point and the color value corresponding to each color statistical point.
In step 209, the original image is subjected to white balance processing based on the white balance gain value.
It should be noted that specific contents and implementation forms of step 208 and step 209 may refer to other embodiments of the present disclosure, and are not described herein again.
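The disclosure defers the details of steps 208 and 209 to other embodiments. Purely as a hypothetical sketch of one common approach (a weighted average of each color statistic point's R/G and B/G values, with G as the reference channel), and not the computation the disclosure actually specifies:

```python
def white_balance_gains(points):
    """Hypothetical sketch of a step-208-style gain computation. `points` is
    a list of (first_weight, rg, bg) tuples, where rg and bg are a point's
    R/G and B/G color values. The weighted averages of rg and bg are pulled
    back toward neutral grey (R = G = B) by the returned (R, G, B) gains."""
    total = sum(w for w, _, _ in points)
    avg_rg = sum(w * rg for w, rg, _ in points) / total
    avg_bg = sum(w * bg for w, _, bg in points) / total
    # G is the reference channel, so its gain stays 1.
    return 1.0 / avg_rg, 1.0, 1.0 / avg_bg
```

Applying these gains to the R and B channels of the original image (with the G gain fixed at 1) would then constitute the white balance processing of step 209, under the assumption stated above.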
In the embodiment of the disclosure, an original image acquired by an image acquisition device may first be preprocessed to generate a corresponding color statistical graph. When the original image contains a face, the first weight value corresponding to each color statistic point is determined according to the positions of the plurality of weight rectangles corresponding to each face frame, the reference weight corresponding to each weight rectangle, and the position relation between each color statistic point and each weight rectangle. The first segmentation granularity and the second segmentation granularity are then determined according to the performance parameters of the image acquisition device under different light sources and the current ambient brightness, and the color statistical graph is segmented into a plurality of color statistic rectangles. A second weight value corresponding to each color statistic rectangle is determined and used to update the first weight values, after which the white balance gain value corresponding to the original image is determined and the original image is subjected to white balance processing. Therefore, when an original image containing a face is subjected to white balance processing, the first weight value corresponding to each color statistic point is determined according to the position of each color statistic point relative to the face frame, the first weight values are then updated using the second weight values corresponding to the color statistic rectangles, and the white balance gain corresponding to the original image is determined afterwards, so that the influence of the face skin color on the white balance gain can be effectively reduced, the accuracy of the white balance processing improved, and a good experience provided to the user.
It can be understood that the second weight values corresponding to different color statistic rectangles may be the same or different. Therefore, when the first weight value corresponding to each color statistic point is updated based on the second weight value corresponding to each color statistic rectangle, various situations may arise. In a possible implementation manner, as shown in fig. 3, step 207 may further include the following steps:
Step 301, determining a first color statistic rectangle group and a second color statistic rectangle group contained in the plurality of color statistic rectangles.
The second weight value corresponding to each color statistic rectangle in the first color statistic rectangle group is zero, and the second weight value corresponding to each color statistic rectangle in the second color statistic rectangle group is nonzero.
Step 302, updating the first weight value corresponding to each first color statistic point in each first color statistic rectangle based on the second weight value corresponding to each first color statistic rectangle in the first color statistic rectangle group, so as to obtain the updated first weight value corresponding to each first color statistic point.
In the first color statistic rectangle group, the updated first weight value of each first color statistic point in each first color statistic rectangle may be a second weight value corresponding to the first color statistic rectangle in which each first color statistic point is located.
Step 303, determining a first attenuation intensity and a second attenuation intensity corresponding to the current ambient brightness according to the corresponding relationship between the ambient brightness and the attenuation intensity, wherein the first attenuation intensity is greater than the second attenuation intensity.
Different ambient brightness may correspond to different attenuation intensities, and the corresponding relationship between the ambient brightness and each attenuation intensity may be determined in advance. Therefore, according to the current ambient brightness, the first attenuation intensity and the second attenuation intensity corresponding to the current ambient brightness can be determined by searching the corresponding relation.
Step 304, under the condition that the maximum value and the minimum value of the second weight value corresponding to each second color statistic rectangle in the second color statistic rectangle group are the same, determining the first attenuation intensity as the attenuation intensity corresponding to each second color statistic rectangle.
In the second color statistic rectangle group, when the maximum value and the minimum value of the second weight values corresponding to the second color statistic rectangles are the same, the second weight value corresponding to each second color statistic rectangle is the same, that is, the sum of the first weight values in each second color statistic rectangle is the same, and the color statistic points contained in each second color statistic rectangle can accurately represent the skin color of the human face. In order to reduce the influence of the human face skin color on the white balance gain, the attenuation intensity corresponding to each second color statistic rectangle may be determined as the first attenuation intensity, so that each second color statistic rectangle has a higher attenuation intensity.
Step 305, updating the second weight value corresponding to each second color statistical rectangle based on the attenuation strength corresponding to each second color statistical rectangle, so as to determine the updated second weight value corresponding to each second color statistical rectangle.
There may be multiple ways of updating the second weight value corresponding to each second color statistic rectangle. For example, if the attenuation intensity corresponding to each second color statistic rectangle is determined to be 70% and the second weight value corresponding to each second color statistic rectangle is 100, the updated second weight value corresponding to each second color statistic rectangle may be 30.
It should be noted that the above examples are only illustrative, and cannot be used as a limitation on the attenuation intensity, the second weight, and the like corresponding to the second color statistic rectangle in the embodiment of the present disclosure.
Step 306, updating the first weight value corresponding to each second color statistic point in each second color statistic rectangle based on the updated second weight value corresponding to each second color statistic rectangle, so as to obtain the updated first weight value corresponding to each second color statistic point.
The updated second weight value corresponding to each second color statistic rectangle may be determined as the updated first weight value of each second color statistic point in that rectangle.
In the embodiment of the disclosure, the second color statistic rectangles are given a higher attenuation intensity to reduce their second weight values, and the first weight value corresponding to each second color statistic point in each second color statistic rectangle is adjusted accordingly, so that the influence of the human face skin color on the white balance gain can be reduced as much as possible and the accuracy of the white balance processing improved.
According to the embodiment of the disclosure, the first weight values corresponding to the color statistic points in the different color statistic rectangle groups are updated, which effectively reduces the weights corresponding to those color statistic points before the white balance gain corresponding to the original image is determined, so that the influence of the human face skin color on the white balance gain can be effectively reduced, the accuracy of the white balance processing improved, and a good experience provided to the user.
In a possible implementation manner, as shown in fig. 4, the step 207 may further include the following steps:
Step 401, when the maximum value and the minimum value of the second weight values corresponding to the second color statistic rectangles in the second color statistic rectangle group are different, determining a first difference between the maximum value and the minimum value, and a second difference between the second weight value corresponding to each second color statistic rectangle and the minimum value.
For convenience of description, the first difference between the maximum value and the minimum value of the second weight values corresponding to the second color statistic rectangles in the second color statistic rectangle group may be denoted as D, and the second difference between the second weight value corresponding to the i-th second color statistic rectangle and the minimum value may be denoted as d_i. The second color statistic rectangle group may contain y second color statistic rectangles, where y may be any positive integer and i ranges from 1 to y.
Step 402, determining an attenuation rate according to the first attenuation intensity and the second attenuation intensity.
Optionally, for convenience of description, the first attenuation intensity may be denoted as Max_Str and the second attenuation intensity as Min_Str. The attenuation rate may be the difference between the two, expressed as: Max_Str - Min_Str.
Step 403, determining an attenuation change value of each second color statistic rectangle relative to the second attenuation intensity according to the attenuation rate, the first difference, and the second difference.
The attenuation change value of the i-th second color statistic rectangle relative to the second attenuation intensity can be expressed as: (Max_Str - Min_Str) × d_i / D.
Step 404, determining the attenuation intensity corresponding to each second color statistic rectangle according to the second attenuation intensity and the attenuation change value corresponding to each second color statistic rectangle.
The attenuation intensity corresponding to the i-th second color statistic rectangle can be expressed as: Min_Str + (Max_Str - Min_Str) × d_i / D.
Step 405, updating the second weight value corresponding to each second color statistic rectangle based on the attenuation strength corresponding to each second color statistic rectangle to determine an updated second weight value corresponding to each second color statistic rectangle.
For example, if the attenuation intensity corresponding to the i-th second color statistic rectangle is determined to be 70% and the second weight value corresponding to that rectangle is 100, the updated second weight value corresponding to that rectangle may be: 30.
It should be noted that the above examples are only illustrative, and cannot be used as a limitation on the attenuation intensity, the second weight, and the like corresponding to the second color statistic rectangle in the embodiment of the present disclosure.
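The formulas of steps 401 to 405 can be collected into a short sketch. `attenuated_weight` assumes, as in the 70%/100-becomes-30 example above, that attenuating by strength s multiplies the weight by (1 − s); the disclosure does not spell out the exact update rule:

```python
def attenuation_strength(second_weight, w_min, w_max, min_str, max_str):
    """Attenuation intensity for one second color statistic rectangle,
    following steps 401-404: Min_Str + (Max_Str - Min_Str) * d_i / D,
    where d_i = second_weight - w_min and D = w_max - w_min.
    Assumes w_max != w_min; the equal case is handled separately in
    step 304 by using the first attenuation intensity for every rectangle."""
    D = w_max - w_min
    d_i = second_weight - w_min
    return min_str + (max_str - min_str) * d_i / D

def attenuated_weight(second_weight, strength):
    """Step 405 under the assumed interpretation: attenuating by strength s
    multiplies the second weight value by (1 - s)."""
    return second_weight * (1.0 - strength)
```

Rectangles whose second weight value sits at the group minimum receive Min_Str and those at the maximum receive Max_Str, with the rest interpolated linearly in between.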
Step 406, updating the first weight value corresponding to each second color statistic point in each second color statistic rectangle based on the updated second weight value corresponding to each second color statistic rectangle, so as to obtain the updated first weight value corresponding to each second color statistic point.
The updated second weight value corresponding to each second color statistic rectangle may be determined as the updated first weight value of each second color statistic point in that rectangle.
In the embodiment of the disclosure, when the maximum value and the minimum value of the second weight values corresponding to the second color statistic rectangles in the second color statistic rectangle group are different, the attenuation rate and the attenuation intensities can be used to determine the updated second weight value corresponding to each second color statistic rectangle, and the first weight values corresponding to the second color statistic points at different positions are then updated accordingly. This effectively reduces the weights of those color statistic points before the white balance gain corresponding to the original image is determined, so that the influence of the human face skin color on the white balance gain can be effectively reduced, the accuracy of the white balance processing improved, and a good experience provided to the user.
The embodiment of the disclosure also provides an image processing device, and fig. 5 is a schematic structural diagram of the image processing device according to the embodiment of the disclosure.
As shown in fig. 5, the image processing apparatus 100 includes: a generation module 110, a first determination module 120, a second determination module 130, a third determination module 140, and a processing module 150.
The generating module 110 is configured to pre-process an original image acquired by an image acquisition device to generate a corresponding color statistical map, where the color statistical map includes a plurality of color statistical points and color values corresponding to each of the color statistical points.
A first determining module 120, configured to determine, when the original image includes a face, positions of a plurality of weight rectangles corresponding to each face frame included in the original image and a reference weight corresponding to each weight rectangle, where a center point and an aspect ratio of each weight rectangle are respectively the same as a center point and an aspect ratio of the corresponding face frame.
The second determining module 130 is configured to determine a first weighted value corresponding to each color statistical point according to a position relationship between each color statistical point and each weight rectangle and a reference weight corresponding to each weight rectangle.
The third determining module 140 is configured to determine a white balance gain value corresponding to the original image according to the first weight value corresponding to each color statistical point and the color value corresponding to each color statistical point.
And a processing module 150, configured to perform white balance processing on the original image based on the white balance gain value.
As a possible implementation manner, the first determining module 120 is specifically configured to: determining the corresponding ambient brightness of the original image; determining the number of weight rectangles corresponding to the original image and a weight value corresponding to each weight rectangle according to the environment brightness; and determining the positions of a plurality of weight rectangles corresponding to each face frame in the original image based on the number of the weight rectangles.
As a possible implementation manner, the second determining module 130 is specifically configured to, when any color statistic point is located in at least two weight rectangles, determine a larger value of reference weights respectively corresponding to the at least two weight rectangles as a first weight value corresponding to the any color statistic point.
As a possible implementation manner, the third determining module 140 includes:
the first determining unit is used for determining a first segmentation granularity and a second segmentation granularity according to performance parameters of the image acquisition device under different light sources and current environment brightness, wherein the performance parameters are used for representing color values of images acquired by the image acquisition device in different dimensions;
a dividing unit, configured to divide the color histogram based on the first division granularity and the second division granularity to determine a plurality of color statistic rectangles included in the color histogram;
the second determining unit is used for determining a second weight value corresponding to each color statistic rectangle according to the first weight value corresponding to each color statistic point and the position relation between each color statistic point and each color statistic rectangle;
the acquiring unit is used for updating the first weight value corresponding to each color statistic point based on the second weight value corresponding to the color statistic rectangle in which it is located, so as to acquire the updated first weight value corresponding to each color statistic point;
and the third determining unit is used for determining a white balance gain value corresponding to the original image according to the updated first weight value corresponding to each color statistical point and the color value corresponding to each color statistical point.
As a possible implementation manner, the first determining unit is specifically configured to:
determining a first color value of an image acquired by the image acquisition device under a first designated light source in a first dimension and a second color value of the image acquired by the image acquisition device in a second dimension;
determining a third color value in a first dimension and a fourth color value in a second dimension of an image acquired by the image acquisition device under a second designated light source;
determining a reference distance value according to the first color value, the second color value, the third color value and the fourth color value;
determining a first coefficient and a second coefficient according to the current ambient brightness;
determining the first segmentation granularity based on the reference distance value and the first coefficient;
determining the second partition granularity based on the reference distance value and the second coefficient.
As a possible implementation manner, the second determining unit is specifically configured to:
determining color statistic points contained in each color statistic rectangle according to the position relation between each color statistic point and the color statistic rectangle;
and determining the sum of the first weight values corresponding to the color statistic points contained in each color statistic rectangle as the second weight value corresponding to each color statistic rectangle.
As a possible implementation manner, the obtaining unit is specifically configured to:
determining a first color statistics rectangle group and a second color statistics rectangle group contained in the plurality of color statistics rectangles, wherein a second weight value corresponding to each color statistics rectangle in the first color statistics rectangle group is zero, and a second weight value corresponding to each color statistics rectangle in the second color statistics rectangle group is non-zero;
updating the first weight value corresponding to each first color statistic point in each first color statistic rectangle based on the second weight value corresponding to each first color statistic rectangle in the first color statistic rectangle group, so as to obtain the updated first weight value corresponding to each first color statistic point;
determining a first attenuation intensity and a second attenuation intensity corresponding to the current ambient brightness according to the corresponding relation between the ambient brightness and the attenuation intensity, wherein the first attenuation intensity is greater than the second attenuation intensity;
determining the first attenuation intensity as the attenuation intensity corresponding to each second color statistic rectangle in the second color statistic rectangle group under the condition that the maximum value and the minimum value of the second weight value corresponding to each second color statistic rectangle are the same;
updating a second weight value corresponding to each second color statistical rectangle based on the attenuation intensity corresponding to each second color statistical rectangle to determine an updated second weight value corresponding to each second color statistical rectangle;
updating the first weight value corresponding to each second color statistic point in each second color statistic rectangle based on the updated second weight value corresponding to each second color statistic rectangle, so as to obtain the updated first weight value corresponding to each second color statistic point.
As a possible implementation manner, the obtaining unit is further specifically configured to:
under the condition that the maximum value and the minimum value in the second weight values corresponding to the second color statistic rectangles in the second color statistic rectangle group are different, determining a first difference value between the maximum value and the minimum value and a second difference value between the second weight value corresponding to each second color statistic rectangle and the minimum value;
determining an attenuation rate according to the first attenuation intensity and the second attenuation intensity;
determining an attenuation change value of each second color statistical rectangle relative to the second attenuation intensity according to the attenuation rate, the first difference value and the second difference value;
determining the attenuation intensity corresponding to each second color statistic rectangle according to the second attenuation intensity and the attenuation change value corresponding to each second color statistic rectangle;
updating a second weight value corresponding to each second color statistic rectangle based on the attenuation intensity corresponding to each second color statistic rectangle to determine an updated second weight value corresponding to each second color statistic rectangle;
updating the first weight value corresponding to each second color statistic point in each second color statistic rectangle based on the updated second weight value corresponding to each second color statistic rectangle, so as to obtain the updated first weight value corresponding to each second color statistic point.
The functions and specific implementation principles of the modules in the embodiments of the present disclosure may refer to the embodiments of the methods, and are not described herein again.
The image processing device according to the embodiment of the disclosure first preprocesses an original image acquired by an image acquisition device to generate a corresponding color statistical graph. When the original image contains a face, it determines the positions of the plurality of weight rectangles corresponding to each face frame contained in the original image and the reference weight corresponding to each weight rectangle, then determines the first weight value corresponding to each color statistic point according to the position relation between each color statistic point and each weight rectangle and the reference weights, determines the white balance gain value corresponding to the original image according to the first weight value and the color value corresponding to each color statistic point, and performs white balance processing on the original image based on the white balance gain value. Therefore, when an original image containing a face is subjected to white balance processing, the first weight value corresponding to each color statistic point is determined according to the position of each color statistic point relative to the face frame before the white balance gain is determined, so that the influence of the face skin color on the white balance gain can be effectively reduced, the accuracy of the white balance processing improved, and a good experience provided to the user.
Fig. 6 is a block diagram of a structure of an electronic device according to an embodiment of the present disclosure.
As shown in fig. 6, the electronic device 200 includes a memory 210, a processor 220, and a bus 230 connecting the various components, including the memory 210 and the processor 220.
Wherein, the memory 210 is used for storing the executable instructions of the processor 220; the processor 220 is configured to call and execute the executable instructions stored in the memory 210 to implement the image processing method proposed in the above embodiments of the present disclosure.
Bus 230 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Electronic device 200 typically includes a variety of electronic device readable media. Such media may be any available media that is accessible by electronic device 200 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 210 may also include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 240 and/or cache memory 250. The electronic device 200 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 260 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 6, commonly referred to as a "hard drive"). Although not shown in FIG. 6, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 230 by one or more data media interfaces. Memory 210 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the disclosure.
A program/utility 280 having a set (at least one) of program modules 270 may be stored, for example, in the memory 210, such program modules 270 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which or some combination of which may comprise an implementation of a network environment. The program modules 270 generally perform the functions and/or methodologies of the embodiments described in this disclosure.
Electronic device 200 may also communicate with one or more external devices 290 (e.g., keyboard, pointing device, display 291, etc.), with one or more devices that enable a user to interact with electronic device 200, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 200 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 292. Also, the electronic device 200 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 293. As shown, the network adapter 293 communicates with the other modules of the electronic device 200 via the bus 230. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 200, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 220 executes various functional applications and data processing by executing programs stored in the memory 210.
It should be noted that, for the implementation process of the electronic device according to the embodiment of the present disclosure, reference is made to the foregoing explanation of the image processing method according to the embodiment of the present disclosure, and details are not repeated here.
The electronic device of the embodiments of the present disclosure first preprocesses an original image acquired by an image acquisition device to generate a corresponding color statistics chart. When the original image contains a face, it determines the positions of a plurality of weight rectangles corresponding to each face frame in the original image and the reference weight corresponding to each weight rectangle; it then determines a first weight value for each color statistic point according to the positional relationship between that point and each weight rectangle and the reference weight of each weight rectangle, determines a white balance gain value for the original image from the first weight value and the color value of each color statistic point, and performs white balance processing on the original image based on the white balance gain value. Because the first weight value of each color statistic point is determined by the point's position relative to the face frame, the influence of face skin color on the white balance gain is effectively reduced, the accuracy of the white balance processing is improved, and the user is given a better experience.
In order to implement the above embodiments, the present disclosure also proposes a non-transitory computer-readable storage medium. When instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the image processing method described above.
In order to implement the above embodiments, the present disclosure also provides a computer program product which, when executed by a processor of an electronic device, enables the electronic device to perform the image processing method described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (18)

1. A method of processing an image, comprising:
preprocessing an original image acquired by an image acquisition device to generate a corresponding color statistical graph, wherein the color statistical graph comprises a plurality of color statistical points and a color value corresponding to each color statistical point;
determining the positions of a plurality of weight rectangles corresponding to each face frame contained in the original image and the reference weight corresponding to each weight rectangle under the condition that the original image contains a face, wherein the central point and the aspect ratio of each weight rectangle are respectively the same as those of the corresponding face frame;
determining a first weight value corresponding to each color statistical point according to the position relation between each color statistical point and each weight rectangle and the reference weight corresponding to each weight rectangle;
determining a white balance gain value corresponding to the original image according to a first weight value corresponding to each color statistical point and a color value corresponding to each color statistical point;
and carrying out white balance processing on the original image based on the white balance gain value.
2. The method of claim 1, wherein the determining the positions of the plurality of weight rectangles corresponding to each face frame contained in the original image comprises:
determining the corresponding ambient brightness of the original image;
determining the number of weight rectangles corresponding to the original image and the reference weight corresponding to each weight rectangle according to the ambient brightness;
and determining the positions of a plurality of weight rectangles corresponding to each face frame in the original image based on the number of the weight rectangles.
3. The method of claim 1, wherein the determining the first weight value corresponding to each color statistic point according to the position relationship between each color statistic point and each weight rectangle and the reference weight corresponding to each weight rectangle comprises:
and under the condition that any color statistic point is located in at least two weight rectangles, determining the largest of the reference weights respectively corresponding to the at least two weight rectangles as the first weight value corresponding to that color statistic point.
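The rule of claim 3 can be illustrated with a short sketch; the rectangle representation and the default weight for points lying outside every weight rectangle are illustrative assumptions, not part of the claim.

```python
def first_weight(point, weight_rects, default=1.0):
    """weight_rects: list of ((x0, y0, x1, y1), reference_weight).
    A color statistic point inside at least two weight rectangles
    takes the largest of their reference weights; `default` is an
    assumed weight for points outside every rectangle."""
    x, y = point
    hits = [w for (x0, y0, x1, y1), w in weight_rects
            if x0 <= x <= x1 and y0 <= y <= y1]
    return max(hits) if hits else default

rects = [((0, 0, 10, 10), 0.2), ((4, 4, 6, 6), 0.05)]
inner = first_weight((5, 5), rects)    # inside both rectangles
outer = first_weight((20, 20), rects)  # outside both rectangles
```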
4. The method according to any of claims 1-3, wherein the determining the white balance gain value corresponding to the original image according to the first weight value corresponding to each color statistic point and the color value corresponding to each color statistic point comprises:
determining a first segmentation granularity and a second segmentation granularity according to performance parameters of the image acquisition device under different light sources and current environment brightness, wherein the performance parameters are used for representing color values of images acquired by the image acquisition device in different dimensions;
based on the first segmentation granularity and the second segmentation granularity, segmenting the color statistical graph to determine a plurality of color statistical rectangles contained in the color statistical graph;
determining a second weight value corresponding to each color statistic rectangle according to the first weight value corresponding to each color statistic point and the position relation between each color statistic point and the color statistic rectangle;
updating the first weight value corresponding to each color statistic point based on the second weight value corresponding to each color statistic rectangle to obtain an updated first weight value corresponding to each color statistic point;
and determining a white balance gain value corresponding to the original image according to the updated first weight value corresponding to each color statistical point and the color value corresponding to each color statistical point.
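The segmentation step of claim 4 can be illustrated as a simple tiling of the color statistics chart into color statistic rectangles; the tiling scheme and the function name are illustrative assumptions.

```python
def split_color_chart(width, height, gran_x, gran_y):
    """Tile a width x height color statistics chart into color
    statistic rectangles of size gran_x x gran_y (the first and
    second segmentation granularities); edge tiles are clipped."""
    return [(x, y, min(x + gran_x, width), min(y + gran_y, height))
            for y in range(0, height, gran_y)
            for x in range(0, width, gran_x)]

# An 8 x 6 chart split with granularities 4 and 3 yields 4 rectangles.
rects = split_color_chart(8, 6, 4, 3)
```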
5. The method of claim 4, wherein the determining the first segmentation granularity and the second segmentation granularity according to the performance parameters of the image acquisition device under different light sources and the current ambient brightness comprises:
determining a first color value in a first dimension and a second color value in a second dimension of an image acquired by the image acquisition device under a first designated light source;
determining a third color value in a first dimension and a fourth color value in a second dimension of an image acquired by the image acquisition device under a second designated light source;
determining a reference distance value according to the first color value, the second color value, the third color value and the fourth color value;
determining a first coefficient and a second coefficient according to the current ambient brightness;
determining the first segmentation granularity based on the reference distance value and the first coefficient; and
determining the second segmentation granularity based on the reference distance value and the second coefficient.
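One plausible reading of claim 5 follows; the Euclidean distance between the two light sources in the 2-D color space and the multiplicative scaling by the two coefficients are assumptions, since the claim only requires granularities derived from the four color values and the brightness-dependent coefficients.

```python
import math

def segmentation_granularities(first_cv, second_cv, third_cv, fourth_cv,
                               k1, k2):
    """Reference distance between the images of the two designated
    light sources in a 2-D color space, scaled by the brightness-
    dependent first and second coefficients k1 and k2."""
    ref = math.hypot(first_cv - third_cv, second_cv - fourth_cv)
    return ref * k1, ref * k2

# Distance between (0, 0) and (3, 4) is 5; coefficients 1.0 and 0.5.
g1, g2 = segmentation_granularities(0.0, 0.0, 3.0, 4.0, 1.0, 0.5)
```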
6. The method of claim 4, wherein determining a second weight value corresponding to each color statistic rectangle according to the first weight value corresponding to each color statistic point and the position relationship between each color statistic point and the color statistic rectangle comprises:
determining color statistic points contained in each color statistic rectangle according to the position relation between each color statistic point and the color statistic rectangle;
and determining the sum of the first weight values corresponding to the color statistic points contained in each color statistic rectangle as the second weight value corresponding to each color statistic rectangle.
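The aggregation of claim 6 can be sketched directly; half-open containment at rectangle edges is an assumption made to keep points on shared edges unambiguous.

```python
def second_weights(points, first_w, rects):
    """Second weight value of each color statistic rectangle: the sum
    of the first weight values of the color statistic points that the
    rectangle contains (half-open containment is an assumption)."""
    return [sum(w for (x, y), w in zip(points, first_w)
                if x0 <= x < x1 and y0 <= y < y1)
            for (x0, y0, x1, y1) in rects]

sw = second_weights([(1, 1), (2, 2), (9, 9)],   # color statistic points
                    [0.5, 0.25, 1.0],           # their first weight values
                    [(0, 0, 4, 4), (4, 4, 12, 12)])
```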
7. The method according to claim 5 or 6, wherein the updating the first weight value corresponding to each color statistic point based on the second weight value corresponding to each color statistic rectangle to obtain the updated first weight value corresponding to each color statistic point comprises:
determining a first color statistic rectangle group and a second color statistic rectangle group contained in the plurality of color statistic rectangles, wherein a second weight value corresponding to each color statistic rectangle in the first color statistic rectangle group is zero, and a second weight value corresponding to each color statistic rectangle in the second color statistic rectangle group is nonzero;
updating the first weight value corresponding to each color statistic point in each first color statistic rectangle based on the second weight value corresponding to each first color statistic rectangle in the first color statistic rectangle group, so as to obtain an updated first weight value corresponding to each color statistic point;
determining a first attenuation intensity and a second attenuation intensity corresponding to the current ambient brightness according to the corresponding relation between the ambient brightness and the attenuation intensity, wherein the first attenuation intensity is greater than the second attenuation intensity;
determining the first attenuation intensity as the attenuation intensity corresponding to each second color statistic rectangle in the second color statistic rectangle group under the condition that the maximum value and the minimum value of the second weight value corresponding to each second color statistic rectangle are the same;
updating a second weight value corresponding to each second color statistic rectangle based on the attenuation intensity corresponding to each second color statistic rectangle to determine an updated second weight value corresponding to each second color statistic rectangle;
and updating the first weight value corresponding to each color statistic point in each second color statistic rectangle based on the updated second weight value corresponding to each second color statistic rectangle, so as to obtain an updated first weight value corresponding to each color statistic point.
8. The method of claim 7, wherein after the determining the first attenuation intensity and the second attenuation intensity corresponding to the current ambient brightness, the method further comprises:
under the condition that the maximum value and the minimum value in the second weight values corresponding to the second color statistic rectangles in the second color statistic rectangle group are different, determining a first difference value between the maximum value and the minimum value and a second difference value between the second weight value corresponding to each second color statistic rectangle and the minimum value;
determining an attenuation rate according to the first attenuation intensity and the second attenuation intensity;
determining an attenuation change value of each second color statistical rectangle relative to the second attenuation intensity according to the attenuation rate, the first difference value and the second difference value;
determining the attenuation intensity corresponding to each second color statistic rectangle according to the second attenuation intensity and the attenuation change value corresponding to each second color statistic rectangle;
updating a second weight value corresponding to each second color statistical rectangle based on the attenuation intensity corresponding to each second color statistical rectangle to determine an updated second weight value corresponding to each second color statistical rectangle;
and updating the first weight value corresponding to each color statistic point in each second color statistic rectangle based on the updated second weight value corresponding to each second color statistic rectangle, so as to obtain an updated first weight value corresponding to each color statistic point.
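A sketch of the attenuation-intensity assignment of claims 7 and 8 under a linear-interpolation reading (the interpolation form is an assumption; the claim only requires an attenuation change value derived from the attenuation rate and the two difference values):

```python
def attenuation_intensities(second_w, s1, s2):
    """Per-rectangle attenuation intensity: when all second weight
    values are equal, every rectangle gets the stronger first
    attenuation intensity s1 (claim 7); otherwise each rectangle is
    interpolated between s2 (at the minimum second weight) and s1
    (at the maximum) using the rate (s1 - s2) / (max - min)."""
    mx, mn = max(second_w), min(second_w)
    if mx == mn:
        return [s1] * len(second_w)
    rate = (s1 - s2) / (mx - mn)        # attenuation rate (claim 8)
    return [s2 + rate * (w - mn) for w in second_w]

atten = attenuation_intensities([1.0, 3.0, 5.0], 0.9, 0.3)
equal = attenuation_intensities([2.0, 2.0], 0.9, 0.3)
```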
9. An apparatus for processing an image, comprising:
the generating module is used for preprocessing an original image acquired by the image acquisition device to generate a corresponding color statistical graph, wherein the color statistical graph comprises a plurality of color statistical points and color values corresponding to the color statistical points;
a first determining module, configured to determine, when the original image includes a face, positions of a plurality of weight rectangles corresponding to each face frame included in the original image and a reference weight corresponding to each weight rectangle, where a center point and an aspect ratio of each weight rectangle are the same as those of the corresponding face frame, respectively;
the second determining module is used for determining a first weight value corresponding to each color statistical point according to the position relation between each color statistical point and each weight rectangle and the reference weight corresponding to each weight rectangle;
a third determining module, configured to determine a white balance gain value corresponding to the original image according to the first weight value corresponding to each color statistical point and the color value corresponding to each color statistical point;
and the processing module is used for carrying out white balance processing on the original image based on the white balance gain value.
10. The apparatus of claim 9, wherein the first determining module is specifically configured to:
determining the corresponding ambient brightness of the original image;
determining the number of weight rectangles corresponding to the original image and the reference weight corresponding to each weight rectangle according to the ambient brightness;
and determining the positions of a plurality of weight rectangles corresponding to each face frame in the original image based on the number of the weight rectangles.
11. The apparatus of claim 9, wherein the second determining module is specifically configured to:
and under the condition that any color statistic point is located in at least two weight rectangles, determining the largest of the reference weights respectively corresponding to the at least two weight rectangles as the first weight value corresponding to that color statistic point.
12. The apparatus of any of claims 9-11, wherein the third determining module comprises:
the first determining unit is used for determining a first segmentation granularity and a second segmentation granularity according to performance parameters of the image acquisition device under different light sources and current environment brightness, wherein the performance parameters are used for representing color values of images acquired by the image acquisition device in different dimensions;
a dividing unit, configured to divide the color histogram based on the first division granularity and the second division granularity to determine a plurality of color statistic rectangles included in the color histogram;
the second determining unit is used for determining a second weight value corresponding to each color statistic rectangle according to the first weight value corresponding to each color statistic point and the position relation between each color statistic point and the color statistic rectangle;
the acquiring unit is used for updating the first weight value corresponding to each color statistic point based on the second weight value corresponding to each color statistic rectangle, so as to acquire an updated first weight value corresponding to each color statistic point;
and the third determining unit is used for determining a white balance gain value corresponding to the original image according to the updated first weight value corresponding to each color statistical point and the color value corresponding to each color statistical point.
13. The apparatus of claim 12, wherein the first determining unit is specifically configured to:
determining a first color value in a first dimension and a second color value in a second dimension of an image acquired by the image acquisition device under a first designated light source;
determining a third color value in a first dimension and a fourth color value in a second dimension of an image acquired by the image acquisition device under a second designated light source;
determining a reference distance value according to the first color value, the second color value, the third color value and the fourth color value;
determining a first coefficient and a second coefficient according to the current ambient brightness;
determining the first segmentation granularity based on the reference distance value and the first coefficient;
determining the second segmentation granularity based on the reference distance value and the second coefficient.
14. The apparatus according to claim 12, wherein the second determining unit is specifically configured to:
determining color statistic points contained in each color statistic rectangle according to the position relation between each color statistic point and the color statistic rectangle;
and determining the sum of the first weight values corresponding to the color statistic points contained in each color statistic rectangle as the second weight value corresponding to each color statistic rectangle.
15. The apparatus according to claim 13 or 14, wherein the acquiring unit is specifically configured to:
determining a first color statistic rectangle group and a second color statistic rectangle group contained in the plurality of color statistic rectangles, wherein a second weight value corresponding to each color statistic rectangle in the first color statistic rectangle group is zero, and a second weight value corresponding to each color statistic rectangle in the second color statistic rectangle group is nonzero;
updating the first weight value corresponding to each color statistic point in each first color statistic rectangle based on the second weight value corresponding to each first color statistic rectangle in the first color statistic rectangle group, so as to obtain an updated first weight value corresponding to each color statistic point;
determining a first attenuation intensity and a second attenuation intensity corresponding to the current ambient brightness according to the corresponding relation between the ambient brightness and the attenuation intensity, wherein the first attenuation intensity is greater than the second attenuation intensity;
determining the first attenuation intensity as the attenuation intensity corresponding to each second color statistic rectangle in the second color statistic rectangle group under the condition that the maximum value and the minimum value of the second weight value corresponding to each second color statistic rectangle are the same;
updating a second weight value corresponding to each second color statistic rectangle based on the attenuation intensity corresponding to each second color statistic rectangle to determine an updated second weight value corresponding to each second color statistic rectangle;
and updating the first weight value corresponding to each color statistic point in each second color statistic rectangle based on the updated second weight value corresponding to each second color statistic rectangle, so as to obtain an updated first weight value corresponding to each color statistic point.
16. The apparatus of claim 15, wherein the acquiring unit is further configured to:
under the condition that the maximum value and the minimum value in the second weight values corresponding to the second color statistic rectangles in the second color statistic rectangle group are different, determining a first difference value between the maximum value and the minimum value and a second difference value between the second weight value corresponding to each second color statistic rectangle and the minimum value;
determining an attenuation rate according to the first attenuation intensity and the second attenuation intensity;
determining an attenuation change value of each second color statistical rectangle relative to the second attenuation intensity according to the attenuation rate, the first difference and the second difference;
determining the attenuation intensity corresponding to each second color statistic rectangle according to the second attenuation intensity and the attenuation change value corresponding to each second color statistic rectangle;
updating a second weight value corresponding to each second color statistic rectangle based on the attenuation intensity corresponding to each second color statistic rectangle to determine an updated second weight value corresponding to each second color statistic rectangle;
and updating the first weight value corresponding to each color statistic point in each second color statistic rectangle based on the updated second weight value corresponding to each second color statistic rectangle, so as to obtain an updated first weight value corresponding to each color statistic point.
17. An electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to invoke and execute the memory-stored executable instructions to implement the method of processing an image of any of claims 1-8.
18. A non-transitory computer readable storage medium, instructions in which, when executed by a processor of an electronic device, enable the electronic device to perform the method of processing an image of any one of claims 1-8.
CN202110310827.2A 2021-03-23 2021-03-23 Image processing method and device, electronic equipment and storage medium Active CN115118947B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110310827.2A CN115118947B (en) 2021-03-23 2021-03-23 Image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115118947A true CN115118947A (en) 2022-09-27
CN115118947B CN115118947B (en) 2023-11-24

Family

ID=83323835

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110310827.2A Active CN115118947B (en) 2021-03-23 2021-03-23 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115118947B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070031060A1 (en) * 2005-08-04 2007-02-08 Canon Kabushiki Kaisha Image processing apparatus, method for calculating white balance evaluation value, program including program code for realizing the method for calculating white balance evaluation value, and storage medium for storing the program
JP2009004966A (en) * 2007-06-20 2009-01-08 Panasonic Corp Imaging apparatus
US20090002519A1 (en) * 2007-06-29 2009-01-01 Tomokazu Nakamura Image processing method, apparatus and computer program product, and imaging apparatus, method and computer program product
US20160269707A1 (en) * 2013-11-28 2016-09-15 Olympus Corporation Multi-area white-balance control device, multi-area white-balance control method, multi-area white-balance control program, computer in which multi-area white-balance control program is recorded, multi-area white-balance image-processing device, multi-area white-balance image-processing method, multi-area white-balance image-processing program, computer in which multi-area white-balance image-processing program is recorded, and image-capture apparatus
CN105187810A (en) * 2014-11-11 2015-12-23 怀效宁 Automatic white balance method based on face color features and electronic media device
WO2017096865A1 (en) * 2015-12-08 2017-06-15 乐视控股(北京)有限公司 Method and device for processing human face-containing image
CN106878695A (en) * 2017-02-13 2017-06-20 广东欧珀移动通信有限公司 White balance processing method, device and computer equipment
CN107343189A (en) * 2017-07-10 2017-11-10 广东欧珀移动通信有限公司 White balance processing method and device
US20200396435A1 (en) * 2017-07-10 2020-12-17 Guangdong Oppo Mobile Telecommunications Corp., Ltd. White balance processing method and apparatus
US20190019312A1 (en) * 2017-07-12 2019-01-17 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and device for processing white balance of image and storage medium
CN108024055A (en) * 2017-11-03 2018-05-11 广东欧珀移动通信有限公司 White balance processing method, apparatus, mobile terminal and storage medium
US20200036888A1 (en) * 2018-07-26 2020-01-30 Qualcomm Incorporated Calibration of Automatic White Balancing using Facial Images
CN109151428A (en) * 2018-08-30 2019-01-04 Oppo广东移动通信有限公司 Automatic white balance processing method, device and computer storage medium
CN110971813A (en) * 2018-09-30 2020-04-07 北京微播视界科技有限公司 Focusing method and device, electronic equipment and storage medium
CN110381303A (en) * 2019-05-31 2019-10-25 成都品果科技有限公司 Portrait automatic exposure white balance correction method and system based on skin color statistics

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHUNYUAN ZHENG: "A DOA Estimation Method in TD-SCDMA System", 2015 2nd International Conference on Information Science and Control Engineering *
CHENG Benfei; DAI Ming; SUN Lina: "Design and Implementation of Bayer Color Automatic White Balance Based on FPGA", Application of Electronic Technique, no. 08 *
HUANG Mu: "Face Detection and Recognition Based on iOS Platform for Video Surveillance", China Masters' Theses Full-text Database *

Also Published As

Publication number Publication date
CN115118947B (en) 2023-11-24

Similar Documents

Publication Publication Date Title
US10565742B1 (en) Image processing method and apparatus
US20130136352A1 (en) Method of chromatic classification of pixels and method of adaptive enhancement of a color image
US20200137369A1 (en) White balance processing method and apparatus
WO2022116989A1 (en) Image processing method and apparatus, and device and storage medium
CN113132695B (en) Lens shading correction method and device and electronic equipment
US8400523B2 (en) White balance method and white balance device
WO2022121893A1 (en) Image processing method and apparatus, and computer device and storage medium
CN105681775A (en) White balance method and device
WO2020119454A1 (en) Method and apparatus for color reproduction of image
EP3685346A1 (en) System and method for image dynamic range adjusting
CN110175967B (en) Image defogging processing method, system, computer device and storage medium
US7796827B2 (en) Face enhancement in a digital video
CN114429476A (en) Image processing method, image processing apparatus, computer device, and storage medium
CN111935481B (en) Method and device for testing image shooting device and computer readable storage medium
EP4184388A1 (en) White balance correction method and apparatus, device, and storage medium
JP2002281327A (en) Device, method and program for image processing
CN117218039A (en) Image processing method, device, computer equipment and storage medium
CN114286000B (en) Image color processing method and device and electronic equipment
CN115118947B (en) Image processing method and device, electronic equipment and storage medium
CN114390266B (en) Image white balance processing method, device and computer readable storage medium
CN107767350B (en) Video image restoration method and device
CN113099191B (en) Image processing method and device
US11647298B2 (en) Image processing apparatus, image capturing apparatus, image processing method, and storage medium
KR102215607B1 (en) Electronic device capable of correction to improve the brightness of dark images and operating method thereof
CN114266803A (en) Image processing method, image processing device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant