CN109151428B - Automatic white balance processing method, device and computer storage medium - Google Patents


Publication number
CN109151428B
Authority
CN
China
Prior art keywords
image
processed
awb
value
face
Prior art date
Legal status
Active
Application number
CN201811006222.9A
Other languages
Chinese (zh)
Other versions
CN109151428A (en)
Inventor
袁全
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201811006222.9A
Publication of CN109151428A
Application granted
Publication of CN109151428B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H04N 23/84 Camera processing pipelines; Components thereof for processing colour signals
    • H04N 23/88 Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the present application discloses an automatic white balance processing method, an automatic white balance processing apparatus, and a computer storage medium. The method includes: acquiring an image to be processed that contains a human face; according to the region ratio occupied by a detected object in the image to be processed, applying a face automatic white balance (FaceAWB) algorithm to the face region of the image to be processed to calculate a first automatic white balance (AWB) value; acquiring, according to the region ratio occupied by the face in the image to be processed, a first weight corresponding to the first AWB value and a second weight corresponding to a second AWB value, where the second AWB value is calculated over the entire image to be processed using an AWB algorithm different from the FaceAWB algorithm; and determining a final AWB value based on the first AWB value, the first weight, the second AWB value, and the second weight. Thus, when FaceAWB is applied to shooting scenes of a rear camera, on one hand the influence of FaceAWB on normal scenes is reduced, and on the other hand the color jump caused by a small face-region ratio is resolved.

Description

Automatic white balance processing method, device and computer storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an automatic white balance processing method and apparatus, and a computer storage medium.
Background
With the development of science and technology, electronic devices with a shooting function, such as smartphones, cameras, and video cameras, are widely used in people's daily life, work, and study, and play an increasingly important role. Image processing in such devices performs not only Automatic White Balance (AWB) but also Face Automatic White Balance (FaceAWB) targeted at human faces.
At present, FaceAWB is designed mainly for the front camera: the face region in an image shot by the front camera is generally large, so face detection is stable. However, when FaceAWB is applied to the rear camera, the shooting scenes are more complex. Solving the color-cast problem of a face against a pure-color background may affect AWB performance in some normal scenes; moreover, when the face region in the captured image is small, face detection becomes unstable and a color jump may occur.
Disclosure of Invention
The main objective of the present application is to provide an automatic white balance processing method, an automatic white balance processing apparatus, and a computer storage medium, such that when FaceAWB is applied to shooting scenes of a rear camera, on one hand the influence of FaceAWB on normal scenes can be reduced, and on the other hand the color jump caused by a small face-region ratio can be resolved.
To achieve the above objective, the technical solutions of the present application are implemented as follows:
in a first aspect, an embodiment of the present application provides an automatic white balance processing method, where the method includes:
acquiring an image to be processed containing a human face;
according to the area ratio occupied by the detected object in the image to be processed, adopting a faceAWB algorithm to the face area in the image to be processed, and calculating to obtain a first AWB value;
acquiring a first weight corresponding to the first AWB value and a second weight corresponding to the second AWB value according to the area ratio of the face in the image to be processed; the second AWB value is obtained by calculating all areas of the image to be processed by adopting an AWB algorithm different from the faceAWB algorithm;
determining a final AWB value based on the first AWB value, the first weight, the second AWB value, and the second weight.
In a second aspect, an embodiment of the present application provides an AWB processing apparatus, including: a communication bus, a memory, and a processor; wherein,
the communication bus is used for realizing connection communication between the memory and the processor;
the memory for storing a computer program operable on the processor;
the processor is configured to perform the steps of the method of the first aspect when running the computer program.
In a third aspect, embodiments of the present application provide a computer storage medium storing an AWB processing program, which when executed by at least one processor implements the steps of the method of the first aspect.
In a fourth aspect, an embodiment of the present application provides an electronic device, which includes at least a rear camera and the AWB processing apparatus according to the second aspect.
The embodiment of the application provides an automatic white balance processing method, an automatic white balance processing device and a computer storage medium, and the method comprises the steps of obtaining an image to be processed containing a human face; according to the area ratio occupied by the detected object in the image to be processed, adopting a faceAWB algorithm to the face area in the image to be processed, and calculating to obtain a first AWB value; acquiring a first weight corresponding to the first AWB value and a second weight corresponding to the second AWB value according to the area ratio of the face in the image to be processed; the second AWB value is obtained by calculating all areas of the image to be processed by adopting an AWB algorithm different from the faceAWB algorithm; determining a final AWB value based on the first AWB value, the first weight, the second AWB value, and the second weight; therefore, when the faceAWB is applied to a shooting scene of a rear camera, on one hand, the influence of the faceAWB on a normal scene can be reduced, and on the other hand, color jump caused by small face area occupation can be solved.
Drawings
Fig. 1 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of an AWB processing method according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a to-be-processed image according to an embodiment of the present disclosure;
fig. 4 is a detailed flowchart of an AWB processing method according to an embodiment of the present disclosure;
fig. 5 is a detailed flow diagram of another AWB processing method provided in the embodiments of the present application;
fig. 6 is a schematic diagram of a component structure of an AWB processing apparatus according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of a component structure of another AWB processing apparatus according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of another AWB processing apparatus according to an embodiment of the present disclosure;
fig. 9 is a schematic diagram of a structure of another AWB processing apparatus according to an embodiment of the present disclosure;
fig. 10 is a schematic diagram of a structure of another AWB processing apparatus according to an embodiment of the present disclosure;
fig. 11 is a schematic diagram of a structure of another AWB processing apparatus according to an embodiment of the present disclosure;
fig. 12 is a schematic diagram of a structure of another AWB processing apparatus according to an embodiment of the present disclosure;
fig. 13 is a schematic diagram of a specific hardware structure of an AWB processing apparatus according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
An electronic device implementing various embodiments of the present application will now be described with reference to the accompanying drawings. The electronic device may be implemented in various forms; for example, the electronic devices described in the present application may include devices with a shooting function, such as a smartphone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a digital camera, a video camera, and the like.
For example, referring to fig. 1, which shows a schematic hardware structure diagram of an electronic device for implementing various embodiments of the present application, the electronic device 100 may include: RF (Radio Frequency) unit 101, audio output unit 102, a/V (audio/video) input unit 103, sensor 104, display unit 105, user input unit 106, memory 107, processor 108, camera 109, power supply 110, and the like. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 1 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the electronic device in detail with reference to fig. 1:
the rf unit 101 may be used for receiving and transmitting signals during information transmission and reception. Specifically, the downlink information of the base station is received and then processed by the processor 110; in addition, uplink data is transmitted to the base station. Generally, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a coupler, a low noise amplifier, a duplexer, and the like. The radio unit 101 may also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access2000 (CDMA 2000), Wideband Code Division Multiple Access (WCDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), and Long Term Evolution (LTE).
The audio output unit 102 may convert audio data received by the radio frequency unit 101 or stored in the memory 107 into an audio signal and output as sound when the electronic apparatus 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. The audio output unit 102 may include a speaker, a buzzer, and the like.
The A/V input unit 103 is used to receive audio or video signals. It may include a Graphics Processing Unit (GPU) 1031 and a microphone 1032. The graphics processor 1031 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode; the processed image frames may be displayed on the display unit 105, stored in the memory 107 (or another storage medium), or transmitted via the radio frequency unit 101. The microphone 1032 receives sound (audio data) in phone-call mode, recording mode, voice-recognition mode, and the like, and processes it into audio data.
The electronic device 100 also includes at least one sensor 104, such as a light sensor, an image sensor, a motion sensor, and other sensors. Specifically, the optical sensor mainly includes a photosensor, and can adjust the brightness of the display panel 1051 according to the brightness of the ambient light; the image sensor is an important component forming the digital camera, and converts the light image on the light-sensitive surface into an electric signal in a corresponding proportional relation with the light image by utilizing the photoelectric conversion function of a photoelectric device; the motion sensor is an element for converting the change of non-electric quantity (such as speed and pressure) into the change of electric quantity, and the motion sensor can comprise a pressure sensor, a speed sensor and the like according to the difference of the converted non-electric quantity; in addition, the electronic device 100 may further include other sensors such as a fingerprint sensor, an iris sensor, a molecular sensor, a gyroscope, an infrared sensor, and the like, which are not described in detail herein.
The display unit 105 is used to display information input by a user or information provided to the user. The Display unit 105 may include a Display panel 1051, and the Display panel 1051 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 106 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 106 may include a touch panel 1061 and other input devices 1062. The touch panel 1061, also referred to as a touch screen, may collect touch operations performed by a user on or near the touch panel 1061 (e.g., operations performed by the user on or near the touch panel 1061 using any suitable object or accessory such as a finger, a stylus, etc.), and drive the corresponding connection device according to a preset program. In addition to the touch panel 1061, the user input unit 106 may include other input devices 1062. In particular, the other input devices 1062 may include, but are not limited to, one or more of a physical keyboard, a function key (such as a volume control key, a switch key, etc.), a track ball, a mouse, a joystick, etc., which are not limited to these specific devices.
Further, the touch panel 1061 can cover the display panel 1051, and when the touch panel 1061 detects a touch operation on or near the touch panel 1061, the touch panel can transmit the touch operation to the processor 108 to determine the type of the touch event, and then the processor 108 can provide a corresponding visual output on the display panel 1051 according to the type of the touch event. Although the touch panel 1061 and the display panel 1051 are shown in fig. 1 as two separate components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 1061 and the display panel 1051 may be integrated to implement the input and output functions of the electronic device, and is not limited herein.
The memory 107 may be used to store software programs as well as various data. The memory 107 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 107 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 108 is a control center of the electronic device, connects various parts of the whole electronic device by various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 107 and calling data stored in the memory 107, thereby performing overall monitoring of the electronic device. Processor 108 may include one or more processing units; preferably, the processor 108 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 108.
The camera 109 is a video input device, which is also called a computer camera, an electronic eye, etc.; the camera 109 generally has basic functions of video shooting/transmission and still image capturing, and after capturing an image through a lens, the captured image is processed and converted into a digital signal by a photosensitive component circuit and a control component in the camera, and then stored in the memory 107. The camera 109 may include a front camera, a rear camera, and the like.
The electronic device 100 may further include a power source 110 (e.g., a battery) for supplying power to the various components, and preferably, the power source 110 may be logically connected to the processor 108 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the electronic device 100 may further include a bluetooth module, a WiFi module, and the like, which are not described in detail herein.
Based on the hardware structure of the electronic device shown in fig. 1: FaceAWB is currently designed for the front camera, but when it is applied to the rear camera, the shooting scenes are relatively complex. Solving the color-cast problem of a face against a pure-color background may affect the AWB performance in some normal scenes, and a small face region in the captured image may cause a color-jump phenomenon. To solve these technical problems, the following embodiments of the present application are proposed in conjunction with the accompanying drawings.
Example one
In the embodiments of the present application, the RGB color model is an industry color standard that produces a wide range of colors by varying the three color channels Red (R), Green (G), and Blue (B) and superimposing them on one another. RGB denotes the colors of the red, green, and blue channels; the standard covers almost all colors perceivable by human vision and is one of the most widely used color systems at present.
Generally, the color an object appears to reflect depends strongly on the color temperature of the light source. Whether on a sunny day, on a cloudy day, indoors, or under fluorescent light, the human eye sees certain objects as white, because the human brain can detect and correct such color shifts. The human eye adapts by itself, but electronic devices such as digital cameras have no such built-in intelligence. To match the standard of human vision, an electronic device must imitate the human brain and adjust image colors according to the color temperature of the light source; this AWB processing is required to achieve a satisfactory color effect.
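The text never names the whole-image AWB algorithm that produces the second AWB value; as one concrete illustration of what such an algorithm can look like, the sketch below uses the classic gray-world method, with an AWB value modelled as per-channel (R, G, B) gains. Both the algorithm choice and the gain representation are assumptions for illustration, not part of the patent:

```python
# Illustrative stand-in for a whole-image AWB algorithm (NOT the patent's
# unnamed algorithm): classic gray-world, which assumes the average scene
# color should be neutral gray and derives per-channel gains from it.

def gray_world_awb(pixels):
    """Compute (R, G, B) white-balance gains for an iterable of (R, G, B)
    pixels; gains are normalized so the green channel keeps gain 1.0."""
    pixels = list(pixels)
    n = len(pixels)
    avg_r = sum(p[0] for p in pixels) / n
    avg_g = sum(p[1] for p in pixels) / n
    avg_b = sum(p[2] for p in pixels) / n
    # Scale red and blue so each channel's average matches the green average.
    return (avg_g / avg_r, 1.0, avg_g / avg_b)
```

Applying the returned gains to every pixel drives the image average toward gray; a real camera pipeline would clamp the gains and operate on sensor statistics rather than raw pixel lists.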
Referring to fig. 2, an AWB processing method provided in an embodiment of the present application is shown, where the method may include:
s201: acquiring an image to be processed containing a human face;
s202: according to the area ratio occupied by the detected object in the image to be processed, adopting a faceAWB algorithm to the face area in the image to be processed, and calculating to obtain a first AWB value;
s203: acquiring a first weight corresponding to the first AWB value and a second weight corresponding to the second AWB value according to the area ratio of the face in the image to be processed; the second AWB value is obtained by calculating all areas of the image to be processed by adopting an AWB algorithm different from the faceAWB algorithm;
s204: determining a final AWB value based on the first AWB value, the first weight, the second AWB value, and the second weight.
Based on the technical scheme shown in FIG. 2, an image to be processed containing a human face is obtained; according to the area ratio occupied by the detected object in the image to be processed, adopting a faceAWB algorithm to the face area in the image to be processed, and calculating to obtain a first AWB value; acquiring a first weight corresponding to the first AWB value and a second weight corresponding to the second AWB value according to the area ratio of the face in the image to be processed; the second AWB value is obtained by calculating all areas of the image to be processed by adopting an AWB algorithm different from the faceAWB algorithm; determining a final AWB value based on the first AWB value, the first weight, the second AWB value, and the second weight; therefore, on one hand, the influence of faceAWB on a normal scene can be reduced, and on the other hand, color jump caused by small face area occupation can be solved.
For the technical solution shown in fig. 2, in a possible implementation manner, before the acquiring the to-be-processed image including the human face, the method further includes:
carrying out face detection on the collected image to be processed;
if a face is detected in the image to be processed, the image to be processed comprises the face;
and if the face is not detected in the image to be processed, the image to be processed does not contain the face.
It should be noted that the acquired to-be-processed image may be not only an image captured by the rear camera, but also an image captured by the front camera.
Generally, before acquiring the image to be processed, the electronic device first receives a camera-on instruction and turns on the camera accordingly; it then receives a shooting instruction and captures the image to be processed, as shown in fig. 3. After the image to be processed is captured, face detection can be performed using Artificial Intelligence (AI) detection technology. If a face is detected, the image to be processed contains a face, and whether to apply the FaceAWB algorithm to the face region can subsequently be decided according to the region ratio occupied by the detected object; if no face is detected, the image contains no face and the FaceAWB algorithm need not be applied to it at all. For example, applying AI face detection to the image shown in fig. 3 detects a face, part 301, so the image contains a face, and whether to apply the FaceAWB algorithm to the face region can then be decided from the region ratio occupied by the detected object in the image to be processed.
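The region ratio used throughout the decision logic can be derived from the detected face's bounding box. The helper below, its (x, y, w, h) box format, and the use of box area as a proxy for the face region are hypothetical illustrations, not details given in the text:

```python
# Hypothetical helper: region ratio of a detected face bounding box.
# The (x, y, w, h) format and box-area proxy are assumptions for illustration.

def face_area_ratio(face_box, image_size):
    """Return the fraction of the image area covered by the face box."""
    _x, _y, w, h = face_box
    width, height = image_size
    return (w * h) / (width * height)
```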
It can be understood that, besides the face, the image to be processed also contains the detected object; here, the detected object may be a pure color region (i.e., a large area occupied by a single color of the R, G, B three colors), may also be a human face, and may also be another object, which is not specifically limited in this embodiment of the present application.
When the detected object is a pure color region, as for the technical scheme shown in fig. 2, in a possible implementation manner, the calculating a first AWB value by using a FaceAWB algorithm for a face region in the image to be processed according to a region ratio occupied by the detected object in the image to be processed specifically includes:
if the detected object is a pure color area, acquiring a first area ratio of the pure color area in the image to be processed;
comparing the first area ratio with a first preset threshold;
and if the ratio of the first area is greater than a first preset threshold value, calculating to obtain the first AWB value by adopting the faceAWB algorithm for the face area in the image to be processed.
In the foregoing implementation manner, specifically, for the technical solution shown in fig. 2, the method further includes:
and if the ratio of the first area is not greater than a first preset threshold value, performing AWB processing on the image to be processed according to the second AWB value without adopting the faceAWB algorithm for the face area in the image to be processed.
It should be noted that when the detected object is a pure-color region, it must be determined whether the image to be processed contains a large-area pure-color background. First, the first region ratio occupied by the pure-color region in the image to be processed is acquired and compared with a first preset threshold. If the first region ratio is greater than the first preset threshold, the image contains a large-area pure-color background, and the FaceAWB algorithm is applied to the face region; otherwise, the image does not contain a large-area pure-color background, and the FaceAWB algorithm need not be applied to the face region. For example, among the three colors R, G, and B, the region ratio of the pure-color region of each single color in the image to be processed is calculated first. Assuming the first preset threshold is 0.33, if the region ratio of any one of the three colors exceeds 0.33, a large-area pure-color background exists in the image. Taking the image shown in fig. 3 as an example, suppose part 302 is a green background with a first region ratio of 0.7; since a large-area pure-color background (302 in fig. 3) and a human face (301 in fig. 3) are both detected, the FaceAWB algorithm is applied to the face region (301 in fig. 3), and the first AWB value is then calculated by the FaceAWB algorithm. If, instead, the region ratio of each of the three colors is not greater than 0.33, no large-area pure-color background exists, so the FaceAWB algorithm need not be applied to the face region (301 in fig. 3); AWB processing is performed on the image using the second AWB value, calculated by an AWB algorithm different from the FaceAWB algorithm, which reduces the influence of FaceAWB on normal scenes.
When the detected object is a face, in a possible implementation manner of the technical scheme shown in fig. 2, the calculating a first AWB value by using a FaceAWB algorithm for a face area in the image to be processed according to an area ratio occupied by the detected object in the image to be processed specifically includes:
if the detected object is a face, acquiring a second area ratio of the face in the image to be processed;
comparing the second area ratio with a second preset threshold;
and if the ratio of the second area is larger than a second preset threshold value, calculating to obtain the first AWB value by adopting a faceAWB algorithm for the face area in the image to be processed.
In the foregoing implementation manner, specifically, for the technical solution shown in fig. 2, the method further includes:
and if the ratio of the second area is not greater than a second preset threshold value, performing AWB processing on the image to be processed according to the second AWB value without adopting the faceAWB algorithm for the face area in the image to be processed.
It should be noted that, when the detected object is a human face, it must be determined whether the second area ratio occupied by the face in the image to be processed meets the requirement. First, the second area ratio of the face in the image to be processed is acquired and compared with the second preset threshold. If the second area ratio is greater than the second preset threshold, the second area ratio meets the requirement, and the FaceAWB algorithm is applied to the face region of the image to be processed; if the second area ratio is not greater than the second preset threshold, the requirement is not met, and the FaceAWB algorithm need not be applied to the face region. For example, suppose the second preset threshold is 0.1. Taking the image to be processed shown in fig. 3 as an example, if the second area ratio occupied by the face (part 301 in fig. 3) is 0.3, which is greater than 0.1, the requirement is met; the FaceAWB algorithm is applied to the face region, and the first AWB value may then be calculated according to the FaceAWB algorithm. If instead the second area ratio occupied by the face is 0.05, which is not greater than 0.1, the requirement is not met: such a small face region makes face detection unstable, so the FaceAWB algorithm is not applied to the face region (part 301 in fig. 3), and AWB processing is performed with the AWB value calculated for the image to be processed by an AWB algorithm different from the FaceAWB algorithm. In this way, the color jump caused by a small face area ratio can be effectively avoided.
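The gating decision described above can be sketched as follows (a minimal Python illustration; the function name and the 0.1 default are assumptions — the patent fixes only the comparison between the second area ratio and the second preset threshold):

```python
def should_use_face_awb(face_area, image_area, threshold=0.1):
    # Second area ratio: fraction of the image occupied by the face.
    ratio = face_area / image_area
    # FaceAWB is applied only when the ratio exceeds the threshold;
    # a tiny face makes detection unstable, so FaceAWB is skipped.
    return ratio > threshold
```

With the fig. 3 numbers, a face covering 0.3 of the frame passes the gate, while one covering 0.05 does not.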
For the technical solution shown in fig. 2, in a possible implementation manner, before the obtaining, according to the area ratio occupied by the face in the image to be processed, a first weight corresponding to the first AWB value and a second weight corresponding to the second AWB value, the method further includes:
and calculating to obtain the second AWB value by adopting an AWB algorithm different from the faceAWB algorithm for all areas of the image to be processed.
It should be noted that the second AWB value is obtained based on all areas of the image to be processed, and the employed AWB algorithm is different from the FaceAWB algorithm.
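The patent does not specify which AWB algorithm is used over all areas of the image; one common global algorithm that differs from FaceAWB is the gray-world algorithm, sketched below under that assumption:

```python
def gray_world_awb(pixels):
    # pixels: list of (R, G, B) tuples covering ALL regions of the image.
    n = float(len(pixels))
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    # Gray-world assumption: the scene averages to achromatic gray, so
    # each channel's gain pulls its mean toward the common gray level.
    return tuple(gray / m for m in means)
```

The returned triple of per-channel gains would play the role of the second AWB value in the blending step that follows.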
For the technical solution shown in fig. 2, in a possible implementation manner, the determining a final AWB value based on the first AWB value, the first weight, the second AWB value, and the second weight specifically includes:
and performing weighted average calculation on the first AWB value and the second AWB value based on the first weight value and the second weight value to determine the final AWB value.
It should be noted that after the first AWB value and the second AWB value are obtained, weight distribution may be performed according to a region ratio occupied by the face in the image to be processed; and the larger the area ratio of the face in the image to be processed is, the larger the weight value distributed by the first AWB value is.
In the embodiment of the application, for different area ratios occupied by the human face in the image to be processed, an AWB weight distribution table for the first AWB value and the second AWB value can be established in advance. In this way, after the area ratio of the face in the image to be processed is obtained, a first weight corresponding to a first AWB value and a second weight corresponding to a second AWB value of the area ratio can be inquired from a pre-established AWB weight distribution table; after the first weight and the second weight are obtained, weighted average calculation is carried out on the first AWB value and the second AWB value, and therefore a final AWB value can be obtained.
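The table lookup and the weighted average can be sketched as follows (the table values are hypothetical; the patent requires only that a larger face area ratio yields a larger first weight and that the two weights sum to 1):

```python
# Hypothetical AWB weight distribution table: each entry maps a lower
# bound on the face area ratio to the first weight; the second weight
# is 1 - first weight so the two always sum to 1.
WEIGHT_TABLE = [(0.0, 0.2), (0.2, 0.5), (0.5, 0.8)]

def lookup_weights(face_ratio):
    first = WEIGHT_TABLE[0][1]
    for lower, w in WEIGHT_TABLE:
        if face_ratio >= lower:
            first = w  # larger face ratio -> larger first weight
    return first, 1.0 - first

def blend_awb(first_awb, second_awb, w1, w2):
    # Weighted average of the two AWB gain triples -> final AWB value.
    return tuple(w1 * a + w2 * b for a, b in zip(first_awb, second_awb))
```

For a face ratio of 0.3 this table assigns equal weights, so the final AWB value is the midpoint of the two candidate values.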
It should be further noted that, in the embodiment of the present application, both the first weight and the second weight are real numbers greater than or equal to 0 and less than or equal to 1, and the sum of the first weight and the second weight is equal to 1.
For the technical solution shown in fig. 2, in a possible implementation manner, after the determining a final AWB value based on the first AWB value, the first weight, the second AWB value, and the second weight, the method further includes:
and carrying out AWB processing on the image to be processed according to the final AWB value.
It should be noted that after the final AWB value is obtained, AWB processing may be performed on the image to be processed according to the final AWB value, and the processed image is the current captured image, so that the current captured image may achieve a satisfactory color effect.
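Applying the final AWB value can be sketched as a per-channel gain with clipping (an illustration only; the patent does not specify the representation of the AWB value, so multiplicative per-channel gains are assumed here):

```python
def apply_awb(pixels, gains, max_val=255):
    # Scale each channel by its AWB gain and clip to the valid range,
    # producing the color-corrected current captured image.
    return [tuple(min(int(round(v * g)), max_val) for v, g in zip(p, gains))
            for p in pixels]
```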
It can be understood that, when the image to be processed does not contain a human face, in a possible implementation manner for the technical solution shown in fig. 2, the method further includes:
and when the image to be processed does not contain the human face, performing AWB processing on the image to be processed according to the second AWB value.
It should be noted that, when the image to be processed does not include a human face, an AWB algorithm different from the FaceAWB algorithm may be adopted for the image to be processed, that is, AWB processing is performed on the image to be processed according to the second AWB value, and the processed image is the current captured image, so that the current captured image may achieve a satisfactory color effect.
This embodiment provides an AWB processing method: an image to be processed containing a human face is acquired; according to the area ratio occupied by the detected object in the image to be processed, a first AWB value is calculated by applying a facial automatic white balance (FaceAWB) algorithm to the face region of the image to be processed; a first weight corresponding to the first AWB value and a second weight corresponding to the second AWB value are acquired according to the area ratio occupied by the face in the image to be processed, the second AWB value being calculated over all regions of the image to be processed with an AWB algorithm different from the FaceAWB algorithm; and a final AWB value is determined based on the first AWB value, the first weight, the second AWB value, and the second weight. In this way, when FaceAWB is applied to a shooting scene of a rear camera, the influence of FaceAWB on normal scenes can be reduced on the one hand, and the color jump caused by a small face area ratio can be resolved on the other.
Example two
Based on the same inventive concept as the foregoing embodiment, referring to fig. 4, which shows a detailed flow of an AWB processing method provided in an embodiment of the present application, in combination with the hardware structure example of the electronic device 100 shown in fig. 1 and taking a pure-color region as the example of the detected object, the detailed flow includes:
s401: receiving a camera opening instruction, wherein the camera opening instruction is used for opening a camera;
s402: receiving a shooting instruction, and acquiring an image to be processed according to the shooting instruction;
s403: calculating all areas of the image to be processed by adopting an AWB algorithm different from a faceAWB algorithm to obtain a second AWB value;
s404: carrying out face detection on the image to be processed;
s405: if a face is detected in the image to be processed, the image to be processed comprises the face;
s406: if the face is not detected in the image to be processed, the image to be processed does not contain the face;
For example, taking the electronic device 100 shown in fig. 1 as an example, when the electronic device 100 needs to shoot a scene, a camera start instruction input by the user is first received to turn on the camera 109; a shooting instruction is then input through the user input unit 106 (such as a physical key or a shooting touch button on the touch panel 1061), and the image to be processed is acquired according to the shooting instruction and displayed on the display panel 1051, as shown in fig. 3. Face detection is performed using an AI detection technique. If a face is detected in the image to be processed, for example part 301 in fig. 3, the image to be processed contains a face, and whether to apply the FaceAWB algorithm to the face region may be determined according to the area ratio occupied by the detected object in the image to be processed. If no face is detected, the image to be processed does not contain a face, the FaceAWB algorithm need not be applied to the image to be processed, and the second AWB value is calculated over all regions of the image to be processed with an AWB algorithm different from the FaceAWB algorithm.
S407: when the image to be processed contains a human face, calculating a first area ratio of a pure color area in the image to be processed;
s408: comparing the first area ratio with a first preset threshold;
s409: if the first area ratio is larger than a first preset threshold value, calculating to obtain a first AWB value by adopting the faceAWB algorithm for a face area in the image to be processed;
s410: if the first area ratio is not larger than a first preset threshold value, performing AWB processing on the image to be processed according to the second AWB value without adopting the faceAWB algorithm on the face area in the image to be processed;
after step S405, step S407 is executed; after step S408, if the first area ratio is greater than a first preset threshold, step S409 is executed; if the first area ratio is not greater than the first preset threshold, step S410 is executed.
For example, taking the electronic device 100 shown in fig. 1 as an example, in combination with the image to be processed shown in fig. 3, when the image to be processed contains a human face, the processor 108 needs to determine whether the image to be processed contains a large-area solid-color background. Assume the first preset threshold is 0.33. If part 302 in fig. 3 is a green background and the first area ratio occupied by part 302 is 0.7, then since the first area ratio is greater than 0.33, a large-area solid-color background exists in the image to be processed; because both a large-area solid-color background (part 302 in fig. 3) and a human face (part 301 in fig. 3) are detected in the image to be processed, the processor 108 applies the FaceAWB algorithm to the face region (part 301 in fig. 3), and the first AWB value can then be calculated according to the FaceAWB algorithm. If the first area ratio occupied by part 302 in fig. 3 is 0.25, then since the first area ratio is not greater than 0.33, no large-area solid-color background exists in the image to be processed; in this case the processor 108 does not need to apply the FaceAWB algorithm to the face region (part 301 in fig. 3), and may perform AWB processing on the image to be processed according to the second AWB value.
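The patent does not describe how the first area ratio of the solid-color region is measured; one simple estimate, assumed here purely for illustration, quantizes pixel colors into coarse buckets and takes the share of the dominant bucket:

```python
from collections import Counter

def solid_color_ratio(pixels, step=32):
    # Quantize each (R, G, B) pixel into a coarse color bucket, then
    # report the share of the most common bucket as the estimated
    # first area ratio of the dominant near-uniform color.
    buckets = Counter(tuple(v // step for v in p) for p in pixels)
    return buckets.most_common(1)[0][1] / len(pixels)
```

Comparing this estimate against the first preset threshold (0.33 in the example) would then decide whether a large-area solid-color background is present.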
S411: acquiring a first weight corresponding to the first AWB value and a second weight corresponding to the second AWB value according to the area ratio of the face in the image to be processed;
s412: based on the first weight and the second weight, performing weighted average calculation on the first AWB value and the second AWB value to determine a final AWB value;
s413: performing AWB processing on the image to be processed according to the final AWB value;
s414: and when the image to be processed does not contain the human face, performing AWB processing on the image to be processed according to the second AWB value.
Note that, after step S409, step S411 is executed; and after step S406, step S414 is performed.
For example, taking the electronic device 100 shown in fig. 1 as an example and continuing the above example, after a large-area solid-color background and a human face are detected in the image to be processed shown in fig. 3, the processor 108 may query, based on the area ratio occupied by the face in the image to be processed, a pre-established AWB weight distribution table for the first weight corresponding to the first AWB value and the second weight corresponding to the second AWB value at that area ratio. The processor 108 then performs a weighted average of the first AWB value and the second AWB value to obtain the final AWB value. After the final AWB value is obtained, the processor 108 may perform AWB processing on the image to be processed according to the final AWB value, and the processed image is the current captured image. For a shooting scene in which the image to be processed does not contain a human face, the processor 108 may perform AWB processing on the image to be processed according to the second AWB value. In this way, the influence of FaceAWB on normal scenes can be reduced and the color jump caused by a small face area ratio can be resolved, so that the current captured image achieves a satisfactory color effect.
Referring to fig. 5, which shows a detailed flow of another AWB processing method provided in an embodiment of the present application, in combination with the hardware structure example of the electronic device 100 shown in fig. 1 and taking a human face as the example of the detected object, the detailed flow includes:
s501: receiving a camera opening instruction, wherein the camera opening instruction is used for opening a camera;
s502: receiving a shooting instruction, and acquiring an image to be processed according to the shooting instruction;
s503: calculating all areas of the image to be processed by adopting an AWB algorithm different from a faceAWB algorithm to obtain a second AWB value;
s504: carrying out face detection on the image to be processed;
s505: if a face is detected in the image to be processed, the image to be processed comprises the face;
s506: if the face is not detected in the image to be processed, the image to be processed does not contain the face;
s507: when the image to be processed contains a human face, calculating a second area ratio of the human face in the image to be processed;
s508: comparing the second area ratio with a second preset threshold;
s509: if the ratio of the second area is larger than a second preset threshold value, calculating the first AWB value by adopting the faceAWB algorithm for the face area in the image to be processed;
s510: if the ratio of the second area is not larger than a second preset threshold value, performing AWB processing on the image to be processed according to the second AWB value without adopting the faceAWB algorithm on the face area in the image to be processed;
s511: acquiring a first weight corresponding to the first AWB value and a second weight corresponding to the second AWB value according to the second area ratio;
s512: based on the first weight and the second weight, performing weighted average calculation on the first AWB value and the second AWB value to determine a final AWB value;
s513: performing AWB processing on the image to be processed according to the final AWB value;
s514: and when the image to be processed does not contain the human face, performing AWB processing on the image to be processed according to the second AWB value.
After step S505, step S507 is executed; after step S506, step S514 is executed. After step S508, if the second area ratio is greater than the second preset threshold, step S509 is executed; if the second area ratio is not greater than the second preset threshold, step S510 is executed.
For example, taking the electronic device 100 shown in fig. 1 as an example and continuing the above example, after the image to be processed shown in fig. 3 is acquired, the processor 108 performs face detection using an AI detection technique. If no face is detected in the image to be processed, the image to be processed does not contain a face, the FaceAWB algorithm need not be applied to the image to be processed, and the processor 108 performs AWB processing on the image to be processed according to the second AWB value; the processed image is the current captured image. If a face is detected, for example part 301 in fig. 3, the image to be processed contains a face, and the processor 108 determines whether the second area ratio occupied by the face in the image to be processed meets the requirement. Assume the second preset threshold is 0.1. If the second area ratio occupied by part 301 in fig. 3 is 0.3, which is greater than 0.1, the requirement is met; the FaceAWB algorithm is applied to the face region (part 301 in fig. 3), and the first AWB value can then be calculated according to the FaceAWB algorithm. If the second area ratio occupied by part 301 in fig. 3 is 0.05, which is not greater than 0.1, the requirement is not met, the FaceAWB algorithm is not applied to the face region (part 301 in fig. 3), and the image to be processed may be subjected to AWB processing according to the second AWB value. In the image to be processed shown in fig. 3, the second area ratio occupied by the face meets the requirement, so the processor 108 performs a weighted average of the first AWB value and the second AWB value using the acquired first weight corresponding to the first AWB value and second weight corresponding to the second AWB value, obtaining the final AWB value; the processor 108 then performs AWB processing on the image to be processed according to the final AWB value, and the processed image is the current captured image. In this way, the influence of FaceAWB on normal scenes can be reduced and the color jump caused by a small face area ratio can be resolved, so that the current captured image achieves a satisfactory color effect.
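The fig. 5 flow can be condensed into a single sketch (the mapping from face area ratio to first weight used here is an assumption; the patent requires only that a larger ratio yields a larger first weight):

```python
def process_awb(image_area, face_area, face_awb, global_awb,
                second_threshold=0.1):
    if face_area == 0:
        # S514: no face detected -> use the second AWB value directly.
        return global_awb
    ratio = face_area / image_area
    if ratio <= second_threshold:
        # S510: face too small -> skip FaceAWB, use the second AWB value.
        return global_awb
    # S511-S512: assumed monotone mapping from ratio to first weight,
    # then a weighted average of the two AWB gain triples.
    w1 = min(ratio, 1.0)
    w2 = 1.0 - w1
    return tuple(w1 * a + w2 * b for a, b in zip(face_awb, global_awb))
```

With the fig. 3 example (ratio 0.3) this blends the two AWB values; with ratio 0.05 it falls back to the second AWB value, matching steps S510 and S514.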
The above examples elaborate the specific implementation of this embodiment in detail. It can be seen that, with the technical solution of this embodiment, when FaceAWB is applied to a shooting scene of a rear camera, the influence of FaceAWB on normal scenes can be reduced on the one hand, and the color jump caused by a small face area ratio can be resolved on the other.
EXAMPLE III
Based on the same inventive concept of the foregoing embodiment, referring to fig. 6, it shows the composition of an AWB processing apparatus 60 provided in the embodiment of the present application, and the AWB processing apparatus 60 may include: a first acquisition section 601, a first calculation section 602, a second acquisition section 603, and a determination section 604; wherein,
the first acquisition part 601 is configured to acquire an image to be processed including a human face;
the first calculating part 602 is configured to calculate a first AWB value by adopting a FaceAWB algorithm for a face region in the image to be processed according to a region ratio occupied by the detected object in the image to be processed;
the second obtaining part 603 is configured to obtain a first weight corresponding to the first AWB value and a second weight corresponding to the second AWB value according to a region ratio occupied by the face in the image to be processed; the second AWB value is obtained by calculating all areas of the image to be processed by adopting an AWB algorithm different from the faceAWB algorithm;
the determining part 604 is configured to determine a final AWB value based on the first AWB value, the first weight, the second AWB value, and the second weight.
In the above scheme, referring to fig. 7, the AWB processing apparatus 60 further includes a detecting section 605 configured to:
carrying out face detection on the collected image to be processed;
if a face is detected in the image to be processed, the image to be processed comprises the face;
and if the face is not detected in the image to be processed, the image to be processed does not contain the face.
In the above scheme, the first calculating part 602 is specifically configured to:
if the detected object is a pure color area, acquiring a first area ratio of the pure color area in the image to be processed;
comparing the first area ratio with a first preset threshold;
and if the ratio of the first area is greater than a first preset threshold value, calculating to obtain the first AWB value by adopting the faceAWB algorithm for the face area in the image to be processed.
In the above scheme, referring to fig. 8, the AWB processing apparatus 60 further includes a first processing portion 606 configured to:
and if the ratio of the first area is not greater than a first preset threshold value, performing AWB processing on the image to be processed according to the second AWB value without adopting the faceAWB algorithm for the face area in the image to be processed.
In the above scheme, the first calculating part 602 is specifically configured to:
if the detected object is a face, acquiring a second area ratio of the face in the image to be processed;
comparing the second area ratio with a second preset threshold;
and if the ratio of the second area is larger than a second preset threshold value, calculating to obtain the first AWB value by adopting a faceAWB algorithm for the face area in the image to be processed.
In the above scheme, referring to fig. 9, the AWB processing apparatus 60 further includes a second processing portion 607 configured to:
and if the ratio of the second area is not greater than a second preset threshold value, performing AWB processing on the image to be processed according to the second AWB value without adopting the faceAWB algorithm for the face area in the image to be processed.
In the above scheme, referring to fig. 10, the AWB processing apparatus 60 further includes a second calculation portion 608 configured to:
and calculating to obtain the second AWB value by adopting an AWB algorithm different from the faceAWB algorithm for all areas of the image to be processed.
In the above scheme, the determining part 604 is specifically configured to:
and performing weighted average calculation on the first AWB value and the second AWB value based on the first weight value and the second weight value to determine the final AWB value.
In the above scheme, referring to fig. 11, the AWB processing apparatus 60 further includes a third processing section 609 configured to:
and carrying out AWB processing on the image to be processed according to the final AWB value.
In the above scheme, referring to fig. 12, the AWB processing apparatus 60 further includes a fourth processing portion 610 configured to:
and when the image to be processed does not contain the human face, performing AWB processing on the image to be processed according to the second AWB value.
It is understood that, in this embodiment, a "part" may be part of a circuit, part of a processor, part of a program or software, and so on; it may also be a unit, and it may be modular or non-modular.
In addition, each component in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit. The integrated unit can be realized in a form of hardware or a form of a software functional module.
Based on this understanding, the technical solution of the present embodiment, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method of the present embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Accordingly, the present embodiments provide a computer storage medium storing an AWB processing program that, when executed by at least one processor, implements the steps of the method of AWB processing described in one of the above embodiments.
Based on the above-mentioned composition of the AWB processing apparatus 60 and the computer storage medium, referring to fig. 13, it shows a specific hardware structure of the AWB processing apparatus 60 provided in the embodiment of the present application, which may include: a communication bus 1301, a memory 1302, and a processor 1303; the various components are coupled together by a communication bus 1301. It is understood that the communication bus 1301 includes a power bus, a control bus, and a status signal bus, in addition to a data bus. But for clarity of illustration the various buses are labeled in figure 13 as communication bus 1301. The communication bus 1301 is used for realizing connection communication between the memory and the processor;
a memory 1302 for storing a computer program capable of running on the processor 1303;
a processor 1303, configured to execute, when running the computer program:
acquiring an image to be processed containing a human face;
according to the area ratio occupied by the detected object in the image to be processed, adopting a faceAWB algorithm to the face area in the image to be processed, and calculating to obtain a first AWB value;
acquiring a first weight corresponding to the first AWB value and a second weight corresponding to the second AWB value according to the area ratio of the face in the image to be processed; the second AWB value is obtained by calculating all areas of the image to be processed by adopting an AWB algorithm different from the faceAWB algorithm;
determining a final AWB value based on the first AWB value, the first weight, the second AWB value, and the second weight.
It will be appreciated that the memory 1302 in embodiments of the present application may be volatile memory, non-volatile memory, or both. The non-volatile memory may be Read-Only Memory (ROM), Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), or flash memory. The volatile memory may be Random Access Memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 1302 of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The processor 1303 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 1303 or by instructions in the form of software. The processor 1303 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, and so on. The steps of the method disclosed in connection with the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as random access memory, flash memory, read-only memory, programmable or electrically erasable programmable memory, or a register. The storage medium is located in the memory 1302, and the processor 1303 reads the information in the memory 1302 and completes the steps of the above method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the Processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, as another embodiment, the processor 1303 is further configured to execute the steps of the method of AWB processing according to the first embodiment when the computer program is executed.
Optionally, an embodiment of the present application further provides an electronic device, where the electronic device at least includes a camera and the AWB processing apparatus 60 as described in the foregoing embodiment.
It should be noted that: the technical solutions described in the embodiments of the present application can be arbitrarily combined without conflict.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. An automatic white balance processing method, characterized in that the method comprises:
acquiring an image to be processed containing a human face;
according to the area ratio of the detected object in the image to be processed, adopting a facial automatic white balance faceAWB algorithm to the face area in the image to be processed, and calculating to obtain a first automatic white balance AWB value;
acquiring a first weight corresponding to the first AWB value and a second weight corresponding to the second AWB value according to the area ratio of the face in the image to be processed; the second AWB value is obtained by calculating all areas of the image to be processed by adopting an AWB algorithm different from the faceAWB algorithm;
determining a final AWB value based on the first AWB value, the first weight, the second AWB value, and the second weight;
the calculating of the first AWB value by adopting a facial automatic white balance FaceAWB algorithm for the face area in the image to be processed according to the area ratio occupied by the detected object in the image to be processed specifically includes:
if the detected object is a pure color area, acquiring a first area ratio of the pure color area in the image to be processed;
comparing the first area ratio with a first preset threshold;
and if the ratio of the first area is greater than a first preset threshold value, calculating to obtain the first AWB value by adopting the faceAWB algorithm for the face area in the image to be processed.
2. The method of claim 1, wherein before the acquiring of the image to be processed containing the human face, the method further comprises:
performing face detection on the captured image to be processed;
if a face is detected in the image to be processed, determining that the image to be processed contains the face; and
if no face is detected in the image to be processed, determining that the image to be processed does not contain a face.
3. The method of claim 1, further comprising:
if the first area ratio is not greater than the first preset threshold, performing AWB processing on the image to be processed according to the second AWB value, without applying the FaceAWB algorithm to the face region in the image to be processed.
4. The method of claim 1, further comprising:
if the detected object is a face, obtaining a second area ratio of the face in the image to be processed;
comparing the second area ratio with a second preset threshold; and
if the second area ratio is greater than the second preset threshold, applying the FaceAWB algorithm to the face region in the image to be processed to calculate the first AWB value.
5. The method of claim 4, further comprising:
if the second area ratio is not greater than the second preset threshold, performing AWB processing on the image to be processed according to the second AWB value, without applying the FaceAWB algorithm to the face region in the image to be processed.
6. The method of claim 1, wherein before the obtaining of the first weight corresponding to the first AWB value and the second weight corresponding to the second AWB value according to the area ratio occupied by the face in the image to be processed, the method further comprises:
calculating the second AWB value over the entire image to be processed using an AWB algorithm different from the FaceAWB algorithm.
7. The method according to claim 1, wherein the determining a final AWB value based on the first AWB value, the first weight, the second AWB value, and the second weight comprises:
performing a weighted average of the first AWB value and the second AWB value based on the first weight and the second weight to determine the final AWB value.
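The weighted average of claim 7 can be sketched as below, treating each AWB value as an (R, G, B) gain triple. The triple representation is an assumption for illustration only; the patent does not fix the form of an AWB value:

```python
def final_awb_value(first_awb, first_weight, second_awb, second_weight):
    """Weighted average of the two AWB values, channel by channel."""
    total = first_weight + second_weight
    return tuple((first_weight * a + second_weight * b) / total
                 for a, b in zip(first_awb, second_awb))
```

For example, with a first weight of 0.75 the FaceAWB result dominates the final gains, and with equal weights the two values are averaged symmetrically.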
8. The method of claim 1, wherein after the determining a final AWB value based on the first AWB value, the first weight, the second AWB value, and the second weight, the method further comprises:
performing AWB processing on the image to be processed according to the final AWB value.
9. The method of claim 2, further comprising:
when the image to be processed does not contain a human face, performing AWB processing on the image to be processed according to the second AWB value.
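Claims 1, 7, and 9 together describe a selection pipeline: fall back to the whole-image AWB value when no face is present, otherwise blend the two AWB values by their weights. A hedged end-to-end sketch follows; the linear mapping from face-area ratio to weight is a hypothetical choice, as the patent leaves the weight rule unspecified:

```python
def process_awb(face_detected: bool, face_area_ratio: float,
                first_awb, second_awb):
    """Select the final AWB value.

    With no face, fall back to the whole-image (second) AWB value.
    Otherwise weight the FaceAWB result by the face-area ratio --
    an assumed mapping, used here only for illustration.
    """
    if not face_detected:
        return second_awb
    first_weight = face_area_ratio          # assumed weight rule
    second_weight = 1.0 - face_area_ratio
    return tuple(first_weight * a + second_weight * b
                 for a, b in zip(first_awb, second_awb))
```

Under this sketch, a larger face region pulls the final gains toward the FaceAWB result, which matches the intent of weighting by area ratio.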
10. An AWB processing apparatus, comprising a communication bus, a memory, and a processor, wherein:
the communication bus is configured to enable connection and communication between the memory and the processor;
the memory is configured to store a computer program operable on the processor; and
the processor is configured, when executing the computer program, to perform the steps of the method of any one of claims 1 to 9.
11. A computer storage medium storing an AWB processing program that, when executed by at least one processor, implements the steps of the method of any one of claims 1 to 9.
12. An electronic device comprising at least a camera and the AWB processing apparatus of claim 10.
CN201811006222.9A 2018-08-30 2018-08-30 Automatic white balance processing method, device and computer storage medium Active CN109151428B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811006222.9A CN109151428B (en) 2018-08-30 2018-08-30 Automatic white balance processing method, device and computer storage medium


Publications (2)

Publication Number Publication Date
CN109151428A CN109151428A (en) 2019-01-04
CN109151428B true CN109151428B (en) 2020-04-24

Family

ID=64829573

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811006222.9A Active CN109151428B (en) 2018-08-30 2018-08-30 Automatic white balance processing method, device and computer storage medium

Country Status (1)

Country Link
CN (1) CN109151428B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111583127B (en) * 2020-04-03 2023-08-15 浙江大华技术股份有限公司 Face skin color correction method, device, computer equipment and readable storage medium
CN112822476A (en) * 2021-02-26 2021-05-18 广东以诺通讯有限公司 Automatic white balance method, system and terminal for color cast of large number of monochrome scenes
CN115118947B (en) * 2021-03-23 2023-11-24 北京小米移动软件有限公司 Image processing method and device, electronic equipment and storage medium
CN114125302A (en) * 2021-11-26 2022-03-01 维沃移动通信有限公司 Image adjusting method and device
CN114598852B (en) * 2022-03-07 2023-06-09 杭州国芯科技股份有限公司 Optimization method for white balance of face area of camera

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
JP5113514B2 (en) * 2007-12-27 2013-01-09 キヤノン株式会社 White balance control device and white balance control method
JP5398156B2 (en) * 2008-03-04 2014-01-29 キヤノン株式会社 WHITE BALANCE CONTROL DEVICE, ITS CONTROL METHOD, AND IMAGING DEVICE
CN107343188A (en) * 2017-06-16 2017-11-10 广东欧珀移动通信有限公司 image processing method, device and terminal
CN107343189B (en) * 2017-07-10 2019-06-21 Oppo广东移动通信有限公司 White balancing treatment method and device
CN107454345B (en) * 2017-07-12 2019-10-22 Oppo广东移动通信有限公司 White balancing treatment method, device and the terminal device of image
CN107959843B (en) * 2017-12-25 2019-07-05 Oppo广东移动通信有限公司 Image processing method and device, computer readable storage medium and computer equipment


Similar Documents

Publication Publication Date Title
CN109151428B (en) Automatic white balance processing method, device and computer storage medium
US11800221B2 (en) Time-lapse shooting method and device
EP4221164A1 (en) Display method for electronic device with flexible display and electronic device
CN110035141B (en) Shooting method and equipment
US20220174143A1 (en) Message notification method and electronic device
CN113395382B (en) Method for data interaction between devices and related devices
US20220245778A1 (en) Image bloom processing method and apparatus, and storage medium
CN113542580B (en) Method and device for removing light spots of glasses and electronic equipment
WO2020060598A1 (en) Automated camera mode selection
CN111552451A (en) Display control method and device, computer readable medium and terminal equipment
CN114880251B (en) Memory cell access method, memory cell access device and terminal equipment
CN107438162B (en) Shooting parameter adjusting method and device
CN115150542B (en) Video anti-shake method and related equipment
US10319342B2 (en) Method and device for adjusting display brightness
CN112037157A (en) Data processing method and device, computer readable medium and electronic equipment
CN116055859B (en) Image processing method and electronic device
CN113610943B (en) Icon rounded angle processing method and device
CN114466238B (en) Frame demultiplexing method, electronic device and storage medium
CN114120987B (en) Voice wake-up method, electronic equipment and chip system
CN113923351A (en) Method, apparatus, storage medium, and program product for exiting multi-channel video shooting
CN115484383A (en) Shooting method and related device
CN114812381A (en) Electronic equipment positioning method and electronic equipment
CN115619628B (en) Image processing method and terminal equipment
WO2023207715A1 (en) Screen-on control method, electronic device, and computer-readable storage medium
CN115150543B (en) Shooting method, shooting device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant