CN115170680A - Image processing method, device and storage medium - Google Patents


Info

  • Publication number: CN115170680A
  • Application number: CN202110367903.3A
  • Authority: CN (China)
  • Inventor: 周千琪
  • Current assignee: Beijing Xiaomi Mobile Software Co Ltd
  • Original assignee: Beijing Xiaomi Mobile Software Co Ltd
  • Legal status: Pending
  • Other languages: Chinese (zh)
  • Prior art keywords: brightness, image, value, processed, color

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/90: Determination of colour characteristics
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10024: Color image
    • G06T 2207/10141: Special mode during image acquisition
    • G06T 2207/10152: Varying illumination

Landscapes

  • Engineering & Computer Science
  • Computer Vision & Pattern Recognition
  • Physics & Mathematics
  • General Physics & Mathematics
  • Theoretical Computer Science
  • Image Processing

Abstract

The present disclosure relates to an image processing method, apparatus, and storage medium. The image processing method includes: acquiring an image to be processed and determining a color channel pixel value of the image to be processed; determining brightness values of the color channel pixel value in at least two color spaces and fitting the brightness values in the at least two color spaces to obtain a fitted brightness value; and dynamically adjusting the brightness range of the image to be processed based on the fitted brightness value. By determining the fitted brightness value of the image to be processed through at least two color spaces, the method reduces the loss of information and detail that occurs when color channel pixel values are converted into brightness channel pixel values, and thereby improves the accuracy of dynamically adjusting the brightness range of the image to be processed.

Description

Image processing method, device and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, and a storage medium.
Background
In the real world, the range of luminance is extremely wide. For example, luminance can reach roughly 10^5 candelas per square meter (cd/m^2) at its brightest and roughly 10^-3 cd/m^2 at its darkest. In scenes of different brightness, the range of brightness perceived by the human eye also differs, spanning a ratio of about 10^5 at most. For photography, how to better reproduce the scene as seen by the human eye is therefore an important research question.
In the related art, the dynamic range of a real scene is nonlinearly compressed mainly through a tone mapping algorithm, so that the real scene can be presented to viewers as an image. However, limited by the technology and cost of display devices, the displayed gray scale range is only 0-255, which can cause significant loss of information and detail across large areas of the image.
Disclosure of Invention
In order to overcome the problems in the related art, an object of the present disclosure is to provide an image processing method, an image processing apparatus, and a storage medium, which can reduce information or detail loss in an image to be processed in a process of converting a color channel pixel value into a luminance channel pixel value, thereby improving accuracy of dynamically adjusting a luminance range of the image to be processed.
To this end, the technical solution adopted by the present disclosure is as follows:
According to a first aspect of embodiments of the present disclosure, there is provided an image processing method, including: acquiring an image to be processed and determining a color channel pixel value of the image to be processed; determining brightness values of the color channel pixel value in at least two color spaces and fitting the brightness values in the at least two color spaces to obtain a fitted brightness value; and dynamically adjusting the brightness range of the image to be processed based on the fitted brightness value.
In an embodiment, before determining the brightness values of the color channel pixel value in at least two color spaces, the image processing method further includes: determining the at least two color spaces based on the image parameters of the image to be processed that need to be processed, where the image parameters include one or more of a brightness enhancement threshold, saturation, and regional brightness.
In another embodiment, in response to the image parameters including the brightness enhancement threshold and/or the saturation, the at least two color spaces include a YUV color space and an HSV color space. Determining the brightness values of the color channel pixel value in the at least two color spaces and fitting them to obtain a fitted brightness value includes: determining a first brightness value of the color channel pixel value in the YUV color space and a second brightness value of the color channel pixel value in the HSV color space, and performing a weighted summation of the first brightness value and the second brightness value to obtain a first fitted brightness value. Dynamically adjusting the brightness range of the image to be processed based on the fitted brightness value includes: dynamically adjusting the brightness range of the image to be processed based on the first fitted brightness value.
In yet another embodiment, in response to the image parameters including the brightness enhancement threshold, the saturation, and/or the regional brightness, the at least two color spaces are determined to include a YUV color space, an HSV color space, and an HOK color space. Determining the brightness values of the color channel pixel value in the at least two color spaces and fitting them to obtain a fitted brightness value includes: determining a first brightness value of the color channel pixel value in the YUV color space, a second brightness value in the HSV color space, and a third brightness value in the HOK color space, and performing a weighted summation of the first, second, and third brightness values to obtain a second fitted brightness value. Dynamically adjusting the brightness range of the image to be processed based on the fitted brightness value includes: dynamically adjusting the brightness range of the image to be processed based on the second fitted brightness value.
In yet another embodiment, before determining the brightness values of the color channel pixel value in at least two color spaces, the image processing method further includes: if the image to be processed has not undergone brightness correction and its brightness is lower than a brightness threshold, performing brightness correction on the image to be processed according to the color space distribution of the image to be processed.
According to a second aspect of embodiments of the present disclosure, there is provided an image processing apparatus, including: an acquisition unit configured to acquire an image to be processed and determine a color channel pixel value of the image to be processed; a fitting unit configured to determine brightness values of the color channel pixel value in at least two color spaces and fit the brightness values in the at least two color spaces to obtain a fitted brightness value; and an adjusting unit configured to dynamically adjust the brightness range of the image to be processed based on the fitted brightness value.
In one embodiment, the image processing apparatus further includes: a determining unit configured to determine the at least two color spaces based on the image parameters of the image to be processed that need to be processed, where the image parameters include one or more of a brightness enhancement threshold, saturation, and regional brightness.
In another embodiment, in response to the image parameters including the brightness enhancement threshold and/or the saturation, the at least two color spaces include a YUV color space and an HSV color space. The fitting unit determines the brightness values of the color channel pixel value in the at least two color spaces and fits them to obtain a fitted brightness value as follows: determining a first brightness value of the color channel pixel value in the YUV color space and a second brightness value in the HSV color space, and performing a weighted summation of the first brightness value and the second brightness value to obtain a first fitted brightness value. The adjusting unit dynamically adjusts the brightness range of the image to be processed based on the fitted brightness value as follows: dynamically adjusting the brightness range of the image to be processed based on the first fitted brightness value.
In yet another embodiment, in response to the image parameters including the brightness enhancement threshold, the saturation, and/or the regional brightness, the at least two color spaces are determined to include a YUV color space, an HSV color space, and an HOK color space. The fitting unit determines the brightness values of the color channel pixel value in the at least two color spaces and fits them to obtain a fitted brightness value as follows: determining a first brightness value of the color channel pixel value in the YUV color space, a second brightness value in the HSV color space, and a third brightness value in the HOK color space, and performing a weighted summation of the first, second, and third brightness values to obtain a second fitted brightness value. The adjusting unit dynamically adjusts the brightness range of the image to be processed based on the fitted brightness value as follows: dynamically adjusting the brightness range of the image to be processed based on the second fitted brightness value.
In still another embodiment, the image processing apparatus further includes: a correcting unit configured to, if the image to be processed has not undergone brightness correction and its brightness is lower than a brightness threshold, perform brightness correction on the image to be processed according to the color space distribution of the image to be processed.
According to a third aspect of embodiments of the present disclosure, there is provided an image processing apparatus, including: a memory configured to store instructions; and a processor configured to call the instructions stored in the memory to execute any one of the image processing methods described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium storing instructions which, when executed by a processor, cause the processor to perform any one of the image processing methods described above.
The technical solution provided by the embodiments of the present disclosure can have the following beneficial effects: by determining the fitted brightness value corresponding to the image to be processed through at least two color spaces, the loss of information and detail in the image to be processed is reduced when color channel pixel values are converted into brightness channel pixel values, and the accuracy of dynamically adjusting the brightness range of the image to be processed is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a luminance histogram shown in accordance with an exemplary embodiment.
Fig. 2 is another luminance histogram shown in accordance with an example embodiment.
Fig. 3 is yet another luminance histogram shown in accordance with an exemplary embodiment.
Fig. 4 is yet another luminance histogram shown in accordance with an exemplary embodiment.
FIG. 5 is a flow diagram illustrating an image processing method according to an exemplary embodiment.
FIG. 6 is a flow diagram illustrating another method of image processing according to an exemplary embodiment.
FIG. 7 is a flowchart illustrating yet another image processing method according to an exemplary embodiment.
FIG. 8 is a flowchart illustrating yet another image processing method according to an exemplary embodiment.
FIG. 9 is a flowchart illustrating yet another image processing method according to an exemplary embodiment.
FIG. 10 illustrates an image to be processed according to an exemplary embodiment.
FIG. 11a is a diagram illustrating an effect after image processing according to an exemplary embodiment.
FIG. 11b is a diagram illustrating an effect after image processing according to an exemplary embodiment.
FIG. 11c is a diagram illustrating an effect after image processing according to an exemplary embodiment.
FIG. 11d is a diagram illustrating an effect after image processing according to an exemplary embodiment.
FIG. 12 illustrates an image to be processed according to an exemplary embodiment.
FIG. 13a is a diagram illustrating an effect after image processing according to an exemplary embodiment.
FIG. 13b is a diagram illustrating an effect after image processing according to an exemplary embodiment.
FIG. 13c is a diagram illustrating an effect after image processing according to an exemplary embodiment.
FIG. 13d is a diagram illustrating an effect after image processing according to an exemplary embodiment.
Fig. 14 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment.
Fig. 15 is a block diagram of a terminal shown in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
When a terminal takes a picture, it can automatically and dynamically adjust the exposure parameters according to the scene brightness to obtain a well-exposed image. However, in some relatively extreme shooting scenarios, an image with appropriate brightness and contrast cannot be obtained by automatic exposure alone. For example, the distribution of luminance pixel values of an under-exposed image may be as shown in fig. 1. Fig. 1 is a luminance histogram according to an exemplary embodiment, with the horizontal axis representing luminance pixel values and the vertical axis representing the number of pixels. From fig. 1 it can be seen that, under under-exposure, the luminance pixel values of the image are mostly distributed in the dark region (the region with small horizontal-axis pixel values), which easily results in insufficient global contrast and loss of detail in the dark areas of the image. The distribution of luminance pixel values of an over-exposed image may be as shown in fig. 2. Fig. 2 is another luminance histogram according to an exemplary embodiment, with the same axes. From fig. 2 it can be seen that, under over-exposure, most of the luminance pixel values are distributed in the highlight region (the region with large horizontal-axis pixel values), which likewise results in insufficient global contrast and loss of image detail. The distribution of luminance pixel values of an image with low global contrast may be as shown in fig. 3. Fig. 3 is yet another luminance histogram according to an exemplary embodiment, with the same axes. From fig. 3 it can be seen that, under low global contrast, the luminance pixel values are mostly concentrated in a narrow middle region of the horizontal axis, which makes image details difficult to distinguish; the dynamic range therefore needs to be stretched over the entire gray scale range. In a High Dynamic Range (HDR) scene, the distribution of luminance pixel values of an image shot against the light may be as shown in fig. 4. Fig. 4 is yet another luminance histogram according to an exemplary embodiment, with the same axes. From fig. 4 it can be seen that, when shooting against the light in an HDR scene, the pixels are mainly distributed in an extremely dark region (small horizontal-axis pixel values) and an extremely bright region (large horizontal-axis pixel values); the image has good global contrast, but the local contrast is seriously insufficient, so details in both the bright and dark regions are severely lost.
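For illustration only, the pile-ups of luminance values described above can be made concrete with a small sketch (pure Python; the function name and bin count are illustrative, not part of the disclosure) that bins 8-bit luminance values into a coarse histogram:

```python
def luminance_histogram(pixels, bins=4):
    """Count how many 8-bit luminance values fall into each of `bins`
    equal-width intervals of the 0-255 range."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    return hist

# A backlit HDR-like frame: values pile up at both ends of the range,
# leaving the middle bins almost empty (compare the discussion of fig. 4).
print(luminance_histogram([5, 10, 20, 30, 130, 240, 250, 252]))  # [4, 0, 1, 3]
```

An under-exposed frame would instead concentrate its counts in the first bins, and a low-contrast frame in the middle bins.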
In the related art, for the scenes above, a tone mapping algorithm is mainly used to nonlinearly compress the brightness range of the real scene and thereby dynamically adjust the brightness range of the image, so that the real scene can be presented to viewers as an image. Tone mapping is a computer graphics technique for approximately displaying high dynamic range images on a medium with a limited dynamic range. In practice, however, how the brightness value of the image is determined directly affects the subsequent dynamic adjustment of the brightness range.
In view of this, the present disclosure provides an image processing method that, before dynamically adjusting the brightness range of an image to be processed, determines a fitted brightness value for the image from its color channel pixel values and its brightness values in at least two color spaces. This improves the accuracy of the determined brightness value, helps reduce the loss of information and detail when converting color channel pixel values into brightness channel pixel values, and in turn improves the accuracy of the dynamic adjustment of the brightness range, so that the adjusted brightness of the image is closer to what the human eye sees.
FIG. 5 is a flowchart illustrating an image processing method according to an exemplary embodiment. As shown in fig. 5, the image processing method includes the following steps S11 to S13.
In step S11, an image to be processed is acquired, and color channel pixel values of the image to be processed are determined.
In the embodiment of the present disclosure, the image to be processed is an image whose brightness range needs adjustment. The color of each pixel in the image is produced by superimposing and mixing the colors of its color channels, and each color channel stores the information of one color component of the image. The brightness corresponding to the image to be processed can be determined from its color channels; therefore, to determine that brightness, the color channel pixel values of the image are determined first.
In some examples, the image to be processed may be obtained from a local database, from the cloud, or captured live by a terminal. In one example, the terminal may be any device capable of acquiring images, such as a mobile phone, a tablet, or a notebook computer. In another example, the terminal's form factor may include a dual-screen terminal, a folding-screen terminal, a full-screen terminal, and so on.
In an implementation scenario, the color of each pixel in the image to be processed is composed of three primary colors, and thus, the color channel of the image to be processed may include: a red (R) channel, a green (G) channel, and a blue (B) channel.
In step S12, luminance values of the color channel pixel values in at least two color spaces are determined, and the luminance values in the at least two color spaces are fitted to obtain a fitted luminance value.
In the disclosed embodiment, a color space can be understood as an abstract mathematical model that defines a range of colors based on a coordinate system for describing colors under a specified standard. Types of color spaces include the RGB, HSV, HOK, and HSI color spaces, among others. Different color spaces express the color channel pixel values with different image parameters. Therefore, when the brightness of the image to be processed is extracted in any single color space, part of the information or detail of the original image is easily lost during extraction, making the brightness value inaccurate and compromising the subsequent dynamic brightness adjustment.
To avoid or reduce this loss of information and detail, at least two color spaces are used to express the brightness of the image to be processed when obtaining its brightness value. The brightness values expressed in the different color spaces are fitted together, so that the shortcomings of brightness expression in each individual color space are compensated by the fitting. The resulting fitted brightness value expresses the brightness of the image to be processed more accurately, so that the subsequent adjustment of its brightness range can be carried out effectively.
In step S13, the luminance range of the image to be processed is dynamically adjusted based on the fitted luminance value.
In the embodiment of the present disclosure, the brightness distribution of the image to be processed can be determined from the obtained fitted brightness value, and the brightness range of the image is then dynamically adjusted through a tone mapping algorithm. Tone mapping algorithms include global, local, and hybrid tone mapping. Global mapping applies the same mapping function to the entire image, for example determining a brightness mapping curve through linear or nonlinear contrast stretching, or through histogram equalization, and readjusting the gray scale range of the image acquired by the image sensor. Local tone mapping applies different mappings depending on pixel position, for example contrast-limited adaptive histogram equalization or fast bilateral filtering. Hybrid mapping combines the two: global mapping lays the foundation, and local contrast is adjusted on top of it. Any of these tone mapping algorithms can be used when dynamically adjusting the brightness range of the image to be processed.
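As a minimal sketch of the global-mapping idea mentioned above (the function name is illustrative; real tone mapping operators are considerably more elaborate), linear contrast stretching applies one mapping function to every pixel:

```python
def global_contrast_stretch(pixels, out_min=0, out_max=255):
    """Global tone mapping by linear contrast stretching: the same
    linear mapping function is applied to every pixel of the image."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                       # flat image: nothing to stretch
        return [out_min] * len(pixels)
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (p - lo) * scale) for p in pixels]

# A low-contrast image crowded into [80, 140] is stretched over [0, 255]:
print(global_contrast_stretch([80, 100, 120, 140]))  # [0, 85, 170, 255]
```

A local operator would instead compute `lo` and `hi` (or a more refined mapping) per neighborhood rather than once for the whole image.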
Through the above embodiment, the brightness value of the image to be processed is obtained through at least two color spaces, overcoming the shortcomings of obtaining it through a single color space. This avoids or reduces the loss of information and detail when extracting brightness from the color channel pixel values and improves the accuracy of the extracted brightness, so that when the brightness range of the image is adjusted, the dynamically adjusted brightness is closer to what the human eye sees.
For ease of understanding, the defects that easily arise when the brightness value of an image is determined through a single color space are described below, taking an RGB image as an example. Each pixel I(r, c) in an RGB image is composed of three color channels R, G, and B, denoted R(r, c), G(r, c), and B(r, c), where (r, c) are the coordinates of the pixel. The color space adopted by an RGB image is the RGB color space.
When the brightness value is determined directly from the RGB image, the brightness of each pixel I(r, c) can be taken as the average of its R, G, and B channels, namely: L(r, c) = (R(r, c) + G(r, c) + B(r, c)) / 3, where L(r, c) is the brightness value of pixel I(r, c). When brightness is determined this way for a highly saturated region of an image, the saturation of that region is easily reduced, and color may even be lost.
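The channel-averaging estimate above can be sketched as follows (an illustrative one-pixel sketch; the function name is not from the disclosure):

```python
def mean_luminance(r, g, b):
    """Brightness of one pixel as the plain average of its R, G, B channels."""
    return (r + g + b) / 3.0

# A fully saturated red pixel collapses to a mid-gray brightness of 85,
# which is why this estimate tends to wash out high-saturation regions.
print(mean_luminance(255, 0, 0))  # 85.0
```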
Performing HSV color space conversion on the RGB image and determining the brightness value, that is, when the RGB color space is converted to the HSV color space for expression, the brightness value of each pixel I(r, c) is determined using the following color space conversion formulas:
V=max(R,G,B)
S = (V - min(R, G, B)) / V, with S = 0 when V = 0
H = 60 × (G - B) / (V - min(R, G, B)), when V = R;
H = 120 + 60 × (B - R) / (V - min(R, G, B)), when V = G;
H = 240 + 60 × (R - G) / (V - min(R, G, B)), when V = B
where, in the HSV color space, V represents the image brightness. Determining the brightness this way helps protect single-channel or highly saturated images, but it tends to amplify the noise in regions of the image where the luminance is too low.
In HSV, V denotes the brightness of the image, S its saturation, and H its hue. Therefore, by the HSV conversion formula, the determined brightness value is: L(r, c) = max(R(r, c), G(r, c), B(r, c)).
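The HSV brightness estimate above reduces to taking the channel maximum (an illustrative one-pixel sketch; the function name is not from the disclosure):

```python
def hsv_value(r, g, b):
    """V channel of HSV: brightness as the maximum of the three color channels."""
    return max(r, g, b)

# A pure red pixel keeps its full brightness of 255 under this estimate,
# so saturated single-channel colors are preserved.
print(hsv_value(255, 0, 0))  # 255
```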
Performing YUV color space conversion on the RGB image and determining the brightness value, that is, when the RGB color space is converted to the YUV color space for expression, the brightness value of each pixel I(r, c) is determined using different conversion protocols depending on image quality and color gamut. For example, under the BT.601 standard conversion protocol, the YUV conversion is: Y = 0.299·R + 0.587·G + 0.114·B; U = -0.169·R - 0.331·G + 0.5·B; V = 0.5·R - 0.419·G - 0.081·B. Under the BT.709 standard conversion protocol: Y = 0.2126·R + 0.7152·G + 0.0722·B; U = -0.1146·R - 0.3854·G + 0.5·B; V = 0.5·R - 0.4542·G - 0.0468·B. Under the BT.2020 standard conversion protocol: Y = 0.2627·R + 0.6780·G + 0.0593·B; U = -0.1396·R - 0.3604·G + 0.5·B; V = 0.5·R - 0.4598·G - 0.0402·B. In the YUV color space, Y represents the luminance signal, and U and V represent the chrominance signals.
In one implementation scenario, the sRGB (standard Red Green Blue) standard is a color language protocol that allows colors to be reproduced consistently across different display systems. The gamut described by the sRGB protocol corresponds to the BT.709 standard conversion protocol. Therefore, to avoid differences between display systems, the BT.709 standard conversion protocol can be used when converting the RGB image to the YUV color space, so that the brightness value determined through the YUV color space is: L(r, c) = 0.2126 × R(r, c) + 0.7152 × G(r, c) + 0.0722 × B(r, c). When brightness is determined this way for a highly saturated region of an image, the saturation of that region is easily reduced, and color may even be lost.
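The BT.709 luma computation above can be sketched per pixel (an illustrative sketch; the function name is not from the disclosure):

```python
def bt709_luma(r, g, b):
    """Y (luma) of the BT.709 YUV conversion, matching sRGB content."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# White maps to full brightness, but a pure red pixel drops to about 54,
# far below the 255 the HSV V channel would report for the same pixel.
print(bt709_luma(255, 255, 255))  # ≈ 255.0
print(bt709_luma(255, 0, 0))      # ≈ 54.21
```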
Based on the same concept, an embodiment of the present disclosure further provides another image processing method, which exploits the advantage of each color space in determining the brightness value of the image to be processed and determines the color spaces to convert to according to the image parameters of the image that need to be processed.
FIG. 6 is a flow diagram illustrating another method of image processing according to an exemplary embodiment. As shown in fig. 6, the image processing method includes the following steps S21 to S24.
In step S21, an image to be processed is acquired, and color channel pixel values of the image to be processed are determined.
In step S22, at least two color spaces are determined based on the image parameters of the image to be processed that need to be processed.
In the disclosed embodiments, the image parameters may include one or more of a brightness enhancement threshold, saturation, and regional brightness. When determining the color spaces, at least two color spaces that can protect the corresponding image parameters during brightness extraction can be selected according to the image parameters of the image to be processed that need processing, so that the original image parameters of the image are not lost.
In step S23, luminance values of the color channel pixel values in at least two color spaces are determined, and the luminance values in the at least two color spaces are fitted to obtain a fitted luminance value.
In the embodiment of the present disclosure, since the same color channel pixel values yield different luminance values in different color spaces, these luminance values may be combined by fitting to determine the final luminance value of the image to be processed, that is, the fitted luminance value. In one example, the luminance values of the color channel pixel values in different color spaces may be fitted by weighting, so as to exploit the advantage of each color space in determining the luminance value of the image to be processed and protect the required image parameters. The weighting may use equal weights or assign different weight coefficients.
In step S24, the luminance range of the image to be processed is dynamically adjusted based on the fitted luminance value.
In an embodiment, when the image parameter to be processed is the brightness boost threshold, the saturation, or both, regions of the image to be processed with high single-channel pixel values or high saturation need to be protected while the brightness range is dynamically adjusted. As noted above, the HSV color space helps protect single-channel or highly saturated regions, and the sRGB standard protocol (and hence the YUV color space converted under BT.709) allows consistent color reproduction across display systems. Therefore, when the color spaces are determined from these image parameters, the fitted luminance value of the image to be processed can be determined using the YUV color space and the HSV color space, which prevents the brightness of highly saturated regions from being unintentionally raised or lowered during luminance extraction.
The process of dynamically adjusting the brightness range of the image to be processed according to the YUV color space and the HSV color space may be as shown in fig. 7. FIG. 7 is a flow chart illustrating yet another image processing method according to an exemplary embodiment.
In step S31, an image to be processed is acquired, and color channel pixel values of the image to be processed are determined.
In step S32, a YUV color space and an HSV color space are determined based on image parameters of an image to be processed.
In step S33, a first luminance value of the color channel pixel value in the YUV color space is determined, and a second luminance value of the color channel pixel value in the HSV color space is determined.
In the embodiments of the present disclosure, for convenience of explanation, the color channels of the image to be processed are described taking the three color channels R, G, and B as examples. When determining the first luminance value of the color channel pixel values in the YUV color space, the YUV conversion formula is determined by the adopted conversion protocol. For example, under the BT.709 standard conversion protocol, the first luminance value is determined by Y = 0.2126 × R + 0.7152 × G + 0.0722 × B. Under the BT.2020 standard conversion protocol, the first luminance value is determined by Y = 0.2627 × R + 0.6780 × G + 0.0593 × B. When determining the second luminance value of the color channel pixel values in the HSV color space, the conversion formula may be L(r, c) = max(R(r, c), G(r, c), B(r, c)).
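The three extraction formulas above can be sketched as follows (a minimal illustration assuming channels normalized to [0, 1], not the patent's implementation):

```python
def luminance_bt709(r, g, b):
    """First luminance value: YUV conversion under the BT.709 standard."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def luminance_bt2020(r, g, b):
    """Alternative first luminance value under the BT.2020 standard."""
    return 0.2627 * r + 0.6780 * g + 0.0593 * b

def luminance_hsv(r, g, b):
    """Second luminance value: the V channel of HSV is the channel maximum."""
    return max(r, g, b)
```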
In step S34, the first luminance value and the second luminance value are weighted and summed to obtain a first fitting luminance value.
In the embodiment of the present disclosure, a first weight corresponding to the first luminance value and a second weight corresponding to the second luminance value may be set as required, and the first fitted luminance value is then obtained by weighted summation, where the sum of the first weight and the second weight is 1. For example, the first weight may be 0.8 and the second weight 0.2. For convenience of description, denoting the first luminance value by Y and the second luminance value by V, the resulting first fitted luminance value is L(r, c) = 0.8 × Y + 0.2 × V.
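The weighted fit in this step can be sketched as below (the 0.8/0.2 weights are the example values from the text):

```python
def first_fitted_luminance(y, v, w_first=0.8, w_second=0.2):
    """Weighted sum of the YUV luminance (y) and the HSV luminance (v).

    The two weights must sum to 1, as required by the text."""
    assert abs(w_first + w_second - 1.0) < 1e-9
    return w_first * y + w_second * v
```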
In an example, the first weight and the second weight may be determined based on a specified ratio. In one example, the weight values corresponding to the first weight and the second weight may be determined based on the actual brightness-range adjustment requirement. In another example, the first weight and the second weight may be determined by comparing the first luminance value with the second luminance value: if the first luminance value is greater than the second luminance value, the first weight is greater than the second weight. For example, with a specified ratio of 3:7, if the first luminance value is smaller than the second luminance value, the first weight is 0.3 and the second weight is 0.7. In yet another example, the first weight and the second weight may be given equal weight coefficients, for example 0.5 each.
In an embodiment, the determination of the first weight and the second weight may depend on the brightness values of the pixel points in the image to be processed. For example, if the brightness of the pixel points is relatively low, obtaining the second luminance value through the HSV color space easily amplifies noise in dark regions, so that the extracted second luminance value differs substantially from the true brightness of the pixel points. In that case, the weight value corresponding to the first weight may be set greater than that corresponding to the second weight. If the brightness of the pixel points is relatively high, the weight value corresponding to the first weight may instead be set smaller than that corresponding to the second weight. In an implementation scenario, the first weight may thus depend on the overall brightness of the image to be processed: if the overall brightness is relatively high, the first weight is reduced accordingly; if the overall brightness is relatively low, the first weight is increased accordingly.
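One possible weight-selection heuristic following this paragraph can be sketched as below; the cutoff values and weight pairs are illustrative assumptions, not values given in the disclosure:

```python
def select_weights(mean_luminance, dark_cutoff=0.25, bright_cutoff=0.65):
    """Pick (first_weight, second_weight) from the overall image brightness.

    Dark images favor the YUV luminance, since the HSV channel maximum
    amplifies noise in dark regions; bright images favor the HSV luminance."""
    if mean_luminance < dark_cutoff:
        return 0.8, 0.2      # dark image: trust the YUV luminance more
    if mean_luminance > bright_cutoff:
        return 0.3, 0.7      # bright image: trust the HSV luminance more
    return 0.5, 0.5          # otherwise split the weights evenly
```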
In step S35, the luminance range of the image to be processed is dynamically adjusted based on the first fitted luminance value.
In another embodiment, when the image parameters to be processed include the regional brightness in addition to the brightness boost threshold and/or the saturation, that is, when the image parameters include any one or more of the brightness value, the saturation, and the regional brightness, the Hok color space can be used in addition to the YUV color space and the HSV color space to determine the fitted luminance value of the image to be processed, and the brightness range of the image to be processed is then dynamically adjusted. The Hok color space is a color space that protects highlight regions of an image during luminance extraction. By raising the contribution of the color channel pixel values in the Hok color space when fitting the luminance values, the transition of highlight regions becomes more natural when the brightness range of the image to be processed is subsequently adjusted, which avoids the problem of brightness inversion in highlight regions to a certain extent.
The process of obtaining a fitting luminance value and dynamically adjusting the luminance range of the image to be processed based on the YUV color space, the HSV color space, and the Hok color space may be as shown in fig. 8. FIG. 8 is a flow chart illustrating yet another image processing method according to an exemplary embodiment.
In step S41, an image to be processed is acquired, and color channel pixel values of the image to be processed are determined.
In step S42, a YUV color space and an HSV color space are determined based on image parameters of the image to be processed that need to be processed.
In step S43, a first luma value of the color channel pixel value in the YUV color space is determined, and a second luma value of the color channel pixel value in the HSV color space is determined.
In step S44, the first luminance value and the second luminance value are weighted and summed to obtain a first fitting luminance value.
In step S45, a third luminance value of the color channel pixel value in the Hok color space is determined.
In the embodiment of the present disclosure, when determining the third luminance value of the color channel pixel value in the Hok color space, the following conversion formula may be employed to determine the third luminance value:
[The Hok color-space conversion formulas appear only as images in the original publication and cannot be recovered from the extracted text. One intermediate relation, L_B = B, survives; the formulas produce the quantity HokY from the R, G, and B channel values.]
wherein, the value corresponding to HokY is the third brightness value.
In step S46, the first fitting luminance value and the third luminance value are weighted and summed to obtain a second fitting luminance value.
In the embodiment of the present disclosure, the determined first fitting brightness value and the determined third brightness value are subjected to weighted summation, so as to obtain a second fitting brightness value capable of representing a brightness value of the image to be processed.
In an example, weights may be set for the first fitted luminance value and the third luminance value, which are then weighted and summed to obtain the second fitted luminance value. In another embodiment, weights may be set for the first luminance value, the second luminance value, and the third luminance value respectively, and the second fitted luminance value is determined from the three luminance values and their corresponding weights. For example, denoting the first luminance value by Y, the second by V, and the third by HokY, with a, b, and c the corresponding weights and a + b + c = 1, the second fitted luminance value is L = a × Y + b × V + c × HokY. In one example, these three weights may be predetermined weight coefficients. In another example, the ratio between them may be determined according to the brightness-range adjustment requirement.
In another example, the first fitted luminance value and the third luminance value may each be assigned a weight and then weighted and summed to obtain the second fitted luminance value. For example: L = d × L(r, c) + e × HokY, where L is the second fitted luminance value, L(r, c) is the first fitted luminance value, HokY is the third luminance value, and d and e are the respective weights with d + e = 1.
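Both fitting variants described above can be sketched together as follows (the weight values are illustrative; HokY denotes the third luminance value from the Hok color space):

```python
def second_fitted_luminance_3way(y, v, hok_y, a=0.4, b=0.2, c=0.4):
    """Second fitted value from the three luminance values, with a + b + c = 1."""
    assert abs(a + b + c - 1.0) < 1e-9
    return a * y + b * v + c * hok_y

def second_fitted_luminance_2way(first_fitted, hok_y, d=0.7, e=0.3):
    """Second fitted value from the first fitted value and HokY, with d + e = 1."""
    assert abs(d + e - 1.0) < 1e-9
    return d * first_fitted + e * hok_y
```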
In an embodiment, if the luminance values of pixel points in a partial region of the image to be processed are relatively high and the brightness of that region needs to be protected, the weight corresponding to the third luminance value may be increased accordingly: the higher the brightness of the pixel points in that region, the larger the weight assigned to the third luminance value.
In step S47, the brightness range of the image to be processed is dynamically adjusted based on the second fitted brightness value.
In an implementation scenario, the second fitted luminance value corresponding to each pixel point in the image to be processed can be determined, yielding a luminance channel image. To dynamically adjust the brightness range of the image to be processed, a local tone mapping algorithm can be applied to this luminance channel image, for example contrast-limited adaptive histogram equalization. The luminance channel image is divided into m × n blocks, the histogram of each sub-image is computed separately, and the brightness distribution of the pixel points in each sub-image is determined, where m and n are any positive integers. To limit contrast, a brightness threshold is determined and the portion of each sub-image histogram above the threshold is clipped; the clipped excess is redistributed evenly over the histogram bins, and the remaining histogram is normalized. The cumulative histogram is then computed to obtain a mapping curve for each sub-image. To avoid block artifacts when each pixel point is tone-mapped by its block's mapping function, bilinear interpolation is applied between the mapping curves of adjacent sub-image blocks, eliminating the blocking effect introduced by the local tone mapping algorithm and thereby avoiding the halo problem.
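The clip-and-redistribute step of the contrast-limited equalization described above can be sketched as follows. This is a simplified single-tile illustration, not the disclosed implementation; a full algorithm would also interpolate bilinearly between the tile curves:

```python
import numpy as np

def clip_and_redistribute(hist, clip_limit):
    """Clip a tile histogram at clip_limit and spread the excess evenly
    across all bins, preserving the total pixel count."""
    hist = hist.astype(np.float64)
    excess = np.maximum(hist - clip_limit, 0.0).sum()
    clipped = np.minimum(hist, clip_limit)
    return clipped + excess / hist.size

def tile_mapping(hist, clip_limit, levels=256):
    """Cumulative (clipped) histogram -> tone-mapping curve for one tile."""
    h = clip_and_redistribute(hist, clip_limit)
    cdf = np.cumsum(h)
    cdf /= cdf[-1]                     # normalize to [0, 1]
    return np.round(cdf * (levels - 1)).astype(np.int64)
```

Clipping the histogram bounds the slope of the mapping curve, which is what limits the contrast amplification (and hence the noise amplification) in near-uniform tiles.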
When the brightness range of the image to be processed is adjusted, the gain value of each pixel point in the luminance channel image processed by the local tone mapping algorithm is obtained, and the gain value of each pixel point is multiplied by the corresponding pixel values of the R, G, and B color channels of the image to be processed, yielding the dynamically adjusted color image.
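The per-pixel gain application can be sketched as below (an illustration assuming float images in [0, 1]; the function name and epsilon guard are assumptions):

```python
import numpy as np

def apply_luminance_gain(rgb, luma_in, luma_out, eps=1e-6):
    """Multiply each R, G, B channel by gain = tone-mapped / original
    luminance, producing the dynamically adjusted color image."""
    gain = luma_out / np.maximum(luma_in, eps)           # per-pixel gain
    return np.clip(rgb * gain[..., np.newaxis], 0.0, 1.0)
```

Applying the same gain to all three channels preserves the hue of each pixel while moving its brightness to the tone-mapped level.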
With the above image processing method, the fitted luminance value of the image to be processed is determined from the luminance values of the color channel pixel values in multiple color spaces. This not only avoids unintentionally raising or lowering the brightness of highly saturated regions and prevents brightness inversion in highlight regions, but also suppresses noise in low-brightness regions of the image to be processed well.
Based on the same concept, the embodiment of the disclosure also provides another image processing method.
FIG. 9 is a flowchart illustrating yet another image processing method according to an exemplary embodiment. As shown in fig. 9, the image processing method includes the following steps S51 to S54.
In step S51, an image to be processed is acquired, and color channel pixel values of the image to be processed are determined.
In step S52, if the image to be processed is not subjected to the brightness correction processing and the brightness of the image to be processed is lower than the brightness threshold, the brightness of the image to be processed is corrected according to the color space distribution of the image to be processed.
In the embodiment of the present disclosure, the brightness threshold is used to determine whether the image to be processed is a dark image. If the brightness of the image to be processed is lower than the brightness threshold, its brightness is concentrated in regions with low pixel values. If the brightness range were adjusted directly based on the fitted luminance value of such an image, the adjusted image would easily be distorted, affecting the visual experience. Therefore, for an image whose brightness is below the brightness threshold, brightness correction is performed before the fitted luminance value is obtained, so that the brightness of the image is no longer excessively concentrated in low pixel-value regions, improving the effectiveness of the subsequent brightness-range adjustment. In an implementation scenario, the brightness correction may apply a logarithmic (log) transform based on a gamma correction algorithm to enhance the brightness of the image to be processed. This lifts the color of regions whose brightness is below the threshold, reduces the color error at each gray level, and makes the details of the image apparent while keeping brightness and color consistent, with good brightness and clear contrast.
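A minimal sketch of such a correction is given below; the threshold and gamma exponent are illustrative assumptions, since the disclosure does not give numeric values:

```python
import numpy as np

def correct_dark_image(img, brightness_threshold=0.25, gamma=1.0 / 2.2):
    """Brighten an image whose mean brightness falls below the threshold.

    A gamma curve with exponent < 1 lifts dark tones, similar in spirit to
    the log-based correction described in the text."""
    img = np.clip(img, 0.0, 1.0)
    if img.mean() >= brightness_threshold:
        return img                       # bright enough: leave unchanged
    return np.power(img, gamma)          # lift the dark regions
```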
In step S53, luminance values of the color channel pixel values in at least two color spaces are determined, and the luminance values in the at least two color spaces are fitted to obtain a fitted luminance value.
In step S54, the luminance range of the image to be processed is dynamically adjusted based on the fitted luminance value.
For the convenience of intuitively describing the beneficial effects of dynamically adjusting the brightness range of an image to be processed by any of the above image processing methods, the following description is made with reference to fig. 10 to 13d. Fig. 10 illustrates an image to be processed according to an exemplary embodiment; it is an image with high saturation. Fig. 11a is the effect obtained by adjusting the brightness range of the image shown in fig. 10 with the image processing method provided by the present disclosure. Fig. 11b is the effect obtained by adjusting the brightness range based on the YUV color space, fig. 11c the effect based on the HSV color space, and fig. 11d the effect obtained by averaging the R, G, and B color channels. From the effect diagrams in figs. 11a to 11d, it can be determined that fig. 11a protects the saturation of the region well, while figs. 11c to 11d show reductions in saturation to different degrees.
Fig. 12 illustrates another image to be processed according to an exemplary embodiment; it is an image with low brightness. Fig. 13a is the effect obtained by adjusting the brightness range of the image shown in fig. 12 with the image processing method provided by the present disclosure, fig. 13b the effect based on the YUV color space, fig. 13c the effect based on the HSV color space, and fig. 13d the effect obtained by averaging the R, G, and B color channels. From the effect diagrams in figs. 13a to 13d, it can be determined that, compared with the other methods, the image processing method provided by the present disclosure not only protects regions with high saturation, but also suppresses noise in the dark regions of the input image well.
Based on the same conception, the embodiment of the disclosure also provides an image processing device.
It is understood that, in order to realize the above functions, the image processing apparatus provided by the embodiments of the present disclosure includes corresponding hardware structures and/or software modules for performing each function. In combination with the exemplary units and algorithm steps disclosed in the embodiments of the present disclosure, the embodiments can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
Fig. 14 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment. Referring to fig. 14, the image processing apparatus 100 includes an acquisition unit 101, a fitting unit 102, and an adjustment unit 103.
The acquiring unit 101 is configured to acquire an image to be processed and determine a color channel pixel value of the image to be processed.
The fitting unit 102 is configured to determine brightness values of the color channel pixel values in at least two color spaces, and fit the brightness values in the at least two color spaces to obtain a fitted brightness value.
And the adjusting unit 103 is used for dynamically adjusting the brightness range of the image to be processed based on the fitting brightness value.
In an embodiment, the image processing apparatus 100 further comprises: a determining unit, configured to determine the at least two color spaces based on the image parameters of the image to be processed that need to be processed, the image parameters including one or more of a brightness boost threshold, saturation, and regional brightness.
In another embodiment, in response to the image parameters comprising a brightness value and/or a saturation, it is determined that the at least two color spaces comprise a YUV color space and an HSV color space. The fitting unit 102 determines the luminance values of the color channel pixel values in the at least two color spaces, and fits them to obtain a fitted luminance value, in the following manner: a first luminance value of the color channel pixel values in the YUV color space is determined, and a second luminance value of the color channel pixel values in the HSV color space is determined; the first luminance value and the second luminance value are weighted and summed to obtain a first fitted luminance value. The adjusting unit 103 dynamically adjusts the brightness range of the image to be processed based on the fitted luminance value in the following manner: the brightness range of the image to be processed is dynamically adjusted based on the first fitted luminance value.
In yet another embodiment, in response to the image parameters comprising a brightness value, a saturation, and/or a regional brightness, it is determined that the at least two color spaces comprise a YUV color space, an HSV color space, and a Hok color space. The fitting unit 102 determines the luminance values of the color channel pixel values in the at least two color spaces, and fits them to obtain a fitted luminance value, in the following manner: a first luminance value of the color channel pixel values in the YUV color space, a second luminance value in the HSV color space, and a third luminance value in the Hok color space are determined; the first, second, and third luminance values are weighted and summed to obtain a second fitted luminance value. The adjusting unit 103 dynamically adjusts the brightness range of the image to be processed based on the fitted luminance value in the following manner: the brightness range of the image to be processed is dynamically adjusted based on the second fitted luminance value.
In still another embodiment, the image processing apparatus 100 further includes: and the correcting unit is used for performing brightness correction on the image to be processed according to the color space distribution of the image to be processed if the image to be processed is not subjected to brightness correction processing and the brightness of the image to be processed is lower than a brightness threshold value.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 15 is a block diagram of a terminal for image processing according to an exemplary embodiment. For example, the terminal 200 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
Referring to fig. 15, the terminal 200 may include one or more of the following components: a processing component 202, a memory 204, a power component 206, a multimedia component 208, an audio component 210, an input/output (I/O) interface 212, a sensor component 214, and a communication component 216.
The processing component 202 generally controls overall operation of the terminal 200, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 202 may include one or more processors 220 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 202 can include one or more modules that facilitate interaction between the processing component 202 and other components. For example, the processing component 202 can include a multimedia module to facilitate interaction between the multimedia component 208 and the processing component 202.
The memory 204 is configured to store various types of data to support operations at the terminal 200. Examples of such data include instructions for any application or method operating on terminal 200, contact data, phonebook data, messages, pictures, videos, and the like. The memory 204 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power components 206 provide power to the various components of the terminal 200. The power components 206 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the terminal 200.
The multimedia component 208 includes a screen providing an output interface between the terminal 200 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 208 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the terminal 200 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 210 is configured to output and/or input audio signals. For example, the audio component 210 includes a Microphone (MIC) configured to receive an external audio signal when the terminal 200 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 204 or transmitted via the communication component 216. In some embodiments, audio component 210 also includes a speaker for outputting audio signals.
The I/O interface 212 provides an interface between the processing component 202 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 214 includes one or more sensors for providing various aspects of status assessment for the terminal 200. For example, the sensor assembly 214 can detect an open/closed state of the terminal 200, relative positioning of components, such as a display and keypad of the terminal 200, a change in position of the terminal 200 or a component of the terminal 200, the presence or absence of user contact with the terminal 200, orientation or acceleration/deceleration of the terminal 200, and a change in temperature of the terminal 200. The sensor assembly 214 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 214 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 214 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 216 is configured to facilitate wired or wireless communication between the terminal 200 and other devices. The terminal 200 may access a wireless network based on a communication standard, such as WiFi,2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 216 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 216 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the terminal 200 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 204 including instructions executable by the processor 220 of the terminal 200 to perform the above-described method. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It is further understood that "a plurality" in this disclosure means two or more; other quantifying terms are analogous. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate that A exists alone, that A and B exist simultaneously, or that B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. The singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It will be further understood that the terms "first," "second," and the like are used to describe various information and that such information should not be limited by these terms. These terms are only used to distinguish one type of information from another, and do not indicate a particular order or degree of importance. Indeed, the terms "first," "second," and the like are fully interchangeable. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure.
It will be further understood that, unless otherwise specified, "connected" includes direct connections between the two without the presence of other elements, as well as indirect connections between the two with the presence of other elements.
It is further to be understood that while operations are depicted in the drawings in a particular order, this is not to be understood as requiring that such operations be performed in the particular order shown or in serial order, or that all illustrated operations be performed, to achieve desirable results. In certain environments, multitasking and parallel processing may be advantageous.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. An image processing method, characterized in that the image processing method comprises:
acquiring an image to be processed, and determining a color channel pixel value of the image to be processed;
determining brightness values of the color channel pixel values in at least two color spaces, and fitting the brightness values in the at least two color spaces to obtain a fitted brightness value;
and dynamically adjusting the brightness range of the image to be processed based on the fitted brightness value.
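Read as a pipeline, claim 1 computes a per-pixel brightness estimate in each of two color spaces, fits the estimates into one value, and stretches the result over a target range. The following sketch is illustrative only, not the patented implementation: the BT.601 luma coefficients, the equal fitting weights, the linear stretch, and all helper names are assumptions.

```python
def yuv_luma(r, g, b):
    # Brightness estimate in the YUV color space (BT.601 luma approximation).
    return 0.299 * r + 0.587 * g + 0.114 * b

def hsv_value(r, g, b):
    # Brightness estimate in the HSV color space: the V channel (max component).
    return max(r, g, b)

def fitted_brightness(pixel, w_yuv=0.5, w_hsv=0.5):
    # Fit the two per-space estimates by weighted summation (weights assumed).
    r, g, b = pixel
    return w_yuv * yuv_luma(r, g, b) + w_hsv * hsv_value(r, g, b)

def adjust_brightness_range(image, lo=0.0, hi=255.0):
    # Dynamically stretch the fitted brightness values over [lo, hi].
    fits = [fitted_brightness(p) for p in image]
    f_min, f_max = min(fits), max(fits)
    span = (f_max - f_min) or 1.0  # avoid division by zero on flat images
    return [lo + (f - f_min) * (hi - lo) / span for f in fits]
```

Claims 3 and 4 below specialize the fitting step to two or three particular color spaces.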
2. The image processing method of claim 1, wherein, before determining the brightness values of the color channel pixel values in the at least two color spaces, the method further comprises:
determining the at least two color spaces based on image parameters to be processed of the image to be processed, wherein the image parameters comprise one or more of a brightness improvement threshold, a saturation, and a regional brightness.
3. The method of claim 2, wherein, in response to the image parameters comprising a brightness value and/or a saturation, the at least two color spaces comprise a YUV color space and an HSV color space;
wherein determining the brightness values of the color channel pixel values in the at least two color spaces and fitting the brightness values in the at least two color spaces to obtain the fitted brightness value comprises:
determining a first brightness value of the color channel pixel value in the YUV color space, and determining a second brightness value of the color channel pixel value in the HSV color space; and
performing a weighted summation of the first brightness value and the second brightness value to obtain a first fitted brightness value;
and wherein dynamically adjusting the brightness range of the image to be processed based on the fitted brightness value comprises:
dynamically adjusting the brightness range of the image to be processed based on the first fitted brightness value.
4. The method of claim 2, wherein, in response to the image parameters comprising a brightness value, a saturation, and/or a regional brightness, the at least two color spaces comprise a YUV color space, an HSV color space, and a Hok color space;
wherein determining the brightness values of the color channel pixel values in the at least two color spaces and fitting the brightness values in the at least two color spaces to obtain the fitted brightness value comprises:
determining a first brightness value of the color channel pixel value in the YUV color space, determining a second brightness value of the color channel pixel value in the HSV color space, and determining a third brightness value of the color channel pixel value in the Hok color space; and
performing a weighted summation of the first brightness value, the second brightness value, and the third brightness value to obtain a second fitted brightness value;
and wherein dynamically adjusting the brightness range of the image to be processed based on the fitted brightness value comprises:
dynamically adjusting the brightness range of the image to be processed based on the second fitted brightness value.
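Claims 3 and 4 differ only in how many per-space brightness estimates enter the weighted summation. A minimal, hedged sketch of that summation step follows; the claims do not specify the weights, so the normalization by the weight total is an assumption, and the function name is hypothetical.

```python
def weighted_fit(estimates, weights):
    """Weighted summation of per-color-space brightness estimates.

    `estimates` holds one brightness value per color space (e.g. YUV and
    HSV for claim 3, plus a third space for claim 4). The weights are
    assumptions and are normalized so they need not sum to 1.
    """
    if len(estimates) != len(weights):
        raise ValueError("one weight per estimate is required")
    total = sum(weights)
    return sum(e * w for e, w in zip(estimates, weights)) / total
```

With two equal-weighted estimates this yields the first fitted brightness value of claim 3; with three estimates it yields the second fitted brightness value of claim 4.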
5. The image processing method according to any one of claims 1 to 4, wherein, before determining the brightness values of the color channel pixel values in the at least two color spaces, the image processing method further comprises:
if the image to be processed has not undergone brightness correction processing and the brightness of the image to be processed is lower than a brightness threshold, performing brightness correction on the image to be processed according to the color space distribution of the image to be processed.
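One common way to realize the conditional correction of claim 5 is a gamma lift applied only when the image is dark. This is a hedged sketch, not the claimed correction itself: the mean-of-max-channel brightness estimate, the threshold of 80, and the gamma of 0.6 are all assumptions introduced for illustration.

```python
def maybe_brighten(image, threshold=80.0, gamma=0.6):
    """Apply a gamma lift only if the image's average brightness is low.

    `image` is a list of (r, g, b) tuples in [0, 255]. The brightness
    estimate (mean of the max channel) and gamma=0.6 are assumptions.
    """
    mean_brightness = sum(max(p) for p in image) / len(image)
    if mean_brightness >= threshold:
        return image  # bright enough; leave untouched
    # gamma < 1 raises mid-tone and shadow values while keeping 0 and 255 fixed
    return [tuple(round(255.0 * (c / 255.0) ** gamma) for c in p)
            for p in image]
```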
6. An image processing apparatus characterized by comprising:
an acquisition unit configured to acquire an image to be processed and determine a color channel pixel value of the image to be processed;
a fitting unit configured to determine brightness values of the color channel pixel value in at least two color spaces, and to fit the brightness values in the at least two color spaces to obtain a fitted brightness value; and
an adjusting unit configured to dynamically adjust the brightness range of the image to be processed based on the fitted brightness value.
7. The image processing apparatus according to claim 6, characterized by further comprising:
a determining unit configured to determine the at least two color spaces based on image parameters to be processed of the image to be processed, wherein the image parameters comprise one or more of a brightness improvement threshold, a saturation, and a regional brightness.
8. The image processing apparatus according to claim 7, wherein, in response to the image parameters comprising a brightness value and/or a saturation, the at least two color spaces comprise a YUV color space and an HSV color space;
wherein the fitting unit determines the brightness values of the color channel pixel values in the at least two color spaces, and fits the brightness values in the at least two color spaces to obtain the fitted brightness value, in the following manner:
determining a first brightness value of the color channel pixel value in the YUV color space, and determining a second brightness value of the color channel pixel value in the HSV color space; and
performing a weighted summation of the first brightness value and the second brightness value to obtain a first fitted brightness value;
and wherein the adjusting unit dynamically adjusts the brightness range of the image to be processed based on the fitted brightness value in the following manner:
dynamically adjusting the brightness range of the image to be processed based on the first fitted brightness value.
9. The image processing apparatus according to claim 7, wherein, in response to the image parameters comprising a brightness value, a saturation, and/or a regional brightness, the at least two color spaces comprise a YUV color space, an HSV color space, and a Hok color space;
wherein the fitting unit determines the brightness values of the color channel pixel values in the at least two color spaces, and fits the brightness values in the at least two color spaces to obtain the fitted brightness value, in the following manner:
determining a first brightness value of the color channel pixel value in the YUV color space, determining a second brightness value of the color channel pixel value in the HSV color space, and determining a third brightness value of the color channel pixel value in the Hok color space; and
performing a weighted summation of the first brightness value, the second brightness value, and the third brightness value to obtain a second fitted brightness value;
and wherein the adjusting unit dynamically adjusts the brightness range of the image to be processed based on the fitted brightness value in the following manner:
dynamically adjusting the brightness range of the image to be processed based on the second fitted brightness value.
10. The image processing apparatus according to any one of claims 6 to 9, characterized by further comprising:
a correcting unit configured to perform brightness correction on the image to be processed according to the color space distribution of the image to be processed if the image to be processed has not undergone brightness correction processing and the brightness of the image to be processed is lower than a brightness threshold.
11. An image processing apparatus characterized by comprising:
a memory to store instructions; and
a processor configured to invoke the instructions stored in the memory to perform the image processing method of any one of claims 1 to 5.
12. A computer-readable storage medium storing instructions which, when executed by a processor, cause the processor to perform the image processing method of any one of claims 1 to 5.
CN202110367903.3A 2021-04-06 2021-04-06 Image processing method, device and storage medium Pending CN115170680A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110367903.3A CN115170680A (en) 2021-04-06 2021-04-06 Image processing method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110367903.3A CN115170680A (en) 2021-04-06 2021-04-06 Image processing method, device and storage medium

Publications (1)

Publication Number Publication Date
CN115170680A true CN115170680A (en) 2022-10-11

Family

ID=83476123

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110367903.3A Pending CN115170680A (en) 2021-04-06 2021-04-06 Image processing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN115170680A (en)

Similar Documents

Publication Publication Date Title
CN111418201B (en) Shooting method and equipment
CN109345485B (en) Image enhancement method and device, electronic equipment and storage medium
WO2018171493A1 (en) Image processing method and device, and storage medium
CN107992182B (en) Method and device for displaying interface image
CN110958401B (en) Super night scene image color correction method and device and electronic equipment
CN104050645B (en) Image processing method and device
CN113450713B (en) Screen display method and device and gray scale mapping information generation method and device
WO2022127174A1 (en) Image processing method and electronic device
CN108200352B (en) Method, terminal and storage medium for adjusting picture brightness
CN105791790B (en) Image processing method and device
US20220044369A1 (en) Image processing method, terminal and non-transitory computer-readable storage medium
CN111625213A (en) Picture display method, device and storage medium
KR20130050800A (en) Apparatus and method for generating a motion blur in a portable terminal
CN105472228B (en) Image processing method and device and terminal
CN115239570A (en) Image processing method, image processing apparatus, and storage medium
CN113472997B (en) Image processing method and device, mobile terminal and storage medium
CN111383166A (en) Method and device for processing image to be displayed, electronic equipment and readable storage medium
CN114827391A (en) Camera switching method, camera switching device and storage medium
CN114222072B (en) Image processing method, device, electronic equipment and storage medium
CN115170680A (en) Image processing method, device and storage medium
CN117616774A (en) Image processing method, device and storage medium
CN114331852A (en) Method and device for processing high dynamic range image and storage medium
KR20090072109A (en) Luminance correction method for photographing image for the use of the video calling
JP2004259177A (en) Image processing device and program
CN116668862B (en) Image processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination