CN113747062A - HDR scene detection method and device, terminal and readable storage medium - Google Patents

HDR scene detection method and device, terminal and readable storage medium

Info

Publication number
CN113747062A
CN113747062A (application CN202110984139.4A)
Authority
CN
China
Prior art keywords
hdr
threshold
dynamic information
preview image
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110984139.4A
Other languages
Chinese (zh)
Other versions
CN113747062B (en)
Inventor
邹涵江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110984139.4A priority Critical patent/CN113747062B/en
Publication of CN113747062A publication Critical patent/CN113747062A/en
Application granted granted Critical
Publication of CN113747062B publication Critical patent/CN113747062B/en
Legal status: Active (granted)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/71 Circuitry for evaluating the brightness variation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/76 Circuitry for compensating brightness variation in the scene by influencing the image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/84 Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/88 Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The application provides an HDR scene detection method and device, a terminal, and a storage medium. The HDR scene detection method comprises the following steps: acquiring a preview image of the current scene; acquiring global dynamic information of the preview image; dividing the preview image into a plurality of blocks of a preset size and acquiring local dynamic information of each block; and determining whether the current scene is an HDR scene according to the global dynamic information and the local dynamic information. In the HDR scene detection method and device, the terminal, and the storage medium, the global dynamic information of the preview image and the local dynamic information of a plurality of blocks of a preset size are calculated, and the dynamic range of the current scene is analyzed from the combination of the two. The dynamic range of the current scene is thereby judged more finely and accurately when determining whether it is an HDR scene, which improves the accuracy of scene detection, helps trigger the HDR mode correctly during shooting, raises the keeper rate (the proportion of usable shots), and improves the user experience.

Description

HDR scene detection method and device, terminal and readable storage medium
Technical Field
The present application relates to the field of electronic technologies, and in particular, to an HDR scene detection method, an HDR scene detection apparatus, a terminal, and a non-volatile computer-readable storage medium.
Background
"Dynamic range" describes the span of light intensity from the darkest shadows to the brightest highlights in a picture. In photography there are two related concepts: the "dynamic range of the scene", i.e. the range or ratio between the maximum and minimum brightness in the photographed scene (the difference between its brightest and darkest areas), and the "dynamic range of the camera", i.e. the range of brightness variation that the image sensor can record. A High Dynamic Range (HDR) scene is a scene whose dynamic range exceeds that of the camera: regions that are too bright or too dark fall outside what the sensor can record, so the captured picture contains pure-white areas (blown highlights) or pure-black areas (crushed shadows), and image quality suffers greatly from the loss of highlight or shadow detail. For such scenes, imaging quality can currently be improved by applying HDR algorithmic processing; the first problem to solve is therefore determining whether the photographed scene is an HDR scene. At present, most HDR detection methods judge high-contrast HDR scenes (such as backlit scenes) well, but their detection accuracy is low for scenes with a lower light ratio or only small overexposed areas, which nonetheless still require HDR processing to improve the imaging result.
Disclosure of Invention
The embodiment of the application provides an HDR scene detection method, an HDR scene detection device, a terminal and a non-volatile computer readable storage medium.
The HDR scene detection method of the embodiment of the application comprises the following steps: acquiring a preview image of the current scene; acquiring global dynamic information of the preview image; dividing the preview image into a plurality of blocks of a preset size and acquiring local dynamic information of each block; and determining whether the current scene is an HDR scene according to the global dynamic information and the local dynamic information.
The HDR scene detection device of the embodiment of the application comprises an acquisition module, a processing module, and a determination module. The acquisition module is configured to acquire a preview image of the current scene. The processing module is configured to acquire global dynamic information of the preview image, divide the preview image into a plurality of blocks of a preset size, and acquire local dynamic information of each block. The determination module is configured to determine whether the current scene is an HDR scene according to the global dynamic information and the local dynamic information.
The terminal of the embodiments of the present application includes one or more processors, memory, and one or more programs. Wherein one or more of the programs are stored in the memory and executed by one or more of the processors, the programs including instructions for performing the HDR scene detection method of embodiments of the present application. The HDR scene detection method comprises the following steps: and acquiring a preview image of the current scene. And acquiring global dynamic information of the preview image. Dividing the preview image into a plurality of blocks with preset sizes, and acquiring local dynamic information of each block. Determining whether the current scene is an HDR scene according to the global dynamic information and the local dynamic information.
A non-transitory computer-readable storage medium of an embodiment of the present application contains a computer program that, when executed by one or more processors, causes the processors to perform the following HDR scene detection method: and acquiring a preview image of the current scene. And acquiring global dynamic information of the preview image. Dividing the preview image into a plurality of blocks with preset sizes, and acquiring local dynamic information of each block. Determining whether the current scene is an HDR scene according to the global dynamic information and the local dynamic information.
In the HDR scene detection method, the HDR scene detection device, the terminal, and the non-volatile computer-readable storage medium, the global dynamic information of the preview image and the local dynamic information of a plurality of blocks of a preset size are calculated, and the dynamic range of the current scene is analyzed from the combination of the two. The dynamic range of the current scene is thereby judged more finely and accurately when determining whether it is an HDR scene, which improves the accuracy of scene detection, helps trigger the HDR mode correctly during photographing, raises the keeper rate of photographs, and improves the user experience.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flow diagram of an HDR scene detection method according to some embodiments of the present application;
fig. 2 is a schematic structural diagram of an HDR scene detection apparatus according to some embodiments of the present application;
FIG. 3 is a schematic block diagram of a terminal according to some embodiments of the present application;
FIG. 4 is a schematic view of a scene in which a preview image is divided into 4 × 4 blocks in an HDR scene detection method according to some embodiments of the present application;
fig. 5 to 7 are schematic flow diagrams of HDR scene detection methods according to some embodiments of the present application;
fig. 8 is a schematic diagram of different HDR scene graphs and corresponding gray level histograms of scenes in the HDR scene detection method according to some embodiments of the present application;
FIG. 9 is a scene schematic diagram of image entropy differences of different preview images in the same scene in an HDR scene detection method according to some embodiments of the present application;
fig. 10 to 14 are schematic flow diagrams of HDR scene detection methods according to some embodiments of the present application;
fig. 15 is a schematic diagram of a preview image, a local HDR distribution map and a preset weight map in an HDR scene detection method according to some embodiments of the present application;
FIG. 16 is a schematic flow chart diagram of an HDR scene detection method according to some embodiments of the present application;
FIG. 17 is a scene schematic diagram of a preview image with a salient region in an HDR scene detection method according to some embodiments of the present application;
fig. 18 is a scene schematic diagram of a preview image with a saliency region and a face information region in an HDR scene detection method according to some embodiments of the present application;
FIG. 19 is a schematic flow chart diagram of an HDR scene detection method according to some embodiments of the present application;
FIG. 20 is a schematic diagram of a connection between a non-volatile computer readable storage medium and a processor according to some embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the embodiments of the present application.
Referring to fig. 1 to 4, an HDR scene detection method according to an embodiment of the present application includes:
01: acquiring a preview image P0 of the current scene;
04: acquiring global dynamic information of a preview image P0;
05: dividing the preview image P0 into a plurality of blocks P01 with preset sizes, and acquiring local dynamic information of each block P01; and
06: and determining whether the current scene is an HDR scene according to the global dynamic information and the local dynamic information.
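As an illustrative sketch only (not the patent's actual implementation), the four steps above can be wired together as follows. The spread-based measures used for the "global dynamic information" and "local dynamic information", and the 0.5 thresholds, are placeholder assumptions; the patent's concrete statistics are described in later sections.

```python
import numpy as np

def split_into_blocks(img, block_h, block_w):
    # Step 05: divide the preview image into blocks of a preset size.
    h, w = img.shape
    return [img[r:r + block_h, c:c + block_w]
            for r in range(0, h, block_h)
            for c in range(0, w, block_w)]

def is_hdr_scene(preview, block_size=(2, 2), global_th=0.5, local_th=0.5):
    # Step 04: placeholder "global dynamic information": the normalized
    # luminance spread over the whole preview image.
    global_info = (int(preview.max()) - int(preview.min())) / 255.0
    # Step 05: placeholder per-block "local dynamic information".
    blocks = split_into_blocks(preview, *block_size)
    local_info = max((int(b.max()) - int(b.min())) / 255.0 for b in blocks)
    # Step 06: combine global and local evidence into one decision.
    return global_info > global_th or local_info > local_th
```

For an 8 × 8 preview with 2 × 2 blocks, `split_into_blocks` yields the 4 × 4 grid of 16 blocks used as the running example later in the description.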
Referring to fig. 2, the present embodiment further provides an HDR scene detection apparatus 10, where the HDR scene detection apparatus 10 includes an obtaining module 11, a processing module 13, and a determining module 15. The HDR scene detection method according to the embodiment of the present application is applicable to the HDR scene detection apparatus 10. Wherein the obtaining module 11 is configured to execute the method in 01. The processing module 13 is used for executing the methods in 04 and 05. The determination module 15 is used to execute the method in 06. Namely, the obtaining module 11 is configured to: a preview image P0 of the current scene is acquired. The processing module 13 is configured to: acquiring global dynamic information of a preview image P0; and dividing the preview image P0 into a plurality of blocks P01 with a predetermined size, and obtaining the local dynamic information of each block P01. The determination module 15 is configured to: and determining whether the current scene is an HDR scene according to the global dynamic information and the local dynamic information.
Referring to fig. 3, the present embodiment further provides a terminal 100, where the terminal 100 includes one or more processors 30, a memory 50, and one or more programs. Wherein one or more programs are stored in the memory 50 and executed by the one or more processors 30, the programs including instructions for performing the HDR scene detection method of embodiments of the present application. That is, when one or more processors 30 execute a program, the processors 30 may implement the methods in 01, 04, 05, and 06. That is, the one or more processors 30 are operable to: acquiring a preview image P0 of the current scene; acquiring global dynamic information of a preview image P0; dividing the preview image P0 into a plurality of blocks P01 with preset sizes, and acquiring local dynamic information of each block P01; and determining whether the current scene is an HDR scene according to the global dynamic information and the local dynamic information.
Specifically, the terminal 100 may include, but is not limited to, a mobile phone, a notebook computer, a smart television, a tablet computer, a smart watch, a head-mounted display device, a drone, a digital camera, a digital camcorder, or a computer. The HDR scene detection apparatus 10 may be an integrated collection of functional modules within the terminal 100. This application describes only the case where the terminal 100 is a mobile phone; the cases where the terminal 100 is another type of device are similar and are not described in detail.
In the HDR scene detection method, the HDR scene detection apparatus 10, the terminal 100, and the non-volatile computer-readable storage medium 200 of the present application, the global dynamic information of the preview image P0 and the local dynamic information of the plurality of blocks P01 of a preset size are calculated, and the dynamic range of the current scene is analyzed from the combination of the two. The dynamic range of the current scene is thereby judged more finely and accurately when determining whether it is an HDR scene, which improves the accuracy of scene detection, helps trigger the HDR mode correctly during photographing, raises the keeper rate of photographs, and improves the user experience.
In method 01, the current scene is the scene from which the user captures the preview image P0, for example, any of various scenes (including buildings, people, scenery, etc.) from day to night. When the preview image P0 of the current scene is acquired, the format of the preview image P0 may be a YUV format, an RGB format, or the like; this is not limited here. In this application, the YUV format is taken as the example format of the preview image P0.
In method 04, the processing module 13 or the processor 30 obtains global dynamic information of the preview image P0. Since the dynamic range is a luminance difference range in the current scene, the global dynamic information can be analyzed based on the luminance value of each pixel in the preview image P0.
In method 05, the preview image P0 acquired from the current scene may have a low light ratio in parts and only a small overexposed area. When a scene in such a situation is photographed, the imaging result can still be improved by HDR algorithmic processing, so the processing module 13 or the processor 30 divides the preview image P0 into a plurality of blocks P01 of a preset size and obtains the local dynamic information of each block P01. As with the global case, the local dynamic information of each block P01 can be analyzed based on the brightness values of the pixels in that block.
Specifically, when the preview image P0 is divided into a plurality of blocks P01 of a preset size, the size of the blocks P01 may be set according to the obtained global dynamic information. For example, if the overall brightness distribution of the current scene is uniform, the preview image P0 needs to be divided into a larger number of blocks P01, that is, the preset size (area) of each block P01 is smaller, to ensure that the dynamic range of the current scene is determined more finely and accurately. If the overall brightness distribution of the current scene varies strongly, the preview image may be divided into a smaller number of blocks P01, that is, the preset size of each block P01 is larger, so that the power consumption of the processing module 13 or the processor 30 is effectively reduced while the dynamic range of the current scene is still determined finely and accurately. In this application, it is assumed as an example that the resolution of the preview image P0 obtained from the current scene is 8 × 8 and the preset size of each block P01 is 2 × 2 (i.e., the preview image P0 is divided into a 4 × 4 grid of blocks).
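The adaptive block-size choice described above can be sketched as follows. The standard-deviation test, its threshold, and the two candidate sizes are illustrative assumptions, not values given in the patent:

```python
import numpy as np

def choose_block_size(preview, std_th=40.0, fine=(2, 2), coarse=(4, 4)):
    # Uniform overall brightness -> smaller (finer) blocks, i.e. more of
    # them, for a more precise judgment of the dynamic range; strongly
    # varying brightness -> larger (coarser) blocks to save compute.
    # std_th, fine, and coarse are illustrative placeholder values.
    if np.std(preview.astype(np.float64)) < std_th:
        return fine
    return coarse
```

On the running 8 × 8 example, a flat preview would be divided with 2 × 2 blocks, while a high-contrast preview would use 4 × 4 blocks.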
In method 06, the processing module 13 or the processor 30 jointly analyzes the calculated global dynamic information of the preview image P0 and the calculated local dynamic information of each block P01. Even when the current scene has a low light ratio in parts and only a small overexposed area, the dynamic range of the current scene can thus be judged accurately to determine whether it is an HDR scene, which helps trigger the HDR mode correctly during photographing, raises the keeper rate of photographs, and improves the user experience.
Referring to fig. 4 and fig. 5, in some embodiments, the HDR scene detection method may further include:
02: acquiring shooting metadata parameters of a current scene;
03: the preview image P0 is subjected to brightness correction according to the shooting metadata parameters. Wherein, 04: the global dynamic information for acquiring the preview image P0 may include: 041: the global dynamic information of the preview image P0 after correction is acquired.
Referring to fig. 2, the obtaining module 11 is further configured to execute the method in 02, and the processing module 13 is further configured to execute the methods in 03 and 041. That is, the obtaining module 11 is further configured to: and acquiring shooting metadata parameters of the current scene. The processing module 13 is further configured to: the preview image P0 is subjected to brightness correction according to the shooting metadata parameters. The acquiring, by the processing module 13, the global dynamic information of the preview image P0 may include: the global dynamic information of the preview image P0 after correction is acquired.
Referring to FIG. 2, processor 30 is also used to execute the methods of 02, 03, and 041. That is, the processor 30 is further configured to: acquiring shooting metadata parameters of a current scene; the preview image P0 is subjected to brightness correction according to the shooting metadata parameters. The obtaining of the global dynamic information of the preview image P0 by the processor 30 may include: the global dynamic information of the preview image P0 after correction is acquired.
When the obtaining module 11 or the processor 30 obtains the preview image P0 of the current scene, the preview image P0 has already been processed and influenced by the brightness-related modules in the Image Signal Processor (ISP) of the Qualcomm platform. For example, camera effect tuning may adjust various modules in the camera, so the preview image P0 may have undergone automatic exposure, automatic focusing, white balance, brightness and color adjustment, and the like, and may not truly reflect the brightness information of the current scene. It is therefore desirable to perform brightness correction on the preview image P0 to ensure that the determination module 15 or the processor 30 can accurately determine whether the current scene is an HDR scene.
Referring to fig. 5 and 6 again, in method 02, specifically, while the obtaining module 11 or the processor 30 obtains the preview image P0, the shooting metadata parameters under the shooting conditions of the current scene are obtained. The shooting metadata parameters may include one or more of ISO (sensitivity), the bright-area brightness gain (drcGain) in the Auto Exposure Control (AEC) module, the dark-area brightness gain (Dark Boost Gain, DBGain) in the AEC module, or face frame information. The drcGain parameter is a ratio calculated from the bright-area information of the current scene in the Qualcomm AEC module; applying drcGain in the ISP makes the bright areas of the preview image P0 less overexposed, darkening them to some extent. Similarly, the DBGain parameter is a ratio calculated from the dark-area information of the current scene in the Qualcomm AEC module; applying DBGain in the ISP brightens the dark areas of the preview image P0.
In the method 03, the processing module 13 or the processor 30 performs brightness correction on the preview image P0 according to the shooting metadata parameter, specifically: the shooting metadata parameters are transmitted to the processing module 13 or the processor 30, and are reflected to the preview image P0, so that the real brightness information of the current scene is obtained through reverse calculation, and the accuracy of the global dynamic information and the local dynamic information is ensured when the global dynamic information and the local dynamic information of the current scene are obtained.
In the embodiment of the application, the obtaining module 11 or the processor 30 first executes the method 01 and the method 02, the processing module 13 or the processor 30 performs brightness correction on the preview image P0 according to the shooting metadata parameters, and then executes the methods 04, 05, and 06 according to the corrected preview image P0, so as to ensure that the real global dynamic information and the real local dynamic information in the current scene are obtained, and improve the accuracy of scene detection.
Referring to fig. 4 and 6, in some embodiments, the shooting metadata parameters include a first brightness gain applied to bright areas of the preview image P0 and a second brightness gain applied to dark areas of the preview image P0. Step 03, performing brightness correction on the preview image P0 according to the shooting metadata parameters, may include traversing all pixels of the preview image P0 and, for each pixel:
031: when the current brightness value of the pixel is larger than a preset first brightness threshold value, taking the product of the current brightness value of the pixel and the first brightness gain as a correction brightness value of the pixel;
033: and when the current brightness value of the pixel is smaller than a preset second brightness threshold value, taking the ratio of the current brightness value of the pixel to the second brightness gain as the correction brightness value of the pixel.
Referring to fig. 2, the processing module 13 is also used for executing the methods 031 and 033. That is, the processing module 13 is further configured to: and when the current brightness value of the pixel is larger than a preset first brightness threshold value, taking the product of the current brightness value of the pixel and the first brightness gain as the correction brightness value of the pixel. And when the current brightness value of the pixel is smaller than a preset second brightness threshold value, taking the ratio of the current brightness value of the pixel to the second brightness gain as the correction brightness value of the pixel.
Referring to fig. 3, the processor 30 is also configured to execute the methods 031 and 033. That is, the processor 30 is further configured to: and when the current brightness value of the pixel is larger than a preset first brightness threshold value, taking the product of the current brightness value of the pixel and the first brightness gain as the correction brightness value of the pixel. And when the current brightness value of the pixel is smaller than a preset second brightness threshold value, taking the ratio of the current brightness value of the pixel to the second brightness gain as the correction brightness value of the pixel.
In the embodiment of the present application, since the dynamic range is the brightness difference range of the current scene, and the preview image P0 was processed with drcGain and DBGain when it was acquired, the first brightness gain may be the drcGain parameter calculated from the bright-area information of the preview image P0 in the Qualcomm AEC, and the second brightness gain may be the DBGain parameter calculated from the dark-area information of the preview image P0 in the Qualcomm AEC. When performing brightness correction on the preview image P0, the processing module 13 or the processor 30 corrects the brightness values of all pixels in the preview image P0 according to the first brightness gain and the second brightness gain, so as to restore the brightness the image would have had without drcGain and DBGain applied. Specifically, the processing module 13 or the processor 30 traverses all pixels in the preview image P0, executes the methods in 031 and 033, and obtains the corrected preview image P0.
Pixels whose brightness value is greater than the first brightness threshold are treated as bright-area pixels, pixels whose brightness value is less than the second brightness threshold are treated as dark-area pixels, and pixels whose brightness value lies between the second and first brightness thresholds need no correction. The first and second brightness thresholds may be set according to the actual scene; this is not limited here. Let the first brightness threshold be Bright_th, the second brightness threshold be Dark_th, and the current brightness value of the pixel at position (x, y) in the preview image P0 be Y(x, y). The corrected brightness value of the pixel is then Y'(x, y), which can be calculated by equation (1).
$$Y'(x,y)=\begin{cases}Y(x,y)\cdot\mathrm{drcGain}, & Y(x,y)>\mathrm{Bright\_th}\\ Y(x,y)/\mathrm{DBGain}, & Y(x,y)<\mathrm{Dark\_th}\\ Y(x,y), & \mathrm{Dark\_th}\le Y(x,y)\le\mathrm{Bright\_th}\end{cases}\qquad(1)$$
In one embodiment, the first luminance threshold Bright_th is 220 and the second luminance threshold Dark_th is 40. If the current luminance value of the currently traversed pixel is 240, then since 240 is greater than Bright_th (220), the corrected luminance value obtained from equation (1) is 240 × drcGain. If the current luminance value of the currently traversed pixel is 10, then since 10 is less than Dark_th (40), the corrected luminance value from equation (1) is 10 / DBGain. If the current luminance value of the currently traversed pixel is 80, then since 80 is greater than Dark_th (40) and less than Bright_th (220), no correction is needed and the luminance value of the pixel remains 80.
Through the inverse calculation of equation (1), the processing module 13 or the processor 30 restores the brightness effect present before drcGain and DBGain were applied, so that the luminance information of the preview image P0 better matches the luminance of the current scene, thereby ensuring the accuracy of current scene detection.
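The piecewise correction of equation (1) can be sketched as follows (a minimal illustration, not the patent's actual implementation; the function name, NumPy array layout, and gain values are assumptions):

```python
import numpy as np

def correct_brightness(y, drc_gain, db_gain, bright_th=220, dark_th=40):
    """Invert the effect of drcGain/DBGain per equation (1): bright pixels
    (above bright_th) are scaled up by drcGain, dark pixels (below dark_th)
    are scaled down by DBGain, and mid-tone pixels are left unchanged."""
    y = y.astype(np.float64)
    out = y.copy()
    out[y > bright_th] = y[y > bright_th] * drc_gain
    out[y < dark_th] = y[y < dark_th] / db_gain
    return out
```

With drcGain = 1.5 and DBGain = 2.0, a pixel at 240 becomes 360, a pixel at 10 becomes 5, and a pixel at 80 stays 80, mirroring the worked example above.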
Referring to fig. 4 and 7, in some embodiments, 04: acquiring the global dynamic information of the preview image P0 may include:
043: counting a global gray histogram of the preview image P0;
045: carrying out normalization processing on the overall gray level histogram to obtain a first normalized histogram; and
047: global dynamic information of the preview image P0 is calculated based on the first normalized histogram, the global dynamic information including a first luminance variance, an image entropy difference, a first overexposed area, and a first overcommed area.
Referring to fig. 2, the processing module 13 is further configured to perform the methods in 043, 045 and 047. That is, the processing module 13 is further configured to: counting a global gray level histogram of the preview image; carrying out normalization processing on the overall gray level histogram to obtain a first normalized histogram; and calculating global dynamic information of the preview image according to the first normalized histogram, wherein the global dynamic information comprises a first brightness variance, an image entropy difference, a first overexposure area and a first excessively dark area.
Referring to fig. 3, the processor 30 is further configured to perform the methods of 043, 045 and 047. That is, the processor 30 is further configured to: counting a global gray level histogram of the preview image; carrying out normalization processing on the overall gray level histogram to obtain a first normalized histogram; and calculating global dynamic information of the preview image according to the first normalized histogram, wherein the global dynamic information comprises a first brightness variance, an image entropy difference, a first overexposure area and a first excessively dark area.
Since the dynamic range is the luminance difference range of the current scene, and a gray histogram reflects the luminance distribution of the pixels in the preview image P0, the dynamic range of the current scene can be analyzed from the information in the gray histogram. Fig. 8 shows preview images P0 obtained in different HDR scenes and the global gray histogram of each: image (a) is a typical strongly backlit HDR scene, with many pixels at both the very bright and the very dark gray levels, and the global gray histogram of image (a) has a large variance; image (b) is a backlit HDR scene with many pixels at the very bright gray levels, and the variance of its global gray histogram is also large; image (c) is an indoor backlit HDR scene with a large overexposed area. Therefore, luminance variance, overexposed area, or overly dark area can each serve as a condition for HDR scene determination.
In an embodiment of the present application, the global dynamic information may include a first luminance variance, an image entropy difference, a first overexposed area, and a first overly dark area. Specifically, the processing module 13 or the processor 30 counts the global gray histogram of the brightness-corrected preview image P0. To prevent singular sample values in the histogram data from distorting the analysis of the current scene's dynamic range, the processing module 13 or the processor 30 normalizes the global gray histogram, for example by max-min normalization, Z-score normalization, or a function transformation, to obtain the first normalized histogram.
In the embodiment of the present application, when normalizing the global gray histogram, the processing module 13 or the processor 30 may take the ratio of each ordinate of the global gray histogram (the number of pixels at each gray level) to the total number of pixels in the preview image P0 as the corresponding ordinate of the first normalized histogram, until all ordinates of the global gray histogram have been updated, obtaining the first normalized histogram. The first normalized histogram is denoted X = (x0, x1, …, x255), where x0, x1, …, x255 are the ordinates of the first normalized histogram and 0, 1, …, 255 are its abscissas (the gray levels of the pixels).
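The described ordinate-by-total normalization can be sketched as follows (the helper name is illustrative; NumPy's bincount does the counting):

```python
import numpy as np

def normalized_histogram(img):
    """256-bin gray histogram of an 8-bit image, with each ordinate divided
    by the total pixel count, giving X = (x0, x1, ..., x255)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    return hist / img.size
```

The resulting bins sum to 1, so they can be read directly as per-gray-level probabilities.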
When the processing module 13 or the processor 30 calculates the first luminance variance of the preview image P0, it first calculates the first luminance mean of the preview image P0 from the first normalized histogram, recorded as

$$\mu_1=\sum_{i=0}^{255} i\cdot x_i$$

The first luminance variance is recorded as

$$\sigma_1^2=\sum_{i=0}^{255}(i-\mu_1)^2\cdot x_i$$

where i, the abscissa of the first normalized histogram, is the gray level, and x_i is the corresponding ordinate.
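In code, the first luminance mean and variance follow directly from the normalized bins (a sketch under the notation above; `x` is the 256-element first normalized histogram):

```python
import numpy as np

def luminance_mean_variance(x):
    """mu1 = sum(i * x_i) and sigma1^2 = sum((i - mu1)^2 * x_i),
    where i is the gray level and x_i the normalized bin height."""
    i = np.arange(256)
    mu = float(np.sum(i * x))
    var = float(np.sum((i - mu) ** 2 * x))
    return mu, var
```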
The entropy of an image reflects the average number of bits needed per gray level, describing the average information content of the image source, and is defined as:

$$E=-\sum_{i=0}^{255} x_i \log_2 x_i$$

With 256 gray levels, the image entropy takes its maximum when the luminance of the preview image P0 is perfectly uniformly distributed (every gray level appears in the preview image P0 with equal probability): E_max = log2(256) = 8. A uniform luminance distribution, however, can be regarded as a lack of dynamic contrast. Fig. 10 shows the image entropy differences D_f calculated from preview images P0 captured in the same scene: the left image yields D_f = 0.66109 and the right image yields D_f = 0.275244; the dynamic range (luminance difference) of the left image is greater than that of the right image, and its image entropy difference is correspondingly greater. Therefore, comparing the image entropy E of the preview image P0 against the maximum entropy E_max indicates the lighting condition to some extent (the larger the difference, the greater the dynamic variation). Denoting the image entropy difference by D_f, it is calculated by equation (2):

$$D_f=E_{max}-E \qquad (2)$$
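A sketch of the entropy difference of formula (2), computed from the normalized histogram (zero bins are skipped, since x·log2(x) → 0 as x → 0; the helper name is illustrative):

```python
import numpy as np

def entropy_difference(x):
    """D_f = E_max - E, with E = -sum(x_i * log2(x_i)) over non-zero bins
    and E_max = log2(256) = 8."""
    nz = x[x > 0]
    e = float(-np.sum(nz * np.log2(nz)))
    return np.log2(256) - e
```

A perfectly uniform histogram gives D_f = 0 (no dynamic contrast signal), while a single-spike histogram gives the maximum D_f = 8.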
The first overexposed area may be obtained as follows: the sum of the areas of all pixels whose gray level exceeds a predetermined overexposure threshold (a preset gray level, e.g. 240) is taken as the first overexposed area. For example, the pixel counts (ordinates) at gray levels 241, 242, …, 255 are added to obtain a total count, and the product of this total and the area of a single pixel is taken as the first overexposed area.
Likewise, the first overly dark area may be obtained in a similar way: the sum of the areas of all pixels whose gray level is below a predetermined over-dark threshold (a preset gray level, e.g. 20) is taken as the first overly dark area. For example, the pixel counts (ordinates) at gray levels 0, 1, 2, …, 19 are added to obtain a total count, and the product of this total and the area of a single pixel is taken as the first overly dark area.
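Both areas reduce to tail sums of the normalized histogram; the sketch below reports them as fractions of the image (multiply by the per-pixel area for physical units, as the text does; thresholds 240 and 20 are the examples given above):

```python
import numpy as np

def over_areas(x, over_th=240, dark_th=20):
    """First overexposed area: sum of normalized counts at gray levels above
    over_th (241..255); first overly dark area: sum below dark_th (0..19).
    Returned as fractions of the total image area."""
    over = float(np.sum(x[over_th + 1:]))
    dark = float(np.sum(x[:dark_th]))
    return over, dark
```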
Referring to fig. 4 and 10, in some embodiments, 05: obtaining the local dynamic information of each tile P01 may include:
051: counting a gray histogram of each block P01;
053: normalizing the gray level histogram of each block P01 to obtain a plurality of second normalized histograms in one-to-one correspondence; and
055: local dynamic information of each block P01 is calculated according to each second normalized histogram, and the local dynamic information includes a second luminance variance, a second overexposed area, and a second overcommed area.
Referring to fig. 2, the processing module 13 is also used for executing the methods 051, 053 and 055. That is, the processing module 13 is further configured to: counting a gray histogram of each block P01; normalizing the gray histogram of the block P01 to obtain a second normalized histogram; and calculating local dynamic information of the block P01 according to the second normalized histogram, wherein the local dynamic information includes a second luminance variance, a second overexposed area and a second excessively dark area.
Referring to fig. 3, the processor 30 is also used for executing the methods 051, 053 and 055. That is, the processor 30 is further configured to: counting a gray histogram of each block P01; normalizing the gray level histogram of each block P01 to obtain a plurality of second normalized histograms in one-to-one correspondence; and calculating local dynamic information of each block P01 according to each second normalized histogram, wherein the local dynamic information comprises a second brightness variance, a second overexposed area and a second excessively dark area.
Wherein, a block P01 corresponds to a gray histogram and a second normalized histogram, and when the local dynamic information of the block P01 is calculated, the calculation is performed according to the second normalized histogram corresponding to the block P01.
Similarly, when the processing module 13 or the processor 30 obtains the local dynamic information of each block P01, it also counts the gray histogram of each block P01 and normalizes the histogram corresponding to each block P01 to obtain its second normalized histogram; the normalization is the same as for the global gray histogram and is not repeated here. The second normalized histogram is denoted Y = (y0, y1, …, y255), where y0, y1, …, y255 are the ordinates of the second normalized histogram and 0, 1, …, 255 are its abscissas (the gray levels of the pixels).
When the processing module 13 or the processor 30 calculates the second luminance variance of each block P01, it first calculates the second luminance mean of the block from the second normalized histogram corresponding to the block P01, recorded as

$$\mu_2=\sum_{i=0}^{255} i\cdot y_i$$

The second luminance variance is recorded as

$$\sigma_2^2=\sum_{i=0}^{255}(i-\mu_2)^2\cdot y_i$$

where i, the abscissa of the second normalized histogram, is the gray level, and y_i is the corresponding ordinate.
The second overexposed area and the second overly dark area are calculated in the same way as the first overexposed area and the first overly dark area. The threshold for determining whether a block P01 contains an overexposed area may be the same as the global overexposure threshold, and the threshold for determining whether a block P01 contains an overly dark area may be the same as the global over-dark threshold.
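Putting the per-block statistics together, the 4 × 4 tiling and the second-order statistics can be sketched as follows (the tile layout and return format are assumptions):

```python
import numpy as np

def block_dynamic_info(img, rows=4, cols=4, over_th=240, dark_th=20):
    """Split an 8-bit image into rows x cols blocks; for each block compute,
    from its own normalized histogram, the second luminance variance and the
    second overexposed / overly dark area fractions."""
    h, w = img.shape
    info = []
    for r in range(rows):
        for c in range(cols):
            tile = img[r * h // rows:(r + 1) * h // rows,
                       c * w // cols:(c + 1) * w // cols]
            y = np.bincount(tile.ravel(), minlength=256) / tile.size
            i = np.arange(256)
            mu = np.sum(i * y)
            info.append((float(np.sum((i - mu) ** 2 * y)),
                         float(np.sum(y[over_th + 1:])),
                         float(np.sum(y[:dark_th]))))
    return info
```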
Referring to fig. 4 and 11, in some embodiments, 06: determining whether the current scene is an HDR scene according to the global dynamic information and the local dynamic information may include:
061: when the global dynamic information is larger than a global preset threshold value, determining that the current scene is an HDR scene;
063: when the global dynamic information is less than or equal to a global preset threshold value, calculating a local dynamic estimated total value of the preview image P0 according to the local dynamic information;
065: when the total local dynamic estimation value is greater than a preset estimation threshold, determining that the current scene is an HDR scene;
067: and when the local dynamic estimated total value is less than or equal to the preset estimation threshold, determining that the current scene is a non-HDR scene.
Referring to fig. 2, the processing module 13 is also used for executing the methods of 061, 063, 065 and 067. That is, the processing module 13 is further configured to: when the global dynamic information is greater than a global preset threshold, determine that the current scene is an HDR scene; when the global dynamic information is less than or equal to the global preset threshold, calculate a local dynamic estimated total value of the preview image P0 according to the local dynamic information; when the local dynamic estimated total value is greater than a preset estimation threshold, determine that the current scene is an HDR scene; and when the local dynamic estimated total value is less than or equal to the preset estimation threshold, determine that the current scene is a non-HDR scene.

Referring to fig. 3, the processor 30 is also used for executing the methods of 061, 063, 065 and 067. That is, the processor 30 is further configured to: when the global dynamic information is greater than a global preset threshold, determine that the current scene is an HDR scene; when the global dynamic information is less than or equal to the global preset threshold, calculate a local dynamic estimated total value of the preview image P0 according to the local dynamic information; when the local dynamic estimated total value is greater than a preset estimation threshold, determine that the current scene is an HDR scene; and when the local dynamic estimated total value is less than or equal to the preset estimation threshold, determine that the current scene is a non-HDR scene.
Specifically, after the processing module 13 or the processor 30 calculates the global dynamic information of the preview image P0 and the local dynamic information of each block P01, it first judges from the global dynamic information whether the current scene is an HDR scene: if the global dynamic information is greater than the global preset threshold, the current scene is determined to be an HDR scene. If the global dynamic information is less than or equal to the global preset threshold, the local dynamic estimated total value of the preview image P0 is calculated from the local dynamic information of each block P01, so as not to miss local regions of strong bright-dark contrast or small overexposed regions; the dynamic range of the current scene is thereby judged more finely and accurately, determining whether the current scene is an HDR scene and improving the accuracy of scene detection.
The processing module 13 or the processor 30 compares the local dynamic total estimation value with a preset estimation threshold, determines that the current scene is an HDR scene when the local dynamic total estimation value is greater than the preset estimation threshold, and finally determines that the current scene is a non-HDR scene when the local dynamic total estimation value is less than or equal to the preset estimation threshold.
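The two-stage decision reduces to a short guard (a sketch; the global check and the block-level total are assumed to be computed elsewhere, and the function name is an assumption):

```python
def is_hdr_scene(global_check_fired, local_total, est_threshold):
    """Stage 1: any global criterion firing settles the decision. Stage 2:
    otherwise compare the local dynamic estimated total value against the
    preset estimation threshold (strictly greater means HDR)."""
    if global_check_fired:
        return True
    return local_total > est_threshold
```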
Referring to fig. 12, in some embodiments, the global dynamic information includes a first luminance variance, an image entropy difference, a first overexposed area, and a first overly dark area, and the global preset threshold includes a first variance threshold, an entropy difference threshold, a first overexposure threshold, a second overexposure threshold, and a first over-dark threshold. 061: when the global dynamic information is greater than the global preset threshold, determining that the current scene is an HDR scene may include:
0611: when the first brightness variance is larger than a first variance threshold value, determining that the current scene is an HDR scene; or
0613: when the image entropy difference is larger than the entropy difference threshold value, determining that the current scene is an HDR scene; or
0615: when the first overexposure area is larger than a first overexposure threshold value, determining that the current scene is an HDR scene; or
0617: and when the first overexposed area is greater than the second overexposure threshold and the first overly dark area is greater than the first over-dark threshold, determining that the current scene is an HDR scene.
Referring to fig. 2, the processing module 13 is further configured to execute the methods of 0611, 0613, 0615, and 0617. That is, the processing module 13 is further configured to: when the first luminance variance is greater than the first variance threshold, determine that the current scene is an HDR scene; or when the image entropy difference is greater than the entropy difference threshold, determine that the current scene is an HDR scene; or when the first overexposed area is greater than the first overexposure threshold, determine that the current scene is an HDR scene; or when the first overexposed area is greater than the second overexposure threshold and the first overly dark area is greater than the first over-dark threshold, determine that the current scene is an HDR scene.

Referring to fig. 3, the processor 30 is further configured to execute the methods of 0611, 0613, 0615, and 0617. That is, the processor 30 is further configured to: when the first luminance variance is greater than the first variance threshold, determine that the current scene is an HDR scene; or when the image entropy difference is greater than the entropy difference threshold, determine that the current scene is an HDR scene; or when the first overexposed area is greater than the first overexposure threshold, determine that the current scene is an HDR scene; or when the first overexposed area is greater than the second overexposure threshold and the first overly dark area is greater than the first over-dark threshold, determine that the current scene is an HDR scene.
When the processing module 13 or the processor 30 judges from the global dynamic information whether the current scene is an HDR scene, any one or more of methods 0611, 0613, 0615 or 0617 may be used. For example, denote the first variance threshold as TH1, the first overexposure threshold as TH2, the entropy difference threshold as TH3, the second overexposure threshold as TH4, and the first over-dark threshold as TH5; the thresholds are all different. Referring to fig. 13: if the first luminance variance σ1² > TH1, the current scene is determined to be an HDR scene; otherwise the first overexposed area is compared, and if it is greater than TH2, the current scene is an HDR scene; otherwise the image entropy difference is compared, and if D_f > TH3, the current scene is an HDR scene; otherwise, if the first overexposed area is greater than TH4 and the first overly dark area is greater than TH5, the current scene is an HDR scene; otherwise, the local dynamic estimated total value of the preview image P0 is calculated from the local dynamic information, to analyze whether the preview image P0 contains regions of strong local luminance contrast or small overexposed regions.
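The ordered chain of fig. 13 can be sketched as follows (threshold names TH1–TH5 follow the text; the function signature is an assumption):

```python
def global_hdr_check(var1, over_area, entropy_diff, dark_area,
                     th1, th2, th3, th4, th5):
    """Sequential global tests: first luminance variance vs TH1, first
    overexposed area vs TH2, image entropy difference vs TH3, and finally
    the combined overexposed (TH4) + overly dark (TH5) test."""
    if var1 > th1:
        return True
    if over_area > th2:
        return True
    if entropy_diff > th3:
        return True
    return over_area > th4 and dark_area > th5
```

Returning False means none of the four global conditions fired, which is exactly the case where the block-level analysis takes over.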
Referring to fig. 4 and 14, in some embodiments, 063: when the global dynamic information is less than or equal to the global preset threshold, calculating a local dynamic estimation total value of the preview image P0 according to the local dynamic information, including:
0631: and when the first luminance variance is less than or equal to the first variance threshold, the image entropy difference is less than or equal to the entropy difference threshold, the first overexposed area is less than or equal to the first overexposure threshold, and the first overexposed area and the first overly dark area do not simultaneously exceed the second overexposure threshold and the first over-dark threshold respectively, calculating the local dynamic estimated total value of the preview image P0 according to the local dynamic information.
Referring to fig. 2, the processing module 13 is further configured to execute the method of 0631. That is, the processing module 13 is further configured to: when the first luminance variance is less than or equal to the first variance threshold, the image entropy difference is less than or equal to the entropy difference threshold, the first overexposed area is less than or equal to the first overexposure threshold, and the first overexposed area and the first overly dark area do not simultaneously exceed the second overexposure threshold and the first over-dark threshold respectively, calculate the local dynamic estimated total value of the preview image P0 according to the local dynamic information.

Referring to fig. 3, the processor 30 is further configured to execute the method of 0631. That is, the processor 30 is further configured to: when the first luminance variance is less than or equal to the first variance threshold, the image entropy difference is less than or equal to the entropy difference threshold, the first overexposed area is less than or equal to the first overexposure threshold, and the first overexposed area and the first overly dark area do not simultaneously exceed the second overexposure threshold and the first over-dark threshold respectively, calculate the local dynamic estimated total value of the preview image P0 according to the local dynamic information.
With reference to fig. 13, further, denote condition one: the first luminance variance is greater than the first variance threshold; condition two: the first overexposed area is greater than the first overexposure threshold; condition three: the image entropy difference is greater than the entropy difference threshold; and condition four: the first overexposed area is greater than the second overexposure threshold and the first overly dark area is greater than the first over-dark threshold. When the global dynamic information satisfies none of conditions one to four, the processing module 13 or the processor 30 calculates the local dynamic estimated total value of the preview image P0 according to the local dynamic information. For example: when the first luminance variance is less than or equal to the first variance threshold, the image entropy difference is less than or equal to the entropy difference threshold, the first overexposed area is less than or equal to the first overexposure threshold, and the first overexposed area is less than or equal to the second overexposure threshold while the first overly dark area is greater than the first over-dark threshold; or when, under the same first three conditions, the first overexposed area is less than or equal to the second overexposure threshold and the first overly dark area is less than or equal to the first over-dark threshold; or when, under the same first three conditions, the first overexposed area is greater than the second overexposure threshold and the first overly dark area is less than or equal to the first over-dark threshold.
Referring to fig. 14 and 15, in some embodiments, 063: calculating the total local dynamic estimation value of the preview image P0 according to the local dynamic information may include:
0633: traversing the block P01, and taking the block P01 with the local dynamic information larger than the local preset threshold as an HDR block to obtain a local HDR distribution map P1 of the preview image, wherein the local HDR distribution map P1 comprises the HDR attribute of each block P01;
0635: obtaining a local dynamic estimation value of each block P01 according to a preset weight map P2 and a local HDR distribution map P1, where the preset weight map P2 includes a weight value of each block P01; and
0637: the local motion estimation values of each block P01 are summed to obtain a local motion estimation total value of the preview image P0.
Referring to fig. 2, the processing module 13 is also used for executing the methods of 0633, 0635, and 0637. That is, the processing module 13 is further configured to: traverse the blocks P01, taking each block P01 whose local dynamic information is greater than the local preset threshold as an HDR block, to obtain a local HDR distribution map P1 of the preview image P0, the local HDR distribution map comprising the HDR attribute of each block P01; obtain a local dynamic estimation value of each block P01 according to a preset weight map P2 and the local HDR distribution map P1, the preset weight map P2 comprising a weight value for each block P01; and sum the local dynamic estimation values of all blocks P01 to obtain the local dynamic estimated total value of the preview image P0.

Referring to fig. 3, the processor 30 is also used for executing the methods of 0633, 0635, and 0637. That is, the processor 30 is further configured to: traverse the blocks P01, taking each block P01 whose local dynamic information is greater than the local preset threshold as an HDR block, to obtain a local HDR distribution map P1 of the preview image P0, the local HDR distribution map comprising the HDR attribute of each block P01; obtain a local dynamic estimation value of each block P01 according to a preset weight map P2 and the local HDR distribution map P1, the preset weight map P2 comprising a weight value for each block P01; and sum the local dynamic estimation values of all blocks P01 to obtain the local dynamic estimated total value of the preview image P0.
As shown in fig. 15, the processing module 13 or the processor 30 divides the preview image P0 into 4 × 4 blocks P01 and determines in turn whether each block P01 is an HDR block. Specifically, a block P01 whose local dynamic information is greater than the local preset threshold is taken as an HDR block and marked in the local HDR distribution map P1: if the current block P01 is determined to be locally HDR (its local dynamic information is greater than the local preset threshold), it is marked as 1 in the local HDR distribution map P1; if it is determined to be non-locally HDR (its local dynamic information is less than or equal to the local preset threshold), it is marked as 0. Here "0" or "1" is the HDR attribute of the block in the local HDR distribution map P1. The global preset threshold and the local preset threshold are different.
The preset weight map P2 includes a weight value corresponding to each block P01. The processing module 13 or the processor 30 calculates the local dynamic estimation value of each block P01 from the block's HDR attribute and its corresponding weight value, until the local dynamic estimation values of all blocks P01 have been calculated, and finally accumulates the 16 local dynamic estimation values to obtain the local dynamic estimated total value of the preview image P0. Each weight value in the preset weight map P2 may be 1, 2 or 3, which is not limited here. This application takes the case where each weight value of the preset weight map P2 is 1 as an example.
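With every HDR attribute 0 or 1 and a weight per block, the total is a weighted sum (a sketch; the 4 × 4 maps are represented here as nested lists or arrays, and the helper name is an assumption):

```python
import numpy as np

def local_dynamic_total(hdr_map, weight_map):
    """Multiply the local HDR distribution map (0/1 HDR attribute per block)
    element-wise by the preset weight map and sum the products, giving the
    local dynamic estimated total value."""
    return int(np.sum(np.asarray(hdr_map) * np.asarray(weight_map)))
```

With all weights equal to 1, as in the example above, the total is simply the count of HDR blocks.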
Referring to fig. 16 and 18, in some embodiments, 063: calculating a local dynamic estimation total value of the preview image P0 according to the local dynamic information, further comprising:
0639: calculating a saliency region and a face information region of the preview image P0;
0635: obtaining the local dynamic estimation value of each block P01 according to the preset weight map P2 and the local HDR distribution map P1 may include:
06351: adjusting the weight value of a block P01 corresponding to the salient region in a preset weight map P2 and/or adjusting the weight value of a block P01 corresponding to the face information region; and
06353: the local dynamic estimation value of each block P01 is obtained according to the adjusted weight map P2 and the local HDR distribution map P1.
Referring to fig. 2, the processing module 13 is further used for executing the methods of 0639, 06351, and 06353. That is, the processing module 13 is further configured to: calculating a saliency region and a face information region of the preview image P0; adjusting the weight value of a block P01 corresponding to the salient region in a preset weight map P2 and/or adjusting the weight value of a block P01 corresponding to the face information region; and acquiring a local dynamic estimation value of each block P01 according to the adjusted weight map P2 and the local HDR distribution map P1.
Referring to FIG. 3, the processor 30 is further configured to perform the methods of 0639, 06351, and 06353. That is, the processor 30 is further configured to: calculating a saliency region and a face information region of the preview image P0; adjusting the weight value of a block P01 corresponding to the salient region in a preset weight map P2 and/or adjusting the weight value of a block P01 corresponding to the face information region; and acquiring a local dynamic estimation value of each block P01 according to the adjusted weight map P2 and the local HDR distribution map P1.
When a person looks at an image, attention is often drawn to a particular region; this region is called the saliency region and is regarded by most viewers as the most important part of the image, so its quality strongly influences how the image effect is judged. A larger weight should therefore be designed for the blocks P01 where the saliency region is located. In addition, if a face (or portrait) is present in the captured scene, its brightness should be preserved as much as possible; if it is too dark, HDR processing is performed to brighten the face and improve the imaging effect, so the blocks P01 where the face information region is located should also be designed with a larger weight.
In one embodiment, as shown in fig. 17, assume that the saliency region in the preview image P0 is the hatched portion in the figure. The processing module 13 or the processor 30 calculates the saliency region of the preview image P0 using a histogram contrast-based (HC) pixel saliency detection method. Specifically, the saliency value of a pixel in the preview image P0 is defined by the contrast between its color and the colors of all other pixels in the preview image P0, as shown in formula (3):
S(I_k) = ∑_{I_i ∈ I} D(I_k, I_i)    (3)
In formula (3), S(I_k) is the saliency value of pixel I_k, and D(I_k, I_i) is the color distance between pixel I_k and pixel I_i in the Lab color space. After formula (3) is evaluated, with the optimizations of the HC method, the saliency value of every pixel of the preview image P0 is obtained, and the saliency region of the preview image P0 (shown as the shaded portion in fig. 17) is screened out by threshold segmentation. For example, the region formed by the pixels whose saliency values are larger than the segmentation threshold is taken as the saliency region.
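The histogram-contrast computation of formula (3) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the quantization into a fixed number of bins per channel, the Euclidean channel distance standing in for the Lab distance, and the use of the mean saliency as the segmentation threshold are all assumptions made for the example.

```python
import numpy as np

def hc_saliency(img, bins=8):
    """Histogram-contrast (HC) saliency sketch.

    img: HxWx3 float array, nominally in Lab color space.
    Returns a per-pixel saliency map and a binary saliency mask.
    """
    h, w, _ = img.shape
    # Quantize each channel into `bins` levels and index each pixel's color.
    lo, hi = img.min(axis=(0, 1)), img.max(axis=(0, 1))
    q = np.clip(((img - lo) / (hi - lo + 1e-9) * bins).astype(int), 0, bins - 1)
    idx = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]

    # Frequency of each quantized color and its mean (centroid) color.
    n_colors = bins ** 3
    counts = np.bincount(idx.ravel(), minlength=n_colors)
    freq = counts.astype(float) / (h * w)
    centroids = np.zeros((n_colors, 3))
    for c in range(3):
        sums = np.bincount(idx.ravel(), weights=img[..., c].ravel(),
                           minlength=n_colors)
        centroids[:, c] = np.where(counts > 0, sums / np.maximum(counts, 1), 0)

    # Formula (3): the saliency of a color is its frequency-weighted color
    # distance to every other color; each pixel inherits its color's score.
    dist = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=2)
    color_sal = (dist * freq[None, :]).sum(axis=1)
    sal = color_sal[idx]

    # Threshold segmentation: pixels above the mean saliency form the region.
    return sal, sal > sal.mean()
```

In the HC method proper, the per-color scores are further smoothed across neighboring colors; that refinement is omitted here for brevity.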
The face information region (e.g., the face region enclosed by the square frame on the human-shaped icon in fig. 18) may be obtained from other modules of the camera of the terminal 100, which perform face detection and output its result (including the number of faces and the corresponding face frame information).
In one embodiment, if only the saliency region exists in the preview image P0, the processing module 13 or the processor 30 adjusts the weight values of the blocks P01 in which the saliency region is located. As shown in fig. 16, the preview image P0 is divided into 16 blocks P01, and parts of the blocks at (row 2, column 1), (row 2, column 2), (row 3, column 1), and (row 3, column 2) are saliency regions, so the weight values of these blocks P01 may all be increased, for example by adding 1 or 2 to the corresponding weight values in the preset weight map P2. Alternatively, the adjustment may be made according to the ratio of the area of the saliency region inside each block P01 to the area of the whole block P01: the larger the area ratio of the saliency region, the larger the increase of the weight value. As shown in fig. 17, if the area ratio of the saliency region in the block P01 at (row 3, column 1) of the preview image P0 is greater than that in the block P01 at (row 2, column 1), the weight value of the former may be increased by 2 and that of the latter by 1. For example, the processing module 13 or the processor 30 calculates the local dynamic estimation total value to be 6 according to the adjusted weight map P2 and the local HDR distribution map P1 in fig. 17, and compares it with a preset estimation threshold; if the local dynamic estimation total value is greater than the preset estimation threshold, the determining module 15 or the processor 30 determines that the current scene is an HDR scene, and otherwise that it is a non-HDR scene.
In another embodiment, with reference to fig. 18, the average luminance of the face information region in the preview image P0 is calculated. When this average luminance is smaller than a preset face luminance threshold, the weight value of the block P01 in which the face information region is located is increased, so that it is greater than the weight value of a block in which only the saliency region exists. As with the saliency region, the adjustment may be made according to the ratio of the area of the face information region inside each block P01 to the area of the whole block P01: the larger the area ratio of the face information region, the larger the increase of the weight value. For example, the processing module 13 or the processor 30 calculates the local dynamic estimation total value to be 8 according to the adjusted weight map P2 and the local HDR distribution map P1 in fig. 18, and compares it with a preset estimation threshold; if the local dynamic estimation total value is greater than the preset estimation threshold, the determining module 15 or the processor 30 determines that the current scene is an HDR scene, and otherwise that it is a non-HDR scene. Alternatively, the weight values of a block P01 containing only the face information region and a block P01 containing only the saliency region may be set to be the same, while the weight value of a block P01 containing both is set larger than either.
If the face information frame or the saliency region lies on the boundary between blocks P01, the weight values of all the blocks P01 involved are increased. Finally, the processing module 13 or the processor 30 obtains the local dynamic estimation value of each block P01 according to the adjusted weight map P2 and the local HDR distribution map P1.
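The weight adjustment and the weighted local dynamic estimation described above can be sketched as follows. The +1/+2/+3 boosts, the 0.5 area-ratio cut-off, and the estimation threshold are illustrative assumptions; the patent discloses the mechanism but not concrete values.

```python
import numpy as np

def local_dynamic_total(hdr_map, weight_map, sal_ratio, face_ratio,
                        face_too_dark, est_threshold=5.0):
    """Sketch of the weighted local-dynamic estimate.

    hdr_map:    2D array of 0/1 HDR attributes, one per block P01.
    weight_map: preset weight map P2 (same shape).
    sal_ratio / face_ratio: per-block area ratio (0..1) of the saliency /
        face information region inside each block.
    face_too_dark: True when the face region's mean luminance falls below
        the preset face luminance threshold.
    Returns the local dynamic estimation total value and the HDR decision.
    """
    w = weight_map.astype(float).copy()
    # Larger saliency coverage -> larger weight boost (+1 or +2).
    w += np.where(sal_ratio > 0.5, 2, np.where(sal_ratio > 0, 1, 0))
    # A too-dark face outweighs a saliency-only block (+2 or +3).
    if face_too_dark:
        w += np.where(face_ratio > 0.5, 3, np.where(face_ratio > 0, 2, 0))
    total = float((w * hdr_map).sum())   # sum of per-block estimates
    return total, total > est_threshold  # HDR scene if above the threshold
```

For a 4x4 grid where a single HDR block is more than half covered by the saliency region, the unit weight 1 is boosted to 3 and the total is 3, below the assumed threshold of 5, so the scene would be judged non-HDR.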
Referring to fig. 19, in some embodiments, the local dynamic information includes a second luminance variance, a second overexposure area, and a second too dark area, and the local preset threshold includes a second variance threshold, a third overexposure threshold, and a second too dark threshold. 0633: taking the block P01 with the local dynamic information greater than the local preset threshold as the HDR block, including:
06331: taking the block P01 with the second luminance variance larger than the second variance threshold as the HDR block; and/or
06333: taking the block P01 with the second overexposure area larger than the third overexposure threshold as an HDR block; and/or
06335: the block P01 with the second too dark area larger than the second too dark threshold is taken as the HDR block.
Referring to fig. 2, the processing module 13 is further used for executing the methods of 06331, 06333, and 06335. That is, the processing module 13 is further configured to: taking the block P01 with the second luminance variance larger than the second variance threshold as the HDR block; and/or taking the block P01 with the second overexposure area larger than the third overexposure threshold as the HDR block; and/or the block P01 with the second too dark area larger than the second too dark threshold is taken as the HDR block.
Referring to fig. 3, the processor 30 is further configured to execute methods of 06331, 06333, and 06335. That is, the processor 30 is further configured to: taking the block P01 with the second luminance variance larger than the second variance threshold as the HDR block; and/or taking the block P01 with the second overexposure area larger than the third overexposure threshold as the HDR block; and/or the block P01 with the second too dark area larger than the second too dark threshold is taken as the HDR block.
In the embodiment of the present application, the block P01 whose second luminance variance σ₂² is greater than the second variance threshold is regarded as an HDR block; and/or the block P01 whose second overexposure area is greater than the third overexposure threshold is regarded as an HDR block; and/or the block whose second too dark area is greater than the second too dark threshold is regarded as an HDR block. When none of the second luminance variance σ₂², the second overexposure area, and the second too dark area is greater than its corresponding threshold, the corresponding block P01 is determined to be a non-HDR block. If every block is a non-HDR block, the finally calculated local dynamic estimation total value of the preview image P0 is 0, and the determining module 15 or the processor 30 determines that the current scene is a non-HDR scene.
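The per-block classification of steps 06331 to 06335 reduces to an OR over three threshold tests, as the following sketch shows; the threshold values are placeholders, since the patent does not disclose concrete numbers.

```python
import numpy as np

def classify_hdr_blocks(var2, over2, dark2,
                        var_thr=0.02, over_thr=0.05, dark_thr=0.05):
    """A block P01 is an HDR block if ANY local statistic exceeds its
    local preset threshold (steps 06331/06333/06335 joined by and/or).

    var2:  second luminance variance per block.
    over2: second overexposure area (fraction of overexposed pixels).
    dark2: second too dark area (fraction of too-dark pixels).
    Returns the local HDR distribution map P1 (1 = HDR block).
    """
    hdr = (var2 > var_thr) | (over2 > over_thr) | (dark2 > dark_thr)
    return hdr.astype(int)
```

A block with low variance but a large too-dark fraction is still flagged, matching the "and/or" wording: the three tests are alternatives, not a conjunction.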
Referring to fig. 20, the present embodiment further provides a non-volatile computer-readable storage medium 200 containing a computer program 201. The computer program 201, when executed by one or more processors 30, causes the processors 30 to perform the HDR scene detection method in 01, 02, 03, 04, 05, 06, 031, 033, 041, 043, 045, 047, 051, 053, 055, 061, 063, 065, 067, 0611, 0613, 0615, 0617, 0631, 0633, 0635, 0637, 0639, 06351, 06353, 06331, 06333, and 06335.
For example, the computer program 201, when executed by the one or more processors 30, causes the processors 30 to perform the following method:
01: acquiring a preview image P0 of the current scene;
04: acquiring global dynamic information of a preview image P0;
05: dividing the preview image P0 into a plurality of blocks P01 with preset sizes, and acquiring local dynamic information of each block P01; and
06: and determining whether the current scene is an HDR scene according to the global dynamic information and the local dynamic information.
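Steps 01, 04, 05, and 06 above can be joined into an end-to-end sketch. All thresholds, the 4x4 block grid, and the implicit unit weight map are assumptions for illustration; the patent's preset weight map P2, saliency/face weight adjustments, and image-entropy-difference statistic are omitted for brevity.

```python
import numpy as np

def detect_hdr_scene(gray, block=4,
                     global_var_thr=0.06, over_thr=0.04, dark_thr=0.04,
                     local_var_thr=0.03, est_thr=4.0):
    """End-to-end sketch of steps 01/04/05/06 (thresholds illustrative).

    gray: preview image P0 as a 2D luminance array with values in [0, 1].
    """
    # 04: global dynamic information of the preview image.
    g_var = gray.var()
    g_over = (gray > 0.95).mean()     # first overexposure area
    g_dark = (gray < 0.05).mean()     # first too dark area
    if g_var > global_var_thr or (g_over > over_thr and g_dark > dark_thr):
        return True                    # global information alone decides HDR

    # 05: divide into blocks of a preset size and gather local statistics.
    h, w = gray.shape
    bh, bw = h // block, w // block
    total = 0.0
    for r in range(block):
        for c in range(block):
            b = gray[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            is_hdr = (b.var() > local_var_thr or
                      (b > 0.95).mean() > over_thr or
                      (b < 0.05).mean() > dark_thr)
            total += 1.0 if is_hdr else 0.0   # unit weight per HDR block

    # 06: HDR scene when the local dynamic total exceeds the threshold.
    return total > est_thr
```

A flat mid-gray frame yields zero variance and no clipped pixels at either the global or block level, so it is classified non-HDR; a frame split between pure black and pure white trips the global variance test immediately.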
In the description herein, references to the description of the terms "certain embodiments," "one example," "exemplary," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and not to be construed as limiting the present application, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (14)

1. An HDR scene detection method, comprising:
acquiring a preview image of a current scene;
acquiring global dynamic information of the preview image;
dividing the preview image into a plurality of blocks with preset sizes, and acquiring local dynamic information of each block; and
determining whether the current scene is an HDR scene according to the global dynamic information and the local dynamic information.
2. The HDR scene detection method of claim 1, further comprising:
acquiring shooting metadata parameters of the current scene;
performing brightness correction on the preview image according to the shooting metadata parameters; wherein the acquiring of the global dynamic information of the preview image includes: and acquiring the global dynamic information of the preview image after correction.
3. The HDR scene detection method of claim 2, wherein the capture metadata parameters comprise a first luminance gain and a second luminance gain, the first luminance gain acting on bright areas of the preview image and the second luminance gain acting on dark areas of the preview image, the luminance correction of the preview image according to the capture metadata parameters comprising performing the following method across all pixels of the preview image:
when the current brightness value of the pixel is larger than a preset first brightness threshold value, taking the product of the current brightness value of the pixel and the first brightness gain as a correction brightness value of the pixel;
and when the current brightness value of the pixel is smaller than a preset second brightness threshold value, taking the ratio of the current brightness value of the pixel to the second brightness gain as a correction brightness value of the pixel.
4. The HDR scene detection method of claim 1, wherein said obtaining global dynamic information of the preview image comprises:
counting a global gray level histogram of the preview image;
carrying out normalization processing on the global gray level histogram to obtain a first normalized histogram; and
and calculating global dynamic information of the preview image according to the first normalized histogram, wherein the global dynamic information comprises a first brightness variance, an image entropy difference, a first overexposure area and a first excessively dark area.
5. The HDR scene detection method of claim 1, wherein said obtaining the local dynamic information of each of the blocks comprises:
counting a gray level histogram of each block;
normalizing the gray level histogram of each block to obtain a plurality of second normalized histograms in one-to-one correspondence; and
and calculating local dynamic information of each block according to each second normalized histogram, wherein the local dynamic information comprises a second brightness variance, a second overexposure area and a second excessively dark area.
6. The HDR scene detection method of claim 1, wherein said determining whether the current scene is an HDR scene according to the global dynamic information and the local dynamic information comprises:
when the global dynamic information is larger than a global preset threshold value, determining that the current scene is an HDR scene;
when the global dynamic information is smaller than or equal to the global preset threshold, calculating a local dynamic estimation total value of the preview image according to the local dynamic information;
when the local dynamic estimation total value is larger than a preset estimation threshold value, determining that the current scene is an HDR scene;
and when the local dynamic estimation total value is smaller than a preset estimation threshold, determining that the current scene is a non-HDR scene.
7. The HDR scene detection method of claim 6, wherein the global dynamic information comprises a first luminance variance, an image entropy difference, a first overexposure area and a first excessively dark area, wherein the global preset thresholds comprise a first variance threshold, an entropy difference threshold, a first overexposure threshold, a second overexposure threshold and a first excessively dark threshold, and wherein when the global dynamic information is greater than the global preset threshold, determining that the current scene is an HDR scene comprises:
determining that the current scene is an HDR scene when the first luminance variance is greater than the first variance threshold; or
When the image entropy difference is larger than the entropy difference threshold, determining that the current scene is an HDR scene; or
When the first overexposure area is larger than the first overexposure threshold, determining that the current scene is an HDR scene; or
When the first overexposure area is larger than the second overexposure threshold and the first excessively dark area is larger than the first excessively dark threshold, determining that the current scene is an HDR scene.
8. The HDR scene detection method of claim 6, wherein the global dynamic information comprises a first luminance variance, an image entropy difference, a first overexposure area and a first excessively dark area, the global preset thresholds comprise a first variance threshold, an entropy difference threshold, a first overexposure threshold, a second overexposure threshold and a first excessively dark threshold, and the calculating the total local dynamic estimation value of the preview image according to the local dynamic information when the global dynamic information is less than or equal to the global preset threshold comprises:
and when the first luminance variance is smaller than or equal to the first variance threshold, the image entropy difference is smaller than or equal to the entropy difference threshold, the first overexposure area is smaller than or equal to the first overexposure threshold, the first overexposure area is smaller than or equal to the second overexposure threshold, and the first excessively dark area is larger than the first excessively dark threshold, calculating a local dynamic estimated total value of the preview image according to the local dynamic information.
9. The HDR scene detection method of claim 6, wherein said calculating a local dynamic estimated total value of the preview image according to the local dynamic information comprises:
traversing the blocks, and using the blocks with the local dynamic information larger than a local preset threshold value as HDR blocks to obtain a local HDR distribution map of the preview image, wherein the local HDR distribution map comprises HDR attributes of each block;
obtaining a local dynamic estimation value of each block according to a preset weight map and the local HDR distribution map, wherein the preset weight map comprises a weight value of each block; and
and summing the local dynamic estimation values of each block to obtain a local dynamic estimation total value of the preview image.
10. The HDR scene detection method of claim 9, wherein said calculating a local dynamic estimate total value of the preview image according to the local dynamic information further comprises:
calculating a saliency region and a face information region of the preview image;
the obtaining a local dynamic estimation value of each block according to the preset weight map and the local HDR distribution map includes: adjusting the weight value of the block corresponding to the salient region in the preset weight map, and/or adjusting the weight value of the block corresponding to the face information region; and acquiring a local dynamic estimation value of each block according to the adjusted preset weight map and the local HDR distribution map.
11. The HDR scene detection method of claim 9, wherein the local dynamic information comprises a second luminance variance, a second overexposed area and a second excessively dark area, the local preset thresholds comprise a second variance threshold, a third overexposure threshold and a second excessively dark threshold, and the determining the block with the local dynamic information greater than the local preset threshold as the HDR block comprises:
taking the block with the second luma variance greater than the second variance threshold as an HDR block; and/or
Taking the block with the second overexposure area larger than the third overexposure threshold as an HDR block; and/or
The block with the second excessively dark area larger than the second excessively dark threshold is taken as an HDR block.
12. An HDR scene detection apparatus, comprising:
the acquisition module is used for acquiring a preview image of a current scene;
the processing module is used for acquiring global dynamic information of the preview image; dividing the preview image into a plurality of blocks with preset sizes, and acquiring local dynamic information of each block; and
a determining module, configured to determine whether the current scene is an HDR scene according to the global dynamic information and the local dynamic information.
13. A terminal, characterized in that the terminal comprises:
one or more processors, memory; and
one or more programs, wherein one or more of the programs are stored in the memory and executed by one or more of the processors, the programs comprising instructions for performing the HDR scene detection method of any of claims 1 to 11.
14. A non-transitory computer readable storage medium storing a computer program which, when executed by one or more processors, implements the HDR scene detection method of any of claims 1 to 11.
CN202110984139.4A 2021-08-25 2021-08-25 HDR scene detection method and device, terminal and readable storage medium Active CN113747062B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110984139.4A CN113747062B (en) 2021-08-25 2021-08-25 HDR scene detection method and device, terminal and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110984139.4A CN113747062B (en) 2021-08-25 2021-08-25 HDR scene detection method and device, terminal and readable storage medium

Publications (2)

Publication Number Publication Date
CN113747062A true CN113747062A (en) 2021-12-03
CN113747062B CN113747062B (en) 2023-05-26

Family

ID=78732967

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110984139.4A Active CN113747062B (en) 2021-08-25 2021-08-25 HDR scene detection method and device, terminal and readable storage medium

Country Status (1)

Country Link
CN (1) CN113747062B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114429476A (en) * 2022-01-25 2022-05-03 惠州Tcl移动通信有限公司 Image processing method, image processing apparatus, computer device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100046855A1 (en) * 2005-02-15 2010-02-25 Marcu Gabriel G Methods and Apparatuses For Image Processing
CN103973988A (en) * 2013-01-24 2014-08-06 华为终端有限公司 Scene recognition method and device
CN106067177A (en) * 2016-06-15 2016-11-02 深圳市万普拉斯科技有限公司 HDR scene method for detecting and device
CN108337433A (en) * 2018-03-19 2018-07-27 广东欧珀移动通信有限公司 A kind of photographic method, mobile terminal and computer readable storage medium
CN109510946A (en) * 2017-09-15 2019-03-22 展讯通信(上海)有限公司 HDR scene detection method and system
WO2019183813A1 (en) * 2018-03-27 2019-10-03 华为技术有限公司 Image capture method and device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100046855A1 (en) * 2005-02-15 2010-02-25 Marcu Gabriel G Methods and Apparatuses For Image Processing
CN103973988A (en) * 2013-01-24 2014-08-06 华为终端有限公司 Scene recognition method and device
CN106067177A (en) * 2016-06-15 2016-11-02 深圳市万普拉斯科技有限公司 HDR scene method for detecting and device
WO2017215527A1 (en) * 2016-06-15 2017-12-21 深圳市万普拉斯科技有限公司 Hdr scenario detection method, device, and computer storage medium
CN109510946A (en) * 2017-09-15 2019-03-22 展讯通信(上海)有限公司 HDR scene detection method and system
CN108337433A (en) * 2018-03-19 2018-07-27 广东欧珀移动通信有限公司 A kind of photographic method, mobile terminal and computer readable storage medium
WO2019183813A1 (en) * 2018-03-27 2019-10-03 华为技术有限公司 Image capture method and device


Also Published As

Publication number Publication date
CN113747062B (en) 2023-05-26

Similar Documents

Publication Publication Date Title
CN107635102B (en) Method and device for acquiring exposure compensation value of high-dynamic-range image
CN108337445B (en) Photographing method, related device and computer storage medium
CN110033418B (en) Image processing method, image processing device, storage medium and electronic equipment
EP1996913B1 (en) Systems, methods, and apparatus for exposure control
US9077905B2 (en) Image capturing apparatus and control method thereof
CN108616689B (en) Portrait-based high dynamic range image acquisition method, device and equipment
CN108337446B (en) High dynamic range image acquisition method, device and equipment based on double cameras
WO2014093042A1 (en) Determining an image capture payload burst structure based on metering image capture sweep
KR20060045424A (en) Digital cameras with luminance correction
CN110246101B (en) Image processing method and device
CN110493539B (en) Automatic exposure processing method, processing device and electronic equipment
CN112738411B (en) Exposure adjusting method, exposure adjusting device, electronic equipment and storage medium
CN104052933A (en) Method for determining dynamic range mode, and image obtaining apparatus
CN110047060B (en) Image processing method, image processing device, storage medium and electronic equipment
CN113691724B (en) HDR scene detection method and device, terminal and readable storage medium
WO2023137956A1 (en) Image processing method and apparatus, electronic device, and storage medium
KR101754425B1 (en) Apparatus and method for auto adjusting brightness of image taking device
CN112653845B (en) Exposure control method, exposure control device, electronic equipment and readable storage medium
CN113747062B (en) HDR scene detection method and device, terminal and readable storage medium
US11496694B2 (en) Dual sensor imaging system and imaging method thereof
CN113438411A (en) Image shooting method, image shooting device, computer equipment and computer readable storage medium
van Beek Improved image selection for stack-based hdr imaging
JP2012109849A (en) Imaging device
CN108337448B (en) High dynamic range image acquisition method and device, terminal equipment and storage medium
US20210243344A1 (en) Dual sensor imaging system and depth map calculation method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant