CN111368587A - Scene detection method and device, terminal equipment and computer readable storage medium - Google Patents

Scene detection method and device, terminal equipment and computer readable storage medium

Info

Publication number
CN111368587A
CN111368587A
Authority
CN
China
Prior art keywords
scene
image
component
detected
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811591988.8A
Other languages
Chinese (zh)
Other versions
CN111368587B (en)
Inventor
曾鸣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TCL Research America Inc
Original Assignee
TCL Research America Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TCL Research America Inc filed Critical TCL Research America Inc
Priority to CN201811591988.8A priority Critical patent/CN111368587B/en
Publication of CN111368587A publication Critical patent/CN111368587A/en
Application granted granted Critical
Publication of CN111368587B publication Critical patent/CN111368587B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/50 - Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507 - Summing image-intensity values; Histogram projection analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Facsimile Image Signal Circuits (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the present application are applicable to the technical field of image processing and disclose a scene detection method and device, a terminal device, and a computer-readable storage medium. The method includes: acquiring an image to be detected; dividing the image to be detected into a preset number of regions; determining a target region from the regions, where the target region is a white region and/or a black region; extracting a gray-level histogram of the image to be detected based on the regions outside the target region; and determining the scene of the image to be detected according to the gray-level histogram. The method and device can improve the accuracy of scene detection.

Description

Scene detection method and device, terminal equipment and computer readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular to a scene detection method and apparatus, a terminal device, and a computer-readable storage medium.
Background
With the continuous development of image processing technology, photo quality and shooting effects keep improving.
To improve shooting quality and effect, shooting parameters often need to be adjusted for different shooting scenes (such as backlight, frontlight, non-backlight, or non-frontlight). As shooting devices become increasingly intelligent, existing devices can automatically recognize different scenes and apply corresponding functions to them, achieving better results. For example, when a user takes pictures with a mobile phone in daily life, backlight or frontlight scenes often occur, and an HDR (High Dynamic Range) function is enabled for such scenes during shooting to improve the result.
Current scene detection methods generally determine whether a scene is backlit or frontlit based on statistics of image brightness information. However, in certain scenes, for example when a large white or black background appears or the camera faces a point light source, existing scene detection methods often misjudge, and the scene detection accuracy is low.
Disclosure of Invention
In view of this, embodiments of the present application provide a scene detection method, a scene detection device, a terminal device, and a computer-readable storage medium, so as to solve the problem in the prior art that the scene detection accuracy is low.
A first aspect of an embodiment of the present application provides a scene detection method, including:
acquiring an image to be detected;
dividing the image to be detected into a preset number of areas;
determining a target area from the areas, wherein the target area is a white area and/or a black area;
extracting a gray level histogram of the image to be detected based on the region outside the target region;
and determining the scene of the image to be detected according to the gray level histogram.
With reference to the first aspect, in one possible implementation manner, the determining a target area from the areas includes:
calculating an R component, a G component, and a B component for each of the regions;
and determining the target region from the regions according to the relationship among the R component, the G component, and the B component.
With reference to the first aspect, in a possible implementation manner, the determining the target region from the regions according to the relationship among the R component, the G component, and the B component includes:
calculating a first ratio between the R component and the G component, a second ratio between the B component and the G component, and a third ratio between the R component and the B component of each region;
calculating the sum of the R component, the G component and the B component of each region;
when a preset judging condition is met, judging the area to be a white area, wherein the preset judging condition is that the first ratio is greater than a first preset threshold and smaller than a fifth preset threshold, the second ratio is greater than the first preset threshold and smaller than the fifth preset threshold, the third ratio is greater than the first preset threshold and smaller than the fifth preset threshold, and the sum is greater than a second preset threshold and smaller than a third preset threshold;
and when the first ratio, the second ratio and the third ratio are all smaller than the first preset threshold and the sum is smaller than a fourth preset threshold, judging the area as a black area.
With reference to the first aspect, in a possible implementation manner, after the determining the target region from the regions, the method further includes:
and marking the target area with a preset identification.
With reference to the first aspect, in a possible implementation manner, the determining a scene of the image to be detected according to the gray histogram includes:
calculating the mean value of the gray level histogram;
calculating the variance of the gray level histogram according to the mean value;
judging whether the variance is larger than a fifth preset threshold value or not;
when the variance is larger than the fifth preset threshold, determining that the scene of the image to be detected is a backlight scene or a frontlight scene;
and when the variance is smaller than or equal to the fifth preset threshold, determining that the scene of the image to be detected is a non-backlight scene or a non-frontlight scene.
With reference to the first aspect, in a possible implementation manner, after determining that the scene of the image to be detected is a backlight scene or a frontlight scene, the method further includes:
calculating an average luminance value for each of the regions;
counting the brightness value distribution rule of the image to be detected according to the average brightness value;
when the brightness value distribution rule accords with a first preset distribution rule, the scene of the image to be detected is a backlight scene;
and when the brightness value distribution rule accords with a second preset distribution rule, the scene of the image to be detected is a frontlight scene.
With reference to the first aspect, in a possible implementation manner, after the determining a scene of the image to be detected according to the gray level histogram, the method further includes:
and executing corresponding image processing operation according to the scene of the image to be detected.
A second aspect of the embodiments of the present application provides a scene detection apparatus, including:
the acquisition module is used for acquiring an image to be detected;
the brightness value calculation module is used for dividing the image to be detected into a preset number of areas and calculating the average brightness value of each area;
a target area determining module, configured to determine a target area from the areas, where the target area is a white area and/or a black area;
the gray information calculation module is used for extracting a gray histogram of the image to be detected based on the region outside the target region;
and the scene determining module is used for determining the scene of the image to be detected according to the gray level histogram.
With reference to the second aspect, in one possible implementation manner, the target area determination module includes:
a component calculation unit for calculating an R component, a G component, and a B component of each of the regions;
a determining unit, configured to determine the target region from the regions according to the relationship among the R component, the G component, and the B component.
With reference to the second aspect, in one possible implementation manner, the determining unit includes:
a ratio calculation subunit, configured to calculate a first ratio between the R component and the G component, a second ratio between the B component and the G component, and a third ratio between the R component and the B component of each region;
a sum calculation subunit, configured to calculate the sum of the R component, the G component, and the B component of each region;
a first determining subunit, configured to determine the area as a white area when a preset determining condition is met, where the preset determining condition is that the first ratio is greater than a first preset threshold and smaller than a fifth preset threshold, the second ratio is greater than the first preset threshold and smaller than the fifth preset threshold, the third ratio is greater than the first preset threshold and smaller than the fifth preset threshold, and the sum is greater than a second preset threshold and smaller than a third preset threshold;
a second determining subunit, configured to determine the area as a black area when the first ratio, the second ratio, and the third ratio are all smaller than the first preset threshold, and the sum is smaller than a fourth preset threshold.
With reference to the second aspect, in one possible implementation manner, the apparatus further includes:
and the marking module is used for marking the target area with a preset identifier.
With reference to the second aspect, in one possible implementation manner, the scene determining module includes:
a mean value calculating unit for calculating a mean value of the gray level histogram;
a variance calculating unit for calculating the variance of the gray level histogram according to the mean value;
the judging unit is used for judging whether the variance is larger than a fifth preset threshold value or not;
the first scene determining unit is used for determining that the scene of the image to be detected is a backlight scene or a frontlight scene when the variance is larger than the fifth preset threshold;
and the second scene determining unit is used for determining that the scene of the image to be detected is a non-backlight scene or a non-frontlight scene when the variance is smaller than or equal to the fifth preset threshold.
With reference to the second aspect, in a possible implementation manner, the scene determining module further includes:
an average luminance value calculation unit for calculating an average luminance value of each of the regions;
the distribution rule statistical unit is used for counting the brightness value distribution rule of the image to be detected according to the average brightness value;
the backlight scene determining unit is used for determining the scene of the image to be detected as a backlight scene when the brightness value distribution rule accords with a first preset distribution rule;
and the frontlight scene determining unit is used for determining the scene of the image to be detected as a frontlight scene when the brightness value distribution rule accords with a second preset distribution rule.
With reference to the second aspect, in one possible implementation manner, the apparatus further includes:
and the execution module is used for executing corresponding image processing operation according to the scene of the image to be detected.
A third aspect of the embodiments of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method according to any one of the implementations of the first aspect when executing the computer program.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to any one of the implementations of the first aspect.
Compared with the prior art, the embodiment of the application has the advantages that:
According to the method and the device, the white area and/or the black area in the image to be detected is identified, and the gray-level histogram is counted based on the areas other than the white and black areas; that is, the gray-level information is collected after the black and white areas in the image are removed, and scene detection is then performed based on that information. Because large black or white objects are excluded, the gray-level information of the scene matches the actual brightness, avoiding misjudgments caused by distorted brightness information, so the accuracy of scene detection based on the gray-level information is high.
Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings required by the embodiments or the prior-art description are briefly introduced below. Apparently, the drawings in the following description are only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic block diagram of a flow of a scene detection method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a target area determination process provided in an embodiment of the present application;
fig. 3 is a schematic block diagram of a flow of step S105 provided in an embodiment of the present application;
fig. 4 is a schematic block diagram of a flow of a specific scenario determination process provided in an embodiment of the present application;
fig. 5 is a schematic block diagram of a scene detection apparatus according to an embodiment of the present application;
fig. 6 is a schematic diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Example one
The scene detection method provided by the embodiments of the present application can be applied to intelligent mobile terminal devices with a photographing function, such as mobile phones and tablets.
Referring to fig. 1, a schematic flow chart of a scene detection method provided in an embodiment of the present application is shown, where the scene detection method may include the following steps:
and S101, acquiring an image to be detected.
Step S102, dividing the image to be detected into a preset number of areas.
It should be noted that the preset number can be set according to actual application requirements. Specifically, the number of divided regions may be set according to the platform used. For example, on a Qualcomm platform, the preset number may be 16 × 16. It should be understood that the regions generally have the same area, although regions of different areas can also achieve the purpose of the embodiments of the present application.
Compared with scene detection based on a single pixel point, the scene detection based on the region can greatly reduce the calculated amount, improve the operation efficiency and reduce the influence caused by noise points in the image.
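As an illustration only, the following is a minimal Python sketch of this division step (assuming an H × W × 3 RGB input array and the 16 × 16 grid mentioned above; pixels left over by the integer division of H and W are simply ignored in this sketch):

import numpy as np

def region_means(image, grid=(16, 16)):
    # image: H x W x 3 RGB array with values in 0..255.
    # Returns the per-region mean R, G, B values as a (rows, cols, 3) array.
    rows, cols = grid
    h, w = image.shape[0] // rows, image.shape[1] // cols
    means = np.empty((rows, cols, 3), dtype=np.float64)
    for i in range(rows):
        for j in range(cols):
            block = image[i * h:(i + 1) * h, j * w:(j + 1) * w]
            means[i, j] = block.reshape(-1, 3).mean(axis=0)
    return means

The per-region means computed here are reused by the later steps (the region classification of step S103 and the brightness statistics of steps S401 to S404).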
Step S103, determining a target region from the regions, where the target region is a white region and/or a black region.
After the image is divided into a plurality of regions, a target region needs to be found among them; the target region may be a black region, a white region, or both a black region and a white region. In other words, in some scenes only a white region or only a black region may exist in the image, while in others black and white regions may exist at the same time.
In a specific application, a target area in an image can be found through a ratio relation among the R component, the G component and the B component of each area. Optionally, in an embodiment, the specific process of determining the target area from the areas may include: calculating an R component, a G component and a B component of each region; and determining a target area from the areas according to the relation among the R component, the G component and the B component.
The relationship among the R component, the G component, and the B component may include ratio relationships, a sum relationship, and the like. By calculating the ratios and the sum and then comparing them with the corresponding preset thresholds, a region can be determined to be a white region or a black region when certain conditions are met.
Further, referring to fig. 2, the specific process of determining the target region from the regions according to the relationship among the R component, the G component, and the B component may include:
step S201, calculating a first ratio between the R component and the G component, a second ratio between the B component and the G component, and a third ratio between the R component and the B component of each region.
Step S202, the sum of the R component, the G component and the B component of each area is calculated.
Step S203, when a preset determination condition is met, determining the region to be a white region, where the preset determination condition is that the first ratio is greater than a first preset threshold and smaller than a fifth preset threshold, the second ratio is greater than the first preset threshold and smaller than the fifth preset threshold, the third ratio is greater than the first preset threshold and smaller than the fifth preset threshold, and the sum is greater than a second preset threshold and smaller than a third preset threshold.
That is, a region is judged to be a white region when R/G > threshold1 && R/G < threshold5 && B/G > threshold1 && B/G < threshold5 && R/B > threshold1 && R/B < threshold5 && threshold2 < (R + G + B) < threshold3, where threshold1, threshold2, threshold3, and threshold5 denote the first, second, third, and fifth preset thresholds, respectively.
When a white region appears, overexposure (saturation overflow) needs to be taken into consideration, so the sum of the R, G, and B components needs to be less than the third preset threshold.
Step S204, when the first ratio, the second ratio, and the third ratio are all smaller than the first preset threshold and the sum is smaller than a fourth preset threshold, determining the region to be a black region.
Since the degree of color reproduction differs with the filter coating of the lens, the thresholds may be set according to the sensor used and are not limited here. Typically, the first preset threshold is about 0.95, the second preset threshold is about 540, the third preset threshold is about 720, the fourth preset threshold is about 150, and the fifth preset threshold is about 1.05.
The above describes the process of finding the black and white regions in an image based on the relationships among the R, G, and B ratios of each region, their sum, and the preset thresholds. Other processes may also be used, as long as the target region can be determined.
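For illustration, a minimal sketch of this classification for a single region, using the approximate threshold values given above as defaults (in practice they must be tuned to the sensor, and the small epsilon guarding against division by zero is an implementation detail not taken from the patent):

def classify_region(mean_rgb, t1=0.95, t2=540, t3=720, t4=150, t5=1.05):
    # mean_rgb: (R, G, B) mean values of one region on a 0..255 scale.
    r, g, b = (float(v) for v in mean_rgb)
    eps = 1e-6  # avoid division by zero in very dark regions
    ratio_rg = r / (g + eps)  # first ratio: R/G
    ratio_bg = b / (g + eps)  # second ratio: B/G
    ratio_rb = r / (b + eps)  # third ratio: R/B
    total = r + g + b         # sum of the three components
    # White: all three ratios lie between the first and fifth thresholds
    # and the sum lies between the second and third thresholds.
    if (t1 < ratio_rg < t5 and t1 < ratio_bg < t5
            and t1 < ratio_rb < t5 and t2 < total < t3):
        return 'white'
    # Black: all three ratios are below the first threshold and the sum
    # is below the fourth threshold.
    if ratio_rg < t1 and ratio_bg < t1 and ratio_rb < t1 and total < t4:
        return 'black'
    return 'other'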
After the target region is found, in order to better distinguish which regions are target regions in subsequent steps, the corresponding regions may be marked. Optionally, in an embodiment, after determining the target region from the regions, the method may further include: marking the target region with a preset identifier. The preset identifier may be chosen as needed, as long as the target region can be distinguished from the other, non-target regions. In a specific application, the black and white regions may be marked with a single identifier, or with two identifiers, for example the black regions with a first identifier and the white regions with a second identifier.
Step S104, extracting a gray-level histogram of the image to be detected based on the regions outside the target region.
After the target region is determined, the gray-level information of the image to be detected can be calculated based on the regions outside the target region. The gray-level information may be embodied as a gray-level histogram. Here, the regions outside the target region mean all of the divided regions except the target region.
Counting the gray-level histogram of the image to be detected based on the regions outside the target region can be regarded as removing the black or white regions from the image and then counting the gray-level histogram of the remaining regions. Of course, in actual operation, the gray-level information may be counted directly over the regions outside the target region, without performing an explicit removal operation.
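A sketch of this statistic, assuming a grayscale version of the image divided with the same grid and per-region labels produced by the classification above ('white'/'black'/'other'):

def histogram_excluding_targets(gray, labels):
    # gray: H x W grayscale image with values in 0..255;
    # labels: (rows, cols) array of region marks from the classification.
    rows, cols = labels.shape
    h, w = gray.shape[0] // rows, gray.shape[1] // cols
    hist = np.zeros(256, dtype=np.int64)
    for i in range(rows):
        for j in range(cols):
            if labels[i, j] != 'other':  # skip the marked target regions
                continue
            block = gray[i * h:(i + 1) * h, j * w:(j + 1) * w]
            hist += np.bincount(block.ravel(), minlength=256)
    return hist / max(hist.sum(), 1)  # normalized to probabilities x_i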
When a large black or white area exists in the image, the luminance information of the image differs greatly from the actual luminance, so existing scene detection methods are prone to misjudgment and large errors, and their accuracy is low. In this embodiment, for the scenes that existing methods find difficult to identify and easy to misjudge, the white or black regions in the image are determined first, and the gray-level statistics are then computed over the remaining regions, so the accuracy of subsequent scene detection is high.
Step S105, determining the scene of the image to be detected according to the gray-level histogram.
It can be understood that the brightness distribution of the image in different scenes is different, so that the scene of the image to be detected can be determined by the gray information of the image. The variance of the gray level distribution of its histogram is large for backlit or frontlit scenes, and small for non-backlit or non-frontlit scenes. In a specific application, the scene of the image to be detected can be determined by counting the variance of the gray level histogram of the image.
Alternatively, in an embodiment, referring to the flowchart of step S105 shown in fig. 3, the process of determining the scene of the image to be detected according to the gray histogram may include:
step S301 calculates the mean value of the gradation histogram.
And step S302, calculating the variance of the gray level histogram according to the mean value.
Specifically, after the gray histogram of the image is counted, the overall mean and variance may be calculated according to the following formulas.
mean: μ = Σᵢ i·xᵢ, variance: σ² = Σᵢ xᵢ·(i − μ)², with the sums taken over the 256 gray levels,
where x = (x₁, x₂, …, x₂₅₆)ᵀ and xᵢ represents the probability of the i-th gray level appearing in the image. Based on σ, a backlight degree B can be defined as B = σ. In general, the larger the value of B, the higher the probability of backlight.
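A sketch of this computation; the patent does not state the gray-level scale used in the formulas (and hence the scale on which the decision threshold of the next step is applied), so the level values are left configurable here as an assumption:

def backlight_degree(hist, levels=None):
    # hist: length-256 array of gray-level probabilities x_i (sums to 1).
    if levels is None:
        levels = np.arange(256, dtype=np.float64)  # assumed level scale
    mu = float(np.sum(levels * hist))               # mean of the histogram
    var = float(np.sum(hist * (levels - mu) ** 2))  # variance about the mean
    return var ** 0.5                               # B = sigma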
Step S303, determining whether the variance is greater than a fifth preset threshold, when the variance is greater than the fifth preset threshold, the process proceeds to step S304, and when the variance is less than or equal to the fifth preset threshold, the process proceeds to step S305.
Step S304, determining the scene of the image to be detected to be a backlight scene or a frontlight scene.
Step S305, determining the scene of the image to be detected to be a non-backlight scene or a non-frontlight scene.
The fifth preset threshold may be set according to the actual application scenario; generally, it is about 1.05.
In general, the scene of an image is judged using the backlight degree B. For example, when the fifth preset threshold is 1.05, if B > 1.05 the scene of the image to be detected is considered a backlight or frontlight scene, and if B ≤ 1.05 it is considered a non-backlight, non-frontlight scene.
It can be understood that smart devices such as mobile phones apply the same image processing operations to backlight scenes and frontlight scenes, so no special distinction is needed in everyday photographing; that is, in typical applications it is only necessary to recognize whether backlight or frontlight occurs in the image, without identifying which of the two it is. Of course, in some application scenarios, after determining that the image is backlit or frontlit, which of the two it is may be further determined.
Optionally, referring to the flowchart schematic block diagram of the specific scene determination process shown in fig. 4, in an embodiment, after determining that the scene of the image to be detected is a backlight scene or a frontlight scene, the method may further include:
step S401 calculates an average luminance value for each region.
And S402, counting the brightness value distribution rule of the image to be detected according to the average brightness value.
And S403, when the brightness value distribution rule accords with a first preset distribution rule, the scene of the image to be detected is a backlight scene.
And S404, when the brightness value distribution rule accords with a second preset distribution rule, the scene of the image to be detected is a front scene.
It should be noted that, the first preset distribution rule is that the brightness value of the middle area of the image is greater than the brightness values of the peripheral areas, and the second preset distribution rule is that the brightness value of the middle area of the image is less than the brightness values of the peripheral areas. That is, based on the brightness value of each region, when the brightness value of the middle region of the image is greater than the brightness values of the surrounding regions, it is determined that the scene of the image to be detected is a backlight scene, and conversely, when the brightness value of the middle region of the image is less than the brightness values of the surrounding regions, it is determined that the scene is a frontlight scene.
It is understood that the determination of the middle region and the peripheral region of the image may be divided according to actual needs. For example, four regions in the center of the image may be set as the middle region, and regions other than the middle region may be set as the peripheral regions.
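As an illustration, a sketch of this comparison that takes the four central regions as the middle area, following the example above (the delimitation of the middle area is configurable in practice):

def backlight_or_frontlight(region_luma):
    # region_luma: (rows, cols) array of per-region average brightness.
    rows, cols = region_luma.shape
    r0, c0 = rows // 2 - 1, cols // 2 - 1
    middle_mask = np.zeros((rows, cols), dtype=bool)
    middle_mask[r0:r0 + 2, c0:c0 + 2] = True  # four central regions
    middle = region_luma[middle_mask].mean()
    periphery = region_luma[~middle_mask].mean()
    # First distribution rule (middle brighter than periphery) -> backlight;
    # second distribution rule (middle darker) -> frontlight.
    return 'backlight' if middle > periphery else 'frontlight'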
After the scene of the image to be detected is determined according to the image gray scale information, corresponding image processing operation can be executed according to the corresponding scene so as to improve the photographing quality and the photographing effect.
Optionally, in an embodiment, after the determining the scene of the image to be detected according to the gray histogram, the method may further include: and executing corresponding image processing operation according to the scene of the image to be detected.
In a specific application, the same operation is used for a backlight scene and a frontlight scene, for example, the HDR function is automatically turned on; for a non-backlight or non-frontlight scene, the corresponding image processing operation is performed.
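For example, the dispatch might look like the following sketch, where camera stands for a purely hypothetical device-control interface (enable_hdr and apply_default_pipeline are illustrative names, not a real API):

def apply_scene_processing(scene, camera):
    # Backlight and frontlight scenes receive the same treatment.
    if scene in ('backlight', 'frontlight'):
        camera.enable_hdr()              # e.g. automatically turn on HDR
    else:
        camera.apply_default_pipeline()  # ordinary processing otherwise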
The corresponding image processing operations in different scenes are corresponding operations in the prior art, and are not described herein again.
In this embodiment, the white area and/or the black area in the image to be detected is identified, and the gray-level information is counted based on the areas other than the white and black areas; that is, the gray-level information is collected after the black and white areas in the image are removed, and scene detection is then performed based on that information. Because large black or white objects are excluded, the gray-level information of the scene matches the actual brightness, avoiding misjudgments caused by distorted brightness information, so the accuracy of scene detection based on the gray-level information is high.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Example two
Referring to fig. 5, a schematic block diagram of a structure of a scene detection apparatus provided in an embodiment of the present application is shown, where the apparatus may include:
an obtaining module 51, configured to obtain an image to be detected;
a brightness value calculating module 52, configured to divide the image to be detected into a preset number of regions, and calculate an average brightness value of each region;
a target area determining module 53, configured to determine a target area from the areas, where the target area is a white area and/or a black area;
a gray information calculation module 54, configured to extract a gray histogram of the to-be-detected image based on a region outside the target region;
and a scene determining module 55, configured to determine a scene of the image to be detected according to the gray histogram.
In a possible implementation manner, the target area determination module may include:
a component calculation unit for calculating an R component, a G component, and a B component of each region;
and the determining unit is used for determining the target area from the areas according to the relation among the R component, the G component and the B component.
In a possible implementation manner, the determining unit includes:
a ratio calculation subunit, configured to calculate a first ratio between the R component and the G component, a second ratio between the B component and the G component, and a third ratio between the R component and the B component of each region;
a sum calculation subunit, configured to calculate the sum of the R component, the G component, and the B component of each region;
a first determining subunit, configured to determine the area as a white area when a preset determining condition is met, where the preset determining condition is that the first ratio is greater than a first preset threshold and smaller than a fifth preset threshold, the second ratio is greater than the first preset threshold and smaller than the fifth preset threshold, the third ratio is greater than the first preset threshold and smaller than the fifth preset threshold, and the sum is greater than a second preset threshold and smaller than a third preset threshold;
and the second judging subunit is used for judging the area as a black area when the first ratio, the second ratio and the third ratio are all smaller than the first preset threshold and the sum is smaller than a fourth preset threshold.
In a possible implementation manner, the apparatus may further include:
and the marking module is used for marking the target area with the preset identification.
In a possible implementation manner, the scene determining module may include:
the mean value calculating unit is used for calculating the mean value of the gray level histogram;
a variance calculating unit for calculating the variance of the gray level histogram according to the mean value;
the judgment unit is used for judging whether the variance is larger than a fifth preset threshold value or not;
the first scene determining unit is used for determining that the scene of the image to be detected is a backlight scene or a frontlight scene when the variance is larger than a fifth preset threshold;
and the second scene determining unit is used for determining that the scene of the image to be detected is a non-backlight scene or a non-frontlight scene when the variance is less than or equal to a fifth preset threshold.
In a possible implementation manner, the scene determination module may further include:
an average luminance value calculation unit for calculating an average luminance value of each region;
the distribution rule statistical unit is used for counting the brightness value distribution rule of the image to be detected according to the average brightness value;
the backlight scene determining unit is used for determining the scene of the image to be detected as a backlight scene when the brightness value distribution rule accords with a first preset distribution rule;
and the frontlight scene determining unit is used for determining the scene of the image to be detected as a frontlight scene when the brightness value distribution rule accords with the second preset distribution rule.
In a possible implementation manner, the apparatus may further include:
and the execution module is used for executing corresponding image processing operation according to the scene of the image to be detected.
It should be noted that the scene detection apparatus of the present embodiment corresponds to the scene detection methods of the foregoing embodiments one to one, and for related introduction, reference is made to the corresponding contents above, which is not described herein again.
In this embodiment, the white area and/or the black area in the image to be detected is identified, and the gray-level information is counted based on the areas other than the white and black areas; that is, the gray-level information is collected after the black and white areas in the image are removed, and scene detection is then performed based on that information. Because large black or white objects are excluded, the gray-level information of the scene matches the actual brightness, avoiding misjudgments caused by distorted brightness information, so the accuracy of scene detection based on the gray-level information is high.
Example three
Fig. 6 is a schematic diagram of a terminal device according to an embodiment of the present application. As shown in fig. 6, the terminal device 6 of this embodiment includes: a processor 60, a memory 61 and a computer program 62 stored in said memory 61 and executable on said processor 60. The processor 60, when executing the computer program 62, implements the steps in the various scene detection method embodiments described above, such as the steps S101 to S105 shown in fig. 1. Alternatively, the processor 60, when executing the computer program 62, implements the functions of the modules or units in the above-described device embodiments, such as the functions of the modules 51 to 55 shown in fig. 5.
Illustratively, the computer program 62 may be divided into one or more modules or units, which are stored in the memory 61 and executed by the processor 60 to accomplish the present application. The one or more modules or units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 62 in the terminal device 6. For example, the computer program 62 may be divided into an acquisition module, a brightness value calculation module, a target region determination module, a gray scale information calculation module, and a scene determination module, and each module has the following specific functions:
the acquisition module is used for acquiring an image to be detected;
the brightness value calculation module is used for dividing the image to be detected into a preset number of areas and calculating the average brightness value of each area;
the target area determining module is used for determining a target area from the areas, and the target area is a white area and/or a black area;
the gray information calculation module is used for extracting a gray histogram of the image to be detected based on the region outside the target region;
and the scene determining module is used for determining the scene of the image to be detected according to the gray level histogram.
The terminal device 6 may be a desktop computer, a notebook, a palmtop computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, a processor 60 and a memory 61. Those skilled in the art will appreciate that fig. 6 is merely an example of the terminal device 6 and does not constitute a limitation on it; the terminal device may include more or fewer components than shown, combine certain components, or use different components; for example, it may also include input/output devices, network access devices, buses, etc.
The processor 60 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 61 may be an internal storage unit of the terminal device 6, such as a hard disk or a memory of the terminal device 6. The memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 6. Further, the memory 61 may also include both an internal storage unit and an external storage device of the terminal device 6. The memory 61 is used for storing the computer program and other programs and data required by the terminal device. The memory 61 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus, terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus and the terminal device are merely illustrative, and for example, the division of the module or the unit is only one logical function division, and there may be another division in actual implementation, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules or units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and can realize the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method for scene detection, comprising:
acquiring an image to be detected;
dividing the image to be detected into a preset number of areas;
determining a target area from the areas, wherein the target area is a white area and/or a black area;
extracting a gray level histogram of the image to be detected based on the region outside the target region;
and determining the scene of the image to be detected according to the gray level histogram.
2. The scene detection method according to claim 1, wherein said determining a target region from said regions comprises:
calculating an R component, a G component, and a B component for each of the regions;
and determining the target region from the regions according to the relationship among the R component, the G component, and the B component.
3. The scene detection method according to claim 2, wherein said determining the target region from the regions according to the relationship among the R component, the G component, and the B component includes:
calculating a first ratio between the R component and the G component, a second ratio between the B component and the G component, and a third ratio between the R component and the B component of each region;
calculating the sum of the R component, the G component and the B component of each region;
when a preset judging condition is met, judging the area to be a white area, wherein the preset judging condition is that the first ratio is greater than a first preset threshold and smaller than a fifth preset threshold, the second ratio is greater than the first preset threshold and smaller than the fifth preset threshold, the third ratio is greater than the first preset threshold and smaller than the fifth preset threshold, and the sum is greater than a second preset threshold and smaller than a third preset threshold;
and when the first ratio, the second ratio and the third ratio are all smaller than the first preset threshold and the sum is smaller than a fourth preset threshold, judging the area as a black area.
4. The scene detection method according to claim 2, further comprising, after said determining the target region from the regions:
and marking the target area with a preset identification.
5. The scene detection method according to any one of claims 1 to 4, wherein said determining the scene of the image to be detected according to the gray histogram comprises:
calculating the mean value of the gray level histogram;
calculating the variance of the gray level histogram according to the mean value;
judging whether the variance is larger than a fifth preset threshold value or not;
when the variance is larger than the fifth preset threshold, determining that the scene of the image to be detected is a backlight scene or a frontlight scene;
and when the variance is smaller than or equal to the fifth preset threshold, determining that the scene of the image to be detected is a non-backlight scene or a non-frontlight scene.
6. The scene detection method according to claim 5, further comprising, after determining that the scene of the image to be detected is a backlight scene or a frontlight scene:
calculating an average luminance value for each of the regions;
counting the brightness value distribution rule of the image to be detected according to the average brightness value;
when the brightness value distribution rule accords with a first preset distribution rule, the scene of the image to be detected is a backlight scene;
and when the brightness value distribution rule accords with a second preset distribution rule, the scene of the image to be detected is a frontlight scene.
7. The scene detection method according to claim 5, further comprising, after said determining the scene of the image to be detected based on the gray histogram:
and executing corresponding image processing operation according to the scene of the image to be detected.
8. A scene detection apparatus, comprising:
the acquisition module is used for acquiring an image to be detected;
the brightness value calculation module is used for dividing the image to be detected into a preset number of areas and calculating the average brightness value of each area;
a target area determining module, configured to determine a target area from the areas, where the target area is a white area and/or a black area;
the gray information calculation module is used for extracting a gray histogram of the image to be detected based on the region outside the target region;
and the scene determining module is used for determining the scene of the image to be detected according to the gray level histogram.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
CN201811591988.8A 2018-12-25 2018-12-25 Scene detection method, device, terminal equipment and computer readable storage medium Active CN111368587B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811591988.8A CN111368587B (en) 2018-12-25 2018-12-25 Scene detection method, device, terminal equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811591988.8A CN111368587B (en) 2018-12-25 2018-12-25 Scene detection method, device, terminal equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111368587A (en) 2020-07-03
CN111368587B (en) 2024-04-16

Family

ID=71208076

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811591988.8A Active CN111368587B (en) 2018-12-25 2018-12-25 Scene detection method, device, terminal equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111368587B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090116043A1 (en) * 2005-05-13 2009-05-07 Takeshi Nakajima Image processing method, image processing device, and image processing program
CN105809146A (en) * 2016-03-28 2016-07-27 北京奇艺世纪科技有限公司 Image scene recognition method and device
CN105959585A (en) * 2016-05-12 2016-09-21 深圳众思科技有限公司 Multi-grade backlight detection method and device
CN108337433A (en) * 2018-03-19 2018-07-27 广东欧珀移动通信有限公司 A kind of photographic method, mobile terminal and computer readable storage medium
CN108337448A (en) * 2018-04-12 2018-07-27 Oppo广东移动通信有限公司 High-dynamic-range image acquisition method, device, terminal device and storage medium
CN108848363A (en) * 2018-05-31 2018-11-20 江苏乙生态农业科技有限公司 A kind of auto white balance method suitable for large scene
CN108805103A (en) * 2018-06-29 2018-11-13 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111770285A (en) * 2020-07-13 2020-10-13 浙江大华技术股份有限公司 Exposure brightness control method and device, electronic equipment and storage medium
CN111770285B (en) * 2020-07-13 2022-02-18 浙江大华技术股份有限公司 Exposure brightness control method and device, electronic equipment and storage medium
CN112291548A (en) * 2020-10-28 2021-01-29 Oppo广东移动通信有限公司 White balance statistical method and device, mobile terminal and storage medium
CN113052836A (en) * 2021-04-21 2021-06-29 深圳壹账通智能科技有限公司 Electronic identity photo detection method and device, electronic equipment and storage medium
CN113340817A (en) * 2021-05-26 2021-09-03 奥比中光科技集团股份有限公司 Light source spectrum and multispectral reflectivity image acquisition method and device and electronic equipment
WO2022247840A1 (en) * 2021-05-26 2022-12-01 奥比中光科技集团股份有限公司 Light source spectrum and multispectral reflectivity image acquisition methods and apparatuses, and electronic device
CN113340817B (en) * 2021-05-26 2023-05-05 奥比中光科技集团股份有限公司 Light source spectrum and multispectral reflectivity image acquisition method and device and electronic equipment
CN115526788A (en) * 2022-03-18 2022-12-27 荣耀终端有限公司 Image processing method and device

Also Published As

Publication number Publication date
CN111368587B (en) 2024-04-16

Similar Documents

Publication Publication Date Title
CN111368587B (en) Scene detection method, device, terminal equipment and computer readable storage medium
CN109146855B (en) Image moire detection method, terminal device and storage medium
CN111311482B (en) Background blurring method and device, terminal equipment and storage medium
CN110490204B (en) Image processing method, image processing device and terminal
CN108769634B (en) Image processing method, image processing device and terminal equipment
CN107909569B (en) Screen-patterned detection method, screen-patterned detection device and electronic equipment
CN110049250B (en) Camera shooting state switching method and device
CN107908998B (en) Two-dimensional code decoding method and device, terminal equipment and computer readable storage medium
CN109214996B (en) Image processing method and device
CN110796041B (en) Principal identification method and apparatus, electronic device, and computer-readable storage medium
CN111654637B (en) Focusing method, focusing device and terminal equipment
CN110648296A (en) Pupil color correction method, correction device, terminal device and storage medium
CN111311481A (en) Background blurring method and device, terminal equipment and storage medium
CN113920022A (en) Image optimization method and device, terminal equipment and readable storage medium
CN108805838B (en) Image processing method, mobile terminal and computer readable storage medium
CN113487473B (en) Method and device for adding image watermark, electronic equipment and storage medium
CN111340722B (en) Image processing method, processing device, terminal equipment and readable storage medium
CN111539975B (en) Method, device, equipment and storage medium for detecting moving object
CN108810407B (en) Image processing method, mobile terminal and computer readable storage medium
CN111222446B (en) Face recognition method, face recognition device and mobile terminal
CN115690747B (en) Vehicle blind area detection model test method and device, electronic equipment and storage medium
CN116485645A (en) Image stitching method, device, equipment and storage medium
CN112989924B (en) Target detection method, target detection device and terminal equipment
CN113391779A (en) Parameter adjusting method, device and equipment for paper-like screen
CN114596210A (en) Noise estimation method, device, terminal equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 516006 TCL science and technology building, No. 17, Huifeng Third Road, Zhongkai high tech Zone, Huizhou City, Guangdong Province

Applicant after: TCL Technology Group Co.,Ltd.

Address before: 516006 Guangdong province Huizhou Zhongkai hi tech Development Zone No. nineteen District

Applicant before: TCL Corp.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant